In the past I've written a couple of articles (Building a REST service in Scala with Akka HTTP, Akka Streams and reactive mongo, and ReactiveMongo with Akka, Scala and websockets) that used MongoDB to push updates directly from the database to a Scala application. This is a really nice feature when you just want your application to subscribe to a stream of events and it doesn't matter if you miss one while the application is down. While MongoDB is a great database, it isn't suited for every purpose. Sometimes you want a relational database with a well-defined schema, or a database that can combine the SQL and NoSQL worlds. Personally I've always really liked PostgreSQL. It is one of the best relational databases, it has great GIS support (which I really like), and it is getting more and more JSON/schema-less support (which I still need to dive into). One PostgreSQL feature I didn't know about is that it provides a subscription mechanism. I learned about it while reading the article "Listening to generic JSON notifications from PostgreSQL in Go", which explains how to use it from Go. In this article we'll look at what you need to do to get something similar working in Scala (the approach for Java is pretty much the same).
How does this work in PostgreSQL
Listening for notifications in PostgreSQL is actually very easy. All you need to do is the following:
LISTEN virtual;
NOTIFY virtual;
Asynchronous notification "virtual" received from server process with PID 8448.
NOTIFY virtual, 'This is the payload';
Asynchronous notification "virtual" with payload "This is the payload" received from server process with PID 8448.
The connection that wants to listen for events calls LISTEN with the name of the channel to listen on. The sending connection simply runs NOTIFY with the channel name and, optionally, a payload.
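As an aside, here is a minimal sketch of what the sending side could look like from Scala with scalikeJDBC (which we set up later in this article). The notify helper is hypothetical and not part of the original article; it uses pg_notify, the function form of the NOTIFY command.

import scalikejdbc._

// Hypothetical helper (not from the article): send a notification on the given channel.
// pg_notify(channel, payload) is equivalent to NOTIFY, but easier to parameterize.
def notify(channel: String, payload: String)(implicit session: DBSession): Unit =
  sql"SELECT pg_notify($channel, $payload)".execute().apply()

// usage: DB.autoCommit { implicit session => notify("virtual", "This is the payload") }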
Preparing the database
The coolest thing about the Go article mentioned in the introduction is that it provides a stored procedure that automatically sends out notifications whenever a table row is inserted, updated or deleted. The following (taken from Listening to generic JSON notifications from PostgreSQL in Go) creates a stored procedure that sends a notification whenever it is invoked.
--
CREATE OR REPLACE FUNCTION notify_event() RETURNS TRIGGER AS $$

  DECLARE
    data json;
    notification json;

  BEGIN

    -- Convert the old or new row to JSON, based on the kind of action.
    -- Action = DELETE?           -> OLD row
    -- Action = INSERT or UPDATE? -> NEW row
    IF (TG_OP = 'DELETE') THEN
      data = row_to_json(OLD);
    ELSE
      data = row_to_json(NEW);
    END IF;

    -- Construct the notification as a JSON string.
    notification = json_build_object(
                     'table', TG_TABLE_NAME,
                     'action', TG_OP,
                     'data', data);

    -- Execute pg_notify(channel, notification)
    PERFORM pg_notify('events', notification::text);

    -- Result is ignored since this is an AFTER trigger
    RETURN NULL;
  END;

$$ LANGUAGE plpgsql;
---
What's really nice about this stored procedure is that the data is converted to JSON, so we can easily process it in our application. For this example I'll use the same table and data as the Go article, so first we create a table:
CREATE TABLE products (
  id SERIAL,
  name TEXT,
  quantity FLOAT
);
And we create a trigger that is fired whenever something happens on this table:
CREATE TRIGGER products_notify_event
AFTER INSERT OR UPDATE OR DELETE ON products
FOR EACH ROW EXECUTE PROCEDURE notify_event();
At this point, whenever a row is inserted, updated or deleted in the products table, a notification event is created. We can easily test this from the psql command line:
triggers=# LISTEN events;
LISTEN
triggers=# INSERT INTO products(name, quantity) VALUES ('Something', 99999);
INSERT 0 1
Asynchronous notification "events" with payload "{"table" : "products", "action" : "INSERT", "data" : {"id":50,"name":"Something","quantity":99999}}" received from server process with PID 24131.
triggers=#
As you can see, the INSERT results in an asynchronous event that contains the data. So far we have pretty much followed the steps outlined in the Go article. Now let's look at how to access these notifications from Scala.
Accessing the notifications from Scala
First let's set up the dependencies for our project. As usual we use SBT. The build.sbt for this project looks like this:
name := "postgresql-notifications"
version := "1.0"
scalaVersion := "2.11.7"
libraryDependencies ++= Seq("org.postgresql" % "postgresql" % "9.4-1200-jdbc41",
"org.scalikejdbc" %% "scalikejdbc" % "2.2.8",
"com.typesafe.akka" %% "akka-actor" % "2.4-SNAPSHOT",
"org.json4s" %% "json4s-native" % "3.2.10"
)
resolvers += "Akka Snapshot Repository" at "http://repo.akka.io/snapshots/"
A quick summary:
- scalikeJDBC: this project provides an easy-to-use wrapper around JDBC, so we don't have to deal with connection handling the Java way.
- akka: we use the Akka framework to manage the connection to the database. Since the JDBC driver isn't asynchronous and can't push data to us, we poll for notifications at an interval.
- json4s: this is just a simple JSON library for Scala. We use it to quickly convert the incoming data to a simple case class.
We'll first show the complete source code for this example, and then explain the individual parts:
import akka.actor.{Props, ActorSystem, Actor}
import org.apache.commons.dbcp.{PoolingDataSource, DelegatingConnection}
import org.json4s.DefaultFormats
import org.postgresql.{PGNotification, PGConnection}
import scalikejdbc._
import org.json4s.native.JsonMethods._

import scala.concurrent.duration._

/**
 * Simple case class to marshall the received event to.
 */
case class Product(id: Long, name: String, quantity: Long)

/**
 * Main runner. Just sets up the connection pool and the actor system.
 */
object PostgresNotifications extends App {

  // initialize JDBC driver & connection pool
  Class.forName("org.postgresql.Driver")
  ConnectionPool.singleton("jdbc:postgresql://localhost:5432/triggers", "jos", "######")
  ConnectionPool.dataSource().asInstanceOf[PoolingDataSource].setAccessToUnderlyingConnectionAllowed(true)

  // initialize the actor system
  val system = ActorSystem("Hello")
  val a = system.actorOf(Props[Poller], "poller")

  // wait for the user to stop the server
  println("Press <enter> to exit.")
  Console.in.read.toChar
  system.terminate
}

class Poller extends Actor {

  // execution context for the ticks
  import context.dispatcher

  val connection = ConnectionPool.borrow()
  val db: DB = DB(connection)
  val tick = context.system.scheduler.schedule(500 millis, 1000 millis, self, "tick")

  override def preStart() = {
    // make sure the connection isn't closed when executing queries,
    // and register this connection as a listener on the "events" channel
    db.autoClose(false)
    db.localTx { implicit session =>
      sql"LISTEN events".execute().apply()
    }
  }

  override def postStop() = {
    tick.cancel()
    db.close()
  }

  def receive = {
    case "tick" => {
      db.readOnly { implicit session =>
        // unwrap the pooled connection to get at the PostgreSQL-specific API
        val pgConnection = connection.asInstanceOf[DelegatingConnection].getInnermostDelegate.asInstanceOf[PGConnection]
        // getNotifications returns null when there is nothing to process
        val notifications = Option(pgConnection.getNotifications).getOrElse(Array[PGNotification]())
        notifications.foreach( not => {
          println(s"Received for: ${not.getName} from process with PID: ${not.getPID}")
          println(s"Received data: ${not.getParameter} ")

          // convert to object
          implicit val formats = DefaultFormats
          val json = parse(not.getParameter) \\ "data"
          val prod = json.extract[Product]
          println(s"Received as object: $prod\n")
        })
      }
    }
  }
}
If you're familiar with Akka and scalikeJDBC the code should look fairly familiar. We start with some general setup:
/**
 * Simple case class to marshall the received event to.
 */
case class Product(id: Long, name: String, quantity: Long)

/**
 * Main runner. Just sets up the connection pool and the actor system.
 */
object PostgresNotifications extends App {

  // initialize JDBC driver & connection pool
  Class.forName("org.postgresql.Driver")
  ConnectionPool.singleton("jdbc:postgresql://localhost:5432/triggers", "jos", "######")
  ConnectionPool.dataSource().asInstanceOf[PoolingDataSource].setAccessToUnderlyingConnectionAllowed(true)

  // initialize the actor system
  val system = ActorSystem("Hello")
  val a = system.actorOf(Props[Poller], "poller")

  // wait for the user to stop the server
  println("Press <enter> to exit.")
  Console.in.read.toChar
  system.terminate
}
Here we define the case class the incoming JSON is converted to, set up the connection pool, define the Akka system and start the Poller actor. Nothing too special; the only interesting part is the call to setAccessToUnderlyingConnectionAllowed. To register a listener from Scala we need access to the underlying JDBC connection. Since scalikeJDBC uses a connection pool, we have to explicitly call setAccessToUnderlyingConnectionAllowed to make sure that, when we call getInnermostDelegate, we are allowed to access the real connection and not just the wrapper provided by the connection pool. An important thing to note here is that if you don't set this, you won't get an error or anything else; the getInnermostDelegate call will simply return null.
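Because that null only shows up later, when the actor tries to cast the connection, it can help to fail fast. Below is a minimal, hypothetical helper (not part of the article's code) that unwraps the pooled connection and raises a descriptive error when access to the underlying connection wasn't enabled:

import java.sql.Connection
import org.apache.commons.dbcp.DelegatingConnection
import org.postgresql.PGConnection

// Hypothetical helper: unwrap the pooled connection defensively, so a missing
// setAccessToUnderlyingConnectionAllowed(true) surfaces as a clear error instead
// of a NullPointerException somewhere in the polling loop.
def unwrapPgConnection(connection: Connection): PGConnection = connection match {
  case d: DelegatingConnection =>
    // getInnermostDelegate returns null when access to the underlying connection is not allowed
    Option(d.getInnermostDelegate)
      .collect { case pg: PGConnection => pg }
      .getOrElse(sys.error("Underlying PGConnection not accessible; " +
        "did you call setAccessToUnderlyingConnectionAllowed(true)?"))
  case pg: PGConnection => pg
  case other => sys.error(s"Unexpected connection type: ${other.getClass.getName}")
}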
With this in place our actor is started; let's look at what it does:
class Poller extends Actor {

  // execution context for the ticks
  import context.dispatcher

  val connection = ConnectionPool.borrow()
  val db: DB = DB(connection)
  val tick = context.system.scheduler.schedule(500 millis, 1000 millis, self, "tick")

  override def preStart() = {
    // make sure the connection isn't closed when executing queries,
    // and register this connection as a listener on the "events" channel
    db.autoClose(false)
    db.localTx { implicit session =>
      sql"LISTEN events".execute().apply()
    }
  }

  override def postStop() = {
    tick.cancel()
    db.close()
  }

  def receive = {
    case "tick" => {
      db.readOnly { implicit session =>
        val pgConnection = connection.asInstanceOf[DelegatingConnection].getInnermostDelegate.asInstanceOf[PGConnection]
        val notifications = Option(pgConnection.getNotifications).getOrElse(Array[PGNotification]())
        notifications.foreach( not => {
          println(s"Received for: ${not.getName} from process with PID: ${not.getPID}")
          println(s"Received data: ${not.getParameter} ")

          // convert to object
          implicit val formats = DefaultFormats
          val json = parse(not.getParameter) \\ "data"
          val prod = json.extract[Product]
          println(s"Received as object: $prod\n")
        })
      }
    }
  }
}
The first thing we do in the actor is set up a couple of properties needed by scalikeJDBC and create a timer that, after an initial delay of 500 ms, sends a tick message every second. Also note the preStart and postStop functions. In preStart we execute a small piece of SQL that tells Postgres this connection wants to listen for notifications on the channel named "events". We also set DB.autoClose to false, to stop the session pooling mechanism from closing the session and the connection. We want to keep these open so we can keep receiving events. When the actor is terminated we make sure to clean up the timer and the connection.
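As a small, optional variation (not in the original article): closing the connection already removes the LISTEN registration, but you could make the cleanup explicit by issuing UNLISTEN in postStop before closing:

  // Optional variation on postStop: explicitly unregister before closing the connection.
  override def postStop() = {
    tick.cancel()
    db.localTx { implicit session =>
      sql"UNLISTEN events".execute().apply()
    }
    db.close()
  }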
In the receive function we first get hold of the real PGConnection, and then ask this connection for any pending notifications:
val pgConnection = connection.asInstanceOf[DelegatingConnection].getInnermostDelegate.asInstanceOf[PGConnection]
val notifications = Option(pgConnection.getNotifications).getOrElse(Array[PGNotification]())
If there are no notifications, null is returned, so we wrap the result in an Option and fall back to an empty array when it is null. If there are notifications we simply process them in a foreach loop and print the result:
implicit val formats = DefaultFormats
val json = parse(not.getParameter) \\ "data"
val prod = json.extract[Product]
println(s"Received as object: $prod\n")
Here you can also see that we just take the "data" element from the notification and convert it to our Product case class for further processing. All you need to do now is start the application and insert some records from the same psql terminal. If everything works you'll see output like the following in the console:
Received for: events from process with PID: 24131
Received data: {"table" : "products", "action" : "INSERT", "data" : {"id":47,"name":"pen","quantity":10200}}
Received as object: Product(47,pen,10200)
Received for: events from process with PID: 24131
Received data: {"table" : "products", "action" : "INSERT", "data" : {"id":48,"name":"pen","quantity":10200}}
Received as object: Product(48,pen,10200)
Received for: events from process with PID: 24131
Received data: {"table" : "products", "action" : "INSERT", "data" : {"id":49,"name":"pen","quantity":10200}}
Received as object: Product(49,pen,10200)
Received for: events from process with PID: 24131
Received data: {"table" : "products", "action" : "INSERT", "data" : {"id":50,"name":"Something","quantity":99999}}
Received as object: Product(50,Something,99999)
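If you want to react differently to inserts, updates and deletes, you can also read the "action" and "table" fields of the notification. The following is a hedged extension of the foreach body above, not part of the original article, and it assumes the same imports and implicit formats are in scope:

          // Extension of the loop body: dispatch on the action reported by the trigger.
          implicit val formats = DefaultFormats
          val json   = parse(not.getParameter)
          val action = (json \ "action").extract[String]
          val table  = (json \ "table").extract[String]
          val prod   = (json \ "data").extract[Product]
          action match {
            case "INSERT" => println(s"New row in $table: $prod")
            case "UPDATE" => println(s"Updated row in $table: $prod")
            case "DELETE" => println(s"Deleted row from $table: $prod")
            case other    => println(s"Unknown action $other for $table")
          }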
You now have the basic construct in place, which you can easily use as, for example, the source of a reactive stream, or to propagate these events further over a websocket.
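For instance, here is a minimal sketch of exposing the polling loop as an Akka Streams Source. This is an illustration under stated assumptions, not part of the original article: it assumes you add the "com.typesafe.akka" %% "akka-stream" dependency (not in the build.sbt above), and that pgConnection is the unwrapped PGConnection obtained as shown earlier.

import akka.stream.scaladsl.Source
import org.postgresql.{PGConnection, PGNotification}
import scala.concurrent.duration._

// Sketch: poll the PGConnection on a timer and emit every pending notification
// as a stream element. Run it with a materializer in scope, e.g.
//   notificationSource(pgConnection).runForeach(n => println(n.getParameter))
def notificationSource(pgConnection: PGConnection): Source[PGNotification, _] =
  Source
    .tick(500.millis, 1000.millis, ())
    .mapConcat(_ => Option(pgConnection.getNotifications).getOrElse(Array.empty[PGNotification]).toList)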
Translated from: https://www.javacodegeeks.com/2015/10/listen-to-notifications-from-postgresql-with-scala.html