PySpark_Streaming+DBUtils+MySQL

Design Patterns for using foreachRDD
dstream.foreachRDD is a powerful primitive that allows data to be sent out to external systems.
However, it is important to understand how to use this primitive correctly and efficiently.

That is how the official Spark 2.3.0 documentation introduces dstream.foreachRDD: a powerful primitive for pushing data out to external systems, which nevertheless has to be used correctly and efficiently.

The docs show several usage patterns for foreachRDD, but no complete wordcount-style example, and I hit a few pitfalls while wiring in a database connection pool, so I am sharing a working example here. A sketch of the per-partition pattern the docs recommend follows.
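For context, the documentation's design pattern is to open the connection inside foreachPartition, once per partition on the executor, rather than once per record or on the driver. A minimal sketch of that pattern (create_connection and send_to_external_system are placeholders, not functions used later in this post):

def send_partition(partition):
    # opened on the executor, once per partition
    connection = create_connection()
    for record in partition:
        # every record in the partition reuses the same connection
        send_to_external_system(connection, record)
    connection.close()

dstream.foreachRDD(lambda rdd: rdd.foreachPartition(send_partition))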

The example code writes the wordcount results to MySQL through a database connection pool. I am using Spark 2.3.0, the application is written in Python, and the deployment mode is YARN client.

# foreachRDD_dbutils.py
import os
# make sure SPARK_HOME points at the local Spark installation before importing pyspark
os.environ.setdefault('SPARK_HOME', '/opt/appl/spark')

import findspark
findspark.init()

from pyspark import SparkContext
from pyspark.streaming import StreamingContext

import MySQLdb
from DBUtils.PooledDB import PooledDB

class Pool(object):
    # class-level pool instance, created lazily on first use
    __pool = None

    def __init__(self):
        pass

    # get a connection from the pool (the pool itself is created on the first call)
    @staticmethod
    def get_connection():
        if Pool.__pool is None:
            Pool.__pool = PooledDB(MySQLdb, 5, host='******', user='root', passwd='root', database='******', charset='utf8')
        return Pool.__pool.connection()

    # close the connection pool
    @staticmethod
    def close():
        if Pool.__pool is not None:
            Pool.__pool.close()


def sendPartition(partition):
    connection = Pool.get_connection()
    cursor = connection.cursor()
    for record in partition:
        # parameterized statement; the driver handles quoting and the int-to-string conversion
        cursor.execute("insert into wordcount(word, wordcount) values (%s, %s)", (record[0], record[1]))
    # commit the whole partition in one transaction
    connection.commit()
    # "closing" the connection only returns it to the pool, it is not really closed
    connection.close()

# write the wordcount results to MySQL
if __name__ == "__main__":
    sc = SparkContext(appName='spark_streaming_test',master='yarn')
    ssc = StreamingContext(sc,5)

    lines = ssc.socketTextStream('172.30.1.243', 9999)

    counts = lines.flatMap(lambda line : line.split(' ')) \
            .map(lambda word : (word, 1)) \
            .reduceByKey(lambda a,b : a + b)

    counts.foreachRDD(lambda rdd : rdd.foreachPartition(sendPartition))

    counts.pprint()

    ssc.start()
    try:
        ssc.awaitTermination()
    except:
        pass
    finally:
        # close the connection pool (this runs on the driver; pools created inside executors are not reached here)
        Pool.close()

Submit and run the program:

ssh://[email protected]:22/opt/appl/anaconda3/bin/python -u /opt/appl/pycharm-projects/spark_streaming_test/foreachRDD_dbutils.py
/opt/appl/spark/conf/spark-env.sh: line 72: hadoop: command not found
2019-01-17 21:28:56 WARN  NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
2019-01-17 21:29:00 WARN  Client:66 - Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
-------------------------------------------
Time: 2019-01-17 21:29:25
-------------------------------------------

-------------------------------------------
Time: 2019-01-17 21:29:30
-------------------------------------------
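The empty blocks in the output simply mean that no data had been sent to the socket yet: socketTextStream expects a plain-text server listening on 172.30.1.243:9999, which for a quick test is usually provided with netcat (nc -lk 9999), assuming netcat is available on that host.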

Notes:

  • The database connection pool needs to be static (a class-level, per-process object) and ideally lazily initialized.
  • If the pool above were not lazily initialized, a serialization exception would be thrown for the pool object when the closure is shipped to the executors. The official docs point this out as well.
  • Lazy initialization of the pool is not thread-safe, so there is a risk of a race between concurrent tasks; but when I added a lock, the lock itself failed to serialize, so I have not found a better solution here.
  • You may also run into the error "Error from python worker: /bin/python: No module named pyspark"; a possible workaround is sketched after this list.

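The last error usually means the executor processes are starting a Python interpreter (here the default /bin/python) that cannot import the pyspark module. A common workaround, assuming the same Anaconda interpreter exists at the same path on every YARN node, is to point the workers at that interpreter before the SparkContext is created:

# assumption: this path exists on every node and that interpreter can import pyspark
os.environ['PYSPARK_PYTHON'] = '/opt/appl/anaconda3/bin/python'
os.environ['PYSPARK_DRIVER_PYTHON'] = '/opt/appl/anaconda3/bin/python'

The same setting can also be made once for the whole cluster in conf/spark-env.sh instead of inside the script.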