If you run into the error below, you probably already know that the MySQL JDBC driver jar (something like mysql-connector-java-5.1.27-bin.jar) is missing from the classpath. The jar itself is easy to download; what is usually less clear is which directory it needs to go into, so read on.
: java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.spark.sql.execution.datasources.jdbc.DriverRegistry$.register(DriverRegistry.scala:45)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$5.apply(JDBCOptions.scala:99)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$5.apply(JDBCOptions.scala:99)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:99)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:35)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:32)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:318)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:167)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
When this error occurs, you first need to know which of the two common setups your program uses to run Spark:

Case 1: PySpark bound to a standalone Spark installation. Here PYTHONPATH is typically set to $SPARK_HOME/python, and it is enough to copy mysql-connector-java-5.1.27-bin.jar into $SPARK_HOME/jars. If you would rather not touch the installation directory, the jar can also be handed to Spark explicitly, as in the sketch below.
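A minimal sketch, assuming the jar sits at a hypothetical local path: spark.jars is a standard Spark configuration that adds the listed jars to the driver and executor classpaths, so it works as an alternative to copying the file into $SPARK_HOME/jars.

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName('mysql-driver-demo')
    # hypothetical path -- point this at wherever you saved the driver jar
    .config('spark.jars', '/opt/jars/mysql-connector-java-5.1.27-bin.jar')
    .getOrCreate()
)

Note that spark.jars must be set before the session (and its JVM) is created; setting it on an already running SparkSession has no effect.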
Case 2: PySpark installed via pip into a virtual environment. In production, to avoid polluting the system Python, teams typically deploy a clean Python environment with Anaconda or Docker for development and deployment. Here you first need to know where the virtual environment lives. For example, my environment is named python_common and sits at D:\soft\Anaconda3\envs\python_common, so the fix is to copy mysql-connector-java-8.0.12.jar into D:\soft\Anaconda3\envs\python_common\Lib\site-packages\pyspark\jars and then restart your notebook or shell. If you are unsure where that jars directory is, it can be located from Python itself, as shown below.
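A quick way to confirm the target directory: pip-installed pyspark ships its jars folder inside the package, so the path can be derived from the package location.

import os
import pyspark

# pip-installed pyspark bundles its jar dependencies next to the package code
jars_dir = os.path.join(os.path.dirname(pyspark.__file__), 'jars')
print(jars_dir)  # e.g. D:\soft\Anaconda3\envs\python_common\Lib\site-packages\pyspark\jars

Copy the driver jar into the printed directory and restart the interpreter.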
def load_table_myspark(sparkSession, comm, table_name):
    """
    Load a table from the MySQL strategy database into a Spark DataFrame.
    :param sparkSession: the active SparkSession
    :param comm: common configuration module holding db_config
    :param table_name: table to query; either a raw table name or a derived
                       table such as (select * from t) as t
    :return: Spark DataFrame, or None if loading failed
    """
    df = None
    db_config = comm.db_config
    try:
        df = sparkSession.read.format('jdbc').options(
            url=db_config['url'],
            driver=db_config['driver'],
            dbtable=table_name,
            user=db_config['user'],
            password=db_config['password']
        ).load()
    except Exception as e:
        print("----- Failed to load data, exception:", e)
    return df
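A minimal usage sketch: comm here is a stand-in object whose db_config mirrors what the function above expects; the url, user, and password values are illustrative assumptions, not taken from the original post.

from types import SimpleNamespace
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('load-demo').getOrCreate()
comm = SimpleNamespace(db_config={
    'url': 'jdbc:mysql://localhost:3306/strategy_db',  # hypothetical database
    'driver': 'com.mysql.jdbc.Driver',  # 5.1.x class; connector 8.x prefers com.mysql.cj.jdbc.Driver
    'user': 'root',
    'password': 'secret',
})

# Plain table name, or a derived table as mentioned in the docstring:
df1 = load_table_myspark(spark, comm, 'strategies')
df2 = load_table_myspark(spark, comm, '(select id, name from strategies) as t')
if df1 is not None:
    df1.show()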