Reading SQL Server data with Spark

 

If your services are deployed on Alibaba Cloud, remember to open the required ports in the security group.

 

For example: Redis 6379, MSSQL 1433, and so on.
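Before digging into Spark/JDBC errors, it can help to confirm the port is actually reachable from the client machine. A minimal Scala sketch (the host and port match the example further down; adjust to your environment):

    import java.net.{InetSocketAddress, Socket}

    // Hypothetical connectivity check: only succeeds if the security group allows 1433
    object PortCheck {
      def main(args: Array[String]): Unit = {
        val socket = new Socket()
        try {
          socket.connect(new InetSocketAddress("192.168.1.127", 1433), 3000) // 3s timeout
          println("Port 1433 is reachable")
        } finally {
          socket.close()
        }
      }
    }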

 

The SQL Server JDBC driver dependency (Maven):

        <dependency>
            <groupId>com.microsoft.sqlserver</groupId>
            <artifactId>sqljdbc4</artifactId>
            <version>4.0</version>
        </dependency>

For more on this dependency, see https://www.cnblogs.com/benfly/p/12671965.html
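If the project is built with sbt rather than Maven, the equivalent coordinates would be the line below (assuming the jar is resolvable from your configured repositories; older sqljdbc4 artifacts sometimes have to be installed into a local repository manually):

    libraryDependencies += "com.microsoft.sqlserver" % "sqljdbc4" % "4.0"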

import org.apache.spark.sql.SparkSession

object SqlServerRead {
  def main(args: Array[String]): Unit = {


    // SQL Server connection details: host, database name, and the JDBC URL built from them
    val dbServer = "192.168.1.127"

    val db = "hxhr"

    val url = s"jdbc:sqlserver://$dbServer:1433;databaseName=$db"

    val spark: SparkSession = SparkSession
      .builder()
      .master("local[*]")
      .appName("Test")
      .getOrCreate()

    // Read the table over JDBC using the Microsoft SQL Server driver
    val jdbcDF = spark.read
      .format("jdbc")
      .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
      .option("url", url)
      .option("user", "sa")
      .option("password", "SA123!")
      .option("dbtable", "A01_201906219FE")
      .load()
    jdbcDF.show()


    //    tableDF.write.mode(SaveMode.Overwrite).jdbc(url,table,properties)

    spark.stop()

  }
}
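The commented-out line above hints at writing a DataFrame back to SQL Server. A minimal sketch of that direction, reusing the same url and credentials (the target table name A01_201906219FE_copy is made up for illustration):

    import java.util.Properties
    import org.apache.spark.sql.SaveMode

    // Connection properties for the JDBC write
    val properties = new Properties()
    properties.setProperty("user", "sa")
    properties.setProperty("password", "SA123!")
    properties.setProperty("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")

    // Overwrite (or create) the target table with the contents of jdbcDF
    jdbcDF.write
      .mode(SaveMode.Overwrite)
      .jdbc(url, "A01_201906219FE_copy", properties)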
