[9] Spark SQL: Accessing ThriftServer via JDBC

1. Start the ThriftServer

The server listens on port 10000 by default; this can be changed (e.g. via hive.server2.thrift.port).

cd /app/spark/spark-2.2.0-bin-2.9.0

./sbin/start-thriftserver.sh --master local[3] --jars /app/mysql-connector-java-5.1.46.jar

Check that it started with jps -m.
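Clients connect to the running server through a hive2 JDBC URL composed of host, port, and (optionally) a database. A minimal sketch of that composition, assuming the node1 host and default port used in this article (the ThriftUrl name is mine, not part of the project):

```scala
object ThriftUrl {
  // Compose a hive2 JDBC URL; port 10000 matches the ThriftServer default above.
  def url(host: String, port: Int = 10000, db: String = "default"): String =
    s"jdbc:hive2://$host:$port/$db"
}
```

With it, `DriverManager.getConnection(ThriftUrl.url("node1", db = "sid"), "root", "")` would target the sid database directly instead of issuing a separate `use sid` statement.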

2. Project directory

(screenshot of the project layout omitted)

3. pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.sid.com</groupId>
  <artifactId>sparksqltrain</artifactId>
  <version>1.0-SNAPSHOT</version>
  <inceptionYear>2008</inceptionYear>
  <properties>
    <scala.version>2.11.8</scala.version>
    <spark.version>2.2.0</spark.version>
  </properties>

  <repositories>
    <repository>
      <id>scala-tools.org</id>
      <name>Scala-Tools Maven2 Repository</name>
      <url>http://scala-tools.org/repo-releases</url>
    </repository>
  </repositories>

  <pluginRepositories>
    <pluginRepository>
      <id>scala-tools.org</id>
      <name>Scala-Tools Maven2 Repository</name>
      <url>http://scala-tools.org/repo-releases</url>
    </pluginRepository>
  </pluginRepositories>

  <dependencies>
    <!-- Scala standard library -->
    <dependency>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-library</artifactId>
      <version>${scala.version}</version>
    </dependency>
    <!-- Spark SQL -->
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql_2.11</artifactId>
      <version>${spark.version}</version>
    </dependency>
    <!-- Spark Hive support -->
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-hive_2.11</artifactId>
      <version>${spark.version}</version>
    </dependency>
    <!-- Hive JDBC driver used to talk to the ThriftServer -->
    <dependency>
      <groupId>org.spark-project.hive</groupId>
      <artifactId>hive-jdbc</artifactId>
      <version>1.2.1.spark2</version>
    </dependency>
  </dependencies>

  <build>
    <sourceDirectory>src/main/scala</sourceDirectory>
    <testSourceDirectory>src/test/scala</testSourceDirectory>
    <plugins>
      <plugin>
        <groupId>org.scala-tools</groupId>
        <artifactId>maven-scala-plugin</artifactId>
        <executions>
          <execution>
            <goals>
              <goal>compile</goal>
              <goal>testCompile</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <scalaVersion>${scala.version}</scalaVersion>
          <args>
            <arg>-target:jvm-1.5</arg>
          </args>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-eclipse-plugin</artifactId>
        <configuration>
          <downloadSources>true</downloadSources>
          <buildcommands>
            <buildcommand>ch.epfl.lamp.sdt.core.scalabuilder</buildcommand>
          </buildcommands>
          <additionalProjectnatures>
            <projectnature>ch.epfl.lamp.sdt.core.scalanature</projectnature>
          </additionalProjectnatures>
          <classpathContainers>
            <classpathContainer>org.eclipse.jdt.launching.JRE_CONTAINER</classpathContainer>
            <classpathContainer>ch.epfl.lamp.sdt.launching.SCALA_CONTAINER</classpathContainer>
          </classpathContainers>
        </configuration>
      </plugin>
    </plugins>
  </build>
  <reporting>
    <plugins>
      <plugin>
        <groupId>org.scala-tools</groupId>
        <artifactId>maven-scala-plugin</artifactId>
        <configuration>
          <scalaVersion>${scala.version}</scalaVersion>
        </configuration>
      </plugin>
    </plugins>
  </reporting>
</project>

4. SparkSQLThriftServer.scala

package com.sid.com

import java.sql.DriverManager

/**
  * Access the ThriftServer via JDBC.
  * */
object SparkSQLThriftServer {
  def main(args: Array[String]): Unit = {
    // Load the Hive JDBC driver (provided by the hive-jdbc dependency).
    Class.forName("org.apache.hive.jdbc.HiveDriver")
    // This URL is the same one used to connect with beeline; sid is a database in Hive.
    val connection = DriverManager.getConnection("jdbc:hive2://node1:10000", "root", "")
    connection.prepareStatement("use sid").execute()
    val pstmt = connection.prepareStatement("select id,name,salary,destination from emp")
    val rs = pstmt.executeQuery()
    while (rs.next()) {
      println("id:" + rs.getInt("id") + ", name:" + rs.getString("name")
        + ", salary:" + rs.getString("salary")
        + ", destination:" + rs.getString("destination"))
    }

    rs.close()
    pstmt.close()
    connection.close()
  }
}
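The example above closes its ResultSet, PreparedStatement, and Connection by hand, so an exception during the query would leak them. A loan-pattern helper closes the resource on every path; this is a sketch (the `using` name is mine, not part of the original project), relying on the fact that all three JDBC types implement AutoCloseable:

```scala
object SafeQuery {
  // Loan pattern: run `f` with the resource, closing it even if `f` throws.
  def using[R <: AutoCloseable, A](resource: R)(f: R => A): A =
    try f(resource) finally resource.close()
}
```

With it, the JDBC calls nest as `using(DriverManager.getConnection(...)) { conn => using(conn.prepareStatement(...)) { pstmt => ... } }` and cleanup happens automatically.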

5. Result

(screenshot of the printed query results omitted)
