Spark DataFrame joins

Joins on a Spark DataFrame work much like in SQL: there are inner join, left join, right join and full join.
So how does the join method support the different join types? Look at its signatures:
def join(right: DataFrame, usingColumns: Seq[String], joinType: String): DataFrame
def join(right: DataFrame, joinExprs: Column, joinType: String): DataFrame
As you can see, the join type is selected by passing joinType as a String.
joinType can be "inner", "left", "right" or "full", corresponding to inner join, left join, right join and full join. The default is "inner", i.e. an inner join:

personDataFrame.join(orderDataFrame, personDataFrame("id_person") === orderDataFrame("id_person")).show()
personDataFrame.join(orderDataFrame, personDataFrame("id_person") === orderDataFrame("id_person"), "inner").show()

The result:

+---------+----+-------+--------+--------+---------+
|id_person|name|address|id_order|orderNum|id_person|
+---------+----+-------+--------+--------+---------+
|        1|张三|   深圳|       3|     533|        1|
|        1|张三|   深圳|       4|     444|        1|
|        2|李四|   成都|       1|     325|        2|
|        3|王五|   厦门|       2|      34|        3|
+---------+----+-------+--------+--------+---------+
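
Note that the inner-join result carries the id_person column twice, once from each DataFrame. The Seq[String] overload from the signatures above joins by column name instead; as a minimal sketch (assuming a Spark version that provides this overload), it keeps a single id_person column:

// Join on the shared column name; the duplicate id_person column is collapsed.
personDataFrame.join(orderDataFrame, Seq("id_person"), "inner").show()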

"left", "left_outer" or "leftouter" gives a left (outer) join:

personDataFrame.join(orderDataFrame, personDataFrame("id_person") === orderDataFrame("id_person"), "left").show()
personDataFrame.join(orderDataFrame, personDataFrame("id_person") === orderDataFrame("id_person"), "left_outer").show()

The result:

+---------+----+-------+--------+--------+---------+
|id_person|name|address|id_order|orderNum|id_person|
+---------+----+-------+--------+--------+---------+
|        1|张三|   深圳|       3|     533|        1|
|        1|张三|   深圳|       4|     444|        1|
|        2|李四|   成都|       1|     325|        2|
|        3|王五|   厦门|       2|      34|        3|
|        4|朱六|   杭州|    null|    null|     null|
+---------+----+-------+--------+--------+---------+

"right", "right_outer" or "rightouter" gives a right (outer) join:

personDataFrame.join(orderDataFrame, personDataFrame("id_person") === orderDataFrame("id_person"), "right").show()
personDataFrame.join(orderDataFrame, personDataFrame("id_person") === orderDataFrame("id_person"), "right_outer").show()

The result:

+---------+----+-------+--------+--------+---------+
|id_person|name|address|id_order|orderNum|id_person|
+---------+----+-------+--------+--------+---------+
|        2|李四|   成都|       1|     325|        2|
|        3|王五|   厦门|       2|      34|        3|
|        1|张三|   深圳|       3|     533|        1|
|        1|张三|   深圳|       4|     444|        1|
|     null|null|   null|       5|     777|       11|
+---------+----+-------+--------+--------+---------+

"full", "outer", "full_outer" or "fullouter" gives a full (outer) join:

personDataFrame.join(orderDataFrame, personDataFrame("id_person") === orderDataFrame("id_person"), "full").show()
personDataFrame.join(orderDataFrame, personDataFrame("id_person") === orderDataFrame("id_person"), "full_outer").show()
personDataFrame.join(orderDataFrame, personDataFrame("id_person") === orderDataFrame("id_person"), "outer").show()

The result:

+---------+----+-------+--------+--------+---------+
|id_person|name|address|id_order|orderNum|id_person|
+---------+----+-------+--------+--------+---------+
|        1|张三|   深圳|       3|     533|        1|
|        1|张三|   深圳|       4|     444|        1|
|        2|李四|   成都|       1|     325|        2|
|        3|王五|   厦门|       2|      34|        3|
|        4|朱六|   杭州|    null|    null|     null|
|     null|null|   null|       5|     777|       11|
+---------+----+-------+--------+--------+---------+

Scala test source:

import org.apache.spark.{SparkContext, SparkConf}
import org.apache.spark.sql.SQLContext

case class Persons(id_person: Int, name: String, address: String)
case class Orders(id_order: Int, orderNum: Int, id_person: Int)

object DataFrameTest {
  def main(args: Array[String]) {
    val conf = new SparkConf().setMaster("local[2]").setAppName("DataFrameTest")
    val sc = new SparkContext(conf)

    val sqlContext = new SQLContext(sc)

    // Person 4 (朱六) has no orders, and order 5 references id_person 11, which matches
    // no person, so these rows only appear (padded with null) in the outer joins.
    val personDataFrame = sqlContext.createDataFrame(List(Persons(1, "张三", "深圳"), Persons(2, "李四", "成都"), Persons(3, "王五", "厦门"), Persons(4, "朱六", "杭州")))
    val orderDataFrame = sqlContext.createDataFrame(List(Orders(1, 325, 2), Orders(2, 34, 3), Orders(3, 533, 1), Orders(4, 444, 1), Orders(5, 777, 11)))

    personDataFrame.join(orderDataFrame, personDataFrame("id_person") === orderDataFrame("id_person")).show()
    personDataFrame.join(orderDataFrame, personDataFrame("id_person") === orderDataFrame("id_person"), "inner").show()
    personDataFrame.join(orderDataFrame, personDataFrame("id_person") === orderDataFrame("id_person"), "left").show()
    personDataFrame.join(orderDataFrame, personDataFrame("id_person") === orderDataFrame("id_person"), "left_outer").show()
    personDataFrame.join(orderDataFrame, personDataFrame("id_person") === orderDataFrame("id_person"), "right").show()
    personDataFrame.join(orderDataFrame, personDataFrame("id_person") === orderDataFrame("id_person"), "right_outer").show()
    personDataFrame.join(orderDataFrame, personDataFrame("id_person") === orderDataFrame("id_person"), "full").show()
    personDataFrame.join(orderDataFrame, personDataFrame("id_person") === orderDataFrame("id_person"), "full_outer").show()
    personDataFrame.join(orderDataFrame, personDataFrame("id_person") === orderDataFrame("id_person"), "outer").show()
  }
}

How is this implemented? Looking at the sql part of the Spark source, the String is converted into a JoinType.
In the JoinType companion object the String typ is first lowercased, then its underscores _ are removed, and pattern matching decides which join type to use. The source also shows that, besides the inner, left, right and full joins, there is a LeftSemi join; I haven't used that one, so I'm not too familiar with it.

The JoinType source in Spark:

object JoinType {
  def apply(typ: String): JoinType = typ.toLowerCase.replace("_", "") match {
    case "inner" => Inner
    case "outer" | "full" | "fullouter" => FullOuter
    case "leftouter" | "left" => LeftOuter
    case "rightouter" | "right" => RightOuter
    case "leftsemi" => LeftSemi
    case _ =>
      val supported = Seq(
        "inner",
        "outer", "full", "fullouter",
        "leftouter", "left",
        "rightouter", "right",
        "leftsemi")

      throw new IllegalArgumentException(s"Unsupported join type '$typ'. " +
        "Supported join types include: " + supported.mkString("'", "', '", "'") + ".")
  }
}

sealed abstract class JoinType

case object Inner extends JoinType

case object LeftOuter extends JoinType

case object RightOuter extends JoinType

case object FullOuter extends JoinType

case object LeftSemi extends JoinType
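
For what it's worth, a left semi join ("leftsemi") keeps only the rows of the left DataFrame that have at least one match on the right, and returns only the left DataFrame's columns. A minimal sketch with the DataFrames from the test code above (the expected output is my reading of those semantics, not taken from the original post):

// "leftsemi": persons 1, 2 and 3 have matching orders, person 4 (朱六) does not,
// so only those three rows appear, and only with the person columns.
personDataFrame.join(orderDataFrame, personDataFrame("id_person") === orderDataFrame("id_person"), "leftsemi").show()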

hkl's note: After testing this, the DataFrame join operations turn out to be almost identical to the various joins we do on MySQL tables. Once you are clear about your business requirements you will know which join type to use. For beginners, remember that the equality condition of a join is written with === — don't get that wrong. I will update this post as I have new material.
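
A minimal sketch of that last point: === on a Column builds a Column expression that join can take as its condition, whereas plain Scala == merely compares the two Column objects and yields a Boolean:

// === returns an org.apache.spark.sql.Column that expresses the equality condition.
val joinCond = personDataFrame("id_person") === orderDataFrame("id_person")
personDataFrame.join(orderDataFrame, joinCond, "inner").show()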



Reposted from: http://blog.csdn.net/anjingwunai/article/details/51934921
