Spark DataFrame and Schema Explained, with a Hands-on Demo

1. Concepts

Spark is a distributed computing framework for large-scale data processing. In Spark, a DataFrame is a distributed collection of data, similar to a table in a relational database. It offers a higher level of abstraction that lets users process data declaratively, without having to worry about the low-level layout of the data or the complexity of distributed computation. A Schema describes the structure of the data in a DataFrame, much like the column definitions of a table.

Let's look at DataFrame and Schema in turn:

DataFrame:

A DataFrame is a distributed collection of data organized into rows and columns, similar in structure to a traditional database table or a spreadsheet. Spark DataFrames have the following characteristics:
Distributed computation: a DataFrame is distributed and can be processed in parallel across many nodes of a cluster, enabling high-performance processing of large datasets.
Immutability: a DataFrame is immutable; once created, it cannot be modified. Instead, operations on a DataFrame produce new DataFrames.
Lazy evaluation: Spark evaluates lazily, so operations on a DataFrame are not executed right away; they are optimized and run only when a result is actually needed.
Users can work with DataFrames through SQL statements, the Spark API, or Spark SQL to filter, transform, and aggregate data, as sketched below. The appeal of DataFrames lies in their ease of use and the room they give the optimizer: Spark optimizes the whole computation based on the execution plan of the requested operations to improve performance.
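As a quick, minimal sketch (not taken from the demo further below; the SparkSession setup, column names, and values here are invented for illustration), the same data can be filtered and aggregated through both the DataFrame API and Spark SQL:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.avg

object DataFrameOpsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("DataFrameOpsSketch").master("local[*]").getOrCreate()
    import spark.implicits._

    // A small in-memory DataFrame; every transformation returns a new, immutable DataFrame
    val df = Seq(("Finance", 3000), ("Sales", 3900), ("Finance", 5000)).toDF("dept", "salary")

    // DataFrame API: filter, then aggregate; nothing runs until an action such as show() is called
    df.filter($"salary" > 3500)
      .groupBy("dept")
      .agg(avg("salary").as("avg_salary"))
      .show()

    // The same query expressed in Spark SQL against a temporary view
    df.createOrReplaceTempView("employees")
    spark.sql("SELECT dept, AVG(salary) AS avg_salary FROM employees WHERE salary > 3500 GROUP BY dept").show()

    spark.stop()
  }
}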

Schema:

A Schema describes the structure of the data in a DataFrame: it defines the column names and the data type of each column. In Spark, a Schema is a collection of metadata made up of column names and types. The Schema is essential both for optimizing computation and for interpreting the data with the correct types.
The Schema is usually inferred automatically when a DataFrame is created, but it can also be specified explicitly in code. Specifying the Schema guarantees that the data is interpreted correctly and avoids potential type-conversion errors. If a data source carries no Schema information, or the Schema needs to be changed, a custom Schema can be built with StructType and StructField, for example one containing several fields of types such as string, integer, and date, as in the sketch below.
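For example, a Schema with string, integer, and date columns could be defined as follows (a minimal, spark-shell-style sketch; the field names are made up for illustration):

import org.apache.spark.sql.types.{DateType, IntegerType, StringType, StructField, StructType}

// Each StructField carries a column name, a data type, and a nullable flag
val employeeSchema = StructType(Seq(
  StructField("name", StringType, nullable = false),
  StructField("age", IntegerType, nullable = true),
  StructField("hire_date", DateType, nullable = true)
))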

When Spark reads a data source such as a CSV file, JSON data, or a database table, it tries to infer the Schema automatically. If the source itself does not provide enough information, the schema option can be used to supply one explicitly, or the DataFrame's Schema can be adjusted by subsequent transformations.
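For instance, reading a CSV file with automatic inference versus an explicit Schema might look like the sketch below (the file path is a placeholder, spark is an existing SparkSession, and employeeSchema refers to the hand-built Schema from the previous snippet):

// Let Spark sample the file and infer the column types (convenient, but costs an extra pass over the data)
val inferredDf = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("/path/to/employees.csv")

// Supply the Schema explicitly: no inference pass, and the column types are exactly what was declared
val typedDf = spark.read
  .option("header", "true")
  .schema(employeeSchema)
  .csv("/path/to/employees.csv")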

In summary: the DataFrame is a powerful distributed data structure in Spark that lets users process data declaratively, while the Schema describes the structure of the data in a DataFrame and ensures it is interpreted and processed correctly. Together these two concepts form the foundation of Spark's data-processing capabilities.

Hands-on Code

package test.scala

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{IntegerType, StringType, StructType}

object TestSchema {
  def getSparkSession(appName: String, localType: Int): SparkSession = {
    val builder: SparkSession.Builder = SparkSession.builder().appName(appName)
    if (localType == 1) {
      builder.master("local[8]") // local mode with 8 cores
    }

    val spark = builder.getOrCreate() // get an existing SparkSession or create a new one
    spark.sparkContext.setLogLevel("ERROR") // set the Spark log level
    spark
  }

  def main(args: Array[String]): Unit = {
    println("Start TestSchema")
    val spark: SparkSession = getSparkSession("TestSchema", 1)

    val structureData = Seq(
      Row("36636", "Finance", Row(3000, "USA")),
      Row("40288", "Finance", Row(5000, "IND")),
      Row("42114", "Sales", Row(3900, "USA")),
      Row("39192", "Marketing", Row(2500, "CAN")),
      Row("34534", "Sales", Row(6500, "USA"))
    )

    val structureSchema = new StructType()
      .add("id", StringType)
      .add("dept", StringType)
      .add("properties", new StructType()
        .add("salary", IntegerType)
        .add("location", StringType)
      )

    val df = spark.createDataFrame(
      spark.sparkContext.parallelize(structureData), structureSchema)
    df.printSchema()
    df.show(false)

    // Inspect the schema of the first row: print each field's name, value, and data type
    val row = df.first()
    val schema = row.schema
    val structFieldList = schema.toList
    println(structFieldList.size)
    for (structField <- structFieldList) {
      println(structField.name, row.getAs[Any](structField.name), structField.dataType)
    }
  }
}

Output

Start TestSchema
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
23/07/29 09:47:59 INFO SparkContext: Running Spark version 2.4.0
23/07/29 09:47:59 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
23/07/29 09:47:59 INFO SparkContext: Submitted application: TestSchema
23/07/29 09:47:59 INFO SecurityManager: Changing view acls to: Nebula
23/07/29 09:47:59 INFO SecurityManager: Changing modify acls to: Nebula
23/07/29 09:47:59 INFO SecurityManager: Changing view acls groups to:
23/07/29 09:47:59 INFO SecurityManager: Changing modify acls groups to:
23/07/29 09:47:59 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(Nebula); groups with view permissions: Set(); users with modify permissions: Set(Nebula); groups with modify permissions: Set()
23/07/29 09:48:01 INFO Utils: Successfully started service 'sparkDriver' on port 60785.
23/07/29 09:48:01 INFO SparkEnv: Registering MapOutputTracker
23/07/29 09:48:01 INFO SparkEnv: Registering BlockManagerMaster
23/07/29 09:48:01 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
23/07/29 09:48:01 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
23/07/29 09:48:01 INFO DiskBlockManager: Created local directory at C:\Users\Nebula\AppData\Local\Temp\blockmgr-6f861361-4d98-4372-b78a-2949682bd557
23/07/29 09:48:01 INFO MemoryStore: MemoryStore started with capacity 8.3 GB
23/07/29 09:48:01 INFO SparkEnv: Registering OutputCommitCoordinator
23/07/29 09:48:01 INFO Utils: Successfully started service 'SparkUI' on port 4040.
23/07/29 09:48:01 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://LAPTOP-PEA8R2PO:4040
23/07/29 09:48:01 INFO Executor: Starting executor ID driver on host localhost
23/07/29 09:48:01 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 60826.
23/07/29 09:48:01 INFO NettyBlockTransferService: Server created on LAPTOP-PEA8R2PO:60826
23/07/29 09:48:01 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
23/07/29 09:48:01 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, LAPTOP-PEA8R2PO, 60826, None)
23/07/29 09:48:01 INFO BlockManagerMasterEndpoint: Registering block manager LAPTOP-PEA8R2PO:60826 with 8.3 GB RAM, BlockManagerId(driver, LAPTOP-PEA8R2PO, 60826, None)
23/07/29 09:48:01 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, LAPTOP-PEA8R2PO, 60826, None)
23/07/29 09:48:01 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, LAPTOP-PEA8R2PO, 60826, None)
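As a follow-up to the demo, the nested properties struct defined in structureSchema can be queried with dot notation; a minimal sketch reusing the df from the code above:

// Select fields nested inside the properties struct and filter on one of them
df.select("id", "dept", "properties.salary", "properties.location")
  .where("properties.location = 'USA'")
  .show(false)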
