Spark SQL type conversion: How to change column types in Spark SQL's DataFrame?

I think the solutions based on withColumn, withColumnRenamed and cast put forward by msemelman, Martin Senne and others are simple and clean.

Your approach is fine too. Keep in mind that a Spark DataFrame is an (immutable) RDD of Rows, so we never really replace a column; we just create a new DataFrame each time, with a new schema.
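
For reference, the withColumn-plus-cast approach those answers describe looks roughly like this (a minimal sketch; "Year" is just one column from the schema below, and dfWithIntYear is a name introduced here for illustration):

import org.apache.spark.sql.types.IntegerType
// cast returns a new Column; withColumn returns a new DataFrame containing it
val dfWithIntYear = df.withColumn("Year", df("Year").cast(IntegerType))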

Assuming you have an original df with the following schema:

scala> df.printSchema
root
|-- Year: string (nullable = true)
|-- Month: string (nullable = true)
|-- DayofMonth: string (nullable = true)
|-- DayOfWeek: string (nullable = true)
|-- DepDelay: string (nullable = true)
|-- Distance: string (nullable = true)
|-- CRSDepTime: string (nullable = true)

and some UDFs defined on one or several columns:

import org.apache.spark.sql.functions._

// string -> numeric conversions
val toInt    = udf[Int, String]( _.toInt)
val toDouble = udf[Double, String]( _.toDouble)

// extract the hour from an "hhmm"-style time string, e.g. "845" -> 8
val toHour   = udf((t: String) => "%04d".format(t.toInt).take(2).toInt)

// toy multi-column UDF (the formula is a placeholder and ignores dayOfMonth)
val days_since_nearest_holidays = udf(
  (year: String, month: String, dayOfMonth: String) => year.toInt + 27 + month.toInt - 12
)
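
Note that UDFs written this way throw at runtime on null or non-numeric input, whereas the built-in cast simply yields null. A null-tolerant variant, as a minimal sketch (toIntSafe is a name introduced here, not part of the answer above):

import scala.util.Try
// returning Option[Int] makes the resulting column nullable: None becomes null
val toIntSafe = udf((s: String) => Option(s).flatMap(x => Try(x.toInt).toOption))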

Changing the column types, or even building a new DataFrame from another one, can then be written like this:

val featureDf = df
  .withColumn("departureDelay", toDouble(df("DepDelay")))
  .withColumn("departureHour",  toHour(df("CRSDepTime")))
  .withColumn("dayOfWeek",      toInt(df("DayOfWeek")))
  .withColumn("dayOfMonth",     toInt(df("DayofMonth")))
  .withColumn("month",          toInt(df("Month")))
  .withColumn("distance",       toDouble(df("Distance")))
  .withColumn("nearestHoliday", days_since_nearest_holidays(
    df("Year"), df("Month"), df("DayofMonth"))
  )
  .select("departureDelay", "departureHour", "dayOfWeek", "dayOfMonth",
    "month", "distance", "nearestHoliday")

which yields:

scala> featureDf.printSchema
root
|-- departureDelay: double (nullable = true)
|-- departureHour: integer (nullable = true)
|-- dayOfWeek: integer (nullable = true)
|-- dayOfMonth: integer (nullable = true)
|-- month: integer (nullable = true)
|-- distance: double (nullable = true)
|-- nearestHoliday: integer (nullable = true)

This is pretty close to your own solution. Simply, keeping the type changes and the other transformations as separate udf vals makes the code more readable and reusable.
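
For the plain numeric conversions, the built-in cast can also replace the UDFs entirely; a minimal sketch against the same schema (only two columns shown, and castedDf is a name introduced here):

// selectExpr accepts SQL expressions; CAST yields null instead of throwing on bad input
val castedDf = df.selectExpr(
  "CAST(DepDelay AS double) AS departureDelay",
  "CAST(DayOfWeek AS int)   AS dayOfWeek"
)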
