PySpark SQL usage notes


    • Merging multiple columns into one
    • Removing duplicate rows
    • Converting strings to dates

Merging multiple columns into one

The concat function joins multiple columns into a single column. For example:

from pyspark.sql.functions import concat, col, lit, get_json_object

report = sqlContext.table(report_table_name)
# Filter first, while "year" is still in the schema, then project:
# extract the user id from a JSON string column and build a "y-m-d" date string.
report1 = (
    report.where("year < 2019")
    .select(
        get_json_object("report", "$.userid").alias("user_id"),
        concat(col("year"), lit("-"), col("month"), lit("-"), col("day")).alias("date"),
    )
    .drop_duplicates(subset=["user_id"])
)

Removing duplicate rows

The drop_duplicates function removes duplicate rows, optionally considering only the columns named in subset. Filtering before the projection keeps the code clear, since "year" is no longer in the schema after select:

report = report.where("year = 2018").select("user_id").drop_duplicates(subset=["user_id"])

Converting a string to date format

The date_format, unix_timestamp, and to_date functions can be chained for this:

from pyspark.sql.functions import date_format, unix_timestamp, to_date

output_format = "yyyy-MM-dd"
df = report1.select(
    to_date(
        date_format(unix_timestamp("date", "yyyy-MM-dd").cast("timestamp"), output_format)
    ).alias("dt"),
    "user_id",
)
df.show()
