Common PySpark DataFrame operators for data analysis

Contents

        • 1. createDataFrame: create a DataFrame
        • 2. show
        • 3. filter: filtering rows
        • 4. Filtering nulls
        • Filling nulls
        • 5. groupBy: grouping
        • 6. Renaming columns
        • 7. explode: one column into multiple rows
        • 8. Deduplication
        • 9. when
        • 10. union: concatenating DataFrames
        • 11. like
        • 12. Saving data
        • 13. drop
        • 14. cast: data type conversion

1. createDataFrame: create a DataFrame

df = spark.createDataFrame([
    (144.5, 185, 33, 'M', 'China'),
    (167.2, 165, 45, 'M', 'China'),
    (124.1, 170, 17, 'F', 'Japan'),
    (144.5, 185, 33, 'M', 'Pakistan'),
    (156.5, 180, 54, 'F', None),
    (124.1, 170, 23, 'F', 'Pakistan'),
    (129.2, 175, 62, 'M', 'Russia'),
    ], ['weight', 'height', 'age', 'gender', 'country'])
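
All snippets in this post assume an existing SparkSession named spark. A minimal setup sketch (the app name here is arbitrary):

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("pyspark-dataframe-demo") \
    .getOrCreate()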

2. show

df.show()
By default, show truncates any value longer than 20 characters; to print full values, disable truncation:
df.show(truncate=False)
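
show also takes a few other useful parameters; a quick sketch (n limits the number of rows printed, and vertical output, one field per line, is available since Spark 2.3):

df.show(n=3, truncate=False, vertical=True)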
 

3. filter: filtering rows

(1) Filter on a single condition

df.filter(df['age'] == 33)
# or
df.filter('age = 33')

(2) Filter on multiple conditions

# 'or'
df.filter((df['age'] == 33) | (df['gender'] == 'M'))
# 'and'
df.filter((df['age'] == 33) & (df['gender'] == 'M'))
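
Two more condition helpers that often come up (standard Column methods, not covered above): negation with ~ and membership tests with isin:

# keep rows whose gender is not 'M'
df.filter(~(df['gender'] == 'M'))
# keep rows whose country is in the given list
df.filter(df['country'].isin('China', 'Japan'))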

4. Filtering nulls

  1. Keep records where a column is not null
df.filter("country is not null")
# or
df.filter(df["country"].isNotNull())
# or
df[df["country"].isNotNull()]

Note: empty strings ("") do not count as null, so they are not filtered out.
  2. Keep records where a column is null

df.filter("country is null")
# or
df.filter(df["country"].isNull())

Filling nulls

df.fillna({"country": "China"})
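
fillna also accepts a single value (applied to every column of a matching type) and an optional subset of columns; a sketch:

# fill every string column with 'unknown'
df.fillna('unknown')
# fill nulls only in the listed columns
df.fillna(0, subset=['age'])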

5. groupBy: grouping

  1. Count rows per group
df.groupBy(df["age"]).count().show()
+---+-----+
|age|count|
+---+-----+
| 33|    2|
| 45|    1|
| 17|    1|
| 54|    1|
| 23|    1|
| 62|    1|
+---+-----+
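
groupBy can also feed agg for other aggregate functions; a sketch using avg and max from pyspark.sql.functions:

import pyspark.sql.functions as F
df.groupBy('gender').agg(
    F.avg('age').alias('avg_age'),
    F.max('height').alias('max_height'),
).show()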

6. Renaming columns

  1. alias
import pyspark.sql.functions as F
df.select(F.col("country").alias("state"))
  2. withColumnRenamed
df.withColumnRenamed("country", "state")
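
To rename every column in one call, toDF takes the full list of new names (the names here are illustrative):

df = df.toDF('weight_kg', 'height_cm', 'age', 'gender', 'country')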

7. explode: one column into multiple rows

import pyspark.sql.functions as F
from pyspark.sql.types import ArrayType, StringType
df = spark.createDataFrame([
    ('u1', 'i1', 'r001,r002,r003'),
    ('u2', 'i2', 'r002,r003'),
    ('u3', 'i3', 'r001')
    ], ['user_id', 'item_id', 'recall_id'])

First, build a new column recall_id_lst by splitting the recall_id column:

df = df\
    .withColumn("recall_id_lst", F.udf(lambda x: x.split(','), returnType=ArrayType(StringType()))(F.col("recall_id")))
# result
+-------+-------+--------------+------------------+
|user_id|item_id|     recall_id|     recall_id_lst|
+-------+-------+--------------+------------------+
|     u1|     i1|r001,r002,r003|[r001, r002, r003]|
|     u2|     i2|     r002,r003|      [r002, r003]|
|     u3|     i3|          r001|            [r001]|
+-------+-------+--------------+------------------+

Then explode recall_id_lst into one row per element:


df.select("user_id", "item_id", F.explode(F.col("recall_id_lst")).alias("recall_id_plat"))
# result
+-------+-------+--------------+
|user_id|item_id|recall_id_plat|
+-------+-------+--------------+
|     u1|     i1|          r001|
|     u1|     i1|          r002|
|     u1|     i1|          r003|
|     u2|     i2|          r002|
|     u2|     i2|          r003|
|     u3|     i3|          r001|
+-------+-------+--------------+
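
As an aside, the UDF above can be replaced by the built-in split function, which stays inside the JVM and avoids Python serialization overhead:

# equivalent to the UDF-based version; the second argument is a regex pattern
df = df.withColumn('recall_id_lst', F.split(F.col('recall_id'), ','))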

8. Deduplication

Deduplicate based on multiple columns (rows sharing the same weight and height are treated as duplicates):

df.dropDuplicates(['weight', 'height'])
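
Called with no arguments, dropDuplicates considers all columns, which is equivalent to distinct:

df.dropDuplicates()   # deduplicate on all columns
df.distinct()         # same result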

9. when

df.withColumn("age_range", F.when(df.age > 60, "old")
    .when((df.age > 18) & (df.age <= 60),"mid")
    .otherwise("young"))
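
Note that when without a trailing otherwise leaves non-matching rows as null; a sketch:

# rows with age <= 60 get null in is_senior
df.withColumn('is_senior', F.when(df.age > 60, True))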

10. union: concatenating DataFrames

df.union(df)
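
union matches columns by position and, like SQL UNION ALL, does not deduplicate; to match columns by name instead, unionByName is available since Spark 2.3:

# align columns by name rather than position
df.unionByName(df)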

11. like

df.filter(df.country.like('%Jap%'))

This can be used to check whether a column's value contains a given substring; % is the SQL wildcard.
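
Two related Column methods for string matching (standard API, not covered above): contains for plain substrings and rlike for regular expressions:

df.filter(df.country.contains('Jap'))   # substring match, no wildcards needed
df.filter(df.country.rlike('^Jap'))     # regular-expression match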

12. Saving data

# path is a placeholder for the target directory
df.write.mode("overwrite")\
    .save(path, header=True, format='csv')
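
Reading the saved CSV back is symmetric; a sketch (inferSchema is optional and triggers an extra pass over the data):

df2 = spark.read.csv(path, header=True, inferSchema=True)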

13. drop

df = df.drop("age", "gender")

14. cast: data type conversion

from pyspark.sql.types import FloatType
col = "age"  # name of the column to convert
df = df.withColumn(col, df[col].cast(FloatType()))
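
cast also accepts the type name as a string, which avoids the import; a sketch:

df = df.withColumn('age', df['age'].cast('float'))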

I'll keep adding commonly used operators to this post as I come across them~

References:
1. http://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions
