Converting between a Spark DataFrame and a pandas DataFrame

1. A Spark DataFrame has type pyspark.sql.dataframe.DataFrame; calling df.toPandas() converts it to a pandas DataFrame.

2. To go the other way, from a pandas DataFrame to a Spark DataFrame, create a context with sqlContext = SQLContext(SparkContext()) and then call spark_df = sqlContext.createDataFrame(df).

The complete code:

# -*- coding: utf-8 -*-
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql import SQLContext
from pyspark import SparkContext

# Initialize a pandas DataFrame
df = pd.DataFrame([[1, 2, 3], [4, 5, 6]], index=['row1', 'row2'], columns=['c1', 'c2', 'c3'])

# Print the pandas DataFrame
print(df)

# Initialize a SparkSession (the entry point for DataFrame operations)
spark = SparkSession \
    .builder \
    .appName("testDataFrame") \
    .getOrCreate()
sc = spark.sparkContext

sentenceData = spark.createDataFrame([
    (0.0, "I like Spark"),
    (1.0, "Pandas is useful"),
    (2.0, "They are coded by Python")
], ["label", "sentence"])

# Show the label column
sentenceData.select("label").show()

# pandas.DataFrame -> spark.DataFrame
sqlContext = SQLContext(sc)
spark_df = sqlContext.createDataFrame(df)

# Show the c1 column
spark_df.select("c1").show()

# spark.DataFrame -> pandas.DataFrame
pandas_df = sentenceData.toPandas()

# Print the converted pandas DataFrame
print(pandas_df)
