Learning notes: connecting to HBase from PySpark

1. Reading data

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("abv").getOrCreate()  # create the SparkSession
print('SparkSession created')

host = 'learn'     # ZooKeeper quorum host
table = 'student'  # HBase table to read
conf = {"hbase.zookeeper.quorum": host, "hbase.mapreduce.inputtable": table}
keyConv = "org.apache.spark.examples.pythonconverters.ImmutableBytesWritableToStringConverter"
valueConv = "org.apache.spark.examples.pythonconverters.HBaseResultToStringConverter"

# Read the table as an RDD of (rowkey, cell-string) pairs
hbase_rdd = spark.sparkContext.newAPIHadoopRDD(
    "org.apache.hadoop.hbase.mapreduce.TableInputFormat",
    "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
    "org.apache.hadoop.hbase.client.Result",
    keyConverter=keyConv,
    valueConverter=valueConv,
    conf=conf)

hbase_rdd.cache()  # cache before running several actions on the same RDD
count = hbase_rdd.count()
print('row count:', count)
output = hbase_rdd.collect()
for (k, v) in output:
    print(k, v)
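Each value string produced by HBaseResultToStringConverter packs the cells of one row; in the newer spark-examples converters every cell is a JSON object on its own line. Below is a minimal sketch of turning that output into Python dicts, assuming that format (field names such as 'columnFamily' and 'qualifier' come from that converter, not from the original post):

import json

# One JSON object per cell, cells separated by newlines (assumed converter output format)
cells = hbase_rdd.flatMap(lambda kv: [json.loads(c) for c in kv[1].split('\n')])
for cell in cells.collect():
    print(cell['row'], cell['columnFamily'], cell['qualifier'], cell['value'])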

2. Writing data

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("abv").getOrCreate()  # create the SparkSession
print('SparkSession created')

host = 'learn'     # ZooKeeper quorum host
table = 'student'  # HBase table to write to
keyConv = "org.apache.spark.examples.pythonconverters.StringToImmutableBytesWritableConverter"
valueConv = "org.apache.spark.examples.pythonconverters.StringListToPutConverter"
conf = {
    "hbase.zookeeper.quorum": host,
    "hbase.mapred.outputtable": table,
    "mapreduce.outputformat.class": "org.apache.hadoop.hbase.mapreduce.TableOutputFormat",
    "mapreduce.job.output.key.class": "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
    "mapreduce.job.output.value.class": "org.apache.hadoop.io.Writable"}

# Each record maps to (rowkey, [rowkey, column family, column name, value])
rawData = ['3,info,name,Rongcheng', '4,info,name,Guanhua']
print('about to write data')
spark.sparkContext.parallelize(rawData) \
    .map(lambda x: (x[0], x.split(','))) \
    .saveAsNewAPIHadoopDataset(conf=conf, keyConverter=keyConv, valueConverter=valueConv)
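Note that x[0] works here only because the row keys ('3', '4') are single characters. A variant that splits the line first, and so also handles longer keys, might look like this (the multi-character keys below are made-up sample data, not from the original):

# Hypothetical rows with multi-character row keys (sample data only)
rawData2 = ['30,info,name,Rongcheng', '41,info,name,Guanhua']

spark.sparkContext.parallelize(rawData2) \
    .map(lambda line: (line.split(',')[0], line.split(','))) \
    .saveAsNewAPIHadoopDataset(conf=conf, keyConverter=keyConv, valueConverter=valueConv)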

Problems encountered:

ClassNotFoundException for the HBase classes: copy the jars whose names start with hbase from HBase's lib directory into Spark's jars directory, or create a new folder for them and add SPARK_CLASSPATH=<path to the folder holding the HBase jars> to spark-env.sh. Also download spark-examples-1.6.0.jar (the jar that provides the org.apache.spark.examples.pythonconverters classes used above) and put it in that same folder.
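If editing spark-env.sh is inconvenient, the same jars can usually be put on the classpath when the session is built. The sketch below is an assumption about the deployment, not part of the original setup: /opt/hbase/lib/* is a placeholder for wherever the hbase-* jars and the spark-examples jar actually live, and depending on how the job is launched the driver classpath may instead need to be passed on the spark-submit command line with --driver-class-path or --jars.

from pyspark.sql import SparkSession

# Placeholder path: point this at the folder holding the hbase-* jars
# and the spark-examples jar mentioned above.
hbase_cp = "/opt/hbase/lib/*"

spark = (SparkSession.builder
         .appName("abv")
         .config("spark.driver.extraClassPath", hbase_cp)    # classpath for the driver JVM
         .config("spark.executor.extraClassPath", hbase_cp)  # classpath for the executors
         .getOrCreate())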

Restart Spark so the new jars and classpath settings take effect.
