Hadoop has many components, and complex functionality is usually built with a hybrid architecture; Kudu itself is actually somewhat similar to HBase. Its main features:
1: Supports primary keys (similar to a relational database)
2: Supports transactional operations; data can be inserted, updated, deleted, and queried
3: Supports a variety of data types
4: Supports ALTER TABLE; non-primary-key columns can be dropped
5: Supports INSERT, UPDATE, DELETE, UPSERT
6: Supports hash and range partitioning (a client-side sketch follows this list)
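To make the partitioning support concrete, here is a minimal sketch that creates a hash- and range-partitioned table through the Kudu Java client from Scala. The master address, table name, and column names are assumptions made up for this example.

import scala.collection.JavaConverters._
import org.apache.kudu.{ColumnSchema, Schema, Type}
import org.apache.kudu.client.{CreateTableOptions, KuduClient}

// Connect to a (hypothetical) Kudu master
val client = new KuduClient.KuduClientBuilder("kudu-master:7051").build()

// Two-column schema; both columns are part of the primary key,
// since hash and range partition columns must belong to the key
val columns = List(
  new ColumnSchema.ColumnSchemaBuilder("id", Type.STRING).key(true).build(),
  new ColumnSchema.ColumnSchemaBuilder("ts", Type.INT64).key(true).build()
).asJava
val schema = new Schema(columns)

// Hash partition on "id" into 4 buckets, range partition on "ts"
val options = new CreateTableOptions()
  .addHashPartitions(List("id").asJava, 4)
  .setRangePartitionColumns(List("ts").asJava)

// One explicit range partition: 0 <= ts < 1000000
val lower = schema.newPartialRow()
lower.addLong("ts", 0L)
val upper = schema.newPartialRow()
upper.addLong("ts", 1000000L)
options.addRangePartition(lower, upper)

client.createTable("example_partitioned", schema, options)
client.close()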
Enter the Impala shell: impala-shell -i node1ip
For the detailed CRUD syntax, see the official documentation; I won't list it all here:
http://kudu.apache.org/docs/kudu_impala_integration.html
Create a table
CREATE TABLE kudu_table (
  id STRING,
  name STRING,
  age INT,
  PRIMARY KEY (id, name)
)
PARTITION BY HASH PARTITIONS 16
STORED AS KUDU;
Insert data
INSERT INTO kudu_table
SELECT * FROM impala_table;
Note
All of the SQL statements above are executed inside Impala. Like HBase, Kudu is a NoSQL-style store: Kudu itself only provides an API, and Impala integrates with Kudu to provide SQL on top of it.
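Since Kudu itself only exposes a client API, here is a minimal sketch of writing one row directly through the Kudu Java client from Scala, without going through Impala. It assumes the kudu_table created above already exists; the master address is an assumption.

import org.apache.kudu.client.KuduClient

// Open a client and the target table (created via Impala above)
val client = new KuduClient.KuduClientBuilder("kudu-master:7051").build()
val table = client.openTable("impala::default.kudu_table")

// Build an insert operation and fill in the row
val insert = table.newInsert()
val row = insert.getRow
row.addString("id", "1")
row.addString("name", "lin")
row.addInt("age", 18)

// Apply it through a session and flush
val session = client.newSession()
session.apply(insert)
session.flush()
session.close()
client.close()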
Here is my Git repository:
https://github.com/LinMingQiang/spark-util/tree/spark-kudu
pom.xml
<dependency>
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-metastore</artifactId>
  <version>1.1.0</version>
</dependency>
<dependency>
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-jdbc</artifactId>
  <version>1.1.0</version>
</dependency>
<dependency>
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-service</artifactId>
  <version>1.1.0</version>
  <exclusions>
    <exclusion>
      <artifactId>servlet-api</artifactId>
      <groupId>javax.servlet</groupId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>org.apache.kudu</groupId>
  <artifactId>kudu-client</artifactId>
  <version>1.3.0</version>
</dependency>
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-sql_2.10</artifactId>
  <version>1.6.0</version>
</dependency>
<dependency>
  <groupId>org.kududb</groupId>
  <artifactId>kudu-spark_2.10</artifactId>
  <version>1.3.1</version>
</dependency>
<dependency>
  <groupId>org.apache.kudu</groupId>
  <artifactId>kudu-mapreduce</artifactId>
  <version>1.3.1</version>
  <exclusions>
    <exclusion>
      <artifactId>jsp-api</artifactId>
      <groupId>javax.servlet.jsp</groupId>
    </exclusion>
    <exclusion>
      <artifactId>servlet-api</artifactId>
      <groupId>javax.servlet</groupId>
    </exclusion>
  </exclusions>
</dependency>
import scala.collection.JavaConverters._
import org.apache.kudu.client.{KuduClient, KuduPredicate, KuduScanToken}

// Connect to the Kudu master and open a table that was created through Impala
val client = new KuduClient.KuduClientBuilder("master2").build()
val table = client.openTable("impala::default.kudu_pc_log")
client.getTablesList.getTablesList.asScala.foreach(println)

// Build a predicate (id = "1") and turn it into scan tokens
val schema = table.getSchema()
val kp = KuduPredicate.newComparisonPredicate(schema.getColumn("id"), KuduPredicate.ComparisonOp.EQUAL, "1")
val tokens = client.newScanTokenBuilder(table)
  .addPredicate(kp)
  .limit(100)
  .build()

// Deserialize the first token into a scanner and iterate over the result rows
val token = tokens.get(0)
val scan = KuduScanToken.deserializeIntoScanner(token.serialize(), client)
while (scan.hasMoreRows()) {
  val results = scan.nextRows()
  while (results.hasNext()) {
    val rowresult = results.next()
    println(rowresult.getString("id"))
  }
}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext
import org.apache.kudu.Type
import org.apache.kudu.ColumnSchema.ColumnSchemaBuilder
import org.apache.kudu.client.KuduPredicate
import org.apache.kudu.spark.kudu.KuduContext

val sc = new SparkContext(new SparkConf().setMaster("local").setAppName("Test"))
val sparksql = new SQLContext(sc)
import sparksql.implicits._
val kuduMaster = "master2:7051" // Kudu master address; adjust for your cluster
val a = new KuduContext(kuduMaster, sc)

// Read a Kudu table as an RDD with a column projection and a pushed-down predicate
def getKuduRDD() {
  val tableName = "impala::default.kudu_pc_log"
  val columnProjection = Seq("id", "name")
  val kp = KuduPredicate.newComparisonPredicate(new ColumnSchemaBuilder("id", Type.STRING).build(), KuduPredicate.ComparisonOp.EQUAL, "q")
  val rdd = a.kuduRDD(sc, tableName, columnProjection, Array(kp))
  rdd.foreach { x => println(x.mkString(",")) }
}

// Write a small DataFrame into an existing Kudu table
def writetoKudu() {
  val tableName = "impala::default.student"
  val rdd = sc.parallelize(Array("k", "b", "a")).map { n => STU(n.hashCode, n) }
  val data = rdd.toDF()
  a.insertRows(data, tableName)
}

case class STU(id: Int, name: String)
Starting with version 1.0.0, Kudu integrates with Spark through the Data Source API. Use the --packages option to include the kudu-spark dependency.
If you are using Spark with Scala 2.10, use the kudu-spark_2.10 artifact:
spark-shell --packages org.apache.kudu:kudu-spark_2.10:1.1.0
If you are using Spark 2 with Scala 2.11, use the kudu-spark2_2.11 artifact:
spark-shell --packages org.apache.kudu:kudu-spark2_2.11:1.1.0
Then import kudu-spark and create a DataFrame:
import org.apache.kudu.spark.kudu._
import org.apache.kudu.client._
import collection.JavaConverters._
// Read a table from Kudu
val df = sqlContext.read.options(Map("kudu.master" -> "kudu.master:7051","kudu.table" -> "kudu_table")).kudu
// Query using the Spark API...
df.select("id").filter("id" >= 5).show()
// ...or register a temporary table and use SQL
df.registerTempTable("kudu_table")
val filteredDF = sqlContext.sql("select id from kudu_table where id >= 5")
filteredDF.show()
// Use KuduContext to create, delete, or write to Kudu tables
val kuduContext = new KuduContext("kudu.master:7051", sqlContext.sparkContext)
// Create a new Kudu table from a dataframe schema
// NB: No rows from the dataframe are inserted into the table
kuduContext.createTable(
"test_table", df.schema, Seq("key"),
new CreateTableOptions()
.setNumReplicas(1)
.addHashPartitions(List("key").asJava, 3))
// Insert data
kuduContext.insertRows(df, "test_table")
// Delete data
kuduContext.deleteRows(filteredDF, "test_table")
// Upsert data
kuduContext.upsertRows(df, "test_table")
// Update data
val alteredDF = df.select($"id", ($"count" + 1).as("count"))
kuduContext.updateRows(alteredDF, "test_table")
// Data can also be inserted into the Kudu table using the data source, though the methods on KuduContext are preferred
// NB: The default is to upsert rows; to perform standard inserts instead, set operation = insert in the options map
// NB: Only mode Append is supported
df.write.options(Map("kudu.master"-> "kudu.master:7051", "kudu.table"-> "test_table")).mode("append").kudu
// Check for the existence of a Kudu table
kuduContext.tableExists("another_table")
// Delete a Kudu table
kuduContext.deleteTable("unwanted_table")
When registering a Kudu table as a temporary table, a table whose name contains upper-case or non-ASCII characters must be given an alternate name.
Kudu tables whose column names contain upper-case or non-ASCII characters may not work with SparkSQL; renaming the columns is a possible workaround.
<> and OR predicates are not pushed down to Kudu and are instead evaluated by the Spark task. Only LIKE predicates with a suffix wildcard are pushed down to Kudu, meaning that LIKE "FOO%" is evaluated by Kudu, but LIKE "FOO%BAR" is not (see the short example after these notes).
Kudu does not support all of the types supported by SparkSQL, such as Date, Decimal, and complex types.
Kudu tables can only be registered as temporary tables in SparkSQL; they cannot be queried through a HiveContext.
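To make the pushdown behavior concrete, here is a small sketch using the DataFrame df read from Kudu above; the column names are only for illustration.

// Pushed down to Kudu: a LIKE predicate with a suffix wildcard
df.filter("name LIKE 'FOO%'").show()
// Not pushed down: <> and OR predicates are evaluated by Spark instead
df.filter("id <> 5 OR name LIKE 'FOO%BAR'").show()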
The Kudu Python client provides a Python-friendly interface on top of the C++ client API. The example below demonstrates part of the Python client's functionality.
import kudu
from kudu.client import Partitioning
from datetime import datetime
# Connect to Kudu master server
client = kudu.connect(host='kudu.master', port=7051)
# Define a schema for a new table
builder = kudu.schema_builder()
builder.add_column('key').type(kudu.int64).nullable(False).primary_key()
builder.add_column('ts_val', type_=kudu.unixtime_micros, nullable=False, compression='lz4')
schema = builder.build()
# Define partitioning schema
partitioning = Partitioning().add_hash_partitions(column_names=['key'], num_buckets=3)
# Create new table
client.create_table('python-example', schema, partitioning)
# Open a table
table = client.table('python-example')
# Create a new session so that we can apply write operations
session = client.new_session()
# Insert a row
op = table.new_insert({'key': 1, 'ts_val': datetime.utcnow()})
session.apply(op)
# Upsert a row: the operation checks whether the new row conflicts with an existing primary key;
# on a conflict it updates the existing row, otherwise it inserts a new one
op = table.new_upsert({'key': 2, 'ts_val': "2016-01-01T00:00:00.000000"})
session.apply(op)
# Updating a row
op = table.new_update({'key': 1, 'ts_val': ("2017-01-01", "%Y-%m-%d")})
session.apply(op)
# Delete a row
op = table.new_delete({'key': 2})
session.apply(op)
# Flush write operations; if failures occur, capture and print them
try:
    session.flush()
except kudu.KuduBadStatus as e:
    print(session.get_pending_errors())
# Create a scanner and add a predicate
scanner = table.scanner()
scanner.add_predicate(table['ts_val'] == datetime(2017, 1, 1))
# Open Scanner and read all tuples
# Note: This doesn't scale for large scans
result = scanner.open().read_all_tuples()
Kudu is designed to integrate with MapReduce, YARN, Spark, and other frameworks in the Hadoop ecosystem. See RowCounter.java and ImportCsv.java for examples you can model your own integrations on. Stay tuned for more examples that use YARN and Spark in the future.