69. Installing Kudu, Spark2, and Kafka on CDH

69.1 Demo Environment

  • The CDH cluster is up and running normally
  • Operating system: CentOS 6.5
  • CM and CDH version: 5.12.1
  • CM administrator: the admin user
  • Operating system user: root

69.2 Walkthrough

Installing Kudu

  • Deploy the Kudu Parcel
    • Download the Kudu Parcel files:
http://archive.cloudera.com/kudu/parcels/5.12.1/KUDU-1.4.0-1.cdh5.12.1.p0.10-el6.parcel
http://archive.cloudera.com/kudu/parcels/5.12.1/KUDU-1.4.0-1.cdh5.12.1.p0.10-el6.parcel.sha1
http://archive.cloudera.com/kudu/parcels/5.12.1/manifest.json
  • Download the above files to the /var/www/html/kudu1.4 directory on the server hosting the HTTP service
[root@ip-186-31-6-148 ~]# cd /var/www/html/
[root@ip-186-31-6-148 html]# mkdir kudu1.4
[root@ip-186-31-6-148 html]# cd kudu1.4/
[root@ip-186-31-6-148 kudu1.4]# ll
total 474140
-rw-r--r-- 1 root root 485506175 Aug 30 14:55 KUDU-1.4.0-1.cdh5.12.1.p0.10-el6.parcel
-rw-r--r-- 1 root root        41 Aug 30 14:55 KUDU-1.4.0-1.cdh5.12.1.p0.10-el6.parcel.sha1
-rw-r--r-- 1 root root      2646 Aug 30 14:55 manifest.json
[root@ip-186-31-6-148 kudu1.4]# 
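
Before pointing CM at this directory, it is worth confirming that the parcel downloaded intact; the .sha1 file ships with just the parcel's SHA-1 hash, so the two outputs below should match:

# The two hashes should be identical (the .sha1 file contains only the 40-character hash)
[root@ip-186-31-6-148 kudu1.4]# sha1sum KUDU-1.4.0-1.cdh5.12.1.p0.10-el6.parcel | awk '{print $1}'
[root@ip-186-31-6-148 kudu1.4]# cat KUDU-1.4.0-1.cdh5.12.1.p0.10-el6.parcel.sha1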
  • Install the Kudu service
    • In the CM UI, configure the Kudu Parcel repository URL, then download, distribute, and activate the Parcel
    • Select the Master and Tablet Server hosts
    • Configure the relevant directories. For both the Master and the Tablet Servers there will usually be multiple data directories (fs_data_dirs), depending on the hardware, to spread concurrent reads and writes across disks and improve Kudu performance; see the sketch below
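A minimal sketch of what that looks like at the gflag level; the /data1 through /data3 mount points are hypothetical, and in practice these values are entered on the Kudu service configuration pages in CM:

# Hypothetical Tablet Server layout: WAL on its own disk, data spread across three disks
--fs_wal_dir=/data0/kudu/tserver/wal
--fs_data_dirs=/data1/kudu/tserver,/data2/kudu/tserver,/data3/kudu/tserver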
  • Configure Impala
    • In Impala's advanced configuration snippet, set the Kudu Master address and port:
--kudu_master_hosts=ip-186-31-6-148.fayson.com:7051
  • Multiple masters are separated with commas, e.g.:
--kudu_master_hosts=ip-186-31-6-148.fayson.com:7051,ip-186-31-10-118.fayson.com:7051

Installing Spark2

  • Download the CSD file:
http://archive.cloudera.com/spark2/csd/SPARK2_ON_YARN-2.1.0.cloudera1.jar
  • Move the CSD file to the /opt/cloudera/csd directory
[root@ip-186-31-6-148 csd]# pwd
/opt/cloudera/csd
[root@ip-186-31-6-148 csd]# ll
total 16
-rw-r--r-- 1 root root 16109 Mar 29 06:58 SPARK2_ON_YARN-2.1.0.cloudera1.jar
[root@ip-186-31-6-148 csd]# 

If the csd directory does not exist, create it and give it the correct ownership:

[root@ip-186-31-6-148 cloudera]# mkdir csd
[root@ip-186-31-6-148 cloudera]# chown cloudera-scm:cloudera-scm csd/
  • Restart the CM service
[root@ip-186-31-6-148 ~]# service cloudera-scm-server restart
Stopping cloudera-scm-server:                              [  OK  ]
Starting cloudera-scm-server:                              [  OK  ]
[root@ip-186-31-6-148 ~]# 
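
If the restart is in doubt, the CM server log (default path shown below) shows whether the server came back up cleanly:

# Watch the CM server log for startup completion and the absence of stack traces
[root@ip-186-31-6-148 ~]# tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log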
  • Deploy the Spark2 Parcel
    • Download the Spark2 Parcel files:
http://archive.cloudera.com/spark2/parcels/2.1.0/SPARK2-2.1.0.cloudera1-1.cdh5.7.0.p0.120904-el6.parcel
http://archive.cloudera.com/spark2/parcels/2.1.0/SPARK2-2.1.0.cloudera1-1.cdh5.7.0.p0.120904-el6.parcel.sha1
http://archive.cloudera.com/spark2/parcels/2.1.0/manifest.json

Download the above three files to the /var/www/html/spark2.1.0 directory:

[root@ip-186-31-6-148 html]# cd /var/www/html/
[root@ip-186-31-6-148 html]# mkdir spark2.1.0
[root@ip-186-31-6-148 html]# cd spark2.1.0/
[root@ip-186-31-6-148 spark2.1.0]# ll
total 173052
-rw-r--r-- 1 root root      4677 Mar 29 06:58 manifest.json
-rw-r--r-- 1 root root 177185276 Mar 29 06:58 SPARK2-2.1.0.cloudera1-1.cdh5.7.0.p0.120904-el6.parcel
-rw-r--r-- 1 root root        41 Mar 29 06:58 SPARK2-2.1.0.cloudera1-1.cdh5.7.0.p0.120904-el6.parcel.sha1
[root@ip-186-31-6-148 spark2.1.0]# 
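
A quick check that cluster hosts will be able to reach this repository over HTTP (hostname assumed to be the HTTP server used throughout this demo); an HTTP response with the JSON manifest confirms the repo URL to enter in CM:

# The manifest should come back as JSON describing the SPARK2 parcel
[root@ip-186-31-6-148 spark2.1.0]# curl -s http://ip-186-31-6-148.fayson.com/spark2.1.0/manifest.json | head -5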
  • Install Spark2
    • In the CM admin UI, configure the Spark2 Parcel repository URL and save
    • Select the History Server and Gateway hosts

Installing Kafka

  • Version selection: choose a Kafka Parcel version that is compatible with your CM/CDH release
  • Deploy the Kafka Parcel
    • Download the Kafka Parcel files:
http://archive.cloudera.com/kafka/parcels/2.1.1.18/KAFKA-2.1.1-1.2.1.1.p0.18-el6.parcel
http://archive.cloudera.com/kafka/parcels/2.1.1.18/KAFKA-2.1.1-1.2.1.1.p0.18-el6.parcel.sha1
http://archive.cloudera.com/kafka/parcels/2.1.1.18/manifest.json
  • Download the above files to the /var/www/html/kafka2.1.1.18 directory
[root@ip-186-31-6-148 html]# cd /var/www/html/
[root@ip-186-31-6-148 html]# mkdir kafka2.1.1.18
[root@ip-186-31-6-148 html]# cd kafka2.1.1.18/
[root@ip-186-31-6-148 kafka2.1.1.18]# ll
total 66536
-rw-r--r-- 1 root root 68116503 Mar 27 17:39 KAFKA-2.1.1-1.2.1.1.p0.18-el6.parcel
-rw-r--r-- 1 root root       41 Mar 27 17:39 KAFKA-2.1.1-1.2.1.1.p0.18-el6.parcel.sha1
-rw-r--r-- 1 root root     5252 Mar 27 17:40 manifest.json
[root@ip-186-31-6-148 kafka2.1.1.18]# 
  • Install the Kafka service
    • In CM, configure the Kafka Parcel repository URL and save
    • Choose a set of dependencies for Kafka
    • Select the Kafka Broker and Gateway hosts
    • Adjust the Kafka configuration to match your cluster environment
    • Increase the Kafka Broker heap size: the default of only 50 MB can cause the broker to fail on startup; see the check below
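After raising the heap (for example to 1 GiB under Kafka > Configuration > Java Heap Size of Broker in CM) and restarting, a quick sanity check that the broker JVM actually picked up the new value; the grep pattern is just a convenience:

# Prints the running broker's -Xmx flag, e.g. Xmx1G
[root@ip-186-31-6-148 ~]# ps -ef | grep kafka.Kafka | grep -o 'Xmx[^ ]*'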

Verifying Kudu

  • Create a table:
CREATE TABLE my_first_table(
    id BIGINT,
    name STRING,
    PRIMARY KEY(id)
)
PARTITION BY HASH PARTITIONS 16
STORED AS KUDU;
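If --kudu_master_hosts has not been set globally for Impala, the master address can instead be supplied per table through TBLPROPERTIES. A hedged one-shot equivalent of the DDL above, run through impala-shell (master host taken from this demo):

# Same DDL, with the Kudu master named explicitly on the table
[impala@ip-186-31-6-148 root]$ impala-shell -i ip-186-31-10-118.fayson.com -q "CREATE TABLE my_first_table(id BIGINT, name STRING, PRIMARY KEY(id)) PARTITION BY HASH PARTITIONS 16 STORED AS KUDU TBLPROPERTIES('kudu.master_addresses'='ip-186-31-6-148.fayson.com:7051')"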
  • Create the Kudu table through impala-shell
[impala@ip-186-31-6-148 root]$ impala-shell -i ip-186-31-10-118.fayson.com
...
[ip-186-31-10-118.fayson.com:21000] > show tables;
Query: show tables
+------------+
| name       |
+------------+
| test       |
| test_table |
+------------+
Fetched 2 row(s) in 0.06s
[ip-186-31-10-118.fayson.com:21000] > CREATE TABLE my_first_table(
                                    >     id BIGINT,
                                    >     name STRING,
                                    >     PRIMARY KEY(id)
                                    > )
                                    > PARTITION BY HASH PARTITIONS 16
                                    > STORED AS KUDU;
Query: create TABLE my_first_table(
    id BIGINT,
    name STRING,
    PRIMARY KEY(id)
)
PARTITION BY HASH PARTITIONS 16
STORED AS KUDU

Fetched 0 row(s) in 2.43s
[ip-186-31-10-118.fayson.com:21000] >
  • Insert data and query it
    • The results can also be inspected through the Kudu Master Web UI
[ip-186-31-10-118.fayson.com:21000] > insert into my_first_table values(1,'fayson');
Query: insert into my_first_table values(1,'fayson')
...
Modified 1 row(s), 0 row error(s) in 3.92s
[ip-186-31-10-118.fayson.com:21000] > select * from my_first_table;
...
+----+--------+
| id | name   |
+----+--------+
| 1  | fayson |
+----+--------+
Fetched 1 row(s) in 1.02s
[ip-186-31-10-118.fayson.com:21000] > 
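
The new table should also be visible in the Kudu Master Web UI. A command-line spot check, assuming the default master web port 8051 (Impala-created tables appear in Kudu under the impala:: prefix):

# The /tables page of the master web UI lists all Kudu tables, including the one just created
[root@ip-186-31-6-148 ~]# curl -s http://ip-186-31-6-148.fayson.com:8051/tables | grep my_first_table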

Verifying Spark2

[root@ip-186-31-6-148 ~]# spark2-shell
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/09/11 09:46:22 WARN spark.SparkContext: Support for Java 7 is deprecated as of Spark 2.0.0
Spark context Web UI available at http://186.31.6.148:4040
Spark context available as 'sc' (master = yarn, app id = application_1505121236974_0001).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0.cloudera1
      /_/
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_67)
Type in expressions to have them evaluated.
Type :help for more information.

scala> var textFile=sc.textFile("/fayson/test/a.txt")
textFile: org.apache.spark.rdd.RDD[String] = /fayson/test/a.txt MapPartitionsRDD[1] at textFile at <console>:24

scala> textFile.count()
res0: Long = 3

scala> 
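
A YARN job makes a useful second smoke test. A sketch using the SparkPi example bundled with the Spark2 parcel; the jar path assumes the default parcel layout:

# Submits the bundled SparkPi example to YARN; the output should contain "Pi is roughly 3.14..."
[root@ip-186-31-6-148 ~]# spark2-submit --class org.apache.spark.examples.SparkPi \
    --master yarn --deploy-mode client \
    /opt/cloudera/parcels/SPARK2/lib/spark2/examples/jars/spark-examples_*.jar 10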

Verifying Kafka

  • Create a topic named test
[root@ip-186-31-6-148 hive]# kafka-topics --create --zookeeper ip-186-31-6-148.fayson.com:2181 --replication-factor 3 --partitions 1 --topic test
  • Send messages to the topic
[root@ip-186-31-6-148 hive]# kafka-console-producer --broker-list ip-186-31-10-118.fayson.com:9092 --topic test
  • Consume messages from the topic
[root@ip-186-31-6-148 hive]# kafka-console-consumer --zookeeper ip-186-31-6-148.fayson.com:2181 --topic test --from-beginning
  • Describe the topic
[root@ip-186-31-6-148 hive]# kafka-topics --describe --zookeeper ip-186-31-6-148.fayson.com:2181 --topic test
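
The producer and consumer above are interactive; for a non-interactive end-to-end check, a message can be piped through in one shot (same hosts as above):

# Send one message, then read exactly one message back from the beginning of the topic
[root@ip-186-31-6-148 hive]# echo "hello fayson" | kafka-console-producer --broker-list ip-186-31-10-118.fayson.com:9092 --topic test
[root@ip-186-31-6-148 hive]# kafka-console-consumer --zookeeper ip-186-31-6-148.fayson.com:2181 --topic test --from-beginning --max-messages 1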
