Building a Hadoop Platform with Ambari

Environment Preparation

1: Install the JDK: place the archive under /opt/data and extract it with tar -zxf into /opt/apps. Required on all nodes.
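A minimal sketch of this step, assuming the archive is jdk-8u171-linux-x64.tar.gz (the same build referenced later in ambari.properties):

mkdir -p /opt/data /opt/apps
# extract the JDK placed in /opt/data into /opt/apps
tar -zxf /opt/data/jdk-8u171-linux-x64.tar.gz -C /opt/apps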

2: Run sudo apt update to refresh the package sources. Required on all nodes.

3: Configure /etc/hosts:

10.68.29.243   iZwz9870dk1soyw67s3ephZ

10.68.29.244   iZwz9870dk1soyw67s3epgZ

10.68.29.245   iZwz9870dk1soyw67s3eplZ

10.68.29.246   iZwz9870dk1soyw67s3epjZ

10.68.29.247   iZwz9870dk1soyw67s3epkZ

10.68.29.248   iZwz9870dk1soyw67s3epiZ


4: Install MySQL on the 10.68.29.246 node: sudo apt-get install mysql-server

vi /etc/mysql/mysql.conf.d/mysqld.cnf

Comment out the bind-address line so MySQL accepts remote connections:

#bind-address           = 127.0.0.1

sudo /etc/init.d/mysql restart

GRANT ALL PRIVILEGES ON *.* TO 'root'@'iZwz9870dk1soyw67s3epjZ' IDENTIFIED BY 'newpass' WITH GRANT OPTION;

FLUSH PRIVILEGES;
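To verify the grant, a quick check run from the granted host (a sketch; adjust the password to your own):

mysql -h iZwz9870dk1soyw67s3epjZ -u root -pnewpass -e "SELECT 1;"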

Upload the MySQL JDBC driver to /usr/share/java

Rename it: mv mysql-connector-java-8.0.12.jar mysql-connector-java.jar

5: Run the following command in the /opt/data directory:

nohup python -m SimpleHTTPServer >> SimpleHTTPServer.log 2>&1 &

The resulting URL is: http://47.106.167.81:8000/
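Note: SimpleHTTPServer is Python 2 only and serves on port 8000 by default. On a Python 3 host the equivalent would be:

nohup python3 -m http.server 8000 >> SimpleHTTPServer.log 2>&1 &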


6: Extract the following files:

ambari-2.6.2.2-ubuntu16.tar.gz

HDP-2.6.5.0-ubuntu16-deb.tar.gz

HDP-GPL-2.6.5.0-ubuntu16-gpl.tar.gz

HDP-UTILS-1.1.0.22-ubuntu16.tar.gz
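A sketch of the extraction, assuming the tarballs sit in /opt/data so the HTTP server started above can serve the unpacked repositories:

cd /opt/data
tar -zxf ambari-2.6.2.2-ubuntu16.tar.gz
tar -zxf HDP-2.6.5.0-ubuntu16-deb.tar.gz
tar -zxf HDP-GPL-2.6.5.0-ubuntu16-gpl.tar.gz
tar -zxf HDP-UTILS-1.1.0.22-ubuntu16.tar.gz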


7: Create two files under /etc/apt/sources.list.d/:

ambari.list

deb http://iZwz9870dk1soyw67s3epjZ:8000/ambari/ubuntu16/2.6.2.2-1/ Ambari main

ambari-hdp.list

deb http://iZwz9870dk1soyw67s3epjZ:8000/HDP/ubuntu16/2.6.5.0-292/ HDP main

deb http://iZwz9870dk1soyw67s3epjZ:8000/HDP-GPL/ubuntu16/2.6.5.0-292/ HDP-GPL main

deb http://iZwz9870dk1soyw67s3epjZ:8000/HDP-UTILS/ubuntu16/1.1.0.22/ HDP-UTILS main

Then scp both files to the same directory on the other nodes.
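For example, a loop over the other hosts listed in /etc/hosts (a sketch; assumes root SSH access between nodes):

for h in iZwz9870dk1soyw67s3ephZ iZwz9870dk1soyw67s3epgZ iZwz9870dk1soyw67s3eplZ iZwz9870dk1soyw67s3epkZ iZwz9870dk1soyw67s3epiZ; do
  scp /etc/apt/sources.list.d/ambari.list /etc/apt/sources.list.d/ambari-hdp.list root@$h:/etc/apt/sources.list.d/
done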

8: Fetch the repository public key:

apt-key adv --recv-keys --keyserver keyserver.ubuntu.com B9733A7A07513CAD   (keyserver.ubuntu.com is the server the public key is downloaded from; B9733A7A07513CAD is the key ID)

9: apt-get update

On the agent nodes: sudo apt-get install ambari-agent ambari-metrics-assembly

10: Install ambari-server

On the server node: apt-get install -y ambari-server

Update the configuration:

In /etc/ambari-server/conf/ambari.properties set:

jdk1.8.url=http://iZwz9870dk1soyw67s3epjZ:8000/jdk-8u171-linux-x64.tar.gz

Add the line: server.jdbc.driver.path=/usr/share/java/mysql-connector-java.jar

Edit /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql

Add the following text (the USE @schema; line is commented out):

# USE @schema;

CREATE DATABASE ambari;

CREATE DATABASE hive;

GRANT ALL PRIVILEGES ON ambari.* TO 'ambari'@'%' IDENTIFIED BY 'ambari';

GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%' IDENTIFIED BY 'hive';

FLUSH PRIVILEGES;

use ambari;


In the mysql shell, run: source /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql
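To confirm the schema loaded, still in the mysql shell:

SHOW DATABASES;
USE ambari;
SHOW TABLES;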

11: Edit /etc/ambari-agent/conf/ambari-agent.ini

[server]

hostname=iZwz9870dk1soyw67s3epjZ

(change this to the hostname of your ambari-server node)

Under the [security] section, add force_https_protocol=PROTOCOL_TLSv1_2; otherwise a PROTOCOL exception is thrown.
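The section should then read:

[security]
force_https_protocol=PROTOCOL_TLSv1_2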


Configuring HDP

1: ambari-server setup

Choose the custom JDK option.

Choose the MySQL database option.
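For reference, the same choices can be made non-interactively. This is a sketch, not the exact invocation used here; the JDK path is an assumption based on the extraction directory from step 1:

# register the JDBC driver, then run a silent setup against the MySQL node
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
ambari-server setup -s -j /opt/apps/jdk1.8.0_171 --database=mysql \
  --databasehost=iZwz9870dk1soyw67s3epjZ --databaseport=3306 \
  --databasename=ambari --databaseusername=ambari --databasepassword=ambari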


Start the services:

On the server node: ambari-server start

On the agent nodes: ambari-agent start
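Both commands have a status subcommand to confirm they came up:

ambari-server status
ambari-agent status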


If the following situation occurs, the configuration file needs to be changed:

Replace the hostname.


Installing HDP

On the Windows machine, map the public IPs to hostnames in its hosts file:

47.106.167.81  iZwz9870dk1soyw67s3epjZ

39.108.98.125   iZwz9870dk1soyw67s3epkZ

47.106.120.70   iZwz9870dk1soyw67s3epiZ



Click Next.


Begin the custom resource configuration.


Deploy:


Finally:

Configure NameNode HA.

Configure ResourceManager HA.

Pitfalls encountered:

1. The server needs to be started first.

2. The MySQL driver version must not be lower than 5.6.

3. After MySQL is installed, its configuration file must be modified.

4. Ports 8020 and 50070 must be opened, and the local loopback entries in /etc/hosts must be removed (sketch below).
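A sketch of item 4, assuming ufw is the firewall in use (on cloud hosts the security group must also allow these ports):

sudo ufw allow 8020      # HDFS NameNode RPC
sudo ufw allow 50070     # HDFS NameNode web UI
# in /etc/hosts, comment out the loopback mapping of the node's hostname, e.g.:
# 127.0.1.1 iZwz9870dk1soyw67s3epjZ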

5. The ZooKeeper config file has an extra line at the end, a pitfall in the official release; delete it:

/usr/hdp/current/zookeeper-server/conf/zoo.cfg

This file ends with an extra line containing an illegal character.

6: apt update hangs on node iZwz9870dk1soyw67s3epgZ:

root@iZwz9870dk1soyw67s3epgZ:/etc/apt/sources.list.d# sudo apt update
Get:1 file:/var/nvidia-diag-driver-local-repo-396.26 InRelease
Ign:1 file:/var/nvidia-diag-driver-local-repo-396.26 InRelease
Get:2 file:/var/nvidia-diag-driver-local-repo-396.26 Release [574 B]
Hit:3 http://mirrors.cloud.aliyuncs.com/ubuntu xenial InRelease
Get:2 file:/var/nvidia-diag-driver-local-repo-396.26 Release [574 B]
Get:4 http://mirrors.cloud.aliyuncs.com/ubuntu xenial-updates InRelease [109 kB]
Get:5 http://mirrors.cloud.aliyuncs.com/ubuntu xenial-security InRelease [107 kB]
Get:7 http://mirrors.cloud.aliyuncs.com/ubuntu xenial-updates/main amd64 Packages [892 kB]
Get:8 http://mirrors.cloud.aliyuncs.com/ubuntu xenial-updates/main i386 Packages [790 kB]
Get:9 http://mirrors.cloud.aliyuncs.com/ubuntu xenial-updates/universe amd64 Packages [715 kB]
Get:10 http://mirrors.cloud.aliyuncs.com/ubuntu xenial-updates/universe i386 Packages [655 kB]
0% [Waiting for headers]^Z


It gets stuck at this point; clear the apt lists and retry:

rm -rf /var/lib/apt/lists/*

apt-get clean

apt-get update


apt-get remove hdp-select   (reinstall only after the removal succeeds)


7: EOF occurred in violation of protocol (_ssl.c:579)

vi /etc/ambari-agent/conf/ambari-agent.ini

Add:

[security]

ssl_verify_cert=0

force_https_protocol=PROTOCOL_TLSv1_2

Split-brain

After restarting ambari-server, the ResourceManagers hit a split-brain: both ended up in standby state.

Are you sure you want to continue? (Y or N) Y
18/12/20 16:17:19 WARN ha.HAAdmin: Proceeding with manual HA state management even though
automatic failover is enabled for org.apache.hadoop.yarn.client.RMHAServiceTarget@212bf671
Operation failed: Error when transitioning to Active mode
       at org.apache.hadoop.yarn.server.resourcemanager.AdminService.transitionToActive(AdminService.java:334)
       at org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
       at org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:4460)
       at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
       at java.security.AccessController.doPrivileged(Native Method)
       at javax.security.auth.Subject.doAs(Subject.java:422)
       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)
Caused by: org.apache.hadoop.service.ServiceStateException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RetriableException): org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create file /system/yarn/node-labels/nodelabel.mirror.writing. Name node is in safe mode.
The reported blocks 6070 needs additional 373 blocks to reach the threshold 0.9900 of total blocks 6508.
The number of live datanodes 6 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1426)
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2697)
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2586)
       at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:736)
       at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:409)
       at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
       at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
       at java.security.AccessController.doPrivileged(Native Method)
       at javax.security.auth.Subject.doAs(Subject.java:422)
       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)

The NameNode is in safe mode:

hdfs dfsadmin -safemode leave

Then manually switch rm1 to active: yarn rmadmin -transitionToActive --forcemanual rm1
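To confirm the recovery, both states can be queried (rm2 is the assumed id of the second ResourceManager):

hdfs dfsadmin -safemode get
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2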
