CentOS 6:
chkconfig iptables off
service iptables stop
chkconfig --list iptables
CentOS 7:
systemctl disable firewalld.service
systemctl stop firewalld.service
CentOS 6:
yum install -y ntp
chkconfig --list ntpd
chkconfig ntpd on
service ntpd start
CentOS 7:
yum install -y ntp
chkconfig --list ntpd
systemctl is-enabled ntpd
systemctl enable ntpd
systemctl start ntpd
The environment configuration above is the bare minimum and is required on every node. You can keep one fully configured node as a template for cloning child nodes later, which cuts down on repetitive work. Of course, small adjustments are still needed after cloning, such as reconfiguring the hostname; and for the SSH setup, distributing the authorized_keys prepared on the master node to each slave node is best done after all child nodes have been cloned.
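For example, once all child nodes have been cloned, the authorized_keys prepared on the master can be pushed out in one pass; a minimal sketch, assuming the slave hostnames follow the hdp13[1-3].cancer.com pattern used later in this guide:
for h in hdp132.cancer.com hdp133.cancer.com; do
  # copy the master's authorized_keys to each slave so passwordless SSH works
  scp ~/.ssh/authorized_keys root@"$h":/root/.ssh/authorized_keys
done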
2. Install the Ambari Cluster
2.1. Create a Local Repository
A local repository is used because installing Ambari from the online repositories is too slow. It only needs to be created on the master node.
1) Configure the HTTP Service
Configure the HTTP service to start automatically at boot:
chkconfig httpd on
service httpd start
2) Install Tools
Install the tools needed to build the local repository:
yum install yum-utils createrepo yum-plugin-priorities -y
vi /etc/yum/pluginconf.d/priorities.conf
Add gpgcheck=0
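After the edit, the plugin configuration should look roughly like this (assuming the stock file only contains the [main] section):
[main]
enabled = 1
gpgcheck = 0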
3) Download Ambari and HDP
CentOS 6:
http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.5.0.3/ambari-2.5.0.3-centos6.tar.gz
http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.0.3/HDP-2.6.0.3-centos6-rpm.tar.gz
http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6/HDP-UTILS-1.1.0.21-centos6.tar.gz
CentOS 7:
http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.5.0.3/ambari-2.5.0.3-centos7.tar.gz
http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.0.3/HDP-2.6.0.3-centos7-rpm.tar.gz
http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7/HDP-UTILS-1.1.0.21-centos7.tar.gz
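For example, on CentOS 6 the three tarballs can be downloaded into /opt, the directory used in the extraction step below:
cd /opt
wget http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.5.0.3/ambari-2.5.0.3-centos6.tar.gz
wget http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.0.3/HDP-2.6.0.3-centos6-rpm.tar.gz
wget http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6/HDP-UTILS-1.1.0.21-centos6.tar.gz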
4) Create the Local Repository
Extract the three downloaded tarballs into /var/www/html:
tar zxvf /opt/ambari-2.5.0.3-centos6.tar.gz -C /var/www/html
tar zxvf /opt/HDP-2.6.0.3-centos6-rpm.tar.gz -C /var/www/html
tar zxvf /opt/HDP-UTILS-1.1.0.21-centos6.tar.gz -C /var/www/html
Create the local repository:
cd /var/www/html/
createrepo ./
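To confirm the repository is reachable over HTTP (assuming httpd is serving /var/www/html on hdp131.cancer.com, the hostname used in the repo files below), a quick check:
curl -I http://hdp131.cancer.com/ambari/centos6/
curl -I http://hdp131.cancer.com/HDP/centos6/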
Download ambari.repo:
wget -nv http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.5.0.3/ambari.repo -O /etc/yum.repos.d/ambari.repo
Edit ambari.repo and point it at the local repository:
vi /etc/yum.repos.d/ambari.repo
#VERSION_NUMBER=2.5.0.3-7
[ambari-2.5.0.3]
name=ambari Version - ambari-2.5.0.3
baseurl=http://hdp131.cancer.com/ambari/centos6/
gpgcheck=0
gpgkey=http://hdp131.cancer.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1
vi /etc/yum.repos.d/HDP.repo
[HDP-2.6.0.3-8]
name=HDP-2.6.0.3-8
baseurl=http://hdp131.cancer.com/HDP/centos6/
gpgcheck=0
gpgkey=http://hdp131.cancer.com/HDP/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1
vi /etc/yum.repos.d/HDP-UTILS.repo
[HDP-UTILS-1.1.0.21]
name=HDP-UTILS-1.1.0.21
baseurl=http://hdp131.cancer.com/HDP-UTILS-1.1.0.20/repos/centos6/
gpgcheck=0
gpgkey=http://hdp131.cancer.com/HDP-UTILS-1.1.0.20/repos/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1
yum clean all
yum makecache
2.2. Install MySQL
Ambari's default database is PostgreSQL, which stores the installation metadata; you can instead install your own MySQL database and use it as the Ambari metadata store.
CentOS 6:
yum install -y mysql-server
chkconfig mysqld on
service mysqld start
CentOS 7:
wget http://dev.mysql.com/get/mysql-community-release-el7-5.noarch.rpm
rpm -ivh mysql-community-release-el7-5.noarch.rpm
yum install mysql-community-server
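On CentOS 7, enable and start the service after installation (the community packages register the service as mysqld):
systemctl enable mysqld
systemctl start mysqld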
2.3. Install Ambari
Run yum install ambari-server. After it completes successfully, a few more steps are needed for the MySQL database:
Copy mysql-connector-java.jar to /usr/share/java:
mkdir -p /usr/share/java
cp /opt/mysql-connector-java-5.1.40.jar /usr/share/java/mysql-connector-java.jar
Copy mysql-connector-java.jar to /var/lib/ambari-server/resources:
cp /usr/share/java/mysql-connector-java.jar /var/lib/ambari-server/resources/mysql-jdbc-driver.jar
Edit ambari.properties:
vi /etc/ambari-server/conf/ambari.properties
Add server.jdbc.driver.path=/usr/share/java/mysql-connector-java.jar
In MySQL, create the ambari, hive, and oozie databases and their corresponding users, and create the required tables:
CREATE DATABASE ambari;
use ambari;
CREATE USER 'ambari'@'%' IDENTIFIED BY 'bigdata';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'%';
CREATE USER 'ambari'@'localhost' IDENTIFIED BY 'bigdata';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'localhost';
CREATE USER 'ambari'@'hdp131.cancer.com' IDENTIFIED BY 'bigdata';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'hdp131.cancer.com';
FLUSH PRIVILEGES;
source /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql
show tables;
use mysql;
select Host,User,Password from user where user='ambari';
CREATE DATABASE hive;
use hive;
CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%';
CREATE USER 'hive'@'localhost' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'localhost';
CREATE USER 'hive'@'hdp131.cancer.com' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'hdp131.cancer.com';
FLUSH PRIVILEGES;
CREATE DATABASE oozie;
use oozie;
CREATE USER 'oozie'@'%' IDENTIFIED BY 'oozie';
GRANT ALL PRIVILEGES ON *.* TO 'oozie'@'%';
CREATE USER 'oozie'@'localhost' IDENTIFIED BY 'oozie';
GRANT ALL PRIVILEGES ON *.* TO 'oozie'@'localhost';
CREATE USER 'oozie'@'hdp131.cancer.com' IDENTIFIED BY 'oozie';
GRANT ALL PRIVILEGES ON *.* TO 'oozie'@'hdp131.cancer.com';
FLUSH PRIVILEGES;
Note: creating the databases and tables in MySQL before configuring Ambari avoids an interruption when running ambari-server setup.
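For reference, ambari-server setup can also be pointed at this MySQL database non-interactively; a sketch of one possible invocation, assuming the --database* setup options are available in this Ambari release (host, database, user, and password match the values created above):
ambari-server setup -s \
  --database=mysql \
  --databasehost=hdp131.cancer.com \
  --databaseport=3306 \
  --databasename=ambari \
  --databaseusername=ambari \
  --databasepassword=bigdata \
  --jdbc-db=mysql \
  --jdbc-driver=/usr/share/java/mysql-connector-java.jar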
2.5. Start Ambari
Run ambari-server start
After it starts successfully, open the Ambari address in a browser:
http://hdp131.cancer.com:8080/
3. Install, Configure, and Deploy the HDP Cluster
3.1. Log In
The first time you open http://hdp131.cancer.com:8080/ in a browser, the login page appears.
Log in with the default administrator account: username admin, password admin.
3.2. Installation Wizard
After logging in, the following page appears:
Click Launch Install Wizard to continue.
3.3. Set the Cluster Name
Give your cluster a name and click Next.
3.4. Select Version
This step selects the software version and installation source. Choose HDP 2.6.
Using the public repositories requires an Internet connection. Since our local repository is already set up, select Use Local Repository here.
Select the operating systems to install on and the local yum repositories, then enter the local repository base URLs:
OS | Name | Base URL |
---|---|---|
Redhat6 | HDP-2.6 | http://hdp131.cancer.com/HDP/centos6/ |
Redhat6 | HDP-UTILS-1.1.0.21 | http://hdp131.cancer.com/HDP-UTILS-1.1.0.20/repos/centos6/ |
Redhat7 | HDP-2.6 | http://hdp131.cancer.com/HDP/centos7/ |
Redhat7 | HDP-UTILS-1.1.0.21 | http://hdp131.cancer.com/HDP-UTILS-1.1.0.20/repos/centos7/ |
3.5. Specify Cluster Hosts
This step lists the nodes to deploy to, entered as fully qualified domain names (FQDNs).
For example: hdp13[1-3].cancer.com
Paste the contents of the id_rsa private key from the .ssh directory into the text box, or click Choose File to upload the id_rsa file itself.
If you choose not to provide the id_rsa private key, then yum install ambari-agent must already have been run on every node and ambari-agent must be running, as sketched below.
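A sketch of that manual registration path, run on each node; it assumes the default ambari-agent.ini layout and uses hdp131.cancer.com, the Ambari server host in this guide:
yum install -y ambari-agent
# point the agent at the Ambari server ([server] section of the ini file)
sed -i 's/^hostname=.*/hostname=hdp131.cancer.com/' /etc/ambari-agent/conf/ambari-agent.ini
ambari-agent start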
3.6. Confirm Hosts
In this step ambari-agent is installed on the listed machines so that they can communicate with ambari-server.
Note: if communication fails, the Success status changes to Failed; click it to see the cause. The usual causes are:
1. The ambari-server host cannot log in to the agent node with the key file, i.e. they cannot connect. Manually check whether you can run commands on the remote machine without a password.
2. The yum install failed. Check whether the ambari.repo file exists under /etc/yum.repos.d/, try yum clean all to clear stale caches, and retry the installation.
3. Ambari or related components were installed before and not cleaned up completely before reinstalling. Check for leftover files, delete them, and retry.
After the hosts are confirmed, a "Click here to see the warnings." link at the bottom of the page lists the warnings reported for each host.
Even with ambari-agent installed, other issues can still make the cluster installation fail, for example NTP not being configured or firewall rules still in place: even if SSH is allowed through, installing the Hadoop cluster needs many TCP ports open, and blocked ports can cause errors. A few quick checks are sketched below.
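Some quick per-node checks before retrying (CentOS 7 shown; use the service/chkconfig equivalents on CentOS 6):
systemctl status ntpd        # time synchronization should be running
systemctl status firewalld   # should be inactive/disabled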
3.7. Select Services
This step selects which services to install. By default everything is checked except Ambari Log Search and Druid, which are Technical Preview services; SmartSense is mandatory and cannot be deselected.
If the installation takes too long or keeps failing partway through, you can start with a minimal install, for example only HDFS, HBase, ZooKeeper, Ambari Metrics, and SmartSense, and add the remaining services once the cluster is up.
3.8. Assign Services to Nodes
This step assigns which services each node will run; this is where the cluster layout planning comes in.
3.9. Assign Clients to Nodes
This step selects which service clients are installed on each node.
3.10. Customize Services
This step customizes the configuration of the selected services; quite a few of them require passwords to be set.
Pay attention to Hive and Oozie here: if their metadata databases were already created in MySQL and you select Existing MySQL, the connection test keeps failing after you enter the URL and password. Stop Ambari first and run the following command:
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
Restart Ambari and the connection test will succeed.
3.11. Review Configuration
Once everything is configured, a summary report of the planned cluster layout for this installation is shown.
3.12. Install, Start and Test
This step installs each service and, once installation completes, starts and tests them. It takes a long time; if errors occur along the way, follow the specific messages or check the log:
tail -1000f /var/log/ambari-server/ambari-server.log
Problems encountered during installation and deployment, together with their solutions, are summarized in the next section.
3.13. Installation Succeeded
If everything shows a green Success, the cluster was installed correctly. Open http://hdp131.cancer.com:8080/#/main/dashboard/metrics to see the platform's main dashboard.
4. Problems Encountered During Installation and Their Solutions
1) ERROR [main] DBAccessorImpl:117 - If you are using a non-default database for Ambari and a custom JDBC driver jar, you need to set property "server.jdbc.driver.path={path/to/custom_jdbc_driver}" in ambari.properties config file, to include it in ambari-server classpath.java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
Solution: edit ambari.properties and add:
server.jdbc.driver.path=/usr/share/java/mysql-connector-java.jar
2) Table 'ambari.metainfo' doesn't exist
Error while creating database accessor
java.sql.SQLException: Access denied for user 'ambari'@'hdp71.cancer.com' (using password: YES)
Solution: import the schema creation script in MySQL:
source /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql
3) Transparent Huge Pages Issues (1)
The following hosts have Transparent Huge Pages (THP) enabled. THP should be disabled to avoid potential Hadoop performance issues.
Solution: vi /etc/grub.conf
Add transparent_hugepage=never to the kernel boot line.
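The grub.conf change only takes effect after a reboot; to turn THP off immediately on a running system, the following commonly used commands can also be run (not part of the original text):
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag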
4) Service Issues (1)
The following services should be up Service ntpd or chronyd Not running on 3 hosts
Solution: ntpd was running on only one node and had been stopped (service ntpd stop) on the others; start ntpd on all nodes so the check passes.
5) An internal system exception occurred: Base url http://public-repo-1.hortonworks.com/HDP/sles12/2.x/updates/2.6.0.3
is already defined for another repository version.
Setting up base urls that contain the same versions of components will cause stack upgrade to fail.
Solution: the local repository was built incorrectly and contains an older HDP version; clean it out and rebuild it.
6) resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 -y install accumulo_2_6_0_3_8' returned 1.
You could try using --skip-broken to work around the problem
** Found 1 pre-existing rpmdb problem(s), 'yum check' output follows:
mysql-community-common-5.7.9-1.el5.x86_64 has missing requires of mysql = ('0', '5.7.9', '1.el5')
Solution: a MySQL package/version conflict; uninstall MySQL and reinstall it.
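One possible cleanup sequence, assuming the community packages from section 2.2 are in use (check rpm -qa | grep mysql first and adjust the package names accordingly):
yum remove -y mysql-community-server mysql-community-common mysql-community-libs
yum install -y mysql-community-server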
7) IOError: [Errno 40] Too many levels of symbolic links: '/usr/hdp/current/hadoop-client/conf/core-site.xml'
Solution: a symbolic-link loop; find the offending directory and delete or rename it.
8) Error: Cannot retrieve repository metadata (repomd.xml) for repository: base. Please verify its path and try again
Solution: yum clean all
yum makecache
9) resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 -y install snappy-devel' returned 1.
Error: Package: snappy-devel-1.0.5-1.el6.x86_64 (HDP-UTILS-1.1.0.20)
Requires: snappy(x86-64) = 1.0.5-1.el6
Installed: snappy-1.1.0-1.el6.x86_64 (@anaconda-CentOS-201311272149.x86_64/6.5)
snappy(x86-64) = 1.1.0-1.el6
Available: snappy-1.0.5-1.el6.x86_64 (HDP-UTILS-1.1.0.20)
snappy(x86-64) = 1.0.5-1.el6
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
Solution: yum info snappy
yum remove snappy
yum erase -y snappy
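After the newer snappy is removed, reinstalling both packages should let yum pick up the matching 1.0.5 builds from the HDP-UTILS repository (an assumption based on the versions listed in the error):
yum install -y snappy snappy-devel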
10) resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/sbin/ambari-metrics-collector --config /etc/ambari-metrics-collector/conf stop' returned 127.
-bash: /usr/sbin/ambari-metrics-collector: No such file or directory
Solution: stop Ambari, remove ambari-metrics-collector, and reinstall it:
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE http://hdp131.cancer.com:8080/api/v1/clusters/hdpCluster/services/AMBARI_METRICS
rpm -ivh /var/www/html/ambari/centos6/ambari/ambari-metrics-collector-2.5.0.3-7.x86_64.rpm
11) resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 -y install kafka_2_6_0_3_8' returned 1.
Error unpacking rpm package kafka_2_6_0_3_8-0.10.1.2.6.0.3-8.noarch
error: unpacking of archive failed on file /usr/hdp/2.6.0.3-8/kafka/config: cpio: rename failed - Is a directory
Solution: delete that config directory and retry the installation.
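For example, the directory can be moved out of the way before retrying the installation (config.bak is an arbitrary backup name):
mv /usr/hdp/2.6.0.3-8/kafka/config /usr/hdp/2.6.0.3-8/kafka/config.bak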