Install CDH5 with Cloudera Manager

[root@hadoopone ~]# cat /etc/issue
CentOS release 6.5 (Final)
Kernel \r on an \m
[root@master01 .ssh]# getconf LONG_BIT
64
[root@master01 .ssh]#

1. Change the hostname
vi /etc/sysconfig/network
[root@hadoopfive ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=master01
[root@hadoopfive ~]#
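
The file change only takes effect after a reboot; a minimal sketch to apply the new name to the running system as well (using the hostname from the file above):
hostname master01    # set the runtime hostname; log out and back in so the prompt updates
hostname             # verify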

Map each IP address to its hostname on every node (a sketch for pushing the file to the other nodes follows the list):
vi /etc/hosts
192.168.1.207 master01
192.168.1.208 master02
192.168.1.209 datanode01
192.168.1.213 datanode02
192.168.1.214 datanode03
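
The same /etc/hosts must exist on every node. A minimal sketch for pushing it from master01 to the other hosts (uses the IPs listed above and assumes root SSH access; adjust to your environment):
for h in 192.168.1.208 192.168.1.209 192.168.1.213 192.168.1.214; do
    scp /etc/hosts root@$h:/etc/hosts    # copy the name mapping to each node
done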

2. Set up passwordless SSH trust between the nodes
cd /root/.ssh
ssh-keygen -t rsa
[root@master01 .ssh]# cat id_rsa.pub >>authorized_keys
[root@master01 .ssh]# cat -n authorized_keys
     1 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAqHUFyjOlpQtFN4gZy6kqQKSg26NciIG4L4yN5chs3lSweWUuui+0+B7ONQr2E6wCKeQXnzoD3ohht+YbOUkAg7mDWoBCEbsdGKIGzhbkfersajCMWv4h409cLW8WsCcGQkqhZn1h2u0bsbwyfyLsYH1BuWwA3xhu4lgqFp/KSsioDuKqysah41VeLUbxkj3FveLLS8rESBecv0YSqEkSLqKvKFXZ9CV7Txo7IzGSs9pnXgfZ818/wMSOXvL+5vzZyWXwuxhLsRwF+sJedd49eV2vXDO4Fb0b8W8zp+Mzeej7zGZ0zbtfXXu8l2HjX3bITMy61u6ezN5Kfg0+5C9g8w== root@master01
[root@master01 .ssh]# scp authorized_keys  [email protected]:/root/.ssh
The authenticity of host '192.168.1.208 (192.168.1.208)' can't be established.
RSA key fingerprint is e2:ce:9f:8c:d7:8d:5d:20:08:27:e4:22:64:73:92:75.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.208' (RSA) to the list of known hosts.
[email protected]'s password:
authorized_keys                                                                                                                                           100%  395     0.4KB/s   00:00   
[root@master01 .ssh]#
Every node must be able to SSH directly to every other node, so repeat these steps on each host first.
[root@master01 .ssh]# ssh [email protected] date  -- 25 checks in total
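
A sketch of the full-mesh check once the keys are in place (hostnames are the ones defined in /etc/hosts above); run the loop on every one of the 5 nodes, giving 5 x 5 = 25 checks in total:
for h in master01 master02 datanode01 datanode02 datanode03; do
    ssh root@$h date    # should print the date without prompting for a password
done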

3. Disable SELinux and the firewall
sed -i -e 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
sed -i -e 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
chkconfig iptables off

Reboot after the change, then check the SELinux status:
1. /usr/sbin/sestatus -v      ## SELinux is still on if "SELinux status" shows enabled
SELinux status:                 enabled
2. getenforce                 ## this command also reports the current mode
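
If SELinux should be off before the reboot as well, the running system can be switched immediately (the config change above is still required for it to persist):
setenforce 0    # put SELinux into permissive mode for the current boot
getenforce      # should now print Permissive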

4. Install required packages
yum install -y sysstat vim tree wget lrzsz screen gcc python-devel gcc-c++  ntpdate libyaml libyaml-devel python-setuptools ntp  libreoffice-core  libreoffice-ure

5. Install the JDK
Remove the bundled OpenJDK first:
yum -y erase java
rpm -qa | grep jdk
rpm -e --nodeps java-*-openjdk*      # remove any remaining OpenJDK packages reported by the previous command

Install JDK 1.7 into /usr/java/jdk1.7:
tar -xzvf jdk-7u67-linux-x64.tar.gz
mv jdk1.7.0_67 /usr/java/jdk1.7      # the tarball extracts to a jdk1.7.0_67 directory

Append the following to /etc/profile:
export JAVA_HOME=/usr/java/jdk1.7
export JRE_HOME=$JAVA_HOME/jre
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib

Then apply it:
[root@master01 local]# source /etc/profile

[root@master01 local]# java -version
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)


6. All required packages:
cloudera-manager-agent-5.4.6-1.cm546.p0.8.el6.x86_64.rpm
cloudera-manager-daemons-5.4.6-1.cm546.p0.8.el6.x86_64.rpm
cloudera-manager-server-5.4.6-1.cm546.p0.8.el6.x86_64.rpm
cloudera-manager-server-db-2-5.4.6-1.cm546.p0.8.el6.x86_64.rpm
enterprise-debuginfo-5.4.6-1.cm546.p0.8.el6.x86_64.rpm
oracle-j2sdk1.7-1.7.0+update67-1.x86_64.rpm
jdk-6u31-linux-amd64.rpm         (optional; the CDH version we chose no longer supports JDK 1.6 -- it needs at least 1.7)
cloudera-manager-installer.bin (5.4.6)
CDH-5.4.10-1.cdh5.4.10.p0.16-el6.parcel
CDH-5.4.10-1.cdh5.4.10.p0.16-el6.parcel.sha    (the hash for this parcel version, pulled out of manifest.json -- see the sketch after this list)
manifest.json
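
How the .parcel.sha file can be produced, as a sketch: take the hash that manifest.json lists for this parcel (the 2b9e6298... value that also appears in the grep output later in these notes) and write it, and nothing else, into the .sha file:
grep '"hash"' manifest.json    # lists all hashes; the one belonging to the el6 parcel entry is the value we need
echo "2b9e62980495ffbceeaa7303cc1d457f3291308d" > CDH-5.4.10-1.cdh5.4.10.p0.16-el6.parcel.sha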


7. Install Cloudera Manager
Download locations:
http://archive-primary.cloudera.com/cm5/installer/5.4.6/   -- download cloudera-manager-installer.bin
http://archive-primary.cloudera.com/cdh5/parcels/    -- download the CDH parcels
http://archive-primary.cloudera.com/cm5/redhat/6/x86_64/cm/5.4.6/RPMS/x86_64/  -- download the Cloudera Manager RPMs
http://archive.cloudera.com/cdh5/parcels/5.4.10/manifest.json  -- look up the hash for the matching CDH parcel
http://archive-primary.cloudera.com/cm5/cm/5/  -- download the tar.gz package


Notes:
Cloudera Manager is 5.4.6.
The parcels are all 5.4.10.

-rwxrwxrwx  1 root root    4781732 Jun  4 18:02 cloudera-manager-agent-5.4.6-1.cm546.p0.8.el6.x86_64.rpm
-rwxrwxrwx. 1 root root  669277644 Jun  4 17:00 cloudera-manager-daemons-5.4.6-1.cm546.p0.8.el6.x86_64.rpm
-rwxrwxrwx  1 root root       8556 Jun  4 18:02 cloudera-manager-server-5.4.6-1.cm546.p0.8.el6.x86_64.rpm
-rwxrwxrwx  1 root root       9880 Jun  4 18:02 cloudera-manager-server-db-2-5.4.6-1.cm546.p0.8.el6.x86_64.rpm
-rwxrwxrwx  1 root root     980244 Jun  4 18:02 enterprise-debuginfo-5.4.6-1.cm546.p0.8.el6.x86_64.rpm
-rwxrwxrwx  1 root root  142039186 Jun  6 10:50 oracle-j2sdk1.7-1.7.0+update67-1.x86_64.rpm

The master node needs all six of these:
-rwxrwxrwx  1 root root    4781732 Jun  4 18:02 cloudera-manager-agent-5.4.6-1.cm546.p0.8.el6.x86_64.rpm
-rwxrwxrwx. 1 root root  669277644 Jun  4 17:00 cloudera-manager-daemons-5.4.6-1.cm546.p0.8.el6.x86_64.rpm
-rwxrwxrwx  1 root root       8556 Jun  4 18:02 cloudera-manager-server-5.4.6-1.cm546.p0.8.el6.x86_64.rpm
-rwxrwxrwx  1 root root       9880 Jun  4 18:02 cloudera-manager-server-db-2-5.4.6-1.cm546.p0.8.el6.x86_64.rpm
-rwxrwxrwx  1 root root     980244 Jun  4 18:02 enterprise-debuginfo-5.4.6-1.cm546.p0.8.el6.x86_64.rpm
-rwxrwxrwx  1 root root  142039186 Jun  6 10:50 oracle-j2sdk1.7-1.7.0+update67-1.x86_64.rpm


The other nodes need only these two:
-rwxrwxrwx  1 root root    4781732 Jun  4 18:02 cloudera-manager-agent-5.4.6-1.cm546.p0.8.el6.x86_64.rpm
-rwxrwxrwx. 1 root root  669277644 Jun  4 17:00 cloudera-manager-daemons-5.4.6-1.cm546.p0.8.el6.x86_64.rpm

Install command:
yum localinstall --nogpgcheck  --skip-broken *.rpm

[root@master01 cm]# cd /opt/cloudera/parcel-repo/    -- this directory is created automatically when Cloudera Manager is installed; copy the downloaded files straight into it as root
The parcel files only need to be uploaded to the master node. They must be placed under /opt/cloudera/parcel-repo and nowhere else, because CDH5 looks for local parcels there by default.
[root@master01 parcel-repo]# ll
total 1233984
-rwxrwxrwx 1 root root 1263545730 Jun  7 11:03 CDH-5.4.10-1.cdh5.4.10.p0.16-el6.parcel
-rwxrwxrwx 1 root root         41 Jun  7 11:00 CDH-5.4.10-1.cdh5.4.10.p0.16-el6.parcel.sha
-rwxrwxrwx 1 root root      43172 Jun  7 12:59 manifest.json

[root@master01 parcel-repo]# cat CDH-5.4.10-1.cdh5.4.10.p0.16-el6.parcel.sha
2b9e62980495ffbceeaa7303cc1d457f3291308d


8. Start the ntpd service
service ntpd start
chkconfig ntpd on

Add a cron entry to sync the time periodically:
*/1 * * * * root ntpdate -u ntp.api.bz
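
Because the entry contains a user field (root), it belongs in the system crontab rather than a per-user one. A minimal sketch (assumes ntp.api.bz is reachable from the cluster):
cat >> /etc/crontab <<'EOF'
*/1 * * * * root ntpdate -u ntp.api.bz
EOF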

Additional: set the time zone
vi /etc/sysconfig/clock
ZONE='Asia/Shanghai'
rm /etc/localtime
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
reboot

9. Disable transparent huge pages
echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
echo "echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag" >> /etc/rc.local

10. Adjust system limits so Hadoop does not hit resource errors
echo "* soft nofile 65536">>/etc/security/limits.conf
echo "* hard nofile 65536">>/etc/security/limits.conf
echo "root soft nofile 65536">>/etc/security/limits.conf
echo "root hard nofile 65536">>/etc/security/limits.conf
echo "* soft memlock unlimited">>/etc/security/limits.conf
echo "* hard memlock unlimited">>/etc/security/limits.conf
echo "root soft memlock unlimited">>/etc/security/limits.conf
echo "root hard memlock unlimited">>/etc/security/limits.conf
echo "* soft as unlimited">>/etc/security/limits.conf
echo "* hard as umlimited">>/etc/security/limits.conf
echo "root soft as unlimited">>/etc/security/limits.conf
echo "root hard as unlimited">>/etc/security/limits.conf
echo "vm.max_map_count = 131072">>/etc/sysctl.conf
echo "vm.swappiness=0">>/etc/sysctl.conf

sysctl -p  -- apply the kernel settings
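
A quick sanity check after logging in again, as a sketch:
ulimit -n               # should print 65536
sysctl vm.swappiness    # should print vm.swappiness = 0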

11. Partition and format the disks to be mounted (example: /dev/sdb)
parted /dev/sdb
mklabel gpt
mkpart primary 2048s 100%
mkfs.ext4 /dev/sdb1

12. Mount the disks at /data/disk01, /data/disk02, etc., and add the entries to /etc/fstab so the mounts persist across reboots
uuid=`ls /dev/disk/by-uuid/ -l |grep sdb1 |awk '{print $9}'`
echo "UUID=$uuid /data/disk01 ext4 defaults,noatime,nodiratime 0 0" >>/etc/fstab

13. Start the services:
Server
service cloudera-scm-server start

agent
service cloudera-scm-agent start  -- the agent must be running on every node
If it fails, check the logs under /var/log/cloudera-scm-agent/

If the agent fails to start:
service cloudera-scm-agent hard_stop_confirmed
yum remove -y 'cloudera-manager-*' hadoop hue-common 'bigtop-*'
rm -f /var/run/cloudera-scm-agent.pid
yum localinstall --nogpgcheck  --skip-broken *.rpm
service cloudera-scm-agent start
service cloudera-scm-agent status


14. Run the Cloudera Manager installer (offline mode here; the online installer needs a fast network connection)
chmod +x cloudera-manager-installer.bin
./cloudera-manager-installer.bin
If the installer reports errors, check the logs under /var/log/cloudera-manager-installer/

This step generates cloudera-manager.repo under /etc/yum.repos.d/ with content like this:
[cloudera-manager]
name = Cloudera Manager, Version 5.4.6
baseurl = http://archive.cloudera.com/cm5/redhat/6/x86_64/cm/5.4.6/
gpgkey = http://archive.cloudera.com/redhat/cdh/RPM-GPG-KEY-cloudera
gpgcheck = 1

Sometimes this step fails and the installer still tries to download from the Internet. In that case we may need to reinstall the six RPMs, or run:
./cloudera-manager-installer.bin  --skip_repo_package=1    -- skip generating the repo so the installer uses the packages we installed ourselves

If you need to reinstall, delete this first:
rm -f  /etc/cloudera-scm-server/db.properties
cd /usr/share/cmf/
./uninstall-manager.bin
yum localinstall --nogpgcheck --skip-broken *.rpm
./cloudera-manager-installer.bin


It sometimes also complains that db.properties cannot be found; after the .bin installer has started, you can generate one yourself and put it in place:
[root@master01 opt]# /usr/share/cmf/schema/scm_prepare_database.sh mysql  -uroot -p --scm-host localhost scm scm scm123
Enter database password:
JAVA_HOME=/usr/java/jdk1.7.0
Verifying that we can write to /etc/cloudera-scm-server  -- the file ends up in this directory
Creating SCM configuration file in /etc/cloudera-scm-server
Executing:  /usr/java/jdk1.7.0/bin/java -cp /usr/share/java/mysql-connector-java.jar:/usr/share/java/oracle-connector-java.jar:/usr/share/cmf/schema/../lib/* com.cloudera.enterprise.dbutil.DbCommandExecutor /etc/cloudera-scm-server/db.properties com.cloudera.cmf.db.
[                          main] DbCommandExecutor              INFO  Successfully connected to database.
All done, your SCM database is configured correctly!
[root@master01 opt]#

Additional note on step 14: instead of running ./cloudera-manager-installer.bin you can also download
cloudera-manager-el6-cm5.4.6_x86_64.tar.gz
and extract it to /opt on the master node, then add the cloudera-scm user on every node:
useradd --system --home=/opt/cm-5.4.6/run/cloudera-scm-server --no-createhome --shell=/bin/false --comment "cloudera scm user" cloudera-scm
The above has not been tested.


On success you will see:
Your browser should now open to http://192.168.1.207:7180/. Log in to Cloudera Manager with
the username and password set to 'admin' to continue installation.

15. Configure through the management web UI
[root@master01 bin]# ./apachectl start    -- after the services come up, wait a little while before opening the page
httpd (pid 3473) already running
[root@master01 bin]#  service cloudera-scm-server start
cloudera-scm-server is already running
[root@master01 bin]#  service cloudera-scm-server-db start
Database is already running. Please stop it first., giving up
[root@master01 bin]# netstat -nat|grep 7180
[root@master01 bin]# netstat -nat|grep 7180
[root@master01 bin]# netstat -nat|grep 7180
[root@master01 bin]# netstat -nat|grep 7180
[root@master01 bin]# netstat -nat|grep 7180   -- make sure all of these services started normally before opening the page
tcp        0      0 0.0.0.0:7180                0.0.0.0:*                   LISTEN     
tcp        0      0 192.168.1.207:7180          192.168.1.41:55213          ESTABLISHED
tcp        0      0 192.168.1.207:7180          192.168.1.41:55212          ESTABLISHED
tcp        0      0 192.168.1.207:7180          192.168.1.41:55216          ESTABLISHED
tcp        0      0 192.168.1.207:7180          192.168.1.41:55217          ESTABLISHED
tcp        0      0 192.168.1.207:7180          192.168.1.41:55214          ESTABLISHED
tcp        0      0 192.168.1.207:7180          192.168.1.41:55215          ESTABLISHED

URL: http://192.168.1.207:7180/
Username/password: admin/admin

16. Install MySQL
Install:
groupadd mysql
useradd -s /sbin/nologin -g mysql mysql
tar -zxvf mysql-5.7.10-linux-glibc2.5-x86_64.tar.gz
mv mysql-5.7.10-linux-glibc2.5-x86_64 /usr/local/mysql
cd /usr/local/mysql/support-files
cp mysql.server /etc/init.d/mysql57
chkconfig --add mysql57
chkconfig --level 2345 mysql57 on
mkdir -p /usr/local/mysql/data
chown -R mysql:mysql /usr/local/mysql/

Initialize:
cd /usr/local/mysql/
bin/mysqld --initialize --user=mysql --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data
Note down the temporary root password that is printed.

Edit the configuration:
vi /etc/my.cnf
[root@master01 parcel-repo]# cat /etc/my.cnf
[mysqld]
port=3306
datadir=/usr/local/mysql/data
socket=/tmp/mysql.sock
[mysqld_safe]
log-error=/usr/local/mysql/mysqlerr.log
pid-file=/usr/local/mysql/mysqld.pid

Start the service and reset the root password:
service mysql57 start
/usr/local/mysql/bin/mysql -uroot -p
set password=password('root');

Create the required databases and users:
create database hive charset utf8;
grant all on *.* to hive@'%' identified by 'hive123';
create database monitor charset utf8;
grant all on *.* to monitor@'%' identified by 'monitor123';
create database oozie charset utf8;
grant all on *.* to oozie@'%' identified by 'oozie123';
create database amon charset utf8;
grant all on *.* to amon@'%' identified by 'amon123';
create database hue charset utf8;
grant all on *.* to hue@'%' identified by 'hue123';
flush privileges;

17. Copy the JDBC jar Hive needs to connect to MySQL (Hive uses MySQL as its metastore database here)
unzip mysql-connector-java-5.1.30.zip
cp mysql-connector-java-5.1.30-bin.jar  /usr/java/mysql-connector-java.jar  -- the file must have exactly this name
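
scm_prepare_database.sh and the agent's config.ini reference /usr/share/java/mysql-connector-java.jar (both paths appear later in these notes), so it is worth putting a copy there as well; a sketch, assuming the jar should be available on every node:
for h in master01 master02 datanode01 datanode02 datanode03; do
    scp mysql-connector-java-5.1.30-bin.jar root@$h:/usr/share/java/mysql-connector-java.jar
done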

18. Copy the MySQL driver jar into /opt/cloudera/parcels/CDH-5.0.0-1.cdh5.0.0.p0.47/lib/hive/lib/ (use the path matching the parcel actually installed, here CDH-5.4.10-1.cdh5.4.10.p0.16)

19. Start the installation from the web page and follow the prompts.

Role-abbreviation reference:
HBase
M --Master
HBTS --HBase Thrift Server
G  --Gateway
HBRS --HBase REST Server
RS --RegionServer

HDFS
FC --Failover Controller
SNN  --SecondaryNameNode
NFSC --NFS Gateway
HFS --HttpFS
NN --NameNode
G  --Gateway
JN --JournalNode
DN --DataNode

Hive
HMS --Hive Metastore Server
WHC --WebHCat Server
HS2 --HiveServer2
G --Gateway

Hue
HS --Hue Server
KTR --Kerberos Ticket Renewer

Oozie
OS --Oozie Server

YARN
G --Gateway
NM --NodeManager

ZooKeeper
S --Server


Software versions in the test environment
Cloudera manager 5.4.6
CDH 5.4.10
hadoop 2.6.0
hive  1.1.0
hbase 1.0.0
zookeeper  3.4.5
sqoop 1.4.5
jdk 1.7.0_67


Basic steps in the web UI
1. Choose the free edition
2. Select all hosts
192.168.1.207,192.168.1.208,192.168.1.209,192.168.1.213,192.168.1.214
Search for them and select all

3. Choose to install the cluster with parcels
Check that the CDH version shown is the one we downloaded, CDH-5.4.10-1.cdh5.4.10.p0.16

Java encryption can be configured here (we keep the default):
install oracle java se development kit

For SSH, enter the root password; set the number of simultaneous installations to 1.

Note:
Specify the SSH login method for the hosts.
One option is the root user; the password must be the same on every host. For a first installation, root is recommended.
A non-root user also works, but it must have passwordless sudo.
This can be set up as follows.
Grant the aboutyun user passwordless sudo:
chmod u+w /etc/sudoers
add this line to /etc/sudoers:  aboutyun ALL=(root)NOPASSWD:ALL
chmod u-w /etc/sudoers
Test: sudo ifconfig

4. If the local parcel is read correctly, the page shows "Downloaded 100%"
Then distribute and activate the parcel.

5. Host inspection: make sure every host passes validation
It is fine if the JDK check shows "not applicable".

6. Choose the combination of services to install
Core with HBase

7. Role assignment
Spread the roles evenly across the hosts.

8. Database setup
For the hive, monitor and other databases choose MySQL, using the database names, usernames and passwords created earlier.
Only continue once every connection test is successful.

9. Review changes
Adjust the directories; we usually prefix the default paths with /hadoop so everything is managed in one place.

10. Start the installation and configuration
Once every step completes successfully, the cluster is basically ready.



Useful URLs
Cloudera Manager console: http://192.168.1.207:7180   admin/admin
Hue: http://192.168.1.207:8888/accounts/login/?next=/    hue/hue123
Job management: http://192.168.1.207:8088/cluster
Oozie console: http://192.168.1.207:11000/oozie/

Adding roles:
Cluster -> select the service -> Instances -> Add Role Instances -> select hosts -> run -> start




Additional: Oozie web console setup
Right after CDH is configured, the Oozie page reports:
Oozie web console is disabled.
To enable Oozie web console install the Ext JS library.
Refer to Oozie Quick Start documentation for details.

Fix it by installing the Ext JS library:
http://archive.cloudera.com/gplextras/misc/
Download ext-2.2.zip from the URL above, then:
# mv ext-2.2.zip /var/lib/oozie/
# cd /var/lib/oozie
# unzip ext-2.2.zip
# chown -R oozie:oozie ext-2.2

Refresh the page: http://192.168.1.207:11000/oozie/
It works now.


Additional: a simple job test:
[root@master01 hadoop-mapreduce]#  sudo -u hdfs hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 10 100
Number of Maps  = 10
Samples per Map = 100
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
16/06/08 15:46:19 INFO client.RMProxy: Connecting to ResourceManager at master01/192.168.1.207:8032
16/06/08 15:46:20 INFO input.FileInputFormat: Total input paths to process : 10
16/06/08 15:46:20 INFO mapreduce.JobSubmitter: number of splits:10
16/06/08 15:46:20 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1465355079490_0001
16/06/08 15:46:20 INFO impl.YarnClientImpl: Submitted application application_1465355079490_0001
16/06/08 15:46:20 INFO mapreduce.Job: The url to track the job: http://master01:8088/proxy/application_1465355079490_0001/
16/06/08 15:46:20 INFO mapreduce.Job: Running job: job_1465355079490_0001
16/06/08 15:46:25 INFO mapreduce.Job: Job job_1465355079490_0001 running in uber mode : false
16/06/08 15:46:25 INFO mapreduce.Job:  map 0% reduce 0%
16/06/08 15:46:30 INFO mapreduce.Job:  map 10% reduce 0%
16/06/08 15:46:31 INFO mapreduce.Job:  map 40% reduce 0%
16/06/08 15:46:34 INFO mapreduce.Job:  map 60% reduce 0%
16/06/08 15:46:35 INFO mapreduce.Job:  map 80% reduce 0%
16/06/08 15:46:37 INFO mapreduce.Job:  map 100% reduce 0%
16/06/08 15:46:40 INFO mapreduce.Job:  map 100% reduce 100%
16/06/08 15:46:40 INFO mapreduce.Job: Job job_1465355079490_0001 completed successfully
16/06/08 15:46:40 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=99
FILE: Number of bytes written=1258850
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2620
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=22270
Total time spent by all reduces in occupied slots (ms)=2090
Total time spent by all map tasks (ms)=22270
Total time spent by all reduce tasks (ms)=2090
Total vcore-seconds taken by all map tasks=22270
Total vcore-seconds taken by all reduce tasks=2090
Total megabyte-seconds taken by all map tasks=22804480
Total megabyte-seconds taken by all reduce tasks=2140160
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=340
Input split bytes=1440
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=340
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=290
CPU time spent (ms)=4000
Physical memory (bytes) snapshot=4625891328
Virtual memory (bytes) snapshot=16887525376
Total committed heap usage (bytes)=4443340800
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 20.497 seconds
Estimated value of Pi is 3.14800000000000000000

The job completed successfully.
At http://192.168.1.207:8088/cluster
you can see every job that ran successfully.




-------------------------------- Other notes ----------------------------
1) Location of the Apache web directory used to serve the packages
[root@master01 cloudera_manager]# pwd
/usr/local/apache/htdocs/cloudera_manager
[root@master01 cloudera_manager]# cd /opt/cm
[root@master01 cm]# ll
total 663364
-rw-r--r--  1 root root   4781732 Jun  4 18:02 cloudera-manager-agent-5.4.6-1.cm546.p0.8.el6.x86_64.rpm
-rw-r--r--. 1 root root 669277644 Jun  4 17:00 cloudera-manager-daemons-5.4.6-1.cm546.p0.8.el6.x86_64.rpm
-rw-r--r--  1 root root      8556 Jun  4 18:02 cloudera-manager-server-5.4.6-1.cm546.p0.8.el6.x86_64.rpm
-rw-r--r--  1 root root      9880 Jun  4 18:02 cloudera-manager-server-db-2-5.4.6-1.cm546.p0.8.el6.x86_64.rpm
-rw-r--r--  1 root root    980244 Jun  4 18:02 enterprise-debuginfo-5.4.6-1.cm546.p0.8.el6.x86_64.rpm
-rw-r--r--  1 root root   2868261 Jun  6 09:21 jdk-6u31-linux-amd64.rpm
-rw-r--r--  1 root root   1334340 Jun  4 18:02 oracle-j2sdk1.7-1.7.0+update67-1.x86_64.rpm
drwxr-xr-x  2 root root      4096 Jun  6 09:40 repodata
[root@master01 cm]# ln -s  /opt/cm  /usr/local/apache/htdocs/cloudera_manager

Earlier I tried creating the directory myself and letting the installer download from it:
[root@master01 htdocs]# rmdir cloudera_manager/
[root@master01 htdocs]# ln -s cloudera_manager/ /usr/local/apache/htdocs/
[root@master01 htdocs]#

/usr/local/apache/bin/apachectl start
http://192.168.1.207/cm即可访问

ln -s  /opt/cm  /usr/local/apache/htdocs/cm  -- there was a problem with the packages
[root@master01 cloudera-manager-installer]# tail -30f 1.install-oracle-j2sdk1.7.log
Loaded plugins: fastestmirror, refresh-packagekit, security
Error: File contains no section headers.
file: file://///etc/yum.repos.d/cloudera-manager.repo, line: 1
'\xe3\x80\x90cloudera-manager\xe3\x80\x91\n'
^C

2) Missing dependencies:
Warning: RPMDB altered outside of yum.
** Found 3 pre-existing rpmdb problem(s), 'yum check' output follows:
1:libreoffice-core-4.0.4.2-9.el6.x86_64 has missing requires of libjawt.so()(64bit)
1:libreoffice-core-4.0.4.2-9.el6.x86_64 has missing requires of libjawt.so(SUNWprivate_1.1)(64bit)
1:libreoffice-ure-4.0.4.2-9.el6.x86_64 has missing requires of jre >= ('0', '1.5.0', None)
yum install -y libreoffice-core libreoffice-ure
yum localinstall --nogpgcheck  --skip-broken *.rpm

3) About the parcel .sha file:
/opt/cloudera/parcel-repo
[root@master01 cloudera]# cd parcel-repo/
[root@master01 parcel-repo]# ll
total 9192
-rw-r----- 1 cloudera-scm cloudera-scm 9363456 Jun  6 11:40 CDH-5.4.10-1.cdh5.4.10.p0.16-el6.parcel.part
-rw-r--r-- 1 root         root              41 Jun  6 11:46 CDH-5.4.10-1.cdh5.4.10.p0.16-el6.parcel.sha1
-rw-r--r-- 1 root         root           43173 Jun  6 11:37 manifest.json
[root@master01 parcel-repo]# cat CDH-5.4.10-1.cdh5.4.10.p0.16-el6.parcel.sha1
2b9e62980495ffbceeaa7303cc1d457f3291308d
[root@master01 parcel-repo]# cat manifest.json |grep "hash"
            "hash": "706f7333519520957a40e95c4b9fe789e332d147"
            "hash": "69533466dcd974137a81d3c70605a6715a2c2dc1"
            "hash": "72292dc7e352b91861ac0326ba6c2d6e73fa3f4e"
            "hash": "75837fadc1b7771febb6ba66372374903f4df722"
            "hash": "6723b768d8bf47621fe8c186e3e872ec9817e093"
            "hash": "2b9e62980495ffbceeaa7303cc1d457f3291308d"

[root@master01 opt]# ps -ef|grep cloudera
root      2075  2073  0 Jun05 ?        00:11:03 /usr/lib64/cmf/agent/build/env/bin/python /usr/lib64/cmf/agent/src/cmf/agent.py --package_dir /usr/lib64/cmf/service --agent_dir /var/run/cloudera-scm-agent --lib_dir /var/lib/cloudera-scm-agent --logfile /var/log/cloudera-scm-agent/cloudera-scm-agent.log
root      2139  2110  0 Jun05 ?        00:00:00 /usr/lib64/cmf/agent/build/env/bin/python /usr/lib64/cmf/agent/src/cmf/supervisor_listener.py -l /var/log/cloudera-scm-agent/cmf_listener.log /var/run/cloudera-scm-agent/events
root      4113  9397  0 13:05 pts/1    00:00:00 grep cloudera
496      12076     1  0 11:15 ?        00:00:00 /usr/bin/postgres -D /var/lib/cloudera-scm-server-db/data
root     12113     1  0 11:15 pts/0    00:00:00 su cloudera-scm -s /bin/bash -c nohup /usr/sbin/cmf-server
496      12115 12113  2 11:15 ?        00:02:18 /usr/java/jdk1.7.0_67-cloudera/bin/java -cp .:lib/*:/usr/share/java/mysql-connector-java.jar:/usr/share/java/oracle-connector-java.jar -server -Dlog4j.configuration=file:/etc/cloudera-scm-server/log4j.properties -Dfile.encoding=UTF-8 -Dcmf.root.logger=INFO,LOGFILE -Dcmf.log.dir=/var/log/cloudera-scm-server -Dcmf.log.file=cloudera-scm-server.log -Dcmf.jetty.threshhold=WARN -Dcmf.schema.dir=/usr/share/cmf/schema -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Dpython.home=/usr/share/cmf/python -XX:+UseConcMarkSweepGC -XX:-CMSConcurrentMTEnabled -XX:+UseParNewGC -XX:+HeapDumpOnOutOfMemoryError -Xmx2G -XX:MaxPermSize=256m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp -XX:OnOutOfMemoryError=kill -9 %p com.cloudera.server.cmf.Main

4) Memory setting too small
  HDFS: Java heap size of NameNode (bytes)
Java Heap Size of Namenode in Bytes is recommended to be at least 1GB for every million HDFS blocks.
Suggested minimum value: 1073741824
Change it to 1 GB in the web UI.

5) Stopping the scm-agent service
service cloudera-scm-agent hard_stop_confirmed
Decommissioning requires the ResourceManager roles of service YARN (MR2 Included) to be running.
 
6) Default location of the mysql-connector-java jar
[root@master01 cloudera-scm-agent]# vi config.ini
[root@master01 cloudera-scm-agent]# pwd
/opt/cm-5.4.6/etc/cloudera-scm-agent
[root@master01 cloudera-scm-agent]#
[JDBC]
#cloudera_mysql_connector_jar=/usr/share/java/mysql-connector-java.jar
#cloudera_oracle_connector_jar=/usr/share/java/oracle-connector-java.jar



====================
1: Install the RPM packages
Copy the required packages into /home/hadoop/app
yum localinstall --nogpgcheck  --skip-broken *.rpm    (install the RPMs)


2: Run ./cloudera-manager-installer.bin; if problems come up, fix them and continue
Agent nodes (slave2~5)

1: Distribute the following packages to /opt/hadoop/app on the agent nodes
cloudera-manager-agent-5.6.0-1.cm560.p0.54.el7.x86_64.rpm
cloudera-manager-daemons-5.6.0-1.cm560.p0.54.el7.x86_64.rpm

2: Install the agent
yum localinstall --nogpgcheck  --skip-broken *.rpm    (install the RPMs)
rm -f  /etc/cloudera-scm-server/db.properties

3: Start the services
Server
service cloudera-scm-server start
agent
service cloudera-scm-agent start
[root@master01 opt]# chmod +x cloudera-manager-installer-5.4.10.bin

4. Logs written during installation:
[root@master01 log]# cd /var/log/cloudera-manager-installer/
[root@master01 cloudera-manager-installer]# ll
total 20
-rw-r--r-- 1 root root    0 Jun  7 11:57 0.check-selinux.log
-rw-r--r-- 1 root root  229 Jun  7 11:57 1.install-oracle-j2sdk1.7.log
-rw-r--r-- 1 root root  243 Jun  7 11:57 2.install-cloudera-manager-server.log
-rw-r--r-- 1 root root  248 Jun  7 11:57 3.install-cloudera-manager-server-db-2.log
-rw-r--r-- 1 root root 1975 Jun  7 11:58 4.start-embedded-db.log
-rw-r--r-- 1 root root   59 Jun  7 11:58 5.start-scm-server.log
[root@master01 cloudera-manager-installer]#

5. Skipping repo creation
[root@master01 opt]# ./cloudera-manager-installer.bin  --skip_repo_package=1
[root@master01 opt]# cat /etc/yum.repos.d/cloudera-manager.repo   -- tried pointing it at my own repo, but it did not seem to work
[cloudera-manager]
name=Cloudera Manager
#gpgcheck=1
baseurl=http://192.168.1.207/cm
enabled=1
gpgcheck=0
[root@master01 opt]#


You can also simply run ./cloudera-manager-installer.bin
[root@master01 yum.repos.d]# ll
total 24
-rw-r--r--. 1 root root 1926 Nov 27  2013 CentOS-Base.repo.bk
-rw-r--r--. 1 root root  638 Nov 27  2013 CentOS-Debuginfo.repo.bk
-rw-r--r--. 1 root root  630 Nov 27  2013 CentOS-Media.repo.bk
-rw-r--r--. 1 root root 3664 Nov 27  2013 CentOS-Vault.repo
-rw-r--r--  1 root root  106 Jun  7 11:56 cloudera-manager.repo
-rw-r--r--  1 root root  195 Sep 18  2014 cloudera-manager.repo.rpmnew
[root@master01 yum.repos.d]# cat cloudera-manager.repo.rpmnew   --- this baseurl is wrong and downloads the latest 5.7.1 directly, but the file is auto-generated and sometimes it does point to the right place
[cloudera-manager]
name=Cloudera Manager
baseurl=http://archive.cloudera.com/cm5/redhat/6/x86_64/cm/5/
gpgkey = http://archive.cloudera.com/cm5/redhat/6/x86_64/cm/RPM-GPG-KEY-cloudera
gpgcheck=1
[root@master01 yum.repos.d]#
[root@master01 yum.repos.d]#

6. db.properties not found
[root@master01 opt]# /usr/share/cmf/schema/scm_prepare_database.sh mysql  -uroot -p --scm-host localhost scm scm scm123  -- generate db.properties yourself
Enter database password:
JAVA_HOME=/usr/java/jdk1.7.0
Verifying that we can write to /etc/cloudera-scm-server
Creating SCM configuration file in /etc/cloudera-scm-server
Executing:  /usr/java/jdk1.7.0/bin/java -cp /usr/share/java/mysql-connector-java.jar:/usr/share/java/oracle-connector-java.jar:/usr/share/cmf/schema/../lib/* com.cloudera.enterprise.dbutil.DbCommandExecutor /etc/cloudera-scm-server/db.properties com.cloudera.cmf.db.
[                          main] DbCommandExecutor              INFO  Successfully connected to database.
All done, your SCM database is configured correctly!
[root@master01 opt]#

Right after the installer starts, quickly copy the file into place:  mv db.properties  /etc/cloudera-scm-server/

7. Uninstalling Cloudera Manager
When uninstalling, remember to delete the directories below.
In particular, remove /var/lib/cloudera-scm-server-db/, otherwise the next installation may fail.
rm -fr /var/lib/cloudera-scm-server-db/
rm -f /etc/cloudera-scm-server/db.properties
rm -fr /etc/cloudera-scm-server/*
rm -fr /var/run/cloudera-scm-server.pid   -- if this is not removed, the web page will likely not open and the service will not start
yum remove -y 'cloudera-manager-*' hadoop hue-common 'bigtop-*'

yum localinstall --nogpgcheck  --skip-broken *.rpm

8. Confirm port 7180 is listening
[root@master01 bin]# ./apachectl start    -- after the services come up, wait a little while
httpd (pid 3473) already running
[root@master01 bin]#  service cloudera-scm-server start
cloudera-scm-server is already running
[root@master01 bin]#  service cloudera-scm-server-db start
Database is already running. Please stop it first., giving up
[root@master01 bin]# netstat -nat|grep 7180
[root@master01 bin]# netstat -nat|grep 7180
[root@master01 bin]# netstat -nat|grep 7180
[root@master01 bin]# netstat -nat|grep 7180
[root@master01 bin]# netstat -nat|grep 7180
tcp        0      0 0.0.0.0:7180                0.0.0.0:*                   LISTEN     
tcp        0      0 192.168.1.207:7180          192.168.1.41:55213          ESTABLISHED
tcp        0      0 192.168.1.207:7180          192.168.1.41:55212          ESTABLISHED
tcp        0      0 192.168.1.207:7180          192.168.1.41:55216          ESTABLISHED
tcp        0      0 192.168.1.207:7180          192.168.1.41:55217          ESTABLISHED
tcp        0      0 192.168.1.207:7180          192.168.1.41:55214          ESTABLISHED
tcp        0      0 192.168.1.207:7180          192.168.1.41:55215          ESTABLISHED


9. The parcel-repo directory
[root@master01 parcel-repo]# ll
total 1233984
-rwxrwxrwx 1 root root 1263545730 Jun  7 11:03 CDH-5.4.10-1.cdh5.4.10.p0.16-el6.parcel
-rwxrwxrwx 1 root root         41 Jun  7 11:00 CDH-5.4.10-1.cdh5.4.10.p0.16-el6.parcel.sha  --- after changing the owner it could be read, though I am not sure that was really the cause; it was all rather confusing
-rwxrwxrwx 1 root root      43172 Jun  7 12:59 manifest.json


10. scm-server fails to start
cloudera-scm-server dead but pid file exists
1. Check whether memory usage is too high.
2. Check whether the database can be reached.
If those are fine, simply delete the pid file:
rm -f /var/run/cloudera-scm-server.pid

11. Installing the JDK with apt-get
apt-get -o Dpkg::Options::=--force-confdef -o Dpkg::Options::=--force-confold -y install oracle-j2sdk1.7
According to posts online, the default JDK install path is /usr/lib/jvm/java-7-oracle-cloudera

12. Remote parcel repository URLs
http://archive.cloudera.com/cdh5/parcels/{latest_supported}/
http://archive.cloudera.com/cdh4/parcels/latest/
http://archive.cloudera.com/impala/parcels/latest/
http://archive.cloudera.com/search/parcels/latest/
http://archive.cloudera.com/accumulo-c5/parcels/latest/
http://archive.cloudera.com/spark/parcels/latest/
http://archive.cloudera.com/accumulo/parcels/1.4/
http://archive.cloudera.com/sqoop-connectors/parcels/latest/
http://archive.cloudera.com/navigator-keytrustee5/parcels/latest/
http://archive.cloudera.com/kafka/parcels/latest/


13. Log covering the whole installation process:
[root@master01 cloudera-scm-server]# pwd
/var/log/cloudera-scm-server
[root@master01 cloudera-scm-server]# tail -100f cloudera-scm-server.log

14. Repeated installs make the HDFS format fail
/hadoop/dfs/nn is not empty, so the HDFS format step is skipped; it needs to be cleaned up.
Running in non-interactive mode, and data appears to exist in Storage Directory /hadoop/dfs/nn. Not formatting.
16/06/07 14:59:24 INFO util.ExitUtil: Exiting with status 1
16/06/07 14:59:24 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master02/192.168.1.208
************************************************************/
Empty /hadoop/dfs/nn.
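
A sketch of clearing it (on the NameNode host, with the HDFS roles stopped; double-check the path before running):
rm -rf /hadoop/dfs/nn/*    # remove the leftover metadata so the format can run again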

15. Initializing MySQL
[root@master02 mysql]# bin/mysqld --initialize --user=mysql --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data
2016-06-07T07:14:54.589198Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2016-06-07T07:14:55.500652Z 0 [Warning] InnoDB: New log files created, LSN=45790
2016-06-07T07:14:55.747566Z 0 [Warning] InnoDB: Creating foreign key constraint system tables.
2016-06-07T07:14:55.891085Z 0 [Warning] No existing UUID has been found, so we assume that this is the first time that this server has been started. Generating a new UUID: 8964792a-2c7f-11e6-a20f-f0795963c117.
2016-06-07T07:14:55.918133Z 0 [Warning] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
2016-06-07T07:14:55.918734Z 1 [Note] A temporary password is generated for root@localhost: ?KRu6g3&jkZY

16. Hive table creation fails: insufficient privileges
[root@datanode03 local]# find / -name "hive-site.xml"
/var/run/cloudera-scm-agent/process/37-hive-metastore-create-tables/hive-site.xml
/var/run/cloudera-scm-agent/process/35-hive-metastore-create-tables/hive-site.xml
/var/run/cloudera-scm-agent/process/33-hive-metastore-create-tables/hive-site.xml
/var/run/cloudera-scm-agent/process/31-hive-metastore-create-tables/hive-site.xml
/var/run/cloudera-scm-agent/process/ccdeploy_hive-conf_etchiveconf.cloudera.hive_7463312241894822279/hive-conf/hive-site.xml
/etc/hive/conf.cloudera.hive/hive-site.xml
/opt/cloudera/parcels/CDH-5.4.10-1.cdh5.4.10.p0.16/etc/hive/conf.dist/hive-site.xml
[root@datanode03 local]# cd /var/run/cloudera-scm-agent
[root@datanode03 cloudera-scm-agent]# ll
total 8
drwxr-x--x.  6 root root 4096 Jun  7 11:06 cgroups
prw-------.  1 root root    0 Jun  7 15:24 events
drwxr-x--x. 15 root root  300 Jun  7 15:24 process
drwxr-x--x.  3 root root 4096 Jun  7 13:02 supervisor
[root@datanode03 cloudera-scm-agent]# cd process/
[root@datanode03 process]# ll
total 0
drwxr-x--x. 3 hdfs  hdfs   340 Jun  7 15:04 11-hdfs-DATANODE
drwxr-x--x. 3 hbase hbase  360 Jun  7 15:04 20-hbase-MASTER
drwxr-x--x. 3 hbase hbase  360 Jun  7 15:04 23-hbase-REGIONSERVER
drwxr-x--x. 3 yarn  hadoop 420 Jun  7 15:05 28-yarn-NODEMANAGER
drwxr-x--x. 4 hive  hive   280 Jun  7 15:17 31-hive-metastore-create-tables
drwxr-x--x. 4 hive  hive   280 Jun  7 15:23 33-hive-metastore-create-tables
drwxr-x--x. 4 hive  hive   280 Jun  7 15:24 35-hive-metastore-create-tables
drwxr-x--x. 4 hive  hive   260 Jun  7 15:24 37-hive-metastore-create-tables
drwxr-x--x. 3 root  root   100 Jun  7 14:32 3-cluster-host-inspector
drwxr-xr-x. 4 root  root   100 Jun  7 14:58 ccdeploy_hadoop-conf_etchadoopconf.cloudera.hdfs_9218384747398952504
drwxr-xr-x. 4 root  root   100 Jun  7 14:58 ccdeploy_hadoop-conf_etchadoopconf.cloudera.yarn_6771263608544652443
drwxr-xr-x. 4 root  root   100 Jun  7 14:58 ccdeploy_hbase-conf_etchbaseconf.cloudera.hbase_6069235750806499928
drwxr-xr-x. 4 root  root   100 Jun  7 14:58 ccdeploy_hive-conf_etchiveconf.cloudera.hive_7463312241894822279

mysql> create user 'hive'@'localhost' identified by '';
Query OK, 0 rows affected (0.00 sec)

mysql> grant all on *.* to 'hive'@'localhost';
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql>

create user 'hive'@'localhost' identified by '';
grant all on *.* to 'hive'@'localhost' identified by '';
flush privileges;

Tue Jun  7 15:56:52 CST 2016
JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
using /usr/java/jdk1.7.0_67-cloudera as JAVA_HOME
using 5 as CDH_VERSION
using /opt/cloudera/parcels/CDH-5.4.10-1.cdh5.4.10.p0.16/lib/hive as HIVE_HOME
using /var/run/cloudera-scm-agent/process/47-hive-metastore-create-tables as HIVE_CONF_DIR
using /opt/cloudera/parcels/CDH-5.4.10-1.cdh5.4.10.p0.16/lib/hadoop as HADOOP_HOME
using /var/run/cloudera-scm-agent/process/47-hive-metastore-create-tables/yarn-conf as HADOOP_CONF_DIR
Metastore connection URL: jdbc:mysql://localhost:3306/metastore?useUnicode=true&characterEncoding=UTF-8
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: hive

++ exec /opt/cloudera/parcels/CDH-5.4.10-1.cdh5.4.10.p0.16/lib/hadoop/bin/hadoop jar /opt/cloudera/parcels/CDH-5.4.10-1.cdh5.4.10.p0.16/lib/hive/lib/hive-cli-1.1.0-cdh5.4.10.jar org.apache.hive.beeline.HiveSchemaTool -verbose -dbType mysql -initSchema
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
at org.apache.hive.beeline.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:77)
at org.apache.hive.beeline.HiveSchemaTool.getConnectionToMetastore(HiveSchemaTool.java:113)
at org.apache.hive.beeline.HiveSchemaTool.testConnectionToMetastore(HiveSchemaTool.java:159)
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:257)
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:243)
at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:473)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.sql.SQLException: Access denied for user 'hive'@'localhost' (using password: YES)  -- even after granting privileges this kept failing; in the end I reinstalled
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1084)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4232)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4164)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:926)
at com.mysql.jdbc.MysqlIO.proceedHandshakeWithPluggableAuthentication(MysqlIO.java:1748)
at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1288)
at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2506)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2539)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2321)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:832)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:46)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:409)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:417)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:344)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:215)
at org.apache.hive.beeline.HiveSchemaHelper.getConnectionToMetastore(HiveSchemaHelper.java:73)
... 11 more
*** schemaTool failed ***

[root@master01 cloudera-manager-installer]# tail -30 5.start-scm-server.log
Starting cloudera-scm-server:                              [FAILED]

17. Agent installation errors:
>>[08/Jun/2016 09:16:00 +0000] 11714 MainThread agent INFO Re-using pre-existing directory: /var/run/cloudera-scm-agent/process
>>[08/Jun/2016 09:16:00 +0000] 11714 MainThread agent INFO Re-using pre-existing directory: /var/run/cloudera-scm-agent/supervisor
>>[08/Jun/2016 09:16:00 +0000] 11714 MainThread agent INFO Re-using pre-existing directory: /var/run/cloudera-scm-agent/supervisor/include
>>[08/Jun/2016 09:16:00 +0000] 11714 MainThread agent ERROR Failed to connect to previous supervisor.

The installer reports:
Installation failed. Failed to receive heartbeat from the agent.
Ensure that the host's hostname is configured properly.
Ensure that port 7182 is accessible on the Cloudera Manager Server (check firewall rules).
Ensure that ports 9000 and 9001 are free on the host being added.
Check the agent logs in /var/log/cloudera-scm-agent/ on the host being added (some of the logs can be found in the installation details).

The install log also shows the candidate paths for the default JAVA_HOME:
++ JAVA7_HOME_CANDIDATES=('/usr/java/jdk1.7' '/usr/java/jre1.7' '/usr/lib/jvm/j2sdk1.7-oracle' '/usr/lib/jvm/j2sdk1.7-oracle/jre' '/usr/lib/jvm/java-7-oracle')

18. Error:
java.io.IOException: All specified directories are failed to load.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:473)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1322)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1292)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:320)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:225)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:862)
at java.lang.Thread.run(Thread.java:745)
2015-06-25 13:45:08,290 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool (Datanode Uuid unassigned) service to scm.data.dingkai.com/103.231.66.62:8022
2015-06-25 13:45:08,391 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool (Datanode Uuid unassigned)
2015-06-25 13:45:10,392 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2015-06-25 13:45:10,393 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2015-06-25 13:45:10,395 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at hadoop1.data.com/192.168.23.45
************************************************************/

Analysis:
The log above shows that initialization of /data0/dfs/nn failed. The reason is that /data0/dfs/nn is HDFS's dfs.namenode.name.dir, but the secondary namenode had already started, which means /data0/dfs/nn was taken as the secondary namenode's checkpoint directory (fs.checkpoint.dir / dfs.namenode.checkpoint.dir).
An HDFS name directory cannot be shared with a checkpoint directory, so node 1 failed to start.

Fix:
Change the secondary namenode's checkpoint directory and restart the HDFS cluster.  -- that is the fix described online; our actual problem was the one below

The real cause in our case was a clusterID mismatch between the master and the datanodes.
[root@master01 current]# cat VERSION  -- master node
#Wed Jun 08 10:15:25 CST 2016
namespaceID=988925528
clusterID=cluster36   -- only this value needs to match
cTime=0
storageType=NAME_NODE
blockpoolID=BP-1880485561-192.168.1.207-1465352125888
layoutVersion=-60
[root@master01 current]# pwd
/hadoop/dfs/nn/current
[root@master01 current]#

[root@datanode01 java]# cd /hadoop/dfs/dn/current/  --- datanode
[root@datanode01 current]# cat VERSION
#Wed Jun 08 11:01:15 CST 2016
storageID=DS-2f2d6cb1-2f78-4005-bca8-8086885a4aca
clusterID=cluster18   -- change this to cluster36, the same as on the master
cTime=0
datanodeUuid=1c1ed6c9-283b-4642-9435-c258af43ec95
storageType=DATA_NODE
layoutVersion=-56
[root@datanode01 current]#
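
A sketch of the fix on the datanode (stop the DataNode role first; the clusterID value is the one from the master's VERSION file above):
sed -i 's/^clusterID=.*/clusterID=cluster36/' /hadoop/dfs/dn/current/VERSION    # make the IDs match
# then start the DataNode role again from the Cloudera Manager page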

19. Error when adding ZooKeeper instances:
Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: HA is not enabled for this namenode.
at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.create(DFSZKFailoverController.java:121)
at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:177)

When one machine is already running ZooKeeper and more servers are added, the following message appears:
Starting these new ZooKeeper Servers may cause the existing ZooKeeper Datastore to be lost. Try again after restarting any
existing ZooKeeper Servers with outdated configurations. If you do not want to preserve the existing Datastore, you can start
each ZooKeeper Server from its respective Status page.

Stop the running server, then start all three together.





These notes were written from memory after the installation was complete, so some details may be off; apologies in advance.

Still learning -- feedback and discussion are welcome.
qq:906179271
tina


