Installing the CDH Hadoop and Spark Suite

1. Extract the MySQL JDBC driver and copy it into the cm directory

# Extract under /opt so the cp path below matches
tar -zxvf mysql-connector-java-5.1.43.tar.gz

cp /opt/mysql-connector-java-5.1.43/mysql-connector-java-5.1.43-bin.jar /opt/cm-5.10.0/share/cmf/lib/
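If other nodes also need the driver (e.g. for a Hive metastore), a small loop saves retyping the scp command. A sketch only — the hostnames spark002/spark003 are examples, and the DRY_RUN switch (a hypothetical convenience, default on) prints the commands instead of running them:

```shell
# Hypothetical distribution loop; hostnames are examples.
JAR=/opt/mysql-connector-java-5.1.43/mysql-connector-java-5.1.43-bin.jar
for host in spark002 spark003; do
  cmd="scp $JAR root@$host:/opt/cm-5.10.0/share/cmf/lib/"
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "$cmd"    # dry run: print the command only
  else
    $cmd           # real copy
  fi
done
```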
  

2. Initialize the CM5 database

/opt/cm-5.10.0/share/cmf/schema/scm_prepare_database.sh mysql cm -hlocalhost -uroot -p123456 --scm-host localhost scm scm scm

3. Configure server_host and copy to the other nodes

vim /opt/cm-5.10.0/etc/cloudera-scm-agent/config.ini

Note: server_host must be set to the CM master's hostname, e.g. server_host=spark001

scp -r /opt/cm-5.10.0 root@spark002:/opt/
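The server_host edit can also be done non-interactively with sed instead of vim. A sketch against a stand-in copy in /tmp (on a real node, point it at /opt/cm-5.10.0/etc/cloudera-scm-agent/config.ini instead):

```shell
# Stand-in for /opt/cm-5.10.0/etc/cloudera-scm-agent/config.ini
CONF=/tmp/config.ini
printf 'server_host=localhost\n' > "$CONF"    # sample of the shipped default
sed -i 's/^server_host=.*/server_host=spark001/' "$CONF"
grep '^server_host=' "$CONF"                  # prints: server_host=spark001
```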

4. Add a new user

Note: this must be done on every node

useradd --system --home=/opt/cm-5.10.0/run/cloudera-scm-server/ --no-create-home --shell=/bin/false --comment "Cloudera SCM User" cloudera-scm
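After creating the account, a quick check on each node confirms it exists. The `check_user` helper below is hypothetical (not part of the original steps); it relies on `id` exiting non-zero when the user is missing:

```shell
# Hypothetical helper: report whether an account exists on this node.
check_user() {
  if id "$1" >/dev/null 2>&1; then
    echo "$1 present"
  else
    echo "$1 missing"
  fi
}
check_user cloudera-scm    # expect "cloudera-scm present" once step 4 has run
```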

5. Move the CDH5 parcel files to /opt/cloudera/parcel-repo/

# Move the files
mv CDH-5.10.0-1.cdh5.10.0.p0.41-el6.parcel /opt/cloudera/parcel-repo/
mv CDH-5.10.0-1.cdh5.10.0.p0.41-el6.parcel.sha1 /opt/cloudera/parcel-repo/
mv manifest.json /opt/cloudera/parcel-repo/
# Rename the .sha1 file to .sha (Cloudera Manager looks for a .sha file; note the file was already moved above, so use the full path)
mv /opt/cloudera/parcel-repo/CDH-5.10.0-1.cdh5.10.0.p0.41-el6.parcel.sha1 /opt/cloudera/parcel-repo/CDH-5.10.0-1.cdh5.10.0.p0.41-el6.parcel.sha
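Cloudera Manager validates the parcel against the hash recorded in the .sha file, so it is worth pre-checking that they match before distribution. A sketch using a stand-in file in /tmp (on a real node, run the same comparison against the parcel in /opt/cloudera/parcel-repo/):

```shell
# Stand-in demo: create a file and its .sha, then verify they match.
cd /tmp
printf 'demo parcel contents' > demo.parcel
sha1sum demo.parcel | awk '{print $1}' > demo.parcel.sha
recorded=$(cat demo.parcel.sha)
actual=$(sha1sum demo.parcel | awk '{print $1}')
if [ "$recorded" = "$actual" ]; then
  echo "checksum OK"          # prints: checksum OK
else
  echo "checksum MISMATCH"
fi
```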

6. Start the master and worker nodes

1. Start the master node

# Start the Server
/opt/cm-5.10.0/etc/init.d/cloudera-scm-server start
# Start the Agent service
/opt/cm-5.10.0/etc/init.d/cloudera-scm-agent start

2. Start the other nodes

# Start the Agent service
/opt/cm-5.10.0/etc/init.d/cloudera-scm-agent start
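Once the server finishes starting (this can take a few minutes on first run), the Cloudera Manager web UI listens on port 7180. A reachability sketch — the `port_open` helper is hypothetical and relies on bash's /dev/tcp redirection:

```shell
# Hypothetical helper: succeed if a TCP connection to host:port works.
port_open() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}
if port_open localhost 7180; then
  echo "CM web UI reachable on 7180"
else
  echo "CM web UI not reachable yet"
fi
```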

7. Fix the swappiness and transparent hugepage warnings

Run the following on every node (the settings take effect immediately but do not survive a reboot):

echo 0 > /proc/sys/vm/swappiness
# RHEL/CentOS 6 paths
echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
# generic kernel paths
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
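Since the echo commands above are lost on reboot, one common way to persist them (an assumption on my part, not part of the original steps) is via /etc/sysctl.conf and /etc/rc.local. Sketched here against stand-in files in a temp directory; on a real node, append the same lines to the actual /etc files as root:

```shell
# Stand-in paths; on a real node use /etc/sysctl.conf and /etc/rc.local.
TUNEDIR=$(mktemp -d)
SYSCTL="$TUNEDIR/sysctl.conf"
RCLOCAL="$TUNEDIR/rc.local"
echo 'vm.swappiness = 0' >> "$SYSCTL"
{
  echo 'echo never > /sys/kernel/mm/transparent_hugepage/enabled'
  echo 'echo never > /sys/kernel/mm/transparent_hugepage/defrag'
} >> "$RCLOCAL"
grep swappiness "$SYSCTL"    # prints: vm.swappiness = 0
```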
