Goal:

Set up a Hadoop + HBase + ZooKeeper + Hive development environment.

Installation environment:

1、CentOS 192.168.1.101 (master)

2、CentOS 192.168.1.102 (slave1)

Development environment:

Windows + Eclipse

一、Install the Hadoop cluster

1、Configure hosts (on both nodes)

#vi /etc/hosts

192.168.1.101 master

192.168.1.102 slave1

2、Disable the firewall:

systemctl status firewalld.service #check the firewall status

systemctl stop firewalld.service #stop the firewall

systemctl disable firewalld.service #keep it from starting at boot

3、Configure passwordless SSH access

ssh-keygen -t rsa #generate a key pair (run on both master and slave1)

On slave1:

cp ~/.ssh/id_rsa.pub ~/.ssh/slave1.id_rsa.pub

scp ~/.ssh/slave1.id_rsa.pub master:~/.ssh

On master:

cd ~/.ssh

cat id_rsa.pub >> authorized_keys

cat slave1.id_rsa.pub >> authorized_keys

scp authorized_keys slave1:~/.ssh

Test: ssh master

      ssh slave1
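The distribution above boils down to concatenating both public keys into authorized_keys on master and copying that file back to slave1. A minimal local sketch of the merge step, using a temporary directory and placeholder key strings standing in for the real files under ~/.ssh:

```shell
set -e
workdir=$(mktemp -d)                 # stand-in for ~/.ssh
# placeholder public keys standing in for id_rsa.pub and slave1.id_rsa.pub
echo "ssh-rsa AAAAmaster root@master" > "$workdir/id_rsa.pub"
echo "ssh-rsa AAAAslave1 root@slave1" > "$workdir/slave1.id_rsa.pub"
cat "$workdir/id_rsa.pub" "$workdir/slave1.id_rsa.pub" >> "$workdir/authorized_keys"
chmod 600 "$workdir/authorized_keys" # sshd ignores key files with loose permissions
wc -l < "$workdir/authorized_keys"   # prints 2: both keys present
```

After authorized_keys is copied back, `ssh master` and `ssh slave1` should log in without a password prompt.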

4、Install Hadoop

tar -zxvf hadoop-2.8.0.tar.gz -C /usr/hadoop

mkdir /usr/hadoop/hadoop-2.8.0/tmp

mkdir /usr/hadoop/hadoop-2.8.0/logs

mkdir /usr/hadoop/hadoop-2.8.0/hdfs

mkdir /usr/hadoop/hadoop-2.8.0/hdfs/data

mkdir /usr/hadoop/hadoop-2.8.0/hdfs/name
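The mkdir calls above can be collapsed with mkdir -p, which also creates any missing parent directories. A side-effect-free sketch, with a temporary directory standing in for /usr/hadoop/hadoop-2.8.0:

```shell
set -e
prefix=$(mktemp -d)   # stand-in for /usr/hadoop/hadoop-2.8.0
mkdir -p "$prefix"/tmp "$prefix"/logs "$prefix"/hdfs/data "$prefix"/hdfs/name
ls "$prefix"          # tmp, logs, and hdfs now exist
```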

Edit the configuration files (under etc/hadoop):

In hadoop-env.sh add: export JAVA_HOME=<your JDK path>

In mapred-env.sh add: export JAVA_HOME=<your JDK path>

In yarn-env.sh add: export JAVA_HOME=<your JDK path>
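Appending the export to all three env scripts can be scripted. In this sketch the JDK path is an assumption (adjust to your install), and a temporary directory stands in for /usr/hadoop/hadoop-2.8.0/etc/hadoop:

```shell
set -e
confdir=$(mktemp -d)                 # stand-in for /usr/hadoop/hadoop-2.8.0/etc/hadoop
touch "$confdir"/hadoop-env.sh "$confdir"/mapred-env.sh "$confdir"/yarn-env.sh
JAVA_HOME=/usr/java/jdk1.8.0_131     # assumed JDK path; adjust to your system
for f in hadoop-env.sh mapred-env.sh yarn-env.sh; do
  echo "export JAVA_HOME=$JAVA_HOME" >> "$confdir/$f"
done
grep -c JAVA_HOME "$confdir/hadoop-env.sh"   # prints 1
```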

Edit core-site.xml (fs.default.name is the deprecated alias of fs.defaultFS, so only the latter is kept):

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
    <description>HDFS address</description>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/hadoop/hadoop-2.8.0/tmp</value>
    <description>namenode tmp file</description>
  </property>
</configuration>
Edit mapred-site.xml (mapred.job.tracker applies only to the old MRv1 JobTracker and is not needed once mapreduce.framework.name is yarn):

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>/usr/hadoop/hadoop-2.8.0/mapred/system</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/usr/hadoop/hadoop-2.8.0/mapred/local</value>
    <final>true</final>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
</configuration>
Edit yarn-site.xml:

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
  </property>
</configuration>
Edit hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/usr/hadoop/hadoop-2.8.0/hdfs/name</value>
    <description>namenode metadata directory</description>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/usr/hadoop/hadoop-2.8.0/hdfs/data</value>
    <description>datanode data directory</description>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>number of block replicas (this cluster has only one datanode)</description>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
    <description>true or false</description>
  </property>
</configuration>
Create a slaves file (in etc/hadoop) listing slave1.

Copy the installation to slave1:

scp -r /usr/hadoop slave1:/usr

Add the Hadoop bin and sbin directories to the PATH environment variable.
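One way to do this is to append HADOOP_HOME and PATH exports to /etc/profile (then re-login or `source` it). The sketch writes to a temporary file instead of /etc/profile so it is side-effect free:

```shell
set -e
profile=$(mktemp)   # stand-in for /etc/profile
cat >> "$profile" <<'EOF'
export HADOOP_HOME=/usr/hadoop/hadoop-2.8.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
EOF
grep -c HADOOP_HOME "$profile"   # prints 2: both export lines reference it
```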

Format the namenode:

hadoop namenode -format

Start Hadoop:

./start-all.sh

Check that the services started with jps:

master: ResourceManager, SecondaryNameNode, NameNode

slave1: DataNode, NodeManager
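The jps check can be scripted. check_daemons below is a hypothetical helper (not part of Hadoop) that greps a jps dump for each expected daemon name; `sample` simulates the jps output on master:

```shell
# check_daemons: grep a jps dump for each expected daemon, report the first missing one
check_daemons() {
  out=$1; shift
  for d in "$@"; do
    echo "$out" | grep -qw "$d" || { echo "missing: $d"; return 1; }
  done
  echo "all daemons up"
}
# simulated `jps` output on master (PIDs are made up)
sample="2101 NameNode
2302 SecondaryNameNode
2504 ResourceManager
2750 Jps"
check_daemons "$sample" NameNode SecondaryNameNode ResourceManager   # prints "all daemons up"
```

On slave1 the same call would pass DataNode and NodeManager as the expected names.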

ZooKeeper + HBase will be covered next time.