During several interviews and conversations I realized that my knowledge of distributed systems was close to zero. On top of that I have been studying machine learning lately, and once the data volume grows a distributed system becomes necessary, so here are the steps I recently took to set up the environment.
I will skip the OS installation. The environment is the same on both machines: CentOS 7 64-bit, two machines, the same username on both (mine is chen), and the two hostnames are SPA1 and SPA2.
Change the hostname:
sudo vim /etc/hostname
SPA1 // delete everything else in the file, such as the trailing localdomain parts
Do the same on the other machine and set its hostname to SPA2.
This step just makes all of the later operations more convenient.
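On CentOS 7 the same thing can also be done without editing the file by hand (an alternative, not required if you already edited /etc/hostname above):
sudo hostnamectl set-hostname SPA1 // run on the first machine
sudo hostnamectl set-hostname SPA2 // run on the second machine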
Next edit the hosts file:
sudo vim /etc/hosts
127.0.0.1 SPA1PC // yes, this really is SPA1PC and not SPA1: the machine's own hostname must not resolve to 127.0.0.1, otherwise the Hadoop daemons would bind to the loopback address
#127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain // this line is commented out
::1 localhost localhost.localdomain localhost6 localhost6.localdomain // leave this line alone
192.168.0.64 SPA1 // SPA1, the IP of the first machine
192.168.0.220 SPA2 // SPA2, the IP of the second machine
Make the same changes on SPA2.
Setting up the Java environment is straightforward. First download the JDK from the Oracle JDK download page.
I am using jdk 1.8.0_05 here.
Once the package is downloaded, the first step is to unpack it:
tar zxvf jdk-8u5-linux-x64.tar.gz // use your own archive name; it must match the JDK version you downloaded
sudo mkdir -p /usr/lib/jvm
sudo mv jdk1.8.0_05 /usr/lib/jvm // move the extracted directory into this folder
sudo chmod -R 755 /usr/lib/jvm/jdk1.8.0_05 // fix the permissions
Instead of editing ~/.bashrc, edit /etc/profile directly; that file holds the system-wide environment variables.
Append the following at the end of the file:
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_05 // use your own path
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar // use your own installation path
source /etc/profile // make the changes take effect immediately
java -version
javac
Check that both commands produce normal output.
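As a quick extra check that /etc/profile really took effect, print the variable (the path below is the one set above):
echo $JAVA_HOME // should print /usr/lib/jvm/jdk1.8.0_05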
First test whether the two machines can reach each other. Since the hostnames were already changed, you can ping the other machine directly by hostname:
ping SPA1 // check the output
ping SPA2 // check the output
64 bytes from SPA1 (10.126.45.56): icmp_seq=1 ttl=62 time=0.235 ms
64 bytes from SPA1 (10.126.45.56): icmp_seq=2 ttl=62 time=0.216 ms
64 bytes from SPA1 (10.126.45.56): icmp_seq=3 ttl=62 time=0.276 ms
If you see output like the above, the machines can ping each other and you can move on to SSH.
Because both machines use the same username, a plain ssh hostname is enough:
ssh SPA1 // enter the password and check that the login succeeds
ssh SPA2 // enter the password and check that the login succeeds
Once both logins work, set up passwordless SSH.
Only the master needs passwordless login to the slaves. If you want it in both directions, simply repeat the same steps the other way around.
cd ~/.ssh/ // create the directory if it does not exist
ssh-keygen -t rsa // press Enter at every prompt until it finishes
ls -al // check that the two files id_rsa and id_rsa.pub are there
Copy id_rsa.pub to SPA2 and save it there as authorized_keys:
scp id_rsa.pub chen@SPA2:~/.ssh/authorized_keys // create the ~/.ssh directory on SPA2 first if it does not exist
On SPA2, set the .ssh directory's permissions to 700 and authorized_keys to 600:
sudo chmod 700 ~/.ssh
sudo chmod 600 ~/.ssh/authorized_keys
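With the key in place, logging in from SPA1 should no longer ask for a password:
ssh SPA2 // should drop straight into a shell on SPA2
exit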
Next, unpack Hadoop 2.6.0 and create the working directories:
tar -zxvf hadoop-2.6.0.tar.gz
mkdir -p ~/opt // create ~/opt first if it does not exist yet
mv hadoop-2.6.0 ~/opt/
cd ~/opt/hadoop-2.6.0
mkdir tmp
mkdir -p dfs/data
mkdir -p dfs/name
cd etc/hadoop/
Then configure the following files:
hadoop-env.sh
yarn-env.sh
slaves
core-site.xml
hdfs-site.xml
mapred-site.xml
yarn-site.xml
In hadoop-env.sh, set JAVA_HOME:
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_05
In yarn-env.sh, set the same variable below the "# some Java parameters" line:
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_05
In slaves, list the hostnames of the slave nodes, one per line:
SPA1
SPA2
Wherever a path appears, remember to use the path on your own machine!
core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://SPA1:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/home/chen/opt/hadoop-2.6.0/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>hadoop.proxyuser.chen.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.chen.groups</name>
<value>*</value>
</property>
</configuration>
hdfs-site.xml:
<configuration>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>SPA1:9001</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/chen/opt/hadoop-2.6.0/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/chen/opt/hadoop-2.6.0/dfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
mapred-site.xml (note: this file does not exist by default; copy mapred-site.xml.template and rename the copy to mapred-site.xml):
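For example, from inside etc/hadoop/:
cp mapred-site.xml.template mapred-site.xml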
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>SPA1:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>SPA1:19888</value>
</property>
</configuration>
yarn-site.xml:
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>SPA1:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>SPA1:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>SPA1:8035</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>SPA1:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>SPA1:8088</value>
</property>
</configuration>
Copy the configured Hadoop directory to the slave:
scp -r ~/opt/hadoop-2.6.0/ chen@SPA2:~/opt/
cd ~/opt/hadoop-2.6.0/
./bin/hdfs namenode -format
./sbin/start-dfs.sh
But here everything can simply be started at once with:
./sbin/start-all.sh
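start-all.sh does not start the job history server configured in mapred-site.xml; if you want it running (it shows up as JobHistoryServer in jps later on), start it separately:
./sbin/mr-jobhistory-daemon.sh start historyserver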
After everything has started, check with:
jps
The result should look like this:
Master (SPA1): NameNode, NodeManager, Jps, ResourceManager, DataNode
Slaves (SPA2): Jps, DataNode, NodeManager
To stop, use ./sbin/stop-dfs.sh (or, likewise, ./sbin/stop-all.sh to stop everything).
Check the state of the cluster with:
./bin/hdfs dfsadmin -report
Configured Capacity: 52101857280 (48.52 GB)
Present Capacity: 45749510144 (42.61 GB)
DFS Remaining: 45748686848 (42.61 GB)
DFS Used: 823296 (804 KB)
DFS Used%: 0.00%
Under replicated blocks: 10
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Live datanodes (1):
Name: 10.126.45.56:50010 (S1PA222)
Hostname: S1PA209
Decommission Status : Normal
Configured Capacity: 52101857280 (48.52 GB)
DFS Used: 823296 (804 KB)
Non DFS Used: 6352347136 (5.92 GB)
DFS Remaining: 45748686848 (42.61 GB)
DFS Used%: 0.00%
DFS Remaining%: 87.81%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Jan 05 16:44:50 CST 2015
The HDFS web UI: http://SPA1:50070/ (replace with your own hostname or IP)
The YARN resource manager web UI: http://SPA1:8088/ (replace with your own hostname or IP)
cd ~/opt/hadoop-2.6.0
mkdir input
vim input/f1
Write the following content: Hello world I am chen
vim input/f2
Write the following content: Hello world who are you
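If you would rather not open vim, the same two files can be created with echo (same content as above):
echo "Hello world I am chen" > input/f1
echo "Hello world who are you" > input/f2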
cd ~/opt/hadoop-2.6.0
./bin/hadoop fs -mkdir /tmp
./bin/hadoop fs -mkdir /tmp/input
Note: this step may fail with an error saying there is no datanode available. One possible cause is that the firewall on the datanode machines has not been turned off; on CentOS 7 it can be stopped and disabled with:
sudo systemctl stop firewalld.service // stop the firewall
sudo systemctl disable firewalld.service // keep it from starting at boot
./bin/hadoop fs -put input/ /tmp
./bin/hadoop fs -ls /tmp/input/
Found 2 items
-rw-r--r-- 3 chen supergroup 20 2015-01-04 19:09 /tmp/input/f1
-rw-r--r-- 3 chen supergroup 25 2015-01-04 19:09 /tmp/input/f2
./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /tmp/input /output
15/01/05 17:00:09 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/01/05 17:00:09 INFO client.RMProxy: Connecting to ResourceManager at S1PA11/10.58.44.47:8032
15/01/05 17:00:11 INFO input.FileInputFormat: Total input paths to process : 2
15/01/05 17:00:11 INFO mapreduce.JobSubmitter: number of splits:2
15/01/05 17:00:11 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1420447392452_0001
15/01/05 17:00:12 INFO impl.YarnClientImpl: Submitted application application_1420447392452_0001
15/01/05 17:00:12 INFO mapreduce.Job: The url to track the job: http://S1PA11:8088/proxy/application_1420447392452_0001/
15/01/05 17:00:12 INFO mapreduce.Job: Running job: job_1420447392452_0001
./bin/hadoop fs -cat /output/part-r-00000
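For the two input files above, the output should look roughly like this (each word followed by a tab and its count):
Hello	2
I	1
am	1
are	1
chen	1
who	1
world	2
you	1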
Edit the ~/.bash_profile file and add the HADOOP_HOME environment variable:
export HADOOP_HOME=/home/chen/opt/hadoop-2.6.0
export PATH=$PATH:$HADOOP_HOME/bin // do not delete the existing Java entries...
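Reload the file and make sure the hadoop command is found (a quick sanity check):
source ~/.bash_profile
hadoop version // should report Hadoop 2.6.0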
Download Scala 2.11.4 (download page: http://www.scala-lang.org/download/2.11.4.html):
wget http://downloads.typesafe.com/scala/2.11.4/scala-2.11.4.tgz // without the ?_ga=... suffix the file is saved as scala-2.11.4.tgz, matching the tar command below
tar zxvf scala-2.11.4.tgz
mv scala-2.11.4 ~/opt/
Edit the /etc/profile file and add the SCALA_HOME environment variable:
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_05
export CLASSPATH=.:$JAVA_HOME/jre/lib:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar
export SCALA_HOME=/home/chen/opt/scala-2.11.4
export HADOOP_HOME=/home/chen/opt/hadoop-2.6.0
export PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:${SCALA_HOME}/bin
// do not delete anything, only add!
scala -version
Scala code runner version 2.11.4 -- Copyright 2002-2013, LAMP/EPFL
scp /etc/profile root@SPA2:/etc/profile // sync the environment variables to the other machine
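The Scala directory itself also needs to exist on SPA2; assuming the same ~/opt layout there, the quickest way is to copy it over:
scp -r ~/opt/scala-2.11.4 chen@SPA2:~/opt/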
wget http://d3kbcqa49mib13.cloudfront.net/spark-1.2.0-bin-hadoop2.4.tgz
Note: download the Spark build that matches your Hadoop version. Since the Hadoop installed here is 2.6.0, the package to download would really be one built for Hadoop 2.6, such as spark-1.6.0-bin-hadoop2.6; the commands below keep the spark-1.2.0-bin-hadoop2.4 name, so substitute your own package name.
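The archive then has to be unpacked and moved into ~/opt, the same way as Scala above (paths assume my layout):
tar zxvf spark-1.2.0-bin-hadoop2.4.tgz
mv spark-1.2.0-bin-hadoop2.4 ~/opt/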
Then add SPARK_HOME to /etc/profile as well:
export SPARK_HOME=/home/chen/opt/spark-1.2.0-bin-hadoop2.4
export PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:${SCALA_HOME}/bin:${SPARK_HOME}/bin:${HADOOP_HOME}/bin
Go into the Spark conf directory:
cd ~/opt/spark-1.2.0-bin-hadoop2.4/conf/
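If slaves or spark-env.sh is missing (in the stock binary package they usually ship only as *.template files), create them from the templates first:
cp slaves.template slaves
cp spark-env.sh.template spark-env.sh
In slaves, list the worker hostnames, one per line: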
SPA1
SPA2
Then add the following to spark-env.sh:
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_05
export SCALA_HOME=/home/chen/opt/scala-2.11.4
export SPARK_MASTER_IP=192.168.0.64 // use your own master IP
export SPARK_WORKER_MEMORY=2g
Copy the Spark directory to the slave machine:
scp -r ~/opt/spark-1.2.0-bin-hadoop2.4 chen@SPA2:~/opt/
cd ~/opt/spark-1.2.0-bin-hadoop2.4
./sbin/start-all.sh
jps
The output is as follows:
Master:
31233 ResourceManager
27201 Jps
30498 NameNode
30733 SecondaryNameNode
5648 Worker
5399 Master
15888 JobHistoryServer
Slaves:
20352 Bootstrap
30737 NodeManager
7219 Jps
30482 DataNode
29500 Bootstrap
757 Worker
To see the Spark cluster's web management page, open:
http://SPA1:8080
Start the Spark shell:
spark-shell
Visit http://SPA1:4040/ to see the Spark web UI for this shell session.
Now take one of the files uploaded to HDFS earlier, or upload another one:
val readmeFile = sc.textFile("hdfs://SPA1:9000/tmp/README.txt")
readmeFile.count
This returns something like res: Long = <number of lines in the file>.
var theCount = readmeFile.filter(line=>line.contains("the"))
theCount.count
This gives the number of lines that contain the word "the".
Now reproduce the Hadoop wordcount example:
var wordCount = readmeFile.flatMap(line=>line.split(" ")).map(word=>(word,1)).reduceByKey(_+_)
wordCount.collect
Then check the web UI again in the browser at http://SPA1:4040.
That is the whole process of setting up the environment.