Environment: Alibaba Cloud server, CentOS 7 x86_64
Installation media: jdk-7u75-linux-i586.tar.gz, hadoop-2.4.1.tar.gz
# tar -zxvf jdk-7u75-linux-i586.tar.gz
Configure the environment variables:
# vi .bash_profile
JAVA_HOME=/root/training/jdk1.7.0_75
export JAVA_HOME
PATH=$JAVA_HOME/bin:$PATH
export PATH
# source .bash_profile
# which java
# java -version
Bug fix: a 64-bit operating system cannot run 32-bit applications until the 32-bit glibc libraries are installed.
-bash: /root/training/jdk1.7.0_75/bin/java: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory
# yum install glibc*.i686
# locate /lib/ld-linux.so.2
# rpm -qf /lib/ld-linux.so.2
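The "bad ELF interpreter" error means the 32-bit loader is missing. A quick way to confirm an architecture mismatch is `file`; in the sketch below `/bin/ls` merely stands in for `$JAVA_HOME/bin/java` so it runs on any host:

```shell
# Inspect a binary's ELF class. A 32-bit JDK binary reports
# "ELF 32-bit LSB executable, Intel 80386" and needs /lib/ld-linux.so.2
# from glibc.i686; a 64-bit one reports "ELF 64-bit".
file /bin/ls
# Check whether the 32-bit loader is present at all:
ls /lib/ld-linux.so.2 2>/dev/null || echo "32-bit loader missing"
```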
# tar -zxvf hadoop-2.4.1.tar.gz
Configure the environment variables:
# vi .bash_profile
HADOOP_HOME=/root/training/hadoop-2.4.1
export HADOOP_HOME
PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
export PATH
# source .bash_profile
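To confirm the PATH edit takes effect, check which entry wins the lookup order (the paths are the ones used in this tutorial; the directories need not exist for the check itself):

```shell
# Prepend Hadoop's bin and sbin to PATH exactly as in .bash_profile above,
# then confirm they sit at the front of the search order.
HADOOP_HOME=/root/training/hadoop-2.4.1
export HADOOP_HOME
PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
export PATH
first=$(echo "$PATH" | cut -d: -f1)
echo "$first"   # /root/training/hadoop-2.4.1/bin
```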
Config file | Parameter | Reference value |
---|---|---|
hadoop-env.sh | JAVA_HOME | /root/training/jdk1.7.0_75 |
# vi hadoop-env.sh
export JAVA_HOME=/root/training/jdk1.7.0_75
Change the hostname; the address in /etc/hosts must be the private IP address.
# vi /etc/hosts
192.168.1.107 izwz985sjvpoji48moqz01z
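After editing /etc/hosts, you can confirm the name resolves with `getent`, which consults the same NSS sources sshd and Hadoop use (`localhost` stands in for the instance hostname here so the snippet runs anywhere):

```shell
# getent hosts <name> resolves via /etc/nsswitch.conf ("files" first, i.e. /etc/hosts).
# Replace localhost with the hostname added above, e.g. izwz985sjvpoji48moqz01z;
# the output should show the private address, not the public one.
getent hosts localhost
```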
Verify MapReduce:
# hadoop jar hadoop-mapreduce-examples-2.4.1.jar wordcount ~/training/data/input/data.txt ~/training/data/output/
# more part-r-00000
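If you want to sanity-check the expected counts without Hadoop, wordcount's output can be approximated with coreutils (a temp file with arbitrary sample text stands in for data.txt, whose contents the tutorial does not show):

```shell
# Simulate wordcount on a tiny sample: split into one word per line,
# sort, and count duplicates -- the same (word, count) pairs wordcount emits.
dir=$(mktemp -d)
printf 'hello world\nhello hadoop\n' > "$dir/data.txt"
tr ' ' '\n' < "$dir/data.txt" | sort | uniq -c
# uniq -c prints: 1 hadoop / 2 hello / 1 world
```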
Config file | Parameter | Reference value | Notes |
---|---|---|---|
hadoop-env.sh | JAVA_HOME | /root/training/jdk1.7.0_75 | Java home directory |
hdfs-site.xml | dfs.replication | 1 | Data replication factor |
core-site.xml | fs.defaultFS | hdfs://192.168.1.107:9000 | NameNode IP address and port; 9000 is the RPC port |
core-site.xml | hadoop.tmp.dir | /root/training/hadoop-2.4.1/tmp | Defaults to /tmp if unset; the configured path must already exist |
mapred-site.xml | mapreduce.framework.name | yarn | Run MapReduce on YARN |
yarn-site.xml | yarn.resourcemanager.hostname | 192.168.1.107 | Address of the ResourceManager (the YARN master) |
yarn-site.xml | yarn.nodemanager.aux-services | mapreduce_shuffle | How reducers fetch data |
hdfs-site.xml:
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
core-site.xml:
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.1.107:9000</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>/root/training/hadoop-2.4.1/tmp</value>
</property>
mapred-site.xml (create it first: cp mapred-site.xml.template mapred-site.xml):
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
yarn-site.xml:
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>192.168.1.107</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
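These files are easy to break with a missing closing tag, so a quick balance check before formatting HDFS can save a debugging round. The fragment below is a stand-in written to a temp file; point the greps at your real etc/hadoop files:

```shell
# Count opening vs closing <property> tags; they must match in every *-site.xml.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://192.168.1.107:9000</value>
</property>
EOF
opens=$(grep -c '<property>' "$tmp")
closes=$(grep -c '</property>' "$tmp")
[ "$opens" -eq "$closes" ] && echo "balanced" || echo "mismatched tags"
rm -f "$tmp"
```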
Verify HDFS and MapReduce:
# cd ~/training
# ls hadoop-2.4.1/tmp/
# hdfs namenode -format
# start-all.sh
# jps
5828 NodeManager
6284 Jps
5438 SecondaryNameNode
5288 DataNode
5579 ResourceManager
5172 NameNode
# hdfs dfsadmin -report
# hdfs dfs -mkdir /input
# hdfs dfs -put data/input/data.txt /input/data.txt
# hdfs dfs -lsr /
# hadoop jar hadoop-mapreduce-examples-2.4.1.jar wordcount /input/data.txt /output
# hdfs dfs -cat /output/part-r-00000
# stop-all.sh
# jps
Server A | Server B |
---|---|
1. Generate A's key pair: ssh-keygen -t rsa | - |
2. Send A's public key to B: ssh-copy-id -i ... -> B | 3. Receive Server A's public key |
- | 4. Generate a random string, e.g. helloworld |
- | 5. Encrypt it with A's public key: * |
- | 6. Send the encrypted string * to A |
7. Receive the encrypted string from B | - |
8. Decrypt it with A's private key -> helloworld | - |
9. Send the decrypted helloworld back to B | 10. Receive the decrypted string helloworld from A |
- | 11. Compare the strings from step 4 and step 10; if they match, Server B allows Server A to log in without a password |
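The challenge-response in steps 4-8 of the table can be reproduced with openssl, to see why only the private-key holder passes. The key files and directory below are made up for the demo and generated in a temp dir:

```shell
# Step 1: A's key pair (2048-bit RSA, demo files in a temp dir).
dir=$(mktemp -d)
openssl genrsa -out "$dir/a.key" 2048 2>/dev/null
openssl rsa -in "$dir/a.key" -pubout -out "$dir/a.pub" 2>/dev/null
# Step 4: B picks a random string.
printf 'helloworld' > "$dir/challenge"
# Step 5: B encrypts it with A's PUBLIC key.
openssl pkeyutl -encrypt -pubin -inkey "$dir/a.pub" \
  -in "$dir/challenge" -out "$dir/challenge.enc"
# Step 8: only A's PRIVATE key can decrypt the challenge.
plain=$(openssl pkeyutl -decrypt -inkey "$dir/a.key" -in "$dir/challenge.enc")
echo "$plain"   # helloworld
```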
# cd ~
# ls .ssh/
known_hosts
# ssh-keygen -t rsa
# ssh-copy-id -i .ssh/id_rsa.pub [email protected]
# more .ssh/authorized_keys