First, don't pick up someone else's second-hand build; get the software first-hand. And since you're starting anyway, it is worth a few minutes on the official site to see which organization the project comes from.
Hadoop is a top-level Apache project; official site:
http://hadoop.apache.org/
For a quick, informal overview there is the Baidu Baike entry; the introduction is decent, more complete than the Wikipedia article, and it also shows how hot Hadoop is in China:
http://baike.baidu.com/link?url=BRQsh1t3cOeZP-uV5q8WQjjyIWn96SzgGFf1qLl2mSYFl-oFIFafZXRb9lcBZe_F34TPcl2gsu9yCfQh-4G7jq
This installation targets CentOS 6.6. For a distributed install, try to keep the OS version consistent across all servers.
[root@hadoop01 hadoop-2.6.0]# cat /etc/issue
CentOS release 6.6 (Final)
Kernel \r on an \m
This installation uses two machines, one master and one slave:
If you run them under VMware Workstation or vSphere, a default CentOS minimal install is assumed here.
hadoop01 192.168.145.128
hadoop02 192.168.145.129
Edit /etc/hosts with vi:
127.0.0.1       localhost localhost4 localhost4.localdomain4 localhost.localdomain
192.168.145.128 hadoop01
192.168.145.129 hadoop02
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
Note: this change must be made on both servers.
Test connectivity between the two machines:
[root@hadoop01 hadoop-2.6.0]# ping hadoop02
PING hadoop02 (192.168.145.129) 56(84) bytes of data.
64 bytes from hadoop02 (192.168.145.129): icmp_seq=1 ttl=64 time=0.276 ms
64 bytes from hadoop02 (192.168.145.129): icmp_seq=2 ttl=64 time=0.391 ms
64 bytes from hadoop02 (192.168.145.129): icmp_seq=3 ttl=64 time=0.232 ms
64 bytes from hadoop02 (192.168.145.129): icmp_seq=4 ttl=64 time=0.390 ms
If this is a test environment, you can simply stop the firewall:
service iptables stop
If this is a production environment, configure the firewall properly instead.
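A minimal sketch of what "properly" might look like, assuming this guide's single-master layout and the ports configured later (22 for SSH, 9000 for HDFS RPC, 50070 and 8088 for the web UIs); adjust to your own topology:
# allow established traffic, SSH, HDFS RPC from the cluster subnet, and the web UIs
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp -s 192.168.145.0/24 --dport 9000 -j ACCEPT
iptables -A INPUT -p tcp --dport 50070 -j ACCEPT
iptables -A INPUT -p tcp --dport 8088 -j ACCEPT
service iptables save    # persist the rules on CentOS 6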
For the JDK, see the separate CentOS JDK installation guide.
Here is a brief walkthrough of passwordless SSH:
First configure hadoop01 (the master).
Enter the .ssh directory: [root@hadoop01 sbin]$ cd ~/.ssh/
Generate a key pair: ssh-keygen -t rsa, then just press Enter through all the prompts.
This produces two files, id_rsa and id_rsa.pub.
Create the authorized_keys file: [root@hadoop01 .ssh]$ cat id_rsa.pub >> authorized_keys
Generate a key pair on the other machine, hadoop02 (the slave), as well.
The steps mirror those on hadoop01:
Enter the .ssh directory: [root@hadoop02 sbin]$ cd ~/.ssh/
Generate a key pair: ssh-keygen -t rsa, pressing Enter through all the prompts.
Again this produces the two files id_rsa and id_rsa.pub.
Copy hadoop02's id_rsa.pub to hadoop01: [root@hadoop02 .ssh]$ scp id_rsa.pub root@192.168.145.128:~/.ssh/id_rsa.pub_sl
Then switch to hadoop01 and merge it into authorized_keys: [root@hadoop01 .ssh]$ cat id_rsa.pub_sl >> authorized_keys
Copy authorized_keys back to hadoop02's ~/.ssh directory: [root@hadoop01 .ssh]$ scp authorized_keys root@192.168.145.129:~/.ssh/
Now, on both machines, set the .ssh directory's permissions to 700 and authorized_keys to 600 (or 644):
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
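As an aside, the whole copy-merge-chmod dance above can usually be collapsed into ssh-copy-id, which ships with openssh-clients on CentOS and appends the key and fixes permissions on the remote side in one step. A sketch, assuming both hosts resolve as configured in /etc/hosts:
# on hadoop01: push the local public key to hadoop02
ssh-copy-id root@hadoop02
# on hadoop02: push its key back to hadoop01
ssh-copy-id root@hadoop01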
OK, with the steps above done, you can test the SSH login:
[root@hadoop01 .ssh]$ ssh hadoop02
Last login: Mon Jan 5 15:18:58 2015 from root
[root@hadoop02 ~]$ exit
logout
Connection to hadoop02 closed.
[root@hadoop01 .ssh]$ ssh hadoop02
Last login: Mon Jan 5 15:46:00 2015 from root
[root@hadoop02 .ssh]$ ssh hadoop01
Last login: Mon Jan 5 15:46:43 2015 from root
[root@hadoop01 ~]$ exit
Connection to hadoop01 closed.
Passwordless SSH verification completed successfully.
On a default CentOS install, passwordless SSH often runs into problems. Debug with ssh -v; for the full detail of the authentication exchange, use ssh -vvv.
A successful session looks like this:
[root@hadoop01 hadoop-2.6.0]# ssh -v hadoop02
OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: Connecting to hadoop02 [192.168.145.129] port 22.
debug1: Connection established.
debug1: permanently_set_uid: 0/0
debug1: identity file /root/.ssh/identity type -1
debug1: identity file /root/.ssh/identity-cert type -1
debug1: identity file /root/.ssh/id_rsa type -1
debug1: identity file /root/.ssh/id_rsa-cert type -1
debug1: identity file /root/.ssh/id_dsa type -1
debug1: identity file /root/.ssh/id_dsa-cert type -1
debug1: identity file /root/.ssh/id_ecdsa type -1
debug1: identity file /root/.ssh/id_ecdsa-cert type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3
debug1: match: OpenSSH_5.3 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.3
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-md5 none
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Host 'hadoop02' is known and matches the RSA host key.
debug1: Found key in /root/.ssh/known_hosts:4
debug1: ssh_rsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
debug1: Next authentication method: gssapi-keyex
debug1: No valid Key exchange context
debug1: Next authentication method: gssapi-with-mic
debug1: Unspecified GSS failure. Minor code may provide more information
Credentials cache file '/tmp/krb5cc_0' not found
debug1: Unspecified GSS failure. Minor code may provide more information
Credentials cache file '/tmp/krb5cc_0' not found
debug1: Unspecified GSS failure. Minor code may provide more information
debug1: Unspecified GSS failure. Minor code may provide more information
Credentials cache file '/tmp/krb5cc_0' not found
debug1: Next authentication method: publickey
debug1: Trying private key: /root/.ssh/identity
debug1: Trying private key: /root/.ssh/id_rsa
debug1: read PEM private key done: type RSA
debug1: Authentication succeeded (publickey).
debug1: channel 0: new [client-session]
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
debug1: Sending environment.
debug1: Sending env LANG = en_US.UTF-8
Last login: Sun Apr 12 19:38:17 2015 from 192.168.145.1
[root@hadoop02 ~]#
If instead you hit this kind of failure: the client has picked up the id_rsa file, yet authentication keeps failing and falls back to password authentication.
See: http://www.cnblogs.com/qcly/p/3219535.html
PS: handling common problems
1. ssh localhost: publickey authorization fails
sudo vi /etc/ssh/sshd_config
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
service sshd restart
Note: ssh supports both publickey and password authorization at the same time; publickey is not enabled by default and needs to be set to yes.
If the client has no .ssh/id_rsa, password authorization is used; if it exists, publickey authorization is used;
if publickey authorization fails, ssh still falls back to password authorization.
Do not set PasswordAuthentication no; it disables password logins entirely, leaving you able to log in only from the local console!
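A quick sanity check before restarting sshd (a sketch; paths are the CentOS defaults):
# confirm the relevant settings and validate the config file first
grep -E '^(RSAAuthentication|PubkeyAuthentication|PasswordAuthentication)' /etc/ssh/sshd_config
/usr/sbin/sshd -t && service sshd restart    # -t only tests the config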
2. SELinux interference
vi /etc/selinux/config
SELINUX=disabled
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
Finally, reboot the machine and run ssh localhost.
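If you would rather not disable SELinux wholesale, restoring the expected security context on the key files is often enough; a hedged alternative, not part of the original steps:
restorecon -R -v ~/.ssh    # reset SELinux labels under ~/.ssh
getenforce                 # check the current SELinux mode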
3. ssh by IP and by hostname both report: connection refused
Check that the ssh server is installed on the target host, that the service is running, and that it is listening on port 22;
check that the user is allowed to log in;
check whether local iptables rules block ssh connections in or out.
In general, whenever a service depends on a port, make sure that port is open; a few commands for walking the checklist follow.
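A sketch of those checks on the target host, assuming CentOS 6 tooling:
service sshd status            # is the ssh server running?
netstat -tlnp | grep :22       # is it listening on port 22?
iptables -L -n | grep -w 22    # is there a rule affecting ssh?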
The latest release at the time of writing is 2.6.0.
Download a Hadoop release from: http://mirror.bit.edu.cn/apache/hadoop/common/
http://www.apache.org/dyn/closer.cgi/hadoop/common/
After downloading you have the hadoop-2.6.0.tar.gz tarball.
1. Extract it: tar -xzvf hadoop-2.6.0.tar.gz
2. Move it to the target directory:
[root@hadoop01 software]$ mv hadoop-2.6.0 ~/usr/
3. Enter the hadoop directory:
[root@hadoop01 usr]$ cd hadoop-2.6.0/
[root@hadoop01 hadoop-2.6.0]$ ls
bin  dfs  etc  include  input  lib  libexec  LICENSE.txt  logs  NOTICE.txt  README.txt  sbin  share  tmp
Before configuring, create the local directories the configs will point at: a tmp directory plus dfs/name and dfs/data (the XML below uses file:/usr/hadoop-2.6.0/tmp, file:/usr/hadoop-2.6.0/dfs/name, and file:/usr/hadoop-2.6.0/dfs/data; see the sketch after this list). Seven configuration files are involved, all under the etc/hadoop directory; they can be edited with vi (or gedit on a desktop install):
~/hadoop/etc/hadoop/hadoop-env.sh
~/hadoop/etc/hadoop/yarn-env.sh
~/hadoop/etc/hadoop/slaves
~/hadoop/etc/hadoop/core-site.xml
~/hadoop/etc/hadoop/hdfs-site.xml
~/hadoop/etc/hadoop/mapred-site.xml
~/hadoop/etc/hadoop/yarn-site.xml
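A sketch for creating those directories up front, matching the file:/usr/hadoop-2.6.0/... paths used in the XML below (run on both machines; adjust if you change the paths):
mkdir -p /usr/hadoop-2.6.0/tmp
mkdir -p /usr/hadoop-2.6.0/dfs/name
mkdir -p /usr/hadoop-2.6.0/dfs/data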
4. Enter the hadoop configuration directory:
[root@hadoop01 hadoop-2.6.0]$ cd etc/hadoop/
[root@hadoop01 hadoop]$ ls
capacity-scheduler.xml  hadoop-env.sh               httpfs-env.sh            kms-env.sh            mapred-env.sh               ssl-client.xml.example
configuration.xsl       hadoop-metrics2.properties  httpfs-log4j.properties  kms-log4j.properties  mapred-queues.xml.template  ssl-server.xml.example
container-executor.cfg  hadoop-metrics.properties   httpfs-signature.secret  kms-site.xml          mapred-site.xml             yarn-env.cmd
core-site.xml           hadoop-policy.xml           httpfs-site.xml          log4j.properties      mapred-site.xml.template    yarn-env.sh
hadoop-env.cmd          hdfs-site.xml               kms-acls.xml             mapred-env.cmd        slaves                      yarn-site.xml
4.1. Configure hadoop-env.sh --> set JAVA_HOME
# The java implementation to use.
export JAVA_HOME=/usr/java/jdk1.7.0_60
4.2. Configure yarn-env.sh --> set JAVA_HOME
# some Java parameters
export JAVA_HOME=/usr/java/jdk1.7.0_60
4.3. Configure the slaves file --> add the slave node:
hadoop02
4.4. Configure core-site.xml --> add the Hadoop core settings (the HDFS port is 9000; the temp directory is file:/usr/hadoop-2.6.0/tmp):
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop01:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/hadoop-2.6.0/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>hadoop.proxyuser.spark.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.spark.groups</name>
    <value>*</value>
  </property>
</configuration>
4.5. Configure hdfs-site.xml --> add the HDFS settings (secondary namenode address, namenode and datanode directories, replication factor):
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop01:9001</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/hadoop-2.6.0/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/hadoop-2.6.0/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
4.6. Configure mapred-site.xml --> add the MapReduce settings (run on the YARN framework; jobhistory RPC address and web address):
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop01:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop01:19888</value>
  </property>
</configuration>
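Note that start-yarn.sh does not start the jobhistory daemon; if you want the 10020/19888 addresses configured above to answer, start it separately once the cluster is up (the standard Hadoop 2.x script):
./sbin/mr-jobhistory-daemon.sh start historyserver    # run on hadoop01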
4.7. Configure yarn-site.xml --> enable the YARN services:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>hadoop01:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>hadoop01:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>hadoop01:8035</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>hadoop01:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>hadoop01:8088</value>
  </property>
</configuration>
5. Copy the configured hadoop tree to the slave machine:
[root@hadoop01 usr]$ scp -r hadoop-2.6.0/ root@192.168.145.129:~/usr/
Then, on both hadoop01 and hadoop02, add to /etc/profile:
export HADOOP_HOME=/usr/hadoop-2.6.0
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
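After editing /etc/profile, reload it and verify the variables took effect (a quick sanity check):
source /etc/profile
echo $HADOOP_HOME                 # should print /usr/hadoop-2.6.0
$HADOOP_HOME/bin/hadoop version   # should report Hadoop 2.6.0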
1. Format the namenode:
[root@hadoop01 opt]$ cd hadoop-2.6.0/
[root@hadoop01 hadoop-2.6.0]$ ls
bin  dfs  etc  include  input  lib  libexec  LICENSE.txt  logs  NOTICE.txt  README.txt  sbin  share  tmp
[root@hadoop01 hadoop-2.6.0]$ ./bin/hdfs namenode -format
# hadoop02
[root@hadoop02 .ssh]$ cd ~/opt/hadoop-2.6.0
[root@hadoop02 hadoop-2.6.0]$ ./bin/hdfs namenode -format
2. Start HDFS:
[root@hadoop01 hadoop-2.6.0]# ./sbin/start-dfs.sh
15/04/12 11:44:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop01]
hadoop01: starting namenode, logging to /usr/hadoop-2.6.0/logs/hadoop-root-namenode-hadoop01.out
hadoop02: starting datanode, logging to /usr/hadoop-2.6.0/logs/hadoop-root-datanode-hadoop02.out
Starting secondary namenodes [hadoop01]
hadoop01: starting secondarynamenode, logging to /usr/hadoop-2.6.0/logs/hadoop-root-secondarynamenode-hadoop01.out
hadoop01: Java HotSpot(TM) Client VM warning: You have loaded library /usr/hadoop-2.6.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
hadoop01: It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
15/04/12 11:44:21 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@hadoop01 hadoop-2.6.0]# ./sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/hadoop-2.6.0/logs/yarn-root-resourcemanager-hadoop01.out
hadoop02: starting nodemanager, logging to /usr/hadoop-2.6.0/logs/yarn-root-nodemanager-hadoop02.out
[root@hadoop01 hadoop-2.6.0]# jps
1656 ResourceManager
1350 NameNode
1909 Jps
1514 SecondaryNameNode
The NativeCodeLoader warning is benign: the bundled native library does not match this platform, so Hadoop falls back to its built-in Java classes, exactly as the message says.
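On the slave, jps should show the worker daemons; a sketch (your PIDs will differ):
[root@hadoop02 ~]# jps
# expect DataNode and NodeManager entries here, plus Jps itself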
3. Stop HDFS:
[root@hadoop01 hadoop-2.6.0]$ ./sbin/stop-dfs.sh
15/01/05 16:40:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Stopping namenodes on [hadoop01]
hadoop01: stopping namenode
hadoop02: stopping datanode
Stopping secondary namenodes [hadoop01]
hadoop01: stopping secondarynamenode
15/01/05 16:40:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@hadoop01 hadoop-2.6.0]$ jps
30336 Jps
22230 Master
22478 Worker
19781 ResourceManager
(The Master and Worker entries in this jps output are Spark daemons running on the same host, not Hadoop processes.)
4. Start YARN:
[root@hadoop01 hadoop-2.6.0]# ./sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/hadoop-2.6.0/logs/yarn-root-resourcemanager-hadoop01.out
hadoop02: starting nodemanager, logging to /usr/hadoop-2.6.0/logs/yarn-root-nodemanager-hadoop02.out
5. Stop YARN:
[root@hadoop01 hadoop-2.6.0]$ ./sbin/stop-yarn.sh
stopping yarn daemons
stopping resourcemanager
hadoop02: stopping nodemanager
no proxyserver to stop
6. Check the cluster status:
[root@hadoop01 hadoop-2.6.0]# ./bin/hdfs dfsadmin -report
15/04/12 11:45:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 18435350528 (17.17 GB)
Present Capacity: 15464701952 (14.40 GB)
DFS Remaining: 15464259584 (14.40 GB)
DFS Used: 442368 (432 KB)
DFS Used%: 0.00%
Under replicated blocks: 6
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Live datanodes (1):

Name: 192.168.145.129:50010 (hadoop02)
Hostname: hadoop02
Decommission Status : Normal
Configured Capacity: 18435350528 (17.17 GB)
DFS Used: 442368 (432 KB)
Non DFS Used: 2970648576 (2.77 GB)
DFS Remaining: 15464259584 (14.40 GB)
DFS Used%: 0.00%
DFS Remaining%: 83.88%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sun Apr 12 11:45:25 CST 2015
7. View the HDFS web UI:
http://192.168.145.128:50070/
8. View the ResourceManager web UI:
http://192.168.145.128:8088/
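If you only have a shell, a hedged curl check confirms both UIs answer (a 200 status means the page is being served):
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.145.128:50070/
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.145.128:8088/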
9. Run the wordcount example
9.1. Create a local input directory:
[root@hadoop01 hadoop-2.6.0]$ mkdir input
9.2. Create f1 and f2 under input and write some content (one way to do it is sketched below):
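A sketch using echo (any editor works just as well); the contents match the cat output that follows:
echo "Hello world bye jj" > input/f1
echo "Hello Hadoop bye Hadoop" > input/f2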
[root@hadoop01 hadoop-2.6.0]$ cat input/f1
Hello world bye jj
[root@hadoop01 hadoop-2.6.0]$ cat input/f2
Hello Hadoop bye Hadoop
9.3. Create the /tmp/input directory on HDFS:
[root@hadoop01 hadoop-2.6.0]$ ./bin/hadoop fs -mkdir /tmp
15/01/05 16:53:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@hadoop01 hadoop-2.6.0]$ ./bin/hadoop fs -mkdir /tmp/input
15/01/05 16:54:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
9.4. Copy the f1 and f2 files into the HDFS /tmp/input directory:
[root@hadoop01 hadoop-2.6.0]$ ./bin/hadoop fs -put input/ /tmp
15/01/05 16:56:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
9.5. Check that f1 and f2 are now on HDFS:
[root@hadoop01 hadoop-2.6.0]$ ./bin/hadoop fs -ls /tmp/input/
15/01/05 16:57:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 2 items
-rw-r--r--   3 spark supergroup         20 2015-01-04 19:09 /tmp/input/f1
-rw-r--r--   3 spark supergroup         25 2015-01-04 19:09 /tmp/input/f2
9.6. Run the wordcount program:
[root@hadoop01 hadoop-2.6.0]# ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /tmp/input /output
15/04/12 11:48:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/04/12 11:48:29 INFO client.RMProxy: Connecting to ResourceManager at hadoop01/192.168.145.128:8032
15/04/12 11:48:30 INFO input.FileInputFormat: Total input paths to process : 2
15/04/12 11:48:30 INFO mapreduce.JobSubmitter: number of splits:2
15/04/12 11:48:31 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1428810311695_0001
15/04/12 11:48:31 INFO impl.YarnClientImpl: Submitted application application_1428810311695_0001
15/04/12 11:48:31 INFO mapreduce.Job: The url to track the job: http://hadoop01:8088/proxy/application_1428810311695_0001/
15/04/12 11:48:31 INFO mapreduce.Job: Running job: job_1428810311695_0001
15/04/12 11:48:42 INFO mapreduce.Job: Job job_1428810311695_0001 running in uber mode : false
15/04/12 11:48:42 INFO mapreduce.Job:  map 0% reduce 0%
15/04/12 11:48:49 INFO mapreduce.Job:  map 100% reduce 0%
15/04/12 11:48:54 INFO mapreduce.Job:  map 100% reduce 100%
15/04/12 11:48:54 INFO mapreduce.Job: Job job_1428810311695_0001 completed successfully
15/04/12 11:48:54 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=84
                FILE: Number of bytes written=317792
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=241
                HDFS: Number of bytes written=36
                HDFS: Number of read operations=9
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters
                Launched map tasks=2
                Launched reduce tasks=1
                Data-local map tasks=2
                Total time spent by all maps in occupied slots (ms)=9245
                Total time spent by all reduces in occupied slots (ms)=3185
                Total time spent by all map tasks (ms)=9245
                Total time spent by all reduce tasks (ms)=3185
                Total vcore-seconds taken by all map tasks=9245
                Total vcore-seconds taken by all reduce tasks=3185
                Total megabyte-seconds taken by all map tasks=9466880
                Total megabyte-seconds taken by all reduce tasks=3261440
        Map-Reduce Framework
                Map input records=2
                Map output records=8
                Map output bytes=75
                Map output materialized bytes=90
                Input split bytes=196
                Combine input records=8
                Combine output records=7
                Reduce input groups=5
                Reduce shuffle bytes=90
                Reduce input records=7
                Reduce output records=5
                Spilled Records=14
                Shuffled Maps =2
                Failed Shuffles=0
                Merged Map outputs=2
                GC time elapsed (ms)=369
                CPU time spent (ms)=3110
                Physical memory (bytes) snapshot=402833408
                Virtual memory (bytes) snapshot=1495408640
                Total committed heap usage (bytes)=257171456
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=45
        File Output Format Counters
                Bytes Written=36
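To inspect the result, read the reducer output under /output (the wordcount example writes a part-r-00000 file there by default):
./bin/hadoop fs -ls /output
./bin/hadoop fs -cat /output/part-r-00000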