Hadoop 2.6.1 Installation

1. Install the JDK and configure environment variables

apt-get install openjdk-8-jdk
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
Add the exports above to /etc/profile, then apply them:

source /etc/profile

2. Download Hadoop

wget address

3. Extract the tar.gz

tar -zxvf hadoop-2.6.1.tar.gz

4. Hadoop setup

Create the directories for data storage:
/home/hadoop
/home/hadoop/tmp
/home/hadoop/hdfs
/home/hadoop/hdfs/data
/home/hadoop/hdfs/name
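The directory tree above can be created in one go with mkdir -p (a sketch; run with privileges sufficient to write under /home):

```shell
# Create the HDFS data directories referenced by the configs below.
# -p creates parents (/home/hadoop, /home/hadoop/hdfs) as needed.
mkdir -p /home/hadoop/tmp
mkdir -p /home/hadoop/hdfs/data
mkdir -p /home/hadoop/hdfs/name
```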

5. Create the installation directory

mkdir /usr/local/hadoop

Extract hadoop-2.6.1.tar.gz into /usr/local/hadoop/

6. Configure /etc/hosts on every machine

vi /etc/hosts
ip hostname
ip hostname
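For example, with a master named hadoop (matching the hostname used in the configs below) and two workers; all IPs and worker names here are illustrative:

```
192.168.1.100   hadoop
192.168.1.101   slave1
192.168.1.102   slave2
```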

7. Configure Hadoop parameters

7.1 Configure hadoop-2.6.1/etc/hadoop/core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/hadoop/tmp</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131702</value>
    </property>
</configuration>
7.2 Configure hadoop-2.6.1/etc/hadoop/hdfs-site.xml

<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/hadoop/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/hadoop/hdfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop:9001</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
7.3 Configure hadoop-2.6.1/etc/hadoop/mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop:19888</value>
    </property>
</configuration>

7.4 Configure hadoop-2.6.1/etc/hadoop/yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>hadoop:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>hadoop:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>hadoop:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>hadoop:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>hadoop:8088</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>22528</value>
    </property>
</configuration>
7.5 Configure hadoop-2.6.1/etc/hadoop/hadoop-env.sh and hadoop-2.6.1/etc/hadoop/yarn-env.sh

Set JAVA_HOME to the JDK path from step 1:

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64

7.6 Configure slaves to add the worker nodes

vi hadoop-2.6.1/etc/hadoop/slaves
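The slaves file lists one worker hostname per line, for example (illustrative names that must match the entries in /etc/hosts):

```
slave1
slave2
```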

8. Commands

Format the NameNode, then start HDFS and YARN:

bin/hdfs namenode -format
sbin/start-dfs.sh
sbin/start-yarn.sh

To stop them:

sbin/stop-dfs.sh
sbin/stop-yarn.sh

Run jps to check the running daemons.
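On the master, jps output looks roughly like this (PIDs are illustrative; worker nodes show DataNode and NodeManager instead):

```
2482 NameNode
2675 SecondaryNameNode
2823 ResourceManager
3150 Jps
```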

Run a jar:

hadoop jar xxxxx.jar arg1 arg2

**HDFS commands**
List the files in a directory:

hadoop fs -ls /

Create a directory:

hadoop fs -mkdir /newdir

Copy a local file to HDFS:

hadoop fs -copyFromLocal /home/input/a.txt /input/a.txt

Copy an HDFS file to the local filesystem:

hadoop fs -copyToLocal /input/a.txt /home/input/a.txt

Delete an HDFS directory and the files in it:

hadoop fs -rm -f -r /output1

Move files:

hadoop fs -mv URI [URI …] <dest>

Kill a job:

hadoop job -kill <job-id>

Leave safe mode:

hadoop dfsadmin -safemode leave

View the cluster (YARN web UI):
http://192.168.1.100:8088/

View HDFS (NameNode web UI):
http://192.168.1.100:50070/

9. Errors

Error during formatting:

host = java.net.UnknownHostException: centos: centos

Check the /etc/sysconfig/network file:

NETWORKING=yes
HOSTNAME=centos

The hostname is centos, but /etc/hosts has no matching IP entry for it.

vi /etc/hosts and add:

127.0.0.1   centos

Error during startup:

/hadoop-2.6.1/sbin/hadoop-daemon.sh: Permission denied

The Hadoop directory on the worker nodes needs execute permission:

chmod -R 755 hadoop-2.6.1

ShuffleError

Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#3
at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.io.BoundedByteArrayOutputStream.<init>(BoundedByteArrayOutputStream.java:56)
at org.apache.hadoop.io.BoundedByteArrayOutputStream.<init>(BoundedByteArrayOutputStream.java:46)
at org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput.<init>(InMemoryMapOutput.java:63)
at org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.unconditionalReserve(MergeManagerImpl.java:305)
at org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.reserve(MergeManagerImpl.java:295)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:514)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:336)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:193)

Fix: add this property to mapred-site.xml:

    <property>
        <name>mapreduce.reduce.shuffle.memory.limit.percent</name>
        <value>0.10</value>
    </property>

15/11/30 20:15:46 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.io.IOException: Bad connect ack with firstBadLink as 192.168.1.200:50010
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1460)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)

Fix: open port 50010 in the firewall on 192.168.1.200.
