Hadoop Environment Configuration and Startup

core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://slave2.hadoop:8020</value>
    <final>true</final>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/hadoop/hadoop-root/tmp</value>
  </property>
  <property>
    <name>fs.checkpoint.period</name>
    <value>300</value>
    <description>The number of seconds between two periodic checkpoints.</description>
  </property>
  <property>
    <name>fs.checkpoint.size</name>
    <value>67108864</value>
    <description>The size of the current edit log (in bytes) that triggers a periodic checkpoint even if the fs.checkpoint.period hasn't expired.</description>
  </property>
  <property>
    <name>fs.checkpoint.dir</name>
    <value>${hadoop.tmp.dir}/dfs/namesecondary</value>
    <description>Determines where on the local filesystem the DFS secondary namenode should store the temporary images to merge. If this is a comma-delimited list of directories then the image is replicated in all of the directories for redundancy.</description>
  </property>
</configuration>
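
After distributing core-site.xml to every node, it is worth checking which values the client actually resolves. A minimal sanity check, assuming the hadoop binaries are on the PATH and HADOOP_CONF_DIR points at this configuration directory:

# Should print hdfs://slave2.hadoop:8020
hdfs getconf -confKey fs.defaultFS
# Should print 300, the checkpoint interval set above
hdfs getconf -confKey fs.checkpoint.period

If the printed values differ, the client is reading a different configuration directory than the one edited here.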

hdfs-site.xml

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/hadoop-root/dfs/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/hadoop-root/dfs/data</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>slave1:50090</value>
  </property>
</configuration>
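
With hdfs-site.xml in place and the cluster running, the DataNode registrations and the replication factor of 3 can be verified from the command line. A sketch (the /tmp/test.txt path is only a hypothetical example file):

# List live DataNodes, configured capacity, and remaining space
hdfs dfsadmin -report
# Print the replication factor actually applied to a file
hdfs dfs -stat %r /tmp/test.txt

With dfs.replication set to 3, a cluster with fewer than three DataNodes will leave blocks under-replicated.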

mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>/home/hadoop/hadoop-root/mapred/system</value>
    <final>true</final>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/home/hadoop/hadoop-root/mapred/local</value>
    <final>true</final>
  </property>
  <property>
    <name>mapreduce.tasktracker.map.tasks.maximum</name>
    <value>2</value>
  </property>
  <property>
    <name>mapreduce.tasktracker.reduce.tasks.maximum</name>
    <value>1</value>
  </property>
  <property>
    <name>mapreduce.job.maps</name>
    <value>2</value>
  </property>
  <property>
    <name>mapreduce.job.reduces</name>
    <value>1</value>
  </property>
  <property>
    <name>mapreduce.tasktracker.http.threads</name>
    <value>50</value>
  </property>
  <property>
    <name>io.sort.factor</name>
    <value>20</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx400m</value>
  </property>
  <property>
    <name>mapreduce.task.io.sort.mb</name>
    <value>200</value>
  </property>
  <property>
    <name>mapreduce.map.sort.spill.percent</name>
    <value>0.8</value>
  </property>
  <property>
    <name>mapreduce.map.output.compress</name>
    <value>true</value>
  </property>
  <property>
    <name>mapreduce.map.output.compress.codec</name>
    <value>org.apache.hadoop.io.compress.DefaultCodec</value>
  </property>
  <property>
    <name>mapreduce.reduce.shuffle.parallelcopies</name>
    <value>10</value>
  </property>
</configuration>
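
A quick way to confirm that jobs really run on YARN with mapreduce.framework.name=yarn is the bundled pi example. A sketch, assuming a standard Hadoop 2.x layout where the examples jar sits under $HADOOP_HOME/share/hadoop/mapreduce (the exact jar name varies by version):

# 2 map tasks (matching mapreduce.job.maps above), 10 samples each
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 10

While it runs, the application should appear in the ResourceManager web UI.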

I. Recovering Hadoop
1. Stop all services.
2. Delete the data and name directories under /home/hadoop/hadoop-root/dfs, then recreate them.
3. Delete the files under /home/hadoop/hadoop-root/tmp.
4. On the namenode, run hadoop namenode -format.
5. Start the Hadoop services (the whole sequence is sketched as a script below).
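
Taken together, the five steps amount to the following sketch. It assumes the directory layout from the configuration files above and that the start/stop scripts live in $HADOOP_HOME/sbin; note that this wipes all HDFS data, and steps 2 and 3 must be repeated on every node that stores data:

$HADOOP_HOME/sbin/stop-all.sh                                                  # 1. stop all services
rm -rf /home/hadoop/hadoop-root/dfs/data /home/hadoop/hadoop-root/dfs/name    # 2. remove data and name...
mkdir -p /home/hadoop/hadoop-root/dfs/data /home/hadoop/hadoop-root/dfs/name  #    ...and recreate them
rm -rf /home/hadoop/hadoop-root/tmp/*                                          # 3. clear the tmp directory
hadoop namenode -format                                                        # 4. on the namenode only
$HADOOP_HOME/sbin/start-all.sh                                                 # 5. start the services again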


----- At this point, Hadoop is recovered -----
6. Stop the HBase services; if they will not stop cleanly, kill the processes.
7. (On every node) go into /tmp/hbase-root/zookeeper and delete all files.
8. Start the HBase services (steps 6 through 8 are sketched below).
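
A corresponding sketch for steps 6 through 8, assuming HBase's control scripts are in $HBASE_HOME/bin and that /tmp/hbase-root/zookeeper is the ZooKeeper data path named above:

$HBASE_HOME/bin/stop-hbase.sh        # 6. try a clean stop first
jps                                  #    if HMaster/HRegionServer linger, note their PIDs...
# kill -9 <pid>                      #    ...and kill them by hand
rm -rf /tmp/hbase-root/zookeeper/*   # 7. run on every node holding ZooKeeper data
$HBASE_HOME/bin/start-hbase.sh       # 8. start HBase again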
