Hadoop: May not run daemons as root. Please specify HADOOP_TASKTRACKER_USER

Original article: http://www.migrate2cloud.com/blog/hadoop-cluster-with-hadoop-0-20-and-ubuntu-10-04


HADOOP Cluster on AWS EC2 with hadoop-0.20 and ubuntu-10.04
Friday, June 17th, 2011 | Posted in Amazon EC2, Cloud computing
Let’s start with a small introduction: what is Hadoop? Hadoop is an open-source project administered by the Apache Software Foundation. Apache Hadoop is a Java software framework that supports data-intensive distributed applications under a free license. It enables applications to work with thousands of nodes and petabytes of data. Hadoop was inspired by Google’s MapReduce and Google File System (GFS) papers.

Technically, Hadoop consists of two key services: reliable data storage using the Hadoop Distributed File System (HDFS) and high-performance parallel data processing using a technique called MapReduce.

Dealing with big data requires two things:

Inexpensive, reliable storage; and
New tools for analyzing unstructured and structured data.
Hadoop creates clusters of machines and coordinates work among them. Clusters can be built with inexpensive computers. If one fails, Hadoop continues to operate the cluster without losing data or interrupting work, by shifting work to the remaining machines in the cluster.

HDFS manages storage on the cluster by breaking incoming files into pieces, called “blocks,” and storing each of the blocks redundantly across the pool of servers.
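Once a cluster is running, you can see this block placement for yourself with Hadoop's fsck tool (the path below is just an illustration; point it at any file you have stored in HDFS):

hadoop fsck /user/hdfs/somefile -files -blocks -locations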

The main services running in a Hadoop cluster are:

1) namenode

2) jobtracker

3) secondarynamenode

These three run only on a single node (machine); that machine is the central one which controls the cluster.

4) datanode

5) tasktracker

These two services run on all the other nodes in the cluster.

HDFS has a master/slave architecture. An HDFS cluster consists of a single NameNode, a master server that manages the file system namespace and regulates access to files by clients. In addition, there are a number of DataNodes, usually one per node in the cluster, which manage storage attached to the nodes that they run on.

Above the file systems comes the MapReduce engine, which consists of one Job Tracker, to which client applications submit MapReduce jobs. The Job Tracker pushes work out to available Task Tracker nodes in the cluster, striving to keep the work as close to the data as possible.

The only purpose of the secondary name-node is to perform periodic checkpoints. The secondary name-node periodically downloads current name-node image and edits log files, joins them into new image and uploads the new image back to the (primary and the only) name-node.
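You can see this checkpoint data on disk. With the dfs.name.dir used later in this post, the current image and edit log live under the name directory (a sketch of the 0.20 on-disk layout; exact file names can vary between versions):

ls /var/lib/hadoop-0.20/name/current
# fsimage  edits  fstime  VERSION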

Now let us have a look at how to build a Hadoop cluster using Cloudera hadoop-0.20 on Ubuntu 10.04.

You should install the Sun JDK first. Then add the following Cloudera repositories to the apt sources list:

vim /etc/apt/sources.list.d/cloudera.list

deb http://archive.cloudera.com/debian lucid-cdh3u0 contrib
deb-src http://archive.cloudera.com/debian lucid-cdh3u0 contrib

Import the repository key:

curl -s http://archive.cloudera.com/debian/archive.key | apt-key add -

Then run:

apt-get update
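If the repository was added correctly, the Cloudera packages should now be visible to apt (a quick sanity check):

apt-cache search hadoop-0.20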
For Namenode/Jobtracker (these two services should run only on a single central machine in the cluster):

apt-get install hadoop --yes
apt-get install hadoop-0.20-namenode
apt-get install hadoop-0.20-jobtracker
apt-get install hadoop-0.20-secondarynamenode
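You can confirm the packages landed before touching any configuration (the output should list the hadoop-0.20 packages installed above):

dpkg -l | grep hadoop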
Configuration

vim /etc/hadoop/conf/hadoop-env.sh

Append these lines:

export JAVA_HOME=/usr/lib/jvm/java-6-sun-1.6.0.24/   # your Java home comes here
export HADOOP_CONF_DIR=/etc/hadoop/conf
export HADOOP_HOME=/usr/lib/hadoop-0.20
export HADOOP_NAMENODE_USER=hdfs
export HADOOP_SECONDARYNAMENODE_USER=hdfs
export HADOOP_DATANODE_USER=hdfs
export HADOOP_JOBTRACKER_USER=mapred
export HADOOP_TASKTRACKER_USER=mapred
export HADOOP_IDENT_STRING=hadoop
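Before moving on, it is worth confirming that the JAVA_HOME you entered actually points at a working JDK; a typo here is a common reason the daemons refuse to start (the path matches the example above, adjust to yours):

/usr/lib/jvm/java-6-sun-1.6.0.24/bin/java -version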
vim /etc/hadoop/conf/core-site.xml

<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://< ip address of this machine >:8020</value>
  </property>
</configuration>

vim /etc/hadoop/conf/hdfs-site.xml


<?xml version="1.0"?>
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/var/lib/hadoop-0.20/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/var/lib/hadoop-0.20/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>

vim /etc/hadoop/conf/mapred-site.xml

<?xml version="1.0"?>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>< ip address of this machine >:8021</value>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>/var/lib/hadoop-0.20/system</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/var/lib/hadoop-0.20/mapred</value>
  </property>
</configuration>

——————————————————————————————————————————————

mkdir /var/lib/hadoop-0.20/name
mkdir /var/lib/hadoop-0.20/data
mkdir /var/lib/hadoop-0.20/system
mkdir /var/lib/hadoop-0.20/mapred
chown -R hdfs /var/lib/hadoop-0.20/name
chown -R hdfs /var/lib/hadoop-0.20/data
chown -R mapred /var/lib/hadoop-0.20/mapred
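A quick look at the ownership should show the name and data directories owned by hdfs, and mapred owned by mapred:

ls -l /var/lib/hadoop-0.20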
Now format the NameNode:

yes Y | /usr/bin/hadoop namenode -format
Start the NameNode:

/etc/init.d/hadoop-0.20-namenode start

Check the log file for errors:

less /usr/lib/hadoop-0.20/logs/hadoop-hadoop-namenode-.log

You can also check whether the NameNode process is up using the command:

# jps
Start the SecondaryNameNode:

/etc/init.d/hadoop-0.20-secondarynamenode start

Log: less /usr/lib/hadoop-0.20/logs/hadoop-hadoop-secondarynamenode-.log

Create the mapred.system.dir configured above inside HDFS and hand it over to the mapred user:

sudo -u hdfs hadoop fs -mkdir /var/lib/hadoop-0.20/system
sudo -u hdfs hadoop fs -chown mapred /var/lib/hadoop-0.20/system
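You can verify the directory and its ownership from HDFS's point of view (output format differs slightly between versions):

sudo -u hdfs hadoop fs -ls /var/lib/hadoop-0.20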
Now start the JobTracker:

/etc/init.d/hadoop-0.20-jobtracker start
Log: less /usr/lib/hadoop-0.20/logs/hadoop-hadoop-jobtracker-ip-10-108-39-34.log

Now the jps command will show the three processes up:

# jps

19233 JobTracker

18994 SecondaryNameNode

18871 NameNode
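At this point HDFS itself is up, though with no DataNodes attached yet. The dfsadmin report gives a cluster-wide summary and is a handy health check throughout the rest of the setup:

sudo -u hdfs hadoop dfsadmin -report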

For Datanode/Tasktracker (these two services should be running on all the other machines in the cluster):

apt-get install hadoop-0.20-datanode
apt-get install hadoop-0.20-tasktracker
Configuration

vim /etc/hadoop/conf/core-site.xml


<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://< ip address of the namenode >:8020</value>
  </property>
</configuration>

vim /etc/hadoop/conf/hdfs-site.xml

<?xml version="1.0"?>
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/var/lib/hadoop-0.20/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/var/lib/hadoop-0.20/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>

vim /etc/hadoop/conf/mapred-site.xml

<?xml version="1.0"?>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>< ip address of jobtracker >:8021</value>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>/var/lib/hadoop-0.20/system</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/var/lib/hadoop-0.20/mapred</value>
  </property>
</configuration>

———————————————————————————————————————————————

mkdir /var/lib/hadoop-0.20/data/
chown -R hdfs /var/lib/hadoop-0.20/data
mkdir /var/lib/hadoop-0.20/mapred
chown -R mapred /var/lib/hadoop-0.20/mapred
Start the DataNode:

/etc/init.d/hadoop-0.20-datanode start
Log: less /usr/lib/hadoop-0.20/logs/hadoop-hadoop-datanode-.log

Start the TaskTracker:

/etc/init.d/hadoop-0.20-tasktracker start
Log: less /usr/lib/hadoop-0.20/logs/hadoop-hadoop-tasktracker-.log
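On the slave, jps should now list both daemons; back on the NameNode machine, the new DataNode should show up in the dfsadmin report:

jps
# should list DataNode and TaskTracker

sudo -u hdfs hadoop dfsadmin -report
# 'Datanodes available' should now be non-zero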

You can now check the web interfaces:

http://< namenode-ip >:50070 – for HDFS overview

and

http://< jobtracker-ip >:50030 – for MapReduce overview
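As a final smoke test you can run one of the bundled example jobs against the cluster. The examples jar name below is an assumption based on the CDH3 package layout; use whatever examples jar is present under /usr/lib/hadoop-0.20:

sudo -u hdfs hadoop jar /usr/lib/hadoop-0.20/hadoop-examples.jar pi 2 1000
# estimates pi with 2 map tasks; a successful run exercises HDFS and MapReduce end to end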
