The simplest configuration for pushing Hadoop metrics to Ganglia

Two machines: 192.168.14.8 and 192.168.14.7

192.168.14.8 hosts the Ganglia server side and agent (gmetad, gmond, web UI)

192.168.14.7 hosts Hadoop

-------------------------

192.168.14.8

OS version: CentOS Linux release 7.4.1708 (Core)

1. Install wget

yum -y install wget

2. Download the Aliyun EPEL repo to /etc/yum.repos.d/

wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

3. Install Ganglia (gmond agent, gmetad aggregator, and the web frontend)

yum -y install ganglia-gmond.x86_64 ganglia-gmetad.x86_64 ganglia-web.x86_64

4. Edit the httpd config so the web UI is reachable from any IP

vi /etc/httpd/conf.d/ganglia.conf

Comment out the default "Require local" line and add "Require all granted":

Alias /ganglia /usr/share/ganglia

  # Require local

  Require all granted

  # Require ip 10.1.2.3

  # Require host example.org
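For orientation, the packaged ganglia.conf on CentOS 7 wraps these directives in a Directory block, so the edited section looks roughly like this (a sketch; the stock file may differ slightly between package versions, and Apache comments must sit on their own lines):

Alias /ganglia /usr/share/ganglia

<Directory "/usr/share/ganglia">
   AllowOverride All
   # Require local
   Require all granted
   # Require ip 10.1.2.3
   # Require host example.org
</Directory>

apachectl configtest can be run to validate the syntax before httpd is started in step 9.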

5. Fix ownership of the RRD database directory

chown -R nobody:nobody /var/lib/ganglia/rrds
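A quick check that the ownership change took:

ls -ld /var/lib/ganglia/rrds

The owner and group should now both be nobody, matching the setuid_username set in the next step.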

6. Edit the gmetad config

vi /etc/ganglia/gmetad.conf

gridname "HadoopCluster"

setuid_username nobody

data_source "HadoopCluster" 192.168.14.8 192.168.14.7

#data_source "my cluster" localhost

Each address on a data_source line is a redundant source for the same cluster; gmetad polls them in order, by default over TCP port 8649.

7. Edit the gmond config

vi /etc/ganglia/gmond.conf

cluster {

  name = "HadoopCluster"

}

udp_send_channel {

  host = 192.168.14.8

  port = 8649

}

udp_recv_channel {

  port = 8649

}
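This is a unicast setup: gmond on 192.168.14.8 sends to and listens on UDP 8649 on itself, and the Hadoop node needs no gmond of its own, because Hadoop's GangliaSink (step 10 on 192.168.14.7) sends UDP packets straight to this channel. Once the services are running (step 9), the channel can be exercised with a throwaway metric via gmetric, which ships with ganglia-gmond (the metric name here is arbitrary):

gmetric --conf=/etc/ganglia/gmond.conf --name=test_metric --value=42 --type=int32 --units=count

test_metric should then show up on the host's page in the web UI within one polling cycle.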

8. Edit the hosts file

vi /etc/hosts

192.168.14.8 ganglia

192.168.14.7 hadoop
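A quick check that both names resolve:

getent hosts ganglia hadoop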

9. Start the services

systemctl start httpd.service

systemctl start gmetad.service

systemctl start gmond.service
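A few quick checks that everything came up (a sketch: nc is from the nmap-ncat package, and the TCP checks assume the stock tcp_accept_channel on 8649 and gmetad's default xml_port 8651 were left in place):

systemctl status httpd gmetad gmond

ss -ulnp | grep 8649     # gmond's UDP receive channel

nc 127.0.0.1 8649 | head     # raw cluster XML served by gmond

nc 127.0.0.1 8651 | head     # aggregated XML served by gmetad

The web UI should now be reachable at http://192.168.14.8/ganglia/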

------------------

192.168.14.7

OS version: CentOS Linux release 7.4.1708 (Core)

Software versions: jdk-8u171, hadoop-2.7.5

1. Unpack the JDK

tar -zxvf jdk-8u171-linux-x64.tar.gz

2. Move it to /opt

mv jdk1.8.0_171/ /opt/jdk

3. Unpack Hadoop

tar -zxvf hadoop-2.7.5.tar.gz

4. Move it to /opt

mv hadoop-2.7.5/ /opt/hadoop

5. Edit the environment variables

vi ~/.bashrc

export JAVA_HOME=/opt/jdk

export HADOOP_HOME=/opt/hadoop

export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

6. Apply the environment variables

source ~/.bashrc
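Sanity-check that both tools are now on the PATH:

java -version

hadoop version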

7. Create directories to hold the NameNode metadata and the DataNode block data

mkdir /opt/namenode

mkdir /opt/datanode

8. Edit core-site.xml

vi /opt/hadoop/etc/hadoop/core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.14.7:9000</value>
    </property>
</configuration>

9. Edit hdfs-site.xml

vi /opt/hadoop/etc/hadoop/hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///opt/namenode</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///opt/datanode</value>
    </property>
</configuration>

10. Edit the metrics file

vi /opt/hadoop/etc/hadoop/hadoop-metrics2.properties

*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31

*.sink.ganglia.period=10

*.sink.ganglia.supportsparse=true

*.sink.ganglia.slope=jvm.metrics.gcCount=zero,jvm.metrics.memHeapUsedM=both

*.sink.ganglia.dmax=jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40

namenode.sink.ganglia.servers=192.168.14.8:8649
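Only the NameNode sink is wired up above. Step 12 also starts a DataNode, so its metrics can be forwarded the same way by adding an analogous line (same pattern, different daemon prefix):

datanode.sink.ganglia.servers=192.168.14.8:8649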

11. Format the NameNode

hdfs namenode -format
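On success the output should contain a line similar to the following (the exact log prefix varies by version):

INFO common.Storage: Storage directory /opt/namenode has been successfully formatted.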

12. Start the daemons

hadoop-daemon.sh start namenode

hadoop-daemon.sh start datanode
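Verify both daemons are up, then open http://192.168.14.8/ganglia/; after the 10-second sink period the NameNode's jvm.* metrics should start appearing under the HadoopCluster grid:

jps

The output should list NameNode and DataNode (alongside Jps itself).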
