Zookeeper+Kafka+Flume

1. Environment

CentOS 6.8
JDK 1.7
develop1 192.168.1.10
develop2 192.168.1.11
develop3 192.168.1.12

Project: ZooKeeper cluster + Kafka cluster + standalone Flume
ZooKeeper: 3.4.5
Kafka: 2.10-0.10.0.0
Flume: 1.6.0

2. Install ZooKeeper

On develop1:

cd /opt/software
wget http://archive.apache.org/dist/zookeeper/zookeeper-3.4.5/zookeeper-3.4.5.tar.gz
tar -xzf  zookeeper-3.4.5.tar.gz -C /opt
mkdir /opt/zookeeper-3.4.5/data -p
mkdir /opt/zookeeper-3.4.5/logs -p
cd /opt/zookeeper-3.4.5/conf/
cp zoo_sample.cfg zoo.cfg
sed -i "s/dataDir=\/tmp\/zookeeper/dataDir=\/opt\/zookeeper-3.4.5\/data\ndataLogDir=\/opt\/zookeeper-3.4.5\/logs/" zoo.cfg
echo -e "server.1=192.168.1.10:2888:3888\nserver.2=192.168.1.11:2888:3888\nserver.3=192.168.1.12:2888:3888" >> zoo.cfg
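After these edits, zoo.cfg should contain the following settings in addition to the sample defaults (2888 is the quorum port, 3888 the leader-election port):

```properties
# Snapshot and transaction-log directories
dataDir=/opt/zookeeper-3.4.5/data
dataLogDir=/opt/zookeeper-3.4.5/logs
# Ensemble members: server.<myid>=<host>:<quorum-port>:<election-port>
server.1=192.168.1.10:2888:3888
server.2=192.168.1.11:2888:3888
server.3=192.168.1.12:2888:3888
```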

Copy the directory to the other two servers:

scp -r /opt/zookeeper-3.4.5  [email protected]:/opt
scp -r /opt/zookeeper-3.4.5  [email protected]:/opt

On 192.168.1.10:
echo "1" > /opt/zookeeper-3.4.5/data/myid
On 192.168.1.11:
echo "2" > /opt/zookeeper-3.4.5/data/myid
On 192.168.1.12:
echo "3" > /opt/zookeeper-3.4.5/data/myid

Run the following on all three machines:
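The three echo commands can be condensed into a small helper; `myid_for_ip` is a hypothetical function name, and its IP-to-id mapping must agree with the server.N lines in zoo.cfg:

```shell
# Map a node's IP address to its ZooKeeper myid
# (must match server.1/server.2/server.3 in zoo.cfg).
myid_for_ip() {
    case "$1" in
        192.168.1.10) echo 1 ;;
        192.168.1.11) echo 2 ;;
        192.168.1.12) echo 3 ;;
        *) echo "unknown host: $1" >&2; return 1 ;;
    esac
}

# On each node (assuming the first address reported by
# `hostname -I` is the cluster-facing IP):
# myid_for_ip "$(hostname -I | awk '{print $1}')" > /opt/zookeeper-3.4.5/data/myid
```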

/opt/zookeeper-3.4.5/bin/zkServer.sh start

Check the status:
/opt/zookeeper-3.4.5/bin/zkServer.sh status
One node should report "Mode: leader" and the other two "Mode: follower".

(Screenshot: ZooKeeper status output)

Connect a client:
/opt/zookeeper-3.4.5/bin/zkCli.sh -server 192.168.1.11:2181

3. Install the Kafka cluster

On develop1:

cd /opt/software
wget http://apache.fayea.com/kafka/0.10.0.0/kafka_2.10-0.10.0.0.tgz
tar -xzf kafka_2.10-0.10.0.0.tgz -C /opt
mkdir -p /opt/kafka_2.10-0.10.0.0/logs
cd /opt/kafka_2.10-0.10.0.0/config/
sed -i "s/#listeners=PLAINTEXT:\/\/:9092/listeners=PLAINTEXT:\/\/192.168.1.10:9092\nadvertised.host.name=192.168.1.10\nadvertised.port=9092/" server.properties
sed -i "s/log.dirs=\/tmp\/kafka-logs/log.dirs=\/opt\/kafka_2.10-0.10.0.0\/logs\/kafka-logs/" server.properties
sed -i "s/num.partitions=1/num.partitions=2/" server.properties
sed -i "s/zookeeper.connect=localhost:2181/zookeeper.connect=192.168.1.10:2181,192.168.1.11:2181,192.168.1.12:2181/" server.properties
echo -e "broker.list=192.168.1.10:9092,192.168.1.11:9092,192.168.1.12:9092\nproducer.type=async" >> server.properties
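On develop1 the edited server.properties should now contain the lines below (other sample defaults unchanged; note that broker.list and producer.type are producer-side settings which the broker itself ignores, kept here only for reference):

```properties
broker.id=0
listeners=PLAINTEXT://192.168.1.10:9092
advertised.host.name=192.168.1.10
advertised.port=9092
log.dirs=/opt/kafka_2.10-0.10.0.0/logs/kafka-logs
num.partitions=2
zookeeper.connect=192.168.1.10:2181,192.168.1.11:2181,192.168.1.12:2181
broker.list=192.168.1.10:9092,192.168.1.11:9092,192.168.1.12:9092
producer.type=async
```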

Copy the directory to the other two servers:

scp -r /opt/kafka_2.10-0.10.0.0 [email protected]:/opt
scp -r /opt/kafka_2.10-0.10.0.0 [email protected]:/opt

Each broker also needs a unique broker.id, and the IP substitution must be restricted to the listener lines, otherwise a global sed would also rewrite the 192.168.1.10 entry inside zookeeper.connect and broker.list.
On 192.168.1.11:
cd /opt/kafka_2.10-0.10.0.0/config
sed -i -e "/^listeners=/s/192.168.1.10/192.168.1.11/" -e "/^advertised.host.name=/s/192.168.1.10/192.168.1.11/" server.properties
sed -i "s/broker.id=0/broker.id=1/" server.properties
On 192.168.1.12:
cd /opt/kafka_2.10-0.10.0.0/config
sed -i -e "/^listeners=/s/192.168.1.10/192.168.1.12/" -e "/^advertised.host.name=/s/192.168.1.10/192.168.1.12/" server.properties
sed -i "s/broker.id=0/broker.id=2/" server.properties
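The per-node differences can also be expressed as a helper that prints exactly the settings that vary between brokers; `broker_overrides` is a hypothetical name used only for illustration:

```shell
# Print the three settings that differ per broker,
# given the node's IP and its unique broker id.
broker_overrides() {
    ip="$1"
    id="$2"
    printf 'broker.id=%s\n' "$id"
    printf 'listeners=PLAINTEXT://%s:9092\n' "$ip"
    printf 'advertised.host.name=%s\n' "$ip"
}

# Example: the overrides for develop2
# broker_overrides 192.168.1.11 1
```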
Start the broker on all three machines (from /opt/kafka_2.10-0.10.0.0):
bin/kafka-server-start.sh -daemon config/server.properties
On any one of the servers:

bin/kafka-topics.sh --create --zookeeper 192.168.1.10:2181 --replication-factor 2 --partitions 2 --topic test
bin/kafka-topics.sh --list --zookeeper 192.168.1.10:2181
bin/kafka-topics.sh --describe --zookeeper 192.168.1.10:2181  --topic test

(Screenshot: topic details)

Start a producer console to send messages:
bin/kafka-console-producer.sh --broker-list 192.168.1.10:9092,192.168.1.11:9092,192.168.1.12:9092 --topic test
Start a consumer:
bin/kafka-console-consumer.sh --bootstrap-server 192.168.1.10:9092,192.168.1.11:9092,192.168.1.12:9092 --topic test --from-beginning
Type some characters into the producer terminal.
If the same characters appear in the consumer terminal, the cluster is working.

4. Install Flume (standalone)

Install Flume on develop1:

cd /opt/software
wget http://archive.apache.org/dist/flume/1.6.0/apache-flume-1.6.0-bin.tar.gz
tar -xzf apache-flume-1.6.0-bin.tar.gz -C /opt
cd /opt/apache-flume-1.6.0-bin/conf/
cp flume-conf.properties.template flume.conf
sed -i "s/seqGenSrc/r1/" flume.conf
sed -i "s/memoryChannel/c1/" flume.conf
sed -i "s/loggerSink/s1/" flume.conf
sed -i "s/seq/netcat/" flume.conf
echo "agent.sources.r1.bind = localhost" >> flume.conf
echo "agent.sources.r1.port = 8888" >> flume.conf
sed -i "s/logger/org.apache.flume.sink.kafka.KafkaSink/" flume.conf
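Based on the flume-conf.properties.template shipped with Flume 1.6.0, the sed edits above should leave flume.conf looking roughly like this (the capacity value comes from the template):

```properties
agent.sources = r1
agent.channels = c1
agent.sinks = s1

# Netcat source listening on localhost:8888
agent.sources.r1.type = netcat
agent.sources.r1.bind = localhost
agent.sources.r1.port = 8888
agent.sources.r1.channels = c1

# Sink type rewritten from logger to the Kafka sink
agent.sinks.s1.type = org.apache.flume.sink.kafka.KafkaSink
agent.sinks.s1.channel = c1

agent.channels.c1.type = memory
agent.channels.c1.capacity = 100
```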

Append the Kafka sink settings to flume.conf:

agent.sinks.s1.topic = test
agent.sinks.s1.brokerList = 192.168.1.10:9092,192.168.1.11:9092,192.168.1.12:9092
agent.sinks.s1.requiredAcks = 1
agent.sinks.s1.batchSize = 20
agent.sinks.s1.channel = c1

bin/flume-ng agent --conf ./conf/ -f conf/flume.conf -Dflume.root.logger=DEBUG,console -n agent

References for changing the ZooKeeper log output:

1.http://www.cnblogs.com/zhwbqd/p/3957018.html
2.http://blog.csdn.net/dxl342/article/details/53302338
