I. Single broker
1. Download and extract Kafka, then enter the directory
$ cd /Users/didi/Softwres/kafka_2.11-0.11.0.2
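If the archive has not been downloaded yet, a sketch of the download/extract step (assuming the Scala 2.11 build of 0.11.0.2 from the Apache archive; the URL and target directory are assumptions, adjust to your setup):
$ curl -O https://archive.apache.org/dist/kafka/0.11.0.2/kafka_2.11-0.11.0.2.tgz
$ tar -xzf kafka_2.11-0.11.0.2.tgz
$ cd kafka_2.11-0.11.0.2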
2. Start the services
Start ZooKeeper: $ bin/zookeeper-server-start.sh config/zookeeper.properties
Start Kafka: $ bin/kafka-server-start.sh config/server.properties
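Both commands run in the foreground, so use a separate terminal for each. As an optional variant (not part of the original steps), both start scripts should also accept a -daemon flag to run in the background:
$ bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
$ bin/kafka-server-start.sh -daemon config/server.properties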
3. Create a topic (named test, with a single partition and a single replica)
$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
4. Verify that the topic was created
$ bin/kafka-topics.sh --list --zookeeper localhost:2181
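Expected output, assuming test is the only topic created so far:
test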
5. Send some messages to the Kafka cluster
Start a console producer and type a few messages (each line is sent as a separate message):
$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
>This is a message
>This is another message
>
6. Start a consumer and read the messages that were sent
$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
This is a message
This is another message
II. Starting multiple brokers (see: https://kafka.apache.org/quickstart#quickstart_multibroker)
1. The broker.id property is the unique identifier of each node in the cluster. Since all brokers here run on a single machine, copy the config file once per extra broker and modify the copies:
$ cp config/server.properties config/server-1.properties
$ cp config/server.properties config/server-2.properties
Edit the copies as follows (a scripted version of these edits is sketched after the snippets):
config/server-1.properties:
broker.id=1
listeners=PLAINTEXT://:9093
log.dirs=/tmp/kafka-logs-1
-------------------------------------------------------------
config/server-2.properties:
broker.id=2
listeners=PLAINTEXT://:9094
log.dirs=/tmp/kafka-logs-2
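If editing by hand feels error-prone, the same changes can be scripted. A rough sketch, assuming the stock 0.11 server.properties (broker.id=0, log.dirs=/tmp/kafka-logs, and the listeners line shipped commented out); on macOS/BSD sed the -i flag needs an empty suffix (''), on GNU/Linux use plain -i:
$ sed -i '' -e 's/^broker.id=0/broker.id=1/' \
      -e 's|^#listeners=PLAINTEXT://:9092|listeners=PLAINTEXT://:9093|' \
      -e 's|^log.dirs=/tmp/kafka-logs$|log.dirs=/tmp/kafka-logs-1|' config/server-1.properties
$ sed -i '' -e 's/^broker.id=0/broker.id=2/' \
      -e 's|^#listeners=PLAINTEXT://:9092|listeners=PLAINTEXT://:9094|' \
      -e 's|^log.dirs=/tmp/kafka-logs$|log.dirs=/tmp/kafka-logs-2|' config/server-2.properties
A quick check that the three properties were changed:
$ grep -E '^(broker.id|listeners|log.dirs)=' config/server-1.properties config/server-2.properties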
2. Start these two brokers
$ bin/kafka-server-start.sh config/server-1.properties
$ bin/kafka-server-start.sh config/server-2.properties
Three brokers are now running (the original one plus the two just started).
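Each of these commands also blocks its terminal; the upstream quickstart runs them in the background instead, for example:
$ bin/kafka-server-start.sh config/server-1.properties &
$ bin/kafka-server-start.sh config/server-2.properties &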
3. Create a new topic (my-replicated-topic) with a replication factor of 3 and 3 partitions
$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 3 --topic my-replicated-topic
List the existing topics (__consumer_offsets is an internal topic that Kafka created automatically to store consumer offsets):
$ bin/kafka-topics.sh --list --zookeeper localhost:2181
__consumer_offsets
my-replicated-topic
test
4. Describe the new topic:
$ bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
Topic:my-replicated-topic PartitionCount:3 ReplicationFactor:3 Configs: (note: this first line is a summary of the topic; the three lines below describe each partition)
Topic: my-replicated-topic Partition: 0 Leader: 2 Replicas: 2,0,1 Isr: 2,0,1
Topic: my-replicated-topic Partition: 1 Leader: 0 Replicas: 0,1,2 Isr: 0,1,2
Topic: my-replicated-topic Partition: 2 Leader: 1 Replicas: 1,2,0 Isr: 1,2,0
Leader: the broker responsible for all reads and writes for that partition
Replicas: the brokers that hold a copy of that partition
Isr: the "in-sync" replicas, i.e. the subset of Replicas that are currently alive and caught up with the leader
5. Send some messages to the new topic
$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-replicated-topic
>my test message 1
>my test message 2
>
6. Consume the messages
$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic
my test message 2
my test message 1
my test message 3
(Note: with 3 partitions, ordering is only guaranteed within each partition, so the consumer may print messages in a different order than they were produced.)
7. Fault-tolerance test
Find the process ID of the broker running server-1.properties and kill it:
$ ps aux | grep server-1.properties
$ kill -9 9485
The terminal running broker 1 shows that the process has been killed.
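The PID (9485 above) changes on every run; a hypothetical one-liner to look it up and kill it in one step (the bracketed [s] keeps grep from matching itself):
$ kill -9 $(ps aux | grep '[s]erver-1.properties' | awk '{print $2}')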
Describe the topic again: broker 1 is no longer the leader of any partition and has dropped out of the Isr lists:
$ bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
Topic:my-replicated-topic PartitionCount:3 ReplicationFactor:3 Configs:
Topic: my-replicated-topic Partition: 0 Leader: 2 Replicas: 2,0,1 Isr: 2,0
Topic: my-replicated-topic Partition: 1 Leader: 0 Replicas: 0,1,2 Isr: 0,2
Topic: my-replicated-topic Partition: 2 Leader: 2 Replicas: 1,2,0 Isr: 2,0
8. Restart the producer and consumer and keep sending messages; the dead broker has no impact, because the surviving replicas continue to serve the topic.
III. Importing and exporting data with Kafka (Kafka Connect)
1. Create a source file
$ echo -e "foo\nbar">test.txt
2. Start the connectors in standalone mode
$ bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties
The connector config files above use these defaults (see the files themselves for details):
default topic: connect-test
default source file: test.txt in the installation directory
default sink output file: test.sink.txt in the installation directory
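For reference, the relevant lines in the shipped example configs look roughly like this (paraphrased from the 0.11 distribution; check your local copies):
config/connect-file-source.properties:
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=test.txt
topic=connect-test
-------------------------------------------------------------
config/connect-file-sink.properties:
name=local-file-sink
connector.class=FileStreamSink
tasks.max=1
file=test.sink.txt
topics=connect-test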
3. Consume the connect-test topic to see the imported data:
$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic connect-test --from-beginning
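With the default JSON converters in connect-standalone.properties, the consumer should print each line wrapped in a schema envelope, roughly:
{"schema":{"type":"string","optional":false},"payload":"foo"}
{"schema":{"type":"string","optional":false},"payload":"bar"}
The same data is also written to test.sink.txt, and the pipeline keeps running: lines appended to the source file should show up in both the sink file and the consumer:
$ echo "Another line" >> test.txt
$ cat test.sink.txt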