Kafka

一 Kafka official website

https://kafka.apache.org/

二 Installing Kafka

1 Install the ZooKeeper cluster

2 Install Kafka

1. Extract the archive

[root@lihl01 software]# tar -zxvf kafka_2.11-1.1.1.tgz -C /opt/apps/
[root@lihl01 apps]# mv kafka_2.11-1.1.1/ kafka-2.11/

2. Edit $KAFKA_HOME/config/server.properties

# The id of the broker. This must be set to a unique integer for each broker.

broker.id=1 ## id of this Kafka instance; must be an integer and unique within the cluster
log.dirs=/opt/apps/kafka-2.11/data/kafka ## directory where data produced to Kafka is stored; create it manually
zookeeper.connect=lihl01,lihl02,lihl03/kafka ## ZooKeeper connection string; /kafka is the chroot path under which Kafka stores its data

3. Configure environment variables

environment

export JAVA_HOME=/opt/apps/jdk1.8.0_45
export HADOOP_HOME=/opt/apps/hadoop-2.6.0-cdh5.7.6
export SCALA_HOME=/opt/apps/scala-2.11.8
export SPARK_HOME=/opt/apps/spark-2.2.0
export HIVE_HOME=/opt/apps/hive-1.1.0-cdh5.7.6
export ZOOKEEPER_HOME=/opt/apps/zookeeper-3.4.5-cdh5.7.6
export KAFKA_HOME=/opt/apps/kafka-2.11
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$SCALA_HOME/bin:$HIVE_HOME/bin
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin:$ZOOKEEPER_HOME/bin:$KAFKA_HOME/bin

4. Distribute to the other nodes and change each broker.id

[root@lihl01 apps]# scp -r kafka-2.11/ lihl02:/opt/apps/
[root@lihl01 apps]# scp -r kafka-2.11/ lihl03:/opt/apps/

5. Start ZooKeeper

[root@lihl01 apps]# zkServer.sh start
[root@lihl02 apps]# zkServer.sh start
[root@lihl03 apps]# zkServer.sh start

6. Start Kafka

[root@lihl01 apps]# kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties

[root@lihl02 apps]# kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties

[root@lihl03 apps]# kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties

三 Basic Kafka Operations (Command Line)

1 Create a topic

./kafka-topics.sh \
--create \
--topic lihltest \
--if-not-exists \
--partitions 3 \
--replication-factor 2 \
--zookeeper lihl01,lihl02,lihl03/kafka

Created topic "lihltest".

Tip:
The replication factor must be less than or equal to the number of brokers.

2 List all topics

./kafka-topics.sh \
--list \
--zookeeper lihl01,lihl02,lihl03/kafka

lihltest

3 Describe a topic

./kafka-topics.sh \
--describe \
--topic lihltest \
--zookeeper lihl01,lihl02,lihl03/kafka

Topic:lihltest PartitionCount:3 ReplicationFactor:2 Configs:
Topic: lihltest Partition: 0 Leader: 3 Replicas: 3,1 Isr: 3,1
Topic: lihltest Partition: 1 Leader: 1 Replicas: 1,2 Isr: 1,2
Topic: lihltest Partition: 2 Leader: 2 Replicas: 2,3 Isr: 2,3

Partition: the partition number within the topic
Replicas : the broker.ids of the broker instances that hold a replica of this partition
Leader   : the leader among this partition's replicas; it handles all read and write requests for the partition
ISR      : the broker.ids of this partition's replicas that are currently in sync (alive)
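
The same information can be read programmatically with the Java AdminClient covered later in section 四.3. A minimal sketch (the class name and usage are my own, not from these notes):

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;

import java.util.Arrays;
import java.util.Properties;
import java.util.concurrent.ExecutionException;

public class Demo_Describe_Topic {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "lihl01:9092,lihl02:9092,lihl03:9092");
        AdminClient client = AdminClient.create(props);
        // describeTopics returns the same data as kafka-topics.sh --describe
        TopicDescription description =
                client.describeTopics(Arrays.asList("lihltest")).all().get().get("lihltest");
        for (TopicPartitionInfo p : description.partitions()) {
            System.out.printf("partition=%d leader=%d replicas=%s isr=%s%n",
                    p.partition(), p.leader().id(), p.replicas(), p.isr());
        }
        client.close();
    }
}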

4 Alter a topic

./kafka-topics.sh \
--alter \
--topic lihltest \
--partitions 2 \
--zookeeper lihl01,lihl02,lihl03/kafka

WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected
Adding partitions succeeded!

Tip:
Altering a topic mainly means changing its number of partitions, and the partition count can only be increased, never decreased (see the sketch below).
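
The same increase can also be done with the Java AdminClient's createPartitions call. A minimal sketch (class name and the target count of 4 are my own, not from these notes):

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewPartitions;

import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;

public class Demo_Increase_Partitions {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "lihl01:9092,lihl02:9092,lihl03:9092");
        AdminClient client = AdminClient.create(props);
        // Grow lihltest to 4 partitions; asking for fewer than the current count fails,
        // which is why partitions can only ever be increased.
        client.createPartitions(Collections.singletonMap("lihltest", NewPartitions.increaseTo(4)))
              .all().get();
        client.close();
    }
}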

5 Delete a topic

./kafka-topics.sh \
--delete \
--topic flink_test1 \
--zookeeper lihl01,lihl02,lihl03/kafka

6 Test producing and consuming

6.1 Producer

./kafka-console-producer.sh \
--topic lihltest \
--broker-list lihl01:9092,lihl02:9092,lihl03:9092

6.2 Consumer

kafka-console-consumer.sh \
--topic flink_test1 \
--bootstrap-server lihl01:9092,lihl02:9092,lihl03:9092

6.3 Consumer group

kafka-console-consumer.sh \
--topic lihltest \
--bootstrap-server lihl01:9092,lihl02:9092,lihl03:9092 \
--group lihltest \
--offset latest \   # where to start consuming (a message offset): a number, latest or earliest
--partition 0       # the partition this consumer reads from

Tip:
This consumer belongs to the lihltest consumer group and always starts from the latest offset of partition 0 (see the Java sketch below).
The parallelism of a topic is determined by its partitions and its consumers: within one consumer group, at most as many consumers as there are partitions can consume at the same time.
With the old consumer, offsets were stored in ZooKeeper by default; the new (bootstrap-server based) consumer stores them in the internal __consumer_offsets topic. Either way, to read messages from a given position inside a topic you first need to know the offset.
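
For reference, a minimal Java sketch (my own, not from these notes) of what "--partition 0 --offset latest" does: assign partition 0 explicitly and seek to its end before polling.

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

import java.util.Collections;
import java.util.Properties;

public class Demo_Assign_Seek {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "lihl01:9092,lihl02:9092,lihl03:9092");
        props.put("group.id", "lihltest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        TopicPartition tp = new TopicPartition("lihltest", 0);
        consumer.assign(Collections.singletonList(tp));      // read only partition 0
        consumer.seekToEnd(Collections.singletonList(tp));   // start from the latest offset
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records)
                System.out.printf("partition = %d, offset = %d, value = %s%n",
                        record.partition(), record.offset(), record.value());
        }
    }
}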

四 The Kafka Java API

1 Producing data

1.1 Initial version

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class Demo1_Kafka_Producer {
    public static void main(String[] args) {
        //0. Declare the configuration for connecting to Kafka
        Properties props = new Properties();
        props.put("bootstrap.servers", "lihl01:9092,lihl02:9092,lihl03:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        //1. Create the producer
        Producer<String, String> producer = new KafkaProducer<>(props);

        //2. Create the record you want to send
        ProducerRecord<String, String> record = new ProducerRecord<>("lihltest", "hello");
        producer.send(record);

        //3. Release the resources
        producer.close();
    }
}

1.2 Improved version

package cn.lihl.spark.kafka.day1;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.io.IOException;
import java.util.Properties;

public class Demo1_Kafka_Producer {
    public static void main(String[] args) throws IOException {
        //0. Load the Kafka connection configuration from producer.properties on the classpath
        Properties props = new Properties();
        props.load(Demo1_Kafka_Producer.class.getClassLoader().getResourceAsStream("producer.properties"));

        //1. Create the producer
        Producer<String, String> producer = new KafkaProducer<>(props);

        //2. Create the record you want to send
        ProducerRecord<String, String> record = new ProducerRecord<>("hzbigdata2002", "hello");
        producer.send(record);

        //3. Release the resources
        producer.close();
    }
}

1.3 Common producer.properties parameters

bootstrap.servers=lihl01:9092,lihl02:9092,lihl03:9092
key.serializer=org.apache.kafka.common.serialization.StringSerializer # serializer for record keys
value.serializer=org.apache.kafka.common.serialization.StringSerializer # serializer for record values
acks=[0|-1|1|all] ## message acknowledgement mode
0: no acknowledgement; the producer just sends the message
-1|all: the leader must write the data to its local log and also wait for all in-sync followers to acknowledge
1: only the leader needs to acknowledge; followers replicate from the leader afterwards
batch.size=1024 # buffer size per partition for records that have not yet been sent

Even if the batch buffer is not full, i.e. there is still unused space, the request is still sent out. To reduce the number of requests, set linger.ms to a value greater than 0:

linger.ms=10 ## wait up to 10 ms for more records before sending a request, even if the batch is not full
buffer.memory=10240 # total buffer memory available to the whole producer
retries=0 # number of retries after a failed send
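
The same parameters can also be set in code through the ProducerConfig constants. A minimal sketch (class name and the values used are illustrative, not from these notes):

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class Demo_Tuned_Producer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "lihl01:9092,lihl02:9092,lihl03:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ACKS_CONFIG, "1");                // leader-only acknowledgement
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);        // per-partition batch buffer, bytes
        props.put(ProducerConfig.LINGER_MS_CONFIG, 10);            // wait up to 10 ms to fill a batch
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432L); // total producer buffer, bytes
        props.put(ProducerConfig.RETRIES_CONFIG, 0);               // no retries on failure

        Producer<String, String> producer = new KafkaProducer<>(props);
        producer.send(new ProducerRecord<>("lihltest", "key", "hello"));
        producer.close();
    }
}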

1.4 producer.properties

############################# Producer Basics #############################

bootstrap.servers=lihl01:9092,lihl02:9092,lihl03:9092
compression.type=gzip
max.block.ms=3000
linger.ms=1
batch.size=16384
buffer.memory=33554432
acks=1
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=org.apache.kafka.common.serialization.StringSerializer

2 Consuming data

2.1 consumer.properties

bootstrap.servers=lihl01:9092,lihl02:9092,lihl03:9092
group.id=hzbigdata2002
auto.offset.reset=earliest
enable.auto.commit=true
auto.commit.interval.ms=1000
key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
value.deserializer=org.apache.kafka.common.serialization.StringDeserializer

2.2 Consumer code

package cn.lihl.spark.kafka.day2;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.io.IOException;
import java.util.Arrays;
import java.util.Properties;

public class Demo1_Kafka_Consumer {
    public static void main(String[] args) throws IOException {
        Properties props = new Properties();
        props.load(Demo1_Kafka_Consumer.class.getClassLoader().getResourceAsStream("consumer.properties"));

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("lihltest"));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records)
                System.out.printf("offset = %d, key = %s, value = %s, partition = %d%n", record.offset(), record.key(), record.value(), record.partition());
        }
        }
    }
}

3 Managing topics

3.1 Create a topic

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Arrays;
import java.util.Properties;

public class Demo2_Kafka_Admin {
    public static void main(String[] args) {
        //1. Create the configuration
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "lihl01:9092,lihl02:9092,lihl03:9092");
        //2. Create the AdminClient
        AdminClient client = AdminClient.create(props);
        //3. Create a topic with 3 partitions and a replication factor of 1
        client.createTopics(Arrays.asList(new NewTopic("chengzhiyuan", 3, (short)1)));
        //4. Release the resources
        client.close();
    }
}

3.2 List all topics

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ListTopicsResult;
import org.apache.kafka.common.KafkaFuture;

import java.util.Properties;
import java.util.Set;
import java.util.concurrent.ExecutionException;

public class Demo3_Kafka_Admin_list {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        //1. Create the configuration
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "lihl01:9092,lihl02:9092,lihl03:9092");
        //2. Create the AdminClient
        AdminClient client = AdminClient.create(props);
        //3. List the topics
        ListTopicsResult listTopicsResult = client.listTopics();
        //4. Get all topic names as a future
        KafkaFuture<Set<String>> names = listTopicsResult.names();
        //5. Block until the name strings are available
        Set<String> topicNames = names.get();
        //6. Iterate over the names
        for (String topicName : topicNames) {
            System.out.println(topicName);
        }
        client.close();
    }
}

4 Custom partitioning

4.1 Default partitioning strategy

Each ProducerRecord consists of a topic name, an optional partition number, and an optional key/value pair.
A record is assigned to a partition in one of three ways (see the sketch after this list):
1. If a partition is specified, the record goes straight to that partition.
2. If no partition is specified but a key is, the partition is chosen by hashing the key.
3. If neither a partition nor a key is specified, partitions are chosen in a round-robin fashion.
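
A minimal sketch (my own example, not from these notes) of the three ProducerRecord constructors that trigger these three cases:

import org.apache.kafka.clients.producer.ProducerRecord;

public class Demo_Partitioning_Cases {
    public static void main(String[] args) {
        // 1. Explicit partition: always goes to partition 1 of lihltest
        ProducerRecord<String, String> withPartition =
                new ProducerRecord<>("lihltest", 1, "k1", "v1");
        // 2. Key but no partition: partition chosen by hashing the key modulo the partition count
        ProducerRecord<String, String> withKey =
                new ProducerRecord<>("lihltest", "k1", "v1");
        // 3. Neither partition nor key: round-robin over the available partitions
        ProducerRecord<String, String> plain =
                new ProducerRecord<>("lihltest", "v1");

        System.out.println(withPartition.partition() + " / " + withKey.key() + " / " + plain.value());
    }
}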

4.2 A random partitioner

4.2.1 Code

package cn.lihl.spark.kafka.day2;

import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

import java.util.Map;
import java.util.Random;

public class Demo4_Kafka_RandomPartitioner implements Partitioner{

    private Random random = new Random();

    @Override
    public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
        //1. Get the number of partitions for this topic, then pick one at random
        int partitionCount = cluster.partitionCountForTopic(topic);
        int partition = random.nextInt(partitionCount);
        return partition;
    }

    @Override
    public void close() {

    }

    @Override
    public void configure(Map<String, ?> configs) {

    }
}

4.2.2 Update the configuration file: producer.properties

partitioner.class=cn.lihl.spark.kafka.day2.Demo4_Kafka_RandomPartitioner
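
The same setting can be applied in code (a minimal sketch, my own, not from these notes) via ProducerConfig.PARTITIONER_CLASS_CONFIG, which maps to the partitioner.class key above:

import org.apache.kafka.clients.producer.ProducerConfig;

import java.util.Properties;

public class Demo_Register_Partitioner {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "lihl01:9092,lihl02:9092,lihl03:9092");
        // Same effect as the partitioner.class line in producer.properties
        props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG,
                  "cn.lihl.spark.kafka.day2.Demo4_Kafka_RandomPartitioner");
        System.out.println(props.getProperty(ProducerConfig.PARTITIONER_CLASS_CONFIG));
    }
}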

五 Integrating Flume with Kafka

1 Install Flume

[root@lihl01 software]# tar -zxvf apache-flume-1.9.0-bin.tar.gz -C /opt/apps/
[root@lihl01 apps]# mv apache-flume-1.9.0-bin/ flume-1.9.0

environment

export JAVA_HOME=/opt/apps/jdk1.8.0_45
export HADOOP_HOME=/opt/apps/hadoop-2.6.0-cdh5.7.6
export SCALA_HOME=/opt/apps/scala-2.11.8
export SPARK_HOME=/opt/apps/spark-2.2.0
export HIVE_HOME=/opt/apps/hive-1.1.0-cdh5.7.6
export ZOOKEEPER_HOME=/opt/apps/zookeeper-3.4.5-cdh5.7.6
export KAFKA_HOME=/opt/apps/kafka-2.11
export FLUME_HOME=/opt/apps/flume-1.9.0
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$SCALA_HOME/bin:$HIVE_HOME/bin
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin:$ZOOKEEPER_HOME/bin:$KAFKA_HOME/bin:$FLUME_HOME/bin

2 Create a new topic

kafka-topics.sh --create \
--topic flume-kafka \
--zookeeper lihl01,lihl02,lihl03/kafka \
--partitions 3 \
--replication-factor 1

Created topic "flume-kafka".

3 Configure Flume: netcat_kafka.conf

a1.sources = r1
a1.channels = c1
a1.sinks = k1

a1.sources.r1.type = netcat
a1.sources.r1.bind = 192.168.49.111
a1.sources.r1.port = 6666

a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 10000
a1.channels.c1.byteCapacityBufferPercentage = 20
a1.channels.c1.byteCapacity = 800000

a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = flume-kafka
a1.sinks.k1.kafka.bootstrap.servers = lihl01:9092,lihl02:9092,lihl03:9092
a1.sinks.k1.kafka.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1
a1.sinks.k1.kafka.producer.compression.type = snappy

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

4 Start and test

1 start-yarn.sh
2 zkServer.sh start
3 kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties

4 Start the Flume agent (foreground or in the background)

  1. Foreground start
    flume-ng agent -n a1 -c /opt/apps/flume-1.9.0/conf/ -f /home/netcat_kafka.conf -Dflume.root.logger=INFO,console

  2. Background start
    nohup flume-ng agent -n a1 -c /opt/apps/flume-1.9.0/conf/ -f /home/netcat_kafka.conf > /dev/null 2>&1 &

5 Start a console consumer
kafka-console-consumer.sh \
--topic flume-kafka \
--bootstrap-server lihl01:9092,lihl02:9092,lihl03:9092

6 Install a telnet client
[root@lihl01 home]# yum -y install telnet

7 Connect with telnet
telnet lihl01 6666

Lines typed into the telnet session flow through the netcat source and the Kafka sink into the flume-kafka topic, where the console consumer from step 5 prints them.
