Big Data Technologies: Installing and Deploying Kafka

1. Install Kafka

(1) Download manually from the Apache Kafka downloads page (https://kafka.apache.org/downloads).

(2) Or download it from the command line, straight onto the server:

Download the package into the /opt/install staging directory (the shell prompts below run from there).


Download command:
    wget http://mirrors.hust.edu.cn/apache/kafka/2.8.0/kafka_2.12-2.8.0.tgz
Downloading manually and moving the archive into the same directory works just as well.

2. Extract and Configure

(1) Extract and rename

[root@hadoop02 install]# tar -xzf kafka_2.12-2.8.0.tgz
[root@hadoop02 install]# mv kafka_2.12-2.8.0 ../soft/kafka212

(2) Edit the Kafka configuration file

[root@hadoop02 install]# vim /opt/soft/kafka212/config/server.properties


broker.id=0
advertised.listeners=PLAINTEXT://192.168.78.143:9092
log.dirs=/opt/soft/kafka212/data            message log directory
log.retention.hours=1680                    retention period, in hours
zookeeper.connect=192.168.78.143:2181       ZooKeeper connection address
delete.topic.enable=true                    allow topic deletion (disabled by default)

(3) Set myid

mkdir -p /opt/soft/kafka212/data
echo "0" > /opt/soft/kafka212/data/myid

(4) Configure environment variables

[root@hadoop02 ~]# vim /etc/profile

#KAFKA_HOME
export KAFKA_HOME=/opt/soft/kafka212
export PATH=$PATH:$KAFKA_HOME/bin

# save and quit
[root@hadoop02 ~]# source /etc/profile
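A quick sanity check that the variables took effect in the current shell (kafka-topics.sh --version prints the installed Kafka version):

[root@hadoop02 ~]# echo $KAFKA_HOME
/opt/soft/kafka212
[root@hadoop02 ~]# kafka-topics.sh --version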

3. Start the Services

(1) Start ZooKeeper

zkServer.sh start
zkServer.sh status

(2) Start Kafka

# Starting Kafka in the foreground ties up the terminal; use nohup or the -daemon flag to run it in the background
# The three ways to start it:

kafka-server-start.sh /opt/soft/kafka212/config/server.properties 
kafka-server-start.sh -daemon /opt/soft/kafka212/config/server.properties
nohup kafka-server-start.sh /opt/soft/kafka212/config/server.properties &

# After a successful start, a Kafka process appears in the jps output
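Two companion commands: jps to confirm the broker process is running, and the bundled stop script for a clean shutdown:

jps | grep -i kafka
kafka-server-stop.sh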

4. Common Command-Line Tools

(1) Topics: kafka-topics.sh

Connect to the cluster:
(1) --bootstrap-server hadoop101:9092,hadoop102:9092

Target a specific topic:
(2) --topic first

Create, delete, alter, list, and describe topics:
(3) --create
(4) --delete
(5) --alter
(6) --list
(7) --describe

Specify the number of partitions:
(8) --partitions

Specify the replication factor:
(9) --replication-factor
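Putting the flags together, a sketch of a typical round trip (assumes the two brokers above are up, so a replication factor of 2 is possible):

kafka-topics.sh --bootstrap-server hadoop101:9092,hadoop102:9092 --create --topic first --partitions 3 --replication-factor 2
kafka-topics.sh --bootstrap-server hadoop101:9092,hadoop102:9092 --list
kafka-topics.sh --bootstrap-server hadoop101:9092,hadoop102:9092 --describe --topic first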

(2) Producer: kafka-console-producer.sh

Connect to the cluster:
(1) --bootstrap-server hadoop101:9092,hadoop102:9092

Produce to a specific topic:
(2) --topic first
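Combined, the tool opens a > prompt and sends each typed line to the topic as one message:

kafka-console-producer.sh --bootstrap-server hadoop101:9092,hadoop102:9092 --topic first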

(3) Consumer: kafka-console-consumer.sh

Connect to the cluster:
(1) --bootstrap-server hadoop101:9092,hadoop102:9092

Consume from a specific topic:
(2) --topic first
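Combined; the standard --from-beginning flag replays the topic from the earliest offset instead of printing only new messages:

kafka-console-consumer.sh --bootstrap-server hadoop101:9092,hadoop102:9092 --topic first --from-beginning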

5. Using Kafka from Java

(1) Add the dependencies to pom.xml

    
<dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka-clients</artifactId>
      <version>2.8.0</version>
    </dependency>
    <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka_2.12</artifactId>
      <version>2.8.0</version>
    </dependency>
    

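Strictly speaking, kafka-clients alone is enough for the producer and consumer code below; the kafka_2.12 artifact is the server itself and is only needed when embedding a broker (for tests, say).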
(2) Producer

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import java.util.Properties;
import java.util.Scanner;

public class MyProducer {
    public static void main(String[] args) {
        Properties properties = new Properties();

        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.153.139:9092");
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        /*
         * acks=0:  no broker acknowledgement at all; no guarantee the data reached the broker
         * acks=1:  only the partition leader must confirm; data can be lost if the leader fails
         * acks=-1: the leader and every ISR replica must confirm; slowest but safest, may duplicate
         */
        properties.put(ProducerConfig.ACKS_CONFIG, "0");

        KafkaProducer<String, String> producer = new KafkaProducer<>(properties);

        // Read lines from stdin and send each one to the "bigdata" topic; type "exit" to stop
        Scanner scanner = new Scanner(System.in);
        while (scanner.hasNextLine()) {
            String line = scanner.nextLine();
            if ("exit".equals(line)) {
                break;
            }
            producer.send(new ProducerRecord<>("bigdata", line));
        }
        producer.close();
    }
}
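To check the producer end to end, run MyProducer, type a few lines, and watch them arrive with the console consumer from section 4 (the broker address and the bigdata topic match the code above):

kafka-console-consumer.sh --bootstrap-server 192.168.153.139:9092 --topic bigdata --from-beginning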

(3) Consumer

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class MyConsumer {
    public static void main(String[] args) {

        Properties pro = new Properties();

        pro.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.153.139:9092");
        pro.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        pro.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        // Offset commit mode: false = commit manually, true = commit automatically
        pro.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        pro.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        pro.put(ConsumerConfig.GROUP_ID_CONFIG, "GROUP1");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(pro);
        // After creating the consumer, subscribe to the "bigdata" topic
        consumer.subscribe(Collections.singleton("bigdata"));

        // Poll in a loop, print every record, then commit offsets by hand
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.topic() + "\t" + record.partition()
                        + "\t" + record.offset() + "\t" + record.value());
            }
            consumer.commitSync();
        }
    }
}
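Because enable.auto.commit is false, the commitSync() call after each poll is what advances the group's offsets; drop it and the same records are re-read after a restart. Start MyConsumer first, then produce to the bigdata topic with MyProducer, and each record's topic, partition, offset, and value print to the console.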
