Configuring Flume to monitor a directory and write to Kafka in a Kerberos environment

------------------ Flume Kerberos authentication configuration ------------------

1. In Flume's conf directory (/etc/flume/conf), add a file named flume_kafka_jaas.conf; its contents can be modeled on kafka_jaas.conf:

[root@aicloud1 conf]# more flume_kafka_jaas.conf

KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    serviceName="kafka"
    keyTab="/etc/security/keytabs/kafka.service.keytab"
    principal="kafka/[email protected]";
};
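It can help to confirm that the principal named in the JAAS file actually exists in the keytab before Flume tries to use it:

klist -kt /etc/security/keytabs/kafka.service.keytab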

2. In the Ambari UI, under the Flume configuration, add the following to Advanced flume-env:

export JAVA_OPTS="-Xms100m -Xmx2000m -Dcom.sun.management.jmxremote "

export JAVA_OPTS="$JAVA_OPTS -Djava.security.krb5.conf=/etc/krb5.conf"

export JAVA_OPTS="$JAVA_OPTS -Djava.security.auth.login.config=/etc/flume/conf/ flume_kafka_jaas.conf"


3. Check the access permissions on the Kafka keytab and make it readable by other users (optional):

chmod 444 /etc/security/keytabs/kafka.service.keytab
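Flume typically runs as its own service user rather than root, which is why the keytab needs to be world-readable here. Verify the result:

ls -l /etc/security/keytabs/kafka.service.keytab

The expected mode is -r--r--r--.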


4. From the Kafka command line, create a topic:

./kafka-topics.sh --create --zookeeper host1:2181 --topic test001 --replication-factor 1 --partitions 5
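Before wiring up Flume, it can be worth confirming both that the topic exists and that a Kerberos-authenticated client can reach the broker on the SASL port. The --security-protocol flag below matches the console-consumer invocation in step 6 and applies to the older Kafka versions this setup targets; newer releases expect a --producer.config properties file instead:

./kafka-topics.sh --describe --zookeeper host1:2181 --topic test001

./kafka-console-producer.sh --broker-list host1:6668 --security-protocol SASL_PLAINTEXT --topic test001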


5. Configure the flume.conf file to use a spooldir source to monitor a directory, read its files, and write them to Kafka. Note that since the Kafka broker listener is configured as SASL_PLAINTEXT://localhost:6668, the Flume configuration must use that same port.

# Flume agent config

# Define spooling source

a1.sources.s1.type = spooldir

a1.sources.s1.channels = c1

a1.sources.s1.spoolDir = /tmp/flume/

a1.sources.s1.batchSize = 1000

a1.sources.s1.consumeOrder = youngest
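# The regex_extractor interceptor below copies the regex's first capture group
# into an event header named "key"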

a1.sources.s1.interceptors = i1

a1.sources.s1.interceptors.i1.type = regex_extractor

a1.sources.s1.interceptors.i1.regex = ([A-Za-z0-9])*\\|[A-Za-z0-9]*\\|[A-Za-z0-9]*

a1.sources.s1.interceptors.i1.serializers = e1

a1.sources.s1.interceptors.i1.serializers.e1.name = key


# Define a kafka channel

a1.channels.c1.type = org.apache.flume.channel.kafka.KafkaChannel

a1.channels.c1.kafka.bootstrap.servers = host1:6668

a1.channels.c1.kafka.topic = test001
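# With parseAsFlumeEvent=false, the raw event body is written to Kafka
# (no Flume Avro wrapper), so plain Kafka consumers can read the data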

a1.channels.c1.parseAsFlumeEvent = false

a1.channels.c1.kafka.producer.security.protocol = SASL_PLAINTEXT

a1.channels.c1.kafka.consumer.security.protocol = SASL_PLAINTEXT

a1.channels = c1

a1.sources = s1

# No sink is defined: the Kafka channel itself delivers events to Kafka
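With the configuration in place, the agent can also be started by hand for testing (Ambari normally manages this). A typical invocation, assuming the config above is saved as /etc/flume/conf/flume.conf:

flume-ng agent -n a1 -c /etc/flume/conf -f /etc/flume/conf/flume.conf -Dflume.root.logger=INFO,console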


6. Place a text file in the /tmp/flume directory; its data can then be seen from the Kafka command line with the console consumer:
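For example, a line with three pipe-separated alphanumeric fields matches the interceptor regex from step 5 (the file name and contents here are just illustrative):

echo "abc123|def456|ghi789" > /tmp/flume/test1.txt

The spooldir source renames each file with a .COMPLETED suffix once it has been ingested.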

./kafka-console-consumer.sh --topic test001 --security-protocol SASL_PLAINTEXT --bootstrap-server host1:6668

----------- The following command prints a warning (it goes through the deprecated ZooKeeper-based consumer), but it can still consume normally:

./kafka-console-consumer.sh --topic test001 --security-protocol SASL_PLAINTEXT --zookeeper host1:2181


Reference: https://community.hortonworks.com/articles/86079/flume-with-secured-kafka-channel.html
