Add the following to config/server.properties:
delete.topic.enable=true
auto.create.topics.enable=true
listeners=SASL_SSL://172.19.3.48:9093
advertised.listeners=SASL_SSL://172.19.3.48:9093
# inter.broker.listener.name=SASL_SSL
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
#security.inter.broker.protocol=SASL_PLAINTEXT
security.inter.broker.protocol=SASL_SSL
ssl.endpoint.identification.algorithm=HTTPS
ssl.keystore.location=/opt/tool/server.keystore.jks
ssl.keystore.password=itc123
ssl.key.password=itc123
ssl.truststore.location=/opt/tool/server.truststore.jks
ssl.truststore.password=itc123
ssl.client.auth=required
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.keystore.type=JKS
ssl.truststore.type=JKS
ssl.secure.random.implementation=SHA1PRNG
#Server hostname verification can be disabled. Since Kafka 2.0.x,
#ssl.endpoint.identification.algorithm defaults to HTTPS; to keep verification enabled, set:
#ssl.endpoint.identification.algorithm=HTTPS
ssl.endpoint.identification.algorithm=
# ACL
#SimpleAclAuthorizer was removed as of Kafka 3.0; on older brokers use:
#authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
#For Kafka 3.x (e.g. the 3.1.0 used here) use AclAuthorizer instead:
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
allow.everyone.if.no.acl.found=true
super.users=User:admin
#super.users=User:admin enables the super user admin. Note: do not change this username, otherwise errors occur when running in production mode!
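Before restarting the broker, it is easy to miss one of the security keys above. A minimal sanity check (illustrative; the file path is an assumption, adjust it to your install):

```python
# Verify the security-related keys above are all present in
# server.properties before restarting the broker (illustrative).

REQUIRED_KEYS = {
    "listeners", "advertised.listeners",
    "sasl.enabled.mechanisms", "sasl.mechanism.inter.broker.protocol",
    "security.inter.broker.protocol",
    "ssl.keystore.location", "ssl.keystore.password",
    "ssl.truststore.location", "ssl.truststore.password",
    "authorizer.class.name", "super.users",
}

def missing_keys(text: str) -> set:
    """Return the required keys absent from a properties-file body."""
    present = set()
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            present.add(line.split("=", 1)[0].strip())
    return REQUIRED_KEYS - present

# e.g. missing_keys(open("/opt/kafka_2.12-3.1.0/config/server.properties").read())
```

If the returned set is non-empty, add the listed keys before starting the broker.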
For now, do not use a separately installed ZooKeeper; use the one bundled with Kafka, otherwise problems can occur. Add the following to config/zookeeper.properties:
vim config/zookeeper.properties
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000
Start ZooKeeper first, then run the following command:
/opt/kafka_2.12-3.1.0/bin/kafka-configs.sh --zookeeper localhost --alter --add-config 'SCRAM-SHA-256=[password=itc123],SCRAM-SHA-512=[password=itc123]' --entity-type users --entity-name admin
For example, I entered the following. The prompts are, in order: password, re-enter password, first and last name, organizational unit, organization, city, state/province, two-letter country code, key password, re-enter key password, then yes. The warning afterwards can be ignored. The important point in this step is that "first and last name" must be the hostname, e.g. "localhost"; do not enter an arbitrary string. I once tried another string and kept hitting problems later when generating the client certificate.
keytool -keystore server.keystore.jks -alias localhost -validity 3650 -genkey
##The following two variants embed a SAN for validation; do not use them here:
##keytool -keystore server.keystore.jks -validity 3650 -alias localhost -genkey -ext SAN=IP:{IP_ADDRESS}
##keytool -keystore server.keystore.jks -validity 3650 -alias localhost -genkey -ext SAN=IP:172.19.3.48
Enter keystore password:
Re-enter new password:
What is your first and last name?
[Unknown]: localhost
What is the name of your organizational unit?
[Unknown]: wisentsoft
What is the name of your organization?
[Unknown]: wisentsoft
What is the name of your City or Locality?
[Unknown]: beijing
What is the name of your State or Province?
[Unknown]: beijing
What is the two-letter country code for this unit?
[Unknown]: CN
Is CN=localhost, OU=wisentsoft, O=wisentsoft, L=beijing, ST=beijing, C=CN correct?
[no]: yes
Enter key password for <localhost>
(RETURN if same as keystore password):
Re-enter new password:
Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore server.keystore.jks -destkeystore server.keystore.jks -deststoretype pkcs12".
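The interactive answers above are assembled into the certificate's Distinguished Name (DN) shown in the confirmation prompt. A sketch of that mapping:

```python
# How keytool's prompts map onto the certificate DN (illustrative).
# "first and last name" becomes CN and must be the hostname clients
# will connect with (here "localhost"), as noted above.
fields = {
    "CN": "localhost",    # first and last name -> hostname
    "OU": "wisentsoft",   # organizational unit
    "O": "wisentsoft",    # organization
    "L": "beijing",       # city/locality
    "ST": "beijing",      # state/province
    "C": "CN",            # two-letter country code
}
dn = ", ".join(f"{k}={v}" for k, v in fields.items())
print(dn)  # CN=localhost, OU=wisentsoft, O=wisentsoft, L=beijing, ST=beijing, C=CN
```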
Explanation:
keystore: the keystore holds the certificate files, including the certificate's private key (keep the private key secure).
validity: the certificate's validity period, in days.
keytool -importkeystore -srckeystore server.keystore.jks -destkeystore server.keystore.jks -deststoretype pkcs12
List the certificates in the keystore:
keytool -list -v -keystore server.keystore.jks
openssl req -new -x509 -keyout ca-key -out ca-cert -days 3650
For a Kafka cluster, this step only needs to be run on one node; the output is then distributed to the other nodes (apart from this step, every other command must be run on each node).
The PEM pass phrase is the keystore password entered earlier.
Fill in the fields as follows:
openssl req -new -x509 -keyout ca-key -out ca-cert -days 3650
Generating a 2048 bit RSA private key
..............................................................................+++
......................+++
writing new private key to 'ca-key'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:CN
State or Province Name (full name) []:beijing
Locality Name (eg, city) [Default City]:beijing
Organization Name (eg, company) [Default Company Ltd]:wisentsoft
Organizational Unit Name (eg, section) []:wisentsoft
Common Name (eg, your name or your server's hostname) []:localhost
Email Address []:[email protected]
Add the generated CA to the clients' truststore so that clients can trust this CA:
keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert
keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert
At this point my directory contains the files generated so far.
Sign the certificate generated in step 2.1 with the CA generated in step 2.2. First, export a certificate signing request:
keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file
cert-file: the exported, unsigned server certificate (the signing request)
Then sign it with the CA:
#{validity} is the number of days the signed certificate is valid (3650 recommended); {ca-password} is the same password as before, itc123
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days {validity} -CAcreateserial -passin pass:{ca-password}
#openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 3650 -CAcreateserial -passin pass:
Finally, import the CA certificate and the signed certificate into the keystore:
keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed
[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ cat config/zookeeper_jaas.conf
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_super="admin-secret"
user_kafka="kafka-secret";
};
[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ cat config/kafka_server_jaas.conf
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret"
user_<onsite-username>="<onsite-password>";
};
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="kafka"
password="kafka-secret";
};
vim config/kafka_client_jaas.conf
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="<onsite-username>"
password="<onsite-password>";
};
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="kafka"
password="kafka-secret";
};
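Instead of a separate kafka_client_jaas.conf, clients can pass the same login module inline through the sasl.jaas.config property, which is what the producer.properties below does. A sketch of composing that one-line value (the credentials are the example admin ones from kafka_server_jaas.conf):

```python
# Build the inline sasl.jaas.config value equivalent to a JAAS file
# entry (illustrative; substitute real credentials).
def jaas_config(username: str, password: str) -> str:
    module = "org.apache.kafka.common.security.plain.PlainLoginModule"
    return f'{module} required username="{username}" password="{password}";'

print(jaas_config("admin", "admin-secret"))
# org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret";
```

Note the trailing semicolon: it is part of the JAAS syntax and must be included.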
#stop any ZooKeeper started without JAAS first
bin/zookeeper-server-stop.sh
#replace the path below with your actual KAFKA_HOME
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka_2.12-3.1.0/config/zookeeper_jaas.conf"
bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
#replace /KAFKA_HOME below with your actual install directory
export KAFKA_OPTS="-Djava.security.auth.login.config=/KAFKA_HOME/config/kafka_server_jaas.conf"
#export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka_2.12-3.1.0/config/kafka_server_jaas.conf"
bin/kafka-server-start.sh config/server.properties
bin/kafka-server-start.sh -daemon config/server.properties
vim producer.properties
bootstrap.servers=localhost:9093
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret";
#adjust ssl.truststore.location to your truststore path
ssl.truststore.location=/opt/tool/server.truststore.jks
ssl.truststore.password=itc123
ssl.keystore.password=itc123
#adjust ssl.keystore.location to your keystore path
ssl.keystore.location=/opt/tool/server.keystore.jks
#Server hostname verification can be disabled. Since Kafka 2.0.x,
#ssl.endpoint.identification.algorithm defaults to HTTPS; to keep verification enabled, set:
#ssl.endpoint.identification.algorithm=HTTPS
ssl.endpoint.identification.algorithm=
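Setting ssl.endpoint.identification.algorithm to empty disables only the hostname check, not certificate verification itself. For illustration, the equivalent client-side behavior expressed with Python's stdlib ssl module (an analogy only; Python clients use PEM files rather than JKS):

```python
import ssl

# Client TLS context mirroring the properties above (illustrative):
# the peer certificate is still verified against the CA, but the
# hostname-vs-CN check is skipped, like ssl.endpoint.identification.algorithm=
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False           # <=> ssl.endpoint.identification.algorithm=
ctx.verify_mode = ssl.CERT_REQUIRED  # the certificate itself is still validated
# ctx.load_verify_locations("ca-cert")  # PEM CA from the openssl step above
```

This is why the self-signed setup above works even though the certificate CN is "localhost" while clients connect to 172.19.3.48.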
vim consumer.properties
sasl.mechanism=PLAIN
security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret";
#adjust ssl.truststore.location to your truststore path
ssl.truststore.location=/opt/tool/server.truststore.jks
ssl.truststore.password=itc123
#Server hostname verification can be disabled. Since Kafka 2.0.x,
#ssl.endpoint.identification.algorithm defaults to HTTPS; to keep verification enabled, set:
#ssl.endpoint.identification.algorithm=HTTPS
ssl.endpoint.identification.algorithm=
[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ bin/kafka-console-producer.sh --broker-list localhost:9093 --topic test1 --producer.config config/producer.properties
#bin/kafka-console-producer.sh --broker-list 172.19.3.48:9093 --topic test1 --producer.config config/producer.properties
>sd
>sd
>sd
>sd
>sfd
>sfdf
[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic test1 --from-beginning --consumer.config config/consumer.properties
#bin/kafka-console-consumer.sh --bootstrap-server 172.19.3.48:9093 --topic test1 --from-beginning --consumer.config config/consumer.properties
sdf
sadf
If everything above works, the environment setup is complete.