【Kafka】Configuring Kafka SASL_SSL Authentication with JKS Keystores


1. Overview

This post walks through configuring Kafka (2.11-1.1.0) for SASL_SSL authentication backed by JKS keystores: first with the PLAIN mechanism, then unified on SCRAM-SHA-512.

2. Configure the server (server.properties)

# Allow topic deletion and auto-creation
delete.topic.enable=true
auto.create.topics.enable=true

# Expose a SASL_SSL listener on 9093 and a PLAINTEXT listener on 9092
listeners=SASL_SSL://localhost:9093,PLAINTEXT://localhost:9092
advertised.listeners=SASL_SSL://localhost:9093,PLAINTEXT://localhost:9092
# inter.broker.listener.name=SASL_SSL

# SASL mechanism for clients and for inter-broker traffic
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
#security.inter.broker.protocol=SASL_PLAINTEXT
security.inter.broker.protocol=SASL_SSL
ssl.endpoint.identification.algorithm=HTTPS

ssl.keystore.location=/Users/lcc/soft/kafka/kafka_2.11-1.1.0_author_scram/certificates/kafka.keystore.jks
ssl.keystore.password=ke123456
ssl.key.password=ke123456
ssl.truststore.location=/Users/lcc/soft/kafka/kafka_2.11-1.1.0_author_scram/certificates/kafka.truststore.jks
ssl.truststore.password=ke123456

ssl.client.auth=required
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.keystore.type=JKS
ssl.truststore.type=JKS
ssl.secure.random.implementation=SHA1PRNG




# ACL
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
super.users=User:admin
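
With SimpleAclAuthorizer enabled, per-user ACLs can later be granted through kafka-acls.sh. A sketch (not strictly needed here, since allow.everyone.if.no.acl.found=true; test1 is the topic used later in this post):

bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:admin --operation All --topic test1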

Create the SCRAM credentials:

bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin
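
To confirm the credentials landed in ZooKeeper, describe the user entity; the output should list the SCRAM-SHA-256 and SCRAM-SHA-512 entries for admin:

bin/kafka-configs.sh --zookeeper localhost:2181 --describe --entity-type users --entity-name admin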


3. Configure ZooKeeper

For now, do not use a separately installed ZooKeeper; use the one bundled with Kafka, otherwise you will run into problems.
Kafka's bundled ZooKeeper:

[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ pwd
/Users/lcc/soft/kafka/kafka_2.11-1.1.0_author_scram
[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ vi config/zookeeper.properties

authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000
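
For context, here is a minimal zookeeper.properties sketch with those three lines appended; dataDir, clientPort and maxClientCnxns are the stock defaults shipped with the Kafka distribution:

dataDir=/tmp/zookeeper
clientPort=2181
maxClientCnxns=0

authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000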

4. Configure the consumer (consumer.properties)

[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ cat config/consumer.properties

sasl.mechanism=PLAIN
security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret";

ssl.truststore.location=/Users/lcc/soft/kafka/kafka_2.11-1.1.0_author_scram/certificates/kafka.truststore.jks
ssl.truststore.password=ke123456


5. Generate the SSL certificates

Create the certificates under $KAFKA_HOME/config:



# 1) Generate the broker key pair in the server keystore, with SANs for every name the broker is reachable by
keytool -keystore server.keystore.jks -alias <alias> -validity 365 -genkey -keyalg RSA -ext SAN=DNS:<hostname>,DNS:<fqdn>,DNS:localhost,IP:<IP-ADDRESS>,IP:127.0.0.1

# 2) Create a CA (self-signed certificate plus private key)
openssl req -new -x509 -keyout ca-key -out ca-cert -days 365 -subj '/CN=<fqdn>'   -extensions san   -config <(echo '[req]'; echo 'distinguished_name=req'; echo '[san]'; echo 'subjectAltName = DNS:localhost, IP:127.0.0.1, DNS:<hostname>, IP:<ip-address>')

# 3) Import the CA certificate into the server and client truststores
keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert

keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert

# 4) Export a certificate signing request for the broker key (the alias must match the one used in step 1)
keytool -keystore server.keystore.jks -alias <fqdn> -certreq -file cert-file -ext SAN=DNS:<hostname>,DNS:localhost,IP:<ip-address>,IP:127.0.0.1

# 5) Sign the request with the CA
openssl x509 -req  -extfile <(printf "subjectAltName = DNS:localhost, IP:127.0.0.1, DNS:<fqdn>, IP:<ip-address>") -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 365 -CAcreateserial -passin pass:<password>

# 6) Import the CA certificate and the signed broker certificate back into the server keystore
keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert

keytool -keystore server.keystore.jks -alias <alias> -import -file cert-signed
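
A quick sanity check is to list the finished keystore; both the CARoot entry and the signed broker key should show up:

keytool -list -v -keystore server.keystore.jks -storepass <password>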

Alternatively, the scripts from 【Kafka】Kafka如何开启SSL 控制台消费与生产 代码消费与生产 can be used to generate these certificates directly.

6. zookeeper_jaas.conf

[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ cat config/zookeeper_jaas.conf
Server {
        org.apache.zookeeper.server.auth.DigestLoginModule required
        user_super="admin-secret"
        user_kafka="kafka-secret";
 };

7. kafka_server_jaas.conf

[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ cat config/kafka_server_jaas.conf
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret";
 };

Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="kafka"
    password="kafka-secret";
 };

8. Start ZooKeeper

Point the KAFKA_OPTS environment variable at zookeeper_jaas.conf before starting ZooKeeper.

$ export KAFKA_OPTS="-Djava.security.auth.login.config=/KAFKA_HOME/config/zookeeper_jaas.conf"
$ bin/zookeeper-server-start.sh -daemon config/zookeeper.properties

In my case:

[lcc@lcc  ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ export KAFKA_OPTS="-Djava.security.auth.login.config=/Users/lcc/soft/kafka/kafka_2.11-1.1.0_author_scram/config/zookeeper_jaas.conf"
[lcc@lcc  ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ bin/zkServer.sh restart conf/zoo.cfg
ZooKeeper JMX enabled by default
Using config: conf/zoo.cfg
ZooKeeper JMX enabled by default
Using config: conf/zoo.cfg
Stopping zookeeper ... STOPPED
ZooKeeper JMX enabled by default
Using config: conf/zoo.cfg
Starting zookeeper ... STARTED

Remember, it is the ZooKeeper bundled with Kafka that should be started here.
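
To confirm ZooKeeper is up before moving on, the ruok four-letter command should answer imok (assuming nc is available):

echo ruok | nc localhost 2181   # expect: imok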

9. Start Kafka

Point the KAFKA_OPTS environment variable at kafka_server_jaas.conf before starting the Kafka server.

$ export KAFKA_OPTS="-Djava.security.auth.login.config=/KAFKA_HOME/config/kafka_server_jaas.conf"
bin/kafka-server-start.sh -daemon config/server.properties

My steps:

[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ export KAFKA_OPTS="-Djava.security.auth.login.config=/Users/lcc/soft/kafka/kafka_2.11-1.1.0_author_scram/config/kafka_server_jaas.conf"
[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ bin/kafka-server-start.sh config/server.properties

This then produced a flood of errors: 【Kafka】kafka This may indicate that authentication failed due to invalid credentials

The cause was leftover state from the other authentication mechanisms I had set up earlier; once that was cleaned out, the errors stopped.

Running it again then failed with: KeeperErrorCode = AuthFailed for /consumers

Eventually startup completed successfully.
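
Once the broker is up, the SASL_SSL listener can be probed directly; the TLS handshake should complete and print the broker's certificate chain:

openssl s_client -connect localhost:9093 </dev/null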

10. Producer configuration (producer.properties)

[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ cat config/producer.properties

bootstrap.servers=localhost:9093
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret";

ssl.truststore.location=/Users/lcc/soft/kafka/kafka_2.11-1.1.0_author_scram/certificates/kafka.truststore.jks
ssl.truststore.password=ke123456
ssl.keystore.password=ke123456
ssl.keystore.location=/Users/lcc/soft/kafka/kafka_2.11-1.1.0_author_scram/certificates/kafka.keystore.jks


11. Consumer configuration

[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ cat config/consumer.properties

sasl.mechanism=PLAIN
security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret";

ssl.truststore.location=/Users/lcc/soft/kafka/kafka_2.11-1.1.0_author_scram/certificates/kafka.truststore.jks
ssl.truststore.password=ke123456


12. Client configuration (kafka_client_jaas.conf)

[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ cat config/kafka_client_jaas.conf
KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin-secret";
};
Client {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  username="kafka"
  password="kafka-secret";
};

[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$
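
If you want the console clients to pick up this JAAS file (rather than the inline sasl.jaas.config already set in the properties files), point KAFKA_OPTS at it before launching, as section 15 does:

export KAFKA_OPTS="-Djava.security.auth.login.config=/Users/lcc/soft/kafka/kafka_2.11-1.1.0_author_scram/config/kafka_client_jaas.conf"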

13. Test

[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ bin/kafka-console-producer.sh --broker-list localhost:9093 --topic test1 --producer.config config/producer.properties
>sd
[2020-08-08 10:57:32,845] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 1 : {test1=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
>sd
>sd
>sd
>sfd
>sfdf

[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic test1 --from-beginning --consumer.config config/consumer.properties
sdf
sadf

Both the producer and the consumer work correctly. (The one-off LEADER_NOT_AVAILABLE warning above is just the topic being auto-created on first use, since auto.create.topics.enable=true.)
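
If you prefer not to rely on auto-creation, the topic can be created explicitly beforehand:

bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic test1 --partitions 1 --replication-factor 1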

14. Code test

import java.util.Properties;
import java.util.Random;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.config.SslConfigs;
import org.junit.Test;

// Test fixtures referenced below (assumed here; they are not shown in the original post)
private final String[] strs = {"a", "b", "c", "d", "e", "f", "g", "h", "i", "j"};
private final Random r = new Random();

@Test
public void sshTest21() throws InterruptedException, ExecutionException {
    // Path to the client JAAS config file
    System.setProperty("java.security.auth.login.config", "/Users/lcc/soft/kafka/kafka_2.11-1.1.0_author_scram/config/kafka_client_jaas.conf");

    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9093");
    props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
    props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/Users/lcc/soft/kafka/kafka_2.11-1.1.0_author_scram/certificates/kafka.truststore.jks");
    props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "ke123456");
    props.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, "/Users/lcc/soft/kafka/kafka_2.11-1.1.0_author_scram/certificates/kafka.keystore.jks");
    props.put(SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG, "ke123456");
    props.put(SslConfigs.SSL_KEY_PASSWORD_CONFIG, "ke123456");
    props.put(SaslConfigs.SASL_JAAS_CONFIG, "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"admin-secret\";");
//    props.put(SaslConfigs.SASL_JAAS_CONFIG, "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"admin\" password=\"admin-secret\";");
//    props.put(SaslConfigs.SASL_JAAS_CONFIG, "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"alice\" password=\"alice\";");
//    props.put(SaslConfigs.SASL_MECHANISM, "SCRAM-SHA-256");
    props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
    props.put(ProducerConfig.ACKS_CONFIG, "all");
    props.put(ProducerConfig.RETRIES_CONFIG, 0);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
    props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
    props.put(ProducerConfig.LINGER_MS_CONFIG, 1);
    props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);

    Producer<String, String> producer = new KafkaProducer<>(props);
    int count = 0;
    System.out.println("Starting to send data, count=" + count);
    while (true) {
        String aa = strs[r.nextInt(10)];  // pick a random payload
        System.out.println(aa);
        String topic = "test1";
        System.out.println(topic);
        // send synchronously so any auth/SSL failure surfaces immediately
        RecordMetadata bb = producer.send(new ProducerRecord<>(topic, aa)).get();
        System.out.println(bb);
        Thread.sleep(1000);
        count++;
        System.out.println("Send complete");
    }
}

I had initially configured props.put(SaslConfigs.SASL_MECHANISM, "SCRAM-SHA-256") here and no data could be written; once the mechanism was unified with the broker's as props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");, it worked.

The code test passes.
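
For completeness, here is a minimal consumer-side sketch under the same SASL_SSL/PLAIN settings (my own sketch, not from the original post; the group id console-test is arbitrary):

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.config.SslConfigs;
import org.junit.Test;

@Test
public void saslSslConsumerSketch() {
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9093");
    props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
    props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
    props.put(SaslConfigs.SASL_JAAS_CONFIG, "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"admin-secret\";");
    props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/Users/lcc/soft/kafka/kafka_2.11-1.1.0_author_scram/certificates/kafka.truststore.jks");
    props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "ke123456");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "console-test");      // arbitrary group id
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); // read from the beginning
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");

    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
        consumer.subscribe(Collections.singletonList("test1"));
        while (true) {
            // poll(long) is the pre-2.0 API matching this Kafka 1.1 setup
            ConsumerRecords<String, String> records = consumer.poll(1000L);
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.offset() + ": " + record.value());
            }
        }
    }
}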

15. Unifying everything on SCRAM-SHA-512

15.1 Server configuration


#listeners=SASL_SSL://localhost:9093,PLAINTEXT://localhost:9092
#advertised.listeners=SASL_SSL://localhost:9093,PLAINTEXT://localhost:9092
listeners=SASL_SSL://localhost:9092
advertised.listeners=SASL_SSL://localhost:9092
# inter.broker.listener.name=SASL_SSL

# Changed to SCRAM-SHA-512 here; these two must match
sasl.enabled.mechanisms=SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
# security.inter.broker.protocol=SASL_PLAINTEXT
security.inter.broker.protocol=SASL_SSL
ssl.endpoint.identification.algorithm=HTTPS

ssl.keystore.location=/Users/lcc/soft/kafka/kafka_2.11-1.1.0_author_scram/certificates/kafka.keystore.jks
ssl.keystore.password=ke123456
ssl.key.password=ke123456
ssl.truststore.location=/Users/lcc/soft/kafka/kafka_2.11-1.1.0_author_scram/certificates/kafka.truststore.jks
ssl.truststore.password=ke123456

ssl.client.auth=required
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.keystore.type=JKS
ssl.truststore.type=JKS
ssl.secure.random.implementation=SHA1PRNG




# ACL
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
super.users=User:admin


The plaintext login module below must be switched to ScramLoginModule, otherwise Kafka keeps failing with a NullPointerException.

[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ vi  /Users/lcc/soft/kafka/kafka_2.11-1.1.0_author_scram/config/kafka_server_jaas.conf
KafkaServer {
    org.apache.kafka.common.security.scram.ScramLoginModule  required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret";
 };

Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="kafka"
    password="kafka-secret";
 };
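
Note that the admin SCRAM credentials must already exist in ZooKeeper (they were created back in section 2); if not, re-run:

bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin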

15.2 Start Kafka

[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ export KAFKA_OPTS="-Djava.security.auth.login.config=/Users/lcc/soft/kafka/kafka_2.11-1.1.0_author_scram/config/kafka_server_jaas.conf"
[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ bin/kafka-server-start.sh config/server.properties

Startup completes without errors.

15.3 Producer

[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ cat config/producer.properties
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule  required username="admin" password="admin-secret";

ssl.truststore.location=/Users/lcc/soft/kafka/kafka_2.11-1.1.0_author_scram/certificates/kafka.truststore.jks
ssl.truststore.password=ke123456
ssl.keystore.password=ke123456
ssl.keystore.location=/Users/lcc/soft/kafka/kafka_2.11-1.1.0_author_scram/certificates/kafka.keystore.jks


[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ cat /Users/lcc/soft/kafka/kafka_2.11-1.1.0_author_scram/config/kafka_client_jaas.conf
KafkaClient {
  org.apache.kafka.common.security.scram.ScramLoginModule  required
  username="admin"
  password="admin-secret";
};
Client {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  username="kafka"
  password="kafka-secret";
};

[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ export KAFKA_OPTS="-Djava.security.auth.login.config=/Users/lcc/soft/kafka/kafka_2.11-1.1.0_author_scram/config/kafka_client_jaas.conf"
[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test1 --producer.config config/producer.properties
>assd
>

The producer starts and sends data without errors.

15.4 Consumer

[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ cat /Users/lcc/soft/kafka/kafka_2.11-1.1.0_author_scram/config/kafka_client_jaas.conf
KafkaClient {
  org.apache.kafka.common.security.scram.ScramLoginModule  required
  username="admin"
  password="admin-secret";
};
Client {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  username="kafka"
  password="kafka-secret";
};


[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ export KAFKA_OPTS="-Djava.security.auth.login.config=/Users/lcc/soft/kafka/kafka_2.11-1.1.0_author_scram/config/kafka_client_jaas.conf"



[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ cat  config/consumer.properties

sasl.mechanism=SCRAM-SHA-512
security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule  required username="admin" password="admin-secret";

ssl.truststore.location=/Users/lcc/soft/kafka/kafka_2.11-1.1.0_author_scram/certificates/kafka.truststore.jks
ssl.truststore.password=ke123456


[lcc@lcc ~/soft/kafka/kafka_2.11-1.1.0_author_scram]$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test1 --from-beginning --consumer.config config/consumer.properties
ssd
assd

Messages are consumed normally.

15.5 Code test

/**
 * Local Mac test. Imports and the strs/r test fixtures are the same as in section 14.
 *
 * @throws InterruptedException
 * @throws ExecutionException
 */
@Test
public void sshTest21() throws InterruptedException, ExecutionException {
//    System.setProperty("java.security.auth.login.config", "/Users/lcc/soft/kafka/kafka_2.11-1.1.0_author_scram/config/kafka_client_jaas.conf"); // path to the JAAS config file

    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
    props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/Users/lcc/soft/kafka/kafka_2.11-1.1.0_author_scram/certificates/kafka.truststore.jks");
    props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "ke123456");
    props.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, "/Users/lcc/soft/kafka/kafka_2.11-1.1.0_author_scram/certificates/kafka.keystore.jks");
    props.put(SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG, "ke123456");
    props.put(SslConfigs.SSL_KEY_PASSWORD_CONFIG, "ke123456");
//    props.put(SaslConfigs.SASL_JAAS_CONFIG, "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"admin-secret\";");
    props.put(SaslConfigs.SASL_JAAS_CONFIG, "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"admin\" password=\"admin-secret\";");
//    props.put(SaslConfigs.SASL_JAAS_CONFIG, "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"alice\" password=\"alice\";");
    props.put(SaslConfigs.SASL_MECHANISM, "SCRAM-SHA-512");
//    props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
    props.put(ProducerConfig.ACKS_CONFIG, "all");
    props.put(ProducerConfig.RETRIES_CONFIG, 0);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
    props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
    props.put(ProducerConfig.LINGER_MS_CONFIG, 1);
    props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);

    Producer<String, String> producer = new KafkaProducer<>(props);
    int count = 0;
    System.out.println("Starting to send data, count=" + count);
    while (true) {
        String aa = strs[r.nextInt(10)];  // pick a random payload
        System.out.println(aa);
        String topic = "test1";
        System.out.println(topic);
        RecordMetadata bb = producer.send(new ProducerRecord<>(topic, aa)).get();
        System.out.println(bb);
        Thread.sleep(1000);
        count++;
        System.out.println("Send complete");
    }
}

The write test succeeds.

Reference: https://docs.vmware.com/en/VMware-Smart-Assurance/10.1.0/sa-ui-installation-config-guide-10.1.0/GUID-DF659094-60D3-4E1B-8D63-3DE3ED8B0EDF.html
