Integrate Kafka client (spring-kafka) to operate Kafka in Spring Boot microservices

Scenario: Integrate the Kafka client spring-kafka-2.8.2 to operate Kafka in Spring Boot microservices. Use the Spring-encapsulated KafkaTemplate to operate the Kafka producer (Producer), and the Spring-encapsulated @KafkaListener annotation to operate the Kafka consumer (Consumer).

Version: JDK 1.8, Spring Boot 2.6.3, kafka_2.12-2.8.0, spring-kafka-2.8.2.

Kafka installation: https://blog.csdn.net/zhangbeizhen18/article/details/129071395

1. Basic concepts

Event: An event records the fact that "something happened" in the world or in your business. It is also called a record or a message in the documentation.

Broker: A Kafka node is a broker; multiple Brokers can form a Kafka cluster.

Topic: Kafka classifies messages by Topic; each message published to Kafka needs to specify a Topic.

Producer: The message producer, i.e. the client that sends messages to the Broker.

Consumer: The message consumer, i.e. the client that reads messages from the Broker.

ConsumerGroup: Each Consumer belongs to a specific ConsumerGroup. A message can be consumed by multiple different ConsumerGroups, but within a single ConsumerGroup only one Consumer consumes the message (see the sketch after this list).

Partition: A Topic can be divided into multiple partitions; the messages inside each partition are ordered.

publish: Publish, i.e. use a Producer to write data to Kafka.

subscribe: Subscribe, i.e. use a Consumer to read data from Kafka.
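
A minimal sketch of the ConsumerGroup semantics (the topic and group names here are hypothetical): two listeners with different groupId values each receive every message published to the topic, while listeners sharing one groupId would split the messages between them.

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class GroupSemanticsDemo {
  // "group-a" and "group-b" are independent ConsumerGroups,
  // so each group receives its own copy of every message.
  @KafkaListener(topics = "demo-topic", groupId = "group-a")
  public void consumeAsGroupA(String message) {
    System.out.println("group-a received: " + message);
  }

  @KafkaListener(topics = "demo-topic", groupId = "group-b")
  public void consumeAsGroupB(String message) {
    System.out.println("group-b received: " + message);
  }
}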

2. Configure Kafka information in microservices

(1) Add dependencies in pom.xml

pom.xml file:

<dependency>
  <groupId>org.springframework.kafka</groupId>
  <artifactId>spring-kafka</artifactId>
  <version>2.8.2</version>
</dependency>

Analysis: the spring-kafka version should generally match the version that the Spring Boot release in use integrates with.

Note: under the hood, the spring-kafka framework uses the native kafka-clients library. In this example the corresponding kafka-clients version is 3.0.0.
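
If you prefer to let Spring Boot choose the matching version, the <version> tag can be omitted so that it is inherited from the Spring Boot dependency management (for Spring Boot 2.6.3 this should resolve to spring-kafka 2.8.2 and kafka-clients 3.0.0, consistent with the versions above):

<dependency>
  <groupId>org.springframework.kafka</groupId>
  <artifactId>spring-kafka</artifactId>
  <!-- version inherited from the Spring Boot 2.6.3 dependency management -->
</dependency>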

(2) Configure Kafka information in application.yml

The configuration details can be found in the official documentation: https://kafka.apache.org/documentation/

(1) application.yml configuration content

spring:
  kafka:
    # IP and port of the Kafka server, format: (ip:port)
    bootstrap-servers: 192.168.19.203:29001
    # Producer
    producer:
      # Number of retries when the client fails to send to the server
      retries: 2
      # When multiple records are sent to the same partition, the producer tries to batch
      # them together into fewer requests. This helps the performance of both the client
      # and the server; this setting controls the default batch size (in bytes)
      batch-size: 16384
      # Total bytes of memory the producer can use to buffer records waiting to be sent to the server
      buffer-memory: 33554432
      # Serializer class used for keys
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      # Serializer class used for values
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
      # The number of acknowledgments the producer requires the leader node to have received
      # before considering a request complete; controls the durability of sent records on the server.
      # acks=0: the producer will not wait for any acknowledgment from the server. The record is
      #   added to the socket buffer immediately and considered sent. There is no guarantee the
      #   server received the record, the retries setting has no effect (the client generally will
      #   not know about any failure), and the offset returned for each record is always -1.
      # acks=1: the leader node writes the record to its local log and responds to the producer
      #   without waiting for full acknowledgment from all follower nodes. If the leader fails
      #   immediately after acknowledging the record but before the followers have replicated it,
      #   the record is lost.
      # acks=all (or acks=-1): the leader node waits for the full set of in-sync replicas before
      #   acknowledging the record; this guarantees the record is not lost as long as at least
      #   one in-sync replica remains alive.
      acks: -1
    consumer:
      # Enable automatic commit of the consumer's offset to Kafka
      enable-auto-commit: true
      # Interval, in milliseconds, at which the consumer's offset is auto-committed
      auto-commit-interval: 1000
      # What to do when there is no initial offset in Kafka or the current offset no longer exists:
      # earliest - automatically reset to the earliest offset
      # latest - automatically reset to the latest offset
      # none - throw an exception
      auto-offset-reset: latest
      # Maximum number of records returned in a single call to poll
      max-poll-records: 500
      # Maximum time (in milliseconds) a fetch request may block
      fetch-max-wait: 500
      # Minimum number of bytes for a fetch response
      fetch-min-size: 1
      # Heartbeat interval (in milliseconds)
      heartbeat-interval: 3000
      # Deserializer class used for keys
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      # Deserializer class used for values
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer

(2) Analysis

The configuration class is provided by Spring Boot's auto-configuration jar: spring-boot-autoconfigure-2.6.3.jar.

Class: org.springframework.boot.autoconfigure.kafka.KafkaProperties.

It takes effect through the @ConfigurationProperties annotation, with the prefix: spring.kafka.

(3) Loading logic

When the Spring Boot microservice starts, Spring Boot reads the configuration in application.yml, matches it against KafkaProperties from spring-boot-autoconfigure-2.6.3.jar, and injects the values into the corresponding properties. After the microservice has started, the KafkaProperties configuration can be obtained from the Spring environment, as the sketch below illustrates.
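
A minimal sketch (a hypothetical component, not part of the example project) that verifies the binding by injecting KafkaProperties and printing a few of the bound values at startup:

import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.stereotype.Component;

@Component
public class KafkaPropertiesInspector implements CommandLineRunner {
  private final KafkaProperties kafkaProperties;

  public KafkaPropertiesInspector(KafkaProperties kafkaProperties) {
    this.kafkaProperties = kafkaProperties;
  }

  @Override
  public void run(String... args) {
    // Values bound from the spring.kafka.* entries in application.yml
    System.out.println("bootstrap-servers: " + kafkaProperties.getBootstrapServers());
    System.out.println("producer acks: " + kafkaProperties.getProducer().getAcks());
    System.out.println("consumer auto-offset-reset: " + kafkaProperties.getConsumer().getAutoOffsetReset());
  }
}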

The spring-kafka framework injects the KafkaProperties configuration into the KafkaTemplate that operates the producer (Producer); a rough sketch of this wiring follows below.

The spring-kafka framework uses KafkaProperties together with @KafkaListener to operate the Kafka consumer (Consumer).
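
A simplified approximation (not the exact framework code) of how the auto-configuration builds a KafkaTemplate from KafkaProperties; declaring such beans yourself would override the auto-configured ones:

import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class ManualKafkaProducerConfig {
  // Builds the producer factory from the same properties that
  // Spring Boot binds from application.yml
  @Bean
  public ProducerFactory<String, String> producerFactory(KafkaProperties kafkaProperties) {
    return new DefaultKafkaProducerFactory<>(kafkaProperties.buildProducerProperties());
  }

  @Bean
  public KafkaTemplate<String, String> kafkaTemplate(ProducerFactory<String, String> producerFactory) {
    return new KafkaTemplate<>(producerFactory);
  }
}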

3. Use KafkaTemplate to operate Kafka producer Producer

After integrating spring-kafka, operating the Kafka Producer becomes very simple: only a KafkaTemplate needs to be injected, and the otherwise cumbersome object creation is handled by the spring-kafka framework.

Fully qualified name of KafkaTemplate: org.springframework.kafka.core.KafkaTemplate.

(1) sample code

import com.alibaba.fastjson.JSONObject; // fastjson is assumed for JSON serialization
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/hub/example/producer")
@Slf4j
public class OperateKafkaProducerController {
  @Autowired
  private KafkaTemplate<String, String> kafkaTemplate;
  private static final String topicName = "hub-topic-city-01";
  @GetMapping("/f01_1")
  public Object f01_1() {
    try {
       //1. Build the business data
       CityDTO cityDTO = CityDTO.buildDto(2023061501L, "Hangzhou", "Hangzhou is a nice city");
       String cityStr = JSONObject.toJSONString(cityDTO);
       log.info("Writing data to Kafka Topic: {}:", topicName);
       log.info(cityStr);
       //2. Write the data to Kafka with KafkaTemplate
       kafkaTemplate.send(topicName, cityStr);
    } catch (Exception e) {
       log.error("Producer failed to write to the Topic.", e);
    }
    return "Write succeeded";
  }
}

(2) Analysis code

Call the send method of KafkaTemplate with the Kafka Topic name and the data to be written; the Producer then writes the data to Kafka's Broker node.
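
In spring-kafka 2.8, send also accepts a message key and returns a ListenableFuture, so the broker acknowledgement can be checked asynchronously. A minimal sketch (the key "city-key" is hypothetical), usable inside f01_1 above:

import org.springframework.kafka.support.SendResult;
import org.springframework.util.concurrent.ListenableFuture;

// Send with an explicit key and register callbacks for the result
ListenableFuture<SendResult<String, String>> future =
    kafkaTemplate.send(topicName, "city-key", cityStr);
future.addCallback(
    result -> log.info("Written to partition {} at offset {}",
        result.getRecordMetadata().partition(),
        result.getRecordMetadata().offset()),
    ex -> log.error("Write to Kafka failed.", ex));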

4. Use @KafkaListener to operate Kafka's consumer Consumer

After integrating spring-kafka, operating the Kafka Consumer is just as easy: simply place the @KafkaListener annotation on a method to listen to and consume messages from a Kafka Topic. The remaining cumbersome work is handled by the spring-kafka framework.

Fully qualified name of the KafkaListener annotation: org.springframework.kafka.annotation.KafkaListener.

(1) sample code

import cn.hutool.json.JSONUtil; // Hutool is assumed for JSON deserialization
import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
@Slf4j
public class OperateKafkaConsumer {
  // static final so the constant can be referenced in the annotation below
  private static final String topicName = "hub-topic-city-01";
  @KafkaListener(
        topics = {topicName},
        groupId = "hub-topic-city-01-group")
  public void consumeMsg(ConsumerRecord<?, ?> record) {
    try {
        //1. Get the consumed data from the ConsumerRecord
        String originalMsg = (String) record.value();
        log.info("Raw data consumed from Kafka: " + originalMsg);
        //2. Convert the consumed data into a DTO object
        CityDTO cityDTO = JSONUtil.toBean(originalMsg, CityDTO.class);
        log.info("Consumed data converted to a DTO object: " + cityDTO.toString());
    } catch (Exception e) {
        log.error("Consumer failed to consume the Topic.", e);
    }
  }
}

(2) Analysis code

Use the @KafkaListener annotation to specify the Kafka Topic name and the consumer group id, and declare a ConsumerRecord parameter on the annotated listening method. The spring-kafka framework automatically fills this ConsumerRecord with the data it listens to, so reading the value out of the ConsumerRecord inside the listening method yields the data consumed from the Kafka node.
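
Besides a ConsumerRecord, the listening method can also receive the payload directly as a String, with record metadata injected via @Header. A minimal sketch (the group id here is hypothetical):

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.handler.annotation.Header;
import org.springframework.stereotype.Component;

@Component
public class OperateKafkaHeaderConsumer {
  // The value arrives as a String via the configured StringDeserializer;
  // topic, partition and offset are taken from the record metadata.
  @KafkaListener(topics = "hub-topic-city-01", groupId = "hub-topic-city-01-group-2")
  public void consumeWithMetadata(String message,
                                  @Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
                                  @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition,
                                  @Header(KafkaHeaders.OFFSET) long offset) {
    System.out.println("topic=" + topic + ", partition=" + partition
        + ", offset=" + offset + ", message=" + message);
  }
}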

5. Test

(1) Use the Postman test to call the producer to write data

Request URL: http://127.0.0.1:18208/hub-208-kafka/hub/example/producer/f01_1

(2) Consumers automatically consume data

Log information:

Writing data to Kafka Topic: hub-topic-city-01:
{"cityDescribe":"Hangzhou is a nice city","cityId":2023061501,"cityName":"Hangzhou","updateTime":"2023-06-17 11:29:58"}
Raw data consumed from Kafka: {"cityDescribe":"Hangzhou is a nice city","cityId":2023061501,"cityName":"Hangzhou","updateTime":"2023-06-17 11:29:58"}
Consumed data converted to a DTO object: CityDTO(cityId=2023061501, cityName=Hangzhou, cityDescribe=Hangzhou is a nice city, updateTime=Sat Jun 17 11:29:58 CST 2023)

6. Auxiliary class

import com.fasterxml.jackson.annotation.JsonFormat;
import lombok.Builder;
import lombok.Data;

import java.util.Date;

@Data
@Builder
public class CityDTO {
  private Long cityId;
  private String cityName;
  private String cityDescribe;
  // Format the date field when the DTO is serialized to JSON
  @JsonFormat(pattern = "yyyy-MM-dd HH:mm:ss")
  private Date updateTime;

  public static CityDTO buildDto(Long cityId, String cityName,
                                 String cityDescribe) {
      return builder().cityId(cityId)
              .cityName(cityName).cityDescribe(cityDescribe)
              .updateTime(new Date()).build();
  }
}

Above, thanks.

June 17, 2023
