A Small Example of Integrating Kafka with Spring Boot

Copyright notice: this is an original post by the blogger, released under the CC 4.0 BY-SA license. When reposting, please include a link to the original article and this notice.
Original link: https://blog.csdn.net/Romantic_sir/article/details/102591110

1. Start ZooKeeper and Kafka beforehand, and create a topic (first_kafka)
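The topic can be created with the usual command-line tools, or programmatically. A minimal sketch using Kafka's AdminClient, assuming a single broker reachable at hdp-1:9092 (the same address used in application.yml below) — it needs a running broker to actually work:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "hdp-1:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // 1 partition, replication factor 1 -- enough for a single-broker demo
            NewTopic topic = new NewTopic("first_kafka", 1, (short) 1);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}
```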

2. Project structure (create a Spring Boot project)

To better reflect real-world development, a producer typically pushes data into Kafka after some service interface has finished its business logic, while a consumer continuously listens on the topic and processes the data. So here the producer is exposed as a REST endpoint, and the consumer is placed under the kafka package. Note the @Component annotation on the consumer: without it, the class is not scanned and @KafkaListener has no effect.

3. Implementation code

Dependencies to add manually to pom.xml:

        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>

4. Spring Boot configuration file application.yml

spring:
  kafka:
    bootstrap-servers: hdp-1:9092
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
    consumer:
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      group-id: one
      enable-auto-commit: true
      auto-commit-interval: 1000

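One consumer property worth knowing about: by default, a brand-new consumer group starts reading at the latest offset, so messages sent before the group first connects are skipped. If the demo should also replay earlier messages, the following optional setting (an addition, not part of the original setup) can be added under the consumer block:

```yaml
spring:
  kafka:
    consumer:
      auto-offset-reset: earliest   # new group-ids start from the beginning of the topic
```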
5. Producer:

package com.zpark.kafkatest.test;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/kafka")
public class TestKafkaProducerController {

    @Autowired
    private KafkaTemplate<String,String> kafkaTemplate;
    @RequestMapping("/send/{msg}")
    public String send (@PathVariable String msg) {
        kafkaTemplate.send("first_kafka", msg);
        return "success";
    }
}
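Note that kafkaTemplate.send() is asynchronous, so returning "success" does not by itself guarantee the broker accepted the message. In Spring Kafka 2.x (current when this post was written), send() returns a ListenableFuture, so a callback can be attached. A sketch of the controller method under that assumption:

```java
    @RequestMapping("/send/{msg}")
    public String send(@PathVariable String msg) {
        kafkaTemplate.send("first_kafka", msg).addCallback(
                // success callback: the broker acknowledged the record
                result -> System.out.printf("sent to %s, offset = %d%n",
                        result.getRecordMetadata().topic(),
                        result.getRecordMetadata().offset()),
                // failure callback: e.g. broker unreachable, serialization error
                ex -> System.err.println("send failed: " + ex.getMessage()));
        return "success";
    }
```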

6. Consumer
The consumer listens on the topic and its method runs whenever a message arrives; there is no need for a while (true) polling loop.

package com.zpark.kafkatest.test;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class TestConsumer {
    @KafkaListener(topics = "first_kafka")
    public void listen (ConsumerRecord<?, ?> record) throws Exception {
        System.out.printf("topic = %s, offset = %d, value = %s \n", record.topic(), record.offset(), record.value());
    }

}
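The configuration above uses auto-commit (enable-auto-commit: true with a 1000 ms interval), which means an offset may be committed before the message is fully processed. If at-least-once processing matters, offsets can be acknowledged manually instead. A sketch, assuming enable-auto-commit is changed to false and spring.kafka.listener.ack-mode is set to manual:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

@Component
public class ManualAckConsumer {
    @KafkaListener(topics = "first_kafka")
    public void listen(ConsumerRecord<?, ?> record, Acknowledgment ack) {
        System.out.printf("offset = %d, value = %s%n", record.offset(), record.value());
        ack.acknowledge();  // commit the offset only after processing succeeded
    }
}
```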

7. Main application class

package com.zpark.kafkatest;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class KafkatestApplication {

    public static void main(String[] args) {
        SpringApplication.run(KafkatestApplication.class, args);
    }

}

8. Testing

Run the project and open localhost:8080/kafka/send/hello

Console output: topic = first_kafka, offset = 19, value = hello

To show that the consumer does not exit after handling a single message, call the endpoint again:
localhost:8080/kafka/send/kafka

topic = first_kafka, offset = 20, value = kafka 

As the output shows, the consumer keeps polling the topic for new data continuously.
 
