Kafka Usage and Internals: Study Notes

Table of Contents

Kafka usage

Kafka event listeners

Kafka internals

Topology

Consumer structure

Spring Boot support for Kafka

Kafka usage

Add the spring-kafka dependency in pom.xml:

<dependency>
  <groupId>org.springframework.kafka</groupId>
  <artifactId>spring-kafka</artifactId>
  <version>1.2.2.RELEASE</version>
</dependency>

Configuration in application.yml:

spring:
  application:
    name: xxx
  profiles:
    active: local
  kafka:
    consumer:
      group-id: ${spring.application.name}
      enable-auto-commit: false
      auto-offset-reset: latest
    producer:
      client-id: ${spring.application.name}
      retries: 3
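Note that the snippet above omits the broker address and the serializers. A typical setup also needs something like the following (the host name here is an illustrative assumption; adjust it to your environment):

```yaml
spring:
  kafka:
    bootstrap-servers: localhost:9092   # illustrative; point at your brokers
    consumer:
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
```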

Producer:

@Autowired
KafkaTemplate<String, String> kafkaTemplate;

private void pushKafka(Long aaa, String bbb) throws Exception {
    try {
        Map<String, Object> paramMap = new HashMap<>();
        paramMap.put("aaa", aaa);
        paramMap.put("bbb", bbb);
        // Key: a random UUID with dashes removed; value: the map serialized as JSON
        kafkaTemplate.send("topicName",
                UUID.randomUUID().toString().replace("-", ""),
                JSONObject.toJSONString(paramMap));
    } catch (Exception e) {
        // Preserve the cause instead of throwing an exception with an empty message
        throw new Exception("failed to send Kafka message", e);
    }
}

With this, the content of the map is sent out as a message.
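The record key above is a random UUID with the dashes stripped. Since the key determines which partition a record lands in, a random key spreads messages roughly evenly across partitions, at the cost of per-key ordering. A minimal sketch of the key format:

```java
import java.util.UUID;

class KeyGen {
    // Same key format as pushKafka above: a random UUID with its dashes
    // removed, leaving 32 hex characters.
    static String messageKey() {
        return UUID.randomUUID().toString().replace("-", "");
    }
}
```

If you need all messages of one entity to arrive in order, use a stable business key (e.g. the entity id) instead of a random UUID.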

Consumer:


Create a Listener to consume messages from a given topic:

@Component
public class xxxListener {
    protected Logger logger = LogManager.getLogger(this.getClass());

    @KafkaListener(topics = {"topicName"}, clientIdPrefix = "xxxListener")
    public void capitalHandler(@Payload String value, @Header(KafkaHeaders.OFFSET) long offset,
                               @Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) String key,
                               @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition,
                               @Header(KafkaHeaders.RECEIVED_TOPIC) String topic) {
        logger.info("xxxListener kafkaConsume: key:{} value:{} topic:{} partition:{} offset:{}", key, value, topic, partition, offset);
        try {
            JSONObject jsonObject = JSON.parseObject(value);
            // Read back the fields with the same keys and types the producer used
            Long aaa = jsonObject.getLong("aaa");
            String bbb = jsonObject.getString("bbb");
                
        } catch (Exception e) {
            logger.error("listener error,value is {}", value, e);
        }
    }

} 

With this, the sent message is received and business processing can be applied.

Kafka event listeners

To be notified of produce events, implement the ProducerListener interface:

public class LoggingProducerListener<K,V> implements ProducerListener<K, V> {
    protected Logger logger = LogManager.getLogger(getClass());

    @Override public void onSuccess(String topic, Integer partition, K key, V value,
                                    RecordMetadata recordMetadata) {
        logger.info("kafkaSendSuccess: topic:{} p:{} key:{} value:{}",topic, recordMetadata.partition(), key, value );
    }

    @Override
    public void onError(String topic, Integer partition, K key, V value, Exception exception) {
        String tmp = String.format("kafkaSendError: topic:%s p:%s key:%s value:%s",topic,partition, key, value);
        logger.error(tmp, exception);
    }

    @Override public boolean isInterestedInSuccess() {
        return true;
    }
}

This lets you hook into the success or failure of each Kafka send and run the corresponding business logic.

Kafka internals

Topology

A topic consists of one or more partitions; ordering is guaranteed only within a single partition.
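Which partition a record goes to is decided deterministically from its key. The real producer uses a murmur2 hash of the serialized key; the sketch below uses String.hashCode as a simplified stand-in to show the idea, not Kafka's actual partitioner:

```java
class PartitionSketch {
    // Simplified stand-in for Kafka's default partitioner: hash the key and
    // take it modulo the partition count. The same key always maps to the
    // same partition, which is what gives per-key ordering.
    static int partitionFor(String key, int numPartitions) {
        // Mask off the sign bit so the result is non-negative
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }
}
```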

Consumer structure

Messages of a topic are delivered to every subscribed Consumer Group, but within a group each partition is consumed by exactly one consumer; a single consumer may consume multiple partitions.
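The one-consumer-per-partition rule is enforced by the group's partition assignor. The sketch below mimics the spirit of range assignment for a single topic (a simplified illustration, not the actual RangeAssignor implementation):

```java
import java.util.*;

class RangeAssignSketch {
    // Divide the partitions of one topic evenly among the consumers of a
    // group; when the count does not divide evenly, the earlier consumers
    // each take one extra partition.
    static Map<String, List<Integer>> assign(List<String> consumers, int numPartitions) {
        Map<String, List<Integer>> result = new LinkedHashMap<>();
        int per = numPartitions / consumers.size();
        int extra = numPartitions % consumers.size();
        int next = 0;
        for (int i = 0; i < consumers.size(); i++) {
            int count = per + (i < extra ? 1 : 0);
            List<Integer> parts = new ArrayList<>();
            for (int j = 0; j < count; j++) parts.add(next++);
            result.put(consumers.get(i), parts);
        }
        return result;
    }
}
```

Note that if a group has more consumers than the topic has partitions, the surplus consumers end up with an empty assignment and sit idle.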

Spring Boot对Kafka的支持

Spring Boot provides the KafkaAutoConfiguration class; we only need to set the required properties in the yml file and Spring takes care of the rest:

@Configuration
@ConditionalOnClass(KafkaTemplate.class)
@EnableConfigurationProperties(KafkaProperties.class)
@Import(KafkaAnnotationDrivenConfiguration.class)
public class KafkaAutoConfiguration {

   private final KafkaProperties properties;

   private final RecordMessageConverter messageConverter;

   public KafkaAutoConfiguration(KafkaProperties properties,
         ObjectProvider<RecordMessageConverter> messageConverter) {
      this.properties = properties;
      this.messageConverter = messageConverter.getIfUnique();
   }

   @Bean
   @ConditionalOnMissingBean(KafkaTemplate.class)
   public KafkaTemplate<?, ?> kafkaTemplate(
         ProducerFactory<Object, Object> kafkaProducerFactory,
         ProducerListener<Object, Object> kafkaProducerListener) {
      KafkaTemplate<Object, Object> kafkaTemplate = new KafkaTemplate<>(
            kafkaProducerFactory);
      if (this.messageConverter != null) {
         kafkaTemplate.setMessageConverter(this.messageConverter);
      }
      kafkaTemplate.setProducerListener(kafkaProducerListener);
      kafkaTemplate.setDefaultTopic(this.properties.getTemplate().getDefaultTopic());
      return kafkaTemplate;
   }

   @Bean
   @ConditionalOnMissingBean(ProducerListener.class)
   public ProducerListener<Object, Object> kafkaProducerListener() {
      return new LoggingProducerListener<>();
   }

   @Bean
   @ConditionalOnMissingBean(ConsumerFactory.class)
   public ConsumerFactory<?, ?> kafkaConsumerFactory() {
      return new DefaultKafkaConsumerFactory<>(
            this.properties.buildConsumerProperties());
   }

   @Bean
   @ConditionalOnMissingBean(ProducerFactory.class)
   public ProducerFactory<?, ?> kafkaProducerFactory() {
      DefaultKafkaProducerFactory<?, ?> factory = new DefaultKafkaProducerFactory<>(
            this.properties.buildProducerProperties());
      String transactionIdPrefix = this.properties.getProducer()
            .getTransactionIdPrefix();
      if (transactionIdPrefix != null) {
         factory.setTransactionIdPrefix(transactionIdPrefix);
      }
      return factory;
   }

   @Bean
   @ConditionalOnProperty(name = "spring.kafka.producer.transaction-id-prefix")
   @ConditionalOnMissingBean
   public KafkaTransactionManager<?, ?> kafkaTransactionManager(
         ProducerFactory<?, ?> producerFactory) {
      return new KafkaTransactionManager<>(producerFactory);
   }

   @Bean
   @ConditionalOnProperty(name = "spring.kafka.jaas.enabled")
   @ConditionalOnMissingBean
   public KafkaJaasLoginModuleInitializer kafkaJaasInitializer() throws IOException {
      KafkaJaasLoginModuleInitializer jaas = new KafkaJaasLoginModuleInitializer();
      Jaas jaasProperties = this.properties.getJaas();
      if (jaasProperties.getControlFlag() != null) {
         jaas.setControlFlag(jaasProperties.getControlFlag());
      }
      if (jaasProperties.getLoginModule() != null) {
         jaas.setLoginModule(jaasProperties.getLoginModule());
      }
      jaas.setOptions(jaasProperties.getOptions());
      return jaas;
   }

   @Bean
   @ConditionalOnMissingBean
   public KafkaAdmin kafkaAdmin() {
      KafkaAdmin kafkaAdmin = new KafkaAdmin(this.properties.buildAdminProperties());
      kafkaAdmin.setFatalIfBrokerNotAvailable(this.properties.getAdmin().isFailFast());
      return kafkaAdmin;
   }

}

Note some of the annotations in this auto-configuration class:

@ConditionalOnMissingBean(KafkaTemplate.class)

The bean is built only when no bean of that type exists yet; if we initialize one in our own configuration class, the auto-configuration class will not create it again.
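The effect of @ConditionalOnMissingBean can be pictured as a registry whose default factory runs only when the user has not already supplied a bean under that name. This is a simplified analogy, not Spring's actual condition evaluation:

```java
import java.util.*;
import java.util.function.Supplier;

class BeanRegistrySketch {
    private final Map<String, Object> beans = new HashMap<>();

    // Like @ConditionalOnMissingBean: the default factory is invoked only
    // when nothing has been registered under this name yet.
    void registerIfMissing(String name, Supplier<Object> defaultFactory) {
        beans.computeIfAbsent(name, k -> defaultFactory.get());
    }

    Object get(String name) {
        return beans.get(name);
    }
}
```

A user-defined KafkaTemplate bean registered first therefore wins over the auto-configured default.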


Reprinted from blog.csdn.net/lbh199466/article/details/87690521