Tracking whether the Kafka configuration parameter max.poll.records takes effect (default: max.poll.records=500)

The Kafka consumer client uses the open-source Spring-kafka library; the jars are pulled in via the following Maven dependencies:

    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka_2.10</artifactId>
        <version>0.10.2.1</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.kafka</groupId>
        <artifactId>spring-kafka</artifactId>
        <version>1.2.1.RELEASE</version>
    </dependency>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>0.10.2.1</version>
    </dependency>


Configure the Kafka parameter max.poll.records to 10.
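For reference, the setting can be applied through the consumer properties passed to the Spring-kafka consumer factory. A minimal sketch, assuming a broker at localhost:9092 and a placeholder group id (both illustrative, not taken from the original setup):

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

    public class ConsumerFactoryConfig {

        public DefaultKafkaConsumerFactory<String, String> consumerFactory() {
            Map<String, Object> props = new HashMap<>();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "max-poll-test");           // placeholder group id
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            // Cap each poll() at 10 records instead of the default 500.
            props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 10);
            return new DefaultKafkaConsumerFactory<>(props);
        }
    }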

Set a breakpoint and step through the following code in the run() method inside org.springframework.kafka.listener.KafkaMessageListenerContainer:

    long lastReceive = System.currentTimeMillis();
    long lastAlertAt = lastReceive;
    while (isRunning()) {
        try {
            if (!this.autoCommit) {
                processCommits();
            }
            processSeeks();
            if (this.logger.isTraceEnabled()) {
                this.logger.trace("Polling (paused=" + this.paused + ")...");
            }
            ConsumerRecords<K, V> records = this.consumer.poll(this.containerProperties.getPollTimeout()); // the fetch from the broker happens on this line
            if (records != null && this.logger.isDebugEnabled()) {
                this.logger.debug("Received: " + records.count() + " records");
            }
            if (records != null && records.count() > 0) {
                if (this.containerProperties.getIdleEventInterval() != null) {
                    lastReceive = System.currentTimeMillis();
                }
                // if the container is set to auto-commit, then execute in the
                // same thread
                // otherwise send to the buffering queue
                if (this.autoCommit) {
                    invokeListener(records);
                }
                else {
                    if (sendToListener(records)) {
                        if (this.assignedPartitions != null) {
                            // avoid group management rebalance due to a slow
                            // consumer
                            this.consumer.pause(this.assignedPartitions);
                            this.paused = true;
                            this.unsent = records;
                        }
                    }
                }
            }
            else {
                if (this.containerProperties.getIdleEventInterval() != null) {
                    long now = System.currentTimeMillis();
                    if (now > lastReceive + this.containerProperties.getIdleEventInterval()
                            && now > lastAlertAt + this.containerProperties.getIdleEventInterval()) {
                        publishIdleContainerEvent(now - lastReceive);
                        lastAlertAt = now;
                        if (this.theListener instanceof ConsumerSeekAware) {
                            seekPartitions(getAssignedPartitions(), true);
                        }
                    }
                }
            }
            this.unsent = checkPause(this.unsent);
        }
        catch (WakeupException e) {
            this.unsent = checkPause(this.unsent);
        }
        catch (Exception e) {
            if (this.containerProperties.getGenericErrorHandler() != null) {
                this.containerProperties.getGenericErrorHandler().handle(e, null);
            }
            else {
                this.logger.error("Container exception", e);
            }
        }
    }

The consumer fetches data from the Kafka broker on the line ConsumerRecords<K, V> records = this.consumer.poll(this.containerProperties.getPollTimeout());. With max.poll.records set to 10, records.count() observed at the breakpoint is at most 10 per poll, which shows the setting has taken effect.
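For a cross-check outside the container, the same behaviour can be observed with a bare KafkaConsumer. A minimal sketch, assuming a topic named test-topic on a local broker (topic, broker address, and group id are placeholders): with max.poll.records=10, each poll() should return at most 10 records.

    import java.util.Collections;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PollSizeCheck {

        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "poll-size-check");         // placeholder group id
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 10);

            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Collections.singletonList("test-topic")); // placeholder topic
            try {
                for (int i = 0; i < 5; i++) {
                    ConsumerRecords<String, String> records = consumer.poll(1000);
                    // With max.poll.records=10 in effect, count() never exceeds 10.
                    System.out.println("Received: " + records.count() + " records");
                }
            } finally {
                consumer.close();
            }
        }
    }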

Note: Kafka 0.9 has no max.poll.records parameter; it was only introduced in the 0.10 line, so configuring it against a 0.9 client has no effect. In the 0.10.2.1 client used here it defaults to 500.

The default of 500 records per poll is defined in the ConsumerConfig class of kafka-clients:

    .define(MAX_POLL_RECORDS_CONFIG,
            Type.INT,
            500,
            atLeast(1),
            Importance.MEDIUM,
            MAX_POLL_RECORDS_DOC)

Reposted from my.oschina.net/lsl1991/blog/1629897