Tracking whether the Kafka setting max.poll.records takes effect (default max.poll.records=500)


The Kafka consumer client uses the open-source Spring-kafka library; add the following Maven dependencies:

 

[xml]

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.10</artifactId>
    <version>0.10.2.1</version>
</dependency>
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>1.2.1.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.2.1</version>
</dependency>


Set the Kafka parameter max.poll.records to 10.
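The original post does not show its configuration snippet. As a minimal sketch, assuming a Java-config spring-kafka setup (the broker address and group id below are placeholders, not values from the original post), setting max.poll.records to 10 looks roughly like this:

[java]

// Minimal sketch, assuming Java-based Spring configuration; broker address and
// group id are placeholders, not values from the original post.
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@Configuration
@EnableKafka
public class KafkaConsumerConfig {

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");              // placeholder
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 10);                // the parameter under test
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }
}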

 

 

 

Set a breakpoint on the following code inside the run() method of org.springframework.kafka.listener.KafkaMessageListenerContainer:

 

[java]

long lastReceive = System.currentTimeMillis();
long lastAlertAt = lastReceive;
while (isRunning()) {
    try {
        if (!this.autoCommit) {
            processCommits();
        }
        processSeeks();
        if (this.logger.isTraceEnabled()) {
            this.logger.trace("Polling (paused=" + this.paused + ")...");
        }
        ConsumerRecords<K, V> records = this.consumer.poll(this.containerProperties.getPollTimeout()); // records are pulled from the broker on this line
        if (records != null && this.logger.isDebugEnabled()) {
            this.logger.debug("Received: " + records.count() + " records");
        }
        if (records != null && records.count() > 0) {
            if (this.containerProperties.getIdleEventInterval() != null) {
                lastReceive = System.currentTimeMillis();
            }
            // if the container is set to auto-commit, then execute in the
            // same thread
            // otherwise send to the buffering queue
            if (this.autoCommit) {
                invokeListener(records);
            }
            else {
                if (sendToListener(records)) {
                    if (this.assignedPartitions != null) {
                        // avoid group management rebalance due to a slow
                        // consumer
                        this.consumer.pause(this.assignedPartitions);
                        this.paused = true;
                        this.unsent = records;
                    }
                }
            }
        }
        else {
            if (this.containerProperties.getIdleEventInterval() != null) {
                long now = System.currentTimeMillis();
                if (now > lastReceive + this.containerProperties.getIdleEventInterval()
                        && now > lastAlertAt + this.containerProperties.getIdleEventInterval()) {
                    publishIdleContainerEvent(now - lastReceive);
                    lastAlertAt = now;
                    if (this.theListener instanceof ConsumerSeekAware) {
                        seekPartitions(getAssignedPartitions(), true);
                    }
                }
            }
        }
        this.unsent = checkPause(this.unsent);
    }
    catch (WakeupException e) {
        this.unsent = checkPause(this.unsent);
    }
    catch (Exception e) {
        if (this.containerProperties.getGenericErrorHandler() != null) {
            this.containerProperties.getGenericErrorHandler().handle(e, null);
        }
        else {
            this.logger.error("Container exception", e);
        }
    }
}

 

 

The consumer pulls data from the Kafka broker at the line ConsumerRecords<K, V> records = this.consumer.poll(this.containerProperties.getPollTimeout()); so this is where the effective batch size can be inspected.
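As a quick cross-check without a debugger (a minimal sketch, not code from the original post; the broker address, group id, and topic name are placeholders), one can poll the topic directly with the raw kafka-clients consumer and print the batch size. With max.poll.records=10, each poll should return at most 10 records:

[java]

// Minimal sketch for verifying max.poll.records; broker address, group id,
// and topic name below are placeholder values.
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class MaxPollRecordsCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "max-poll-check");          // placeholder
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 10);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test-topic")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                // With max.poll.records=10, count() should never exceed 10 per poll.
                System.out.println("Received: " + records.count() + " records");
            }
        }
    }
}

Alternatively, as the run() loop above shows, the listener container already logs "Received: N records" at DEBUG level, so enabling DEBUG logging for org.springframework.kafka exposes the same batch-size information.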

 

Note: Kafka 0.9 does not have a max.poll.records parameter; it was only introduced in 0.10 (where each poll is limited to 500 records by default), so configuring it on a 0.9 client has no effect.

The default of 500 is defined in the ConsumerConfig.java class:

 

[java]

.define(MAX_POLL_RECORDS_CONFIG,
        Type.INT,
        500,
        atLeast(1),
        Importance.MEDIUM,
        MAX_POLL_RECORDS_DOC)

Reposted from: https://my.oschina.net/lsl1991/blog/1629897
