Kafka producer writes large messages

Recently, the project ran into a special scenario: Kafka has to carry about 1 million records, roughly 1 GB in total, in a single message, because restrictions elsewhere in the pipeline prevent splitting the payload.

At first, the producer kept reporting network errors, and the parameters below had to be modified. To test the earlier claim that the limit cannot exceed 1 GB, the upper bounds of all the parameters were set close to 2 GB.

config/server.properties

socket.request.max.bytes=2048576000

log.segment.bytes=2073741824

message.max.bytes=2048576000

replica.fetch.max.bytes=2048576000

fetch.message.max.bytes=2048576000 --- this probably should not be set in server.properties; it is a consumer parameter, to be verified later

replica.socket.timeout.ms=300000 -- this parameter does not appear to be required, but request.timeout.ms does need to be set on the producer; otherwise long sends time out and fail
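
After restarting the broker it is worth confirming that the new limits actually took effect. The following is a minimal sketch using the Kafka Java AdminClient, assuming a broker at localhost:9092 with broker id 0 (both are placeholders for your own cluster):

  import org.apache.kafka.clients.admin.AdminClient;
  import org.apache.kafka.clients.admin.AdminClientConfig;
  import org.apache.kafka.clients.admin.Config;
  import org.apache.kafka.common.config.ConfigResource;
  import java.util.Collections;
  import java.util.Properties;

  public class CheckBrokerConfig {
      public static void main(String[] args) throws Exception {
          Properties props = new Properties();
          props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
          try (AdminClient admin = AdminClient.create(props)) {
              // Broker id "0" is an assumption; use the id of the broker that was reconfigured.
              ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0");
              Config config = admin.describeConfigs(Collections.singleton(broker))
                                   .all().get().get(broker);
              // Print only the size-related settings changed above.
              for (String name : new String[]{"message.max.bytes", "socket.request.max.bytes",
                                              "replica.fetch.max.bytes", "log.segment.bytes"}) {
                  System.out.println(name + " = " + config.get(name).value());
              }
          }
      }
  }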

Parameter Description:

socket.request.max.bytes=100*1024*1024

The maximum size of a socket request, meant to prevent the server from OOM. message.max.bytes must be smaller than socket.request.max.bytes, and it can be overridden by the parameters specified when the topic is created.

log.segment.bytes=1024*1024*1024

A topic's partitions are stored as a set of segment files; this parameter controls the size of each segment, and it can be overridden by the parameter specified when the topic is created.

message.max.bytes=6525000

The maximum size of a message body, in bytes.
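
The broker-wide limit above can also be overridden per topic with max.message.bytes when the topic is created (and segment.bytes plays the same role for log.segment.bytes). A minimal sketch with the Java AdminClient follows; the topic name "big-messages", the partition and replication counts, and the broker address are assumptions:

  import org.apache.kafka.clients.admin.AdminClient;
  import org.apache.kafka.clients.admin.AdminClientConfig;
  import org.apache.kafka.clients.admin.NewTopic;
  import java.util.Collections;
  import java.util.Map;
  import java.util.Properties;

  public class CreateLargeMessageTopic {
      public static void main(String[] args) throws Exception {
          Properties props = new Properties();
          props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
          try (AdminClient admin = AdminClient.create(props)) {
              // Hypothetical topic; the per-topic settings override the broker defaults.
              NewTopic topic = new NewTopic("big-messages", 1, (short) 1)
                      .configs(Map.of("max.message.bytes", "2048576000",
                                      "segment.bytes", "2073741824"));
              admin.createTopics(Collections.singleton(topic)).all().get();
          }
      }
  }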

replica.fetch.max.bytes=1024*1024

The maximum amount of data replicas fetch in each request. It must be at least as large as message.max.bytes; otherwise a broker could accept messages that its followers cannot replicate.

fetch.message.max.bytes=1024*1024

The number of bytes of messages to attempt to fetch for each topic partition in every fetch request. These bytes are read into memory for each partition, so this setting helps control the memory the consumer uses. A fetch request must be at least as large as the maximum message size the server allows; otherwise the producer may send messages that are larger than the consumer can fetch.
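
Note that fetch.message.max.bytes belongs to the old Scala consumer; in the newer Java consumer the corresponding settings are max.partition.fetch.bytes and fetch.max.bytes. A sketch of a matching consumer configuration, where the broker address and group id are placeholders:

  import org.apache.kafka.clients.consumer.ConsumerConfig;
  import org.apache.kafka.clients.consumer.KafkaConsumer;
  import org.apache.kafka.common.serialization.ByteArrayDeserializer;
  import java.util.Properties;

  public class LargeMessageConsumer {
      public static KafkaConsumer<byte[], byte[]> create() {
          Properties props = new Properties();
          props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
          props.put(ConsumerConfig.GROUP_ID_CONFIG, "large-message-group");     // placeholder
          // Both limits must be at least as large as the biggest message the broker
          // accepts, or the consumer will be unable to fetch it.
          props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 2048576000);
          props.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 2048576000);
          return new KafkaConsumer<>(props, new ByteArrayDeserializer(), new ByteArrayDeserializer());
      }
  }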

replica.socket.timeout.ms

The socket timeout for network requests. Its value should be at least replica.fetch.wait.max.ms.

Producer settings:

  props.put("max.request.size", 2073741824);

  props.put("buffer.memory", 2073741824);

  props.put("timeout.ms", 30000000);

  props.put("request.timeout.ms", 30000000);
