【Kafka】Failed to allocate 16384 bytes within the configured max blocking time 60000 ms

1. Problem

Failed to allocate 16384 bytes within the configured max blocking time 60000 ms. Total memory: 33554432 bytes. Available memory: 0 bytes. Poolable size: 16384 bytes

In plain terms: the producer's buffer pool was completely exhausted (33554432 bytes total, 0 bytes available), so the attempt to allocate 16384 bytes for a new batch blocked until it hit the configured maximum blocking time of 60000 ms.
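To make the mechanism concrete, here is a minimal sketch using the plain Java producer client, wired with the same three settings involved in the error (the topic name demo-topic is just a placeholder):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class BufferExhaustionDemo {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432L); // 32 MB pool: the "Total memory" in the error
            props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);        // per-batch allocation: the "16384 bytes"
            props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 60000L);     // the "max blocking time 60000 ms"

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Each send() draws buffer space from the 32 MB pool. If records pile up
                // faster than the broker drains them and no space is freed within
                // max.block.ms, send() fails with the TimeoutException quoted above.
                producer.send(new ProducerRecord<>("demo-topic", "key", "value"));
            }
        }
    }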

2. Kafka configuration file

    kafka:
      # Broker connection address
      bootstrap-servers: 127.0.0.1:9092
      producer:
        # Number of times a message is resent after a send error
        retries: 0
        # When several messages are bound for the same partition, the producer puts
        # them in the same batch. This sets the memory a batch may use, in bytes.
        batch-size: 16384
        # Size of the producer's memory buffer pool
        buffer-memory: 33554432
        # Key serializer
        key-serializer: org.apache.kafka.common.serialization.StringSerializer
        # Value serializer
        value-serializer: org.apache.kafka.common.serialization.StringSerializer
        #value-serializer: org.springframework.kafka.support.serializer.JsonSerializer # serializer for JSON values
        # acks=0: the producer does not wait for any server response before treating the write as successful
        # acks=1: the producer gets a success response as soon as the partition leader receives the message
        # acks=all: the producer gets a success response only after all replicating nodes have received the message
        acks: all
        properties:
          # Delay before the producer sends, used together with batch-size; default 0, in ms
          linger:
            ms: 50
          metadata:
            max:
              age:
                ms: 300000
      consumer:
        # Auto-commit interval. In Spring Boot 2.x this is a Duration and must
        # follow the expected format, e.g. 1S, 1M, 2H, 5D
        auto-commit-interval: 1s
        # What the consumer does when reading a partition with no committed offset,
        # or when the stored offset is invalid:
        # latest (default): start from the newest records (those produced after the consumer started)
        # earliest: start from the beginning of the partition
        auto-offset-reset: earliest
        # Whether offsets are committed automatically; default is true. To avoid
        # duplicated or lost records, set it to false and commit offsets manually.
        enable-auto-commit: false
        group-id: ${spring.profiles.active}-group
        # Key deserializer
        key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
        # Value deserializer
        value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
        #value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer # deserializer for JSON values
        properties:
          max.poll.interval.ms: 86400000
      listener:
        # Number of threads running in the listener container
        concurrency: 1
        # The listener is responsible for acks; every call commits immediately
        ack-mode: manual_immediate
        missing-topics-fatal: false
        type: batch
      # Other properties
      properties:
        # Maximum size of a request (message) sent by the producer, in bytes
        max.request.size: 10240000

Two parameters matter here:

batch-size: 16384
buffer-memory: 33554432

The problem occurs because the producer exceeded the configured maximum blocking time (60 seconds) while trying to allocate 16384 bytes of buffer space, and no memory was left in the pool. Typical causes are the producer sending messages that are too large for the buffer settings, or a heavily loaded Kafka cluster that acknowledges batches too slowly to free memory.

batch-size: 5000
buffer-memory: 53554432

Reducing the batch size and increasing the buffer size made the problem disappear under observation. In addition, max.block.ms can be raised to extend the maximum blocking time, giving the producer more time to obtain buffer space.
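Note that max.block.ms has no dedicated Spring Boot key; in the YAML above it would go into the producer's properties map (like linger.ms), i.e. spring.kafka.producer.properties.max.block.ms. With the plain Java client the same tuning is a sketch like this; the 120000 ms value is only an illustrative choice, not from the original post:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;

    public class TunedProducerProps {
        // Producer settings matching the fix above, plus a longer max.block.ms.
        public static Properties tuned() {
            Properties props = new Properties();
            props.put(ProducerConfig.BATCH_SIZE_CONFIG, 5000);          // smaller batches
            props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 53554432L);  // larger buffer pool
            props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 120000L);     // wait up to 120 s for buffer space
            return props;
        }
    }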

Origin blog.csdn.net/daohangtaiqian/article/details/131963374