How a Kafka consumer listens for data

Kafka really is an impressive piece of message middleware; among distributed message brokers it is basically the one with the fastest delivery and highest throughput.
Since my company has fully wrapped Kafka, we normally just call the in-house API. But I am quite interested in Kafka itself, so I started by looking at how it listens for new messages arriving on a topic.
After reading the source code, it turns out to be quite simple.

import java.util.List;
import java.util.Properties;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class Consumer {

    private final KafkaConsumer<String, String> consumer;
    private ExecutorService executors;

    // The parameters the consumer must be given before it can fetch data from the partitions
    public Consumer(ConsumerProperty consumerProperty, List<String> topics) {
        Properties props = new Properties();
        props.put("bootstrap.servers", consumerProperty.getBrokerList());
        props.put("group.id", consumerProperty.getGroupId());
        props.put("enable.auto.commit", consumerProperty.getEnableAutoCommit());
        props.put("auto.commit.interval.ms", consumerProperty.getAutoCommitInterval());
        props.put("session.timeout.ms", consumerProperty.getSessionTimeout());
        props.put("key.deserializer", consumerProperty.getKeySerializer());
        props.put("value.deserializer", consumerProperty.getValueSerializer());
        consumer = new KafkaConsumer<String, String>(props);
        consumer.subscribe(topics);
    }

    public void execute(int threads) {
        executors = new ThreadPoolExecutor(threads, threads, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(1000), new ThreadPoolExecutor.CallerRunsPolicy());
        // Start a background thread that keeps listening for Kafka messages
        Thread t = new Thread(new Runnable() {
            public void run() {
                // Loop forever, pulling data from the partitions
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(200);
                    for (final ConsumerRecord<String, String> record : records) {
                        System.out.println("Received a Kafka message......");
                        executors.submit(new ConsumerWorker(record));
                    }
                }
            }
        });
        t.start();
    }

    public void shutdown() {
        if (consumer != null) {
            consumer.close();
        }
        if (executors != null) {
            executors.shutdown();
            try {
                if (!executors.awaitTermination(10, TimeUnit.SECONDS)) {
                    System.out.println("Timeout.... Ignore for this case");
                }
            } catch (InterruptedException ignored) {
                System.out.println("Other thread interrupted this shutdown, ignore for this case.");
                Thread.currentThread().interrupt();
            }
        }
    }
}
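
The ConsumerWorker that each record gets handed to is not shown in the post. A minimal sketch of what such a worker could look like, assuming it simply wraps one record in a Runnable (the class name comes from the code above; the body is my own assumption):

import org.apache.kafka.clients.consumer.ConsumerRecord;

public class ConsumerWorker implements Runnable {

    private final ConsumerRecord<String, String> record;

    public ConsumerWorker(ConsumerRecord<String, String> record) {
        this.record = record;
    }

    public void run() {
        // Business logic would go here; this sketch just prints the record's coordinates and value
        System.out.println("topic = " + record.topic()
                + ", partition = " + record.partition()
                + ", offset = " + record.offset()
                + ", value = " + record.value());
    }
}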

What the source tells us: the class opens a ThreadPoolExecutor thread pool, and a background thread holds a long-lived connection in an endless loop, polling the Kafka broker (with a 200 ms timeout) for new messages; every record it pulls back is handed off to a ConsumerWorker task, which processes that message.
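
Putting it all together, the caller just builds the config object, starts the listener, and shuts it down on exit. A rough usage sketch only: ConsumerDemo, the setter names on ConsumerProperty, the broker address and the topic name are all assumptions for illustration, not part of the original post.

import java.util.Arrays;

public class ConsumerDemo {
    public static void main(String[] args) {
        // Hypothetical config wiring: these setters and values are assumptions
        ConsumerProperty property = new ConsumerProperty();
        property.setBrokerList("localhost:9092");
        property.setGroupId("demo-group");
        property.setEnableAutoCommit("true");
        property.setAutoCommitInterval("1000");
        property.setSessionTimeout("30000");
        property.setKeySerializer("org.apache.kafka.common.serialization.StringDeserializer");
        property.setValueSerializer("org.apache.kafka.common.serialization.StringDeserializer");

        final Consumer consumer = new Consumer(property, Arrays.asList("demo-topic"));
        consumer.execute(4);   // poll in the background and process with 4 worker threads

        // Close the consumer and drain the pool when the application exits
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            public void run() {
                consumer.shutdown();
            }
        }));
    }
}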

Reposted from blog.csdn.net/CSDNzhangtao5/article/details/78248104