Consuming Kafka Data with the Java API: A Pitfall Encountered!

I. Check that the environment is healthy
Check that every node in the virtual-machine cluster has started correctly; this step is critical. Before a product goes live, it will never be tested directly on the production servers, so it must first be verified on a self-built cluster. Confirm that the Kafka console consumer can consume data, and that the Java API can receive Kafka messages.
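As a quick sanity check from code, you can ask the broker for its topic list before wiring up a consumer. Below is a minimal sketch, assuming kafka-clients 0.11+ (which provides AdminClient) on the classpath and the same broker address used throughout this post:

import java.util.Properties;
import java.util.Set;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.AdminClient;

public class ClusterCheck {

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.22.132:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Lists topic names; if this call hangs or times out,
            // the broker is unreachable from this machine
            Set<String> topics = admin.listTopics().names().get();
            System.out.println("Broker reachable, topics: " + topics);
        }
    }
}

If the topic you intend to consume (here, tt) does not appear in the output, fix that before debugging the consumer itself.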
II. Sample code

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class KafkaConsumerDemo {

    public static void main(String[] args) {
        Properties props = new Properties();
        // Kafka broker address; you do not need to list every broker
        props.put("bootstrap.servers", "192.168.22.132:9092");
        // Specify the consumer group
        props.put("group.id", "gg");
        // Whether to commit offsets automatically
        props.put("enable.auto.commit", "true");
        // Interval between automatic offset commits
        props.put("auto.commit.interval.ms", "1000");
        // Deserializer class for keys
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // Deserializer class for values
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // Create the consumer
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);

        // Subscribe to topics; more than one can be subscribed at once
        consumer.subscribe(Arrays.asList("tt"));

        while (true) {
            // Poll for records with a 100 ms timeout
            // (newer kafka-clients versions take a Duration instead of a long)
            ConsumerRecords<String, String> records = consumer.poll(100);

            for (ConsumerRecord<String, String> record : records)
                System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
        }
    }

}
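The example above relies on auto-commit, which can re-deliver or silently skip records if the process crashes between commits. If that matters for your use case, switching to manual commits is a small change. A minimal sketch, assuming the same broker and topic as above (a separate group.id is used here so it does not steal partitions from the running demo):

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ManualCommitConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.22.132:9092");
        // A distinct group so this consumer does not compete with the demo above
        props.put("group.id", "gg-manual");
        // Disable auto-commit; offsets are committed explicitly below
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Arrays.asList("tt"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset = %d, key = %s, value = %s%n",
                            record.offset(), record.key(), record.value());
                }
                // Commit only after the batch is fully processed, so a crash
                // mid-batch re-reads records instead of skipping them
                consumer.commitSync();
            }
        }
    }
}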

III. Produce data in the virtual machine first
1. Start the data-producing command:

java -cp /home/hadoop/tools/producter.jar producter.ProductLog /home/hadoop/install/a.tsv 
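If you do not have producter.jar at hand, a few test records can be pushed to the topic directly from Java instead. A minimal producer sketch, assuming the same broker address and the topic tt:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TestProducer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.22.132:9092");
        // Serializer classes for keys and values
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 10; i++) {
                // Fire-and-forget sends; close() below flushes anything pending
                producer.send(new ProducerRecord<>("tt", "key-" + i, "value-" + i));
            }
        }
    }
}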

2. Start Flume:

bin/flume-ng agent -c conf --name a1 -f /home/hadoop/install/flume/conf/flume-kafka.conf
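The flume-kafka.conf referenced above is not shown in the original post. As a rough sketch of what such an agent might look like (assuming Flume 1.7+, an exec source tailing the .tsv file, and a memory channel):

# Hypothetical flume-kafka.conf: tail the tsv file into the Kafka topic "tt"
a1.sources = r1
a1.channels = c1
a1.sinks = k1

a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /home/hadoop/install/a.tsv
a1.sources.r1.channels = c1

a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000

a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.bootstrap.servers = 192.168.22.132:9092
a1.sinks.k1.kafka.topic = tt
a1.sinks.k1.channel = c1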

3. Start a console consumer:

./bin/kafka-console-consumer.sh --zookeeper 192.168.22.132:2181 --topic tt --from-beginning
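Note that the --zookeeper flag matches older Kafka releases; it was removed in Kafka 2.0, so on newer versions the equivalent command connects to the broker directly:

./bin/kafka-console-consumer.sh --bootstrap-server 192.168.22.132:9092 --topic tt --from-beginning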

4. Check whether Kafka's log directories are receiving data.
5. Then run the Java API consumer; if records print to the console, everything has worked.
IV. The pitfall
1. If no data appears on the console, or an error is thrown, edit the broker configuration file in the virtual machine and add port and host.name:
[hadoop@hadoop01 config]$ pwd
/home/hadoop/install/kafka/config
[hadoop@hadoop01 config]$ vi server.properties


# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092
############################ add port and host.name ##############
port=9092
host.name=192.168.22.132
# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

After saving the configuration, restart the Kafka cluster.
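For context: host.name and port are legacy settings, deprecated since Kafka 0.10. On newer brokers the equivalent fix is to set the listener properties instead, for example:

# Equivalent on newer brokers (host.name/port are deprecated legacy settings)
listeners=PLAINTEXT://192.168.22.132:9092
advertised.listeners=PLAINTEXT://192.168.22.132:9092

Either way, the point is the same: the broker must advertise an address that clients outside the VM can actually reach, rather than a hostname that only resolves inside it.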

Reprinted from blog.csdn.net/weixin_43646034/article/details/84890939