flume-kafka-sparkStreaming log analysis

Enter the kafka directory:

Start ZooKeeper: ./bin/zookeeper-server-start.sh -daemon config/zookeeper.properties (process: QuorumPeerMain)

Start Kafka: ./bin/kafka-server-start.sh -daemon config/server.properties (process: Kafka)

Create a Kafka topic: ./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic baba (note the difference between -- and -)

List the created topics: ./bin/kafka-topics.sh --list --zookeeper localhost:2181

Show the details of a created topic: ./bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic baba
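The create/list steps above can be wrapped so they are safe to re-run. This is a hypothetical helper, not part of the original post; it assumes you are in the kafka directory with ZooKeeper on localhost:2181:

```shell
# Sketch: create a topic only if it does not already exist, so the script
# can be re-run without the "--create" call failing on a duplicate topic.
create_topic_if_absent() {
  local topic="$1"
  # --list prints one topic name per line; grep -qx matches a whole line
  if ./bin/kafka-topics.sh --list --zookeeper localhost:2181 | grep -qx "$topic"; then
    echo "topic $topic already exists"
  else
    ./bin/kafka-topics.sh --create --zookeeper localhost:2181 \
      --replication-factor 1 --partitions 1 --topic "$topic"
  fi
}
```

Usage: `create_topic_if_absent baba`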


Start a Kafka producer:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic baba

Receive messages: open a new terminal window and start a Kafka consumer:

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic baba --from-beginning



Error encountered: ERROR Error when sending message to topic baba with key: null, value: 9 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)

org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms
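This timeout usually means the producer cannot reach the broker at the address the broker advertises. A common fix, assuming the broker runs on the same host (adjust the hostname and port for your environment), is to set the listener address explicitly in config/server.properties and restart Kafka:

```
# config/server.properties -- assumed fix, verify against your Kafka version
listeners=PLAINTEXT://localhost:9092
```

After changing this, restart the broker and retry the producer.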



-------------To be continued------------------------


Reposted from blog.csdn.net/huixu5662/article/details/79363620