16: "brokerList must contain at least one Kafka broker" — a case study and a few other pitfalls (Alibaba Cloud)

1. Starting Flume raises the following warning:

nohup bin/flume-ng agent \
-c /home/hadoop/app/apache-flume-1.6.0-cdh5.7.0-bin/conf  -f /home/hadoop/app/apache-flume-1.6.0-cdh5.7.0-bin/conf/exec_memory_kafka.properties \
-n a1 -Dflume.root.logger=INFO,console &

tail -f nohup.out

2019-02-06 20:25:52,001 (conf-file-poller-0) [WARN - org.apache.flume.sink.kafka.KafkaSink.configure(KafkaSink.java:209)] The Property 'topic' is not set. Using the default topic name: default-flume-topic
2019-02-06 20:25:52,002 (conf-file-poller-0) [INFO - org.apache.flume.sink.kafka.KafkaSinkUtil.getKafkaProperties(KafkaSinkUtil.java:34)] context={ parameters:{kafka.producer.compression.type=snappy, kafka.producer.linger.ms=1, kafka.flumeBatchSize=6000, kafka.bootstrap.servers=172.17.4.16:9092,172.17.4.17:9092,172.17.217.124:9092, kafka.producer.acks=1, channel=c1, kafka.topic=kunming, type=org.apache.flume.sink.kafka.KafkaSink} }
2019-02-06 20:25:52,019 (conf-file-poller-0) [ERROR - org.apache.flume.node.AbstractConfigurationProvider.loadSinks(AbstractConfigurationProvider.java:427)] Sink k1 has been removed due to an error during configuration
org.apache.flume.conf.ConfigurationException: brokerList must contain at least one Kafka broker
The Flume configuration on the Alibaba Cloud host is as follows:

# Name the components on this agent
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Describe/configure the source
a1.sources.r1.type = com.onlinelog.analysis.ExecSource_JSON
a1.sources.r1.command = tail -F /home/hadoop/data/log.out
a1.sources.r1.hostname = hadoop001
a1.sources.r1.servicename = weizhonggui


# Describe the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = kunming
# internal (VPC) IPs
a1.sinks.k1.kafka.bootstrap.servers = 172.17.4.16:9092,172.17.4.17:9092,172.17.217.124:9092
a1.sinks.k1.kafka.flumeBatchSize = 6000
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1
a1.sinks.k1.kafka.producer.compression.type = snappy


# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.keep-alive = 90
a1.channels.c1.capacity = 2000000
a1.channels.c1.transactionCapacity = 60
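Incidentally, there is another pitfall lurking in this channel section: the sink's batch size (6000) is far larger than the memory channel's transactionCapacity (60), and the Kafka sink takes up to one full batch per channel transaction, so takes will start failing once traffic picks up. A minimal sanity check, as a sketch (the heredoc just reproduces the two values from the config above):

```shell
# Cross-check the sink batch size against the channel transactionCapacity.
# The Kafka sink takes up to flumeBatchSize events per channel transaction,
# so transactionCapacity must be at least that large.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
a1.sinks.k1.kafka.flumeBatchSize = 6000
a1.channels.c1.transactionCapacity = 60
EOF
batch=$(awk -F' *= *' '/flumeBatchSize/ {print $2}' "$CONF")
txn=$(awk -F' *= *' '/transactionCapacity/ {print $2}' "$CONF")
if [ "$txn" -lt "$batch" ]; then
  echo "transactionCapacity ($txn) is smaller than the sink batch size ($batch)"
fi
```

Either raise transactionCapacity to at least the batch size, or lower the batch size.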


# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel =  c1

2. Root-cause analysis:

2.1 "The Property 'topic' is not set. Using the default topic name: default-flume-topic" and "brokerList must contain at least one Kafka broker":

These two messages show that neither the topic nor the broker list was found, yet the log right below confirms that the configured values were in fact read:
getKafkaProperties(KafkaSinkUtil.java:34)] context={ parameters:{kafka.producer.compression.type=snappy, kafka.producer.linger.ms=1, kafka.flumeBatchSize=6000, kafka.bootstrap.servers=172.17.4.16:9092,172.17.4.17:9092,172.17.217.124:9092, kafka.producer.acks=1, channel=c1, kafka.topic=kunming, type=org.apache.flume.sink.kafka.KafkaSink} }

So the likely cause is that the configuration property names differ between versions: the config above follows the latest 1.9 documentation, while the installed Flume is actually 1.6.
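A quick way to confirm the suspicion before digging through the docs: Flume 1.6's KafkaSink does not recognize the `kafka.`-prefixed keys, so simply grepping the config for them flags the problem. A sketch (the heredoc reproduces the sink keys from the config above):

```shell
# Flag 1.9-style KafkaSink keys that the 1.6 sink silently ignores.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
a1.sinks.k1.kafka.topic = kunming
a1.sinks.k1.kafka.bootstrap.servers = 172.17.4.16:9092,172.17.4.17:9092,172.17.217.124:9092
EOF
if grep -qE '\.kafka\.(topic|bootstrap\.servers|flumeBatchSize)' "$CONF"; then
  echo "1.9-style keys found: Flume 1.6 expects topic / brokerList / batchSize instead"
fi
```

Running `bin/flume-ng version` on the host confirms which release is actually installed.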

2.2 Checking the differences between the two versions:

Comparing the two versions' documentation reveals the differences:

(screenshots of the 1.6 vs 1.9 KafkaSink property tables omitted here)

3. After updating the configuration:

The property names changed considerably between 1.6 and the latest releases, so pay attention to which version is actually installed when configuring.

# Describe the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = kunming
a1.sinks.k1.brokerList = 172.17.4.16:9092,172.17.4.17:9092,172.17.217.124:9092
a1.sinks.k1.batchSize = 6000
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1
a1.sinks.k1.kafka.producer.compression.type = snappy
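The renames can also be applied mechanically. A sketch with `sed`, where the demo file name and contents are assumptions for illustration; the key mapping is 1.9 → 1.6: `kafka.topic` → `topic`, `kafka.bootstrap.servers` → `brokerList`, `kafka.flumeBatchSize` → `batchSize`, `kafka.producer.acks` → `requiredAcks`:

```shell
# Rewrite 1.9-style KafkaSink keys to their Flume 1.6 equivalents in place.
CONF=demo.properties   # hypothetical file for illustration
cat > "$CONF" <<'EOF'
a1.sinks.k1.kafka.topic = kunming
a1.sinks.k1.kafka.bootstrap.servers = 172.17.4.16:9092
a1.sinks.k1.kafka.flumeBatchSize = 6000
a1.sinks.k1.kafka.producer.acks = 1
EOF
sed -i.bak \
  -e 's/\.kafka\.topic/.topic/' \
  -e 's/\.kafka\.bootstrap\.servers/.brokerList/' \
  -e 's/\.kafka\.flumeBatchSize/.batchSize/' \
  -e 's/\.kafka\.producer\.acks/.requiredAcks/' \
  "$CONF"
cat "$CONF"
```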

[hadoop@hadoop001 flume]$ tail -f nohup.out
2019-02-07 20:11:41,303 (lifecycleSupervisor-1-4) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:96)] Component type: SOURCE, name: r1 started
2019-02-07 20:11:41,735 (lifecycleSupervisor-1-1) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Verifying properties
2019-02-07 20:11:41,817 (lifecycleSupervisor-1-1) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property key.serializer.class is overridden to kafka.serializer.StringEncoder
2019-02-07 20:11:41,817 (lifecycleSupervisor-1-1) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property metadata.broker.list is overridden to 172.17.4.16:9092,172.17.4.17:9092,172.17.217.124:9092
2019-02-07 20:11:41,818 (lifecycleSupervisor-1-1) [WARN - kafka.utils.Logging$class.warn(Logging.scala:83)] Property producer.compression.type is not valid
2019-02-07 20:11:41,818 (lifecycleSupervisor-1-1) [WARN - kafka.utils.Logging$class.warn(Logging.scala:83)] Property producer.linger.ms is not valid
2019-02-07 20:11:41,818 (lifecycleSupervisor-1-1) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property request.required.acks is overridden to 1
2019-02-07 20:11:41,818 (lifecycleSupervisor-1-1) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property serializer.class is overridden to kafka.serializer.DefaultEncoder
2019-02-07 20:11:42,163 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:120)] Monitored counter group for type: SINK, name: k1: Successfully registered new MBean.
2019-02-07 20:11:42,163 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:96)] Component type: SINK, name: k1 started

(The two WARN lines above indicate that the kafka.producer.* keys are not recognized by the 1.6 producer either; they appear to be simply ignored.)


Reprinted from blog.csdn.net/weizhonggui/article/details/86769447