Kafka Interface Protocol, Part 2: Details

Kafka does not route a message to a topic's partition on the server side, so the producer must send each message directly to the broker that hosts that partition's leader.

 
A client can fetch cluster metadata from any broker and learn from it which broker is the leader for each partition. When the leader broker fails to handle a request, there are two possibilities: 1. the broker has died; 2. the broker no longer hosts this partition.
The client therefore needs a retry loop: when a request returns an error, refresh the metadata and try again.
 
From the official docs:
  • Cycle through a list of "bootstrap" kafka urls until we find one we can connect to. Fetch cluster metadata.
  • Process fetch or produce requests, directing them to the appropriate broker based on the topic/partitions they send to or fetch from.
  • If we get an appropriate error, refresh the metadata and try again.
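The bootstrap-then-retry loop above can be sketched in plain Java. This is an illustrative simulation, not Kafka's actual client API: the class, method, and broker names are all made up, and the "metadata request" is stubbed with an in-memory map. The point it demonstrates is the protocol rule: send to the cached leader, and on a leader-related error refresh the metadata and retry.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the client retry loop (names are illustrative,
// not Kafka's API): cache partition->leader metadata, send to the cached
// leader, and on a leader error refresh the cache and try again.
public class RetryLoopSketch {
    static class NotLeaderException extends RuntimeException {}

    // Cached metadata: partition id -> leader broker id.
    private final Map<Integer, String> leaderCache = new HashMap<>();
    // Stand-in for the live cluster state a metadata request would return.
    private final Map<Integer, String> clusterState = new HashMap<>();

    RetryLoopSketch() {
        clusterState.put(0, "broker-1");  // current leader of partition 0
        leaderCache.put(0, "broker-0");   // stale cache: leadership has moved
    }

    // Fails when we target a broker that no longer leads the partition.
    private void sendTo(String broker, int partition) {
        if (!broker.equals(clusterState.get(partition))) {
            throw new NotLeaderException();
        }
    }

    // The loop from the doc: on an appropriate error, refresh metadata and retry.
    public String produce(int partition, int maxRetries) {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            String leader = leaderCache.get(partition);
            try {
                sendTo(leader, partition);
                return leader;  // delivered to the real leader
            } catch (NotLeaderException e) {
                leaderCache.putAll(clusterState);  // refresh metadata, then retry
            }
        }
        throw new RuntimeException("retries exhausted");
    }
}
```

With the stale cache above, the first attempt fails against `broker-0`, the refresh picks up `broker-1`, and the second attempt succeeds.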
 
Partitioning strategy:
To spread request load and keep data balanced, a producer working against multiple brokers writes to partitions at random.
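A minimal sketch of that strategy (this is not Kafka's `Partitioner` interface; the class and method names are hypothetical): with no key, pick a random partition so writes spread evenly across brokers; with a key, hash it so the same key always lands on the same partition.

```java
import java.util.Random;

// Illustrative partition chooser (names are made up, not Kafka's API).
public class PartitionChooser {
    private final Random random = new Random();

    public int choose(String key, int numPartitions) {
        if (key == null) {
            // No key: random partition spreads load across brokers.
            return random.nextInt(numPartitions);
        }
        // Keyed: mask the sign bit, then mod, so the same key is stable.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }
}
```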
 
Producer: use asynchronous mode, which batches sends by default; a compression codec of 1 means gzip.

  props.put("zk.connect", "127.0.0.1:2181");
  props.put("serializer.class", "kafka.serializer.StringEncoder");
  props.put("producer.type", "async");
  props.put("compression.codec", "1");
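Assembling the config above into a complete snippet might look like this. Only `java.util.Properties` is used, so no Kafka dependency is needed to build it; the property names are the old Scala-client era ones from the original post (newer clients use `bootstrap.servers` and different keys).

```java
import java.util.Properties;

// Sketch: build the producer configuration shown above.
public class ProducerConfigSketch {
    public static Properties build() {
        Properties props = new Properties();
        props.put("zk.connect", "127.0.0.1:2181");                       // ZooKeeper address
        props.put("serializer.class", "kafka.serializer.StringEncoder"); // string messages
        props.put("producer.type", "async");                             // batch in background
        props.put("compression.codec", "1");                             // 1 = gzip
        return props;
    }
}
```

These `Properties` would then be passed to the (old-API) producer constructor.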


Reposted from blackproof.iteye.com/blog/2226152