1. Download Kafka: http://kafka.apache.org/downloads
2. Extract the archive. Kafka runs on top of a ZooKeeper cluster, so start ZooKeeper first:
bin/zookeeper-server-start.sh config/zookeeper.properties &
The trailing & runs the process in the background. You can verify that ZooKeeper is listening with telnet 127.0.0.1 2181 (press Ctrl+] and then type quit to leave the telnet session).
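As an alternative to telnet, the same liveness check can be scripted. This is a small sketch (the `isPortOpen` helper is not part of the original write-up); host and port match the ZooKeeper defaults above:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    /** Returns true if a TCP connection to host:port succeeds within timeoutMs. */
    public static boolean isPortOpen(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("zookeeper up: " + isPortOpen("127.0.0.1", 2181, 1000));
    }
}
```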
3. Configure Kafka
Kafka ships a basic configuration file in its config directory. To make the broker reachable remotely, two settings need to be changed.
Open config/server.properties; near the top you will find the commented-out listeners and advertised.listeners entries. Uncomment both and set them according to the server's IP addresses, as follows:
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
############################# Socket Server Settings #############################
# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
# Use the Alibaba Cloud internal (private) IP here
listeners=PLAINTEXT://127.0.0.1:9092
# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
# Use the Alibaba Cloud public IP here
advertised.listeners=PLAINTEXT://127.0.0.1:9092
4. Start Kafka
bin/kafka-server-start.sh config/server.properties
This may fail, because Kafka requests 1 GB of heap by default:
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c5330000, 986513408, 0) failed; error='Cannot allocate memory' (errno=12)
- In bin/kafka-server-start.sh, change
- export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G" to export KAFKA_HEAP_OPTS="-Xmx256M -Xms128M"
- then start it again.
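To confirm the new heap settings actually take effect, you can check what the JVM was granted; any Java process launched with those KAFKA_HEAP_OPTS will report roughly the -Xmx value (a quick diagnostic sketch, not part of the original steps):

```java
public class HeapCheck {
    /** Maximum heap the JVM will attempt to use, in megabytes. */
    public static long maxHeapMb() {
        return Runtime.getRuntime().maxMemory() / (1024 * 1024);
    }

    public static void main(String[] args) {
        // With -Xmx256M this prints a value close to 256
        System.out.println("max heap MB: " + maxHeapMb());
    }
}
```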
5. Create a Spring Boot project
Add the pom dependencies:
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <!--<version>2.1.5.RELEASE</version>-->
    <exclusions>
        <exclusion>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.8.2.0</version>
</dependency>
I used Spring Boot 1.5.9; I have verified that 2.0.1 also works. In that case it is enough to add only:
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
6. Producer code
package test.testkafka;

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class Producer {
    public static void main(String[] args) {
        String topic = "simon_test2";
        Properties prop = new Properties();
        prop.put("bootstrap.servers", "127.0.0.1:9092"); // broker host:port
        prop.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");   // key serializer (String)
        prop.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer"); // value serializer (String)
        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(prop);
        int messageNo = 1;
        while (true) {
            String msg = "data_" + messageNo;
            producer.send(new ProducerRecord<String, String>(topic, msg));
            System.out.println("Send:" + msg);
            messageNo++;
            try {
                Thread.sleep(500);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
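The loop above emits the payloads data_1, data_2, and so on. The payload generation can be pulled out into a pure helper (a sketch, not part of the original code) so the sequence is easy to verify without a broker:

```java
import java.util.ArrayList;
import java.util.List;

public class PayloadSequence {
    /** Builds the first n payloads exactly as the producer loop does. */
    public static List<String> firstPayloads(int n) {
        List<String> payloads = new ArrayList<>();
        for (int messageNo = 1; messageNo <= n; messageNo++) {
            payloads.add("data_" + messageNo);
        }
        return payloads;
    }

    public static void main(String[] args) {
        System.out.println(firstPayloads(3)); // [data_1, data_2, data_3]
    }
}
```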
7. Consumer
The Consumer class:
package test.testkafka;

import org.apache.avro.generic.GenericRecord;
import org.apache.avro.util.Utf8;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.listener.MessageListener;
import org.springframework.stereotype.Component;

@Component
public class Consumer implements MessageListener<String, Object> {

    @Override
    public void onMessage(ConsumerRecord<String, Object> consumerRecord) {
        System.out.println(consumerRecord.toString());
        try {
            Object value = consumerRecord.value();
            // Only values deserialized by an Avro deserializer arrive as GenericRecord
            if (value instanceof GenericRecord) {
                GenericRecord record = (GenericRecord) value;
                String applyId = ((Utf8) record.get("id")).toString();
                System.out.println("id=" + applyId);
            }
        } catch (Exception e) {
            e.printStackTrace(); // don't swallow deserialization errors silently
        }
    }
}
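Since the value-deserializer configured later is StringDeserializer, values normally arrive as String and the GenericRecord branch only fires when an Avro deserializer is in use. The dispatch pattern itself can be sketched without any Kafka or Avro dependency (the `handle` method is a hypothetical stand-in for the listener's branching):

```java
public class ValueDispatch {
    /** Routes a record value by its runtime type, like the listener above. */
    public static String handle(Object value) {
        if (value instanceof CharSequence) {
            return "text:" + value;
        }
        // An Avro GenericRecord (or anything else) would land here
        return "unhandled:" + value.getClass().getSimpleName();
    }

    public static void main(String[] args) {
        System.out.println(handle("data_1")); // text:data_1
        System.out.println(handle(42));       // unhandled:Integer
    }
}
```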
We also need a configuration class to wire everything up.
The KafkaConfig class:
package test.testkafka;

import java.util.Map;

import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.listener.MessageListener;
import org.springframework.kafka.listener.config.ContainerProperties;

@Configuration
public class KafkaConfig {

    @Bean
    public ConcurrentMessageListenerContainer<String, String> listenerContainer(
            MessageListener<String, Object> listener, KafkaProperties props) {
        // topics to subscribe to
        ContainerProperties properties = new ContainerProperties("simon_test2");
        // attach the message listener
        properties.setMessageListener(listener);
        Map<String, Object> consumerProps = props.getConsumer().buildProperties();
        DefaultKafkaConsumerFactory<String, String> cf =
                new DefaultKafkaConsumerFactory<>(consumerProps);
        return new ConcurrentMessageListenerContainer<>(cf, properties);
    }

    @Bean
    public ProducerFactory<String, Object> producerFactory(KafkaProperties props) {
        Map<String, Object> producerProps = props.getProducer().buildProperties();
        return new DefaultKafkaProducerFactory<>(producerProps);
    }

    @Bean
    public KafkaTemplate<String, Object> kafkaTemplate(ProducerFactory<String, Object> producerFactory) {
        return new KafkaTemplate<String, Object>(producerFactory);
    }
}
Configure application.yml:

spring:
  kafka:
    consumer:
      auto-offset-reset: latest
      enable-auto-commit: true
      group-id: local
      bootstrap-servers: 127.0.0.1:9092
      max-poll-records: 10
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
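For reference, Spring Boot binds the YAML above onto the raw property keys that the Kafka consumer itself understands. A plain-map sketch of the resulting configuration (the key names are standard Kafka consumer configs; the class itself is illustrative only):

```java
import java.util.HashMap;
import java.util.Map;

public class ConsumerPropsSketch {
    /** The raw Kafka consumer properties the YAML above corresponds to. */
    public static Map<String, Object> consumerProps() {
        Map<String, Object> props = new HashMap<>();
        props.put("bootstrap.servers", "127.0.0.1:9092");
        props.put("group.id", "local");
        props.put("enable.auto.commit", "true");
        props.put("auto.offset.reset", "latest");
        props.put("max.poll.records", 10);
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        consumerProps().forEach((k, v) -> System.out.println(k + " = " + v));
    }
}
```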
One thing to note here: the listener will not directly pick up messages produced by an earlier run of the Producer. Start the whole project first, then produce messages with
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic simon_test2
and the running listener will consume them. Within a single consumer group, each message in a topic partition is consumed by only one consumer; if every consumer needs to receive every message, put the consumers into different consumer groups, which yields a publish/subscribe model.
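That delivery rule can be illustrated without a broker: within one group a message goes to exactly one consumer, while each group as a whole receives every message. A toy simulation (pure Java; group and consumer names are made up for the example):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GroupDeliverySim {
    /**
     * Delivers each message once per group, round-robin across that group's
     * consumers. Returns counts of messages seen, keyed "group/consumer-N".
     */
    public static Map<String, Integer> deliver(List<String> messages,
                                               Map<String, Integer> groupSizes) {
        Map<String, Integer> seen = new HashMap<>();
        for (Map.Entry<String, Integer> group : groupSizes.entrySet()) {
            int consumers = group.getValue();
            for (int i = 0; i < messages.size(); i++) {
                String consumer = group.getKey() + "/consumer-" + (i % consumers);
                seen.merge(consumer, 1, Integer::sum);
            }
        }
        return seen;
    }

    public static void main(String[] args) {
        List<String> messages = new ArrayList<>();
        for (int i = 1; i <= 4; i++) messages.add("data_" + i);
        Map<String, Integer> groups = new HashMap<>();
        groups.put("local", 2);  // two consumers share the work
        groups.put("audit", 1);  // a second group sees everything again
        // local's two consumers get 2 messages each; audit's one consumer gets all 4
        System.out.println(deliver(messages, groups));
    }
}
```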
Finally, Avro messages can additionally carry a schema.