Kafka Tutorial, Part 2: Connecting to Kafka from Java (Producer)

1. Check the server configuration file

Set the parameter advertised.listeners=PLAINTEXT://tjtestrac1:9092.
Make sure the firewall does not block the port.
[kafka@tjtestrac1 config]$ vi server.properties 
############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from 
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092
#listeners=PLAINTEXT://tjtestrac1:9092

# Hostname and port the broker will advertise to producers and consumers. If not set, 
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
advertised.listeners=PLAINTEXT://tjtestrac1:9092
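Since a blocked port is the most common reason a remote client cannot reach the broker, a quick TCP reachability probe can rule out firewall problems before touching any Kafka code. This is a plain-JDK sketch with no Kafka dependency; the hostname and port are taken from this setup, so adjust them for your broker:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Quick TCP probe for the broker's advertised listener.
public class PortCheck {
    public static boolean isReachable(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            // Succeeds only if something is listening and no firewall drops the connection.
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Hostname/port from this tutorial's setup; change for your environment.
        System.out.println(isReachable("tjtestrac1", 9092, 2000));
    }
}
```

If this prints false from the client machine while a local console consumer on the broker host works, the problem is the network or firewall, not Kafka.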

 

2. Restart the Kafka service

Here we use the quickest, most brute-force way to stop the processes. Never do this in production; use kafka-server-stop.sh and zookeeper-server-stop.sh instead.
[kafka@tjtestrac1 config]$ jps
12048 Jps
30323 QuorumPeerMain
4739 Kafka 
[kafka@tjtestrac1 config]$ kill -9 30323
[kafka@tjtestrac1 config]$ kill -9 4739
[kafka@tjtestrac1 config]$ jps
12727 Jps
After confirming both processes are gone, restart:
[kafka@tjtestrac1 config]$ zookeeper-server-start.sh $KAFKA_HOME/config/zookeeper.properties & 
[1] 16757
[kafka@tjtestrac1 config]$ jps
17093 Jps
16757 QuorumPeerMain

[kafka@tjtestrac1 config]$  kafka-server-start.sh $KAFKA_HOME/config/server.properties &
[kafka@tjtestrac1 config]$ jps
19858 Kafka
16757 QuorumPeerMain
20600 Jps

3. Kafka producer architecture and basic concepts

(Figure: Kafka producer components)

The producer API serves scenarios with very different delivery-guarantee requirements:
1. A traditional credit-card transaction system cannot tolerate lost messages or erroneous duplicates.
2. Systems that collect internet data such as user-behavior tracking can tolerate some message loss and bad records.
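These two requirement profiles map directly onto producer configuration. The sketch below uses real Kafka producer property names (acks, enable.idempotence, retries, linger.ms), but the exact values are illustrative, not a recommendation:

```java
import java.util.Properties;

// Two illustrative producer configuration profiles for the scenarios above.
public class ProducerProfiles {
    // No-loss, no-duplicate profile (e.g. credit-card transactions):
    // wait for all in-sync replicas and enable the idempotent producer.
    public static Properties reliable() {
        Properties p = new Properties();
        p.setProperty("acks", "all");
        p.setProperty("enable.idempotence", "true");
        p.setProperty("retries", Integer.toString(Integer.MAX_VALUE));
        return p;
    }

    // Throughput-first profile (e.g. click-stream tracking):
    // do not wait for broker acknowledgements and batch more aggressively.
    public static Properties tracking() {
        Properties p = new Properties();
        p.setProperty("acks", "0");
        p.setProperty("linger.ms", "50");
        return p;
    }
}
```

Either Properties object can be passed straight to the KafkaProducer constructor, together with the bootstrap.servers and serializer settings shown later in this article.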

4. Building a Kafka project in Eclipse (with Maven)

a) Create a project named kafkaClient

(Figure: the kafkaClient project)

b) Configure the pom.xml file

https://mvnrepository.com/artifact/org.apache.kafka/kafka_2.12/2.1.0

(Figure: pom.xml settings)

Configure the slf4j logging dependencies:

https://mvnrepository.com/artifact/org.slf4j/slf4j-api/1.8.0-beta2
https://mvnrepository.com/artifact/org.slf4j/slf4j-log4j12/1.8.0-beta2
https://mvnrepository.com/artifact/org.slf4j/slf4j-nop/1.8.0-beta2
https://mvnrepository.com/artifact/commons-logging/commons-logging
https://mvnrepository.com/artifact/org.slf4j/slf4j-simple/1.8.0-beta2
https://mvnrepository.com/artifact/org.slf4j/slf4j-jdk14/1.8.0-beta2

Add the following to the pom file:
<!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka -->
  <modelVersion>4.0.0</modelVersion>
  <groupId>KafkaClient</groupId>
  <artifactId>KafkaClient</artifactId>
  <version>0.0.1-SNAPSHOT</version>
<dependencies>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.12</artifactId>
    <version>2.1.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.slf4j/slf4j-api -->
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.8.0-beta2</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.slf4j/slf4j-log4j12 -->
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.8.0-beta2</version>
    <scope>test</scope>
</dependency>
<!-- https://mvnrepository.com/artifact/commons-logging/commons-logging -->
<dependency>
    <groupId>commons-logging</groupId>
    <artifactId>commons-logging</artifactId>
    <version>1.2</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.slf4j/slf4j-nop -->
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-nop</artifactId>
    <version>1.8.0-beta2</version>
    <scope>test</scope>
</dependency>
<!-- https://mvnrepository.com/artifact/org.slf4j/slf4j-simple -->
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-simple</artifactId>
    <version>1.8.0-beta2</version>
    <scope>test</scope>
</dependency>
<!-- https://mvnrepository.com/artifact/org.slf4j/slf4j-jdk14 -->
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-jdk14</artifactId>
    <version>1.8.0-beta2</version>
    <scope>test</scope>
</dependency>
</dependencies>
 

log4j.xml (place it on the classpath, e.g. src/main/resources):

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">

<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
  <appender name="console" class="org.apache.log4j.ConsoleAppender"> 
    <param name="Target" value="System.out"/> 
    <layout class="org.apache.log4j.PatternLayout"> 
      <param name="ConversionPattern" value="%-5p %c{1} - %m%n"/> 
    </layout> 
  </appender> 

  <root> 
    <priority value ="debug" /> 
    <appender-ref ref="console" /> 
  </root>

</log4j:configuration>

c) Create a producer class, MsgSender

(Figure: the MsgSender class)

The essential producer configuration parameters are:

parameter name      description
bootstrap.servers   the host:port connection string used to bootstrap the connection to the Kafka cluster
key.serializer      class used to serialize record keys; an implementation of org.apache.kafka.common.serialization.Serializer
value.serializer    class used to serialize record values before the producer sends them to the broker
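Both serializers turn keys and values into the byte arrays that actually go over the wire. Kafka's StringSerializer, used below, UTF-8-encodes the string; here is a minimal stdlib sketch of that behavior (a hypothetical class for illustration, not the real one, which also supports a configurable encoding):

```java
import java.nio.charset.StandardCharsets;

// Minimal sketch of what a String key/value serializer does:
// turn a String into the byte[] that is written to the broker.
public class SketchStringSerializer {
    public static byte[] serialize(String data) {
        // Kafka's StringSerializer defaults to UTF-8 encoding.
        return data == null ? null : data.getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] bytes = serialize("123123123");
        System.out.println(bytes.length); // 9 bytes, one per ASCII digit
    }
}
```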
Java code:
import java.util.Properties;
import java.util.concurrent.Future;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class MsgSender {

	public static final String TOPIC = "TestMsg";

	public static void main(String[] args) throws Exception {
		// Minimal producer configuration: broker address plus key/value serializers.
		Properties kafkaProps = new Properties();
		kafkaProps.setProperty("bootstrap.servers", "tjtestrac1:9092");
		kafkaProps.setProperty("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
		kafkaProps.setProperty("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

		Producer<String, String> producer = new KafkaProducer<String, String>(kafkaProps);
		ProducerRecord<String, String> record = new ProducerRecord<String, String>(TOPIC, "123123123",
				"Welcome to my home!!! ");
		try {
			System.out.println("sending start..............");
			// send() is asynchronous; blocking on the returned Future makes the call synchronous.
			Future<RecordMetadata> future = producer.send(record);
			future.get();
			System.out.println("sending end..............");
		} catch (Exception e) {
			e.printStackTrace();
		} finally {
			producer.close();
		}
	}
}
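The record above carries the key "123123123". With the DefaultPartitioner (visible in the producer config dump), a keyed record is hashed with murmur2 over the serialized key, so the same key always lands on the same partition. Below is a simplified stdlib illustration of that stable key-to-partition mapping; it substitutes String.hashCode() for Kafka's actual murmur2 hash, so the partition numbers it produces will not match Kafka's:

```java
// Simplified illustration: keyed records map deterministically to a partition.
public class KeyPartitioning {
    public static int partitionFor(String key, int numPartitions) {
        // Mask off the sign bit so the result is a valid partition index.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int p1 = partitionFor("123123123", 3);
        int p2 = partitionFor("123123123", 3);
        System.out.println(p1 == p2); // same key, same partition: true
    }
}
```

This is why records that must be consumed in order (e.g. all events for one account) are usually sent with the same key.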

Console output

INFO  ProducerConfig - ProducerConfig values: 
   acks = 1
   batch.size = 16384
   bootstrap.servers = [tjtestrac1:9092]
   buffer.memory = 33554432
   client.dns.lookup = default
   client.id = 
   compression.type = none
   connections.max.idle.ms = 540000
   delivery.timeout.ms = 120000
   enable.idempotence = false
   interceptor.classes = []
   key.serializer = class org.apache.kafka.common.serialization.StringSerializer
   linger.ms = 0
   max.block.ms = 60000
   max.in.flight.requests.per.connection = 5
   max.request.size = 1048576
   metadata.max.age.ms = 300000
   metric.reporters = []
   metrics.num.samples = 2
   metrics.recording.level = INFO
   metrics.sample.window.ms = 30000
   partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
   receive.buffer.bytes = 32768
   reconnect.backoff.max.ms = 1000
   reconnect.backoff.ms = 50
   request.timeout.ms = 30000
   retries = 2147483647
   retry.backoff.ms = 100
   sasl.client.callback.handler.class = null
   sasl.jaas.config = null
   sasl.kerberos.kinit.cmd = /usr/bin/kinit
   sasl.kerberos.min.time.before.relogin = 60000
   sasl.kerberos.service.name = null
   sasl.kerberos.ticket.renew.jitter = 0.05
   sasl.kerberos.ticket.renew.window.factor = 0.8
   sasl.login.callback.handler.class = null
   sasl.login.class = null
   sasl.login.refresh.buffer.seconds = 300
   sasl.login.refresh.min.period.seconds = 60
   sasl.login.refresh.window.factor = 0.8
   sasl.login.refresh.window.jitter = 0.05
   sasl.mechanism = GSSAPI
   security.protocol = PLAINTEXT
   send.buffer.bytes = 131072
   ssl.cipher.suites = null
   ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
   ssl.endpoint.identification.algorithm = https
   ssl.key.password = null
   ssl.keymanager.algorithm = SunX509
   ssl.keystore.location = null
   ssl.keystore.password = null
   ssl.keystore.type = JKS
   ssl.protocol = TLS
   ssl.provider = null
   ssl.secure.random.implementation = null
   ssl.trustmanager.algorithm = PKIX
   ssl.truststore.location = null
   ssl.truststore.password = null
   ssl.truststore.type = JKS
   transaction.timeout.ms = 60000
   transactional.id = null
   value.serializer = class org.apache.kafka.common.serialization.StringSerializer

Log on to the server and verify that the message was sent successfully:

[kafka@tjtestrac1 config]$ kafka-topics.sh --list --zookeeper localhost:2181
TestMsg
__consumer_offsets
test
[kafka@tjtestrac1 config]$ kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic TestMsg --from-beginning
Welcome to my home!!! 
Welcome to my home!!! 
Welcome to my home!!!

Reprinted from blog.csdn.net/chenxu_0209/article/details/84567716