Deploy standalone Kafka with docker and docker-compose

Prerequisites

  1. docker
  2. docker-compose

Of the two, docker-compose is optional; you can also deploy with docker alone. Both approaches are covered below.

docker deployment

Deploying Kafka with docker is straightforward: two commands are enough to bring up a Kafka server.

docker run -d --name zookeeper -p 2181:2181  wurstmeister/zookeeper
docker run -d --name kafka -p 9092:9092 -e KAFKA_BROKER_ID=0 -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 --link zookeeper -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.1.60:9092 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 -t wurstmeister/kafka

Since Kafka depends on ZooKeeper, a ZooKeeper instance has to be deployed as well, which is also easy with docker. Replace 192.168.1.60 in the KAFKA_ADVERTISED_LISTENERS value with your own host machine's IP.
You can run docker ps to check the status of the two containers; the output is not shown here.
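For example, a quick way to check that both containers are up (the exact output depends on your host):

    docker ps --filter name=zookeeper --filter name=kafka --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"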

Next, you can try out a producer and a consumer.

Produce and consume test messages with Kafka's built-in tools

  1. First, enter the kafka docker container
    docker exec -it kafka sh
    
  2. Run a consumer and listen for messages

    kafka-console-consumer.sh --bootstrap-server 192.168.1.60:9092 --topic kafeidou --from-beginning
    
  3. Open a new ssh session, enter the kafka container in the same way, and run the following command to produce messages

    kafka-console-producer.sh --broker-list 192.168.1.60:9092 --topic kafeidou
    

    This command opens a console where you can type any message you want to send; here we send hello

    >hello
    >
    
  4. As you can see, after a message is entered in the producer's console, it immediately appears in the consumer's console

At this point, a complete Kafka hello world is done: Kafka deployment plus a producer/consumer test.
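Optionally, you can check that the kafeidou topic exists (with the broker's default auto.create.topics.enable=true it is created on first use). From inside the kafka container, depending on the Kafka version shipped in the image, the topic tool takes either --bootstrap-server as below or --zookeeper zookeeper:2181 instead:

    kafka-topics.sh --bootstrap-server 192.168.1.60:9092 --list
    kafka-topics.sh --bootstrap-server 192.168.1.60:9092 --describe --topic kafeidou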

Test with Java code

  1. Create a new maven project and add the following dependencies
    <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka-clients</artifactId>
      <version>2.1.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka_2.11</artifactId>
      <version>0.11.0.2</version>
    </dependency>
    
  2. Producer code
    HelloWorldProducer.java
import org.apache.kafka.clients.producer.*;

import java.util.Date;
import java.util.Properties;
import java.util.Random;

public class HelloWorldProducer {
  public static void main(String[] args) {
    long events = 30;
    Random rnd = new Random();

    // Producer configuration; bootstrap.servers must match the broker's advertised listener
    Properties props = new Properties();
    props.put("bootstrap.servers", "192.168.1.60:9092");
    props.put("acks", "all");
    props.put("retries", 0);
    props.put("batch.size", 16384);
    props.put("linger.ms", 1);
    props.put("buffer.memory", 33554432);
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

    Producer<String, String> producer = new KafkaProducer<>(props);

    String topic = "kafeidou";

    // Send 30 records keyed by a random IP; the callback prints the offset assigned by the broker
    for (long nEvents = 0; nEvents < events; nEvents++) {
      long runtime = new Date().getTime();
      String ip = "192.168.2." + rnd.nextInt(255);
      String msg = runtime + ",www.example.com," + ip;
      System.out.println(msg);
      ProducerRecord<String, String> data = new ProducerRecord<>(topic, ip, msg);
      producer.send(data,
          new Callback() {
            public void onCompletion(RecordMetadata metadata, Exception e) {
              if (e != null) {
                e.printStackTrace();
              } else {
                System.out.println("The offset of the record we just sent is: " + metadata.offset());
              }
            }
          });
    }
    System.out.println("send message done");
    // close() flushes any buffered records before the program exits
    producer.close();
  }
}
  3. Consumer code
    HelloWorldConsumer2.java
import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class HelloWorldConsumer2 {

  public static void main(String[] args) {
    // Consumer configuration: auto-commit offsets every second, and start from the earliest
    // offset when the group has no committed offset yet
    Properties props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.60:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "kafeidou_group");
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
    props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

    Consumer<String, String> consumer = new KafkaConsumer<>(props);
    consumer.subscribe(Arrays.asList("kafeidou"));

    // Poll in a loop and print every record received
    while (true) {
      ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
      for (ConsumerRecord<String, String> record : records) {
        System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
      }
    }
  }
}
  4. Run the producer and consumer separately; one way to launch them is sketched after this list.
    Producer output:
    1581651496176,www.example.com,192.168.2.219
    1581651497299,www.example.com,192.168.2.112
    1581651497299,www.example.com,192.168.2.20
    
    Consumer output:
    offset = 0, key = 192.168.2.202, value = 1581645295298,www.example.com,192.168.2.202
    offset = 1, key = 192.168.2.102, value = 1581645295848,www.example.com,192.168.2.102
    offset = 2, key = 192.168.2.63, value = 1581645295848,www.example.com,192.168.2.63
    
    Source code site: FISHStack / kafka-demo
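One possible way to launch the two classes, as a sketch assuming a standard Maven layout with both classes in the default package (the exec-maven-plugin goal is used here; an IDE run configuration works just as well):

    mvn compile exec:java -Dexec.mainClass=HelloWorldProducer
    mvn compile exec:java -Dexec.mainClass=HelloWorldConsumer2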

Deploy Kafka through docker-compose

First create a docker-compose.yml file

version: '3.7'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    volumes:
      - ./data:/data
    ports:
      - 2182:2181

  kafka9094:
    image: wurstmeister/kafka
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 0
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.1.60:9092
      KAFKA_CREATE_TOPICS: "kafeidou:2:1"   # after Kafka starts, initialize a topic named kafeidou with 2 partitions and a replication factor of 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
    volumes:
      - ./kafka-logs:/kafka
    depends_on:
      - zookeeper

Deployment is simple: just run docker-compose up -d in the directory containing docker-compose.yml. The test method is the same as above.
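For example, from the directory holding docker-compose.yml (docker-compose ps is just a quick status check of the two services):

    docker-compose up -d
    docker-compose ps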
This docker-compose setup does more than the plain docker method above:

  1. Data persistence. Two directories under the current directory are mounted to store the zookeeper and kafka data respectively. The same effect can of course be achieved with docker run by adding -v options.
  2. Kafka initializes a topic with 2 partitions after it starts. Similarly, this can be done with docker run by adding -e KAFKA_CREATE_TOPICS=kafeidou:2:1; a rough equivalent command is sketched after this list.
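A rough sketch of that equivalent docker run command with both options added, assuming the zookeeper container from the first section is still running under the name zookeeper and reusing the same IP, image, and mount paths as the compose file above:

    docker run -d --name kafka -p 9092:9092 \
      -e KAFKA_BROKER_ID=0 \
      -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
      -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.1.60:9092 \
      -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
      -e KAFKA_CREATE_TOPICS=kafeidou:2:1 \
      -v $(pwd)/kafka-logs:/kafka \
      --link zookeeper \
      -t wurstmeister/kafka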

Summary: docker-compose deployment is the recommended approach

Why?

With the plain docker deployment, if something changes (for example, the externally exposed port number), you have to stop the container with docker stop <container ID/name>, then delete it with docker rm <container ID/name>, and finally start a new container with docker run ...

With a docker-compose deployment, you only need to edit the corresponding place in docker-compose.yml, for example change 2181:2181 to 2182:2182, and then run docker-compose up -d again in the directory containing docker-compose.yml to apply the update.

 

> Originally published by Four Coffee Beans!
> Follow the official account [Four Coffee Beans] to get the latest content
 

 

Origin blog.csdn.net/lypgcs/article/details/104326484