Kafka installation tutorial (based on the official quickstart)

I have recently been learning Kafka, and the first step in learning Kafka is, of course, to install it!

Environment: Ubuntu

kafka download address: http://kafka.apache.org/downloads

Official installation tutorial: http://kafka.apache.org/quickstart


1. Unzip the downloaded installation package (I downloaded kafka_2.11-1.1.0.tgz)


tar -xzf kafka_2.11-1.1.0.tgz


OK, at this point you have completed the installation! Yes, really, the installation is complete! No additional configuration is required. Next, we'll introduce the common commands and how to deploy a multi-broker cluster on a single machine.
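One prerequisite worth checking before moving on (this check is my own addition, not part of the official quickstart): Kafka runs on the JVM, so a Java runtime must be available on the PATH, or the scripts below will fail to start.

```shell
# Sanity check (not from the quickstart): verify a Java runtime is installed,
# since all of the Kafka scripts below depend on one.
if command -v java >/dev/null 2>&1; then
    java -version
else
    echo "Java not found: install one first, e.g. sudo apt-get install default-jdk"
fi
```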


2. Common Kafka commands (all commands are executed from the extracted directory)

Go to the extracted directory:

cd kafka_2.11-1.1.0


(1) Start Kafka (ZooKeeper must be started first; each script runs in the foreground, so use a separate terminal for each, or append & to run it in the background)

bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties

(2) Create topic

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

(3) View topic

bin/kafka-topics.sh --list --zookeeper localhost:2181

(4) Send a message

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

(5) View the message

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
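To see the two commands working together, run the producer and the consumer in separate terminals: each line typed into the producer appears in the consumer. The session below is illustrative and assumes the broker from step (1) is running.

```shell
# Terminal 1: type messages into the console producer (Ctrl-C to exit)
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
# > hello kafka
# > this is my first message

# Terminal 2: the console consumer prints each message as it arrives
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
# hello kafka
# this is my first message
```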


(6) Multi-broker startup mode

a. Copy the configuration file

cp config/server.properties config/server-1.properties
cp config/server.properties config/server-2.properties

b. Modify the configuration file

Use vim to edit server-1.properties and server-2.properties, and change the broker.id, listeners, and log.dir properties. (If two brokers share the same values, startup will fail: broker ids, ports, and log directories must all be unique on a single machine.)

config/server-1.properties:

    broker.id=1
    listeners=PLAINTEXT://:9093
    log.dir=/tmp/kafka-logs-1

config/server-2.properties:

    broker.id=2
    listeners=PLAINTEXT://:9094
    log.dir=/tmp/kafka-logs-2

c. Start the cluster

bin/kafka-server-start.sh config/server-1.properties &
bin/kafka-server-start.sh config/server-2.properties &

d. Create a topic with a replication factor of 3 (each broker holds a replica of the partition, so if one broker fails the cluster can still serve the topic)

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic

e. View the topic's details

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic

Topic:my-replicated-topic   PartitionCount:1    ReplicationFactor:3 Configs:
Topic: my-replicated-topic  Partition: 0    Leader: 1   Replicas: 1,2,0 Isr: 1,2,0

Here Leader is the broker responsible for all reads and writes for the partition, Replicas lists the brokers that replicate the partition's log, and Isr ("in-sync replicas") is the subset of replicas currently caught up with the leader.


(7) Import messages using a file

a. Create an input file containing two messages, foo and bar; create the file in the extracted directory

echo -e "foo\nbar" > test.txt
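A portability note (my own addition): `echo -e` is a bash extension, and under a plain POSIX `sh` the `-e` may be printed literally. `printf` behaves the same way in every POSIX shell:

```shell
# Portable alternative to echo -e: printf interprets \n in any POSIX shell.
printf 'foo\nbar\n' > test.txt
cat test.txt
# foo
# bar
```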

b. Import with default configuration

bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties

c. View the exported file

more test.sink.txt
foo
bar

d. View messages in kafka

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic connect-test --from-beginning
{"schema":{"type":"string","optional":false},"payload":"foo"}
{"schema":{"type":"string","optional":false},"payload":"bar"}

e. The connectors keep processing test.txt, so any lines appended to the file are picked up, published to the connect-test topic, and written through to test.sink.txt. Keep watching test.sink.txt and the console consumer while appending:

echo "Another line" >> test.txt








