Kafka & ZooKeeper Cluster Installation


Servers:
#vim /etc/hosts
10.16.166.90 sh-xxx-xxx-xxx-online-01
10.16.168.220 sh-xx-xxx-xxx-online-02
10.16.167.15 sh-xxx-xxx-xxx-online-03
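
A quick check that these entries resolve on every node (a sketch; getent is assumed to be available on these hosts):

#getent hosts sh-xxx-xxx-xxx-online-01 sh-xx-xxx-xxx-online-02 sh-xxx-xxx-xxx-online-03
## each hostname should map to the IP listed in /etc/hosts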


#vim /etc/yum.repos.d/cdh.repo
[myrepo]
name=myrepo
baseurl=http://172.19.30.51/cdh/5
enabled=1
gpgcheck=0

#yum install -y zookeeper-server zookeeper
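
A quick sanity check that both packages are installed:

#rpm -q zookeeper zookeeper-server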

vim /etc/zookeeper/conf/zoo.cfg

maxClientCnxns=100
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/data/zookeeper
# the port at which the clients will connect
clientPort=2181
# the directory where the transaction logs are stored.
dataLogDir=/data/zookeeper
autopurge.purgeInterval=6
autopurge.snapRetainCount=20
server.1=10.16.166.90:2888:3888
server.2=10.16.168.220:2888:3888
server.3=10.16.167.15:2888:3888
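
The same zoo.cfg has to be present on all three servers; 2888 is the quorum port and 3888 the leader-election port. A minimal sketch for copying the file to the other two nodes (assuming root SSH access between them):

#scp /etc/zookeeper/conf/zoo.cfg 10.16.168.220:/etc/zookeeper/conf/
#scp /etc/zookeeper/conf/zoo.cfg 10.16.167.15:/etc/zookeeper/conf/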

#mkdir -p /data/zookeeper

Initialize ZooKeeper:
# zookeeper-server-initialize
No myid provided, be sure to specify it in /services/data/hadoop/zookeeper/myid if using non-standalone


Manually create a myid file on each node; the id must match the number after "server." for that host in zoo.cfg.
For example, on 10.16.166.90:
echo 1 > /data/zookeeper/myid

On 10.16.168.220:
echo 2 > /data/zookeeper/myid

On 10.16.167.15:
echo 3 > /data/zookeeper/myid
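
The file should contain nothing but the node's own id:

#cat /data/zookeeper/myid
## prints 1, 2, or 3 depending on the host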

Fix ownership:

#chown -R zookeeper:zookeeper /data/zookeeper/

#ln -sf /usr/local/java/bin/java /usr/sbin/java ## so the service scripts can find the JDK's java binary


Start the service:
sudo service zookeeper-server start
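
Once all three servers are up, the standard four-letter command can confirm each node is healthy (nc is assumed to be installed):

#echo ruok | nc localhost 2181    ## a healthy server answers "imok"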


Install Kafka

wget http://mirrors.tuna.tsinghua.edu.cn/apache/kafka/0.9.0.1/kafka_2.11-0.9.0.1.tgz
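
The steps below assume the archive is unpacked under /usr/local/kafka (the path referenced in the next command); a minimal sketch:

#tar -xzf kafka_2.11-0.9.0.1.tgz -C /usr/local/
#ln -s /usr/local/kafka_2.11-0.9.0.1 /usr/local/kafka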


#vim /usr/local/kafka/config/server.properties

##

broker.id=1 ## must be different on each broker
listeners=PLAINTEXT://10.16.166.90:9092 ## this broker's IP
host.name=10.16.166.90 ## this broker's IP
num.network.threads=18
num.io.threads=24
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafka
num.partitions=3
num.recovery.threads.per.data.dir=1
log.flush.interval.messages=10000
log.flush.interval.ms=1000
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=10.16.166.90:2181,10.16.168.220:2181,10.16.167.15:2181 ## all three ZooKeeper nodes
zookeeper.connection.timeout.ms=6000
default.replication.factor = 2
delete.topic.enable=true
unclean.leader.election.enable=false
min.insync.replicas=2
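
broker.id, listeners, and host.name are the only values that differ per broker. A sketch of the changes for the second node, assuming the same server.properties is first copied to all three machines (example for 10.16.168.220; adjust likewise for 10.16.167.15):

#sed -i 's/^broker.id=1/broker.id=2/' /usr/local/kafka/config/server.properties
#sed -i 's/10.16.166.90:9092/10.16.168.220:9092/' /usr/local/kafka/config/server.properties
#sed -i 's/^host.name=.*/host.name=10.16.168.220/' /usr/local/kafka/config/server.properties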

#mkdir /data/kafka


# Start Kafka

#cd /usr/local/kafka
#nohup bin/kafka-server-start.sh config/server.properties >> kafka.log 2>&1 &
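
To confirm the broker came up (a sketch; assumes the JDK's jps and iproute's ss are on the PATH):

#jps | grep -i kafka      ## the Kafka JVM should be listed
#ss -tnlp | grep 9092     ## the PLAINTEXT listener should be bound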
Test Kafka:


Create a topic
#kafka-topics.sh --create --zookeeper 10.16.166.90:2181 --replication-factor 2 --partitions 9 --topic dsperrorlog_test
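
The partition assignment can be verified with --describe:

#kafka-topics.sh --describe --zookeeper 10.16.166.90:2181 --topic dsperrorlog_test
## shows leader, replicas, and ISR for each of the 9 partitions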

Start a consumer
#./kafka-console-consumer.sh --zookeeper 10.16.166.90:2181 --topic dsperrorlog_test
Anything typed into the producer below will show up here.


Producer:

#./kafka-console-producer.sh --broker-list 10.16.166.90:9092 --topic dsperrorlog_test
Type anything and press Enter; it should appear in the consumer window.

Adjust the JVM heap size and add a JMX port:

#vim /usr/local/kafka/bin/kafka-server-start.sh

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
export KAFKA_HEAP_OPTS="-Xmx3G -Xms3G" ## changed from the default
export JMX_PORT="9999" ## added
fi
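
After restarting the broker, the JMX port should be listening and the new heap size visible on the process command line (a sketch; assumes ss is available):

#ss -tnl | grep 9999                                    ## JMX port bound
#ps -ef | grep '[k]afka\.Kafka' | grep -o 'Xmx[0-9]*G'  ## should print Xmx3G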


## Verify the Kafka broker registrations from zookeeper-client
#zookeeper-client
#ls /
#ls /brokers/ids
#get /brokers/ids/1
#get /brokers/ids/2
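
Other znodes worth checking while in the client (standard Kafka paths):

#get /controller       ## shows which broker is currently the controller
#ls /brokers/topics    ## topics registered in the cluster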


Check the ZooKeeper cluster status:

zookeeper-server status
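
To see the role of every node at once (assuming nc is available on the host running the loop):

#for h in 10.16.166.90 10.16.168.220 10.16.167.15; do echo -n "$h: "; echo stat | nc $h 2181 | grep Mode; done
## one node should report Mode: leader, the other two Mode: follower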


Stop Kafka: bin/kafka-server-stop.sh (the script does not need the config file argument)


Reposted from www.cnblogs.com/Qing-840/p/9264236.html