Because a ZooKeeper election requires at least three nodes, we first prepare three servers (each with the JDK already installed), with the following IP addresses:
192.168.100.101
192.168.100.102
192.168.100.103
1. Configure hostname-to-IP mappings (this step is not required; we could write the IP addresses directly into the zk configuration file). The benefit of the mapping is that if an IP address changes, we don't need to restart ZooKeeper; we just update the IP address the hostname points to.
Edit the /etc/hosts file directly, mapping host zoo-1 to 192.168.100.101, zoo-2 to 192.168.100.102, and zoo-3 to 192.168.100.103:
[root@localhost zookeeper]# vi /etc/hosts
Append to the end of the file:
192.168.100.101 zoo-1
192.168.100.102 zoo-2
192.168.100.103 zoo-3
The mapping takes effect as soon as the file is saved.
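The mapping can be sanity-checked without restarting anything. A minimal sketch: the `lookup` helper is hypothetical, and a temporary file stands in for /etc/hosts so the sketch runs anywhere; on the real machines point it at /etc/hosts itself.

```shell
# lookup: print the IP a hosts-format file maps a hostname to
lookup() {
  awk -v host="$2" '$2 == host {print $1}' "$1"
}

# Stand-in for /etc/hosts so the sketch is self-contained
hosts_file=$(mktemp)
cat > "$hosts_file" <<'EOF'
192.168.100.101 zoo-1
192.168.100.102 zoo-2
192.168.100.103 zoo-3
EOF

lookup "$hosts_file" zoo-2   # prints 192.168.100.102
```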
2. Install ZooKeeper on one of the machines. Download ZooKeeper (zookeeper-3.4.9 is used here) to the zoo-1 machine; once extracted, it only needs its configuration file edited before use.
[root@localhost zookeeper]# mkdir -p /usr/local/soft/zookeeper
[root@localhost zookeeper]# tar zxvf /usr/local/download/zookeeper-3.4.9.tar.gz -C /usr/local/soft/zookeeper
Enter the conf directory and copy zoo_sample.cfg to a file named zoo.cfg (on startup, ZooKeeper looks for a configuration file named zoo.cfg under conf by default):
[root@localhost zookeeper]# cd /usr/local/soft/zookeeper/zookeeper-3.4.9/conf
[root@localhost conf]# cp zoo_sample.cfg zoo.cfg
Edit the configuration in zoo.cfg:
[root@localhost conf]# vi zoo.cfg
and set the contents as follows:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
# (this is the ZooKeeper data directory)
dataDir=/usr/local/data/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
# cluster members; the zoo-x names can be replaced with IP addresses
server.1=zoo-1:2888:3888
server.2=zoo-2:2888:3888
server.3=zoo-3:2888:3888
Here 2888 is the port the cluster members use to communicate with each other, 3888 is the port used for leader election, and 2181 is the client access port. Note that comments must sit on their own lines: a trailing "# ..." after a setting would be read as part of the value.
Create ZooKeeper's data directory, i.e. the directory that dataDir in zoo.cfg points to:
[root@localhost conf]# mkdir -p /usr/local/data/zookeeper
In the dataDir directory, create a file named myid and write this machine's number into it. For example, on the zoo-1 machine we set the value to 1, i.e. the digit after server. in the cluster entry server.1=zoo-1:2888:3888. This is how ZooKeeper identifies which cluster member a machine is.
[root@localhost conf]# echo "1" > /usr/local/data/zookeeper/myid
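Since the hostnames follow a zoo-N pattern and the server ids match N, the myid value can be derived from the hostname, so the same command works unchanged on every node. A sketch; the `myid_for` helper and the zoo-N naming convention are assumptions from this setup:

```shell
# myid_for: map a zoo-N hostname to its server id (the N in server.N=...)
# by stripping everything up to and including the last '-'
myid_for() { printf '%s\n' "${1##*-}"; }

myid_for zoo-1   # prints 1
# On each node: myid_for "$(hostname)" > /usr/local/data/zookeeper/myid
```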
With that, one of the cluster's servers is fully configured.
3. Install ZooKeeper on the other two machines as well; the files can simply be copied over with the scp command. Every machine uses the same configuration.
[root@localhost conf]# scp -r /usr/local/soft/zookeeper/zookeeper-3.4.9/ [email protected]:/usr/local/soft/zookeeper
[email protected]’s password:
Enter the password and the files are transferred to the other cluster machine. Note: the only per-machine difference is the content of myid under dataDir, which must be set according to the cluster entries in zoo.cfg. For example, 192.168.100.102 corresponds to server.2 in the cluster, so write 2 into its myid. At this point the cluster configuration is complete and the cluster can be started.
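Rather than setting each myid by hand, the id for a host can be read straight from the server.N lines of zoo.cfg. A sketch: the `id_for_host` helper is hypothetical, a throwaway demo config stands in for the real conf/zoo.cfg, and the commented loop assumes passwordless ssh/scp to the other nodes:

```shell
# id_for_host: print the N of the server.N=host:... entry matching a hostname
# (splitting each line on '.', '=' and ':' gives: server | N | host | ...)
id_for_host() {   # id_for_host <zoo.cfg> <hostname>
  awk -F'[.=:]' -v h="$2" '$1 == "server" && $3 == h {print $2}' "$1"
}

# Self-contained demo config (the real one lives in conf/zoo.cfg)
cfg=$(mktemp)
printf 'server.1=zoo-1:2888:3888\nserver.2=zoo-2:2888:3888\nserver.3=zoo-3:2888:3888\n' > "$cfg"
id_for_host "$cfg" zoo-2   # prints 2

# On the real machines (assumption: passwordless ssh to the other nodes):
# for h in zoo-2 zoo-3; do
#   id=$(id_for_host conf/zoo.cfg "$h")
#   scp -r /usr/local/soft/zookeeper/zookeeper-3.4.9/ "root@$h:/usr/local/soft/zookeeper"
#   ssh "root@$h" "mkdir -p /usr/local/data/zookeeper && echo $id > /usr/local/data/zookeeper/myid"
# done
```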
4. Start the ZooKeeper cluster. My startup order is zoo-1 -> zoo-2 -> zoo-3.
[root@localhost zookeeper]# pwd
/usr/local/soft/zookeeper
[root@localhost zookeeper]# zookeeper-3.4.9/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/soft/zookeeper/zookeeper-3.4.9/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
This indicates the zoo-1 server has started. A ZooKeeper log file, zookeeper.out, is also generated in the current working directory.
[root@localhost zookeeper]# pwd
/usr/local/soft/zookeeper
[root@localhost zookeeper]# ll
total 24
drwxr-xr-x. 10 1001 1001  4096 Aug 23  2016 zookeeper-3.4.9
-rw-r--r--.  1 root root 19350 Oct 27 23:54 zookeeper.out
This file records the cluster's log output. While only the zoo-1 server is running, exceptions appear in the log because the other cluster members have not started yet, so these exceptions are expected. They look roughly like this:
[root@localhost zookeeper]# tail -f zookeeper.out
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:381)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:426)
at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:843)
at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:822)
2017-10-27 23:35:29,886 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer$QuorumServer@149] - Resolved hostname: zoo-2 to address: zoo-2/192.168.100.102
2017-10-27 23:35:33,091 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumCnxManager@400] - Cannot open channel to 3 at election address zoo-3/192.168.100.103:3888
Next, start zoo-2 and zoo-3. Once all three are up, the cluster automatically elects one server as the leader, and the remaining servers become followers. You can check each server's role with the following command:
[root@zoo-1 zookeeper]# zookeeper-3.4.9/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/soft/zookeeper/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: follower
[root@zoo-2 zookeeper]# zookeeper-3.4.9/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/soft/zookeeper/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: leader
[root@zoo-3 zookeeper]# zookeeper-3.4.9/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/soft/zookeeper/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: follower
From the status output above, zoo-2 is the cluster's leader and the other two nodes are followers. You can now connect to the ZooKeeper cluster with a client. To a client, ZooKeeper is a single whole (an ensemble); you can open a connection to the service from any node,
for example:
[root@localhost zookeeper]# zookeeper-3.4.9/bin/zkCli.sh -server 192.168.100.101:2181
Connecting to 192.168.100.101:2181
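The per-node status check from step 4 can also be scripted once the cluster is up. A sketch: the `parse_mode` helper is hypothetical and just pulls the "Mode:" line out of zkServer.sh's status output; the commented loop assumes passwordless ssh to all three nodes.

```shell
# parse_mode: extract the role from `zkServer.sh status` output on stdin
parse_mode() { awk -F': ' '/^Mode/ {print $2}'; }

# Demo on a captured status output
printf 'ZooKeeper JMX enabled by default\nMode: leader\n' | parse_mode   # prints leader

# On a machine that can reach all three nodes (assumption: passwordless ssh):
# for h in zoo-1 zoo-2 zoo-3; do
#   printf '%s: ' "$h"
#   ssh "root@$h" /usr/local/soft/zookeeper/zookeeper-3.4.9/bin/zkServer.sh status 2>/dev/null | parse_mode
# done
```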