docker mysql pxc cluster

Introduction: There are plenty of articles on the web about building this kind of cluster, but none of them fit what I needed. After studying it for a while I managed to build it successfully, across multiple servers.

Author: everythingok

Prerequisites: Docker installation is not covered; this article starts from the Swarm cluster. There are three servers A, B, and C, each with its own external network address (communication over an internal network is of course better), and the required ports must be open between them.

On CentOS 7, open the firewall so that the other nodes' IPs are accepted on any port:

firewall-cmd --zone=public --permanent --add-rich-rule='rule family="ipv4" source address="45.124.124.162" accept'
firewall-cmd --reload
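
Alternatively (or in addition), you can open only the ports that are actually used. A minimal sketch for firewalld, to be run on each of the three hosts; it assumes PXC replication stays on the overlay network, so only the published MySQL port needs to be reachable from outside:

# Docker Swarm: cluster management (2377/tcp), node communication (7946/tcp+udp), overlay VXLAN traffic (4789/udp)
firewall-cmd --permanent --add-port=2377/tcp --add-port=7946/tcp --add-port=7946/udp --add-port=4789/udp
# MySQL port published by the PXC containers, for external clients
firewall-cmd --permanent --add-port=3306/tcp
firewall-cmd --reload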

1. Swarm cluster

1.1 Initialize the cluster on node A (the manager)

docker swarm init --advertise-addr  A.A.A.A:2377
[root@xq-test-docker-master01 svn]# docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-274lmt9n21byry6tqshvcldfot2t8cvd455308l5d71a2g2t96-brff30kahfeu9jfokocurp9hj A.A.A.A:2377

1.2 Node B joins the cluster

docker swarm join --token SWMTKN-1-274lmt9n21byry6tqshvcldfot2t8cvd455308l5d71a2g2t96-brff30kahfeu9jfokocurp9hj A.A.A.A:2377

1.3 Node C joins the cluster

docker swarm join --token SWMTKN-1-274lmt9n21byry6tqshvcldfot2t8cvd455308l5d71a2g2t96-brff30kahfeu9jfokocurp9hj A.A.A.A:2377

1.4 View the cluster nodes on node A

docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
hmavp6p0u9r1dgjl2fxtdorao *   161                 Ready               Active              Leader              19.03.2
tzoini4a1hka4rawnzxxky84i     162                 Ready               Active                                  19.03.2
11maitff254rab4atq9e7qd2d     164                 Ready               Active                                  19.03.2

1.5 Create an overlay network on node A

1.5.1 Create the network

docker network create -d overlay --attachable pxc-network
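
Optionally, the subnet can be pinned when creating the network so that container addresses are predictable; if you want this, use the following instead of the command above (10.10.0.0/24 is just an example value):

docker network create -d overlay --attachable --subnet 10.10.0.0/24 pxc-network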

1.5.2 View Network

docker network inspect pxc-network

Issue 1: Running docker network inspect pxc-network on nodes B and C reports that the network cannot be found, but it is actually available there (an overlay network only shows up on a worker node once a container on that node attaches to it).
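
One way to confirm that the network really is usable from B or C is to attach a throwaway container on that node; a quick sketch, assuming the alpine image can be pulled there:

docker run --rm -it --net=pxc-network alpine ip addr   # should show an interface on the overlay subnet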

2. Build the PXC cluster

2.1 Pull percona-xtradb-cluster on nodes A, B, and C

2.1.1 Pull the image

docker pull percona/percona-xtradb-cluster

2.1.2 Re-tag the image with a shorter name

docker tag docker.io/percona/percona-xtradb-cluster pxc
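
To confirm that the short name points at the same image ID, and optionally drop the long original tag (docker rmi only removes the tag here, not the image itself):

docker images | grep -E 'pxc|percona'
docker rmi docker.io/percona/percona-xtradb-cluster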

2.2 Create the PXC service on node A

2.2.1 Create a volume. This step is required, because the PXC service needs a volume to mount its data directory.

docker volume create mysql-data-node
docker volume inspect mysql-data-node   # shows the mount point path and other details

2.2.2 Start the PXC service on node A

docker run -d -e MYSQL_ROOT_PASSWORD=******** -e CLUSTER_NAME=PXC  -e XTRABACKUP_PASSWORD=******** --net=pxc-network --privileged -p 3306:3306 -v mysql-data-node:/var/lib/mysql --name=pxc-node-1 pxc 
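
After starting the container, you can watch the log until mysqld reports it is ready, and then test a connection from inside the container (pxc-node-1 is the container name used above):

docker logs -f pxc-node-1              # wait for "ready for connections", then Ctrl+C
docker exec -it pxc-node-1 mysql -uroot -p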

Issue 1: The last line of the configuration file /etc/mysql/node.cnf contains a stray "ck" character; remove it.

Issue 2: If you cannot connect as root after startup, enter the PXC container and change the root password:

GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '123456';
FLUSH PRIVILEGES;

Issue 3: If you still cannot connect, modify the configuration file /etc/mysql/node.cnf and add skip-name-resolve.
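
Since that file lives inside the container, one way to do this (a sketch, assuming sed is available in the image) is to insert the option under the [mysqld] section and then restart the container, which at this point is still the only node in the cluster:

docker exec pxc-node-1 sed -i '/\[mysqld\]/a skip-name-resolve' /etc/mysql/node.cnf
docker restart pxc-node-1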

Issue 4: In my case the container only really started after running docker run a second time; I have not had time to investigate why.

Note: Only once you can really connect to the MySQL service with a client such as Navicat should you go on to create the other nodes.
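
Connecting from outside works like any normal MySQL server, e.g. with the mysql command-line client installed on your workstation (A.A.A.A stands for node A's address, as above):

mysql -h A.A.A.A -P 3306 -uroot -p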

2.3 Create the PXC service on node B

2.3.1 Create a volume

docker volume create mysql-data-node
docker volume inspect mysql-data-node   # shows the mount point path and other details

2.3.2 Start the PXC service on node B

# command used on node A, shown again for comparison
docker run -d -e MYSQL_ROOT_PASSWORD=******** -e CLUSTER_NAME=PXC  -e XTRABACKUP_PASSWORD=******** --net=pxc-network --privileged -p 3306:3306 -v mysql-data-node:/var/lib/mysql --name=pxc-node-1 pxc
# command to run on node B; note the extra -e CLUSTER_JOIN=pxc-node-1
docker run -d -e MYSQL_ROOT_PASSWORD=******** -e CLUSTER_NAME=PXC  -e XTRABACKUP_PASSWORD=******* --net=pxc-network -e CLUSTER_JOIN=pxc-node-1 --privileged -p 3306:3306 -v mysql-data-node:/var/lib/mysql --name=pxc-node-2 pxc

Note that on every node other than the first, the command adds -e CLUSTER_JOIN=pxc-node-1, which tells the new node which existing node to synchronize from.

While testing the Swarm cluster I was sometimes told that the network did not exist. I have not verified what actually causes this; I simply disbanded the cluster and had the nodes rejoin, after which the problem no longer appeared.

At this point I found that the second node failed to start. The reason is that the PXC service does not create the xtrabackup user by default, so you need to create it manually on the node that is already running:

GRANT ALL PRIVILEGES ON *.* TO 'xtrabackup'@'localhost' IDENTIFIED BY '*****';
FLUSH PRIVILEGES;

The password here just has to match the XTRABACKUP_PASSWORD passed to node B.
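
One way to run the statement on node A without opening an interactive session (the elided password is left as a placeholder):

docker exec -it pxc-node-1 mysql -uroot -p -e "GRANT ALL PRIVILEGES ON *.* TO 'xtrabackup'@'localhost' IDENTIFIED BY '*****'; FLUSH PRIVILEGES;"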

2.4 Create the PXC service on node C, in the same way as on node B.

3. Verify cluster synchronization. Create a table and insert some data; the changes were replicated to the other nodes successfully.
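
A quick way to check both cluster membership and replication from the command line (a sketch; test_db and t1 are just example names):

docker exec -it pxc-node-1 mysql -uroot -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"   # expect Value = 3
docker exec -it pxc-node-1 mysql -uroot -p -e "CREATE DATABASE IF NOT EXISTS test_db; CREATE TABLE IF NOT EXISTS test_db.t1 (id INT PRIMARY KEY); INSERT INTO test_db.t1 VALUES (1);"
docker exec -it pxc-node-2 mysql -uroot -p -e "SELECT * FROM test_db.t1;"   # run on node B: the row should be there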

4. Follow-up: add the HAProxy load-balancing and Keepalived services.

Origin www.cnblogs.com/everythingok001/p/11652674.html