Deployment and verification of Calico on Docker

1. Background

The following deployment uses a five-server environment as an example:

Server 1: hostname etcdnode1, IP 192.168.56.100
Server 2: hostname etcdnode2, IP 192.168.56.101
Server 3: hostname etcdnode3, IP 192.168.56.102
Server 4: hostname hostnode1, IP 192.168.56.200
Server 5: hostname hostnode2, IP 192.168.56.201

Among them, etcdnode1, etcdnode2 and etcdnode3 will run etcd as the backend distributed datastore for the calico network; hostnode1 and hostnode2 will run the calico network.

Software background:

•	Ubuntu 16.04
•	etcd - v3.1.10 
•	Docker
•	calicoctl - v1.6.1
•	calico/node image - v2.6.2
•	calico, calico-ipam plugins - v1.11.0

2. Deployment

2.1. etcd deployment

Deploy etcd on etcdnode1, etcdnode2 and etcdnode3 by executing the following commands on each node.

2.1.1. Installing etcd

# cd /usr/local
# curl -L https://github.com/coreos/etcd/releases/download/v3.1.10/etcd-v3.1.10-linux-amd64.tar.gz -o etcd-v3.1.10-linux-amd64.tar.gz
# tar -zxf etcd-v3.1.10-linux-amd64.tar.gz
# cd etcd-v3.1.10-linux-amd64
# cp etcd etcdctl /usr/bin
# mkdir -p /var/lib/etcd
# chmod -R a+rw /var/lib/etcd

2.1.2. Creating systemd service files

Open the /etc/systemd/system/etcd.service file using vi.

[Unit]
Description=etcd
Documentation=https://github.com/coreos/etcd

[Service]
Type=notify
Restart=always
RestartSec=5s
LimitNOFILE=40000
TimeoutStartSec=0 

ExecStart=/usr/bin/etcd --name ${local_hostname} \
   --data-dir /var/lib/etcd \
   --listen-client-urls http://0.0.0.0:2379 \
   --listen-peer-urls http://0.0.0.0:2380 \
   --advertise-client-urls http://${local_IP}:2379 \
   --initial-advertise-peer-urls http://${local_IP}:2380 \
   --initial-cluster etcdnode1=http://192.168.56.100:2380,etcdnode2=http://192.168.56.101:2380,etcdnode3=http://192.168.56.102:2380 \
   --initial-cluster-token my-etcd-token \
   --initial-cluster-state new
 
[Install]
WantedBy=multi-user.target

Note that ${local_hostname} and ${local_IP} must be replaced with each node's own hostname and IP address.
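Since the unit file is identical on every node except for the two placeholders, the substitution can be scripted. A minimal sketch, assuming the unit above is saved as etcd.service.template (a hypothetical filename):

```shell
# Hypothetical helper: fill in the two placeholders for one node.
# $1 = node hostname, $2 = node IP, $3 = template path, $4 = output path
render_etcd_unit() {
    sed -e "s/\${local_hostname}/$1/g" \
        -e "s/\${local_IP}/$2/g" \
        "$3" > "$4"
}

# Usage on etcdnode1 (run the equivalent on each node):
# render_etcd_unit etcdnode1 192.168.56.100 etcd.service.template /etc/systemd/system/etcd.service
```

The generated unit file then goes through the normal systemctl steps described in the next section.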

2.1.3. Start etcd service

After all etcd nodes have completed the above steps, run the following commands on each node at roughly the same time, since the members need to reach each other to bootstrap the cluster.

# systemctl daemon-reload
# systemctl enable etcd.service
# systemctl start etcd.service

2.1.4. Check etcd status

# etcdctl cluster-health               // check the health of the cluster
# etcdctl member list                  // list the members of the cluster

2.2. docker deployment

Both hostnode1 and hostnode2 nodes need to be configured.

2.2.1. Install docker

# apt -y install docker.io

2.2.2. Modify daemon.json

The Docker daemon needs etcd as its cluster store, which is configured in the /etc/docker/daemon.json file. Open /etc/docker/daemon.json with vi and replace ${local_IP} with the IP address of each docker host.

{
    "cluster-store":"etcd://192.168.56.100:2379,192.168.56.101:2379,192.168.56.102:2379",
    "cluster-advertise":"${local_IP}:2375",
    "hosts": ["tcp://0.0.0.0:2375","unix:///var/run/docker.sock"]
}
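It can be worth validating the file before restarting Docker, since dockerd refuses to start on malformed JSON. A small sketch using python3's built-in json.tool (the helper name validate_json is made up):

```shell
# Hypothetical helper: exits non-zero if the file is not valid JSON.
validate_json() {
    python3 -m json.tool "$1" > /dev/null
}

# Usage:
# validate_json /etc/docker/daemon.json && echo "daemon.json is valid JSON"
```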

2.2.3. Restart the docker service

# systemctl restart docker.service

The restart may take some time. Once it completes, confirm that the docker configuration has taken effect.

# docker info
Cluster Store: etcd://192.168.56.100:2379,192.168.56.101:2379,192.168.56.102:2379
Cluster Advertise: 192.168.56.200:2375
Insecure Registries:
  127.0.0.0/8

2.3. calico deployment

Each docker host needs to be configured.

2.3.1. Download calico

Note: Here we download and use calicoctl v1.6.1. As of this writing, Calico v3.1.1 has been released.

# wget -O /usr/local/bin/calicoctl https://github.com/projectcalico/calicoctl/releases/download/v1.6.1/calicoctl
# chmod +x /usr/local/bin/calicoctl
# mkdir /var/lib/calico
# curl -L -o /var/lib/calico/calico https://github.com/projectcalico/cni-plugin/releases/download/v1.11.0/calico
# curl -L -o /var/lib/calico/calico-ipam https://github.com/projectcalico/cni-plugin/releases/download/v1.11.0/calico-ipam
# chmod +x /var/lib/calico/calico
# chmod +x /var/lib/calico/calico-ipam

2.3.2. Add calico configuration

# mkdir -p /etc/calico

Then add the following content to the /etc/calico/calicoctl.cfg file. The main purpose is to configure the etcd endpoints; if there are multiple etcd nodes, separate them with commas.

apiVersion: v1
kind: calicoApiConfig
metadata:
spec:
  datastoreType: "etcdv2"
  etcdEndpoints: "http://192.168.56.100:2379,http://192.168.56.101:2379,http://192.168.56.102:2379"

2.3.3. Setting Kernel Network Parameters

Calico requires parameters such as "net.ipv4.conf.all.rp_filter" and "net.ipv4.ip_forward" to be enabled, but some distributions do not enable these parameters by default, so they need to be enabled manually.

# echo "net.ipv4.conf.all.rp_filter=1" >> /etc/sysctl.conf
# echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
# sysctl -p
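To confirm the settings took effect, the values can be read back from /proc/sys (equivalent to `sysctl -n`); both should print 1:

```shell
# Read the current values back; both should be 1 after sysctl -p.
cat /proc/sys/net/ipv4/ip_forward
cat /proc/sys/net/ipv4/conf/all/rp_filter
```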

2.3.4. Start the calico/node container

To start the calico/node container, the corresponding image may need to be pulled from the registry first. As before, ${local_IP} must be replaced with the IP address of the respective docker host.

# calicoctl node run --node-image=calico/node:v2.6.2 --ip=${local_IP}

Check the connection status.

# calicoctl node status
Calico process is running.

IPv4 BGP status
+----------------+-------------------+-------+------------+-------------+
|  PEER ADDRESS  |     PEER TYPE     | STATE |   SINCE    |    INFO     |
+----------------+-------------------+-------+------------+-------------+
| 192.168.56.201 | node-to-node mesh | up    | 2017-11-06 | Established |
+----------------+-------------------+-------+------------+-------------+
IPv6 BGP status
No IPv6 peers found.

2.3.5. Create a docker network

Note: This step only needs to be performed on one docker host node; the different nodes share the calico network.

# docker network create --driver calico --ipam-driver calico-ipam ${network name}

Here, we create a calico network named "calico-network".

# docker network create --driver calico --ipam-driver calico-ipam "calico-network"

2.3.6. Verifying the calico network

Execute the following command on hostnode1:

# docker run --net calico-network --name workload-A -tid busybox

Execute the following command on hostnode2:

# docker run --net calico-network --name workload-B -tid busybox

Then ping the IP address of the container workload-B on the container workload-A. If the connection is successful, the configuration is successful.

First, obtain the IP address of workload-B and execute the command on hostnode2.

# docker exec workload-B hostname -i
192.168.0.17

Then ping that IP address on hostnode1.

# docker exec workload-A ping 192.168.0.17
PING 192.168.0.17 (192.168.0.17) 56(84) bytes of data.
64 bytes from 192.168.0.17: icmp_seq=1 ttl=64 time=0.165 ms

A successful ping shows that two containers on different docker hosts, attached to the same calico network, can communicate with each other.

2.3.7. Configuring the ingress feature

If the docker host itself needs to access the container network (in the above example, accessing the IP address of workload-A from hostnode2), you need to configure the ingress rules of the calico network.

Export the existing configuration first.

# calicoctl get profile "calico-network" -o json > profile.json

The ingress section of profile.json configures rules for incoming traffic. Add a second rule allowing traffic from the docker hosts' subnet (note the source->nets part); the result looks roughly as follows:

"ingress": [
       {
          "action":"allow",
          "source": {
            "tag":"calico-network"
          },
          "destination": {}
       },
       {
          "action":"allow",
          "source": {
            "nets": [
               "192.168.56.1/24"
            ]
          },
          "destination": {}
       }
     ],

Then replace the modified profile.json file.

# calicoctl replace -f profile.json
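Instead of editing the JSON by hand, the extra rule can also be appended with a short script before running calicoctl replace. A sketch that assumes the exported profile keeps its rules under spec.ingress, as calicoctl v1.x does (the helper name add_subnet_rule is made up):

```shell
# Hypothetical helper: add_subnet_rule FILE CIDR appends an allow rule
# for CIDR to spec.ingress in an exported calico profile.
add_subnet_rule() {
    python3 - "$1" "$2" <<'EOF'
import json, sys

path, cidr = sys.argv[1], sys.argv[2]
with open(path) as f:
    profile = json.load(f)

# Allow incoming traffic from the given subnet in addition to the
# existing intra-network rule.
profile["spec"]["ingress"].append({
    "action": "allow",
    "source": {"nets": [cidr]},
    "destination": {},
})

with open(path, "w") as f:
    json.dump(profile, f, indent=2)
EOF
}

# Usage on the exported profile, then replace as before:
# add_subnet_rule profile.json 192.168.56.1/24
# calicoctl replace -f profile.json
```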

Then ping the IP address of workload-A from hostnode2; this time the ping succeeds.
