Deployment of microservices

1. Docker Compose orchestration tool

(1). Introduction to Docker Compose

       An application system built on a microservice architecture usually contains several microservices, and each microservice usually runs multiple instances. If every instance had to be started and stopped by hand, the efficiency would be low and the maintenance burden enormous. This lesson discusses how to use Docker Compose to manage containers easily and efficiently. For brevity, Docker Compose is referred to simply as Compose.
       Compose is a tool for defining and running multi-container Docker applications. With Compose, you describe your application's services in a configuration file (YAML format), and then create and start all the services in that configuration with a single command.
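For illustration, a minimal docker-compose.yml might look like the following (the service names, image names, and ports here are hypothetical, not the ones used by this project):

```yaml
version: '3'
services:
  web:
    image: example/web-app:latest   # hypothetical application image
    ports:
      - "8080:8080"
    depends_on:
      - db                          # start the database first
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: root     # example credential; change in practice
```

With such a file in place, a single docker-compose up command creates and starts both services.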

(2). Docker Compose installation and uninstallation

1. Installation conditions

Docker Compose relies on the Docker engine, so make sure Docker is installed on the machine before installing Docker Compose (you can check with the docker -v command).

2. Install Compose

  • Use the curl command to download Docker Compose from the Compose releases page on GitHub. The specific command is as follows.
sudo curl -L https://github.com/docker/compose/releases/download/1.16.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
  • Grant execute permission to the Docker Compose binary.
sudo chmod +x /usr/local/bin/docker-compose
  • Check the installed Docker Compose version; if version information is returned, the installation succeeded.
docker-compose --version
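The backtick expressions in the curl command above are command substitutions; as a small sketch, this is how the final download URL is assembled (the echoed example assumes a Linux x86_64 host):

```shell
# Build the Docker Compose download URL the install command expands to.
# VERSION matches the release used in this section; uname supplies OS and arch.
VERSION=1.16.1
URL="https://github.com/docker/compose/releases/download/${VERSION}/docker-compose-$(uname -s)-$(uname -m)"
echo "$URL"   # e.g. .../download/1.16.1/docker-compose-Linux-x86_64
```

This is why the same command works on different machines: uname fills in the correct binary name for the current platform.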


3. Uninstall Compose

If Docker Compose was installed with curl as described above, it can be uninstalled with the rm command. The specific command is as follows:

sudo rm /usr/local/bin/docker-compose

(3). Usage and description of the Compose file

2. Integration of Microservices and Docker

1. Add Dockerfile
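The original body of this step is not reproduced above; as an illustration only, a typical Dockerfile for a Spring Boot microservice jar might look like this (the base image and jar name are assumptions, not taken from the project):

```dockerfile
FROM java:8
VOLUME /tmp
# The jar name below is hypothetical; the build copies the packaged jar into the image
ADD target/example-service-1.0.jar app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```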

2. Add Dockerfile-maven plugin
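As a sketch of this step (coordinates based on the Spotify dockerfile-maven plugin; the repository prefix matches the private registry address used later in this chapter, and the version shown is an assumption):

```xml
<plugin>
    <groupId>com.spotify</groupId>
    <artifactId>dockerfile-maven-plugin</artifactId>
    <version>1.3.6</version>
    <configuration>
        <!-- image prefix must match the private registry address -->
        <repository>192.168.197.143:5000/${project.artifactId}</repository>
        <tag>${project.version}</tag>
        <!-- read registry credentials from Maven settings.xml -->
        <useMavenSettingsForAuth>true</useMavenSettingsForAuth>
    </configuration>
    <executions>
        <execution>
            <id>default</id>
            <goals>
                <goal>build</goal>
                <goal>push</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```

Binding the build and push goals to the default lifecycle is what lets mvn install build and push the image automatically.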

3. Add docker-compose.yml orchestration tool

3. Environment setup and image preparation

(1). Environment setup

1. Build Docker host

        To run the microservice project in Docker, first make sure the Docker engine is installed in the runtime environment. Here we reuse the cluster environment built earlier: the Docker machine named manager1 serves as the host for this microservice deployment and remains the management node of the cluster, while the other two Docker machines, worker1 and worker2, remain the worker nodes.
        Note that, to make viewing and management easier in this demonstration, we need to set up a local private image registry on the manager1 host. The registry address is 192.168.197.143:5000, consistent with the image prefix address written earlier in the project configuration and integration files; otherwise the images cannot be pushed to the designated registry.

2. Install the application compilation tool JDK

        The mvn install command uses the JDK to compile and package the project, so the JDK environment must be installed and configured in advance. The specific steps are as follows.

  • Download the Linux version of the JDK toolkit. This project uses jdk-8u144-linux-x64.tar.gz; extract it on the Linux machine with the tar command. The specific command is as follows:
sudo tar -zxf jdk-8u144-linux-x64.tar.gz


  • Move the directory produced by the extraction to a custom location (here we move it directly into the /usr/lib/jvm directory; if it does not exist, create it first). The specific command is as follows.
sudo mv jdk1.8.0_144/ /usr/lib/jvm


  • Configure the JDK environment variables. Edit the /etc/profile file and add the following configuration (adjust the directory name and version number to match your JDK):
#set java environment
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_144/
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
  • After completing the JDK configuration, execute the source /etc/profile command to make it take effect immediately, then use the java -version command to verify the installation.

3. Install the application packaging tool Maven

        As explained earlier in the introduction to integrating microservices with Docker, this deployment uses Maven's install command to automatically package the project, build the image, and push it, so Maven must be installed and configured first. The specific steps are as follows:

  • Download the Linux version of the Maven toolkit. This book uses apache-maven-3.5.0-bin.tar.gz; extract it on the Linux machine with the tar command. The specific command is as follows:
sudo tar -zxf apache-maven-3.5.0-bin.tar.gz


  • Move the directory produced by the extraction to a custom location (here we move it directly into the /opt directory). The specific command is as follows:
sudo mv apache-maven-3.5.0/ /opt/


  • Configure the Maven environment variables. Edit the /etc/profile file and add the following configuration (adjust the directory name and version number to match your Maven package):
#set maven environment
export M2_HOME=/opt/apache-maven-3.5.0/
export M2=$M2_HOME/bin
export MAVEN_OPTS="-Xms256m -Xmx512m"
export PATH=$M2:$PATH
  • After completing the Maven environment variables, execute the source /etc/profile command to make the configuration take effect immediately, then use the mvn -v command to verify the installation and configuration.

(2). Image preparation

        Because of the earlier dockerfile-maven configuration, the image is automatically built and pushed to the designated registry after packaging completes. However, whether pushing to Docker Hub or to a local private image registry, you must first log in and authenticate before you can push. Therefore, to package, build, and push the image automatically, before packaging with the mvn install command you must not only set the <useMavenSettingsForAuth> tag to true in the dockerfile-maven plugin configuration, but also configure the service authentication information in Maven's settings.xml file (refer to the Maven installation location from the environment setup in the previous section; in this example the file is /opt/apache-maven-3.5.0/conf/settings.xml). The specific configuration content is as follows (note that it is placed inside the <servers> tag).

sudo vi /opt/apache-maven-3.5.0/conf/settings.xml

Configuration content:

<server>
    <id>192.168.197.143:5000</id>
    <username>wangshilin</username>
    <password>wangshilin</password>
</server>

        After the configuration is complete, copy the microservice project microservice-mallmanagement to a working directory on the manager1 host, enter the directory containing the project's outermost pom file, and package the project with the mvn install command (the first build downloads the pom dependencies, so it takes some time).
        When all steps have executed successfully, we can confirm the result: first check with the docker images command whether the expected images appear in the image list, then look in the mount directory /mnt/registry/docker/registry/v2/repositories configured for the local private image registry to confirm that the generated images were also pushed to the local registry.

4. Manual deployment of microservices

        After preparing the environment and the service images required for deployment, you can formally deploy the microservice project. Two deployment approaches are given here for different development situations: service deployment in a non-cluster environment and service deployment in a cluster environment.

(1). Service deployment in a non-cluster environment

        Service deployment in a non-cluster environment runs the project with Docker Compose on a single host. The specific steps are as follows.

1. Log in to the private registry

        Since the images required by the deployed microservices are stored in the local private image registry, and the local private registry is configured with user authentication, you must first log in and authenticate to obtain permission to pull the images before you can deploy services from it (images from the Docker Hub public registry can be pulled without logging in). The specific command is as follows.

sudo docker login 192.168.197.143:5000

        By executing the above command, you log in to the Docker Registry local private image registry at the specified address. After that, the Docker machine stays authenticated; you can log out with the docker logout 192.168.197.143:5000 command.

2. Deploy the services

        Enter the directory containing the project's docker-compose.yml file and execute the service deployment command to deploy the entire microservice project. The specific command is as follows.

sudo docker-compose up

        The docker-compose up command deploys the whole application in the foreground, printing all startup information to the terminal window. If you do not want to see this output, you can use the docker-compose up -d command to deploy the services in the background.
        When deployment completes, check with the docker ps command whether all services are running normally (starting multiple interdependent services at once may take some time).
        Once all the services have started normally, the applications in the containers can be accessed (the specific test method is introduced later). When the services are no longer needed, you can stop the whole application from the directory containing the project's docker-compose.yml file. The specific command is as follows:

sudo docker-compose down


(2). Service deployment in a cluster environment

1. Selecting the network card that cluster services register with


  • According to the needs of the microservice project, a predefined overlay-driven network is created for container networking in the cluster environment. The specific command is as follows:
sudo docker network create -d overlay --subnet 11.0.0.0/24 microservice_net

After executing the above command, an overlay-driven network named microservice_net is created, and the --subnet parameter sets this custom network's subnet so that its addresses start with 11.0.

  • In the application.yml configuration file of every service that registers with the Eureka center, add the preferred-network information so that each service registers with its address on this network.
  • Modify the service deployment file accordingly.
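The configuration snippet itself is not included above; as a hedged sketch, standard Spring Cloud settings for preferring the overlay subnet when registering with Eureka look like the following (the property names are standard Spring Cloud / Eureka client settings; the subnet matches the microservice_net network created above):

```yaml
spring:
  cloud:
    inetutils:
      # prefer addresses on the 11.0.x.x overlay subnet when choosing an interface
      preferred-networks: 11.0
eureka:
  instance:
    # register with the container's IP address instead of its hostname
    prefer-ip-address: true
```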

2. Cluster service deployment

  • Log in to the private registry
sudo docker login 192.168.197.143:5000


  • Deploy the services
    Enter the directory containing the microservice project's docker-compose-swarm.yml file and deploy the services with docker stack deploy. The specific command is as follows:
sudo docker stack deploy -c docker-compose-swarm.yml --with-registry-auth mallmanagement
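A docker-compose-swarm.yml typically differs from the single-host Compose file by adding deploy settings; the following is only a sketch of what one service entry might contain (the service name, image name, and replica count are illustrative, not the project's actual file; the network matches the overlay network created earlier):

```yaml
version: '3'
services:
  order-service:
    image: 192.168.197.143:5000/microservice-orderservice:1.0  # assumed image name
    deploy:
      replicas: 2            # number of replica instances in the swarm
      restart_policy:
        condition: on-failure
    networks:
      - microservice_net
networks:
  microservice_net:
    external: true           # use the pre-created overlay network
```

The --with-registry-auth flag in the deploy command forwards the manager's registry credentials to the worker nodes so they can pull these private images.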

The docker stack deploy command starts the whole set of microservices directly in the background. After startup completes, you can use the docker service ls command on the cluster management node to view the service list.
As can be seen, the replica instances of all services in the cluster environment have started normally. Because the project is deployed on a Docker Swarm cluster, the service instances (eight in total this time) are distributed across the three nodes of the cluster. We can now use the docker stack commands on the cluster management node to view how the entire microservice project is distributed and started across the cluster nodes. The specific command is as follows.

sudo docker stack ps mallmanagement

In addition, because services deployed in a cluster environment start in the background, the startup details of each service cannot be viewed on the Docker client. Instead, you can use the service log command provided by docker service on the cluster management node to view the full log of a specific service from startup onward. The specific command is as follows.

sudo docker service logs -f mallmanagement_order-service


(3). Microservice testing

1. View service startup through the visualizer cluster visualization tool.

After the microservice project is deployed successfully, you can view the cluster services in the visualizer visualization tool at http://192.168.197.143:8081 (this is the manager1 host address used in this book; readers should substitute their own host address when testing).

2. Check the startup status of the service through the Eureka registry.

We can open the Eureka service registry at http://192.168.197.143:8761/ to check whether the other microservices have started and registered with the registry.

3. Initialize the database data.

The MySQL database in this project runs in a Docker container, so to initialize the MySQL database you first need to install a MySQL client. The specific command is as follows.

sudo apt install mysql-client-core-5.7

After executing the above command, a version 5.7 MySQL client is installed on the current Docker machine. With this client we can connect to the MySQL database service that was just started. The specific command is as follows.

mysql -h 127.0.0.1 -uroot -p

4. Test the microservice.

Open the access addresses of the user management microservice and the order management microservice for testing. The addresses are http://192.168.197.143:8030/swagger-ui.html and http://192.168.197.143:7900/swagger-ui.html .

5. Test and verify the API gateway service.

The order service interface is called at http://192.168.197.143:7900/order/findOrders/1 , and the user microservice interface at http://192.168.197.143:8030/user/findOrders/shitou . After going through the Zuul gateway proxy service, the two microservice interfaces are called at http://192.168.197.143:8050/order-service/order/findOrders/1 and http://192.168.197.143:8050/user-service/user/findOrders/shitou respectively.
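The prefix-based URLs above correspond to Zuul's serviceId-based routing; an explicit route configuration sketch (the property names are standard Zuul settings, and the serviceIds are inferred from the URLs above) might look like:

```yaml
zuul:
  routes:
    order-service:
      path: /order-service/**     # gateway prefix stripped before forwarding
      serviceId: order-service
    user-service:
      path: /user-service/**
      serviceId: user-service
```

With this routing in place, clients only need to know the gateway address (port 8050), not each microservice's own port.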

5. Use Jenkins to automatically deploy microservices

(1). Jenkins introduction

Jenkins is a powerful application for continuous integration and continuous delivery of projects, regardless of platform. It is free and open source, can handle any type of build or continuous integration, and integrates with many testing and deployment technologies.

Jenkins purpose:

1. Continuously and automatically build/test software projects.
2. Monitor the software development process, quickly locate and handle problems, and improve development efficiency.

Jenkins features:

An open-source continuous integration tool developed in Java, supporting CI and CD.
Easy to install, deploy, and configure: it can be installed via yum, run from a downloaded war package, or deployed quickly in a Docker container, and is conveniently managed through a web interface.
Message notification and test reports: integrates RSS/E-mail to publish build results via RSS or notify by e-mail when a build completes, and generates JUnit/TestNG test reports.
Distributed builds: Jenkins can have multiple computers build and test together.
File identification: Jenkins can track which build generated which jar, which build used which version of a jar, and so on.
Rich plug-in support: extension plug-ins let you adapt the tool to your team, for example git, svn, maven, docker, etc.

(2). Jenkins installation

1. Download Jenkins

Enter the Jenkins official website address https://jenkins.io/download/ in a browser to visit the download page, then select the Generic Java package (.war) under the Long-term Support (LTS) release to download the Jenkins war package.

2. Start the Jenkins service

Put the downloaded jenkins.war into a directory on the manager1 machine and start the Jenkins service directly with the following command.

java -jar jenkins.war --httpPort=49001


3. Jenkins initial installation

  • Initialize authentication password
  • Initial plug-in installation
  • Create admin user

(3). Jenkins integration plug-in configuration


(4). Service automated deployment

1. Build a new task


(1) Configure the source code warehouse address


(2) Build trigger


(3) Service release configuration


2. Automated deployment services



Origin blog.csdn.net/qq_37955704/article/details/91410729