Is cloud native a trend? What is the significance of learning k8s and Docker?

Direct Answer: Yes!

Many students are just starting to learn about the "cloud", which is full of unfamiliar new technologies, tools, and processes.

Many programs written by engineers are deployed to run on "cloud native" platforms. So what is the difference between "cloud native" and a "cloud host"?

Please see the Cloud Native Entry Skill Tree.

Let me explain why.

In the past, programs were written for a single machine, but programming keeps evolving.

For example, writing a stand-alone program: a command-line tool; a GUI program, where you care about data, models, and interfaces; a stand-alone web page. Such a program uses the resources of one machine: memory, CPU, and disk. To use memory well, people built memory allocators that squeeze out every byte; to use it conveniently, there are reference counting and garbage collection. To exploit the CPU, the operating system provides the process and thread abstractions, and user-mode programs add coroutine scheduling. To share state among threads, there are synchronization primitives: mutexes and semaphores; to share state among processes, there is shared memory. To use the disk, there is the stand-alone database SQLite, along with redundancy-coding, sharding, and decoding techniques for stored data.
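Here is a minimal Go sketch of that single-machine era (Go only because its goroutines and mutexes map directly onto the coroutines and synchronization primitives just mentioned): many coroutines share one counter in process memory, and a mutex keeps the shared state consistent.

```go
package main

import (
	"fmt"
	"sync"
)

// Toy illustration of single-machine concurrency: 100 goroutines
// (Go's coroutines) increment one shared counter, serialized by a mutex.
func main() {
	var (
		mu      sync.Mutex
		counter int
		wg      sync.WaitGroup
	)
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			counter++ // shared state guarded by the mutex
			mu.Unlock()
		}()
	}
	wg.Wait()
	fmt.Println("final counter:", counter) // always 100
}
```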

For example, writing server/client programs: a command-line client for a remote service, such as the mysql command-line client; a browser/server (B/S) GUI program, such as an ordering system, with the web pages deployed to a server. The program now uses server-side resources: the server's memory, CPU, disk, and database. On the client, the core is the business logic; the client's work is CPU-bound, and everything else is delegated through API requests. Hence the division of labor between front end and back end. The server puts the program's state storage and computing resources into a 1:N relationship with its clients, and at this stage the server can still be a single machine.
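A minimal sketch of that split in Go, using only the standard library (the port and path are made up for illustration): one server process owns the state and serves any number of clients through an API.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"sync/atomic"
)

// One server process holds the state (a hit counter) and serves N
// clients over HTTP; clients keep their own logic and call this API.
func main() {
	var hits atomic.Int64 // server-side state, shared 1:N with clients

	http.HandleFunc("/api/hits", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "hits=%d\n", hits.Add(1))
	})
	log.Fatal(http.ListenAndServe(":8080", nil)) // port is illustrative
}
```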

For example, server-side architecture evolution. The flood of static-resource requests to a website calls for a CDN. When the database holds too much data, it needs master/slave separation; when it receives too many requests, it needs a cache. When the interface layer can no longer keep up, it needs a high-performance reverse proxy. Gradually, the server side becomes multi-machine.
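As a toy illustration of the reverse-proxy idea (real deployments would use nginx or similar; the backend address here is a placeholder), Go's standard library can stand one up in a few lines:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// A minimal reverse proxy: the "interface layer" listens on :8080
// and forwards every request to one backend server.
func main() {
	backend, err := url.Parse("http://127.0.0.1:9000") // placeholder backend
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(backend)
	log.Fatal(http.ListenAndServe(":8080", proxy))
}
```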

For example, the server program itself splits from one service into several. At this point it is not yet "microservices", just multiple services with dependencies: service A depends on service B, and nobody cares whether A and B are "micro". But once there are many services, problems appear: proxying requests between services, publish/subscribe, and so on. Message queues handle these, for example ZeroMQ. Besides the database, some important global metadata must also stay synchronized across the separated services to keep state consistent; metadata management handles this, for example ZooKeeper. With many services, each needs a name, and a name must ultimately resolve to an IP and port; a name service handles this. After this series of steps, the distributed program can behave, as much as possible, like a single-machine program.
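To make the name-service idea concrete, here is a toy, in-memory version in Go; real systems such as ZooKeeper or etcd add replication and consistency guarantees that this sketch deliberately omits:

```go
package main

import (
	"fmt"
	"sync"
)

// A toy name service: services register a name, clients resolve the
// name to an IP:port. No replication, no consistency, no health checks.
type NameService struct {
	mu    sync.RWMutex
	addrs map[string]string
}

func NewNameService() *NameService {
	return &NameService{addrs: make(map[string]string)}
}

func (ns *NameService) Register(name, addr string) {
	ns.mu.Lock()
	defer ns.mu.Unlock()
	ns.addrs[name] = addr
}

func (ns *NameService) Resolve(name string) (string, bool) {
	ns.mu.RLock()
	defer ns.mu.RUnlock()
	addr, ok := ns.addrs[name]
	return addr, ok
}

func main() {
	ns := NewNameService()
	ns.Register("service-b", "10.0.0.7:9000") // address is illustrative
	if addr, ok := ns.Resolve("service-b"); ok {
		fmt.Println("service A calls service B at", addr)
	}
}
```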

For example, writing distributed data-processing programs. The classic shape is the map/reduce architecture, followed by a wave of competing big-data processing frameworks. The reason is data volume: a single machine cannot process it all, so distributed processing puts the memory and CPU of many machines to use, at the cost of heavy demands on network bandwidth. Distribution also creates the need for log collection, monitoring, and distributed tracing.
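To show the map/reduce shape without any framework, here is a single-process Go word count; assume each chunk would live on a different machine in the real distributed version:

```go
package main

import (
	"fmt"
	"strings"
)

// mapPhase turns one input chunk into partial (word -> count) pairs.
func mapPhase(chunk string) map[string]int {
	counts := make(map[string]int)
	for _, w := range strings.Fields(chunk) {
		counts[w]++
	}
	return counts
}

// reducePhase merges the partial counts by key.
func reducePhase(parts []map[string]int) map[string]int {
	total := make(map[string]int)
	for _, part := range parts {
		for w, n := range part {
			total[w] += n
		}
	}
	return total
}

func main() {
	chunks := []string{"the cloud the native", "native programs on the cloud"}
	var mapped []map[string]int
	for _, c := range chunks {
		mapped = append(mapped, mapPhase(c)) // in a cluster, runs per machine
	}
	fmt.Println(reducePhase(mapped))
}
```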

For example, writing microservice programs. Distributed programs grew complex, so many classic practices were standardized. (Traditional stand-alone programs also split into single-process and multi-process designs, such as the Chrome browser.) Microservices are usually built on infrastructure for metadata synchronization, authorization, and message queues. Each service keeps its state internal, and services are decoupled through APIs; one of the costs is that state storage is split apart. Microservices also bring the need for service discovery and registration. In short, microservices are a further standardization of distributed services that exploit multi-machine memory and CPU.
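A minimal microservice sketch in Go: the state stays inside the service, everything else goes through its HTTP API, and a health endpoint gives service discovery something to probe. The port and paths are illustrative, not any standard:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"sync/atomic"
)

func main() {
	var orders atomic.Int64 // state internalized in this one service

	// The business API: the only way other services touch this state.
	http.HandleFunc("/orders", func(w http.ResponseWriter, r *http.Request) {
		id := orders.Add(1)
		fmt.Fprintf(w, `{"order_id": %d}`+"\n", id)
	})
	// Health endpoint for service discovery / orchestration to probe.
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	log.Fatal(http.ListenAndServe(":8081", nil))
}
```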

For example, writing a Dockerfile. Containerization began as a solution to the problem of packaging a program's dependency environment. It adds a layer of virtualization that buys portability at as little performance cost as possible. Early Java hoped to "write once, run anywhere". Run where? On the JVM. But a language's runtime is never the whole story; the real environment is the operating system. The language provides a runtime for the program, but beneath the language sit the operating system, the file system, the environment configuration, and every other program it depends on, and together these form the "big runtime". Containers provide a truly self-contained runtime.
Starting from containers, every earlier form of program can be redone: containerize stand-alone programs and client/server programs, and above all the server-side programs. Distributed programs can of course be fully containerized as well, with everything started as containers. Even databases, caches, and message queues can be containerized. Docker is the archetypal container.
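As a sketch of "write a Dockerfile", here is a minimal multi-stage Dockerfile for a Go service like the one above; the image names, versions, and paths are illustrative assumptions, but the point stands: the toolchain, dependencies, and OS layer are all declared in one self-contained recipe.

```dockerfile
# Build stage: assumes a Go module (go.mod + main.go) at the build root.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app .

# Runtime stage: ship only the static binary on a minimal base image.
FROM alpine:3.20
COPY --from=build /bin/app /bin/app
ENTRYPOINT ["/bin/app"]
```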

Then write a meta-program that uploads, downloads, and schedules containers to run across different servers in a distributed environment; this greatly simplifies how distributed or microservice programs are deployed and run. This is "container orchestration", and it is what k8s does: it makes the distributed program self-contained inside an even larger runtime. At this point, you can pretend you are writing a stand-alone program again.
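To make "container orchestration" less abstract, here is a toy Go sketch of the core decision an orchestrator's scheduler makes: given nodes with free resources and a container's request, pick a node. None of these types are k8s API objects; k8s layers pods, controllers, and kubelets on top of this basic idea.

```go
package main

import "fmt"

type Node struct {
	Name    string
	FreeMem int // MB of free memory on the node
}

type Container struct {
	Image  string
	MemReq int // MB of memory the container requests
}

// schedule picks the node with the most free memory that fits the request.
func schedule(nodes []Node, c Container) (string, bool) {
	best := -1
	for i, n := range nodes {
		if n.FreeMem >= c.MemReq && (best == -1 || n.FreeMem > nodes[best].FreeMem) {
			best = i
		}
	}
	if best == -1 {
		return "", false // no node can run this container
	}
	return nodes[best].Name, true
}

func main() {
	nodes := []Node{{"node-1", 512}, {"node-2", 2048}}
	c := Container{Image: "myapp:v1", MemReq: 1024}
	if name, ok := schedule(nodes, c); ok {
		fmt.Printf("run %s on %s\n", c.Image, name)
	}
}
```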

A qualitative change happens here. Previously, a program was assumed to be a process, and processes were orchestrated and scheduled by the operating system; you could call the operating system a "process orchestrator". Now, assume a program is a container, scheduled by a "container orchestration" program. If the "process orchestrator" is the operating system, then the "container orchestrator" is the XX system. XX = operating?

One perspective: managing microservices, under the assumption that programs run directly on the operating system, produced the concept of the service mesh. It sinks service registration, discovery, message queues, traffic monitoring, log collection, distributed tracing, and so on into standard components that every microservice relies on: the "runtime" of microservices. Once programs are containerized and run in the container-orchestration environment, the microservices are containerized, and the service mesh's "runtime" is containerized and runs in the same environment. For example, istio is a service-mesh implementation that can be containerized and run inside the container-orchestration environment k8s; on that basis, you write microservices, containerize them, run them in k8s, and get istio's benefits inside k8s. One oversized runtime.
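A toy "sidecar" in Go hints at what that mesh runtime does: traffic to a service passes through a proxy that adds logging and timing, so the service itself carries none of that plumbing. istio does this with a production proxy (Envoy) injected beside each containerized service; the addresses below are placeholders.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"
)

// A toy sidecar: forwards all traffic to the local service while
// recording per-request logs and latency on its behalf.
func main() {
	app, err := url.Parse("http://127.0.0.1:8081") // the local service
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(app)

	http.Handle("/", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		proxy.ServeHTTP(w, r)
		log.Printf("%s %s took %v", r.Method, r.URL.Path, time.Since(start))
	}))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```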

Operating-system programs are generally downloaded, installed, and uninstalled through a package manager. As a platform at the same level of abstraction, container orchestration can have its own package manager too; for example, helm is the package manager for k8s.

With k8s plus containerization, what a program really consumes are server resources on the cloud, and this is the meaning of "cloud native": the resources a running program uses are assumed from the very beginning to be cloud resources, and programming against the cloud becomes like programming against a single machine.

In a multi-cloud environment, deploying k8s is a problem that varies with each cloud, so a portable specification for multi-cloud infrastructure deployment is needed. Terraform does exactly that.

How are these cloud native environments built up, step by step? For many beginners this means a pile of unfamiliar technologies, tools, and processes. In the "Cloud Native Entry Skill Tree" I made a design that has been adjusted repeatedly:

[Image: Cloud Native Entry Skill Tree]

At this point, assume everything above is provided by the cloud as a standard offering. Buying a cloud host environment gives you, in essence, a virtualized Linux operating-system runtime, a "process orchestrator"; buying a cloud native environment gives you a virtualized XX-system runtime, a "container orchestrator". Then, for developers, that is "cloud native": I can write "stand-alone programs" again.

