Why is Kubernetes so critical in edge computing?

Edge computing is a variant of cloud computing in which infrastructure services for computing, storage, and networking are placed physically closer to the field devices that generate data. This eliminates the "round trip" between those devices and the data center and improves service availability. Since its emergence, edge computing has become an effective runtime platform for solving unique challenges in telecommunications, media, transportation, logistics, agriculture, retail, and other fields.

At the same time, Kubernetes has quickly become a key element of edge computing. By running containers at the edge with Kubernetes, companies can maximize resource utilization and simplify testing. And because more data can be used and analyzed on-site, DevOps teams can move faster and more efficiently.

In today's era of information explosion, data is generated at an unprecedented rate. Enterprises must weigh the cost of transferring data from the edge to the data center against filtering and preprocessing it locally. If a workload has no low-latency constraints, it can continue to be served by cloud solutions. However, a wave of new use cases is forcing operators to rethink network architecture, and this is where edge computing comes in.
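To make the trade-off concrete, here is a minimal sketch of local preprocessing at the edge: only anomalous readings plus a small summary cross the link, instead of every raw record. The threshold, record format, and field names are illustrative assumptions, not from any specific product.

```python
# Sketch: filter and aggregate sensor readings at the edge before upload.
# The threshold and record format below are illustrative assumptions.

def preprocess(readings, threshold=50.0):
    """Keep only anomalous readings and summarize the rest,
    so far less data crosses the link to the data center."""
    anomalies = [r for r in readings if r["value"] > threshold]
    summary = {
        "count": len(readings),
        "mean": (sum(r["value"] for r in readings) / len(readings)
                 if readings else 0.0),
    }
    return {"summary": summary, "anomalies": anomalies}

readings = [{"sensor": "s1", "value": v} for v in (12.0, 48.5, 73.2, 9.9)]
payload = preprocess(readings)
# Only the single anomalous reading (73.2) plus a two-field summary
# is uploaded, instead of all four raw records.
```

Whether this kind of local reduction is worthwhile depends on link cost and how much of the raw data the data center actually needs.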

Edge computing provides three benefits. First, low latency: by improving the responsiveness of field devices, it enables not only faster responses but also responses to more events. Second, low traffic: sending less data upstream reduces costs and increases overall throughput, so the core data center can support more field devices. Finally, higher availability: a standalone edge application can keep running even when the network between the edge and the data center is interrupted.

The number of smart devices in the Internet of Things has grown exponentially, the arrival of 5G networks has had a significant impact on edge computing, and running artificial intelligence tasks at the edge is increasingly important. All of these require the ability to handle elastic demand and shift workloads, which is driving enterprises to pay attention to edge computing. The consulting firm Gartner therefore predicts that by 2025, the share of enterprise data created and processed outside the traditional centralized data center or cloud will soar from about 10% today (2019) to 75%.

The edge cloud has at least two layers. Although the structure of each layer is different, both are designed to maximize the efficiency of operations and the productivity of developers.

The first layer is the Infrastructure as a Service (IaaS) layer. In addition to providing computing and storage resources, the IaaS layer can also meet the network performance requirements of ultra-low latency and high bandwidth.

The second layer contains Kubernetes, which has become the de facto standard for orchestrating containerized workloads in data centers and public clouds, and an extremely important foundation for edge computing. Although Kubernetes is not strictly required at this layer, it has proved to be an effective platform for organizations working at the edge. Because Kubernetes provides a common abstraction over physical resources (compute, storage, and network), developers and DevOps engineers can deploy applications and services in a standard manner anywhere, including the edge.
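That "standard manner" is the declarative manifest: the same Deployment spec works in a central cluster or on an edge node. The manifest below is an illustrative sketch; the names, image, replica count, and resource limits are placeholder assumptions, not from the article.

```yaml
# Illustrative Deployment: names, image, and limits are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-telemetry
spec:
  replicas: 2
  selector:
    matchLabels:
      app: edge-telemetry
  template:
    metadata:
      labels:
        app: edge-telemetry
    spec:
      containers:
      - name: collector
        image: example.com/telemetry-collector:1.0  # placeholder image
        resources:
          limits:
            memory: "128Mi"   # modest limits suit constrained edge hosts
            cpu: "250m"
```

Applied with `kubectl apply -f`, the same file describes the workload regardless of where the cluster runs, which is precisely the portability the abstraction layer provides.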

Kubernetes also lets developers simplify their DevOps practices and minimize the time spent integrating with heterogeneous operating environments, which benefits developers and operations personnel alike. Rancher Labs launched k3s in February 2019 as a lightweight Kubernetes distribution for edge computing and IoT scenarios. Since its release it has attracted the attention of developers worldwide: with 12,000 stars on GitHub, it is the most popular edge computing Kubernetes solution in the open-source community. The k3s binary is under 70 MB, it can run in less than 512 MB of RAM, and it supports the x86_64, ARM64, and ARMv7 architectures, so it can work very flexibly across almost any edge infrastructure. In addition, to meet "offline management" requirements, k3s simplifies the installation process: a single command completes an install or upgrade.
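The single-command install mentioned above is the one documented by the k3s project itself (the server/token values in the second line are placeholders to fill in):

```shell
# Install a k3s server with the project's official install script:
curl -sfL https://get.k3s.io | sh -

# Join an agent node by pointing it at the server (placeholder values):
# curl -sfL https://get.k3s.io | K3S_URL=https://<server>:6443 K3S_TOKEN=<token> sh -
```

Re-running the same script also upgrades an existing installation, which is what makes k3s practical to manage on remote or intermittently connected edge sites.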

So how should the organization deploy these layers?

The first step is to consider the physical infrastructure, choose technology that can manage it effectively, and turn the raw hardware into the IaaS layer. This calls for operational primitives for hardware discovery that provide flexibility, allowing computing resources to be allocated and dynamically reused. Also needed is a technology that automatically builds edge clouds on KVM pods, enabling operations personnel to create virtual machines from a predefined set of resources (RAM, CPU, storage, and an oversubscription ratio).
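The oversubscription ratio is what stretches scarce edge hardware: it multiplies physical cores into a larger pool of allocatable vCPUs, on the assumption that VMs rarely peak simultaneously. A minimal sketch of that arithmetic, with illustrative numbers:

```python
# Sketch: how a CPU oversubscription ratio turns physical cores into
# allocatable vCPUs when carving VMs out of an edge host.
# The host size and ratio below are illustrative assumptions.

def allocatable_vcpus(physical_cores: int, ratio: float) -> int:
    """vCPUs that may be handed out across all VMs on one host."""
    return int(physical_cores * ratio)

def fits(host_cores: int, ratio: float, requested: list) -> bool:
    """Can the requested per-VM vCPU counts all be placed on this host?"""
    return sum(requested) <= allocatable_vcpus(host_cores, ratio)

# A 16-core host with a 4:1 oversubscription ratio exposes 64 vCPUs:
print(allocatable_vcpus(16, 4.0))   # 64
print(fits(16, 4.0, [8, 16, 32]))   # True: 56 <= 64
print(fits(16, 4.0, [32, 32, 8]))   # False: 72 > 64
```

The same bookkeeping applies to RAM and storage, usually with much lower (or no) oversubscription, since memory cannot be time-sliced as forgivingly as CPU.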

After discovering and configuring the physical infrastructure of the edge cloud, the second step is to choose an orchestration tool that can easily install Kubernetes, or any other software, on that infrastructure. You can then deploy the environment and enable and verify applications. It will be interesting to watch more and more organizations adopt this model over the next few years.

Origin blog.csdn.net/qq_42206813/article/details/105954937