Born in the Cloud, The Phantom Menace: Cloud-Native Application Delivery and Operations

Introduction: 2020 was a year full of uncertainty, but also a year full of opportunity. The sudden COVID-19 pandemic hit the accelerator on the digital transformation of society as a whole. Cloud computing is no longer just a technology; it has become key infrastructure supporting the development of the digital economy and business innovation. As enterprises use cloud computing to reshape their IT, cloud-native technologies, born in the cloud, built for the cloud, and maximizing the value of the cloud, have been recognized by more and more enterprises and have become an important means for enterprise IT to reduce costs and improve efficiency.


Author | Yi Li, Senior Technical Expert, Alibaba Cloud
Source | Alibaba Cloud Native Official Account


However, the cloud-native transformation is not only about infrastructure and application architecture; it is also driving changes in enterprise IT organizations, processes, and culture.

In the CNCF 2020 annual survey, 83% of responding organizations already run Kubernetes in production. However, the top three challenges they face are complexity, cultural change, and security.


Cloud-native application architectures and development methods emerged to accelerate business innovation and meet Internet-scale challenges. Compared with a traditional monolithic architecture, a distributed microservice architecture offers faster iteration, lower development complexity, better scalability, and more flexibility. However, just as in the Star Wars universe the Force has both a light side and a dark side, the complexity of deploying, operating, and managing microservice applications increases greatly, and the DevOps culture, along with the automated tooling and platform capabilities behind it, becomes the key.


DevOps theory had been developing for many years before container technology emerged. However, as long as the "development" and "operations" teams could not speak the same language or collaborate on the same technology, the organizational and cultural barriers could never be broken. Docker container technology standardized the software delivery process: build once, deploy anywhere. Combined with the programmable infrastructure of cloud computing and Kubernetes declarative APIs, continuous integration and continuous delivery of applications and infrastructure can be automated through pipelines, greatly accelerating the convergence of the development and operations roles.

Cloud native also restructures the business value and responsibilities of teams. Some duties of the traditional operations team, such as application configuration and release, shift to the development team, reducing the labor cost of each release, while operations focuses more on system stability and IT governance. Site Reliability Engineering (SRE), advocated by Google, uses software and automation to tackle the complexity and stability of system operations. In addition, security and cost optimization have become focal points of cloud operations.

Security is one of the core concerns of enterprises moving to the cloud. The agility and dynamism of cloud native bring new challenges to enterprise security. Because cloud security is a shared-responsibility model, enterprises need to understand the boundary of responsibility between themselves and the cloud provider, and think about how to codify security best practices through tooling and automated processes. Moreover, the traditional security architecture protects the perimeter with firewalls and fully trusts any internal user or service. With the sudden outbreak of COVID-19 in 2020, large numbers of employees and customers had to work and collaborate remotely, and enterprise applications had to be deployed and interconnected across IDCs and the cloud. As the physical security perimeter disappears, cloud security is undergoing a profound change.

In addition, the pandemic has pushed companies to pay even more attention to IT cost optimization. An important advantage of cloud native is making full use of the elasticity of the cloud to provision the computing resources a business needs on demand, avoiding waste and optimizing cost. However, unlike traditional budget-and-review cost systems, the dynamism and high-density application deployment of cloud native make IT cost management more complicated.

To address this, cloud-native concepts and technologies keep evolving to help users continuously reduce potential risks and system complexity. Below we introduce some new trends in cloud-native application delivery and operations.


Kubernetes has become a universal and unified cloud control plane

The word Kubernetes comes from Greek, meaning helmsman or pilot, and shares the root of the English word "cybernetics". Kubernetes has become the de facto standard for container orchestration not only thanks to Google's halo and the CNCF (Cloud Native Computing Foundation); behind it lie Google's accumulated experience and systematic thinking from Borg in large-scale distributed resource scheduling and automated operations. A careful study of the Kubernetes architecture helps in thinking about some essential questions of scheduling and management in distributed systems.

At the core of the Kubernetes architecture is the control loop, a classic "negative feedback" control system. When a controller observes that the desired state differs from the current state, it continuously adjusts resources so that the current state approaches the desired state: for example, scaling when the declared number of application replicas changes, or automatically migrating applications after a node goes down.
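
To make the idea concrete, here is a minimal, self-contained Go sketch of such a reconcile loop. The `state` type and the one-replica-per-tick adjustment are simplifications for illustration, not how real controller-manager code works.

```go
// A minimal sketch of a Kubernetes-style "negative feedback" control loop
// (illustrative only; real controllers use client-go informers and work queues).
package main

import (
	"fmt"
	"time"
)

type state struct {
	replicas int
}

func main() {
	desired := state{replicas: 5} // declared target state
	current := state{replicas: 2} // observed state

	// Observe, diff, act, repeat.
	for range time.Tick(1 * time.Second) {
		if current.replicas == desired.replicas {
			fmt.Println("in sync, nothing to do")
			continue
		}
		if current.replicas < desired.replicas {
			current.replicas++ // e.g. start a new Pod
		} else {
			current.replicas-- // e.g. terminate a Pod
		}
		fmt.Printf("reconciling: current=%d desired=%d\n",
			current.replicas, desired.replicas)
	}
}
```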


The success of K8s is inseparable from three important architectural choices:

  • Declarative API: on Kubernetes, developers only declare the target state of abstract resources, and controllers figure out how to reach it; for example, the Deployment, StatefulSet, and Job abstractions over different types of workloads. This lets developers focus on the application itself rather than on system implementation details. The declarative API is an important cloud-native design concept: it pushes overall operational complexity down into the infrastructure, where it can be implemented and continuously optimized. Moreover, given the inherent stability challenges of distributed systems, a declarative, end-state-oriented, "level-triggered" implementation yields a more robust distributed system than an imperative, event-driven, "edge-triggered" approach.
  • Shielding the underlying implementation: K8s uses a series of abstractions such as the LoadBalancer Service, Ingress, CNI, and CSI to help business applications consume infrastructure through business semantics, without caring about differences in the underlying implementation.
  • Extensible architecture: all K8s components are implemented and interact through a consistent, open API. Third-party developers can provide domain-specific extensions via CRDs (Custom Resource Definitions) and Operators, which greatly expands the application scenarios of K8s; a sketch of such custom resource types follows this list.
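
As an illustration of that extensibility, below is a hedged sketch of kubebuilder-style Go types for a hypothetical `GameServer` custom resource. The resource name and fields are invented for this example, but the split between Spec (declared state) and Status (observed state) is the standard CRD pattern.

```go
// Hypothetical CRD types in the kubebuilder style. Kubebuilder marker
// comments and generated DeepCopy methods are omitted for brevity.
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// GameServerSpec is the desired state the user declares.
type GameServerSpec struct {
	Replicas int32  `json:"replicas"`
	Version  string `json:"version"`
}

// GameServerStatus is the observed state the controller reports back.
type GameServerStatus struct {
	ReadyReplicas int32 `json:"readyReplicas"`
}

// GameServer is the custom resource: clients write Spec, a custom
// controller reconciles the world to match it and records progress in Status.
type GameServer struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   GameServerSpec   `json:"spec,omitempty"`
	Status GameServerStatus `json:"status,omitempty"`
}
```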


Because of this, the scope of resources and infrastructure managed by Kubernetes now goes far beyond container applications. Here are a few examples:

  • Infrastructure management: in contrast to open-source Terraform or the Infrastructure-as-Code (IaC) tools offered by cloud vendors, such as Alibaba Cloud ROS and AWS CloudFormation, Crossplane (https://crossplane.io/) and AWS Controllers for Kubernetes extend Kubernetes to manage and abstract infrastructure. In this way, K8s applications and cloud infrastructure can be managed and changed in one consistent manner.
  • Virtual machine management: through KubeVirt, K8s can schedule and manage virtual machines and containers uniformly. Virtualization can make up for some limitations of container technology; for example, in CI/CD scenarios it can be combined with Windows virtual machines for automated testing.
  • IoT device management: edge container technologies such as KubeEdge and OpenYurt provide the ability to manage large fleets of edge devices.
  • K8s cluster management: node pool and cluster management in Alibaba Cloud Container Service (ACK) is itself automated with Kubernetes. ACK Infra supports tens of thousands of Kubernetes clusters deployed around the world, building capabilities such as automatic scaling and fault detection/self-healing on top of K8s.

1. Upgrading workload automation

The ideal of the K8s controller, "keep the complexity to yourself and leave simplicity to others," is a fine one, but implementing an efficient and robust controller is full of technical challenges.

  • Because of limitations in the built-in K8s workloads, some requirements of enterprise application migration cannot be met, and extending K8s through the Operator framework has become a common solution. But reinventing the wheel for recurring requirements wastes effort, and it also fragments the technology and reduces portability.
  • As more enterprise IT architectures move from running on Kubernetes to living in Kubernetes, the large number of CRDs and custom controllers poses challenges for Kubernetes' stability and performance. End-state-oriented automation is a double-edged sword: it brings declarative deployment capabilities to applications, but it can also amplify mistakes toward the declared end state. When an operational failure occurs, mechanisms such as maintaining the replica count, version consistency, and cascading deletion can easily enlarge the blast radius.

OpenKruise is Alibaba Cloud's open-source application automation engine for cloud native, currently hosted as a Sandbox project under the Cloud Native Computing Foundation (CNCF). It distills Alibaba's years of containerization and cloud-native practice: a set of standard Kubernetes extension components applied at scale in Alibaba's internal production environment, together with the accompanying best practices. Built openly with the community, OpenKruise helps enterprise customers avoid detours, reduce technical fragmentation, and improve stability in their cloud-native exploration, while also pushing the upstream community to gradually improve and enrich Kubernetes' application-lifecycle automation capabilities.

For more information, see "OpenKruise 2021 Planning Exposure: More than workloads".

2. A new collaboration interface between development and operations emerges

The rise of cloud-native technology has also brought changes to enterprise IT organizational structures. To respond better to the demands of business agility, the microservice application architecture gave birth to "two-pizza teams": smaller, independent, self-contained development teams that can reach consensus faster and accelerate business innovation. The SRE team became a horizontal support team, helping improve upper-layer R&D efficiency and system stability. With the development of Kubernetes, SRE teams can build their own enterprise application platforms on K8s, promoting standardization and automation and letting the application development teams above them manage resources and application lifecycles in a self-service way. We are seeing organizations change further, with new platform engineering teams starting to emerge.


Reference: https://blog.getambassador.io/the-rise-of-cloud-native-engineering-organizations-1a244581bda5

This also fits Kubernetes' own positioning. Kubernetes is technically positioned as infrastructure for application operations and a "platform for platforms," not an integrated application platform for developers. More and more companies will have platform engineering teams build their own PaaS on Kubernetes to improve both R&D and operational efficiency.

Classic PaaS implementations in the style of Cloud Foundry establish their own independent concept models, technical implementations, and extension mechanisms. That approach can deliver a simplified user experience, but it also introduces defects: it cannot ride the fast-evolving Kubernetes ecosystem, and it cannot fully embrace emerging technologies such as serverless programming models or new computing services like AI and data analytics. On the other hand, PaaS platforms built directly on K8s lack a unified architectural design and implementation plan, leading to many fragmented implementations that are not conducive to sustainable development.

The Open Application Model (OAM) and its Kubernetes implementation, the KubeVela project, are the standard model and framework for cloud-native application delivery and management jointly launched by Alibaba Cloud, Microsoft, and the cloud-native community. The design idea of OAM is to provide a unified, end-user-facing application definition model for any cloud infrastructure, including Kubernetes; KubeVela is the PaaS reference implementation of this unified model on Kubernetes.


KubeVela/OAM provides Kubernetes-oriented service abstraction and assembly: workloads and operational traits with different implementations can be abstracted and described uniformly, with a plug-in registration and discovery mechanism for dynamic assembly. The platform engineering team can extend new capabilities in a consistent way while keeping good interoperability with new application frameworks on Kubernetes. For application development and operations teams, it achieves separation of concerns, decoupling application definitions, operational capabilities, and infrastructure, and making the application delivery process more efficient, reliable, and automated.

The industry is also exploring other directions for cloud-native application models. For example, AWS's newly released Proton is a service for cloud-native application delivery. Proton reduces the complexity of container and serverless deployment and operations, and can be combined with GitOps to improve the automation and manageability of the whole application delivery process.

Knative, as supported on Alibaba Cloud Serverless Kubernetes, supports both serverless containers and functions for building event-driven applications, letting developers use a single programming model while the platform efficiently chooses different serverless compute underneath for optimized execution.

Ubiquitous security risks lead to changes in security architecture

1. DevSecOps becomes a key factor


The combination of agile development and programmable cloud infrastructure greatly improves the delivery efficiency of enterprise applications. But if security risk control is neglected in the process, the losses can be huge. Gartner has concluded that through 2025, 99% of cloud infrastructure security breaches will stem from user misconfiguration and mismanagement.

In the traditional software development process, security personnel step in for a security audit only after system design and development are complete, just before release and delivery. That process cannot keep up with rapid business iteration. "Shifting left on security" has therefore started to receive more attention: application designers and developers collaborate with security teams as early as possible and embed security practices seamlessly. Shifting security left not only reduces security risk but also reduces the cost of fixes. IBM researchers found that, compared with resolving a security issue at design time, fixing it during coding costs about 6 times as much, and during testing about 15 times as much.

The DevOps collaboration process has accordingly expanded into DevSecOps. It is, first of all, a change of philosophy and culture: security becomes everyone's responsibility, not just the security team's. Second, security issues are resolved as early as possible, moving security into the software design stage and reducing the overall cost of security governance. Finally, automated toolchains replace governance by people, achieving risk prevention, continuous monitoring, and timely response.

The technical prerequisite for DevSecOps is a verifiable, reproducible build and deployment process, which lets us continuously verify and improve the security of the architecture across test, staging, and production environments. We can combine the immutable infrastructure of cloud native with declarative policy management (Policy as Code) to put DevSecOps into practice. The figure below shows the most simplified DevSecOps pipeline for container applications.

[Figure: a minimal DevSecOps pipeline for container applications]

After code is committed, Alibaba Cloud's ACR image service can automatically scan the application image and sign it. When the application is deployed to a Container Service K8s cluster, a security policy verifies the image and rejects any image that fails verification. Likewise, if we change infrastructure through Infrastructure as Code, a scanning engine can check for risks before the change is applied, and terminate it with an alert if security risks are found.
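
The admission-time check can be sketched as a Kubernetes validating webhook. The following Go program is a minimal illustration using only the standard library; the trusted registry prefix is a made-up stand-in for a real signature verification step (for example, against Notary or cosign), which this sketch does not implement.

```go
// A minimal sketch of a validating admission webhook that rejects Pods
// whose images have not passed scanning/signing (prefix check stands in
// for real cryptographic signature verification).
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"strings"
)

// trustedPrefix stands in for "images that passed scanning and signing";
// the registry address is a made-up example.
const trustedPrefix = "registry.example.com/signed/"

func admit(w http.ResponseWriter, r *http.Request) {
	var review struct {
		Request struct {
			UID    string `json:"uid"`
			Object struct {
				Spec struct {
					Containers []struct {
						Image string `json:"image"`
					} `json:"containers"`
				} `json:"spec"`
			} `json:"object"`
		} `json:"request"`
	}
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	allowed, message := true, ""
	for _, c := range review.Request.Object.Spec.Containers {
		if !strings.HasPrefix(c.Image, trustedPrefix) {
			allowed = false
			message = fmt.Sprintf("image %q is not signed/verified", c.Image)
			break
		}
	}

	// Respond with an AdmissionReview verdict.
	resp := map[string]interface{}{
		"apiVersion": "admission.k8s.io/v1",
		"kind":       "AdmissionReview",
		"response": map[string]interface{}{
			"uid":     review.Request.UID,
			"allowed": allowed,
			"status":  map[string]string{"message": message},
		},
	}
	json.NewEncoder(w).Encode(resp)
}

func main() {
	http.HandleFunc("/validate", admit)
	// Admission webhooks must be served over TLS.
	log.Fatal(http.ListenAndServeTLS(":8443", "tls.crt", "tls.key", nil))
}
```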

Furthermore, once the application is deployed to production, any change must go through the same automated process. This minimizes the security risk caused by human misconfiguration. Gartner predicts that by 2025, 60% of enterprises will have adopted DevSecOps and immutable infrastructure practices, suffering 70% fewer security incidents than in 2020.

2. Service mesh accelerates the implementation of zero-trust security architecture

Distributed microservice applications are not only more complex to deploy and manage; their security attack surface is also enlarged. In a traditional three-tier architecture, security protection focuses on north-south traffic, whereas in a microservice architecture protecting east-west traffic is the bigger challenge. Under the traditional perimeter protection model, if one application is compromised through a security flaw, there is no control mechanism to stop the internal threat from "lateral movement".

[Figure: zero-trust cybersecurity. Source: https://www.nist.gov/blogs/taking-measure/zero-trust-cybersecurity-never-trust-always-verify]

"Zero trust" was first proposed by Forrester around 2010. Simply put, zero trust is to assume that all threats are possible, and do not trust any person/device/application inside or outside the network. It is necessary to reconstruct access control based on authentication and authorization. The trust foundation of the company guides the security system architecture from "network centralization" to "identity centralization"; it does not trust traditional network boundary protection, and replaces it with micro boundary protection.

Google is vigorously promoting cloud-native security and zero-trust architecture, for example through its BeyondProd methodology. Alibaba and Ant Group also introduced zero-trust concepts and practices during their own moves to the cloud. The key elements are:

  • A unified identity system: every service component in the microservice architecture gets its own independent identity.
  • A unified access-authorization model: calls between services must be authenticated by identity.
  • A unified access-control policy: access control for all services is managed centrally and in a standardized way.

Security architecture is a cross-cutting concern that spans every component of the IT architecture. If it is coupled to a specific microservice framework, any adjustment to the security architecture may require recompiling and redeploying every application service; worse, service implementers can bypass the security system. A service mesh can provide a loosely coupled, distributed zero-trust security architecture that is independent of the application implementation.

The following figure shows the security architecture of the Istio service mesh:

[Figure: Istio security architecture]

In this architecture:

  • Existing identity services can be used to provide identity, and identities in SPIFFE format are supported; the identity can be conveyed via an X.509 certificate or a JWT.
  • Security policies for identity, authentication, authorization, and service naming are managed uniformly through the service mesh control-plane API.
  • Envoy sidecars or edge proxies act as Policy Enforcement Points (PEPs) that execute the security policies, providing secure access control for both east-west and north-south service traffic. Each sidecar effectively gives its microservice an application-level firewall, and this network micro-segmentation minimizes the attack surface; a sketch of the identity check follows this list.
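
As a rough illustration of identity-centric access control, the Go sketch below extracts a SPIFFE ID from the URI SAN of a peer's X.509 certificate and compares it against an allow-list. The certificate file name and the allowed SPIFFE ID are invented for this example; in a real mesh, Envoy performs this check inside the mTLS handshake.

```go
// A minimal sketch of checking a workload's SPIFFE identity from the
// URI SAN of its X.509 certificate (illustrative only).
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"strings"
)

// allowedCaller is a hypothetical SPIFFE ID permitted to call this service.
const allowedCaller = "spiffe://cluster.local/ns/frontend/sa/web"

// spiffeID returns the first spiffe:// URI SAN found in the certificate.
func spiffeID(cert *x509.Certificate) (string, bool) {
	for _, uri := range cert.URIs {
		if uri.Scheme == "spiffe" {
			return uri.String(), true
		}
	}
	return "", false
}

func main() {
	pemBytes, err := os.ReadFile("peer.crt") // peer cert from the mTLS handshake
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM data found in peer.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}

	id, ok := spiffeID(cert)
	if !ok || !strings.EqualFold(id, allowedCaller) {
		fmt.Println("deny: caller identity", id, "is not authorized")
		return
	}
	fmt.Println("allow: authenticated caller", id)
}
```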

A service mesh decouples the network security architecture from applications, so the two can evolve and be managed independently, improving security compliance. Moreover, using the mesh's telemetry of service calls, the traffic between services can be analyzed for risk and defended automatically through data-driven, intelligent methods. Cloud-native zero-trust security is still in its early days, and we look forward to more security capabilities sinking into the infrastructure.

A new generation of software delivery methods is beginning to emerge

1. From Infrastructure as Code to Everything as Code

Infrastructure as Code (IaC) is a typical declarative API; it changes how enterprise IT architecture on the cloud is managed, configured, and collaborated on. With IaC tools, we can fully automate the creation, configuration, and assembly of cloud resources such as servers, networks, and databases.

We can extend the IaC concept to cover the entire cloud-native software delivery and operations process, that is, Everything as Code. The figure below shows the various models involved in an application environment, from infrastructure to the application model definition to global delivery methods and the security system; all of them can be created, managed, and changed declaratively.

[Figure: models in an application environment, from infrastructure to application definition to delivery and security]

In this way, we can provide flexible, robust, and automated full lifecycle management capabilities for distributed cloud-native applications:

  • All configuration can be version-managed, traceable, and auditable.
  • All configuration is maintainable, testable, understandable, and collaborative.
  • All configuration can be statically analyzed, making changes predictable (see the sketch after this list).
  • All configuration can be reproduced across environments, and any differences between environments must themselves be explicitly declared, improving consistency.
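
A tiny example of what "statically analyzable" can mean in practice: the Go sketch below checks a declarative application config for policy violations before it is ever applied. The config fields and rules are invented for illustration; real pipelines would use dedicated tools such as conftest or admission policies.

```go
// A minimal sketch of shift-left static validation of a declarative config.
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// AppConfig is a made-up declarative application config.
type AppConfig struct {
	Name     string `json:"name"`
	Replicas int    `json:"replicas"`
	Image    string `json:"image"`
}

// validate returns a list of policy violations found in the config.
func validate(c AppConfig) []string {
	var problems []string
	if c.Name == "" {
		problems = append(problems, "name must not be empty")
	}
	if c.Replicas < 2 {
		problems = append(problems, "replicas < 2: no high availability")
	}
	if !strings.Contains(c.Image, ":") || strings.HasSuffix(c.Image, ":latest") {
		problems = append(problems, "image must be pinned to an immutable tag or digest")
	}
	return problems
}

func main() {
	raw := []byte(`{"name":"web","replicas":1,"image":"nginx:latest"}`)
	var cfg AppConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}
	violations := validate(cfg)
	if len(violations) == 0 {
		fmt.Println("config OK")
		return
	}
	for _, p := range violations {
		fmt.Println("violation:", p) // a real pipeline would block the change here
	}
}
```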

2. Declarative CI/CD practices are gradually gaining attention

Going further, we can keep every environment configuration of an application in the source control system and deliver and change it toward the declared end state through automated processes. This is the core idea of GitOps.

GitOps was originally proposed by Alexis Richardson of Weaveworks, with the goal of providing a set of best practices for unified deployment, management, and monitoring of applications. In GitOps, everything from the application definition to infrastructure configuration is treated as source code and version-managed with Git; every release, approval, and change is recorded in Git history. Git thus becomes the source of truth: we can trace historical changes efficiently and roll back to any specified version easily. Combining GitOps with the declarative APIs and immutable infrastructure advocated by Kubernetes guarantees that the same configuration is reproducible, and avoids the unpredictable stability risks that configuration drift causes in production.
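
The mechanism can be sketched as one more control loop, this time converging the cluster on the revision recorded in Git. In the minimal Go sketch below, gitHeadRevision and applyManifests are hypothetical stand-ins for a Git client and a "kubectl apply" step; real agents such as Flux or Argo CD implement this far more robustly.

```go
// A minimal sketch of the GitOps reconciliation idea: the deployed
// revision is continuously driven toward the revision recorded in Git.
package main

import (
	"fmt"
	"time"
)

// gitHeadRevision would ask the Git server for the latest commit on main.
func gitHeadRevision() string { return "a1b2c3d" }

// applyManifests would run the equivalent of `kubectl apply` for that commit.
func applyManifests(rev string) error {
	fmt.Println("applying manifests from", rev)
	return nil
}

func main() {
	deployed := "" // revision currently running in the cluster
	for range time.Tick(30 * time.Second) {
		head := gitHeadRevision()
		if head == deployed {
			continue // cluster already matches Git, the source of truth
		}
		if err := applyManifests(head); err != nil {
			fmt.Println("apply failed, will retry:", err)
			continue
		}
		deployed = head
	}
}
```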


Combined with the DevSecOps automation described above, we can provide consistent test and staging environments before a service goes live, catch stability risks in the system earlier and faster, and verify canary (grayscale) release and rollback measures more thoroughly.

GitOps improves delivery efficiency, improves the developer experience, and improves the stability of distributed application delivery.

Over the past two years, GitOps has been widely used inside Alibaba Group and Ant Group and has become a standardized delivery method for cloud-native applications. GitOps is still at an early stage of development, and the open-source community is still improving its tools and best practices. In 2020, Weaveworks' Flagger project was merged into Flux; developers can use GitOps to implement progressive delivery strategies such as canary releases, blue-green releases, and A/B testing, controlling the blast radius of a release and improving its stability. At the end of 2020, the CNCF application delivery SIG officially announced the GitOps Working Group, and we look forward to the community further advancing standardization and implementations in this area.


3. Operations evolve from standardization and automation to data-driven intelligence

As microservice applications grow in scale, the complexity of diagnosing problems and optimizing performance explodes. Enterprises already have many IT service management tools for log analysis, performance monitoring, configuration management, and so on, but data silos between these systems prevent the end-to-end visibility that diagnosing complex problems requires. Many existing tools monitor and alert based on rules, and in an increasingly complex and dynamic cloud-native environment, rule-based approaches are too brittle, expensive to maintain, and hard to scale.

AIOps applies big-data analysis, machine learning, and related technologies to automate IT operations processes. By processing large volumes of logs and performance data and analyzing system and environment configuration, AIOps gains visibility into the internal and external dependencies of an IT system, strengthens foresight and insight into problems, and moves toward autonomous operations.

Benefiting from the cloud-native ecosystem, AIOps and Kubernetes will reinforce each other and further improve enterprise IT's cost optimization, fault detection, and cluster optimization. Several important boosters are at work here:

  • Standardization of observability: with community projects such as Prometheus, OpenTelemetry, and OpenMetrics, application observability is becoming more standardized and integrated across logging, monitoring, and distributed tracing, making the data sets for multi-metric and root-cause analysis much richer. The non-intrusive telemetry of a service mesh can surface richer business metrics without modifying existing applications, improving the accuracy and coverage of AIOps models.
  • Standardization of application delivery and management: Kubernetes' declarative APIs and end-state-oriented delivery provide a more consistent management and operations experience, and the non-intrusive traffic management of a service mesh lets us manage and automate application operations transparently.

By combining Alibaba Group's DevOps platform Yunxiao ("cloud effect") with the container platform's release and change system, applications can achieve "unattended release". During a release, the system continuously collects metrics including system data, log data, and business data, and uses algorithms to compare the metrics before and after the release. Once a problem is detected, the release is blocked or even rolled back automatically. With this technology, any development team can release safely without fearing that an online change will cause a major failure.
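
In spirit, the decision logic is a before/after comparison against a threshold, as in the hedged Go sketch below. The metric source, sample values, and threshold are invented for illustration; production systems use far more sophisticated multi-metric statistical checks.

```go
// A minimal sketch of the "unattended release" idea: compare a key metric
// before and after a rollout and block/roll back when it degrades.
package main

import "fmt"

// errorRate would be read from a monitoring system (e.g. Prometheus);
// the sample values here are made up.
func errorRate(window string) float64 {
	samples := map[string]float64{"before": 0.4, "after": 2.9} // percent
	return samples[window]
}

func main() {
	const maxDegradation = 1.0 // allowed error-rate increase, in percentage points

	before, after := errorRate("before"), errorRate("after")
	fmt.Printf("error rate: before=%.1f%% after=%.1f%%\n", before, after)

	if after-before > maxDegradation {
		fmt.Println("regression detected: blocking release and rolling back")
		// a rollback hook would restore the previous revision here
		return
	}
	fmt.Println("metrics healthy: promoting release")
}
```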

Cloud-native cost optimization gradually attracts attention

As companies migrate more core business from data centers to the cloud, more and more of them urgently need to budget, meter, and optimize their cloud environments. Moving from a fixed financial cost model to a variable, pay-as-you-go cloud financial model is an important change of mindset and technique, yet most companies do not have a clear understanding of, or technical means for, cloud financial management. In the FinOps 2020 survey report, nearly half of respondents (49%) had little or no automation for managing cloud spending. The concept of FinOps has become popular to help organizations better understand cloud costs and IT returns.

FinOps is both a way of managing cloud finance and a transformation of the enterprise IT operating model, with the goal of improving an organization's understanding of cloud costs so that it can make better decisions. In August 2020, the Linux Foundation announced the FinOps Foundation to advance the discipline of cloud financial management through best practices, education, and standards. Cloud vendors are gradually strengthening their FinOps support to help enterprise finance processes adapt to the variability and dynamism of cloud resources; for example, AWS Cost Explorer and Alibaba Cloud's Expense Center help companies analyze and allocate costs. See https://developer.aliyun.com/article/772964 for details.

More and more companies manage and consume cloud infrastructure through the Kubernetes platform, using containers to increase deployment density and application elasticity and thereby reduce overall computing costs. But Kubernetes' dynamism introduces new complexity for resource metering and cost allocation: because multiple containers can be dynamically deployed on the same virtual machine instance and elastically scaled on demand, we cannot simply map underlying cloud resources one-to-one onto container applications. In November 2020, the CNCF and the FinOps Foundation released a white paper on Kubernetes cloud financial management, "FinOps for Kubernetes: Unpacking container cost allocation and optimization", to help everyone better understand the relevant financial management practices.
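
A common practice in Kubernetes cost allocation (implemented by tools such as Kubecost) is to split a shared node's cost in proportion to each workload's resource requests. The Go sketch below illustrates the arithmetic with made-up prices and CPU requests; real showback/chargeback also weighs memory, usage versus requests, and idle-capacity policy.

```go
// A minimal sketch of request-based cost allocation for containers that
// share one VM: each workload is charged in proportion to its CPU request.
package main

import "fmt"

func main() {
	const vmHourlyCost = 0.40 // USD per hour for the underlying VM (assumption)
	const vmCPUs = 8.0        // vCPUs on the VM

	// CPU requests (in vCPUs) of the pods scheduled onto this VM (made up).
	requests := map[string]float64{
		"checkout": 2.0,
		"search":   1.5,
		"batch":    0.5,
	}

	var requested float64
	for _, r := range requests {
		requested += r
	}
	idle := vmCPUs - requested // unallocated capacity is a cost-optimization target

	for name, r := range requests {
		share := r / vmCPUs
		fmt.Printf("%-8s %.2f vCPU -> $%.3f/h\n", name, r, share*vmHourlyCost)
	}
	fmt.Printf("idle     %.2f vCPU -> $%.3f/h (waste to optimize)\n",
		idle, idle/vmCPUs*vmHourlyCost)
}
```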

Alibaba Cloud Container Service also builds many cost-management and optimization best practices into the product. Many customers care deeply about how to optimize cost on Kubernetes with elastic resources. In general, we recommend that companies understand their business types well, divide their K8s clusters into different node pools, and find a balance among cost, stability, and performance:


  • Steady daily business: for predictable, relatively constant load, we can use subscription-based bare metal or large virtual machines to improve resource utilization and reduce cost.
  • Planned short-term or cyclical business: for short business peaks such as the Double 11 shopping festival or New Year's Eve events, or for periodic load such as monthly settlement, we can use pay-as-you-go virtual machines or elastic container instances to absorb the peak.
  • Unexpected bursts: for breaking-news hotspots or ad-hoc computing tasks, elastic container instances can easily scale out by thousands of instances per minute.

For more on Kubernetes capacity planning, see "Questions about the Soul of Kubernetes Planning".

Summary

Over the past decade, several major technology trends, spanning infrastructure, Internet application architecture upgrades, and agile R&D processes, have converged; together with innovations such as containers, serverless, and service mesh, they gave birth to the concept of cloud native and drive its development. Cloud native is redefining computing infrastructure, application architecture, and organizational processes; it is the historical inevitability of cloud computing's development. Thanks to everyone walking through the cloud-native era with us; let us explore and define the future of cloud native together.

Easter egg: the titles of the three articles in this series pay homage to the Star Wars saga. Did you spot it?

Team recruitment

The Alibaba Cloud Container Service team is hiring! Internal transfers, external candidates, and referrals are all welcome. Let's build a passionate cloud-native future together! There are openings in Hangzhou, Beijing, and Shenzhen. Send your resume to: [email protected].

One book to understand the full cloud-native transformation of Alibaba's core systems for Double 11

Click to download the "Cloud Native Large-scale Application Landing Guide" for free. From technology-stack upgrades to capability breakthroughs to large-scale practice during the shopping festival, one book explains the complete cloud-native transformation of Alibaba's core systems for Double 11!

Original link: https://developer.aliyun.com/article/782176?

