Cloud Native Infrastructure, Chapter 7 (2)

Application Life Cycle

Life cycles for cloud native applications are no different than traditional applications, except their stages should be managed by software.

This chapter is not intended to explain all the patterns and options involved in managing applications. We will briefly discuss a few stages that particularly benefit from running cloud native applications on top of cloud native infrastructure: deploy, run, and retire.

These topics are not all inclusive of every option, but many other books and articles exist to explore the options, depending on the application’s architecture, language, and chosen libraries.

Deploy

Deployments are one area where applications rely on infrastructure the most. While there is nothing stopping an application from deploying itself, there are still many other aspects that the infrastructure manages.

How you do integration and delivery are topics we will not address here, but a few practices in this space are clear. Application deployment is more than just taking code and running it.
Cloud native applications are designed to be managed by software in all stages. This includes ongoing health checks as well as initial deployments. Human bottlenecks should be eliminated as much as possible in the technology, processes, and policies.
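
To make that concrete, the sketch below shows what an ongoing health check endpoint can look like in Go; the /healthz path, the port, and the checkDependencies helper are illustrative assumptions, not anything prescribed here.

    // A minimal health-check endpoint sketch; the path, port, and
    // checkDependencies helper are hypothetical.
    package main

    import "net/http"

    // checkDependencies stands in for real checks (database, queues, downstream APIs).
    func checkDependencies() bool { return true }

    func main() {
        http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
            if checkDependencies() {
                w.WriteHeader(http.StatusOK)
                w.Write([]byte("ok"))
                return
            }
            w.WriteHeader(http.StatusServiceUnavailable)
        })
        http.ListenAndServe(":8080", nil)
    }

Orchestration software can poll such an endpoint during the initial deployment and continuously afterward, replacing instances that stop reporting healthy.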

Deployments for applications should be automated, self-service, and, if under active development, frequent. They should also be tested, verified, and uneventful.

Replacing every instance of an application at once is rarely the solution for new versions and features. New features are “gated” behind configuration flags, which can be selectively and dynamically enabled without an application restart. Version upgrades are partially rolled out, verified with tests, and, when all tests pass, rolled out in a controlled manner.
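
As one hedged illustration of the "gated behind configuration flags" idea, the sketch below keeps flag values in an in-process store that a coordinator can update at runtime; the type and method names are made up for this example.

    // A hypothetical sketch of gating features behind dynamically updatable flags.
    package flags

    import "sync"

    // Store holds flag values that a coordinator can update at runtime,
    // so features can be toggled without restarting the application.
    type Store struct {
        mu    sync.RWMutex
        flags map[string]bool
    }

    func NewStore() *Store { return &Store{flags: make(map[string]bool)} }

    // Set is called by whatever pushes configuration (an API, a coordinator, etc.).
    func (s *Store) Set(name string, enabled bool) {
        s.mu.Lock()
        defer s.mu.Unlock()
        s.flags[name] = enabled
    }

    // Enabled is checked at request time, so a flag flip takes effect immediately.
    func (s *Store) Enabled(name string) bool {
        s.mu.RLock()
        defer s.mu.RUnlock()
        return s.flags[name]
    }

A request handler would check something like store.Enabled("new-checkout-flow") before taking the new code path, so flipping the flag changes behavior without a restart or a redeploy.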

When new features are enabled or new versions deployed, there should exist mechanisms to control traffic toward or away from the application (see Appendix A). This can limit outage impact and allows slow rollouts and faster feedback loops for application performance and feature usage.

The infrastructure should take care of all details of deploying software. An engineer can define the application version, infrastructure requirements, and dependencies, and the infrastructure will drive toward that state until it has satisfied all requirements or the requirements change.
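
For example, the declaration an engineer hands to the infrastructure might look something like the following sketch; the field names and values are hypothetical, not a specific tool's schema.

    // A hypothetical declaration of desired state; field names are illustrative only.
    package deploy

    // DesiredState is what an engineer declares; the infrastructure's job is to
    // converge the running system toward it and keep it there.
    type DesiredState struct {
        Application string   // e.g., "payments"
        Version     string   // e.g., "1.4.2"
        Replicas    int      // how many instances should be running
        CPURequest  string   // e.g., "500m"
        MemRequest  string   // e.g., "256Mi"
        DependsOn   []string // services that must exist first
    }

    // Example declaration. Nothing here says how to deploy the application;
    // that is the infrastructure's responsibility.
    var Payments = DesiredState{
        Application: "payments",
        Version:     "1.4.2",
        Replicas:    3,
        CPURequest:  "500m",
        MemRequest:  "256Mi",
        DependsOn:   []string{"postgres"},
    }

Nothing in the declaration says how to reach that state; that is left to the infrastructure, which keeps converging on it (see the reconciler sketch later in this chapter).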

Run

Running the application should be the most uneventful and stable stage of an application’s life cycle. The two most important aspects of running software are discussed in Chapter 1: observability to understand what the application is doing, and operability to be able to change the application as needed.

We already went into detail in Chapter 1 about observability for applications by reporting health and telemetry data, but what do you do when things don’t work as intended? If an application’s telemetry data says it’s not meeting the SLO, how can you troubleshoot and debug the application?

With cloud native applications, you should not SSH into a server and dig through logs. It may even be worth considering if you need SSH, log files, or servers at all.

You still need application access (API), log data (cloud logging), and servers somewhere in the stack, but it’s worth going through the exercise to see if you need the traditional tools at all. When things break, you need a way to debug the application and infrastructure components.

When debugging a broken system, you should first look at your infrastructure tests, as explained in Chapter 5. Testing should expose any infrastructure components that are not configured properly or are not providing the expected performance.

Just because you don’t manage the underlying infrastructure doesn’t mean the infrastructure cannot be the cause of your problems. Having tests to validate expectations will ensure your infrastructure is performing how you expect.

After infrastructure has been ruled out, you should look to the application for more information. The best places to turn for application debugging are application performance management (APM) and possibly distributed application tracing via standards such as OpenTracing.

OpenTracing examples, implementation, and APM are out of scope for this book. As a very brief overview, OpenTracing allows you to trace calls throughout the application to more easily identify network and application communication problems. An example visualization of OpenTracing can be seen in Figure 7-1. APM adds tools to your applications for reporting metrics and faults to a collection service.

Figure 7-1. OpenTracing visualization
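
For orientation, here is a hedged sketch of instrumenting two operations with the opentracing-go library; the operation names and tag are invented, and a real setup would register a concrete tracer (for example, Jaeger or Zipkin) in place of the default no-op tracer.

    // A minimal OpenTracing sketch using the opentracing-go API.
    package main

    import (
        "context"

        opentracing "github.com/opentracing/opentracing-go"
    )

    func handleCheckout(ctx context.Context) {
        // Start (or continue) a trace for this unit of work.
        span, ctx := opentracing.StartSpanFromContext(ctx, "checkout")
        defer span.Finish()

        span.SetTag("cart.items", 3) // illustrative tag

        chargeCard(ctx) // child work is recorded under the same trace
    }

    func chargeCard(ctx context.Context) {
        span, _ := opentracing.StartSpanFromContext(ctx, "charge-card")
        defer span.Finish()
        // ... call the payment service; the span records timing for this step.
    }

    func main() {
        handleCheckout(context.Background())
    }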

When tests and tracing still do not expose the problem, sometimes you just need to enable more verbose logging on the application. But how do you enable debugging without destroying the problem?

Configuration at runtime is important for applications, but in a cloud native environment, configuration should be dynamic without application restarts. Configuration options are still implemented via a library in the application, but flag values should have the ability to dynamically change through a centralized coordinator, application API calls, HTTP headers, or a myriad of ways.
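
One possible shape for that dynamic behavior is sketched below: values are polled from a central coordinator endpoint and swapped in atomically, so readers always see the latest configuration without a restart. The endpoint, polling interval, and JSON format are assumptions for the example.

    // A sketch of dynamic configuration: values are polled from a central
    // (hypothetical) endpoint and swapped in atomically; no restart is needed.
    package config

    import (
        "encoding/json"
        "net/http"
        "sync/atomic"
        "time"
    )

    var current atomic.Value // holds map[string]string

    // Poll refreshes configuration from a central coordinator at the given interval.
    func Poll(url string, every time.Duration) {
        for {
            if resp, err := http.Get(url); err == nil {
                var values map[string]string
                if json.NewDecoder(resp.Body).Decode(&values) == nil {
                    current.Store(values) // readers see the new values immediately
                }
                resp.Body.Close()
            }
            time.Sleep(every)
        }
    }

    // Get returns the current value of a configuration key.
    func Get(key string) string {
        if values, ok := current.Load().(map[string]string); ok {
            return values[key]
        }
        return ""
    }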

Two examples of dynamic configuration are Netflix’s Archaius and Facebook’s Gatekeeper. Justin Mitchell, a former Facebook engineering manager, shared in a Quora post that:
[Gatekeeper] decoupled feature releasing from code deployment. Features might be released over the course of days or weeks as we watched user metrics, performance, and made sure services were in place for it to scale.

Allowing dynamic control over application configuration enables more control over exposed features and better test coverage of deployed code. Just because pushing new code is easy doesn’t mean it is the right solution for every situation.

Infrastructure can help solve this problem and enable more flexible applications by coordinating when features are enabled and routing traffic based on advanced network policies. This pattern also allows finer-grained controls and better coordination of roll-out or roll-back scenarios.

In a dynamic, self-service environment, the number of applications that get deployed will grow rapidly. You need to make sure you have an easy way to dynamically debug applications in a self-service model similar to the one used to deploy them.

As much as engineers love pushing new applications, it is conversely difficult to get them to retire old applications. Even still, it is a crucial stage in an application’s life cycle.

Retire

Deploying new applications and services is common in a fast-moving environment. Retiring applications should be self-service in the same way as creating them.

If new services and resources are deployed and monitored automatically, they should also be retired under the same criteria. Deploying new services as quickly as possible with no removal of unused services is the easiest way to accrue technical debt.

Identifying services and resources that should be retired is business specific. You can use empirical data from your application’s telemetry measurements to know if an application is being used, but the decision to retire applications should be made by the business.

Infrastructure components (e.g., VM instances and load balancer endpoints) should be automatically cleaned up when not needed. One example of automatic component cleanup is Netflix’s Janitor Monkey. The company explains in a blog post:

Janitor Monkey determines whether a resource should be a cleanup candidate by applying a set of rules on it. If any of the rules determines that the resource is a cleanup candidate, Janitor Monkey marks the resource and schedules a time to clean it up.
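
The rule-based idea can be sketched roughly as follows; this is an illustration of the pattern the quote describes, not Janitor Monkey's actual code.

    // A sketch of rule-based cleanup in the spirit of Janitor Monkey;
    // the types and rules here are illustrative only.
    package cleanup

    import "time"

    // Resource is a minimal view of an infrastructure resource.
    type Resource struct {
        ID       string
        LastUsed time.Time
        Tags     map[string]string
    }

    // Rule decides whether a resource is a cleanup candidate.
    type Rule interface {
        IsCandidate(r Resource) bool
    }

    // UnusedFor flags resources idle for longer than MaxIdle.
    type UnusedFor struct{ MaxIdle time.Duration }

    func (u UnusedFor) IsCandidate(r Resource) bool {
        return time.Since(r.LastUsed) > u.MaxIdle
    }

    // Mark returns resources that any rule flags; a scheduler would then
    // pick a time to clean them up.
    func Mark(resources []Resource, rules []Rule) []Resource {
        var candidates []Resource
        for _, r := range resources {
            for _, rule := range rules {
                if rule.IsCandidate(r) {
                    candidates = append(candidates, r)
                    break
                }
            }
        }
        return candidates
    }

A real system would also notify resource owners and wait out a grace period before actually deleting anything.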

The goal in all of these application stages is to have infrastructure and software manage the aspects that would traditionally be managed by a human. Instead of writing automation scripts that are run once by a human, we employ the reconciler pattern combined with component metadata to constantly run and make high-level decisions about the actions that need to be taken based on current context.
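
A minimal sketch of that reconciler loop is below; the interfaces are hypothetical, but the shape (observe current state, compare it to desired state, act, repeat) is the pattern being described.

    // A minimal sketch of the reconciler pattern: continuously observe, diff, and act.
    // The interfaces and names are hypothetical.
    package reconcile

    import "time"

    type State struct {
        Version  string
        Replicas int
    }

    // Cluster abstracts reading current state and mutating it toward desired state.
    type Cluster interface {
        Observe(app string) (State, error)
        Apply(app string, desired State) error
    }

    // Loop runs forever, driving the application toward the desired state even as
    // the desired state or the environment changes.
    func Loop(c Cluster, app string, desired func() State, every time.Duration) {
        for {
            current, err := c.Observe(app)
            want := desired()
            if err == nil && current != want {
                // Act only on the difference; the decision is made from current
                // context, not from a one-time script run by a human.
                _ = c.Apply(app, want)
            }
            time.Sleep(every)
        }
    }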

Application life cycle stages are not the only aspects where applications depend on infrastructure. There are also some fundamental services for which applications in every stage will depend on infrastructure. We will look at some of the supporting services and APIs infrastructure provides to applications in the next section.
