1 Introduction
1.1 Overview
In the ever-evolving software development environment, the widespread adoption of microservices architecture represents a major paradigm shift in the software industry. Microservices architecture is a method of splitting a large application into smaller independent services, each focused on fulfilling a specific function and responsibility. Microservices enhance the modularity, scalability, and agility of complex applications, so they can better cope with the complexity and maintenance difficulties of large applications (Thönes 2015). Microservices architecture is becoming increasingly popular in the software industry because it helps organizations manage technical debt, improve software delivery efficiency, and enable rapid product change and innovation (Vural, Koyuncu, and Guney 2017). At the same time, Kubernetes orchestration has become a foundational technology for operating microservices architectures. Kubernetes is managed as an open-source project in the Cloud Native Computing Foundation (CNCF) and is supported by many cloud service providers and software vendors (Kubernetes 2019). Kubernetes is a distributed clustering technology that enables container-based systems to be managed declaratively through APIs, which gives it many key characteristics, including idempotency, fault tolerance, single purpose, and service discovery (Kubernetes 2019). The combination of microservices and Kubernetes orchestration has had a significant impact on scalability and resource management, making both critical elements in the development and operation of modern software applications.
1.2 Topics and Context
The topic of this exploration consists of two technologies: microservices architecture and Kubernetes orchestration. This paper mainly studies their characteristics in software development, deployment, and management, especially their major contributions to scalability and resource management. Kubernetes improves the scalability of large software systems by automatically repairing failures through fault recovery operations (Abdollahi Vayghan, Saied, Toeroe, and Khendek 2018), and the adoption of microservices architecture makes it easier to meet these scalability requirements. Meanwhile, microservices architecture and Kubernetes orchestration together help manage and allocate resources efficiently. Each service in a microservices architecture can be deployed and scaled independently, so resources can be allocated according to the requirements of each service, and Kubernetes implements intelligent resource scheduling, resource limits, and quotas, which allow the resources of the whole system to be managed more effectively (Ding, Wang, and Jiang 2023).
In the rest of the paper, Chapter 2 details the impact of microservices architecture and Kubernetes orchestration on scalability and resource management and illustrates their practical application through prominent use cases. These existing examples of successful implementation also confirm the main research content of the paper. In Chapter 3, we objectively summarize the applicability, advantages, and disadvantages of this topic. Finally, the project management section in Chapter 4 includes Gantt charts, a detailed timeline, and a retrospective view covering the initial plan, milestones, challenges, solutions, and lessons learned from the project. This part mainly reflects on which aspects of the research were effective and on the direction of future work.
1.3 Motivation
The motivation for this topic is the urgent need for software applications that not only perform reliably under varying loads but can also be developed and deployed quickly and efficiently. In an era of constantly changing user requirements and software features, highly scalable services and effective resource management are indispensable in software development. Microservices architecture provides a promising solution to the scalability challenge by supporting independent scaling of application components. However, managing such distributed systems also introduces complexity, especially in deployment, resource provisioning, and load balancing. Kubernetes orchestration addresses these challenges through automated, containerized application management, ensuring that scalability is not compromised by complexity or resource inefficiency. This exploration aims to provide a comprehensive understanding of these technologies, confirmed by analysis of existing cases. Ultimately, adopting them can drive the development of more dynamic, robust, and efficient applications.
2 Core Topic
2.1 Summary
2.1.1 History
To better understand this topic, we first review the evolution and history of microservice architecture and Kubernetes.
Microservice Architecture
As shown in Figure 1 below, microservice architecture has evolved through three stages.
Figure 1 - The evolution of microservice architecture
The early traditional architecture is monolithic. Monolithic, in this context, means "composed all in one piece": all of the code logic is integrated into a single unit. However, as the business becomes more complex, the code becomes tightly coupled and entangled, increasing the maintenance cost of the project. At this point, the multitier architecture was introduced. One classic example is the 3-tier architecture, which comprises the presentation layer, responsible for displaying the user interface; the logic layer, which handles client requests; and the data layer, used for storing data.
However, the multitier architecture is still a centralized way to design applications, and it does not effectively address the issues arising from growing software complexity. We can see from Figure 2 that as software complexity increases, the difficulty of scaling, the cost of evolution, and the difficulty of maintenance all grow exponentially. The product development of big-tech companies like Netflix, Amazon, and Google illustrates this point well. This is why microservice architecture emerged.
Figure 2 - Software complexity growth curve
We can see from Figure 3 that the logic layer and data layer of the 3-tier architecture are further subdivided into smaller pieces called microservices. Each microservice is responsible for handling one business capability end-to-end and is independent of the other microservices, which greatly decouples the business logic. As a result, different modules can be managed by different teams.
Figure 3 - Microservice Architecture
But what happens when the microservices themselves become more and more numerous? Suppose there are hundreds or even thousands of microservices; locating a bug can then become extremely difficult. That is why we need containerization technology such as Docker, and why Kubernetes is needed to solve the problems beyond Docker's scope: deployment, auto-scaling, efficient resource management, self-healing, and so on [7]. The specific impacts of auto-scaling and resource management are elaborated in detail in the next subsection.
Deployment - Kubernetes
In the early days of monolithic architecture, applications ran on virtual machines. When a program needed to scale, it was scaled horizontally by adding more virtual machines. In this architecture, there was a pilot role responsible for traffic prediction, provisioning the number of virtual machines, auto-scaling, and checking health status using metrics such as server response time [8].
Although virtual machines alleviated the problem of underutilized physical server resources, they still had drawbacks. At this early stage, each application occupied its own virtual machine, which consumed a lot of disk space, RAM, and CPU, was slow to start up, and required a license for each operating system, potentially incurring additional costs. Therefore, containerization technology emerged.
A container packages an application together with all the files, configurations, and dependencies necessary to run it, so the container can run in any environment that provides a container engine. This makes it possible to run many applications on a single virtual machine or physical server [9].
At this point, the existing pilot role is no longer sufficient. A scheduling algorithm is needed to decide where new containers are created based on the resource utilization of each virtual machine. The system also needs to manage the lifecycle of replicas independently, so that when one virtual machine goes down, all of the replicas on it are rescheduled elsewhere. Moreover, handling microservices written in different programming languages, maintaining resource balance and isolation between containers, removing and adding virtual machines, and discovering and communicating with other services all require corresponding solutions. These functionalities are what a container orchestrator provides, with Kubernetes being the most prominent representative. Figure 4 illustrates what a container orchestrator looks like.
Figure 4 - Container Orchestrator
2.1.2 Principles
In this section, we illustrate how Kubernetes impacts microservices in terms of scaling and resource management.
From Figure 5, we can see the architecture of Kubernetes. It consists of two major parts. The first is the control plane, which is responsible for managing the cluster's state. The second is a set of worker nodes that run containerized application workloads. Containerized applications run in pods, where a pod is the smallest deployable unit in Kubernetes. A pod can host multiple containers and provides shared storage and networking for those containers. The creation and scaling of pods are controlled by the control plane.
The control plane consists of four major components: the API Server, etcd, the Scheduler, and the Controller Manager. The API Server is the primary interface between the control plane and the cluster; it exposes a RESTful API that allows clients to submit requests to control the cluster. etcd is a distributed key-value store that holds the cluster's persistent state, such as available resources and cluster health. The Scheduler is responsible for placing pods onto worker nodes in the cluster: based on the resource utilization of each node, it decides where new pods should run, which is crucial for implementing application auto-scaling. The Controller Manager runs controllers that manage the state of the cluster, such as the replication controller and the deployment controller [10].
Figure 5 - Kubernetes Architecture
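To make the declarative model concrete, the sketch below shows a minimal Deployment manifest of the kind a client would submit to the API Server. The service name is borrowed from the Orders API service in Figure 6, and the container image and port are hypothetical; the point is only to illustrate that the client declares a desired state (three replicas) and the control plane continuously reconciles the cluster toward it:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 3                        # desired state: three identical pods
  selector:
    matchLabels:
      app: orders                    # manage the pods carrying this label
  template:                          # pod template stamped out for each replica
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: example/orders:1.0  # hypothetical container image
          ports:
            - containerPort: 8080    # port the service listens on

Submitting this manifest (for example with kubectl apply -f) records the desired state in etcd; the Controller Manager then creates the pods and the Scheduler places each one on a suitable worker node.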
Building on this architecture, Kubernetes provides the Horizontal Pod Autoscaler (HPA) for pod-level scaling and the Cluster Autoscaler for node-level scaling. Their mechanisms and workflows, along with resource allocation, are described in detail in Section 2.2.
After microservices are containerized, they are ultimately deployed into pods. Based on developers' configurations, such as the number of replicas needed for each microservice and whether pods should be colocated, Kubernetes automatically handles resource allocation and scaling. Figure 6 provides a simple illustration of a microservices system deployed on Kubernetes. There are four microservices in the diagram: the Carts API service, Checkout API service, UI service, and Orders API service. Each service has its own namespace [11].
Figure 6 - Microservices Deployed in Kubernetes
2.1.3 Impact
Dynamic Scaling
Kubernetes dynamically scales microservices based on real-time demand, automatically adjusting the number of pod replicas so that each service handles peak loads without performance degradation while avoiding wasted resources during quiet periods.
Resource Allocation and Management
Through resource quotas, requests, and limits, Kubernetes distributes CPU, memory, and storage fairly among services, prevents resource contention, and provides the isolation and predictability needed to keep the overall system stable and cost-efficient. Both impacts, together with self-healing, are examined in depth in Section 2.2.
2.2 Application
2.2.1 Dynamic Scaling in Kubernetes
Dynamic scaling is one of the key features that Kubernetes brings to microservices architecture. Kubernetes can dynamically scale microservices based on real-time needs. It automatically adjusts the number of instances or pods of a microservice to efficiently handle different workloads. This autoscaling feature ensures optimal resource utilization and consistent performance during peak loads.
- Dynamic Scaling Mechanism: Kubernetes utilizes the Horizontal Pod Autoscaler (HPA) to automatically adjust the number of replica pods in a deployment based on observed CPU utilization, memory usage, or other custom metrics [12]. HPA continuously monitors these metrics and scales the pods up or down to maintain optimal performance. Beyond pod-level scaling, Kubernetes also offers the Cluster Autoscaler, which automatically adjusts the size of the cluster by adding or removing nodes to accommodate the workload. This ensures that the underlying infrastructure scales seamlessly with the microservices deployments. A minimal configuration sketch follows.
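As a minimal sketch, assuming a hypothetical carts Deployment (named after the Carts API service in Figure 6) already exists in a carts namespace, the manifest below uses the stable autoscaling/v2 API to keep the deployment between 2 and 10 replicas, adding or removing pods to hold average CPU utilization near 70%; all names and thresholds are illustrative:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: carts-hpa
  namespace: carts                 # each microservice has its own namespace, as in Figure 6
spec:
  scaleTargetRef:                  # the workload whose replica count HPA manages
    apiVersion: apps/v1
    kind: Deployment
    name: carts
  minReplicas: 2                   # never drop below two replicas
  maxReplicas: 10                  # upper bound protects cluster resources
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale to hold ~70% average CPU across pods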
- Dynamic Scaling Workflow: Dynamic scaling has two workflows: scaling up and scaling down. When the demand for a particular microservice increases, Kubernetes increases the number of pod replicas to handle the increased load, ensuring that the service can handle more requests without affecting performance. Conversely, when demand decreases, Kubernetes scales down the number of pod replicas to save resources and reduce costs. This adaptive scaling ensures efficient resource utilization and cost-effectiveness; the commands below show how the process can be observed.
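In practice, the scaling workflow can be watched from the command line. Assuming the illustrative carts-hpa object sketched above has been created, the following standard kubectl commands report the current and target metric values, the replica count, and the scaling actions HPA has taken:

kubectl get hpa carts-hpa -n carts          # current/target CPU and replica count
kubectl describe hpa carts-hpa -n carts     # recent scale-up/scale-down events
kubectl get pods -n carts --watch           # watch replicas appear and disappear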
Dynamic scaling is a critical feature of Kubernetes that significantly enhances the scalability and efficiency of microservices architecture. By enabling adaptive scaling based on real-time demand, Kubernetes ensures optimal resource utilization, cost-effectiveness, and resilience of microservices deployments. Organizations adopting Kubernetes can benefit from improved performance, reduced operational costs, and enhanced reliability, making it an essential tool for managing modern microservices-based applications.
2.2.2 Resource Allocation and Management in Kubernetes
Resource allocation and management are fundamental aspects of maintaining a stable and efficient microservices architecture. Kubernetes offers sophisticated resource management capabilities, allowing developers and administrators to define and manage resource requirements and limits for each microservice running on the cluster. By utilizing resource quotas, requests, and limits, Kubernetes ensures fair distribution and prevents resource contention among different services, thereby optimizing the overall performance and stability of the system.
Kubernetes allows administrators to set resource quotas at the namespace level, limiting the amount of CPU, memory, and storage that can be consumed by the services within a namespace, which prevents any single service or user from monopolizing cluster resources. Developers can specify resource requests and limits for individual containers or pods, allowing Kubernetes to schedule and allocate resources more effectively: resource requests indicate the minimum amount of resources required for a container to run, while resource limits define the maximum amount a container can consume [13]. A configuration sketch follows.
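As a hedged sketch of what these settings look like, the two manifests below (separated by ---) first cap the total resources of a hypothetical carts namespace with a ResourceQuota and then declare requests and limits for a single container; all names and numbers are illustrative rather than taken from a real deployment:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: carts-quota
  namespace: carts
spec:
  hard:
    requests.cpu: "4"       # total CPU all pods in the namespace may request
    requests.memory: 8Gi
    limits.cpu: "8"         # total CPU ceiling across the namespace
    limits.memory: 16Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: carts
  namespace: carts
spec:
  containers:
    - name: carts
      image: example/carts:1.0    # hypothetical image
      resources:
        requests:                 # minimum guaranteed; used by the Scheduler for placement
          cpu: 250m               # 0.25 of a CPU core
          memory: 256Mi
        limits:                   # hard ceiling enforced at runtime
          cpu: 500m
          memory: 512Mi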
Kubernetes ensures that one microservice does not impact the performance of others by consuming excessive resources. By setting resource limits and quotas, Kubernetes prevents resource hogging and ensures that each service gets its fair share of resources, thereby maintaining performance isolation. In addition, Kubernetes facilitates better resource planning and management by allowing developers to set clear resource boundaries for each service. By specifying resource requests and limits, organizations can accurately predict and allocate resources, ensuring that the system can handle the workload efficiently. Kubernetes enables efficient utilization of hardware resources by dynamically adjusting resource allocations based on the actual needs of microservices. By continuously monitoring resource usage and workload, Kubernetes optimizes resource allocation to minimize waste and maximize efficiency.
Resource allocation and management are critical components of maintaining a stable and efficient microservices architecture. Kubernetes offers advanced resource management capabilities, including resource quotas, requests, and limits, to ensure fair distribution and prevent resource contention among different services. By providing isolation, predictability, and optimization, Kubernetes enables organizations to effectively manage and allocate resources, leading to improved performance, stability, and cost-efficiency of microservices deployments. Adopting Kubernetes for resource management can significantly benefit organizations by optimizing resource usage, enhancing system resilience, and improving operational efficiency in managing microservices-based applications.
2.2.3 Self-Healing and Auto-Recovery in Kubernetes
Self-healing and auto-recovery are essential features of Kubernetes that contribute to the high availability and resilience of microservices architectures. Kubernetes offers built-in mechanisms to automatically detect, recover, and manage failures within the system, ensuring continuous availability, reliability, and optimal performance of microservices.
Kubernetes manages the lifecycle of pods by automatically restarting failed containers. If a container fails or becomes unresponsive, Kubernetes restarts the container to restore the service to a healthy state. In case of node failures or issues, Kubernetes can automatically reschedule the affected workloads to healthy nodes within the cluster, ensuring continuous availability and preventing service disruptions. Kubernetes uses ReplicaSets and ReplicationControllers to ensure that the desired number of pod replicas are always running. If a pod fails, Kubernetes creates a new pod to maintain the desired replica count, ensuring high availability and reliability.
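Restart behavior is driven by health checks that developers declare on their containers. The sketch below, using a hypothetical checkout service and an assumed /healthz endpoint, shows a liveness probe: if the probe fails three times in a row, the kubelet kills the container and, because restartPolicy is Always, starts a fresh one [14]:

apiVersion: v1
kind: Pod
metadata:
  name: checkout
spec:
  restartPolicy: Always            # kubelet restarts failed containers (the default)
  containers:
    - name: checkout
      image: example/checkout:1.0  # hypothetical image
      livenessProbe:               # periodic health check driving self-healing
        httpGet:
          path: /healthz           # assumed health endpoint
          port: 8080
        initialDelaySeconds: 10    # grace period before the first probe
        periodSeconds: 5           # probe every five seconds
        failureThreshold: 3        # restart after three consecutive failures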
Kubernetes ensures uninterrupted service availability by quickly detecting and recovering from failures. By automatically restarting failed containers and rescheduling workloads to healthy nodes, Kubernetes maintains service continuity and ensures that the system remains accessible and responsive to users. Kubernetes minimizes service downtime by automating the recovery process and reducing the time taken to detect and resolve issues. By proactively managing and handling failures, Kubernetes reduces the need for manual intervention, leading to faster recovery and reduced downtime. Kubernetes improves operational efficiency by proactively managing and resolving issues without requiring human intervention. By automatically detecting and handling failures, Kubernetes reduces the burden on operators and allows them to focus on other critical tasks, leading to improved productivity and operational efficiency.
Self-healing and auto-recovery are critical features of Kubernetes that enhance the resilience, availability, and reliability of microservices architectures. By offering built-in mechanisms for detecting, managing, and recovering from failures, Kubernetes ensures continuous service availability, reduces downtime, and improves operational efficiency. Organizations adopting Kubernetes can benefit from improved fault tolerance, reduced service disruptions, and streamlined operations, making it an essential tool for managing modern microservices-based applications effectively.
2.3 Examples
Kubernetes changes the way organizations manage, scale, and optimize microservices architectures, providing powerful tools and capabilities that enhance scalability, resource management, and resiliency. The following three examples describe how Kubernetes is applied in practice.
Netflix, a globally recognized streaming giant, strategically employs Kubernetes to manage its dynamic scaling needs, particularly during peak usage periods. As a platform that caters to millions of viewers worldwide, Netflix faces the challenge of handling massive spikes in user traffic, especially during popular show releases or global events. Kubernetes’ robust auto-scaling capabilities enable Netflix to effectively scale its infrastructure and services in real-time, ensuring a seamless user experience and high-quality streaming for its vast user base.
Spotify, a leading music streaming platform with millions of active users, has integrated Kubernetes into its infrastructure to streamline its resource allocation and management processes. In a dynamic environment where user demands for music streaming can vary significantly, Kubernetes provides Spotify with the flexibility and control needed to allocate resources efficiently across its diverse range of microservices. This proactive approach to resource management enables Spotify to optimize its infrastructure, enhance application performance, maintain stability, and achieve cost-efficiency, ultimately leading to a better user experience for its global audience.
Airbnb, a globally recognized online marketplace for lodging and experiences, has strategically integrated Kubernetes into its microservices architecture to enhance the resilience and reliability of its platform. In a highly competitive marketplace where user trust and satisfaction are paramount, Kubernetes’ self-healing and auto-recovery capabilities enable Airbnb to proactively detect, manage, and recover from failures, ensuring continuous availability and seamless user experiences for its millions of users worldwide. This proactive approach to system management allows Airbnb to maintain a robust and reliable platform, even when faced with unexpected challenges or issues.
3 Conclusion
3.1 Applicability
Kubernetes provides a versatile framework for describing, inspecting, and reasoning about the sharing and utilization of infrastructure resources. Kubernetes boasts the following features.
- Automated container deployment and orchestration: Kubernetes enables users to easily deploy containerized microservices and automatically orchestrates them to run efficiently in the cluster without much manual intervention.
- Dynamic resource allocation: Kubernetes can dynamically adjust resources such as CPU, memory, and storage based on application requirements, maximizing the utilization of cluster resources.
- Auto-scaling: With horizontal and vertical auto-scaling capabilities, Kubernetes can automatically adjust the number of application instances and their resource configurations based on workload conditions to meet changing demands.
- Service discovery and load balancing: Kubernetes provides built-in service discovery and load-balancing mechanisms, simplifying communication and load distribution among microservices (a minimal example follows this list).
- Self-healing capabilities: Containers are automatically restarted upon failure; when a node encounters issues, its containers are redeployed and rescheduled onto healthy nodes; containers that fail health checks are terminated and replaced so that only properly running containers serve traffic.
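To illustrate the service discovery and load-balancing item above, here is a minimal sketch of a Kubernetes Service; the name, label, and port numbers are assumptions for illustration. Any pod in the cluster can reach the service by its DNS name (orders), and the Service spreads traffic across every pod carrying the matching label:

apiVersion: v1
kind: Service
metadata:
  name: orders               # reachable in-cluster via the DNS name "orders"
spec:
  selector:
    app: orders              # load-balance across all pods with this label
  ports:
    - port: 80               # port the service exposes
      targetPort: 8080       # port the container listens on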
These features enable Kubernetes to better manage and schedule containerized applications, addressing common resource management and allocation issues in a microservices architecture. Through Kubernetes, developers and operations teams can more easily deploy, manage, and scale large-scale microservices applications, thereby accelerating development cycles and improving system stability and scalability. Therefore, Kubernetes has wide applicability in microservices architecture. This section summarizes the applicability of Kubernetes in microservices architecture in terms of scope, deployment, resource management, and reliability.
3.1.1 Scope of Applicability
Microservices architectures vary in scale and complexity, and Kubernetes offers a flexible and scalable solution suitable for various scales and types of microservices deployments. From small startup companies to large enterprise applications, Kubernetes can meet the deployment needs of different scales and requirements.
3.1.2 Deployment and Management
Kubernetes simplifies the deployment and management process of microservices applications. Through Kubernetes’ orchestration capabilities, developers can easily define application deployment specifications and utilize Kubernetes’ automation features to achieve rapid deployment and updates. Additionally, Kubernetes provides flexible configuration options, allowing developers to customize configurations according to the application’s needs, thereby better managing the lifecycle of microservices applications.
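As a brief, hedged illustration of this workflow, the standard kubectl commands below deploy a manifest, watch the rollout, perform a rolling update, and revert it; the file and image names are hypothetical:

kubectl apply -f orders-deployment.yaml                          # create or update the declared state
kubectl rollout status deployment/orders                         # wait until the rollout completes
kubectl set image deployment/orders orders=example/orders:1.1   # rolling update to a new image
kubectl rollout undo deployment/orders                           # revert if the update misbehaves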
3.1.3 Resource Management
Kubernetes offers powerful resource management capabilities that help optimize resource utilization for microservices applications. With Kubernetes’ resource quota and limit features, developers can define resource limits for each microservice, ensuring fair allocation and isolation of resources between different microservices. Furthermore, Kubernetes supports various resource scheduling strategies, dynamically adjusting resource allocation based on application demands and priorities, thus maximizing the utilization of cluster resources.
3.1.4 Reliability
Kubernetes enhances the reliability of microservices architecture through automated health checks, fault recovery, and load-balancing functionalities. When a microservice fails, Kubernetes automatically reschedules it to healthy nodes, ensuring the high availability of the application. Additionally, Kubernetes supports horizontal scaling, dynamically adjusting resource allocation based on workload conditions to ensure application performance and stability.
As a robust container orchestration engine, Kubernetes has broad applicability in microservices architecture. It simplifies deployment and management, optimizes resource utilization and performance, and enhances application reliability.
3.2 Pros and Cons
3.2.1 Pros
Kubernetes offers the following benefits for microservices:
- High Availability and Fault Tolerance: Kubernetes (K8s) ensures the high availability and fault tolerance of microservices. By running replica sets across multiple nodes, K8s ensures continuous service availability; in the event of a node failure, K8s automatically migrates workloads to other available nodes without manual intervention.
- Elastic Scaling: K8s dynamically scales microservices based on the actual workload. When the load increases, K8s automatically creates new replicas to handle the additional requests; when the load decreases, K8s automatically reduces the number of replicas, saving resources.
- Easy Management and Monitoring: K8s provides a centralized management and monitoring platform, making it convenient to manage and monitor the runtime status of microservices. The K8s Dashboard can be used to view microservice logs, metrics, and health status.
3.2.2 Cons
Kubernetes also has some shortcomings:
- Steep Learning Curve: Using Kubernetes requires a significant learning investment; teams need time and effort to understand its concepts and operational procedures.
- Complex Management: Kubernetes offers numerous features and configuration options, which can result in complex configuration files and makes managing and maintaining Kubernetes clusters more difficult.
- Increased Resource Consumption: Kubernetes itself requires computing resources to operate and manage, including node, network, and storage resources, which may introduce additional cost and performance overhead.
Overall, Kubernetes is a complex technology that takes substantial effort to learn and manage. Where possible, enterprises may consider using managed Kubernetes services provided by cloud providers, which reduces the burden of maintaining and managing Kubernetes clusters while still providing flexible automation.
4 Project Management
4.1 Gantt Chart
The Gantt chart is a popular project management tool often used to plan and display a project's schedule. In preparation for the group task, we developed a first draft of the Gantt chart to help us plan every phase and task of the project. The Gantt chart helped us track progress throughout the project and make reasonable adjustments when tasks were delayed. At the same time, the milestones in the Gantt chart record the points at which significant achievements and results were reached, and the display of task progress helped team members work together and communicate effectively. Overall, the Gantt chart was very important for this group work.
The Gantt chart created for this project is shown below, together with its detailed task information. The chart was created with the online tool available at https://www.onlinegantt.com/#/gantt.
Overall Gantt Chart
Gantt Chart Task Information
4.2 Finalized Timesheet
4.3 Retrospective View
4.3.1 Initial Planning
Our initial plan changed direction several times and eventually centered on a comprehensive study of microservices architecture and Kubernetes orchestration. Our goal was to conduct the research in phases, concentrating first on the basic concepts, principles, and application scenarios of the two technologies. We intended to build our knowledge through literature reviews, case studies, and online tutorials. After familiarizing ourselves with the basic concepts, we planned to delve into the relationship between microservices architecture and Kubernetes orchestration through group discussions and brainstorming sessions, analyzing their impact on scalability and resource management and exploring the challenges that can be encountered in a microservices environment, such as deployment, monitoring, and resource allocation.
To address the complexity inherent in microservices management, we proposed a clear division of labor supported by regular Agile meetings for comprehensive research and planning. We planned to study the operational mechanisms of microservices and to find extensive literature support for developing effective management strategies. These studies provided valuable insights into service decomposition, communication mechanisms, and data management in microservices architectures, which helped us select appropriate architectural patterns. Our plan also included comprehensive documentation and knowledge sharing throughout the project: we aimed to record key insights, lessons learned, and solution approaches to facilitate knowledge transfer within and beyond the team.
4.3.2 Milestones
The following are milestones of the project, all of which have been incorporated into the Gantt chart and completed on schedule.
- Determine the topic: Determine the topic and scope of the report.
- Establish project management: Plan project management, including personnel allocation, scheduling, Gantt chart creation, and progress tracking.
- Complete the first draft: Complete the initial draft of the report, including outlining and content composition.
- Complete the final draft: Complete the final version of the report, ensuring integrity of content, clarity of structure, and fluency of language.
- Record the presentation video: Record the presentation video showcasing the research findings of the report.
4.3.3 Challenges and Solutions
Complexity of microservices management: The main challenge encountered in the project was the complexity inherent in managing a microservices architecture. As we delved deeper into microservices, we realized that orchestrating these small, standalone services effectively requires careful planning and robust solutions; the decentralized nature of microservices creates challenges in deployment, monitoring, and resource allocation.
Learning curve for Kubernetes: The learning curve for K8s was another major challenge. While K8s provides powerful container orchestration capabilities, mastering its complexity takes time and effort, and understanding concepts such as Pod, Deployment, Service, and Ingress Controller is critical to managing microservices with Kubernetes effectively.
Our solutions were to leverage the existing literature to gain a deeper understanding of service decomposition, communication mechanisms, and data management, as well as of the principles and advanced features of K8s; to explore different microservices architectural patterns and select the most appropriate one for each complexity challenge; and to document key insights and lessons learned, facilitating the transfer of knowledge within the team and ensuring continuous improvement.
4.3.4 Lessons Learned and Conclusion
A thorough analysis of Kubernetes orchestration reveals its profound impact on scalability and resource management in microservices architectures. Collaborating as a team allowed us to explore the complexity of Kubernetes orchestration and its interaction with microservices architectures in greater depth, and teamwork helped us uncover rich insights and perspectives that enriched our understanding of the topic.
In future work, we can delve further into the many features and capabilities of Kubernetes orchestration. By exploring advanced features such as network policies, security mechanisms, and advanced scheduling techniques, we can discover more opportunities to optimize scalability and resource management in microservices architectures. There is also an opportunity to integrate the latest research results with advanced Kubernetes orchestration techniques, which can deepen our understanding and implementation of best practices. Since Kubernetes orchestration has already achieved significant results in practical applications, a more robust performance evaluation and benchmarking process, rigorously evaluating Kubernetes under different scenarios and workloads, will be a focus of future projects.
5 References
[1] Ding, Z., Wang, S. and Jiang, C. (2023) 'Kubernetes-Oriented Microservice Placement With Dynamic Resource Allocation', IEEE Transactions on Cloud Computing, vol. 11, no. 2, pp. 1777-1793. https://doi.org/10.1109/TCC.2022.3161900
[2] Kubernetes (2019) 'Kubernetes', Kubernetes.
[3] Abdollahi Vayghan, L., Saied, M. A., Toeroe, M. and Khendek, F. (2018) 'Deploying Microservice Based Applications with Kubernetes: Experiments and Lessons Learned', 2018 IEEE 11th International Conference on Cloud Computing (CLOUD), pp. 970-973. https://doi.org/10.1109/CLOUD.2018.00148
[4] Vural, H., Koyuncu, M. and Guney, S. (2017) 'A Systematic Literature Review on Microservices', Computational Science and Its Applications – ICCSA 2017, Lecture Notes in Computer Science, vol. 10409. https://doi.org/10.1007/978-3-319-62407-5_14
[5] Thönes, J. (2015) 'Microservices', IEEE Software, vol. 32, no. 1, pp. 116-116. https://doi.org/10.1109/MS.2015.11
[6] Burns, B. and Oppenheimer, D. (2016) 'Design Patterns for Container-Based Distributed Systems', 8th USENIX Workshop on Hot Topics in Cloud Computing (HotCloud 16).
[7] 5 Minutes or Less (2022, December 18) Microservices Explained in 5 Minutes [Video], YouTube. https://www.youtube.com/watch?v=lL_j7ilk7rc
[8] YourTechBud Codes (2021, October 22) Why do your Microservices need Kubernetes?! [Video], YouTube. https://www.youtube.com/watch?v=ikbmGKHrkGc
[9] PowerCert Animated Videos (2022, December 28) Virtual Machines vs Containers [Video], YouTube. https://www.youtube.com/watch?v=eyNBf1sqdBQ
[10] ByteByteGo (2023, January 11) Kubernetes Explained in 6 Minutes | k8s Architecture [Video], YouTube. https://www.youtube.com/watch?v=TlHvYWVUZyc
[11] UK DEVOPS GURU (2023, January 15) Part 1 - Deploying Microservices to Kubernetes Cluster [Video], YouTube. https://www.youtube.com/watch?v=_TvcFSwUv84
[12] Kubernetes Documentation (n.d.) Autoscaling. https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
[13] Kubernetes Documentation (n.d.) Managing Compute Resources for Containers. https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
[14] Kubernetes Documentation (n.d.) Pod Lifecycle. https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/
[15] Vayghan, L. A., Saied, M. A., Toeroe, M. and Khendek, F. (2021) 'A Kubernetes controller for managing the availability of elastic microservice based stateful applications', Journal of Systems and Software, vol. 175, 110924. https://doi.org/10.1016/j.jss.2021.110924