Cloud Native Architecture and the Characteristics of Cloud Native Applications (Part 1)


A school kid called a cloud computing company. The company executive asked him the reason for contacting them. The kid said, "I want to hire your services." The executive was excited but also perplexed as to what services they could possibly offer a kid. The kid coolly replied, "I want Homework-as-a-Service."
Let's start with this cloud native architecture blog to build a strong cloud native app!

1. What is Cloud Native Architecture?

Today, every IT resource or product is offered as a service. As such, cloud native software development has become a key requirement for every business, regardless of its size and nature. Before jumping onto the cloud bandwagon, it is important to understand what cloud native architecture is and how to design the right architecture for your cloud native app needs.

Cloud native architecture is an innovative software development approach that is specially designed to fully leverage the cloud computing model. It enables organizations to build applications as loosely coupled services using microservices architecture and run them on dynamically orchestrated platforms. Applications built on the cloud native application architecture are reliable, deliver scale and performance and offer faster time to market. 

The traditional software development environment relied on a so-called "waterfall" model powered by a monolithic architecture, wherein software was developed in sequential order:

  1. The designers prepare the product design along with related documents.
  2. Developers write the code and send it to the testing department.
  3. The testing team runs different types of tests to identify errors as well as gauge the performance of the software. 
  4. When errors are found, the code is sent back to the developers. 
  5. Once the code successfully passes all the tests, it is deployed to a pre-production environment and then to the live environment.

If you have to update the code or add/remove a feature, you have to go through the entire process again. When multiple teams work on the same project, coordinating code changes with each other is a big challenge. It also limits them to using a single programming language. Moreover, deploying a large software project requires a huge infrastructure setup along with an extensive functional testing mechanism. The entire process is inefficient and time-consuming.

Microservices architecture was introduced to resolve most of these challenges. It is a service-oriented architecture wherein applications are built as loosely coupled, independent services that communicate with each other via APIs. It enables developers to work independently on different services and to use different languages. With a central repository that acts as a version control system, organizations can work on different parts of the code simultaneously and update specific features without disturbing the software or causing any downtime to the application. When automation is implemented, businesses can easily and frequently make high-impact changes with minimal effort.
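
To make the idea concrete, here is a minimal sketch of two loosely coupled services talking over an HTTP API, using only the Python standard library. The service names, the port and the endpoint path are illustrative assumptions, not part of any specific product.

```python
# A minimal sketch of two loosely coupled services communicating over an HTTP API.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class InventoryService(BaseHTTPRequestHandler):
    """Hypothetical inventory microservice exposing a read-only JSON API."""
    def do_GET(self):
        if self.path == "/stock/sku-42":
            body = json.dumps({"sku": "sku-42", "in_stock": 17}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

def order_service_checks_stock():
    """A second service that only knows the inventory API contract, not its internals."""
    with urllib.request.urlopen("http://localhost:8081/stock/sku-42") as resp:
        return json.load(resp)

if __name__ == "__main__":
    server = HTTPServer(("localhost", 8081), InventoryService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    print(order_service_checks_stock())   # {'sku': 'sku-42', 'in_stock': 17}
    server.shutdown()
```

Because the "orders" side depends only on the HTTP contract, the "inventory" service could be rewritten in another language or scaled independently without the caller noticing.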

A cloud native app augmented by a microservices architecture leverages the highly scalable, flexible and distributed nature of the cloud to produce customer-centric software products in a continuous delivery environment. The striking feature of cloud native architecture is that it lets you abstract all the layers of the infrastructure, such as databases, networks, servers, OS, security etc., enabling you to automate and manage each layer independently using scripts. At the same time, you can instantly spin up the required infrastructure using code. As such, developers can focus on adding features to the software and orchestrating the infrastructure instead of worrying about the platform, OS or the runtime environment.

2. Benefits of a cloud native architecture

There are plenty of benefits offered by cloud native architecture. Here are some of them:

Accelerated Software Development Lifecycle (SDLC)

A cloud native application complements a DevOps-based continuous delivery environment with automation embedded across the product lifecycle, bringing speed and quality to the table. Cross-functional teams comprising members from design, development, testing, operations and business collaborate seamlessly right through the SDLC. With automated CI/CD pipelines on the development side and IaC-based infrastructure on the operations side working in tandem, there is better control over the entire process, which makes the whole system quick, efficient and far less error-prone. Transparency is maintained across the environment as well. All these elements significantly accelerate the software development lifecycle.

A software development lifecycle (SDLC) refers to various phases involved in the development of a software product. A typical SDLC comprises 7 different phases.

  1. Requirements Gathering / Planning Phase: Gathering information about current problems, business requirements, customer requests, etc.
  2. Analysis Phase: Defining prototype system requirements, researching existing prototypes, analyzing customer requirements against proposed prototypes, etc.
  3. Design Phase: Preparing the product design, software requirement specification (SRS) documents, coding guidelines, technology stack, frameworks, etc.
  4. Development Phase: Writing code to build the product as per the specification and guideline documents.
  5. Testing Phase: The code is tested for errors/bugs and the quality is assessed against the SRS document.
  6. Deployment Phase: Infrastructure provisioning and software deployment to the production environment.
  7. Operations and Maintenance Phase: Product maintenance, handling customer issues, monitoring performance against metrics, etc.

Faster Time to Market

Speed and quality of service are two important requirements in today's rapidly evolving IT world. Cloud native application architecture augmented by DevOps practices helps you easily build and automate continuous delivery pipelines to ship software faster and better. IaC tools make it possible to automate infrastructure provisioning on demand while allowing you to scale or tear down infrastructure on the go. With simplified IT management and better control over the entire product lifecycle, the SDLC is significantly accelerated, enabling organizations to gain faster time to market. DevOps focuses on a customer-centric approach, where teams are responsible for the entire product lifecycle. Consequently, updates and subsequent releases become faster and better as well. Reduced development time, overproduction, overengineering and technical debt lower the overall development costs, and improved productivity translates into increased revenue.

High Availability and Resilience

Modern IT systems have no place for downtime. If your product suffers frequent downtime, you are out of business. By combining a cloud native architecture with microservices and Kubernetes, you can build resilient, fault-tolerant systems that are self-healing. When a component fails, the application remains available: the faulty system is isolated and replacements are spun up automatically. As a result, you achieve higher availability, better uptime and an improved customer experience.
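
As a hedged illustration of how self-healing is wired up in practice, the sketch below exposes a simple HTTP health endpoint that a Kubernetes liveness probe could poll; when the endpoint starts failing, Kubernetes restarts the container. The port and the /healthz path are assumptions chosen for the example.

```python
# A minimal sketch of a health endpoint for a Kubernetes liveness probe to poll.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            # Return 200 while the process is healthy; Kubernetes restarts the
            # container automatically once the probe starts failing.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```

In the Deployment manifest you would point a livenessProbe's httpGet at this path so the kubelet can replace unhealthy replicas automatically.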

Low costs

The cloud native application architecture comes with a pay-per-use model, meaning that organizations pay only for the resources used while hugely benefiting from economies of scale. As CapEx turns into OpEx, businesses can redirect their upfront investments toward development resources. On the OpEx side, the cloud native environment takes advantage of containerization, typically managed by open-source software such as Kubernetes. There are other cloud native tools available in the market to efficiently manage the system. With serverless architectures, standardized infrastructure and open-source tools, operating costs come down as well, resulting in a lower TCO.

Turns your Apps into APIs

Today, businesses are required to deliver customer-engaging apps. Cloud native environments enable you to connect massive enterprise data with front-end apps using API-based integration. Since every IT resource in the cloud is exposed through an API, your application effectively becomes an API as well. This not only delivers an engaging customer experience but also lets you reuse your legacy infrastructure, extending it into the web and mobile era for your cloud native app.

3. Cloud Native Architecture Patterns

Owing to the popularity of cloud native application architecture, several organizations have come up with design patterns and best practices to facilitate smoother operation. Here are the key cloud native architecture patterns:

Pay-as-you-Go

In cloud architecture, resources are centrally hosted and delivered over the internet via a pay-per-use or pay-as-you-go model, and customers are charged based on resource usage. This means you can scale resources as and when required, optimizing usage to the core. It also gives you flexibility and a choice of services with various payment options. For instance, a serverless architecture provisions resources only when the code is executed, which means you only pay when your application is in use.
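
For illustration, here is a minimal sketch of a Python AWS Lambda handler in the API Gateway proxy style: the function runs, and is billed, only when an event invokes it. The greeting logic and the query parameter name are assumptions made for the example.

```python
# A minimal sketch of a pay-per-execution serverless function (AWS Lambda).
import json

def lambda_handler(event, context):
    # API Gateway proxy events carry query parameters here; default to "world".
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```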

Self-service Infrastructure

Infrastructure as a Service (IaaS) is a key attribute of a cloud native application architecture. Whether you deploy apps in an elastic, virtual or shared environment, your apps automatically realign to the underlying infrastructure, scaling up and down to suit changing workloads. It means you don't have to request servers, load balancers or a central management system and wait for approval before creating, testing or deploying IT resources. This reduces waiting time and simplifies IT management.
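
A minimal sketch of what self-service provisioning can look like with the AWS SDK for Python (boto3): the team launches capacity through an API call instead of filing a ticket. The region, AMI ID, instance type and tag values are placeholders, not recommendations.

```python
# A minimal sketch of self-service infrastructure provisioning with boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "demo-self-service"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```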

Managed Services

Cloud architecture allows you to fully leverage cloud managed services to efficiently run the cloud infrastructure, from migration and configuration to management and maintenance, while optimizing time and costs to the core. Since each service has an independent lifecycle, managing it as an agile DevOps process is easy. You can work with multiple CI/CD pipelines simultaneously and manage them independently.

For instance, AWS Fargate is a serverless compute engine that lets you build apps without having to manage servers, on a pay-per-usage model. AWS Lambda is another tool for the same purpose. Amazon RDS enables you to build, scale and manage relational databases in the cloud. Amazon Cognito is a powerful tool that helps you securely manage user authentication, authorization and user management across your cloud apps. With the help of these tools, you can easily set up and manage a cloud development environment with minimal cost and effort.
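
As a sketch of consuming a managed service programmatically, the call below asks Amazon RDS to create a small PostgreSQL instance via boto3, with AWS handling the underlying servers, patching and backups. The identifier, sizing and credentials are placeholder assumptions; in practice you would pull the password from a secrets manager.

```python
# A minimal sketch of creating a managed database with Amazon RDS via boto3.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="demo-postgres",   # placeholder identifier
    Engine="postgres",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,                    # GiB
    MasterUsername="demo_admin",
    MasterUserPassword="change-me-please",  # placeholder; use a secrets manager in practice
)
```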

Globally Distributed Architecture

Globally distributed architecture is another key component of cloud native architecture that allows you to install and manage software across the infrastructure. It is a network of independent components installed at different locations that exchange messages to work toward a single goal. Distributed systems enable organizations to massively scale resources while giving end users the impression that they are working on a single machine. In such systems, resources like data, software or hardware are shared, and a single function can run simultaneously on multiple machines. These systems offer fault tolerance, transparency and high scalability. While client-server architecture was common earlier, modern distributed systems use multi-tier, three-tier or peer-to-peer network architectures.

Distributed systems offer virtually unlimited horizontal scaling, fault tolerance and low latency. On the downside, they need intelligent monitoring, data integration and data synchronization, and avoiding network and communication failures is a challenge. The cloud vendor takes care of governance, security, engineering, evolution and lifecycle control, which means you don't have to worry about updates, patches and compatibility issues in your cloud native app.

Resource Optimization

In a traditional data center, organizations have to purchase and install the entire infrastructure upfront. During peak seasons, the organization has to invest more in infrastructure; once the peak season is over, the newly purchased resources lie idle, wasting money. With a cloud architecture, you can instantly spin up resources whenever needed and terminate them after use, and you pay only for the resources used. It also gives your development teams the luxury of experimenting with new ideas, as they don't have to acquire permanent resources.

Autoscaling

Autoscaling is a powerful feature of cloud native architecture that lets you automatically adjust resources to keep applications running at optimal levels. The good thing about autoscaling is that you can abstract each scalable layer and scale specific resources. There are two ways to scale resources: vertical scaling increases the configuration of the machine to handle increasing traffic, while horizontal scaling adds more machines to scale out resources. Vertical scaling is limited by machine capacity, whereas horizontal scaling offers virtually unlimited resources.

For instance, AWS offers horizontal auto scaling out of the box. Be it Elastic Compute Cloud (EC2) instances, DynamoDB indexes, Elastic Container Service (ECS) containers or Aurora clusters, AWS Auto Scaling monitors and adjusts resources based on a unified scaling policy that you define for each application. You can define scaling priorities such as cost optimization or high availability, or balance the two. The AWS Auto Scaling feature itself is free, but you pay for the resources that are scaled out.
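
To ground this, here is a hedged boto3 sketch that registers a DynamoDB table's read capacity with AWS Application Auto Scaling and attaches a target-tracking policy. The table name, capacity bounds and the 70% utilization target are assumptions for the example.

```python
# A minimal sketch of target-tracking autoscaling for a DynamoDB table via boto3.
import boto3

autoscaling = boto3.client("application-autoscaling", region_name="us-east-1")

# Register the table's read capacity as a scalable target (bounds are assumptions).
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/demo-table",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=100,
)

# Attach a policy that keeps read-capacity utilization around 70%.
autoscaling.put_scaling_policy(
    PolicyName="demo-read-target-tracking",
    ServiceNamespace="dynamodb",
    ResourceId="table/demo-table",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```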

12-Factor Methodology

To facilitate seamless collaboration between developers working on the same app, manage the app's organic growth over time and minimize software erosion costs, developers at Heroku came up with the 12-factor methodology, which helps organizations easily build and deploy apps in a cloud native application architecture. Its key takeaways are that an application should use a single codebase for all deployments and should declare and isolate all of its dependencies. Configuration should be separated from the app code. Processes should be stateless so that you can run, scale and terminate them separately. Similarly, you should build automated CI/CD pipelines and manage the build, release and run stages individually. Another key recommendation is that apps should be disposable so that you can start, stop and scale each resource independently. The 12-factor methodology is a perfect fit for cloud architecture.

Here are the 12 building blocks for cloud-based apps:

  1. Codebase: Maintain a single codebase for each application that can be used to deploy multiple instances/versions of the same app, tracked with a central version control system such as Git.
  2. Dependencies: As a best practice, define all the dependencies of the app, isolate them and package them within the app. Containerization helps here.
  3. Configurations: Though the same code is deployed across multiple environments, configuration varies with the environment. It is therefore recommended to separate configuration from code and store it in environment variables (see the sketch after this list).
  4. Backing Services: Treat a backing service such as a database as an attached resource and define it in the configuration, so you can replace it with a similar service simply by changing the configuration details.
  5. Build, Release, Run: Build, release and run are three distinct stages of delivering a software product. The 12-factor methodology recommends keeping them strictly separated so as to avoid code breaks.
  6. Processes: Run the app as a collection of stateless processes so that scaling becomes easy and unintended side effects are eliminated. A process should not need to know the state of any other process.
  7. Port Binding: Contrary to traditional web applications that are a collection of servlets and carry runtime dependencies, 12-factor apps are self-contained and listen on a port to make their services available to other apps, e.g. port 80 for web servers, port 22 for SSH, port 27017 for MongoDB, port 443 for HTTPS.
  8. Concurrency: By running multiple instances simultaneously, you can scale applications manually as well as automatically based on predefined values. As dependencies are isolated in containers, apps can run side by side on a single host without causing any issues.
  9. Disposability: When an app built on a cloud native application architecture goes down, it should gracefully dispose of broken resources and instantly replace them, ensuring fast startup and shutdown. Being completely disposable gives you the flexibility to start, stop or modify apps on the go.
  10. Dev/Prod Parity: For applications to deliver consistent performance across platforms, minimize the differences between development and production environments. Automated CI/CD pipelines, version control, backing services and containerization help in this regard.
  11. Logs: For better debugging, apps should emit logs as event streams without worrying about where they are stored; log storage should be decoupled from the app. Segregating and compiling these logs is the job of the execution environment.
  12. Admin Processes: One-off tasks such as fixing bad records or migrating databases are also part of the release. It is recommended to keep these tasks in the same codebase.
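
As a small illustration of principle 3 (Configurations), the sketch below reads all environment-specific settings from environment variables so the same code can ship unchanged to every environment. The variable names and defaults are assumptions made for the example.

```python
# A minimal sketch of 12-factor configuration: settings come from the environment.
import os

DATABASE_URL = os.environ["DATABASE_URL"]          # differs per environment; fail fast if unset
CACHE_TTL_SECONDS = int(os.environ.get("CACHE_TTL_SECONDS", "300"))
FEATURE_NEW_CHECKOUT = os.environ.get("FEATURE_NEW_CHECKOUT", "false") == "true"

def describe_config():
    # No environment name is hard-coded anywhere in the codebase.
    return {
        "database_url": DATABASE_URL,
        "cache_ttl_seconds": CACHE_TTL_SECONDS,
        "feature_new_checkout": FEATURE_NEW_CHECKOUT,
    }
```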

Automation and Infrastructure as Code (IaC)

With containers running on a microservices architecture and powered by modern system design, organizations achieve speed and agility in their business processes. To extend this to production environments, businesses are now implementing Infrastructure as Code (IaC). By applying software engineering practices to automate resource provisioning, organizations can manage the infrastructure via configuration files. By testing and versioning these configurations, you can automate deployments and keep the infrastructure in its desired state. When resource allocation needs to change, you simply define it in the configuration file and apply it to the infrastructure automatically. IaC brings disposable systems into the picture, in which you can instantly create, manage and destroy production environments while automating every task. It brings speed, resilience, consistency and accountability while optimizing costs.

The cloud design highly favors automation. You can automate infrastructure management using Terraform or CloudFormation, CI/CD pipelines using Jenkins or GitLab, and autoscale resources with AWS built-in features. A cloud native architecture also enables you to build cloud-agnostic apps that can be deployed to any cloud provider's platform. Terraform is a powerful tool that helps you create templates in the HashiCorp Configuration Language (HCL) for automatic provisioning of apps on popular cloud platforms such as AWS, Azure, GCP etc. CloudFormation is a popular AWS service for automating the configuration of workloads running on AWS services. It allows you to easily automate the setup and deployment of various IaaS offerings on AWS. If you use multiple AWS services, automating your infrastructure becomes easy with CloudFormation.
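
As a hedged example of driving IaC from code, the snippet below uses boto3 to create a CloudFormation stack from a tiny inline template; the same call can be rerun from a pipeline to converge on the declared state. The stack name, bucket name and region are placeholders.

```python
# A minimal sketch of provisioning infrastructure as code via CloudFormation and boto3.
import json
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# The desired infrastructure is declared as data, not clicked together by hand.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "DemoBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "demo-iac-bucket-example-123"},  # placeholder, must be globally unique
        }
    },
}

cfn.create_stack(
    StackName="demo-iac-stack",
    TemplateBody=json.dumps(template),
)
```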

Automated Recovery

Today, customers expect your applications to always be available. To ensure high availability of all your resources, it is important to have a disaster recovery plan in hand for all services, data resources and infrastructure. Cloud architecture allows you to incorporate resilience into the apps right from the beginning. You can design applications that are self-healing and can recover data, source code repository and resources instantly. 

For instance, IaC tools such as Terraform or CloudFormation allow you to automate the provisioning of the underlying infrastructure in case the system crashes. From provisioning EC2 instances and VPCs to admin and security policies, you can automate all phases of the disaster recovery workflow. They also help you instantly roll back changes made to the infrastructure or recreate instances whenever needed. Similarly, you can roll back changes made to CI/CD pipelines using CI automation servers such as Jenkins or GitLab. It means that disaster recovery is quick and cost-effective. 

Immutable Infrastructure

Immutable infrastructure, or immutable code deployment, is the concept of deploying servers in such a way that they cannot be edited or changed. If a change is required, the server is destroyed and a new server instance is deployed in its place from a common image repository. No deployment depends on a previous one, and there is no configuration drift. As every deployment is time-stamped and versioned, you can roll back to an earlier version if needed.

Immutable infrastructure enables administrators to replace problematic servers easily without disturbing the application. In addition, it makes deployments predictable, simple and consistent across all environments. It also makes testing straightforward. Auto Scaling becomes easy too. Overall, it improves the reliability, consistency and efficiency of deployed environments. Docker, Kubernetes, Terraform and Spinnaker are some of the popular tools that help with immutable infrastructure. Furthermore, implementing the 12-factor methodology principles can also help to maintain an immutable infrastructure.
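
To illustrate the replace-don't-patch idea, here is a minimal boto3 sketch that rolls a server forward by launching a replacement from a newly baked, versioned image and only then terminating the old instance. The AMI ID, instance type and the bare-bones orchestration are assumptions; real setups usually let an Auto Scaling group or a tool like Spinnaker perform this rotation.

```python
# A minimal sketch of an immutable-deployment step: replace the server, never patch it.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def replace_instance(old_instance_id: str, new_image_id: str) -> str:
    # Launch the replacement from the new, versioned image (placeholder AMI).
    new_id = ec2.run_instances(
        ImageId=new_image_id,
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )["Instances"][0]["InstanceId"]

    # Only then retire the old instance; it is never modified in place.
    ec2.terminate_instances(InstanceIds=[old_instance_id])
    return new_id
```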

Source: William from clickittech.com
