Amazon Web Services (AWS) is a widely used cloud platform that provides an extensive range of infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS) offerings. AWS services enable scalable, flexible, and cost-effective access to computing resources and infrastructure.
Amazon Elastic Kubernetes Service (EKS) is a popular AWS service that helps organizations simplify the management and orchestration of Kubernetes-based containerized applications. EKS runs upstream Kubernetes and is certified Kubernetes-conformant, so workloads on EKS remain compatible with standard Kubernetes deployments and with cloud-native tooling.
In this first article of a three-part series, we’ll learn about the EKS service, its different components, benefits, and use cases.
Amazon EKS simplifies the deployment, management, and scaling of Kubernetes applications both on-premises and in the AWS cloud. An EKS cluster spans two Amazon Virtual Private Clouds (VPCs): an AWS-managed VPC that hosts the control plane and a customer VPC that hosts the worker nodes. The EKS service relies on AWS infrastructure, such as load balancers, to provide resources and functionality to the cluster.
As a containers-as-a-service (CaaS) offering, EKS automatically creates and scales the control plane and worker nodes, enabling managed cluster installation, operation, and maintenance.
Deploying a Kubernetes workload in an Amazon EKS cluster typically goes through the following workflow:
1. Create the EKS cluster, which provisions the managed control plane.
2. Provision compute for the cluster, either as a managed node group of EC2 instances or as a Fargate profile.
3. Update the local kubeconfig so that kubectl can reach the cluster's API endpoint.
4. Deploy and manage the containerized application with kubectl or other Kubernetes-native tooling.
The command for creating an EKS cluster using the AWS CLI tool would be similar to this:
aws eks create-cluster --region <aws-region-code> \
  --name <my-cluster-name> \
  --kubernetes-version <any-k8s-version-supported-by-eks> \
  --role-arn arn:aws:iam::<aws-account-id>:role/<IAM-role> \
  --resources-vpc-config subnetIds=<subID1>,<subID2>
EKS offers four cluster deployment options: Amazon EKS in the AWS cloud, Amazon EKS on AWS Outposts, Amazon EKS Anywhere, and Amazon EKS Distro.
A typical Kubernetes ecosystem relies on a complex framework of components that together enable seamless orchestration of containers. Some of the primary components of an EKS cluster include:
Amazon Virtual Private Cloud (VPC) allows EKS clusters to run production-grade workloads within a virtual network. A VPC provides secure networking for workloads and gives an organization the flexibility to apply its own network ACLs and VPC security groups.
A VPC also helps isolate applications by restricting AWS resource sharing between different VPC networks, allowing organizations to develop stable, highly available, and secure applications.
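As one illustration of this network-level control, the cluster's Kubernetes API endpoint can be restricted so that it is reachable only from within the VPC; a minimal sketch, assuming the placeholder names used in the create-cluster command above:

# Keep the API server reachable only over private VPC networking.
aws eks update-cluster-config --region <aws-region-code> \
  --name <my-cluster-name> \
  --resources-vpc-config endpointPrivateAccess=true,endpointPublicAccess=false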
EKS worker nodes are the AWS compute instances on which all workloads are deployed. Every worker node registers with the control plane of its EKS cluster. Developers and administrators typically interact with worker nodes to manage Kubernetes workloads and perform activities such as deploying containerized applications or autoscaling them.
Worker nodes run either as an EC2 managed node group or as a Fargate profile, both of which provide seamless, automated scalability. Managed node groups automate the provisioning of EC2 instances, while Fargate profiles enable the deployment of containerized workloads in a serverless environment.
Fargate eliminates the overhead of managing worker nodes by providing a serverless environment for container orchestration. When EKS deploys workloads on Fargate, a Fargate profile lets administrators specify which pods run on Fargate.
Administrators declare those pods using the profile's selectors. Each profile can have a maximum of five selectors, and each selector must specify a namespace and may add optional labels.
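A Fargate profile can be created with the AWS CLI; the sketch below is illustrative, and the cluster name, pod execution role, subnets, and label are all placeholder assumptions:

# Sketch: a Fargate profile whose selector matches pods in the "default"
# namespace carrying the label app=web.
aws eks create-fargate-profile \
  --cluster-name <my-cluster-name> \
  --fargate-profile-name <my-fargate-profile> \
  --pod-execution-role-arn arn:aws:iam::<aws-account-id>:role/<pod-execution-role> \
  --subnets <private-subID1> <private-subID2> \
  --selectors 'namespace=default,labels={app=web}'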
The collection of master nodes in EKS, known as the control plane, is responsible for managing and monitoring new and existing worker nodes within the EKS cluster. The control plane drives critical container orchestration activities, such as scheduling containers or pods, monitoring pod availability, and holding cluster data.
The control plane runs within a VPC managed by AWS; it is unique to the cluster and cannot be shared between clusters. A typical control plane runs on at least three master nodes distributed across different AWS Availability Zones (AZs). These nodes host the Kubernetes orchestration and management components, including etcd, the API server, the scheduler, and the controller manager.
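Because the control plane is fully managed by AWS, administrators typically observe it rather than administer it directly. For instance, the AWS CLI can report the cluster's status, version, and API endpoint; a minimal sketch, assuming the placeholder cluster name used earlier:

# Query the managed control plane's status, endpoint, and Kubernetes version.
aws eks describe-cluster --name <my-cluster-name> \
  --query "cluster.{status:status,endpoint:endpoint,version:version}"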
Key features of Amazon EKS include:
Control plane logging: EKS can deliver control plane logs to Amazon CloudWatch Logs. Various log types include API server, audit, authenticator, controller manager, and scheduler logs.
Multiple launch options: clusters can be created through the AWS Management Console, the eksctl CLI tool, and IaC solutions such as AWS CloudFormation or Terraform.

As per a 2021 study, Kubernetes sits at the core of the IT strategy for the majority of the organizations surveyed. EKS simplifies the end-to-end processes of managing, operating, and maintaining Kubernetes workloads by abstracting away the overhead of setting up Kubernetes clusters. The platform draws on the extensive AWS ecosystem of services and infrastructure to offer resources for highly available, scalable workloads.
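As an illustration of the control plane logging feature noted above, logging can be turned on per cluster with the AWS CLI; the sketch below reuses the placeholder names from earlier and enables all five log types for delivery to Amazon CloudWatch Logs:

# Enable all five control plane log types for delivery to CloudWatch Logs.
aws eks update-cluster-config --region <aws-region-code> \
  --name <my-cluster-name> \
  --logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}'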
The advantages of using EKS for containerized workloads include:
EKS seamlessly integrates Kubernetes RBAC with IAM, allowing administrators to directly assign cluster RBAC roles to IAM entities. Each EKS cluster leverages an OpenID Connect (OIDC) issuer URL that uniquely identifies the cluster. The OIDC provider allows cluster administrators to seamlessly use IAM roles for service accounts.
It is also possible to attach IAM roles and policies to Kubernetes service accounts, allowing fine-grained, controlled access to entities such as other containerized services, applications hosted outside of AWS, and AWS resources external to the cluster.
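With eksctl, wiring the OIDC issuer to IAM and creating a service account that assumes an IAM role takes two commands; the namespace, service account name, and attached policy below are illustrative assumptions:

# One-time step: register the cluster's OIDC issuer as an IAM identity provider.
eksctl utils associate-iam-oidc-provider --cluster <my-cluster-name> --approve

# Create a Kubernetes service account backed by an IAM role with a read-only S3 policy.
eksctl create iamserviceaccount \
  --cluster <my-cluster-name> \
  --namespace default \
  --name s3-reader \
  --attach-policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
  --approve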
EKS ships with a secure Kubernetes configuration out of the box to support the comprehensive security of Kubernetes workloads. For enhanced cluster security, EKS integrates with various AWS services and AWS partner solutions, including AWS IAM, AWS Key Management Service (KMS), and Amazon VPC.
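For example, a customer-managed KMS key can be used to envelope-encrypt the Kubernetes secrets stored by the cluster; a minimal sketch, with the key ARN as a placeholder:

# Turn on envelope encryption of Kubernetes secrets with a customer-managed KMS key.
aws eks associate-encryption-config --cluster-name <my-cluster-name> \
  --encryption-config '[{"resources":["secrets"],"provider":{"keyArn":"<kms-key-arn>"}}]'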
The EKS service is also certified by most compliance and regulatory frameworks defined for sensitive applications. Comprehensive security and regulatory compliance ensures data privacy and system integrity, making EKS ideal for sensitive and regulated Kubernetes applications.
Some of the frameworks EKS is compliant with include PCI-DSS, ISO, HIPAA, SOC, and HITRUST CSF.
EKS runs the Kubernetes master nodes across multiple AWS Availability Zones (AZs). This design enables EKS to automatically scale the control plane as workload demands change. The zones are connected by redundant, high-throughput, low-latency networks, giving clusters fault tolerance and high availability.
EKS coupled with AWS Fargate enables a CaaS model that offers a serverless platform for running Kubernetes-based applications directly on AWS. Fargate's serverless platform allows organizations to choose from predefined resource configurations and pay only for the resources used, while running pods in logically isolated runtime environments that reduce resource conflicts between workloads. Fargate also eliminates operational tasks such as provisioning and managing infrastructure, patching, upgrades, and security management.
EKS is fully managed, allowing for the automatic creation of the control plane and worker nodes based on workloads. The service also offers multiple launch options (Web console, eksctl, and IaC) and can be connected with on-premises clusters using AWS Outposts for hybrid deployments.
EKS supports all major Kubernetes add-ons and is fully compatible with Kubernetes-native tooling. This allows organizations to use popular Kubernetes community tools, such as CoreDNS, the Kubernetes Dashboard, and the kubectl command-line tool, alongside AWS services offered out of the box.
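For instance, once a cluster exists, standard kubectl workflows apply after the local kubeconfig is updated; placeholder names are assumed:

# Point kubectl at the EKS cluster, then use standard Kubernetes tooling.
aws eks update-kubeconfig --region <aws-region-code> --name <my-cluster-name>
kubectl get nodes
kubectl get pods --all-namespaces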
Common usage scenarios for EKS clusters include:
Kubernetes clusters typically rely on supporting resources such as message queues, object stores, and databases that are provisioned through a set of AWS-managed services. AWS Controllers for Kubernetes (ACK) offers a unified way of managing Kubernetes workloads and their related dependencies by letting workloads running within EKS define and consume AWS-managed services directly from the cluster. With ACK, EKS application workloads do not need to define external cluster resources separately or run those managed services inside the cluster.
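ACK controllers are installed per AWS service. The Helm invocation below is only a sketch based on the ACK project's published OCI charts; the chart path, version, and region values are assumptions to verify against the ACK documentation:

# Install the ACK controller for Amazon S3 into its own namespace (illustrative values).
helm install ack-s3-controller \
  oci://public.ecr.aws/aws-controllers-k8s/s3-chart \
  --version <chart-version> \
  --namespace ack-system --create-namespace \
  --set aws.region=<aws-region-code>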
AWS App Mesh is a service mesh that enables application-level networking by standardizing interservice communication in a microservices-based architecture. App Mesh also helps monitor and configure traffic flows, networking, and security for containerized microservices. Running service-oriented applications on EKS with App Mesh removes the need to implement special libraries, write custom code, or configure communication protocols for each service.
EKS can pull container images from both public and private repositories used to store, deploy, and manage container images. The ability to work seamlessly with both types of repositories makes EKS a good fit for software teams running hybrid clusters that rely on these repositories for their containerized applications.
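When images live in a private registry that the node's IAM role cannot access directly, a standard Kubernetes image pull secret can be created and referenced from pod specs via imagePullSecrets; a generic sketch with placeholder credentials:

# Create an image pull secret for a private registry (server, user, and password are placeholders).
kubectl create secret docker-registry regcred \
  --docker-server=<registry-server> \
  --docker-username=<user> \
  --docker-password=<password>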
The EKS service allows organizations to run conformant Kubernetes clusters in the cloud without extensive in-house expertise. EKS supports orchestrating workloads on self-managed, serverless, and managed worker node instances, allowing flexibility in controlling and scheduling containerized workloads. With AWS Outposts and EKS Anywhere, administration teams can connect nodes on AWS, on-premises, and on other cloud platforms for hybrid orchestration.
In this article, we delved into the features, benefits, and use cases of the managed EKS service from AWS. In future articles of the series, we will explore other managed Kubernetes services and see how they compare on common criteria.