Supercharge Your Cloud Infrastructure with Amazon EKS


Amazon EKS (Elastic Kubernetes Service) is a fully managed Kubernetes service provided by AWS that simplifies the process of running Kubernetes on the AWS cloud without having to manually manage the Kubernetes control plane or nodes. EKS provides a scalable, secure, and reliable platform to deploy and manage containerized applications using Kubernetes.

Key Features of Amazon EKS:

Fully Managed Kubernetes Control Plane:

AWS manages the Kubernetes control plane (API servers, etcd database, and control plane components), ensuring high availability and scalability across multiple availability zones.

Integration with AWS Services:

EKS integrates seamlessly with other AWS services such as IAM, VPC, Load Balancers, and CloudWatch for monitoring. This integration simplifies the management of your Kubernetes workloads with AWS services.

Support for Multiple Deployment Models:

Amazon EC2: You can run your EKS worker nodes on EC2 instances, giving you control over instance types, scaling, and node groups.

AWS Fargate: A serverless option for running Kubernetes pods, where AWS manages the underlying infrastructure.

Automatic Upgrades and Patching:

AWS takes care of patching and upgrading the Kubernetes control plane, providing you with new features and bug fixes without manual intervention.

Security:

EKS integrates with AWS IAM for role-based access control (RBAC) and can be used with identity providers such as AWS IAM Identity Center (formerly AWS SSO).

It also supports network policies and integrates with AWS security tools like AWS WAF, Shield, and GuardDuty.

Multi-AZ Support:

The control plane is distributed across multiple AWS availability zones, ensuring high availability and fault tolerance.

Support for Add-ons:

EKS supports add-ons such as the AWS Load Balancer Controller, CoreDNS, and VPC CNI for networking and load balancing.

Components of an EKS Cluster:

EKS Control Plane:

Managed by AWS, it includes the Kubernetes API server and the control plane components. You don't manage or interact with it directly but pay a flat hourly fee for each cluster.

Worker Nodes:

These are the EC2 instances or Fargate profiles that run your Kubernetes workloads (pods). Worker nodes are part of node groups, which can be managed using EC2 Auto Scaling.

Amazon VPC:

EKS clusters run within a VPC. You need to configure networking properly to ensure that pods can communicate within the VPC and with external resources if needed.

Node Groups:

Managed Node Groups: AWS automatically handles lifecycle management (creation, scaling, termination) of worker nodes.

Self-managed Node Groups: You can create and manage the worker nodes yourself using EC2 instances, giving you more control over instance types and scaling.

Networking:

EKS uses the VPC CNI plugin for Kubernetes, which allows each pod to receive a VPC IP address, enabling seamless integration with other AWS resources.

You can also use service meshes like AWS App Mesh or Istio for advanced networking.

Key Use Cases for EKS:

Microservices:

EKS is commonly used to run microservice architectures. Kubernetes helps in managing the lifecycle of containers, while EKS manages the Kubernetes cluster itself.

CI/CD Pipelines:

You can integrate EKS with CI/CD pipelines (e.g., Jenkins, GitLab CI) to automate deployment processes.

Machine Learning:

EKS can be used with machine learning workloads that run on containerized applications such as TensorFlow or PyTorch, alongside tools like Kubeflow for ML pipeline management.

Batch Processing:

You can use EKS for running batch workloads in parallel across nodes, leveraging Kubernetes job scheduling features.

Pricing for Amazon EKS:

Control Plane: $0.10 per hour per cluster.

Worker Nodes: You pay for the EC2 instances or AWS Fargate that you use to run your worker nodes. The pricing depends on the instance types, size, and duration of usage.

Data Transfer: Charges apply for data transfer across regions, outside AWS, or between different services.
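As a rough illustration (prices vary by region; assume the $0.10 per hour control-plane rate above and an on-demand t3.medium worker node at roughly $0.04 per hour): one cluster with two such nodes running for a 730-hour month costs about 730 × ($0.10 + 2 × $0.04) ≈ $131, excluding data transfer.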

Getting Started with Amazon EKS:

Install AWS CLI, eksctl, and kubectl:

AWS CLI: To interact with AWS services from the command line.

eksctl: A simple CLI to create and manage EKS clusters.

kubectl: The standard Kubernetes CLI tool to interact with Kubernetes clusters.

Now let's demonstrate the AWS EKS service in practice.

Go to the AWS console and search for EKS.

Before we go further, we have to install kubectl, eksctl (a command line utility), and the AWS CLI on our machine. For this demo I will use an Ubuntu EC2 instance, so I created one Ubuntu EC2 instance first. One common way to install these tools on Ubuntu is sketched below.
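The download URLs below are the ones commonly documented for kubectl, eksctl, and AWS CLI v2; they may change over time, so verify them against the official docs before copying them.

# kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# eksctl
curl -sL "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_Linux_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin

# AWS CLI v2
curl -o awscliv2.zip "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"
unzip awscliv2.zip && sudo ./aws/install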

Now, eksctl and kubectl both have been installed.

Along with this, we have to configure the AWS CLI as well; the AWS CLI has also been installed and configured (see below).
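Configuring the CLI is typically done with aws configure, which prompts for an access key, secret key, default region, and output format (the values below are placeholders):

aws configure
# AWS Access Key ID [None]: <your-access-key-id>
# AWS Secret Access Key [None]: <your-secret-access-key>
# Default region name [None]: us-east-1
# Default output format [None]: json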

Next, go to the AWS console and open the EKS service; there is an option called "Add cluster".

If we create the cluster through the AWS UI, we have to provide a lot of information manually, which is not a best practice; a more efficient way is to create the cluster through the CLI (eksctl).

eksctl is a command line utility used to create and manage EKS clusters.

Next, we will use the command below to create our first cluster.

eksctl create cluster --name demo-cluster --region us-east-1 --fargate

Instead of traditional EC2 instances, I have used Fargate to create the cluster. AWS Fargate is a serverless compute engine for running containers in Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). With Fargate, you don't need to provision, configure, or manage servers; you simply deploy your containers and Fargate handles the rest. With Fargate we don't have to worry about managing the underlying infrastructure; Fargate provisions and manages the compute resources for us.

Please keep in mind that after the demo you should delete the cluster that was created for practice purposes, for example with the command below.
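The cleanup command would be (using the same name and region as at creation time):

eksctl delete cluster --name demo-cluster --region us-east-1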

After executing the eksctl create cluster command, it takes approximately 15 minutes for the control plane to reach the ready state.

And finally, the cluster is ready.

We can check in the AWS console whether the cluster has been created. Yes, the cluster has been created; we can also confirm this from the CLI, as shown below.
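For example, a quick check from the CLI (exact output will vary):

eksctl get cluster --region us-east-1
aws eks describe-cluster --name demo-cluster --region us-east-1 --query "cluster.status"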

Another advantage of the Resources tab is that we can inspect the resources available on our cluster. We don't have to go to the command line to check the running pods; we can simply click "Pods" in the left pane.

All of this information is available here.

Let's say we want to check the service accounts for this cluster; then click on "Authentication".

Now, coming to the cluster Overview tab, there is an API server endpoint and an OpenID Connect provider URL.

What is the OpenID Connect provider URL?

With OpenID Connect we can integrate identity providers such as Okta or LDAP with our cluster. We can create all the users of our organization in the identity provider and then attach that identity provider to the cluster. AWS allows you to attach any identity provider here, and in this case we can use IAM as the identity provider. This matters because if the pods created in the cluster want to talk to other AWS services, for example S3 buckets, the EKS control plane, or CloudWatch, they need IAM permissions, so we have to integrate the cluster with IAM.
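The cluster's OIDC issuer URL can also be read from the CLI, for example:

aws eks describe-cluster --name demo-cluster --region us-east-1 --query "cluster.identity.oidc.issuer" --output text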

Then go to the Compute tab, where we can see both of the Fargate nodes that have been created.

Node groups are where we would add EC2 instances; since we are using Fargate, we don't have to modify any settings here.

Coming to the Fargate profiles: currently the Fargate profile is attached to the default and kube-system namespaces. That means we can only deploy pods in these two namespaces (default and kube-system).

If we want to deploy pods in other namespaces, click the option called "Add Fargate profile"; we can also list the existing profiles from the CLI, as shown below.
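The existing Fargate profiles can also be listed from the CLI:

eksctl get fargateprofile --cluster demo-cluster --region us-east-1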

Coming to the Networking tab, there is no need to modify anything for now.

Now we can go to the server (EC2 instance) and execute the command below:

aws eks update-kubeconfig --name demo-cluster --region us-east-1

Breaking Down the Command:

aws eks: This part indicates that you are using the AWS CLI to interact with the Amazon EKS service.

update-kubeconfig: This is the specific command that updates your local kubeconfig file with the necessary configurations to access the specified EKS cluster. The kubeconfig file is typically located at ~/.kube/config and stores information about clusters, users, and contexts.

--name demo-cluster: This flag specifies the name of the EKS cluster that you want to connect to. In this case, it is a demo-cluster.

--region us-east-1: This flag specifies the AWS region where the EKS cluster is located. Here, it is us-east-1.

We can inspect the generated configuration with cat /root/.kube/config.

We initially got an error after executing the command: there was an issue with the command syntax. After providing the proper syntax, the command executed successfully.
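A quick sanity check that kubectl can now reach the cluster (exact output will vary):

kubectl cluster-info
kubectl get ns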

Next, let's proceed with the deployment of the actual application.

The application we will deploy is the 2048 game app.

We can execute the below command:

eksctl create fargateprofile \
    --cluster demo-cluster \
    --region us-east-1 \
    --name alb-sample-app \
    --namespace game-2048

The above command creates a Fargate profile named alb-sample-app and attaches it to the namespace game-2048, so that pods in this namespace can be scheduled on Fargate.

The Fargate profile has been created. We can confirm this through the console as well.

We now have Fargate profiles covering both sets of namespaces. Restricting where pods can run via profile selectors is conceptually similar to pod affinity, node affinity, or anti-affinity in a standalone Kubernetes cluster.

From the GitHub repo we can copy the next command, shown below:

https://github.com/AmitP9999/aws-devops-zero-to-hero/tree/main/day-22

kubectl apply -f  raw.githubusercontent.com/kubernetes-sigs/a..

The above manifest contains all the configuration for the deployment, service, and ingress. We can also read the contents of the file; for more details, check the URL above.

All of the resources have now been created; we can verify this from the CLI, as shown below.
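For example, assuming the game-2048 namespace used by the Fargate profile above:

kubectl get pods -n game-2048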

Below we can check that the service is also running: it has a cluster IP and its type is NodePort, but there is no external IP. This means that anybody within the AWS VPC, or anybody who has access to the VPC, can talk to these pods using the node IP address followed by the port, i.e. 30310.
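A sketch of the corresponding check (the NodePort 30310 is just the value from this particular run):

kubectl get svc -n game-2048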

Of course, our goal in this exercise is that someone from an outside network, an end user, should be able to access these pods.

We have also created the ingress, which we can inspect below. The name of the ingress is ingress-2048, the ingress class is alb, and the host is *, meaning anybody trying to access the application is allowed. Ports are listed, but there is no address yet; that means an ingress controller still needs to be deployed. Once we deploy the ingress controller, we will see an address here. Why is the address useful? It is what we use to access the application from the outside world.
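We can see this from the CLI as well; the ADDRESS column stays empty until an ingress controller is installed:

kubectl get ingress -n game-2048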

Next we will create the ingress controller. The ingress controller reads the ingress resource called ingress-2048 and creates the load balancer for us. It does not just create the load balancer, it configures it entirely, which means that inside the ALB the target groups, ports, and everything else are taken care of by the ingress controller.

Then go to the GitHub repo and check the configure-oidc-connector section. We need the IAM OIDC provider because the ALB controller that will be running needs to talk to a few AWS resources. The step typically looks like the command below.
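This association is usually done with eksctl; a sketch using the same cluster name as above:

eksctl utils associate-iam-oidc-provider --cluster demo-cluster --approve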

Perfect, now the IAM OIDC provider has been associated with the cluster.

Moving on to the final step, go to the repository again and open the section called alb-controller-add-on. We will go step by step. The first step is to install the ALB controller (this is just a pod running in the cluster).

This pod needs access to AWS services such as ALB, so first we have to create an IAM policy. We download the JSON policy document from the GitHub repo:

curl -O raw.githubusercontent.com/kubernetes-sigs/a..

The downloaded iam_policy.json file contains the IAM policy.

Then create the IAM policy using the command below:

aws iam create-policy \
    --policy-name AWSLoadBalancerControllerIAMPolicy \
    --policy-document file://iam_policy.json

Now we have to execute the command below to create the IAM service account:

eksctl create iamserviceaccount \
  --cluster=demo-cluster \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --role-name AmazonEKSLoadBalancerControllerRole \
  --attach-policy-arn=arn:aws:iam::404926571352:policy/AWSLoadBalancerControllerIAMPolicy \
  --approve

Once we create the IAM service account, eksctl provisions it through a CloudFormation stack behind the scenes.

The CloudFormation stack referenced here (eksctl-demo-cluster-addon-iamserviceaccount-kube-system-aws-load-balancer-controller) is created automatically by eksctl when you use it to provision an IAM service account for the AWS Load Balancer Controller. This is part of the process where eksctl automates the creation of the necessary AWS resources, such as IAM roles, service accounts, and Kubernetes resources, for use with EKS.

How eksctl Automatically Creates a CloudFormation Template:

When you use eksctl to create or update a Kubernetes cluster, node groups, or add-ons (such as the AWS Load Balancer Controller), it can automatically generate and manage AWS CloudFormation stacks in the background to handle the creation of AWS infrastructure resources.

Here’s how it typically works:

1. IAM Service Account Creation via eksctl:

The command you use to create a service account with eksctl for the AWS Load Balancer Controller might look like this:

eksctl create iamserviceaccount \
  --cluster demo-cluster \
  --namespace kube-system \
  --name aws-load-balancer-controller \
  --attach-policy-arn arn:aws:iam::aws:policy/AWSLoadBalancerControllerIAMPolicy \
  --approve

In this command:

--cluster demo-cluster: Specifies the EKS cluster where the service account will be created.

--namespace kube-system: Specifies the Kubernetes namespace where the service account will exist.

--name aws-load-balancer-controller: The name of the service account in Kubernetes.

--attach-policy-arn: Attaches the necessary IAM policy to the service account (in this case, the AWS Load Balancer Controller IAM policy).

--approve: Automatically applies the change without manual confirmation.

2. CloudFormation Stack Creation:

Once this command is executed, eksctl generates a CloudFormation template that:

Creates an IAM role associated with the Kubernetes service account.

Attaches the specified IAM policy (e.g., AWSLoadBalancerControllerIAMPolicy) to the IAM role.

Links the IAM role with the Kubernetes service account.

eksctl uses AWS CloudFormation to provision these AWS resources because it provides infrastructure-as-code management, making it easier to automate resource creation, updates, and deletions.

3. Generated CloudFormation Stack Name:

The name of the CloudFormation stack (eksctl-demo-cluster-addon-iamserviceaccount-kube-system-aws-load-balancer-controller) is automatically generated based on:

eksctl-: A prefix used by eksctl for all CloudFormation stacks it manages.

demo-cluster: The name of your EKS cluster.

addon-iamserviceaccount: A description indicating that this stack is creating an IAM service account.

kube-system: The namespace where the service account is created.

aws-load-balancer-controller: The name of the service account.

4. What the CloudFormation Template Does:

The CloudFormation template generated by eksctl for the IAM service account typically does the following:

Creates an IAM role: This role is created with trust policies that allow the EKS service to assume it on behalf of the Kubernetes service account.

Attaches the required IAM policies: For example, the AWSLoadBalancerControllerIAMPolicy allows the service account to manage AWS load balancers for the cluster.

Links the IAM role to the Kubernetes service account: This step ensures that the Kubernetes service account has the correct permissions to interact with AWS resources.

We will use this same service account when we install the controller in the cluster.

Now let's proceed with the ALB controller we were talking about. For this we are going to use a Helm chart; the Helm chart will create the actual controller, and the controller pods will run using the service account we just created.

helm repo add eks https://aws.github.io/eks-charts
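It is usually a good idea to refresh the local chart index after adding the repository:

helm repo update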

Then we execute the command below:

helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=<your-cluster-name> \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set region=<region> \
  --set vpcId=<your-vpc-id>

Take the VPC ID from the AWS console to put into that command (replace <your-cluster-name>, <region>, and <your-vpc-id> with your values).

After a few errors with the above command, we successfully installed the AWS Load Balancer Controller.

We have to check that the load balancer controller deployment was created with at least 2 replicas, as shown below.
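A sketch of the check (the controller is installed into kube-system by the Helm command above; the label selector is the one the chart typically applies):

kubectl get deployment -n kube-system aws-load-balancer-controller
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller -w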

Two replicas have been created for the two availability zones.

We can keep watching this until the controller pods are running (the -w flag above keeps the command in watch mode).

If we want to edit the deployment or see its status, we can open it as shown below:
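For example:

kubectl edit deployment/aws-load-balancer-controller -n kube-system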

Finally, the AWS Load Balancer Controller has been created.

We can check in the AWS console whether a load balancer has been created.

Yes, our load balancer has been created.

The load balancer controller has created the LB, and logically it did so because we deployed the ingress controller and it picked up the ingress resource we configured.

Perfect, now the ingress resource shows an address.

“k8s-game2048-ingress2-bcac0b5b37-899892634...” — this address is the load balancer that the ingress controller created while watching this ingress resource.

As we learned earlier, the ingress controller picks up the configuration provided in the ingress resource and creates a load balancer from it.

Then go to the AWS console and click on Load Balancers.

If we check the DNS name of the load balancer, it is the same as the address shown in the ingress resource.

The ingress controller has read the ingress resource and created the load balancer.

We can check the other configurations in the Load Balancer through the console.

And finally, with a bit of patience, our application is ready. We can access it through “k8s-game2048-ingress2-bcac0b5b37-899892634...”.

Happy Learning!!