
How to use a secondary CIDR in EKS

In this Teratip, we’ll deep dive into the realm of secondary CIDR blocks in AWS EKS and explore how they can empower you to enhance your cluster's flexibility and scalability. We’ll uncover the benefits of leveraging secondary CIDR blocks and walk you through the process of configuring them.



Introduction

As you expand your applications and services, you may face scenarios where the primary CIDR block assigned to your EKS cluster becomes insufficient. Perhaps you're introducing additional microservices, deploying multiple VPC peering connections, or integrating with legacy systems that have their own IP address requirements. These situations call for a solution that allows you to allocate more IP addresses to your cluster without sacrificing stability or network performance.

Secondary CIDR blocks provide an elegant solution by enabling you to attach additional IP address ranges to your existing VPC, thereby expanding the available address space for your EKS cluster. Throughout this post, we’ll go over the step-by-step process of adding secondary CIDR blocks to your AWS EKS cluster.


Create the EKS cluster

For this demonstration, I created a simple EKS cluster with a single node and deployed the well-known game 2048, exposed through an internet-facing Application Load Balancer.


So, the EKS cluster and workload look like this:

EKS-cluster-example

And this is the VPC where the cluster is located:

example-of-VPC-of-a-cluster

As you can see, this VPC has the 10.0.0.0/16 IPv4 CIDR block assigned. With that in mind, all pods in the cluster will get an IP address within this range:

example-of-VP-adress-around-10.0.0.0/16-IPv4

Next, we’ll configure this same cluster to use a secondary CIDR block in the same VPC, so that almost all pods get their IP addresses from the new CIDR.


Step by step process

Step #1: Create the secondary CIDR within our VPC

RESTRICTION: EKS only supports additional IPv4 CIDR blocks within the 100.64.0.0/10 and 198.19.0.0/16 ranges.


You can add a second CIDR block to the existing VPC and, of course, create subnets within that VPC using the new range. I did it through Terraform, but it can be done from the AWS Console as well.


The code that I used to create the VPC is the following:


module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.19.0"

  name = "teratip-eks-2cidr-vpc"
  cidr = "10.0.0.0/16"

  secondary_cidr_blocks = ["100.64.0.0/16"]

  azs             = slice(data.aws_availability_zones.available.names, 0, 2)
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "100.64.1.0/24", "100.64.2.0/24"]
  public_subnets  = ["10.0.4.0/24", "10.0.5.0/24"]

  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  public_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                      = 1
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"             = 1
  }
}


Take a look at the line secondary_cidr_blocks = ["100.64.0.0/16"] and at the last two private subnets, which are carved out of that new range: private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "100.64.1.0/24", "100.64.2.0/24"]


The resulting VPC looks like this:

example-pf-VPC-obtained
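If you’d rather not use Terraform, the same association can be sketched with the AWS CLI. Note that the VPC ID below is a hypothetical placeholder, and the subnet CIDRs and AZs mirror the example above:

```shell
# Attach the secondary range to the VPC (the VPC ID is a placeholder)
aws ec2 associate-vpc-cidr-block \
  --vpc-id vpc-0123456789abcdef0 \
  --cidr-block 100.64.0.0/16

# Carve the pod subnets out of the new range, one per availability zone
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 \
  --availability-zone us-east-2a --cidr-block 100.64.1.0/24
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 \
  --availability-zone us-east-2b --cidr-block 100.64.2.0/24
```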


Step #2: Configure the CNI

DEFINITION: The CNI (Container Network Interface) is responsible for a container’s network connectivity and for removing allocated resources when the container is deleted.


To use a secondary CIDR in the cluster, you need to set some environment variables on the CNI DaemonSet by running the following commands:


1. To turn on custom network configuration for the CNI plugin, run the following command:

kubectl set env ds aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true


2. To add the ENIConfig label for identifying your worker nodes, run the following command:


kubectl set env daemonset aws-node -n kube-system ENI_CONFIG_LABEL_DEF=topology.kubernetes.io/zone


3. Enable the parameter to assign prefixes to network interfaces for the Amazon VPC CNI DaemonSet.


kubectl set env daemonset aws-node -n kube-system ENABLE_PREFIX_DELEGATION=true


Finally, terminate the worker nodes so that the Auto Scaling group launches new nodes that come bootstrapped with the custom network configuration.
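For reference, the three variables from the steps above can also be applied in a single command, and kubectl can list them back so you can confirm they landed on the DaemonSet:

```shell
# Same three settings from steps 1-3, applied in one shot
kubectl set env daemonset aws-node -n kube-system \
  AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true \
  ENI_CONFIG_LABEL_DEF=topology.kubernetes.io/zone \
  ENABLE_PREFIX_DELEGATION=true

# Print the DaemonSet's environment to verify the values
kubectl set env daemonset aws-node -n kube-system --list
```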


Step #3: Create the ENIconfig resources for the new subnets

As the next step, we’ll add custom resources for the ENIConfig custom resource definition (CRD). These resources store the VPC subnet and security group configuration that worker nodes read to configure the VPC CNI plugin.


Create a custom resource for each subnet, replacing the subnet and security group IDs with your own. Since we created two secondary subnets, we need to create two custom resources.


---
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: us-east-2a
spec:
  securityGroups:
    - sg-087d0a0ece9800b00
  subnet: subnet-0fabe93c6f43f492b
---
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: us-east-2b
spec:
  securityGroups:
    - sg-087d0a0ece9800b00
  subnet: subnet-0484194486fad2ce3


Note: The ENIConfig name must match the Availability Zone of your subnets.


You can get the cluster security group used in the ENIConfigs from the EKS Console or by running the following command:


aws eks describe-cluster --name $cluster_name --query "cluster.resourcesVpcConfig.clusterSecurityGroupId" --output text


Once the ENIConfig YAML is created, apply it:


kubectl apply -f <eniconfigs.yaml>
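Since the two manifests differ only in their name and subnet ID, a small shell loop can generate the file for you. The security group and subnet IDs below are the example values from this post; replace them with your own:

```shell
#!/usr/bin/env sh
# Generate one ENIConfig manifest per availability zone.
# SG and the subnet IDs are example values; use your own cluster
# security group and secondary-CIDR subnets here.
SG="sg-087d0a0ece9800b00"

: > eniconfigs.yaml
for pair in "us-east-2a:subnet-0fabe93c6f43f492b" \
            "us-east-2b:subnet-0484194486fad2ce3"; do
  az="${pair%%:*}"
  subnet="${pair##*:}"
  cat >> eniconfigs.yaml <<EOF
---
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: ${az}
spec:
  securityGroups:
    - ${SG}
  subnet: ${subnet}
EOF
done
```

The generated eniconfigs.yaml can then be applied with the kubectl apply command above.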


Check the new network configuration

Finally, all the pods running on your worker nodes should have IP addresses within the secondary CIDR.

example-of-IP-adresses-within-the-secondary-CIDR

As you can see in the screenshot below, the service is still working as expected:

how-it-looks-a-service-working

example-of-how-it-looks-the-game-working

Also, notice that the EC2 worker node now has an ENI (Elastic Network Interface) with an IP address within the secondary CIDR:

example-of-an-ENI-with-an-IP-address-within-the-secondary-CIDR
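You can run the same check from the command line; any pod that moved onto the secondary CIDR will show a 100.64.x.x address:

```shell
# List every pod with its IP and keep the ones in the secondary range
kubectl get pods -A -o wide | grep '100\.64\.'
```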

Configure max-pods per node

Enabling custom networking effectively removes one network interface from each node that uses it: the node’s primary network interface is no longer used for pod placement. This lowers the number of pods a node can host, so you must update max-pods:


To do this, I used Terraform to update the node group, increasing the max-pods parameter.

The IaC for the EKS cluster is the following:


module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "19.5.1"

  cluster_name    = local.cluster_name
  cluster_version = "1.24"

  vpc_id     = module.vpc.vpc_id
  subnet_ids = [module.vpc.private_subnets[0], module.vpc.private_subnets[1]]

  cluster_endpoint_public_access = true

  eks_managed_node_group_defaults = {
    ami_type = "AL2_x86_64"
  }

  eks_managed_node_groups = {
    one = {
      name = "node-group-1"

      instance_types = ["t3.small"]

      min_size     = 1
      max_size     = 1
      desired_size = 1

      bootstrap_extra_args = "--container-runtime containerd --kubelet-extra-args '--max-pods=110'"

      pre_bootstrap_user_data = <<-EOT
        export CONTAINER_RUNTIME="containerd"
        export USE_MAX_PODS=false
      EOT
    }
  }
}



In the example I set max-pods = 110, which is more than this instance type can actually handle, but it only acts as a hard upper limit: the node will still schedule only as many pods as its resources allow.
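As a sanity check on that number, here is a rough sketch of the arithmetic behind AWS’s max-pods-calculator script, assuming both prefix delegation and custom networking are enabled: custom networking costs the primary ENI, each remaining ENI keeps one slot for its own primary IP, each remaining slot holds a /28 prefix (16 addresses), and 2 pods use host networking. This is an approximation, not the official tool:

```shell
#!/usr/bin/env sh
# Hedged estimate of max pods with prefix delegation + custom networking.
# enis and ips_per_eni come from the instance type; a t3.small has
# 3 ENIs with 4 IPv4 addresses each.
max_pods() {
  enis=$1
  ips_per_eni=$2
  # (ENIs minus the primary) * (slots minus the ENI's own primary IP)
  # * 16 addresses per /28 prefix, plus 2 host-networking pods
  echo $(( (enis - 1) * (ips_per_eni - 1) * 16 + 2 ))
}

max_pods 3 4
```

For a t3.small this yields 98, which is why the node never actually reaches the 110 limit configured above.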


If you’re interested in learning more about how to increase the number of available IP addresses for your nodes, here is an interesting guide from AWS that you can read.


Just in case…

If for some reason something goes wrong and your pods aren’t getting IP addresses, you can easily roll back the change by running the following command and launching new nodes:


kubectl set env ds aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=false


After running the command and replacing the nodes, the cluster will go back to using the primary CIDR again.


Final Thoughts

By following the steps in this guide, you now know how to set up and manage secondary CIDR blocks, tailoring your cluster’s network to the needs of your applications. This flexibility simplifies resource allocation and lets you adjust your infrastructure as requirements evolve.


img-of-Ignacio-Rubio




Ignacio Rubio

Cloud Engineer

Teracloud





If you want to know more about Kubernetes, we suggest checking out Conftest: The path to more efficient and effective Kubernetes automated testing

 

If you are interested in more of our #TeraTips or our blog’s content, we invite you to browse all the entries we have created for you and your needs. And subscribe to stay up to date with any news! 👇


