Hands-on: Enable EKS Auto Mode and migrate a workload in ~30 minutes

Why Auto Mode?

EKS Auto Mode automates node provisioning, scaling, upgrades, load balancers, block storage, and networking—so you stop babysitting node groups and focus on apps. It treats nodes like locked-down “appliances,” rotates them (with a maximum 21-day lifetime), and integrates managed Karpenter under the hood.

Prereqs (5 min)

  • Kubernetes 1.29 or later (Auto Mode isn’t available in some regions, such as ap-southeast-7 and mx-central-1).
  • awscli + kubectl + eksctl ≥ 0.195.0.
  • Cluster IAM role can be updated (you’ll attach specific AWS-managed policies).

Check versions

aws eks describe-cluster --name $CLUSTER --query 'cluster.version' --output text
eksctl version
kubectl version

(Existing clusters only) Make sure add-ons meet minimum versions

If you’ll enable Auto Mode on an existing cluster, confirm add-on minimums (VPC CNI, kube-proxy, EBS CSI, CSI snapshot controller, Pod Identity Agent).
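
To see where you stand, list the installed add-ons and pull each version (this assumes they are installed as EKS managed add-ons; the names below are the standard add-on identifiers):

aws eks list-addons --cluster-name $CLUSTER
aws eks describe-addon --cluster-name $CLUSTER --addon-name vpc-cni \
  --query 'addon.addonVersion' --output text
# repeat for: kube-proxy, aws-ebs-csi-driver, snapshot-controller, eks-pod-identity-agent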

Step 1 — Grant the cluster IAM role the right policies (3 min)

Attach these to the Cluster IAM role:

  • AmazonEKSComputePolicy
  • AmazonEKSBlockStoragePolicy
  • AmazonEKSLoadBalancingPolicy
  • AmazonEKSNetworkingPolicy
  • AmazonEKSClusterPolicy

Also ensure the role trust policy includes "sts:TagSession".
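
If you manage the role from the CLI, attaching the policies is a short loop. A sketch, assuming the cluster role is named eksClusterRole (substitute your own):

ROLE=eksClusterRole   # assumption — your cluster IAM role name
for POLICY in AmazonEKSComputePolicy AmazonEKSBlockStoragePolicy \
  AmazonEKSLoadBalancingPolicy AmazonEKSNetworkingPolicy AmazonEKSClusterPolicy; do
  aws iam attach-role-policy --role-name $ROLE \
    --policy-arn arn:aws:iam::aws:policy/$POLICY
done

The trust policy should allow both actions for the EKS service principal:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "eks.amazonaws.com" },
    "Action": ["sts:AssumeRole", "sts:TagSession"]
  }]
}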

Step 2 — Enable Auto Mode (5–8 min)

Option A: eksctl (recommended)

Create/update config:

# cluster-automode.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: <cluster-name>
  region: <region>

autoModeConfig:
  enabled: true
  # omit nodePools to let EKS create the default 'general-purpose' and 'system' pools

Update the cluster:

eksctl update auto-mode-config -f cluster-automode.yaml

This sets compute, block storage, and ELB integration for Auto Mode.

Option B: AWS CLI

aws eks update-cluster-config \
  --name $CLUSTER \
  --compute-config enabled=true \
  --kubernetes-network-config '{"elasticLoadBalancing":{"enabled": true}}' \
  --storage-config '{"blockStorage":{"enabled": true}}'
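
Whichever option you used, you can confirm all three capabilities from the cluster description (these fields appear in the EKS describe-cluster output for Auto Mode clusters):

aws eks describe-cluster --name $CLUSTER \
  --query 'cluster.{compute:computeConfig.enabled,storage:storageConfig.blockStorage.enabled,elb:kubernetesNetworkConfig.elasticLoadBalancing.enabled}'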

Step 3 — Enable built-in NodePools (1 min)

Auto Mode ships with two built-ins:

  • system (tainted, for cluster-critical add-ons; arm64 & amd64)
  • general-purpose (your app workloads; amd64 only)

Enable (or re-enable) via CLI:

aws eks update-cluster-config \
  --name $CLUSTER \
  --compute-config '{
    "nodeRoleArn": "<node-role-arn>",
    "nodePools": ["general-purpose", "system"],
    "enabled": true
  }' \
  --kubernetes-network-config '{"elasticLoadBalancing":{"enabled": true}}' \
  --storage-config '{"blockStorage":{"enabled": true}}'

Step 4 — (Optional) Add a cost-saving Spot NodePool (3 min)

# nodepool-spot.yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: spot-general
spec:
  template:
    metadata:
      labels:
        karpenter.sh/capacity-type: spot
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: default
      requirements:
        - key: "eks.amazonaws.com/instance-category"
          operator: In
          values: ["c","m","r"]
        - key: "kubernetes.io/arch"
          operator: In
          values: ["arm64","amd64"]
  limits:
    cpu: "200"
    memory: 200Gi

Apply and verify:

kubectl apply -f nodepool-spot.yaml
kubectl get nodepools

(Use EKS-supported labels like eks.amazonaws.com/instance-category and karpenter.sh/capacity-type.)
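
To push a specific workload onto that pool, select on the capacity-type label in its pod template (a snippet, not a full manifest):

# pod template snippet — schedule this workload onto Spot capacity
spec:
  nodeSelector:
    karpenter.sh/capacity-type: spot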

Step 5 — Migrate a stateless workload

Add this node selector so pods land on Auto Mode nodes:

spec:
  nodeSelector:
    eks.amazonaws.com/compute-type: auto
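
For reference, a minimal your-deployment.yaml might look like this (name, image, and replica count are placeholders; the resource requests are what Auto Mode uses to size nodes):

# your-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      nodeSelector:
        eks.amazonaws.com/compute-type: auto
      containers:
        - name: web
          image: public.ecr.aws/nginx/nginx:latest   # placeholder image
          resources:
            requests:
              cpu: 250m
              memory: 256Mi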

Apply and watch Auto Mode spin nodes:

kubectl apply -f your-deployment.yaml
kubectl get events -w --sort-by='.lastTimestamp'
kubectl get nodes

(If no capacity exists, Auto Mode creates nodes for you.)

Retire Managed Node Groups (drains safely):

eksctl delete nodegroup --cluster $CLUSTER --name <mng-name>
# repeat for each MNG

As each node group drains, EKS Auto Mode reschedules the evicted pods onto its own nodes.
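
If you want to double-check a group before and after deletion, its nodes carry the standard MNG label, and a quick Pending scan shows whether anything failed to reschedule:

kubectl get nodes -l eks.amazonaws.com/nodegroup=<mng-name>
kubectl get pods -A --field-selector status.phase=Pending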

Step 6 — Migrate stateful workloads (EBS) (8–12 min)

Auto Mode uses a different EBS CSI provisioner:

  • Standard clusters: ebs.csi.aws.com
  • Auto Mode: ebs.csi.eks.amazonaws.com

You can’t just reuse PVCs between provisioners. Either:

  • Recreate PV/PVC against the same volume ID (retain then rebind), or
  • Use AWS Labs’ eks-auto-mode-ebs-migration-tool to rewrite the StorageClass bindings for you:

./eks-auto-mode-ebs-migration-tool --help

High-level manual path:

  • Set persistentVolumeReclaimPolicy: Retain on the existing PV.
  • Delete the PV object (the EBS volume is kept).
  • Create a new PV pointing at the same volumeHandle with the Auto Mode provisioner.
  • Create a new PVC bound to that PV.
  • Update the Deployment/StatefulSet to use the new PVC.
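
A sketch of that path, assuming the workload is scaled down first, the old PV is named data-pv-old, and an Auto Mode StorageClass called auto-ebs-sc (provisioner ebs.csi.eks.amazonaws.com) already exists; the volume ID below is a placeholder:

# keep the EBS volume when the PV object is deleted
kubectl patch pv data-pv-old -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
kubectl delete pv data-pv-old

# pv-pvc-new.yaml — same volume, Auto Mode provisioner, pre-bound PVC
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-pv-new
spec:
  capacity:
    storage: 20Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: auto-ebs-sc
  csi:
    driver: ebs.csi.eks.amazonaws.com
    fsType: ext4
    volumeHandle: vol-0123456789abcdef0   # placeholder — your existing EBS volume ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc-new
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: auto-ebs-sc
  volumeName: data-pv-new
  resources:
    requests:
      storage: 20Gi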

Step 7 — Load balancers (if migrating from ALB Controller)

You can run the AWS Load Balancer Controller alongside Auto Mode, but existing LBs aren’t “migrated.” Do blue-green with DNS cutover when switching to Auto Mode (loadBalancerClass / IngressClass).
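
On the Auto Mode side of the cutover, the new load balancers are requested through the built-in controllers; a minimal sketch using the class values documented for Auto Mode (names and selector are placeholders):

# Service → NLB provisioned by Auto Mode
apiVersion: v1
kind: Service
metadata:
  name: web-nlb
spec:
  type: LoadBalancer
  loadBalancerClass: eks.amazonaws.com/nlb
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
---
# IngressClass → ALBs provisioned by Auto Mode
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: auto-alb
spec:
  controller: eks.amazonaws.com/alb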

Step 8 — Pod Identity (no IRSA required)

On Auto Mode clusters you don’t need to install the Pod Identity Agent yourself. Create an association to grant AWS access to a service account:

aws eks create-pod-identity-association \
  --cluster-name $CLUSTER \
  --namespace default \
  --service-account myapp-sa \
  --role-arn arn:aws:iam::<acct>:role/MyAppPodRole

This replaces IRSA/OIDC setup and is cluster-native.
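
Two details worth checking: the role must trust the Pod Identity service principal with both actions, and the association should show up when you list them (role name matches the example above):

# trust policy on MyAppPodRole
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "pods.eks.amazonaws.com" },
    "Action": ["sts:AssumeRole", "sts:TagSession"]
  }]
}

aws eks list-pod-identity-associations --cluster-name $CLUSTER --namespace default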

Verifications

kubectl get nodepools
kubectl get nodes -o wide
kubectl get pods -A -o wide

Nodes show eks.amazonaws.com/compute-type=auto.

Built-in pools: system, general-purpose
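
To print those labels as columns rather than grepping node descriptions:

kubectl get nodes -L eks.amazonaws.com/compute-type -L karpenter.sh/capacity-type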

Rollback / Tuning

  • Disable Auto Mode (eksctl):

autoModeConfig:
  enabled: false

eksctl update auto-mode-config -f cluster-automode.yaml

Gotchas & tips

  • No SSH/SSM into Auto Mode nodes by design; plan ops via DaemonSets/telemetry.
  • Built-in general-purpose is amd64 only—create a custom NodePool for arm64 or GPUs.
  • Expect regular node replacements; keep PDBs realistic and ensure apps tolerate restarts.
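
For the last point, a PodDisruptionBudget keeps the 21-day node rotations from dropping a service below its floor; a minimal sketch (name, selector, and threshold are placeholders):

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: web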

Atiqur Rahman

I am MD. Atiqur Rahman, a BUET graduate and an AWS-certified solutions architect. I hold six AWS certifications, including Cloud Practitioner, Solutions Architect, SysOps Administrator, and Developer Associate, and I have more than 8 years of experience as a DevOps engineer designing complex SaaS applications.
