
[AWS EKS] (23) EKS Study, Week 8 (Amazon EKS Upgrades: Strategies and Best Practices)

This post is based on materials prepared for the CloudNet@ team's EKS study, AEWS season 2.

Amazon EKS Upgrades: Strategies and Best Practices

The purpose of the Amazon EKS cluster upgrades workshop is to introduce a series of labs that give customers best practices for planning and executing Amazon EKS cluster upgrades.

We will explore a range of cluster upgrade strategies, including In-Place and Blue/Green, and examine the execution details of each.

The workshop access was arranged by a fellow study member: thanks to 최영락님, we received temporary (3-day) accounts for the AWS Upgrade Workshop. Thank you once again.

 

Amazon EKS Upgrades: Strategies and Best Practices

1. Deploy the lab environment: Oregon region (us-west-2), connect to the EC2 instance (IDE-Server)

2. Check the IDE environment: Kubernetes version (1.25.16), nodes (AL2, kernel 5.10.234, containerd 1.7.25)

ec2-user:~/environment:$ env
SHELL=/bin/bash
REGION=us-west-2
COLORTERM=truecolor
HISTCONTROL=ignoredups
TERM_PROGRAM_VERSION=1.91.1
CLUSTER_NAME=eksworkshop-eksctl
SYSTEMD_COLORS=false
HOSTNAME=ip-192-168-0-94.us-west-2.compute.internal
HISTSIZE=1000
AWS_DEFAULT_REGION=us-west-2
INSTANCE_IAM_ROLE_ARN=arn:aws:iam::586932131810:role/workshop-stack-IdeIdeRoleD654ADD4-Ko2wn5vUb46g
VSCODE_PROXY_URI=https://dp8uzbhhiz2ue.cloudfront.net/proxy/{{port}}/
AWS_REGION=us-west-2
PWD=/home/ec2-user/environment
LOGNAME=ec2-user
SYSTEMD_EXEC_PID=21290
VSCODE_GIT_ASKPASS_NODE=/usr/lib/code-server/lib/node
HOME=/home/ec2-user
LANG=C.UTF-8
TF_STATE_S3_BUCKET=workshop-stack-tfstatebackendbucketf0fc9a9d-p6wfwkicaj9w
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.webp=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.m4a=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.oga=01;36:*.opus=01;36:*.spx=01;36:*.xspf=01;36:
IDE_PASSWORD=cXA0xxxx
AWS_PAGER=
GIT_ASKPASS=/usr/lib/code-server/lib/vscode/extensions/git/dist/askpass.sh
PROMPT_COMMAND=__vsc_prompt_cmd_original
INVOCATION_ID=cf72fd961ed8498680ab38b39e38e8bb
INSTANCE_IAM_ROLE_NAME=workshop-stack-IdeIdeRoleD654ADD4-Ko2wn5vUb46g
VSCODE_GIT_ASKPASS_EXTRA_ARGS=
TF_VAR_eks_cluster_id=eksworkshop-eksctl
IDE_DOMAIN=dp8uzbhhiz2ue.cloudfront.net
EC2_PRIVATE_IP=192.168.0.94
TERM=xterm-256color
TF_VAR_aws_region=us-west-2
LESSOPEN=||/usr/bin/lesspipe.sh %s
USER=ec2-user
VSCODE_GIT_IPC_HANDLE=/tmp/vscode-git-d1191eaaba.sock
SHLVL=1
EKS_CLUSTER_NAME=eksworkshop-eksctl
ASSETS_BUCKET=ws-event-f4fef182-00c-us-west-2/d2117abb-06fa-4e89-8a6b-8e2b5d6fc697/assets/
S_COLORS=auto
PS1=\[\]\u:\w:$ \[\]
which_declare=declare -f
VSCODE_GIT_ASKPASS_MAIN=/usr/lib/code-server/lib/vscode/extensions/git/dist/askpass-main.js
JOURNAL_STREAM=8:40874
BROWSER=/usr/lib/code-server/lib/vscode/bin/helpers/browser.sh
PATH=/usr/lib/code-server/lib/vscode/bin/remote-cli:/home/ec2-user/.local/bin:/home/ec2-user/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
NODE_EXEC_PATH=/usr/lib/code-server/lib/node
MAIL=/var/spool/mail/ec2-user
IDE_URL=https://dp8uzbhhiz2ue.cloudfront.net
TERM_PROGRAM=vscode
VSCODE_IPC_HOOK_CLI=/tmp/vscode-ipc-80bd6e27-910e-4636-bb3f-5ca8dda6c493.sock
BASH_FUNC_which%%=() {  ( alias;
 eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot "$@"
}
_=/usr/bin/env
ec2-user:~/environment:$ 
ec2-user:~/environment:$ aws s3 ls
2025-03-30 05:08:08 workshop-stack-tfstatebackendbucketf0fc9a9d-p6wfwkicaj9w

ec2-user:~/environment:$ aws s3 ls s3://workshop-stack-tfstatebackendbucketf0fc9a9d-p6wfwkicaj9w
2025-03-30 05:30:27     933683 terraform.tfstate
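The workshop stack keeps its Terraform state in the bucket above. As a quick sanity check, one might stream the state file and inspect it directly (a sketch; assumes read access to the bucket and `jq` installed, both of which hold in this IDE):

```bash
# Stream the state file to stdout and print the Terraform version that wrote it.
# $TF_STATE_S3_BUCKET is already exported in the IDE environment (see `env` above).
aws s3 cp "s3://${TF_STATE_S3_BUCKET}/terraform.tfstate" - | jq -r '.terraform_version'
```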

 

3. Check the cluster

# 3. Check the cluster (cluster, nodegroup, add-ons, etc.)
ec2-user:~/environment:$ eksctl get cluster
NAME                    REGION          EKSCTL CREATED
eksworkshop-eksctl      us-west-2       False
ec2-user:~/environment:$ eksctl get nodegroup --cluster $CLUSTER_NAME
CLUSTER                 NODEGROUP                               STATUS  CREATED                 MIN SIZE        MAX SIZE        DESIRED CAPACITY        INSTANCE TYPE                       IMAGE ID        ASG NAME                                                                        TYPE
eksworkshop-eksctl      blue-mng-2025033005221298520000002c     ACTIVE  2025-03-30T05:22:16Z    1               2               1                       m5.large,m6a.large,m6i.large        AL2_x86_64      eks-blue-mng-2025033005221298520000002c-24caf2c0-0dd1-662b-dc02-1757a879aed8    managed
eksworkshop-eksctl      initial-2025033005221298110000002a      ACTIVE  2025-03-30T05:22:15Z    2               10              2                       m5.large,m6a.large,m6i.large        AL2_x86_64      eks-initial-2025033005221298110000002a-32caf2c0-0dd0-cba3-2a6c-ac3298884e1e     managed
ec2-user:~/environment:$ eksctl get fargateprofile --cluster $CLUSTER_NAME
NAME            SELECTOR_NAMESPACE      SELECTOR_LABELS POD_EXECUTION_ROLE_ARN                                                  SUBNETS                          TAGS                                                                                                                                     STATUS
fp-profile      assets                  <none>          arn:aws:iam::586932131810:role/fp-profile-2025033005220734380000001f    subnet-0d03e1cf968b16978,subnet-0323e0805d46731c3,subnet-0636eb5e3776c0277        Blueprint=eksworkshop-eksctl,GithubRepo=github.com/aws-ia/terraform-aws-eks-blueprints,karpenter.sh/discovery=eksworkshop-eksctl  ACTIVE
ec2-user:~/environment:$ eksctl get addon --cluster $CLUSTER_NAME
2025-03-31 04:04:25 [ℹ]  Kubernetes version "1.25" in use by cluster "eksworkshop-eksctl"
2025-03-31 04:04:25 [ℹ]  getting all addons
2025-03-31 04:04:26 [ℹ]  to see issues for an addon run `eksctl get addon --name <addon-name> --cluster <cluster-name>`
NAME                    VERSION                 STATUS  ISSUES  IAMROLE                                                                                         UPDATE AVAILABLE                                                                                                                                                   CONFIGURATION VALUES
aws-ebs-csi-driver      v1.41.0-eksbuild.1      ACTIVE  0       arn:aws:iam::586932131810:role/eksworkshop-eksctl-ebs-csi-driver-2025033005213840970000001d
coredns                 v1.8.7-eksbuild.10      ACTIVE  0                                                                                                       v1.9.3-eksbuild.22,v1.9.3-eksbuild.21,v1.9.3-eksbuild.19,v1.9.3-eksbuild.17,v1.9.3-eksbuild.15,v1.9.3-eksbuild.11,v1.9.3-eksbuild.10,v1.9.3-eksbuild.9,v1.9.3-eksbuild.7,v1.9.3-eksbuild.6,v1.9.3-eksbuild.5,v1.9.3-eksbuild.3,v1.9.3-eksbuild.2
kube-proxy              v1.25.16-eksbuild.8     ACTIVE  0
vpc-cni                 v1.19.3-eksbuild.1      ACTIVE  0
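The add-on listing shows that coredns has newer builds available for this cluster (the UPDATE AVAILABLE column), which is typically the first thing to align before a control-plane upgrade. A hedged sketch of applying one of the listed versions with eksctl — the exact version string should be taken from the list above or from `aws eks describe-addon-versions` for the target Kubernetes version:

```bash
# Sketch: update the coredns managed add-on to one of the versions eksctl listed.
eksctl update addon \
  --cluster "$CLUSTER_NAME" \
  --name coredns \
  --version v1.9.3-eksbuild.22
```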

4. Check the nodes

ec2-user:~/environment:$ kubectl get node --label-columns=eks.amazonaws.com/capacityType,node.kubernetes.io/lifecycle,karpenter.sh/capacity-type,eks.amazonaws.com/compute-type
NAME                                               STATUS   ROLES    AGE   VERSION                CAPACITYTYPE   LIFECYCLE      CAPACITY-TYPE   COMPUTE-TYPE
fargate-ip-10-0-2-200.us-west-2.compute.internal   Ready    <none>   22h   v1.25.16-eks-2d5f260                                                 fargate
ip-10-0-14-172.us-west-2.compute.internal          Ready    <none>   22h   v1.25.16-eks-59bf375                  self-managed                   
ip-10-0-22-6.us-west-2.compute.internal            Ready    <none>   22h   v1.25.16-eks-59bf375                                 spot            
ip-10-0-27-83.us-west-2.compute.internal           Ready    <none>   22h   v1.25.16-eks-59bf375   ON_DEMAND                                     
ip-10-0-45-121.us-west-2.compute.internal          Ready    <none>   22h   v1.25.16-eks-59bf375                  self-managed                   
ip-10-0-6-49.us-west-2.compute.internal            Ready    <none>   22h   v1.25.16-eks-59bf375   ON_DEMAND                                     
ip-10-0-8-255.us-west-2.compute.internal           Ready    <none>   22h   v1.25.16-eks-59bf375   ON_DEMAND                                     

ec2-user:~/environment:$ kubectl get node -L eks.amazonaws.com/nodegroup,karpenter.sh/nodepool
NAME                                               STATUS   ROLES    AGE   VERSION                NODEGROUP                             NODEPOOL
fargate-ip-10-0-2-200.us-west-2.compute.internal   Ready    <none>   22h   v1.25.16-eks-2d5f260                                         
ip-10-0-14-172.us-west-2.compute.internal          Ready    <none>   22h   v1.25.16-eks-59bf375                                         
ip-10-0-22-6.us-west-2.compute.internal            Ready    <none>   22h   v1.25.16-eks-59bf375                                         default
ip-10-0-27-83.us-west-2.compute.internal           Ready    <none>   22h   v1.25.16-eks-59bf375   initial-2025033005221298110000002a    
ip-10-0-45-121.us-west-2.compute.internal          Ready    <none>   22h   v1.25.16-eks-59bf375                                         
ip-10-0-6-49.us-west-2.compute.internal            Ready    <none>   22h   v1.25.16-eks-59bf375   initial-2025033005221298110000002a    
ip-10-0-8-255.us-west-2.compute.internal           Ready    <none>   22h   v1.25.16-eks-59bf375   blue-mng-2025033005221298520000002c   
ec2-user:~/environment:$ 
ec2-user:~/environment:$ kubectl get nodepools
NAME      NODECLASS
default   default
ec2-user:~/environment:$ kubectl get nodeclaims -o yaml
apiVersion: v1
items:
- apiVersion: karpenter.sh/v1beta1
  kind: NodeClaim
  metadata:
    annotations:
      karpenter.k8s.aws/ec2nodeclass-hash: "5256777658067331158"
      karpenter.k8s.aws/ec2nodeclass-hash-version: v2
      karpenter.k8s.aws/tagged: "true"
      karpenter.sh/nodepool-hash: "12028663807258658692"
      karpenter.sh/nodepool-hash-version: v2
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"karpenter.k8s.aws/v1beta1","kind":"EC2NodeClass","metadata":{"annotations":{},"labels":{"argocd.argoproj.io/instance":"karpenter"},"name":"default"},"spec":{"amiFamily":"AL2","amiSelectorTerms":[{"id":"ami-0ee947a6f4880da75"}],"role":"karpenter-eksworkshop-eksctl","securityGroupSelectorTerms":[{"tags":{"karpenter.sh/discovery":"eksworkshop-eksctl"}}],"subnetSelectorTerms":[{"tags":{"karpenter.sh/discovery":"eksworkshop-eksctl"}}],"tags":{"intent":"apps","managed-by":"karpenter","team":"checkout"}}}
    creationTimestamp: "2025-03-30T05:32:47Z"
    finalizers:
    - karpenter.sh/termination
    generateName: default-
    generation: 1
    labels:
      env: dev
      karpenter.k8s.aws/instance-category: m
      karpenter.k8s.aws/instance-cpu: "2"
      karpenter.k8s.aws/instance-cpu-manufacturer: intel
      karpenter.k8s.aws/instance-ebs-bandwidth: "10000"
      karpenter.k8s.aws/instance-encryption-in-transit-supported: "true"
      karpenter.k8s.aws/instance-family: m6i
      karpenter.k8s.aws/instance-generation: "6"
      karpenter.k8s.aws/instance-hypervisor: nitro
      karpenter.k8s.aws/instance-memory: "8192"
      karpenter.k8s.aws/instance-network-bandwidth: "781"
      karpenter.k8s.aws/instance-size: large
      karpenter.sh/capacity-type: spot
      karpenter.sh/nodepool: default
      kubernetes.io/arch: amd64
      kubernetes.io/os: linux
      node.kubernetes.io/instance-type: m6i.large
      team: checkout
      topology.k8s.aws/zone-id: usw2-az2
      topology.kubernetes.io/region: us-west-2
      topology.kubernetes.io/zone: us-west-2b
    name: default-7fzdk
    ownerReferences:
    - apiVersion: karpenter.sh/v1beta1
      blockOwnerDeletion: true
      kind: NodePool
      name: default
      uid: c0220e73-1b3b-4271-b894-9af40bd399a7
    resourceVersion: "6627"
    uid: ebb25f14-0c7f-46aa-a7cf-ad85c12fdaa9
  spec:
    nodeClassRef:
      name: default
    requirements:
    - key: env
      operator: In
      values:
      - dev
    - key: karpenter.k8s.aws/instance-family
      operator: In
      values:
      - c4
      - c5
      - m5
      - m6a
      - m6i
      - r4
    - key: kubernetes.io/arch
      operator: In
      values:
      - amd64
    - key: node.kubernetes.io/instance-type
      operator: In
      values:
      - c4.2xlarge
      - c4.4xlarge
      - c4.8xlarge
      - c4.large
      - c4.xlarge
      - c5.12xlarge
      - c5.18xlarge
      - c5.24xlarge
      - c5.2xlarge
      - c5.4xlarge
      - c5.9xlarge
      - c5.large
      - c5.metal
      - c5.xlarge
      - m5.12xlarge
      - m5.16xlarge
      - m5.24xlarge
      - m5.2xlarge
      - m5.4xlarge
      - m5.8xlarge
      - m5.large
      - m5.metal
      - m5.xlarge
      - m6a.12xlarge
      - m6a.16xlarge
      - m6a.24xlarge
      - m6a.2xlarge
      - m6a.4xlarge
      - m6a.8xlarge
      - m6a.large
      - m6a.xlarge
      - m6i.12xlarge
      - m6i.16xlarge
      - m6i.24xlarge
      - m6i.2xlarge
      - m6i.4xlarge
      - m6i.8xlarge
      - m6i.large
      - m6i.xlarge
      - r4.16xlarge
      - r4.2xlarge
      - r4.4xlarge
      - r4.8xlarge
      - r4.large
      - r4.xlarge
    - key: karpenter.sh/capacity-type
      operator: In
      values:
      - on-demand
      - spot
    - key: kubernetes.io/os
      operator: In
      values:
      - linux
    - key: team
      operator: In
      values:
      - checkout
    - key: karpenter.sh/nodepool
      operator: In
      values:
      - default
    resources:
      requests:
        cpu: 430m
        memory: 632Mi
        pods: "6"
    taints:
    - effect: NoSchedule
      key: dedicated
      value: CheckoutApp
  status:
    allocatable:
      cpu: 1930m
      ephemeral-storage: 17Gi
      memory: 6903Mi
      pods: "29"
      vpc.amazonaws.com/pod-eni: "9"
    capacity:
      cpu: "2"
      ephemeral-storage: 20Gi
      memory: 7577Mi
      pods: "29"
      vpc.amazonaws.com/pod-eni: "9"
    conditions:
    - lastTransitionTime: "2025-03-30T05:33:56Z"
      message: ""
      reason: Initialized
      status: "True"
      type: Initialized
    - lastTransitionTime: "2025-03-30T05:32:52Z"
      message: ""
      reason: Launched
      status: "True"
      type: Launched
    - lastTransitionTime: "2025-03-30T05:33:56Z"
      message: ""
      reason: Ready
      status: "True"
      type: Ready
    - lastTransitionTime: "2025-03-30T05:33:38Z"
      message: ""
      reason: Registered
      status: "True"
      type: Registered
    imageID: ami-0ee947a6f4880da75
    nodeName: ip-10-0-22-6.us-west-2.compute.internal
    providerID: aws:///us-west-2b/i-0c1b2898f9832a69a
kind: List
metadata:
  resourceVersion: ""
ec2-user:~/environment:$ kubectl get nodeclaims
NAME            TYPE        ZONE         NODE                                      READY   AGE
default-7fzdk   m6i.large   us-west-2b   ip-10-0-22-6.us-west-2.compute.internal   True    22h
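Note that the Karpenter EC2NodeClass embedded in the NodeClaim output above pins a specific AMI via `amiSelectorTerms` (`ami-0ee947a6f4880da75`). During an upgrade, bumping that AMI ID is what drives Karpenter to drift-replace its nodes. In this workshop the EC2NodeClass is managed through eks-gitops-repo, so the real change belongs in Git; the raw patch below (with a placeholder AMI ID) is shown only to illustrate which field changes:

```bash
# Sketch only: Argo CD would revert a direct patch on the next sync,
# so in practice edit the manifest in eks-gitops-repo instead.
kubectl patch ec2nodeclass default --type merge \
  -p '{"spec":{"amiSelectorTerms":[{"id":"ami-NEWVERSION"}]}}'
```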
ec2-user:~/environment:$ kubectl get node --label-columns=node.kubernetes.io/instance-type,kubernetes.io/arch,kubernetes.io/os,topology.kubernetes.io/zone
NAME                                               STATUS   ROLES    AGE   VERSION                INSTANCE-TYPE   ARCH    OS      ZONE
fargate-ip-10-0-2-200.us-west-2.compute.internal   Ready    <none>   22h   v1.25.16-eks-2d5f260                   amd64   linux   us-west-2a
ip-10-0-14-172.us-west-2.compute.internal          Ready    <none>   22h   v1.25.16-eks-59bf375   m5.large        amd64   linux   us-west-2a
ip-10-0-22-6.us-west-2.compute.internal            Ready    <none>   22h   v1.25.16-eks-59bf375   m6i.large       amd64   linux   us-west-2b
ip-10-0-27-83.us-west-2.compute.internal           Ready    <none>   22h   v1.25.16-eks-59bf375   m5.large        amd64   linux   us-west-2b
ip-10-0-45-121.us-west-2.compute.internal          Ready    <none>   22h   v1.25.16-eks-59bf375   m5.large        amd64   linux   us-west-2c
ip-10-0-6-49.us-west-2.compute.internal            Ready    <none>   22h   v1.25.16-eks-59bf375   m5.large        amd64   linux   us-west-2a
ip-10-0-8-255.us-west-2.compute.internal           Ready    <none>   22h   v1.25.16-eks-59bf375   m5.large        amd64   linux   us-west-2a
ec2-user:~/environment:$
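All nodes above report kubelet v1.25.16, so the data plane is skew-consistent before the upgrade. Kubernetes allows kubelets to lag the API server by up to two minor versions (for 1.25 and later), and that window can be checked mechanically. A small sketch with the versions from the listing above hard-coded (live values would come from `kubectl get nodes`):

```bash
# Check kubelet minor-version skew against a target control-plane version.
# Versions are hard-coded from the node listing above for illustration; live values:
#   kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.kubeletVersion}'
TARGET_MINOR=26   # assume we are upgrading the control plane to 1.26
NODE_VERSIONS="v1.25.16-eks-2d5f260 v1.25.16-eks-59bf375"

for v in $NODE_VERSIONS; do
  minor=$(echo "$v" | cut -d. -f2)   # "v1.25.16-..." -> "25"
  skew=$((TARGET_MINOR - minor))
  if [ "$skew" -gt 2 ]; then
    echo "$v: skew of $skew exceeds the supported window"
  else
    echo "$v: skew of $skew is within the supported window"
  fi
done
```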

5. Check node labels

# 5. Check node labels

ec2-user:~/environment:$ kubectl get nodes -o json | jq '.items[] | {name: .metadata.name, labels: .metadata.labels}'
{
  "name": "fargate-ip-10-0-2-200.us-west-2.compute.internal",
  "labels": {
    "beta.kubernetes.io/arch": "amd64",
    "beta.kubernetes.io/os": "linux",
    "eks.amazonaws.com/compute-type": "fargate",
    "failure-domain.beta.kubernetes.io/region": "us-west-2",
    "failure-domain.beta.kubernetes.io/zone": "us-west-2a",
    "kubernetes.io/arch": "amd64",
    "kubernetes.io/hostname": "ip-10-0-2-200.us-west-2.compute.internal",
    "kubernetes.io/os": "linux",
    "topology.kubernetes.io/region": "us-west-2",
    "topology.kubernetes.io/zone": "us-west-2a"
  }
}
{
  "name": "ip-10-0-14-172.us-west-2.compute.internal",
  "labels": {
    "beta.kubernetes.io/arch": "amd64",
    "beta.kubernetes.io/instance-type": "m5.large",
    "beta.kubernetes.io/os": "linux",
    "failure-domain.beta.kubernetes.io/region": "us-west-2",
    "failure-domain.beta.kubernetes.io/zone": "us-west-2a",
    "k8s.io/cloud-provider-aws": "a94967527effcefb5f5829f529c0a1b9",
    "kubernetes.io/arch": "amd64",
    "kubernetes.io/hostname": "ip-10-0-14-172.us-west-2.compute.internal",
    "kubernetes.io/os": "linux",
    "node.kubernetes.io/instance-type": "m5.large",
    "node.kubernetes.io/lifecycle": "self-managed",
    "team": "carts",
    "topology.ebs.csi.aws.com/zone": "us-west-2a",
    "topology.kubernetes.io/region": "us-west-2",
    "topology.kubernetes.io/zone": "us-west-2a"
  }
}
{
  "name": "ip-10-0-22-6.us-west-2.compute.internal",
  "labels": {
    "beta.kubernetes.io/arch": "amd64",
    "beta.kubernetes.io/instance-type": "m6i.large",
    "beta.kubernetes.io/os": "linux",
    "env": "dev",
    "failure-domain.beta.kubernetes.io/region": "us-west-2",
    "failure-domain.beta.kubernetes.io/zone": "us-west-2b",
    "k8s.io/cloud-provider-aws": "a94967527effcefb5f5829f529c0a1b9",
    "karpenter.k8s.aws/instance-category": "m",
    "karpenter.k8s.aws/instance-cpu": "2",
    "karpenter.k8s.aws/instance-cpu-manufacturer": "intel",
    "karpenter.k8s.aws/instance-ebs-bandwidth": "10000",
    "karpenter.k8s.aws/instance-encryption-in-transit-supported": "true",
    "karpenter.k8s.aws/instance-family": "m6i",
    "karpenter.k8s.aws/instance-generation": "6",
    "karpenter.k8s.aws/instance-hypervisor": "nitro",
    "karpenter.k8s.aws/instance-memory": "8192",
    "karpenter.k8s.aws/instance-network-bandwidth": "781",
    "karpenter.k8s.aws/instance-size": "large",
    "karpenter.sh/capacity-type": "spot",
    "karpenter.sh/initialized": "true",
    "karpenter.sh/nodepool": "default",
    "karpenter.sh/registered": "true",
    "kubernetes.io/arch": "amd64",
    "kubernetes.io/hostname": "ip-10-0-22-6.us-west-2.compute.internal",
    "kubernetes.io/os": "linux",
    "node.kubernetes.io/instance-type": "m6i.large",
    "team": "checkout",
    "topology.ebs.csi.aws.com/zone": "us-west-2b",
    "topology.k8s.aws/zone-id": "usw2-az2",
    "topology.kubernetes.io/region": "us-west-2",
    "topology.kubernetes.io/zone": "us-west-2b"
  }
}
{
  "name": "ip-10-0-27-83.us-west-2.compute.internal",
  "labels": {
    "beta.kubernetes.io/arch": "amd64",
    "beta.kubernetes.io/instance-type": "m5.large",
    "beta.kubernetes.io/os": "linux",
    "eks.amazonaws.com/capacityType": "ON_DEMAND",
    "eks.amazonaws.com/nodegroup": "initial-2025033005221298110000002a",
    "eks.amazonaws.com/nodegroup-image": "ami-0078a0f78fafda978",
    "eks.amazonaws.com/sourceLaunchTemplateId": "lt-08c7ce8fe527c1dd8",
    "eks.amazonaws.com/sourceLaunchTemplateVersion": "1",
    "failure-domain.beta.kubernetes.io/region": "us-west-2",
    "failure-domain.beta.kubernetes.io/zone": "us-west-2b",
    "k8s.io/cloud-provider-aws": "a94967527effcefb5f5829f529c0a1b9",
    "kubernetes.io/arch": "amd64",
    "kubernetes.io/hostname": "ip-10-0-27-83.us-west-2.compute.internal",
    "kubernetes.io/os": "linux",
    "node.kubernetes.io/instance-type": "m5.large",
    "topology.ebs.csi.aws.com/zone": "us-west-2b",
    "topology.kubernetes.io/region": "us-west-2",
    "topology.kubernetes.io/zone": "us-west-2b"
  }
}
{
  "name": "ip-10-0-45-121.us-west-2.compute.internal",
  "labels": {
    "beta.kubernetes.io/arch": "amd64",
    "beta.kubernetes.io/instance-type": "m5.large",
    "beta.kubernetes.io/os": "linux",
    "failure-domain.beta.kubernetes.io/region": "us-west-2",
    "failure-domain.beta.kubernetes.io/zone": "us-west-2c",
    "k8s.io/cloud-provider-aws": "a94967527effcefb5f5829f529c0a1b9",
    "kubernetes.io/arch": "amd64",
    "kubernetes.io/hostname": "ip-10-0-45-121.us-west-2.compute.internal",
    "kubernetes.io/os": "linux",
    "node.kubernetes.io/instance-type": "m5.large",
    "node.kubernetes.io/lifecycle": "self-managed",
    "team": "carts",
    "topology.ebs.csi.aws.com/zone": "us-west-2c",
    "topology.kubernetes.io/region": "us-west-2",
    "topology.kubernetes.io/zone": "us-west-2c"
  }
}
{
  "name": "ip-10-0-6-49.us-west-2.compute.internal",
  "labels": {
    "beta.kubernetes.io/arch": "amd64",
    "beta.kubernetes.io/instance-type": "m5.large",
    "beta.kubernetes.io/os": "linux",
    "eks.amazonaws.com/capacityType": "ON_DEMAND",
    "eks.amazonaws.com/nodegroup": "initial-2025033005221298110000002a",
    "eks.amazonaws.com/nodegroup-image": "ami-0078a0f78fafda978",
    "eks.amazonaws.com/sourceLaunchTemplateId": "lt-08c7ce8fe527c1dd8",
    "eks.amazonaws.com/sourceLaunchTemplateVersion": "1",
    "failure-domain.beta.kubernetes.io/region": "us-west-2",
    "failure-domain.beta.kubernetes.io/zone": "us-west-2a",
    "k8s.io/cloud-provider-aws": "a94967527effcefb5f5829f529c0a1b9",
    "kubernetes.io/arch": "amd64",
    "kubernetes.io/hostname": "ip-10-0-6-49.us-west-2.compute.internal",
    "kubernetes.io/os": "linux",
    "node.kubernetes.io/instance-type": "m5.large",
    "topology.ebs.csi.aws.com/zone": "us-west-2a",
    "topology.kubernetes.io/region": "us-west-2",
    "topology.kubernetes.io/zone": "us-west-2a"
  }
}
{
  "name": "ip-10-0-8-255.us-west-2.compute.internal",
  "labels": {
    "beta.kubernetes.io/arch": "amd64",
    "beta.kubernetes.io/instance-type": "m5.large",
    "beta.kubernetes.io/os": "linux",
    "eks.amazonaws.com/capacityType": "ON_DEMAND",
    "eks.amazonaws.com/nodegroup": "blue-mng-2025033005221298520000002c",
    "eks.amazonaws.com/nodegroup-image": "ami-0078a0f78fafda978",
    "eks.amazonaws.com/sourceLaunchTemplateId": "lt-0794c8dc0a94e9cee",
    "eks.amazonaws.com/sourceLaunchTemplateVersion": "1",
    "failure-domain.beta.kubernetes.io/region": "us-west-2",
    "failure-domain.beta.kubernetes.io/zone": "us-west-2a",
    "k8s.io/cloud-provider-aws": "a94967527effcefb5f5829f529c0a1b9",
    "kubernetes.io/arch": "amd64",
    "kubernetes.io/hostname": "ip-10-0-8-255.us-west-2.compute.internal",
    "kubernetes.io/os": "linux",
    "node.kubernetes.io/instance-type": "m5.large",
    "topology.ebs.csi.aws.com/zone": "us-west-2a",
    "topology.kubernetes.io/region": "us-west-2",
    "topology.kubernetes.io/zone": "us-west-2a",
    "type": "OrdersMNG"
  }
}
ec2-user:~/environment:$ kubectl get sts -A
NAMESPACE   NAME                                    READY   AGE
argocd      argo-cd-argocd-application-controller   1/1     23h
catalog     catalog-mysql                           1/1     23h
rabbitmq    rabbitmq                                1/1     23h
ec2-user:~/environment:$ kubectl get sc
NAME            PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
efs             efs.csi.aws.com         Delete          Immediate              true                   23h
gp2             kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   false                  23h
gp3 (default)   ebs.csi.aws.com         Delete          WaitForFirstConsumer   true                   23h
ec2-user:~/environment:$ kubectl describe sc efs
Name:                  efs
IsDefaultClass:        No
Annotations:           <none>
Provisioner:           efs.csi.aws.com
Parameters:            basePath=/dynamic_provisioning,directoryPerms=755,ensureUniqueDirectory=false,fileSystemId=fs-05d87c33956454415,gidRangeEnd=200,gidRangeStart=100,provisioningMode=efs-ap,reuseAccessPoint=false,subPathPattern=${.PVC.namespace}/${.PVC.name}
AllowVolumeExpansion:  True
MountOptions:
  iam
ReclaimPolicy:      Delete
VolumeBindingMode:  Immediate
Events:             <none>
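In the StorageClass listing above, gp2 still uses the legacy in-tree provisioner (`kubernetes.io/aws-ebs`), while gp3 and efs already use CSI drivers. Flagging in-tree provisioners is worth doing before an upgrade, since in-tree volume plugins are being removed from Kubernetes. A sketch with the class list above hard-coded (live values would come from `kubectl get sc`):

```bash
# Flag StorageClasses still on in-tree provisioners (candidates for CSI migration).
# Hard-coded from the `kubectl get sc` output above; live values:
#   kubectl get sc -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.provisioner}{"\n"}{end}'
SC_LIST="efs efs.csi.aws.com
gp2 kubernetes.io/aws-ebs
gp3 ebs.csi.aws.com"

echo "$SC_LIST" | while read -r name prov; do
  case "$prov" in
    kubernetes.io/*) echo "$name: in-tree provisioner ($prov) - plan CSI migration" ;;
    *)               echo "$name: CSI provisioner ($prov)" ;;
  esac
done
```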

6. Check PVs and PVCs

# 6. Check PVs and PVCs
ec2-user:~/environment:$ kubectl get pv,pvc -A
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                         STORAGECLASS   REASON   AGE
persistentvolume/pvc-36d4f16a-c5fa-4642-8101-f2910b160c96   4Gi        RWO            Delete           Bound    checkout/checkout-redis-pvc   gp3                     23h
persistentvolume/pvc-9a9abc6b-33de-45da-8a19-237b894418d7   4Gi        RWO            Delete           Bound    orders/order-mysql-pvc        gp3                     23h
persistentvolume/pvc-9b32bb1c-0268-4bb3-8097-e9fa6616a2e5   4Gi        RWO            Delete           Bound    catalog/catalog-mysql-pvc     gp3                     23h

NAMESPACE   NAME                                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
catalog     persistentvolumeclaim/catalog-mysql-pvc    Bound    pvc-9b32bb1c-0268-4bb3-8097-e9fa6616a2e5   4Gi        RWO            gp3            23h
checkout    persistentvolumeclaim/checkout-redis-pvc   Bound    pvc-36d4f16a-c5fa-4642-8101-f2910b160c96   4Gi        RWO            gp3            23h
orders      persistentvolumeclaim/order-mysql-pvc      Bound    pvc-9a9abc6b-33de-45da-8a19-237b894418d7   4Gi        RWO            gp3            23h

Access the AWS Management Console: check EKS, EC2, VPC, etc.

1. EKS

2. EC2

3. ELB

Sample Application

1. Clone the Git repository, then register the ArgoCD environment variables

ec2-user:~/environment:$ cd ~/environment
ec2-user:~/environment:$ git clone codecommit::${REGION}://eks-gitops-repo
Cloning into 'eks-gitops-repo'...
hint: Using 'master' as the name for the initial branch. This default branch name
hint: is subject to change. To configure the initial branch name to use in all
hint: of your new repositories, which will suppress this warning, call:
hint:
hint:   git config --global init.defaultBranch <name>
hint:
hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and
hint: 'development'. The just-created branch can be renamed via this command:
hint:
hint:   git branch -m <name>
remote: Counting objects: 77, done.
Unpacking objects: 100% (77/77), 14.09 KiB | 577.00 KiB/s, done.
ec2-user:~/environment:$ sudo yum install tree -y
Last metadata expiration check: 1 day, 5:45:50 ago on Sun Mar 30 05:17:40 2025.
Dependencies resolved.
============================================================================================================================================================================================================================
 Package                                        Architecture                                     Version                                                        Repository                                             Size
============================================================================================================================================================================================================================
Installing:
 tree                                           x86_64                                           1.8.0-6.amzn2023.0.2                                           amazonlinux                                            56 k

Transaction Summary
============================================================================================================================================================================================================================
Install  1 Package

Total download size: 56 k
Installed size: 113 k
Downloading Packages:
tree-1.8.0-6.amzn2023.0.2.x86_64.rpm                                                                                                                                                        1.3 MB/s |  56 kB     00:00    
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                                                       624 kB/s |  56 kB     00:00     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                                                                                                                    1/1 
  Installing       : tree-1.8.0-6.amzn2023.0.2.x86_64                                                                                                                                                                   1/1 
  Running scriptlet: tree-1.8.0-6.amzn2023.0.2.x86_64                                                                                                                                                                   1/1 
  Verifying        : tree-1.8.0-6.amzn2023.0.2.x86_64                                                                                                                                                                   1/1 

Installed:
  tree-1.8.0-6.amzn2023.0.2.x86_64                                                                                                                                                                                          

Complete!
ec2-user:~/environment:$ tree eks-gitops-repo/ -L 2
eks-gitops-repo/
├── app-of-apps
│   ├── Chart.yaml
│   ├── templates
│   └── values.yaml
└── apps
    ├── assets
    ├── carts
    ├── catalog
    ├── checkout
    ├── karpenter
    ├── kustomization.yaml
    ├── orders
    ├── other
    ├── rabbitmq
    └── ui

12 directories, 3 files
ec2-user:~/environment:$ export ARGOCD_SERVER=$(kubectl get svc argo-cd-argocd-server -n argocd -o json | jq --raw-output '.status.loadBalancer.ingress[0].hostname')
echo "ArgoCD URL: http://${ARGOCD_SERVER}"
ArgoCD URL: http://k8s-argocd-argocdar-xxx.elb.us-west-2.amazonaws.com
ec2-user:~/environment:$ export ARGOCD_USER="admin"
export ARGOCD_PWD=$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d)
echo "Username: ${ARGOCD_USER}"
echo "Password: ${ARGOCD_PWD}"
Username: admin
Password: HclaDxxx

 

2. Verify the Argo CD CLI

ec2-user:~/environment:$ argocd login ${ARGOCD_SERVER} --username ${ARGOCD_USER} --password ${ARGOCD_PWD} --insecure --skip-test-tls --grpc-web
'admin:login' logged in successfully
Context 'k8s-argocd-argocdar-cxxxxxxlb.us-west-2.amazonaws.com' updated
ec2-user:~/environment:$ argocd repo list
TYPE  NAME  REPO                                                                     INSECURE  OCI    LFS    CREDS  STATUS      MESSAGE  PROJECT
git         https://git-codecommit.us-west-2.amazonaws.com/v1/repos/eks-gitops-repo  false     false  false  true   Successful           
ec2-user:~/environment:$ argocd app list
NAME              CLUSTER                         NAMESPACE  PROJECT  STATUS     HEALTH   SYNCPOLICY  CONDITIONS  REPO                                                                     PATH            TARGET
argocd/apps       https://kubernetes.default.svc             default  Synced     Healthy  Auto        <none>      https://git-codecommit.us-west-2.amazonaws.com/v1/repos/eks-gitops-repo  app-of-apps     
argocd/assets     https://kubernetes.default.svc             default  Synced     Healthy  Auto-Prune  <none>      https://git-codecommit.us-west-2.amazonaws.com/v1/repos/eks-gitops-repo  apps/assets     main
argocd/carts      https://kubernetes.default.svc             default  Synced     Healthy  Auto-Prune  <none>      https://git-codecommit.us-west-2.amazonaws.com/v1/repos/eks-gitops-repo  apps/carts      main
argocd/catalog    https://kubernetes.default.svc             default  Synced     Healthy  Auto-Prune  <none>      https://git-codecommit.us-west-2.amazonaws.com/v1/repos/eks-gitops-repo  apps/catalog    main
argocd/checkout   https://kubernetes.default.svc             default  Synced     Healthy  Auto-Prune  <none>      https://git-codecommit.us-west-2.amazonaws.com/v1/repos/eks-gitops-repo  apps/checkout   main
argocd/karpenter  https://kubernetes.default.svc             default  Synced     Healthy  Auto-Prune  <none>      https://git-codecommit.us-west-2.amazonaws.com/v1/repos/eks-gitops-repo  apps/karpenter  main
argocd/orders     https://kubernetes.default.svc             default  Synced     Healthy  Auto-Prune  <none>      https://git-codecommit.us-west-2.amazonaws.com/v1/repos/eks-gitops-repo  apps/orders     main
argocd/other      https://kubernetes.default.svc             default  Synced     Healthy  Auto-Prune  <none>      https://git-codecommit.us-west-2.amazonaws.com/v1/repos/eks-gitops-repo  apps/other      main
argocd/rabbitmq   https://kubernetes.default.svc             default  Synced     Healthy  Auto-Prune  <none>      https://git-codecommit.us-west-2.amazonaws.com/v1/repos/eks-gitops-repo  apps/rabbitmq   main
argocd/ui         https://kubernetes.default.svc             default  OutOfSync  Healthy  Auto-Prune  <none>      https://git-codecommit.us-west-2.amazonaws.com/v1/repos/eks-gitops-repo  apps/ui         main
ec2-user:~/environment:$ argocd app get apps
Name:               argocd/apps
Project:            default
Server:             https://kubernetes.default.svc
Namespace:          
URL:                https://k8s-argocd-argocdar-xxxx.elb.us-west-2.amazonaws.com/applications/apps
Source:
- Repo:             https://git-codecommit.us-west-2.amazonaws.com/v1/repos/eks-gitops-repo
  Target:           
  Path:             app-of-apps
SyncWindow:         Sync Allowed
Sync Policy:        Automated
Sync Status:        Synced to  (afeeef7)
Health Status:      Healthy

GROUP        KIND         NAMESPACE  NAME       STATUS  HEALTH  HOOK  MESSAGE
argoproj.io  Application  argocd     other      Synced                application.argoproj.io/other created
argoproj.io  Application  argocd     assets     Synced                application.argoproj.io/assets created
argoproj.io  Application  argocd     catalog    Synced                application.argoproj.io/catalog created
argoproj.io  Application  argocd     carts      Synced                application.argoproj.io/carts created
argoproj.io  Application  argocd     karpenter  Synced                application.argoproj.io/karpenter created
argoproj.io  Application  argocd     checkout   Synced                application.argoproj.io/checkout created
argoproj.io  Application  argocd     orders     Synced                application.argoproj.io/orders created
argoproj.io  Application  argocd     rabbitmq   Synced                application.argoproj.io/rabbitmq created
argoproj.io  Application  argocd     ui         Synced                application.argoproj.io/ui created
ec2-user:~/environment:$ argocd app get carts
Name:               argocd/carts
Project:            default
Server:             https://kubernetes.default.svc
Namespace:          
URL:                https://k8s-argocd-argocdar-c32xxxxx.us-west-2.amazonaws.com/applications/carts
Source:
- Repo:             https://git-codecommit.us-west-2.amazonaws.com/v1/repos/eks-gitops-repo
  Target:           main
  Path:             apps/carts
SyncWindow:         Sync Allowed
Sync Policy:        Automated (Prune)
Sync Status:        Synced to main (afeeef7)
Health Status:      Healthy

GROUP  KIND            NAMESPACE  NAME            STATUS  HEALTH   HOOK  MESSAGE
       Namespace                  carts           Synced                 namespace/carts created
       ServiceAccount  carts      carts           Synced                 serviceaccount/carts created
       ConfigMap       carts      carts           Synced                 configmap/carts created
       Service         carts      carts-dynamodb  Synced  Healthy        service/carts-dynamodb created
       Service         carts      carts           Synced  Healthy        service/carts created
apps   Deployment      carts      carts-dynamodb  Synced  Healthy        deployment.apps/carts-dynamodb created
apps   Deployment      carts      carts           Synced  Healthy        deployment.apps/carts created
ec2-user:~/environment:$ argocd app get carts -o yaml
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"argoproj.io/v1alpha1","kind":"Application","metadata":{"annotations":{},"finalizers":["resources-finalizer.argocd.argoproj.io"],"labels":{"argocd.argoproj.io/instance":"apps"},"name":"carts","namespace":"argocd"},"spec":{"destination":{"server":"https://kubernetes.default.svc"},"ignoreDifferences":[{"group":"apps","jsonPointers":["/spec/replicas","/metadata/annotations/deployment.kubernetes.io/revision"],"kind":"Deployment"},{"group":"autoscaling","jsonPointers":["/status"],"kind":"HorizontalPodAutoscaler"}],"project":"default","source":{"path":"apps/carts","repoURL":"https://git-codecommit.us-west-2.amazonaws.com/v1/repos/eks-gitops-repo","targetRevision":"main"},"syncPolicy":{"automated":{"prune":true,"selfHeal":true},"syncOptions":["RespectIgnoreDifferences=true"]}}}
  creationTimestamp: "2025-03-30T05:32:42Z"
  finalizers:
  - resources-finalizer.argocd.argoproj.io
  generation: 607
  labels:
    argocd.argoproj.io/instance: apps
  managedFields:
  - apiVersion: argoproj.io/v1alpha1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:finalizers:
          .: {}
          v:"resources-finalizer.argocd.argoproj.io": {}
        f:labels:
          .: {}
          f:argocd.argoproj.io/instance: {}
      f:spec:
        .: {}
        f:destination:
          .: {}
          f:server: {}
        f:ignoreDifferences: {}
        f:project: {}
        f:source:
          .: {}
          f:path: {}
          f:repoURL: {}
          f:targetRevision: {}
        f:syncPolicy:
          .: {}
          f:automated:
            .: {}
            f:prune: {}
            f:selfHeal: {}
          f:syncOptions: {}
    manager: argocd-controller
    operation: Update
    time: "2025-03-30T05:32:42Z"
  - apiVersion: argoproj.io/v1alpha1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        .: {}
        f:controllerNamespace: {}
        f:health:
          .: {}
          f:status: {}
        f:history: {}
        f:operationState:
          .: {}
          f:finishedAt: {}
          f:message: {}
          f:operation:
            .: {}
            f:initiatedBy:
              .: {}
              f:automated: {}
            f:retry:
              .: {}
              f:limit: {}
            f:sync:
              .: {}
              f:prune: {}
              f:revision: {}
              f:syncOptions: {}
          f:phase: {}
          f:startedAt: {}
          f:syncResult:
            .: {}
            f:resources: {}
            f:revision: {}
            f:source:
              .: {}
              f:path: {}
              f:repoURL: {}
              f:targetRevision: {}
        f:reconciledAt: {}
        f:resources: {}
        f:sourceType: {}
        f:summary:
          .: {}
          f:images: {}
        f:sync:
          .: {}
          f:comparedTo:
            .: {}
            f:destination:
              .: {}
              f:server: {}
            f:ignoreDifferences: {}
            f:source:
              .: {}
              f:path: {}
              f:repoURL: {}
              f:targetRevision: {}
          f:revision: {}
          f:status: {}
    manager: argocd-application-controller
    operation: Update
    time: "2025-03-31T11:12:31Z"
  name: carts
  namespace: argocd
  resourceVersion: "780262"
  uid: 9b60a6be-e0f2-4bce-b7a7-36e638bb2439
spec:
  destination:
    server: https://kubernetes.default.svc
  ignoreDifferences:
  - group: apps
    jsonPointers:
    - /spec/replicas
    - /metadata/annotations/deployment.kubernetes.io/revision
    kind: Deployment
  - group: autoscaling
    jsonPointers:
    - /status
    kind: HorizontalPodAutoscaler
  project: default
  source:
    path: apps/carts
    repoURL: https://git-codecommit.us-west-2.amazonaws.com/v1/repos/eks-gitops-repo
    targetRevision: main
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - RespectIgnoreDifferences=true
status:
  controllerNamespace: argocd
  health:
    status: Healthy
  history:
  - deployStartedAt: "2025-03-30T05:32:43Z"
    deployedAt: "2025-03-30T05:32:44Z"
    id: 0
    initiatedBy: {}
    revision: afeeef700565da3a50f8fa195426ac1a07aa9a02
    source:
      path: apps/carts
      repoURL: https://git-codecommit.us-west-2.amazonaws.com/v1/repos/eks-gitops-repo
      targetRevision: main
  operationState:
    finishedAt: "2025-03-30T05:32:44Z"
    message: successfully synced (all tasks run)
    operation:
      initiatedBy:
        automated: true
      retry:
        limit: 5
      sync:
        prune: true
        revision: afeeef700565da3a50f8fa195426ac1a07aa9a02
        syncOptions:
        - RespectIgnoreDifferences=true
    phase: Succeeded
    startedAt: "2025-03-30T05:32:43Z"
    syncResult:
      resources:
      - group: ""
        hookPhase: Running
        kind: Namespace
        message: namespace/carts created
        name: carts
        namespace: ""
        status: Synced
        syncPhase: Sync
        version: v1
      - group: ""
        hookPhase: Running
        kind: ServiceAccount
        message: serviceaccount/carts created
        name: carts
        namespace: carts
        status: Synced
        syncPhase: Sync
        version: v1
      - group: ""
        hookPhase: Running
        kind: ConfigMap
        message: configmap/carts created
        name: carts
        namespace: carts
        status: Synced
        syncPhase: Sync
        version: v1
      - group: ""
        hookPhase: Running
        kind: Service
        message: service/carts-dynamodb created
        name: carts-dynamodb
        namespace: carts
        status: Synced
        syncPhase: Sync
        version: v1
      - group: ""
        hookPhase: Running
        kind: Service
        message: service/carts created
        name: carts
        namespace: carts
        status: Synced
        syncPhase: Sync
        version: v1
      - group: apps
        hookPhase: Running
        kind: Deployment
        message: deployment.apps/carts-dynamodb created
        name: carts-dynamodb
        namespace: carts
        status: Synced
        syncPhase: Sync
        version: v1
      - group: apps
        hookPhase: Running
        kind: Deployment
        message: deployment.apps/carts created
        name: carts
        namespace: carts
        status: Synced
        syncPhase: Sync
        version: v1
      revision: afeeef700565da3a50f8fa195426ac1a07aa9a02
      source:
        path: apps/carts
        repoURL: https://git-codecommit.us-west-2.amazonaws.com/v1/repos/eks-gitops-repo
        targetRevision: main
  reconciledAt: "2025-03-31T11:12:31Z"
  resources:
  - kind: ConfigMap
    name: carts
    namespace: carts
    status: Synced
    version: v1
  - kind: Namespace
    name: carts
    status: Synced
    version: v1
  - health:
      status: Healthy
    kind: Service
    name: carts
    namespace: carts
    status: Synced
    version: v1
  - health:
      status: Healthy
    kind: Service
    name: carts-dynamodb
    namespace: carts
    status: Synced
    version: v1
  - kind: ServiceAccount
    name: carts
    namespace: carts
    status: Synced
    version: v1
  - group: apps
    health:
      status: Healthy
    kind: Deployment
    name: carts
    namespace: carts
    status: Synced
    version: v1
  - group: apps
    health:
      status: Healthy
    kind: Deployment
    name: carts-dynamodb
    namespace: carts
    status: Synced
    version: v1
  sourceHydrator: {}
  sourceType: Kustomize
  summary:
    images:
    - amazon/dynamodb-local:1.13.1
    - public.ecr.aws/aws-containers/retail-store-sample-cart:0.7.0
  sync:
    comparedTo:
      destination:
        server: https://kubernetes.default.svc
      ignoreDifferences:
      - group: apps
        jsonPointers:
        - /spec/replicas
        - /metadata/annotations/deployment.kubernetes.io/revision
        kind: Deployment
      - group: autoscaling
        jsonPointers:
        - /status
        kind: HorizontalPodAutoscaler
      source:
        path: apps/carts
        repoURL: https://git-codecommit.us-west-2.amazonaws.com/v1/repos/eks-gitops-repo
        targetRevision: main
    revision: afeeef700565da3a50f8fa195426ac1a07aa9a02
    status: Synced
ec2-user:~/environment:$ argocd app get ui -o yaml
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"argoproj.io/v1alpha1","kind":"Application","metadata":{"annotations":{},"finalizers":["resources-finalizer.argocd.argoproj.io"],"labels":{"argocd.argoproj.io/instance":"apps"},"name":"ui","namespace":"argocd"},"spec":{"destination":{"server":"https://kubernetes.default.svc"},"ignoreDifferences":[{"group":"apps","jsonPointers":["/spec/replicas","/metadata/annotations/deployment.kubernetes.io/revision"],"kind":"Deployment"},{"group":"autoscaling","jsonPointers":["/status"],"kind":"HorizontalPodAutoscaler"}],"project":"default","source":{"path":"apps/ui","repoURL":"https://git-codecommit.us-west-2.amazonaws.com/v1/repos/eks-gitops-repo","targetRevision":"main"},"syncPolicy":{"automated":{"prune":true,"selfHeal":true},"syncOptions":["RespectIgnoreDifferences=true"]}}}
  creationTimestamp: "2025-03-30T05:32:42Z"
  finalizers:
  - resources-finalizer.argocd.argoproj.io
  generation: 85687
  labels:
    argocd.argoproj.io/instance: apps
  managedFields:
  - apiVersion: argoproj.io/v1alpha1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:finalizers:
          .: {}
          v:"resources-finalizer.argocd.argoproj.io": {}
        f:labels:
          .: {}
          f:argocd.argoproj.io/instance: {}
      f:spec:
        .: {}
        f:destination:
          .: {}
          f:server: {}
        f:ignoreDifferences: {}
        f:project: {}
        f:source:
          .: {}
          f:path: {}
          f:repoURL: {}
          f:targetRevision: {}
        f:syncPolicy:
          .: {}
          f:automated:
            .: {}
            f:prune: {}
            f:selfHeal: {}
          f:syncOptions: {}
    manager: argocd-controller
    operation: Update
    time: "2025-03-30T05:32:42Z"
  - apiVersion: argoproj.io/v1alpha1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        .: {}
        f:controllerNamespace: {}
        f:health:
          .: {}
          f:status: {}
        f:history: {}
        f:operationState:
          .: {}
          f:finishedAt: {}
          f:message: {}
          f:operation:
            .: {}
            f:initiatedBy:
              .: {}
              f:automated: {}
            f:retry:
              .: {}
              f:limit: {}
            f:sync:
              .: {}
              f:prune: {}
              f:resources: {}
              f:revision: {}
              f:syncOptions: {}
          f:phase: {}
          f:startedAt: {}
          f:syncResult:
            .: {}
            f:resources: {}
            f:revision: {}
            f:source:
              .: {}
              f:path: {}
              f:repoURL: {}
              f:targetRevision: {}
        f:reconciledAt: {}
        f:resources: {}
        f:sourceType: {}
        f:summary:
          .: {}
          f:images: {}
        f:sync:
          .: {}
          f:comparedTo:
            .: {}
            f:destination:
              .: {}
              f:server: {}
            f:ignoreDifferences: {}
            f:source:
              .: {}
              f:path: {}
              f:repoURL: {}
              f:targetRevision: {}
          f:revision: {}
          f:status: {}
    manager: argocd-application-controller
    operation: Update
    time: "2025-03-31T11:14:05Z"
  name: ui
  namespace: argocd
  resourceVersion: "780933"
  uid: f432777b-61c2-4ca8-851a-bb0b92b95fbe
spec:
  destination:
    server: https://kubernetes.default.svc
  ignoreDifferences:
  - group: apps
    jsonPointers:
    - /spec/replicas
    - /metadata/annotations/deployment.kubernetes.io/revision
    kind: Deployment
  - group: autoscaling
    jsonPointers:
    - /status
    kind: HorizontalPodAutoscaler
  project: default
  source:
    path: apps/ui
    repoURL: https://git-codecommit.us-west-2.amazonaws.com/v1/repos/eks-gitops-repo
    targetRevision: main
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - RespectIgnoreDifferences=true
status:
  controllerNamespace: argocd
  health:
    status: Healthy
  history:
  - deployStartedAt: "2025-03-30T05:32:43Z"
    deployedAt: "2025-03-30T05:32:45Z"
    id: 0
    initiatedBy: {}
    revision: afeeef700565da3a50f8fa195426ac1a07aa9a02
    source:
      path: apps/ui
      repoURL: https://git-codecommit.us-west-2.amazonaws.com/v1/repos/eks-gitops-repo
      targetRevision: main
  operationState:
    finishedAt: "2025-03-31T11:14:01Z"
    message: successfully synced (all tasks run)
    operation:
      initiatedBy:
        automated: true
      retry:
        limit: 5
      sync:
        prune: true
        resources:
        - group: autoscaling
          kind: HorizontalPodAutoscaler
          name: ui
        revision: afeeef700565da3a50f8fa195426ac1a07aa9a02
        syncOptions:
        - RespectIgnoreDifferences=true
    phase: Succeeded
    startedAt: "2025-03-31T11:14:01Z"
    syncResult:
      resources:
      - group: autoscaling
        hookPhase: Running
        kind: HorizontalPodAutoscaler
        message: horizontalpodautoscaler.autoscaling/ui configured
        name: ui
        namespace: ui
        status: Synced
        syncPhase: Sync
        version: v2beta2
      revision: afeeef700565da3a50f8fa195426ac1a07aa9a02
      source:
        path: apps/ui
        repoURL: https://git-codecommit.us-west-2.amazonaws.com/v1/repos/eks-gitops-repo
        targetRevision: main
  reconciledAt: "2025-03-31T11:14:01Z"
  resources:
  - kind: ConfigMap
    name: ui
    namespace: ui
    status: Synced
    version: v1
  - kind: Namespace
    name: ui
    status: Synced
    version: v1
  - health:
      status: Healthy
    kind: Service
    name: ui
    namespace: ui
    status: Synced
    version: v1
  - kind: ServiceAccount
    name: ui
    namespace: ui
    status: Synced
    version: v1
  - group: apps
    health:
      status: Healthy
    kind: Deployment
    name: ui
    namespace: ui
    status: Synced
    version: v1
  - group: autoscaling
    health:
      message: recommended size matches current size
      status: Healthy
    kind: HorizontalPodAutoscaler
    name: ui
    namespace: ui
    status: OutOfSync
    version: v2beta2
  sourceHydrator: {}
  sourceType: Kustomize
  summary:
    images:
    - public.ecr.aws/aws-containers/retail-store-sample-ui:0.4.0
  sync:
    comparedTo:
      destination:
        server: https://kubernetes.default.svc
      ignoreDifferences:
      - group: apps
        jsonPointers:
        - /spec/replicas
        - /metadata/annotations/deployment.kubernetes.io/revision
        kind: Deployment
      - group: autoscaling
        jsonPointers:
        - /status
        kind: HorizontalPodAutoscaler
      source:
        path: apps/ui
        repoURL: https://git-codecommit.us-west-2.amazonaws.com/v1/repos/eks-gitops-repo
        targetRevision: main
    revision: afeeef700565da3a50f8fa195426ac1a07aa9a02
    status: OutOfSync
ec2-user:~/environment:$

 

AWS EKS Release Cycles

  • The Kubernetes project is continuously updated with new features, the latest security patches, and bug fixes. If you are new to Kubernetes version semantics: versions follow Semantic Versioning and are usually written as x.y.z,
  • where x is the major version, y the minor version, and z the patch version.
  • A new Kubernetes minor version (y) ships roughly every four months. Every version >= v1.19 receives 12 months of standard support, and the project maintains release branches for the three most recent minor versions at any given time.
  • Amazon Elastic Kubernetes Service (EKS) follows the Kubernetes project release cycle, but provides standard support for four minor versions at a time, for 14 months after a version first becomes available on Amazon EKS, even after upstream Kubernetes stops supporting that version. AWS backports applicable security patches to the Kubernetes versions supported on Amazon EKS.
  • The Amazon EKS Kubernetes release calendar lists the important release and support dates for each Kubernetes version supported on Amazon EKS. If you are wondering why a new EKS release lands a few weeks behind the corresponding Kubernetes release: Amazon thoroughly tests each new Kubernetes version for stability and compatibility with other AWS services and tools before making it available on Amazon EKS. No specific date or SLA is given for how quickly a new version will be supported, but the Amazon EKS team works to shorten the gap between an upstream release and its availability on EKS.

  • [26 months total = 14 + 12] Beyond standard support, Amazon EKS recently launched extended support (see the launch announcement). Every Kubernetes version 1.21 and later is now eligible for extended support on Amazon EKS. Extended support begins automatically as soon as standard support ends and continues for an additional 12 months, bringing total support for each Kubernetes minor version to 26 months. If a cluster is not updated before the extended support window ends, it is automatically upgraded to the oldest version currently in extended support. See Extended support for Kubernetes versions in Amazon EKS for details.
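Given the semantic-versioning scheme described above, the number of one-minor-version upgrade hops between two releases can be computed with plain shell arithmetic. This is a sketch; `1.25` mirrors the workshop cluster's starting version, and `1.30` is a hypothetical target:

```shell
# Count how many one-minor-version upgrades separate two x.y versions.
current="1.25"            # workshop cluster version
target="1.30"             # example target version
cur_minor=${current#*.}   # strip the "major." prefix -> 25
tgt_minor=${target#*.}    # -> 30
echo "upgrade hops needed: $((tgt_minor - cur_minor))"
```

Because in-place EKS upgrades move one minor version at a time, each hop is a separate control-plane plus data-plane upgrade.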

 

  • As of April 1, 2024, clusters running a Kubernetes version in extended support are billed at a total of $0.60 per cluster per hour; standard support pricing is unchanged ($0.10 per cluster per hour).
  • You can set the version support policy for both new and existing clusters with the supportType property. Two options are available:
    • Standard — The EKS cluster can be automatically upgraded when standard support ends. With this setting you incur no extended support charges, but at the end of standard support the cluster is automatically upgraded to the next supported Kubernetes version.
    • Extended — When its Kubernetes version reaches the end of standard support, the EKS cluster enters extended support and extended support charges apply. You can avoid those charges by upgrading the cluster to a Kubernetes version in standard support. Clusters running in extended support can be automatically upgraded when extended support ends.
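The supportType setting can be inspected and changed with the AWS CLI. This is a sketch, not a required workshop step: `eksworkshop-eksctl` is this workshop's cluster name, and the `upgradePolicy`/`--upgrade-policy` options assume a recent AWS CLI that includes the EKS cluster upgrade policy feature:

```shell
# Show the cluster's current version support policy.
aws eks describe-cluster --name eksworkshop-eksctl \
  --query 'cluster.upgradePolicy.supportType' --output text

# Opt the cluster into extended support (or back to STANDARD).
aws eks update-cluster-config --name eksworkshop-eksctl \
  --upgrade-policy supportType=EXTENDED
```

Both commands require credentials for the account that owns the cluster.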

 Why should you upgrade? 

  • A Kubernetes version spans both the control plane and the data plane.
  • While AWS manages and upgrades the control plane, you (the cluster owner/customer) are responsible for initiating upgrades of both the cluster control plane and the data plane.
  • When you initiate a cluster upgrade, AWS upgrades the control plane on your behalf; initiating the data plane upgrade remains your responsibility.
  • That includes worker nodes provisioned through self-managed node groups, managed node groups, Fargate, and other add-ons.
  • If worker nodes are provisioned through the Karpenter controller, you can use drift detection or the disruption controller (spec.expireAfter) for automatic node recycling and upgrades.
  • You must also plan for application availability during a cluster upgrade; proper PodDisruptionBudgets and topologySpreadConstraints are essential to keep workloads available while the data plane is being upgraded.
  • Beyond Kubernetes minor versions, Amazon EKS periodically releases new platform versions to enable new Kubernetes control plane settings and to provide security fixes.
  • Each Amazon EKS minor version can have one or more associated platform versions.
  • When a new Kubernetes minor version such as 1.30 becomes available on Amazon EKS, the initial platform version for that minor version starts at eks.1, and the platform version increments (eks.n+1) with each new release.
  • The table below helps visualize this; see Amazon EKS platform versions for more detail.
  • The good news is that Amazon EKS automatically upgrades all existing clusters to the latest EKS platform version for their Kubernetes minor version, with no explicit action required on your side.
  • From a Kubernetes update perspective, then, staying current on a supported minor version is critical for a secure and efficient EKS environment, and it reflects the Amazon EKS shared responsibility model. It ensures your cluster is running the latest security patches and bug fixes, reducing the risk of vulnerabilities, and it improves performance, scalability, and reliability, serving your applications and customers better.
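As a concrete illustration of the PodDisruptionBudget mentioned above, a minimal manifest for this repo's ui application could look like the following. This is a sketch: the namespace matches the ui app in this workshop, but the `app.kubernetes.io/name` label is an assumption, so match it to your Deployment's actual pod labels:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: ui
  namespace: ui
spec:
  minAvailable: 1                    # keep at least one ui pod running during node drains
  selector:
    matchLabels:
      app.kubernetes.io/name: ui     # assumed label; must match the Deployment's pod labels
```

A topologySpreadConstraints stanza under the Deployment's pod template (for example, maxSkew: 1 on topology.kubernetes.io/zone) complements this by spreading replicas across zones, so rolling one node group cannot evict every replica at once.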

EKS Upgrades

 

Goals:

As you work through this workshop, you will learn about different upgrade strategies such as in-place and blue/green upgrades.

You will also learn the criteria for choosing an upgrade strategy and the details of each step.

Below is the high-level workflow of an in-place cluster upgrade; we will look at it in detail in the following modules.

To upgrade an Amazon EKS cluster in place, take the following actions:

  1. Review the Kubernetes and EKS release notes before upgrading.
  2. Take a backup of the cluster (optional).
  3. Upgrade the cluster control plane using the AWS console or CLI.
  4. Review add-on compatibility.
  5. Upgrade the cluster data plane.
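Step 3, upgrading the control plane, can be driven from the CLI. This is a sketch: the cluster name comes from this workshop's environment, 1.26 is an example target (EKS in-place upgrades move one minor version at a time), and `<update-id>` is a placeholder for the id returned by the first command:

```shell
# Kick off a control-plane upgrade (one minor version per upgrade, e.g. 1.25 -> 1.26).
aws eks update-cluster-version \
  --name eksworkshop-eksctl \
  --kubernetes-version 1.26

# Track the update until its status becomes Successful.
aws eks describe-update --name eksworkshop-eksctl --update-id <update-id>
```

eksctl users can achieve the same with `eksctl upgrade cluster --name eksworkshop-eksctl --approve`.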

Preparation (Preparing for Cluster Upgrades)

  • Amazon EKS requires up to five available IP addresses in the subnets you specified when you created the cluster.
  • The cluster's AWS Identity and Access Management (IAM) role and security groups must exist in your AWS account.
  • If you enable secrets encryption, the cluster IAM role must have permission to use the AWS Key Management Service (AWS KMS) key.
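The free-IP prerequisite can be checked before upgrading. This is a sketch: the commented query shows the AWS CLI call with placeholder subnet IDs, and the runnable part below is plain bash on an example value you would read from that query:

```shell
# List free IPs per cluster subnet (substitute your own subnet IDs):
# aws ec2 describe-subnets --subnet-ids subnet-xxxx subnet-yyyy \
#   --query 'Subnets[].[SubnetId,AvailableIpAddressCount]' --output table

free_ips=7   # example value read from the query above
if [ "$free_ips" -ge 5 ]; then
  echo "subnet OK for upgrade"
else
  echo "need more free IPs"
fi
```

Run the check once per subnet; every subnet the cluster uses must clear the threshold.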
Upgrade Workflow

 

  • Identify major updates for the Amazon EKS and Kubernetes versions (Identify)
  • Understand the deprecation policy and refactor manifests accordingly (Understand, Refactor)
  • Update the EKS control plane and data plane using the right upgrade strategy (Update)
  • Finally, upgrade downstream add-on dependencies
+--------------------------------------+
|        Start Upgrade Process         |
+--------------------------------------+
                    |
                    |
+--------------------------------------+
| Identify Major Updates for Amazon    |
|      EKS and Kubernetes Versions     |
+--------------------------------------+
                    |
                    |
+--------------------------------------+
| Understand Deprecation Policy and    |
| Refactor Manifests Accordingly       |
+--------------------------------------+
                    |
                    |
+--------------------------------------+
| Update EKS Control Plane and Data    |
| Plane Using Right Upgrade Strategy   |
+--------------------------------------+
                    |
                    |
+--------------------------------------+
| Upgrade Downstream Add-on            |
|           Dependencies               |
+--------------------------------------+
                    |
                    |
+--------------------------------------+
|        Upgrade Completed             |
+--------------------------------------+