Launching an EKS cluster easily with CloudFormation

2021-07-30


Data_Engineering_TIL(20210730)

[Study Materials]

These are hands-on notes from working through the book "클라우드 네이티브를 위한 쿠버네티스 실전 프로젝트" (Kubernetes practice projects for cloud native).

** Dongyang Books; written by Koji Aizawa & Kazuhiko Sato, translated by Park Sang-wook

Reference URL: https://github.com/dybooksIT/k8s-aws-book

[Study Notes]

  • Final goal of the hands-on series

The final implementation goal of the service

(image: architecture3)

The final goal from an architecture perspective

(image: architecture2)

  • Today's hands-on goal

Today's exercise brings up an EKS cluster following the architecture flow below.

(image: architecture)

  • Hands-on details

STEP 1) Upload the following two YAML files to S3.

01_base_resources_cfn.yaml : a CloudFormation template that sets up the base network where the EKS cluster will be placed

AWSTemplateFormatVersion: '2010-09-09'

Parameters:
  ClusterBaseName:
    Type: String
    Default: eks-work

  TargetRegion:
    Type: String
    Default: ap-northeast-2

  AvailabilityZone1:
    Type: String
    Default: ap-northeast-2a

  AvailabilityZone2:
    Type: String
    Default: ap-northeast-2b

  AvailabilityZone3:
    Type: String
    Default: ap-northeast-2c

  VpcBlock:
    Type: String
    Default: 192.168.0.0/16

  WorkerSubnet1Block:
    Type: String
    Default: 192.168.0.0/24

  WorkerSubnet2Block:
    Type: String
    Default: 192.168.1.0/24

  WorkerSubnet3Block:
    Type: String
    Default: 192.168.2.0/24

Resources:
  EksWorkVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: !Ref VpcBlock
      EnableDnsSupport: true
      EnableDnsHostnames: true
      Tags:
        - Key: Name
          Value: !Sub ${ClusterBaseName}-VPC

  WorkerSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      AvailabilityZone: !Ref AvailabilityZone1
      CidrBlock: !Ref WorkerSubnet1Block
      VpcId: !Ref EksWorkVPC
      MapPublicIpOnLaunch: true
      Tags:
        - Key: Name
          Value: !Sub ${ClusterBaseName}-WorkerSubnet1

  WorkerSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      AvailabilityZone: !Ref AvailabilityZone2
      CidrBlock: !Ref WorkerSubnet2Block
      VpcId: !Ref EksWorkVPC
      MapPublicIpOnLaunch: true
      Tags:
        - Key: Name
          Value: !Sub ${ClusterBaseName}-WorkerSubnet2

  WorkerSubnet3:
    Type: AWS::EC2::Subnet
    Properties:
      AvailabilityZone: !Ref AvailabilityZone3
      CidrBlock: !Ref WorkerSubnet3Block
      VpcId: !Ref EksWorkVPC
      MapPublicIpOnLaunch: true
      Tags:
        - Key: Name
          Value: !Sub ${ClusterBaseName}-WorkerSubnet3

  InternetGateway:
    Type: AWS::EC2::InternetGateway

  VPCGatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      InternetGatewayId: !Ref InternetGateway
      VpcId: !Ref EksWorkVPC

  WorkerSubnetRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref EksWorkVPC
      Tags:
        - Key: Name
          Value: !Sub ${ClusterBaseName}-WorkerSubnetRouteTable

  WorkerSubnetRoute:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref WorkerSubnetRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway

  WorkerSubnet1RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref WorkerSubnet1
      RouteTableId: !Ref WorkerSubnetRouteTable

  WorkerSubnet2RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref WorkerSubnet2
      RouteTableId: !Ref WorkerSubnetRouteTable

  WorkerSubnet3RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref WorkerSubnet3
      RouteTableId: !Ref WorkerSubnetRouteTable

Outputs:
  VPC:
    Value: !Ref EksWorkVPC

  WorkerSubnets:
    Value: !Join
      - ","
      - [!Ref WorkerSubnet1, !Ref WorkerSubnet2, !Ref WorkerSubnet3]

  RouteTable:
    Value: !Ref WorkerSubnetRouteTable
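
For reference, the template above can be checked for syntax errors locally before uploading — an optional sanity check, not part of the book's steps:

# Optional: validate the CloudFormation template before uploading it to S3.
aws cloudformation validate-template --template-body file://01_base_resources_cfn.yaml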

02_nginx_k8s.yaml

A Kubernetes manifest that creates an nginx Pod on the cluster

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx-app
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80
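
The STEP 1 upload can be done with the AWS CLI, as in the sketch below; the bucket name test-bucket is only an example (it matches the bucket that appears later in STEP 5) and should be replaced with your own bucket.

# Upload both YAML files to S3 (replace test-bucket with your own bucket name).
aws s3 cp 01_base_resources_cfn.yaml s3://test-bucket/
aws s3 cp 02_nginx_k8s.yaml s3://test-bucket/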

STEP 2) Using the 01_base_resources_cfn.yaml template, create a CloudFormation stack (the network infrastructure for the EKS cluster) as shown in the screenshots below.

First, create the stack in CloudFormation as shown below.

(image: cloudformation_install)

The screens after stack creation completes look like this.

(image: cloudformation_status1)

(image: cloudformation_status)

Once CloudFormation has finished creating the stack, go to the 'Outputs' tab and copy the three subnet IDs listed under WorkerSubnets.

After the stack has been created, check that the VPC infrastructure was actually provisioned.

(image: vpc)
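
Instead of the console, the stack can also be created and the WorkerSubnets output read from the CLI — a sketch under the assumption that the template was uploaded to test-bucket and that the stack is named eks-work-base:

# Create the stack from the CLI (stack name and template URL are examples).
aws cloudformation create-stack \
  --stack-name eks-work-base \
  --region ap-northeast-2 \
  --template-url https://test-bucket.s3.ap-northeast-2.amazonaws.com/01_base_resources_cfn.yaml

# Wait until creation finishes, then print the WorkerSubnets output value to copy.
aws cloudformation wait stack-create-complete --stack-name eks-work-base --region ap-northeast-2
aws cloudformation describe-stacks --stack-name eks-work-base --region ap-northeast-2 \
  --query "Stacks[0].Outputs[?OutputKey=='WorkerSubnets'].OutputValue" --output text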

STEP 3) Create an EC2 instance from which to run the Kubernetes commands, and set up the required environment as shown below.


       __|  __|_  )
       _|  (     /   Amazon Linux 2 AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-2/

# set up AWS credentials
[ec2-user@ip-10-0-1-49 ~]$ aws configure
AWS Access Key ID [None]: xxxxxxxxxxxxxxxxxxxxx
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxx
Default region name [None]: ap-northeast-2
Default output format [None]: json

# install eksctl
# reference: https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/eksctl.html
[ec2-user@ip-10-0-1-49 ~]$ curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
[ec2-user@ip-10-0-1-49 ~]$ sudo mv /tmp/eksctl /usr/local/bin
[ec2-user@ip-10-0-1-49 ~]$ eksctl version
0.58.0
            
# install kubectl
# reference: https://kubernetes.io/ko/docs/tasks/tools/install-kubectl-linux/
[ec2-user@ip-10-0-1-49 ~]$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   154  100   154    0     0    603      0 --:--:-- --:--:-- --:--:--   603
100 44.2M  100 44.2M    0     0  11.3M      0  0:00:03  0:00:03 --:--:-- 14.8M
[ec2-user@ip-10-0-1-49 ~]$ sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
[ec2-user@ip-10-0-1-49 ~]$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"xxxxxxxxxxx", GitTreeState:"clean", BuildDate:"2021-07-15T21:04:39Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
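
Before moving on, the credentials configured above can be verified with a quick call — an optional check, not part of the book's steps:

# Optional: confirm the configured AWS credentials actually work.
aws sts get-caller-identity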

STEP 4) Create the EKS cluster

Create the cluster by running the command below.

# Pass the subnet IDs copied from the CloudFormation Outputs tab to --vpc-public-subnets as shown below.
[ec2-user@ip-10-0-1-49 ~]$ eksctl create cluster --vpc-public-subnets subnet-xxxxxx,subnet-yyyyyy,subnet-zzzzzz --name eks-work-cluster --region ap-northeast-2 --version 1.19 --nodegroup-name eks-work-nodegroup --node-type t2.small --nodes 2 --nodes-min 2 --nodes-max 5
2021-07-30 13:02:56 [ℹ]  eksctl version 0.58.0
2021-07-30 13:02:56 [ℹ]  using region ap-northeast-2
2021-07-30 13:02:56 [✔]  using existing VPC (vpc-xxxxxxxxx) and subnets (private:map[] public:map[ap-northeast-2a:{subnet-yyyyyyyyyy ap-northeast-2a 192.168.0.0/24} ap-northeast-2b:{subnet-zzzzzzzzzz ap-northeast-2b 192.168.1.0/24} ap-northeast-2c:{subnet-zzzzzzzzz ap-northeast-2c 192.168.2.0/24}])
2021-07-30 13:02:56 [!]  custom VPC/subnets will be used; if resulting cluster doesn't function as expected, make sure to review the configuration of VPC/subnets
2021-07-30 13:02:56 [ℹ]  nodegroup "eks-work-nodegroup" will use "" [AmazonLinux2/1.19]
2021-07-30 13:02:56 [ℹ]  using Kubernetes version 1.19
2021-07-30 13:02:56 [ℹ]  creating EKS cluster "eks-work-cluster" in "ap-northeast-2" region with managed nodes
2021-07-30 13:02:56 [ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2021-07-30 13:02:56 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=ap-northeast-2 --cluster=eks-work-cluster'
2021-07-30 13:02:56 [ℹ]  CloudWatch logging will not be enabled for cluster "eks-work-cluster" in "ap-northeast-2"
2021-07-30 13:02:56 [ℹ]  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=ap-northeast-2 --cluster=eks-work-cluster'
2021-07-30 13:02:56 [ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "eks-work-cluster" in "ap-northeast-2"
2021-07-30 13:02:56 [ℹ]  2 sequential tasks: { create cluster control plane "eks-work-cluster", 3 sequential sub-tasks: { wait for control plane to become ready, 1 task: { create addons }, create managed nodegroup "eks-work-nodegroup" } }
2021-07-30 13:02:56 [ℹ]  building cluster stack "eksctl-eks-work-cluster-cluster"
2021-07-30 13:02:56 [ℹ]  deploying stack "eksctl-eks-work-cluster-cluster"
2021-07-30 13:03:26 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-cluster"
2021-07-30 13:03:56 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-cluster"
2021-07-30 13:04:57 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-cluster"
2021-07-30 13:05:57 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-cluster"
2021-07-30 13:06:57 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-cluster"
2021-07-30 13:07:57 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-cluster"
2021-07-30 13:08:57 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-cluster"
2021-07-30 13:09:57 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-cluster"
2021-07-30 13:10:57 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-cluster"
2021-07-30 13:11:57 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-cluster"
2021-07-30 13:12:57 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-cluster"
2021-07-30 13:16:58 [ℹ]  building managed nodegroup stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-07-30 13:16:58 [ℹ]  deploying stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-07-30 13:16:58 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-07-30 13:17:15 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-07-30 13:17:32 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-07-30 13:17:51 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-07-30 13:18:08 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-07-30 13:18:28 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-07-30 13:18:47 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-07-30 13:19:06 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-07-30 13:19:23 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-07-30 13:19:41 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-07-30 13:19:57 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-07-30 13:20:13 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-07-30 13:20:32 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-07-30 13:20:48 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-07-30 13:21:04 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-07-30 13:21:21 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-07-30 13:21:21 [ℹ]  waiting for the control plane availability...
2021-07-30 13:21:21 [✔]  saved kubeconfig as "/home/ec2-user/.kube/config"
2021-07-30 13:21:21 [ℹ]  no tasks
2021-07-30 13:21:21 [✔]  all EKS cluster resources for "eks-work-cluster" have been created
2021-07-30 13:21:21 [ℹ]  nodegroup "eks-work-nodegroup" has 2 node(s)
2021-07-30 13:21:21 [ℹ]  node "ip-192-168-0-113.ap-northeast-2.compute.internal" is ready
2021-07-30 13:21:21 [ℹ]  node "ip-192-168-2-238.ap-northeast-2.compute.internal" is ready
2021-07-30 13:21:21 [ℹ]  waiting for at least 2 node(s) to become ready in "eks-work-nodegroup"
2021-07-30 13:21:21 [ℹ]  nodegroup "eks-work-nodegroup" has 2 node(s)
2021-07-30 13:21:21 [ℹ]  node "ip-192-168-0-113.ap-northeast-2.compute.internal" is ready
2021-07-30 13:21:21 [ℹ]  node "ip-192-168-2-238.ap-northeast-2.compute.internal" is ready
2021-07-30 13:23:22 [✔]  EKS cluster "eks-work-cluster" in "ap-northeast-2" region is ready
        
# Check the list of contexts.
# A context can be thought of as a set of connection settings for a cluster.
[ec2-user@ip-10-0-1-49 ~]$ kubectl config get-contexts
CURRENT   NAME                                                   CLUSTER                                     AUTHINFO                                               NAMESPACE
*         minsu.park@eks-work-cluster.ap-northeast-2.eksctl.io   eks-work-cluster.ap-northeast-2.eksctl.io   minsu.park@eks-work-cluster.ap-northeast-2.eksctl.io

# Check node status to confirm that kubectl can reach the EKS cluster
[ec2-user@ip-10-0-1-49 ~]$ kubectl get nodes
NAME                                               STATUS   ROLES    AGE   VERSION
ip-192-168-0-113.ap-northeast-2.compute.internal   Ready    <none>   17m   v1.19.6-eks-49a6c0
ip-192-168-2-238.ap-northeast-2.compute.internal   Ready    <none>   17m   v1.19.6-eks-49a6c0

When the cluster is created with the eksctl command above, two EKS-related CloudFormation stacks (one for the cluster itself and one for the managed nodegroup) are also created, as shown below.

(image: eks)
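
As a side note, if kubectl access ever needs to be set up again (for example from a different machine), the kubeconfig entry can also be generated with the AWS CLI instead of relying on the file eksctl saved — a sketch, not part of the book's steps:

# Regenerate the kubeconfig entry for the cluster and check connectivity.
aws eks update-kubeconfig --region ap-northeast-2 --name eks-work-cluster
kubectl config current-context
kubectl get nodes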

STEP 5) Create the nginx Pod

[ec2-user@ip-10-0-1-49 ~]$ aws s3 cp s3://test-bucket/02_nginx_k8s.yaml .
download: s3://test-bucket/02_nginx_k8s.yaml to ./02_nginx_k8s.yaml

[ec2-user@ip-10-0-1-49 ~]$ kubectl apply -f 02_nginx_k8s.yaml
pod/nginx-pod created

[ec2-user@ip-10-0-1-49 ~]$ cat 02_nginx_k8s.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx-app
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80
        
[ec2-user@ip-10-0-1-49 ~]$ kubectl get pods
NAME        READY   STATUS    RESTARTS   AGE
nginx-pod   1/1     Running   0          14s

# Set up port forwarding so that requests to port 8080 are forwarded to port 80 inside the Pod
[ec2-user@ip-10-0-1-49 ~]$ kubectl port-forward nginx-pod 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Handling connection for 8080

^C[ec2-user@ip-10-0-1-49 ~]$ kubectl delete pod nginx-pod
pod "nginx-pod" deleted

# Before running kubectl delete pod nginx-pod, open another terminal and confirm that nginx is reachable, as shown below.
[ec2-user@ip-10-0-1-49 ~]$ curl localhost:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
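
As an extra check (not part of the book's steps), the curl request above should also show up in the nginx access log of the Pod:

# The official nginx image writes its access log to stdout, so kubectl logs shows the request.
kubectl logs nginx-pod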

STEP 6) When the hands-on exercise is finished, delete the CloudFormation stacks one by one, in the reverse order of their creation.
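
A sketch of the cleanup, assuming the base network stack from STEP 2 was named eks-work-base:

# Delete the eksctl-managed resources first (this removes both eksctl CloudFormation stacks),
# then delete the base network stack created in STEP 2.
eksctl delete cluster --name eks-work-cluster --region ap-northeast-2
aws cloudformation delete-stack --stack-name eks-work-base --region ap-northeast-2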