Kubernetes Terminology - What Is a kubeconfig File?

2021-09-05

Data_Engineering_TIL(20210904)

[Study Materials]

These are notes from working through the exercises in the book “클라우드 네이티브를 위한 쿠버네티스 실전 프로젝트” (Kubernetes Practice Projects for Cloud Native).

** Dongyang Books (동양북스); written by Koji Aizawa & Kazuhiko Sato, translated by Sang-wook Park

Reference URL: https://github.com/dybooksIT/k8s-aws-book

[What I Learned]

eksctl automatically updates the kubeconfig file while it builds an EKS cluster. The kubeconfig file is the configuration file used by kubectl, the Kubernetes client; it stores the connection information for the target Kubernetes cluster, such as the control plane URL, credentials, and the Kubernetes namespace to use.
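
For reference, kubectl decides which kubeconfig file to read as follows (a minimal sketch; the path shown is the standard default, not something specific to this exercise):

# kubectl reads ~/.kube/config by default; the KUBECONFIG environment
# variable can point it at another file (or a colon-separated list of
# files, which kubectl merges into one effective configuration).
export KUBECONFIG=$HOME/.kube/config

# Show the configuration for the current context only (secrets are redacted).
kubectl config view --minify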

The credentials for connecting to an EKS cluster can be obtained with an AWS CLI command, and eksctl embeds the settings needed to call the AWS CLI for authentication into the kubeconfig file. EKS authentication can be configured with either the aws CLI or the eksctl command. The AWS CLI itself also provides a way to point at an EKS cluster and update the kubeconfig file, as sketched below.
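
The AWS CLI route looks roughly like this (a sketch using the cluster name and region from this exercise; adjust them for your environment):

# Write/merge the EKS cluster's connection info into the kubeconfig file.
aws eks update-kubeconfig --name eks-work-cluster --region ap-northeast-2

# eksctl can (re)generate the same kubeconfig entry after cluster creation.
eksctl utils write-kubeconfig --cluster eks-work-cluster --region ap-northeast-2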

[Hands-on]

I created a Kubernetes cluster as shown below and then checked what the resulting kubeconfig looks like.

[ec2-user@ip-10-10-1-206 ~]$ eksctl create cluster --vpc-public-subnets subnet-xxxxxxxx,subnet-tttttttttt,subnet-zzzzzzzzzzzzz --name eks-work-cluster --region ap-northeast-2 --version 1.19 --nodegroup-name eks-work-nodegroup --node-type t3.small --nodes 2 --nodes-min 2 --nodes-max 5
2021-09-04 03:48:27 [ℹ]  eksctl version 0.64.0
2021-09-04 03:48:27 [ℹ]  using region ap-northeast-2
2021-09-04 03:48:27 [✔]  using existing VPC (vpc-xxxxxxxx) and subnets (private:map[] public:map[ap-northeast-2a:{subnet-xxxxxxxx ap-northeast-2a 192.168.0.0/24} ap-northeast-2b:{subnet-xxxxxxxx ap-northeast-2b 192.168.1.0/24} ap-northeast-2c:{subnet-xxxxxxxx ap-northeast-2c 192.168.2.0/24}])
2021-09-04 03:48:27 [!]  custom VPC/subnets will be used; if resulting cluster doesn't function as expected, make sure to review the configuration of VPC/subnets
2021-09-04 03:48:27 [ℹ]  nodegroup "eks-work-nodegroup" will use "" [AmazonLinux2/1.19]
2021-09-04 03:48:27 [ℹ]  using Kubernetes version 1.19
2021-09-04 03:48:27 [ℹ]  creating EKS cluster "eks-work-cluster" in "ap-northeast-2" region with managed nodes
2021-09-04 03:48:27 [ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2021-09-04 03:48:27 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=ap-northeast-2 --cluster=eks-work-cluster'
2021-09-04 03:48:27 [ℹ]  CloudWatch logging will not be enabled for cluster "eks-work-cluster" in "ap-northeast-2"
2021-09-04 03:48:27 [ℹ]  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=ap-northeast-2 --cluster=eks-work-cluster'
2021-09-04 03:48:27 [ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "eks-work-cluster" in "ap-northeast-2"
2021-09-04 03:48:27 [ℹ]  2 sequential tasks: { create cluster control plane "eks-work-cluster", 3 sequential sub-tasks: { wait for control plane to become ready, 1 task: { create addons }, create managed nodegroup "eks-work-nodegroup" } }
2021-09-04 03:48:27 [ℹ]  building cluster stack "eksctl-eks-work-cluster-cluster"
2021-09-04 03:48:28 [ℹ]  deploying stack "eksctl-eks-work-cluster-cluster"
2021-09-04 03:48:58 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-cluster"
2021-09-04 03:49:28 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-cluster"
2021-09-04 03:50:28 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-cluster"
2021-09-04 03:51:28 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-cluster"
2021-09-04 03:52:28 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-cluster"
2021-09-04 03:53:28 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-cluster"
2021-09-04 03:54:28 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-cluster"
2021-09-04 03:55:28 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-cluster"
2021-09-04 03:56:28 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-cluster"
2021-09-04 03:57:28 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-cluster"
2021-09-04 03:58:28 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-cluster"
2021-09-04 04:02:29 [ℹ]  building managed nodegroup stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-09-04 04:02:30 [ℹ]  deploying stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-09-04 04:02:30 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-09-04 04:02:46 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-09-04 04:03:03 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-09-04 04:03:23 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-09-04 04:03:39 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-09-04 04:03:59 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-09-04 04:04:19 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-09-04 04:04:37 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-09-04 04:04:54 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-09-04 04:05:12 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-09-04 04:05:29 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-09-04 04:05:45 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-09-04 04:06:03 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-09-04 04:06:19 [ℹ]  waiting for CloudFormation stack "eksctl-eks-work-cluster-nodegroup-eks-work-nodegroup"
2021-09-04 04:06:19 [ℹ]  waiting for the control plane availability...
2021-09-04 04:06:19 [✔]  saved kubeconfig as "/home/ec2-user/.kube/config"
2021-09-04 04:06:19 [ℹ]  no tasks
2021-09-04 04:06:19 [✔]  all EKS cluster resources for "eks-work-cluster" have been created
2021-09-04 04:06:19 [ℹ]  nodegroup "eks-work-nodegroup" has 2 node(s)
2021-09-04 04:06:19 [ℹ]  node "ip-192-168-1-114.ap-northeast-2.compute.internal" is ready
2021-09-04 04:06:19 [ℹ]  node "ip-192-168-2-192.ap-northeast-2.compute.internal" is ready
2021-09-04 04:06:19 [ℹ]  waiting for at least 2 node(s) to become ready in "eks-work-nodegroup"
2021-09-04 04:06:19 [ℹ]  nodegroup "eks-work-nodegroup" has 2 node(s)
2021-09-04 04:06:19 [ℹ]  node "ip-192-168-1-114.ap-northeast-2.compute.internal" is ready
2021-09-04 04:06:19 [ℹ]  node "ip-192-168-2-192.ap-northeast-2.compute.internal" is ready
2021-09-04 04:08:21 [ℹ]  kubectl command should work with "/home/ec2-user/.kube/config", try 'kubectl get nodes'
2021-09-04 04:08:21 [✔]  EKS cluster "eks-work-cluster" in "ap-northeast-2" region is ready

[ec2-user@ip-10-10-1-206 ~]$ kubectl get nodes
NAME                                               STATUS   ROLES    AGE     VERSION
ip-192-168-1-xxx.ap-northeast-2.compute.internal   Ready    <none>   9m59s   v1.19.13-eks-xxxxx
ip-192-168-2-xxx.ap-northeast-2.compute.internal   Ready    <none>   9m59s   v1.19.13-eks-xxxxx

[ec2-user@ip-10-10-1-206 ~]$ kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE   IP              NODE                                               NOMINATED NODE   READINESS GATES
kube-system   aws-node-xxxx              1/1     Running   0          32m   192.168.2.xxx   ip-192-168-2-xxx.ap-northeast-2.compute.internal   <none>           <none>
kube-system   aws-node-yyyy              1/1     Running   0          32m   192.168.1.xxx   ip-192-168-1-xxx.ap-northeast-2.compute.internal   <none>           <none>
kube-system   coredns-xxxxxx-xxxx        1/1     Running   0          41m   192.168.1.xxx   ip-192-168-1-xxx.ap-northeast-2.compute.internal   <none>           <none>
kube-system   coredns-yyyyyy-yyyy        1/1     Running   0          41m   192.168.2.xxx   ip-192-168-2-xxx.ap-northeast-2.compute.internal   <none>           <none>
kube-system   kube-proxy-xxxx            1/1     Running   0          32m   192.168.2.xxx   ip-192-168-2-xxx.ap-northeast-2.compute.internal   <none>           <none>
kube-system   kube-proxy-yyyy            1/1     Running   0          32m   192.168.1.xxx   ip-192-168-1-xxx.ap-northeast-2.compute.internal   <none>           <none>

[ec2-user@ip-10-10-1-206 ~]$ kubectl config get-contexts
CURRENT   NAME                                                   CLUSTER                                     AUTHINFO                                               NAMESPACE
*         minsu.park@eks-work-cluster.ap-northeast-2.eksctl.io   eks-work-cluster.ap-northeast-2.eksctl.io   minsu.park@eks-work-cluster.ap-northeast-2.eksctl.io
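
Because a kubeconfig file can hold several contexts, a few kubectl config subcommands are useful here (a sketch; the context name is taken from the output above):

# Print the context kubectl is currently using.
kubectl config current-context

# Switch contexts when more than one cluster is registered.
kubectl config use-context minsu.park@eks-work-cluster.ap-northeast-2.eksctl.io

# Pin a default namespace on the current context.
kubectl config set-context --current --namespace=kube-system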

[ec2-user@ip-10-10-1-206 ~]$ cd /home/ec2-user/.kube

[ec2-user@ip-10-10-1-206 .kube]$ pwd
/home/ec2-user/.kube

[ec2-user@ip-10-10-1-206 .kube]$ ll
total 4
drwxr-x--- 4 ec2-user ec2-user   35 Sep  4 04:15 cache
-rw------- 1 ec2-user ec2-user 2331 Sep  4 04:06 config
-rw------- 1 ec2-user ec2-user    0 Sep  4 04:06 config.eksctl.lock

[ec2-user@ip-10-10-1-206 .kube]$ cat config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    server: https://xxxxxxxxxxxxxxxxxxxxxxxxxx.ttt.ap-northeast-2.eks.amazonaws.com
  name: eks-work-cluster.ap-northeast-2.eksctl.io
contexts:
- context:
    cluster: eks-work-cluster.ap-northeast-2.eksctl.io
    user: minman@eks-work-cluster.ap-northeast-2.eksctl.io
  name: minman@eks-work-cluster.ap-northeast-2.eksctl.io
current-context: minman@eks-work-cluster.ap-northeast-2.eksctl.io
kind: Config
preferences: {}
users:
- name: minman@eks-work-cluster.ap-northeast-2.eksctl.io
  user:
    exec:
      apiVersion: client.authentication.k8s.io/xxxxxxxxx
      args:
      - eks
      - get-token
      - --cluster-name
      - eks-work-cluster
      - --region
      - ap-northeast-2
      command: aws
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
      provideClusterInfo: false
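
The users.exec block above is a client-go credential plugin setting: whenever kubectl needs credentials, it runs aws eks get-token and uses the returned token as a bearer token against the API server. The command prints an ExecCredential JSON roughly shaped as in the comments below (a sketch from memory; actual field values and the exact apiVersion are elided):

# Fetch a short-lived authentication token for the cluster.
aws eks get-token --cluster-name eks-work-cluster --region ap-northeast-2

# Expected output shape (values elided):
# {
#   "kind": "ExecCredential",
#   "apiVersion": "client.authentication.k8s.io/...",
#   "spec": {},
#   "status": {
#     "expirationTimestamp": "...",
#     "token": "..."
#   }
# }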