Kustomize Basic

A resource management tool for Kubernetes manifests.

**Kustomize**

- https://kubernetes.io/ko/docs/tasks/manage-kubernetes-objects/kustomization/
- Manages Kubernetes manifest (YAML) files as templates, patches (merges) them, and deploys the result
- Since Kubernetes 1.14, kustomization is supported natively by the kubectl command

**Simple Example**

```
$ tree .
.
├── deployment.yaml
├── kustomization.yaml
└── version.yaml
```

**kustomization.yaml**

The file that describes the base structure of the manifests: the resources and the patch files.

```yaml
resources:
  - deployment.yaml

patchesStrategicMerge:
  - version.yaml
```

- `resources`: the list of resource files
  - Besides `resources`, there are also `configMapGenerator` and `secretGenerator` features: https://kubernetes.io/ko/docs/tasks/manage-kubernetes-objects/kustomization/#kustomize-%EA%B8%B0%EB%8A%A5-%EB%A6%AC%EC%8A%A4%ED%8A%B8
- `patchesStrategicMerge`: patch files applied over the resources
  - Patch: YAML file merge (see the sketch below)

...
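The excerpt cuts off before showing `version.yaml`. As a rough sketch, a strategic-merge patch only needs enough fields to identify its target plus the fields being overridden; the Deployment name `app` and the image tag here are hypothetical, not taken from the post:

```yaml
# version.yaml — hypothetical patch; apiVersion/kind/metadata.name must match deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  template:
    spec:
      containers:
        - name: app
          image: app:1.2.3   # the only field actually being patched over the base
```

With the kubectl integration mentioned above, the merged result can be rendered or applied directly:

```sh
$ kubectl kustomize .    # print the merged manifests
$ kubectl apply -k .     # build and apply in one step
```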

December 15, 2021 · Byung Kyu KIM

Helm chart creation and deployment

Creating a chart with helm, the Kubernetes package manager, and deploying it to Kubernetes. Tested on a K3S environment.

**Helm**

- https://helm.sh/
- A package manager for Kubernetes deployments (cf. yum, choco)
- Packages an application as a *chart* — a set of YAML-based template files — and manages its installation on Kubernetes
- Generates and installs the manifests for Kubernetes objects such as Deployment, Service, and Ingress
- Charts can be published to a Helm repository, and other packages can be installed from one

**Helm Install**

- Install the binary directly, or use the install script
- Also available via package managers such as Homebrew and Chocolatey

Binary download: https://github.com/helm/helm/releases

```sh
$ curl -LO https://get.helm.sh/helm-v3.7.1-linux-amd64.tar.gz
$ tar -zxvf helm-v3.7.1-linux-amd64.tar.gz

$ tree linux-amd64
linux-amd64
├── LICENSE
├── README.md
└── helm

$ sudo cp linux-amd64/helm /usr/local/bin/
```

Using the install script:

```sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh
```

**Creating Your Own Charts**

Create and configure a chart for installing onto Kubernetes: https://helm.sh/docs/helm/helm_create/

```sh
# Create a chart
$ helm create mvcapp
Creating mvcapp
```

Chart directory structure:

- `Chart.yaml`: describes the chart version, image version, description, etc.
- `values.yaml`: sets the default values fed into the manifest templates
- `templates/`: the Kubernetes manifest template files
- `charts/`: chart dependencies

```sh
$ tree mvcapp
mvcapp
├── Chart.yaml
├── charts
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml
```

**Chart.yaml changes**

- `version`: the chart version
- `appVersion`: the version of the image being deployed

```yaml
apiVersion: v2
name: mvcapp
description: .net core test mvc application
# ... omitted
type: application
# ... omitted
version: 0.1.0
# ... omitted
appVersion: "0.6"  # appVersion: "1.16.0"
```

**values.yaml changes**

- `replicaCount`: number of Pod replicas, changed to 2
- `image.repository`: docker image name, changed to `cdecl/mvcapp`
- `service.type`: changed to NodePort for on-premises testing
- `service.nodePort`: newly added so a nodePort can be applied

```yaml
# Default values for mvcapp.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 2

image:
  repository: cdecl/mvcapp
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

# ... omitted

service:
  type: NodePort  # ClusterIP
  port: 80
  nodePort: 30010

ingress:
  # ... omitted

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

# ... omitted
```

**templates/service.yaml changes**

- Modify the template so the nodePort is applied
- Add `spec.ports.nodePort: {{ .Values.service.nodePort }}`

```yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "mvcapp.fullname" . }}
  labels:
    {{- include "mvcapp.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
      nodePort: {{ .Values.service.nodePort }}
  selector:
    {{- include "mvcapp.selectorLabels" . | nindent 4 }}
```
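To check that only this template renders as intended, `helm template` can restrict its output to a single file with the standard `--show-only` flag (the path is relative to the chart root):

```sh
# Render just the Service manifest with the values set above
$ helm template mvcapp --show-only templates/service.yaml
```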
**helm lint: validate the chart files**

https://helm.sh/docs/helm/helm_lint/

```sh
$ helm lint mvcapp
==> Linting mvcapp
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, 0 chart(s) failed
```

**helm template: generate the Kubernetes manifests**

- https://helm.sh/docs/helm/helm_template/
- Generates the manifests based on what is set in values.yaml

```
$ helm template mvcapp
---
# Source: mvcapp/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: RELEASE-NAME-mvcapp
  labels:
    helm.sh/chart: mvcapp-0.1.0
    app.kubernetes.io/name: mvcapp
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "0.6"
    app.kubernetes.io/managed-by: Helm
---
# Source: mvcapp/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: RELEASE-NAME-mvcapp
  labels:
    helm.sh/chart: mvcapp-0.1.0
    app.kubernetes.io/name: mvcapp
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "0.6"
    app.kubernetes.io/managed-by: Helm
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: mvcapp
    app.kubernetes.io/instance: RELEASE-NAME
---
# Source: mvcapp/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: RELEASE-NAME-mvcapp
  labels:
    helm.sh/chart: mvcapp-0.1.0
    app.kubernetes.io/name: mvcapp
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "0.6"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: mvcapp
      app.kubernetes.io/instance: RELEASE-NAME
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mvcapp
        app.kubernetes.io/instance: RELEASE-NAME
    spec:
      serviceAccountName: RELEASE-NAME-mvcapp
      securityContext:
        {}
      containers:
        - name: mvcapp
          securityContext:
            {}
          image: "cdecl/mvcapp:0.6"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {}
---
# Source: mvcapp/templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "RELEASE-NAME-mvcapp-test-connection"
  labels:
    helm.sh/chart: mvcapp-0.1.0
    app.kubernetes.io/name: mvcapp
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "0.6"
    app.kubernetes.io/managed-by: Helm
  annotations:
    "helm.sh/hook": test
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['RELEASE-NAME-mvcapp:80']
  restartPolicy: Never
```

**helm install: install a Kubernetes service from the chart**

- https://helm.sh/docs/helm/helm_install/
- install: `helm install [NAME] [CHART] [flags]`

```sh
# Test without actually installing
$ helm install mvcapp-svc mvcapp --dry-run

# Install from the local chart
$ helm install mvcapp-svc mvcapp
NAME: mvcapp-svc
LAST DEPLOYED: Thu Nov  4 13:29:38 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services mvcapp-svc)
  export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT

$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.43.0.1       <none>        443/TCP        78d
mvcapp-svc   NodePort    10.43.202.254   <none>        80:31503/TCP   29s

$ kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
mvcapp-svc-78ff4d97f9-hd9rf   1/1     Running   0          37s
mvcapp-svc-78ff4d97f9-x4984   1/1     Running   0          37s
```

K3S: set `export KUBECONFIG=/etc/rancher/k3s/k3s.yaml`

...
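Values can also be overridden per release without editing values.yaml, and a release is updated and removed with the matching lifecycle commands. A small sketch of standard helm 3 usage (the override values here are illustrative):

```sh
# Override values at install time
$ helm install mvcapp-svc mvcapp --set replicaCount=3 --set service.nodePort=30011 --dry-run

# Ship a new chart/app version to the existing release, inspect, roll back, or remove
$ helm upgrade mvcapp-svc mvcapp
$ helm history mvcapp-svc
$ helm rollback mvcapp-svc 1
$ helm uninstall mvcapp-svc
```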

November 3, 2021 · Byung Kyu KIM

Running Kubernetes Jobs

Concurrent work using Kubernetes Jobs.

**Kubernetes Job**

- https://kubernetes.io/ko/docs/concepts/workloads/controllers/job/
- Creates Pods and runs a piece of work through them until it terminates successfully

**Job: single-Job test**

- Run an alpine pod and check its IP with the `ip` command
- `command`: the command to run (array)
- `restartPolicy`: Always, OnFailure, Never (default: Always)
  - A batch job must not restart: Never
- `backoffLimit`: number of retries on failure (default: 6)

```yaml
# ip.yml
apiVersion: batch/v1
kind: Job
metadata:
  name: ip
spec:
  template:
    metadata:
      name: ip
    spec:
      containers:
        - name: ip
          image: alpine
          command: ["ip", "a"]
      restartPolicy: Never
  backoffLimit: 0
```

```sh
$ kubectl apply -f ip.yml
job.batch/ip created

$ kubectl get pod
NAME       READY   STATUS      RESTARTS   AGE
ip-5x8qm   0/1     Completed   0          14s
```

Checking the log:

```sh
$ kubectl logs ip-5x8qm
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
3: eth0@if58: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP
    link/ether 9a:f9:d3:9f:32:eb brd ff:ff:ff:ff:ff:ff
    inet 10.42.0.37/24 brd 10.42.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::98f9:d3ff:fe9f:32eb/64 scope link
       valid_lft forever preferred_lft forever
```

**Job: parallel concurrent work**

- HTTP performance testing with wrk: https://github.com/wg/wrk
- Image: cdecl/asb

Parallel run 1

- `parallelism`: number of Pods run concurrently (default: 1)
- `completions`: number of completed Pods at which the Job counts as done (default: `parallelism`)

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: wrk
spec:
  completions: 4
  parallelism: 4
  template:
    metadata:
      name: wrk
    spec:
      containers:
        - name: wrk
          image: cdecl/asb
          command: ["wrk", "-d5", "http://httpbin.org/get"]
      restartPolicy: Never
  backoffLimit: 0
```

Run and check the logs (see the sketch below):

```sh
$ kubectl apply -f wrk.yml
```

...
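Pods created by a Job are automatically labeled `job-name=<job name>`, which makes it easy to watch the four parallel pods and collect all their logs at once — a small sketch using standard kubectl flags:

```sh
# Watch the parallel pods run to completion
$ kubectl get pod -l job-name=wrk --watch

# Block until the Job reports complete, then dump every pod's log
$ kubectl wait --for=condition=complete job/wrk --timeout=120s
$ kubectl logs -l job-name=wrk --tail=-1
```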

September 16, 2021 · Byung Kyu KIM

Kubernetes install with kubespray

Installing Kubernetes with Kubespray (bare metal).

**Preparation**

- Prepare the server nodes: 3 nodes
  - 192.168.28.15
  - 192.168.28.16
  - 192.168.28.17
- Node configuration: see kubernetes-101
  - Disable the swap area
  - Disable SELinux
  - Disable the firewall
  - Enable bridge networking
- Allow ssh access from the install node: copy the SSH key with `ssh-copy-id`

**Install prerequisites**

- Git: to clone the repository
- Python3: to run the inventory and configuration scripts
- Ansible: for remote execution (installation) with `ansible-playbook`

Clone the repository and install the Python packages:

```sh
$ git clone https://github.com/kubernetes-sigs/kubespray
$ cd kubespray

# Package install
$ pip3 install -r requirements.txt
```

**Install method 1: inventory_builder**

Use the inventory_builder Python script to build the inventory and install (see the sketch below).

...
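The excerpt stops before the actual commands. As a sketch of the inventory_builder flow as documented in the kubespray README (paths and file names can vary between releases; the IPs are the three nodes above):

```sh
# Copy the sample inventory and generate hosts.yaml from the node IPs
cp -rfp inventory/sample inventory/mycluster
declare -a IPS=(192.168.28.15 192.168.28.16 192.168.28.17)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

# Review the generated inventory, then run the cluster playbook as root
ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
```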

September 13, 2021 · Byung Kyu KIM

K3S Overview

Lightweight Kubernetes: the certified Kubernetes distribution built for IoT & Edge computing.

**Features**

- https://k3s.io/
- A lightweight Kubernetes distribution with the following characteristics:
  - Deployable and testable right after a default install
  - Overlay network (Flannel), load balancer, Ingress (Traefik), CoreDNS, etc. are installed by default
    - https://rancher.com/docs/k3s/latest/en/networking/
  - Runs sqlite instead of etcd
    - High Availability with an External DB
    - High Availability with Embedded DB (Experimental)
  - The master node is schedulable (it can be excluded from scheduling with cordon)
  - No worker node is required (one can be added when needed)

**Use cases**

- Edge computing
- Development-test and staging server setups
- Other application testing

**Master install**

- Installation is just one command: `curl -sfL https://get.k3s.io | sh -`
- Managed by systemd
- Installs kubectl and sets up a symbolic link for it
  - If kubectl is already installed, the symbolic link fails

```sh
# If an alias is needed, see below
$ alias kubectl='sudo k3s kubectl'

# Install
$ curl -sfL https://get.k3s.io | sh -

# master node
$ kubectl get node
NAME      STATUS   ROLES                  AGE   VERSION
centos1   Ready    control-plane,master   37s   v1.21.3+k3s1

$ kubectl get pod -A
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   local-path-provisioner-5ff76fc89d-wh9cg   1/1     Running     0          2m35s
kube-system   coredns-7448499f4d-2d7pb                  1/1     Running     0          2m35s
kube-system   metrics-server-86cbb8457f-x9l6n           1/1     Running     0          2m35s
kube-system   helm-install-traefik-crd-w27q7            0/1     Completed   0          2m35s
kube-system   helm-install-traefik-2zllj                0/1     Completed   1          2m35s
kube-system   svclb-traefik-55qfd                       2/2     Running     0          113s
kube-system   traefik-97b44b794-smzl9                   1/1     Running     0          114s
```

**K8S service test**

- Service type: NodePort
- https://kubernetes.github.io/ingress-nginx/deploy/baremetal/

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mvcapp
spec:
  selector:
    matchLabels:
      app: mvcapp
  replicas: 2  # same as the --replicas=2 option
  template:
    metadata:
      labels:
        app: mvcapp
    spec:
      containers:
        - name: mvcapp
          image: cdecl/mvcapp:0.6
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: mvcapp
spec:
  type: NodePort
  selector:
    app: mvcapp
  ports:
    - port: 80
      targetPort: 80
```

```sh
$ kubectl apply -f mvcapp-deploy-service.yaml
deployment.apps/mvcapp created
service/mvcapp created

$ kubectl get pod -A
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   local-path-provisioner-5ff76fc89d-wh9cg   1/1     Running     0          9m29s
kube-system   coredns-7448499f4d-2d7pb                  1/1     Running     0          9m29s
kube-system   metrics-server-86cbb8457f-x9l6n           1/1     Running     0          9m29s
kube-system   helm-install-traefik-crd-w27q7            0/1     Completed   0          9m29s
kube-system   helm-install-traefik-2zllj                0/1     Completed   1          9m29s
kube-system   svclb-traefik-55qfd                       2/2     Running     0          8m47s
kube-system   traefik-97b44b794-smzl9                   1/1     Running     0          8m48s
default       mvcapp-79874d888c-6htvq                   1/1     Running     0          62s
default       mvcapp-79874d888c-clslc                   1/1     Running     0          62s

$ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.43.0.1      <none>        443/TCP        10m
mvcapp       NodePort    10.43.36.139   <none>        80:32105/TCP   106s

# Service (cluster) IP
$ curl 10.43.36.139:80
* Project : Mvcapp
* Version : 0.5 / net5.0
* Hostname : mvcapp-79874d888c-6htvq
* RemoteAddr : 10.42.0.1
* X-Forwarded-For :
* Request Count : 1
* User-Agent : curl/7.29.0

# NodePort
$ curl localhost:32105
* Project : Mvcapp
* Version : 0.5 / net5.0
* Hostname : mvcapp-79874d888c-clslc
* RemoteAddr : 10.42.0.1
* X-Forwarded-For :
* Request Count : 1
* User-Agent : curl/7.29.0
```
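Since Traefik ships as the default ingress controller in K3S, the same service could also be reached through an Ingress instead of the NodePort — a minimal sketch, where the hostname `mvcapp.local` is made up for illustration (the service would then typically go back to ClusterIP):

```yaml
# mvcapp-ingress.yaml — hypothetical; routes mvcapp.local through the bundled Traefik
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mvcapp
spec:
  rules:
    - host: mvcapp.local   # made-up hostname for illustration
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: mvcapp
                port:
                  number: 80
```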
**Adding an agent**

- The master node alone is enough for testing, but agents (worker nodes) can be added for scale tests
- Environment variables (adjust as needed):

```sh
$ sudo cat /var/lib/rancher/k3s/server/node-token > ~/.node-token
$ K3S_TOKEN=$(< ~/.node-token)
$ HOST_IP=$(ip a | sed -rn 's/.*inet ([0-9\.]+).*eth0/\1/p')
```

- Register the agent: run remotely OR on the agent machine
- Requires the host IP and token (see the environment variables above)

```sh
# Run on the agent machine
$ curl -sfL https://get.k3s.io | K3S_URL=https://$HOST_IP:6443 K3S_TOKEN=$K3S_TOKEN sh -

# Another way to add an agent
$ ansible node01 -m shell -a "curl -sfL https://get.k3s.io | sh -s - agent --server https://$HOST_IP:6443 --token $K3S_TOKEN" -v
```

**Removing K3S**

```sh
ls /usr/local/bin/k3s-* | xargs -n1 sh -
```
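For reference, the `xargs` one-liner above just executes the helper scripts the installer drops into `/usr/local/bin`; run individually they are (script names as documented by K3S):

```sh
# Server node: stop everything, then remove k3s and its data
sudo /usr/local/bin/k3s-killall.sh
sudo /usr/local/bin/k3s-uninstall.sh

# Agent nodes get a different uninstall script
sudo /usr/local/bin/k3s-agent-uninstall.sh
```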

August 17, 2021 · Byung Kyu KIM

Kubernetes 101

Kubernetes install and operations 101.

**Preparation**

Server configuration changes before installing Kubernetes. Reference: https://www.mirantis.com/blog/how-install-kubernetes-kubeadm/

Disable the swap area:

```sh
# Temporary
$ sudo swapoff -a

# Permanent: comment out the swap filesystem line below
$ sudo vi /etc/fstab
...
# /dev/mapper/kube--master--vg-swap_1 none            swap    sw              0       0
```

Disable SELinux:

```sh
# Temporary
$ sudo setenforce 0

# Permanent
$ sudo vi /etc/sysconfig/selinux
...
SELINUX=disabled
```

Disable the firewall:

```sh
$ sudo systemctl disable firewalld
$ sudo systemctl stop firewalld
```

Enable bridge networking:

```sh
# CentOS
$ sudo vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

# Ubuntu
$ sudo vim /etc/ufw/sysctl.conf
net/bridge/bridge-nf-call-ip6tables = 1
net/bridge/bridge-nf-call-iptables = 1
net/bridge/bridge-nf-call-arptables = 1
```

**Docker Install**

- CentOS install: https://docs.docker.com/engine/install/centos/

Cgroup driver issue

- Recent Kubernetes requires changing the docker cgroup driver: cgroupfs → systemd
- A WARNING appears during master init and worker join
- https://kubernetes.io/ko/docs/setup/production-environment/container-runtimes/

```
kubeadm init --pod-network-cidr 10.244.0.0/16
...
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".
...
```

Changing the driver: write /etc/docker/daemon.json

```sh
$ cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

# Restart docker
$ sudo systemctl restart docker

# Verify
$ sudo docker info | grep -i cgroup
 Cgroup Driver: systemd
```

**Kubernetes (kubeadm, kubelet, kubectl) install**

- Reference: https://kubernetes.io/docs/setup/independent/install-kubeadm/
- Kubernetes install: based on CentOS 7

Docker install:

```sh
sudo yum install -y docker
sudo systemctl enable docker && sudo systemctl start docker
sudo usermod -aG docker $USER
```

kubeadm, kubelet, kubectl: add the repo and install the packages

```sh
$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

$ sudo yum install -y kubelet kubeadm kubectl
$ sudo systemctl enable kubelet && sudo systemctl start kubelet

# Pin versions if they do not match
# sudo yum install kubelet-[version] kubeadm-[version] kubectl-[version]
```

kubectl autocompletion:

```sh
# bash
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

# zsh
source <(kubectl completion zsh)
echo "if [ $commands[kubectl] ]; then source <(kubectl completion zsh); fi" >> ~/.zshrc
```
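One step worth noting before initializing the master (an addition not in the original excerpt): the bridge settings written to k8s.conf earlier are only read at boot, so they can be loaded immediately — the `br_netfilter` module must be present for those sysctl keys to exist:

```sh
# Load the bridge module and re-read all sysctl configuration without rebooting
sudo modprobe br_netfilter
sudo sysctl --system

# Verify
sysctl net.bridge.bridge-nf-call-iptables   # expect: net.bridge.bridge-nf-call-iptables = 1
```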
**Master Node init and Worker Node join**

Master Node setup

- Master init: the pod network CIDR must be set: `--pod-network-cidr 10.244.0.0/16`

```sh
sudo kubeadm init --pod-network-cidr 10.244.0.0/16
```

- Using kubectl: run the 3 lines under "To start using your cluster..."

```
[init] Using Kubernetes version: v1.10.5
...
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
...
You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.28.15:6443 --token 1ovd36.ft4mefr909iotg0a --discovery-token-ca-cert-hash sha256:82953a3ed178aa8c511792d0e21d9d3283e7575f3d3350a00bea3e34c2b87d29
```

Checking pod status

- coredns STATUS → Pending (because no overlay network is installed yet)

```sh
$ kubectl get po -A
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-ktvsz        0/1     Pending   0          19s
kube-system   coredns-66bff467f8-nvvjz        0/1     Pending   0          19s
kube-system   etcd-node1                      1/1     Running   0          29s
kube-system   kube-apiserver-node1            1/1     Running   0          29s
kube-system   kube-controller-manager-node1   1/1     Running   0          29s
kube-system   kube-proxy-s582x                1/1     Running   0          19s
kube-system   kube-scheduler-node1            1/1     Running   0          29s
```

**Overlay network: installing Calico**

- Overlay network options: https://kubernetes.io/docs/concepts/cluster-administration/networking/
- Install Calico for on-premises deployments: https://docs.projectcalico.org/getting-started/kubernetes/self-managed-onprem/onpremises

```sh
# Install Calico for on-premises deployments
$ kubectl apply -f https://docs.projectcalico.org/manifests/calico-typha.yaml
```

The coredns pods now run normally:

```sh
$ kubectl get po -A
NAMESPACE     NAME                                       READY   STATUS             RESTARTS   AGE
kube-system   calico-kube-controllers-799fb94867-bcntz   0/1     CrashLoopBackOff   3          2m6s
kube-system   calico-node-jtcmt                          0/1     Running            1          2m7s
kube-system   calico-typha-6bc9dd6468-x2hjj              0/1     Pending            0          2m6s
kube-system   coredns-66bff467f8-ktvsz                   0/1     Running            0          3m23s
kube-system   coredns-66bff467f8-nvvjz                   0/1     Running            0          3m23s
kube-system   etcd-node1                                 1/1     Running            0          3m33s
kube-system   kube-apiserver-node1                       1/1     Running            0          3m33s
kube-system   kube-controller-manager-node1              1/1     Running            0          3m33s
kube-system   kube-proxy-s582x                           1/1     Running            0          3m23s
kube-system   kube-scheduler-node1                       1/1     Running            0          3m33s
```

**Adding (joining) Worker Nodes**

Run on the worker nodes:

```sh
# Get the join command
$ kubeadm token create --print-join-command
kubeadm join 192.168.28.15:6443 --token 1ovd36.ft4mefr909iotg0a --discovery-token-ca-cert-hash sha256:82953a3ed178aa8c511792d0e21d9d3283e7575f3d3350a00bea3e34c2b87d29

# Run on the worker node
$ kubeadm join 192.168.28.15:6443 --token 1ovd36.ft4mefr909iotg0a --discovery-token-ca-cert-hash sha256:82953a3ed178aa8c511792d0e21d9d3283e7575f3d3350a00bea3e34c2b87d29
```

Checking node status:

```sh
$ kubectl get node
NAME    STATUS   ROLES    AGE     VERSION
node1   Ready    master   8m50s   v1.18.6
node2   Ready    <none>   16s     v1.18.6
node3   Ready    <none>   16s     v1.18.6
```

**Service deployment: command-line (CLI) based** (see the sketch below)

...
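The excerpt is truncated at this point. As a rough sketch of what the usual CLI-based flow looks like with standard kubectl commands (the image `cdecl/mvcapp:0.6` is borrowed from the other posts here, not necessarily what this post used):

```sh
# Create a Deployment from an image, scale it, and expose it as a NodePort service
$ kubectl create deployment mvcapp --image=cdecl/mvcapp:0.6
$ kubectl scale deployment mvcapp --replicas=2
$ kubectl expose deployment mvcapp --type=NodePort --port=80
$ kubectl get deployment,svc mvcapp
```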

August 12, 2021 · Byung Kyu KIM