K3S Overview

## Lightweight Kubernetes

> The certified Kubernetes distribution built for IoT & Edge computing

### Features

- https://k3s.io/{:target="_blank"}
- A lightweight Kubernetes distribution with the following characteristics:
  - Ready to deploy and test with nothing but the default install
  - Overlay network (Flannel), load balancer, Ingress (Traefik), CoreDNS, etc. are installed by default
    - https://rancher.com/docs/k3s/latest/en/networking/{:target="_blank"}
  - Runs on sqlite instead of etcd
    - High Availability with an External DB
    - High Availability with Embedded DB (Experimental)
  - The master node is schedulable by default (it can be excluded with `kubectl cordon` if needed)
  - No worker node required (one can be added when needed)

### Use cases

- Edge computing
- Development, test, and staging server setups
- Miscellaneous application testing

## Installing the master

- Installation: running `curl -sfL https://get.k3s.io | sh -` is all it takes
  - Managed by systemd
  - Installs kubectl and sets up a symbolic link for it
    - If kubectl is already installed, creating the symbolic link fails

```sh
# alias, if needed
$ alias kubectl='sudo k3s kubectl'

# Install
$ curl -sfL https://get.k3s.io | sh -

# master node
$ kubectl get node
NAME      STATUS   ROLES                  AGE   VERSION
centos1   Ready    control-plane,master   37s   v1.21.3+k3s1

$ kubectl get pod -A
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   local-path-provisioner-5ff76fc89d-wh9cg   1/1     Running     0          2m35s
kube-system   coredns-7448499f4d-2d7pb                  1/1     Running     0          2m35s
kube-system   metrics-server-86cbb8457f-x9l6n           1/1     Running     0          2m35s
kube-system   helm-install-traefik-crd-w27q7            0/1     Completed   0          2m35s
kube-system   helm-install-traefik-2zllj                0/1     Completed   1          2m35s
kube-system   svclb-traefik-55qfd                       2/2     Running     0          113s
kube-system   traefik-97b44b794-smzl9                   1/1     Running     0          114s
```

## Testing a K8s service

- Service type: NodePort
- https://kubernetes.github.io/ingress-nginx/deploy/baremetal/{:target="_blank"}

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mvcapp
spec:
  selector:
    matchLabels:
      app: mvcapp
  replicas: 2  # same as the --replicas=2 option
  template:
    metadata:
      labels:
        app: mvcapp
    spec:
      containers:
      - name: mvcapp
        image: cdecl/mvcapp:0.6
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: mvcapp
spec:
  type: NodePort
  selector:
    app: mvcapp
  ports:
  - port: 80
    targetPort: 80
```

```sh
$ kubectl apply -f mvcapp-deploy-service.yaml
deployment.apps/mvcapp created
service/mvcapp created

$ kubectl get pod -A
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   local-path-provisioner-5ff76fc89d-wh9cg   1/1     Running     0          9m29s
kube-system   coredns-7448499f4d-2d7pb                  1/1     Running     0          9m29s
kube-system   metrics-server-86cbb8457f-x9l6n           1/1     Running     0          9m29s
kube-system   helm-install-traefik-crd-w27q7            0/1     Completed   0          9m29s
kube-system   helm-install-traefik-2zllj                0/1     Completed   1          9m29s
kube-system   svclb-traefik-55qfd                       2/2     Running     0          8m47s
kube-system   traefik-97b44b794-smzl9                   1/1     Running     0          8m48s
default       mvcapp-79874d888c-6htvq                   1/1     Running     0          62s
default       mvcapp-79874d888c-clslc                   1/1     Running     0          62s

$ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.43.0.1      <none>        443/TCP        10m
mvcapp       NodePort    10.43.36.139   <none>        80:32105/TCP   106s

# Service (cluster) IP
$ curl 10.43.36.139:80
 * Project : Mvcapp
 * Version : 0.5 / net5.0
 * Hostname : mvcapp-79874d888c-6htvq
 * RemoteAddr : 10.42.0.1
 * X-Forwarded-For :
 * Request Count : 1
 * User-Agent : curl/7.29.0

# NodePort
$ curl localhost:32105
 * Project : Mvcapp
 * Version : 0.5 / net5.0
 * Hostname : mvcapp-79874d888c-clslc
 * RemoteAddr : 10.42.0.1
 * X-Forwarded-For :
 * Request Count : 1
 * User-Agent : curl/7.29.0
```

## Adding an agent

- The master node alone is enough for testing, but agents (worker nodes) can be added for scale testing
- Environment variables (for reference, if needed):

```sh
$ sudo cat /var/lib/rancher/k3s/server/node-token > ~/.node-token
$ K3S_TOKEN=$(< ~/.node-token)
$ HOST_IP=$(ip a | sed -rn 's/.*inet ([0-9\.]+).*eth0/\1/p')
```

- Registering an agent: run remotely or on the agent machine
  - Requires the host IP and token (see the environment variables above)

```sh
# Run on the agent machine
$ curl -sfL https://get.k3s.io | K3S_URL=https://$HOST_IP:6443 K3S_TOKEN=$K3S_TOKEN sh -

# Another way to add an agent
$ ansible node01 -m shell -a "curl -sfL https://get.k3s.io | sh -s - agent --server https://$HOST_IP:6443 --token $K3S_TOKEN" -v
```

## Removing K3s

```sh
# Runs the generated cleanup scripts (k3s-killall.sh, k3s-uninstall.sh)
$ ls /usr/local/bin/k3s-* | xargs -n1 sh -
```

August 17, 2021 · Byung Kyu KIM

MinIO 101

## Introduction

- https://docs.min.io/{:target="_blank"}
- Open Source, S3 Compatible, Enterprise Hardened and Really, Really Fast
- S3 compatible: client (`mc`), SDKs (Java, JavaScript, Python, Golang, .NET, ...)
- High-performance, distributed object storage system
- Private cloud object storage

## Getting Started

- MinIO is written in Go, so it runs as a single binary with no dependencies
  - The Docker image is based on Alpine Linux
- Downloads: https://min.io/download{:target="_blank"}

### Quickstart Server

- https://docs.min.io/docs/minio-quickstart-guide.html{:target="_blank"}

```sh
# linux - server run
$ wget https://dl.min.io/server/minio/release/linux-amd64/minio
$ chmod +x minio

$ export MINIO_ROOT_USER=minio
$ export MINIO_ROOT_PASSWORD=miniopass

# ./minio server --address 0.0.0.0:9000 /data
$ ./minio server /data

Status:         1 Online, 0 Offline.
Endpoint:  http://192.168.144.5:9000  http://127.0.0.1:9000
Browser Access:
   http://192.168.144.5:9000  http://127.0.0.1:9000

Object API (Amazon S3 compatible):
   Go:         https://docs.min.io/docs/golang-client-quickstart-guide
   Java:       https://docs.min.io/docs/java-client-quickstart-guide
   Python:     https://docs.min.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.min.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.min.io/docs/dotnet-client-quickstart-guide
```

### docker-compose

- `9000` : data interface port
- `9001` : web console port
- `./data:/data` : data volume

```yaml
version: '3'
services:
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    container_name: minio
    environment:
      MINIO_ROOT_USER: minio
      MINIO_ROOT_PASSWORD: miniopass
    restart: always
    ports:
      - "9000:9000"
      - "9001:9001"
    volumes:
      - ./data:/data
```

```sh
$ docker-compose up -d
Creating network "minio_default" with the default driver
Pulling minio (minio/minio:)...
latest: Pulling from minio/minio
c2c17d84f25a: Pull complete
46cdcde062b2: Pull complete
c88923a3df19: Pull complete
1afaaeffed49: Pull complete
6c066ed8931e: Pull complete
b889e4f29831: Pull complete
51b722521628: Pull complete
Digest: sha256:ff4892c4248ad0ef73981d9f2e7b8a721dae45c55bdd25d7a23e1670540f36e1
Status: Downloaded newer image for minio/minio:latest
Creating minio ... done
```

### Quickstart Client

- https://docs.min.io/docs/minio-client-quickstart-guide.html{:target="_blank"}

```sh
# linux
# wget https://dl.min.io/client/mc/release/linux-amd64/mc
$ curl -O https://dl.min.io/client/mc/release/linux-amd64/mc
$ chmod +x mc
$ mv ./mc /usr/bin/
$ mc --help

# add server config
$ mc alias set local http://localhost:9000 minio miniopass
Added `local` successfully.

# create a new bucket
$ mc mb local/backup
Bucket created successfully `local/backup`.

# copy
$ mc cp docker-compose.yml local/backup
docker-compose.yml:  322 B / 322 B  30.39 KiB/s 0s

# list
$ mc ls local/backup
[2021-08-18 11:01:50 KST]   322B docker-compose.yml

# remove
$ mc rm --recursive --force local/backup/
Removing `local/backup/docker-compose.yml`.
```
## MinIO Erasure Code

- https://docs.min.io/docs/minio-erasure-code-quickstart-guide.html{:target="_blank"}

### Erasure Code

- A mathematical algorithm for reconstructing missing or corrupted data
- Uses erasure code and checksums to protect data against hardware failure and silent data corruption
- At the highest redundancy level, data remains recoverable even if up to half of the drives (N/2) are lost
- Drives are organized into erasure-coding sets of 4, 6, 8, 10, 12, 14, or 16 drives (minimum 4)

### Run MinIO Server with Erasure Code

- 4-drive setup
- If the drives are separate physical disks, no additional RAID configuration is needed

```sh
$ minio server /data1 /data2 /data3 /data4
```

#### docker-compose

`docker-compose.yml`:

```yaml
version: '3'
services:
  minio1:
    image: minio/minio
    command: server /data1 /data2 /data3 /data4 --console-address ":9001"
    container_name: minio1
    environment:
      MINIO_ROOT_USER: minio
      MINIO_ROOT_PASSWORD: miniopass
    restart: always
    ports:
      - "9000:9000"
      - "9001:9001"
    volumes:
      - ./data1:/data1
      - ./data2:/data2
      - ./data3:/data3
      - ./data4:/data4
```

## Distributed MinIO

- https://docs.min.io/docs/distributed-minio-quickstart-guide.html{:target="_blank"}

> MinIO in distributed mode lets you pool multiple drives (even on different machines) into a single object storage server.
- Supports multiple drives split across separate servers
  - Data protection
  - High availability
  - Consistency guarantees

### Run distributed MinIO

- Configure the same MINIO_ROOT_USER and MINIO_ROOT_PASSWORD on every node
- Drives are distributed under the same drive policy as erasure code
- Distributed MinIO and per-server erasure code can be applied together
- docker-compose: simulating 4 servers with Docker

```yaml
version: '3'
services:
  minio1:
    image: minio/minio
    command: server http://minio{1...4}:9000/data --console-address ":9001"
    container_name: minio1
    environment:
      MINIO_ROOT_USER: minio
      MINIO_ROOT_PASSWORD: miniopass
    restart: always
    ports:
      - "9101:9000"
      - "9001:9001"
    volumes:
      - ./minio1:/data

  minio2:
    image: minio/minio
    command: server http://minio{1...4}:9000/data --console-address ":9001"
    container_name: minio2
    environment:
      MINIO_ROOT_USER: minio
      MINIO_ROOT_PASSWORD: miniopass
    restart: always
    ports:
      - "9102:9000"
    volumes:
      - ./minio2:/data

  minio3:
    image: minio/minio
    command: server http://minio{1...4}:9000/data --console-address ":9001"
    container_name: minio3
    environment:
      MINIO_ROOT_USER: minio
      MINIO_ROOT_PASSWORD: miniopass
    restart: always
    ports:
      - "9103:9000"
    volumes:
      - ./minio3:/data

  minio4:
    image: minio/minio
    command: server http://minio{1...4}:9000/data --console-address ":9001"
    container_name: minio4
    environment:
      MINIO_ROOT_USER: minio
      MINIO_ROOT_PASSWORD: miniopass
    restart: always
    ports:
      - "9104:9000"
    volumes:
      - ./minio4:/data
```

## MinIO Admin Guide

- Admin functions are performed through the `mc` command
- https://docs.min.io/docs/minio-admin-complete-guide.html{:target="_blank"}

| Command | Description |
|---------|-------------|
| `service` | restart and stop all MinIO servers |
| `update` | update all MinIO servers |
| `info` | display MinIO server information |
| `user` | manage users |
| `group` | manage groups |
| `policy` | manage policies defined in the MinIO server |
| `config` | manage MinIO server configuration |
| `heal` | heal disks, buckets and objects on MinIO server |
| `profile` | generate profile data for debugging purposes |
| `top` | provide top like statistics for MinIO |
| `trace` | show http trace for MinIO server |
| `console` | show console logs for MinIO server |
| `prometheus` | manages prometheus config |
| `kms` | perform KMS management operations |

### user - Manage users

Creating and removing users:

```sh
## create user
$ mc admin user add myinfo cdecl cdeclpass

## remove user
# mc admin user remove myinfo cdecl

$ mc admin user info myinfo cdecl
AccessKey: cdecl
Status: enabled
PolicyName:
MemberOf:
```

Policy:

```sh
$ mc admin policy set myinfo readonly user=cdecl
Policy readonly is set on user `cdecl`

$ mc admin policy set myinfo writeonly user=cdecl
Policy writeonly is set on user `cdecl`

$ mc admin policy set myinfo readwrite user=cdecl
Policy readwrite is set on user `cdecl`
```

### heal - Heal disks, buckets and objects on MinIO server

> This command is only applicable for MinIO erasure coded setup (standalone and distributed).

- In an erasure-coded setup, when data on a particular disk is damaged, heal rebuilds the data evenly across the drives

```sh
$ mc admin heal -r myinfo
 -  data1
    2/2 objects; 403 MiB in 1s
    ┌────────┬───┬─────────────────────┐
    │ Green  │ 5 │ 100.0% ████████████ │
    │ Yellow │ 0 │   0.0%              │
    │ Red    │ 0 │   0.0%              │
    │ Grey   │ 0 │   0.0%              │
    └────────┴───┴─────────────────────┘
```
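The N/2 tolerance rule from the erasure-code section can be sketched as simple arithmetic. The function names below are illustrative only, not part of any MinIO API: with N drives at maximum parity, up to N/2 drive losses are survivable, and parity drives are subtracted from usable capacity.

```python
# Sketch of MinIO-style erasure-coding arithmetic (illustrative only;
# these helpers are hypothetical, not from any MinIO SDK).

def max_tolerable_failures(total_drives: int) -> int:
    """Drive losses survivable at maximum redundancy (N/2 parity)."""
    if total_drives < 4:
        raise ValueError("erasure-coding sets need at least 4 drives")
    return total_drives // 2

def usable_capacity(total_drives: int, parity: int, drive_size_gb: int) -> int:
    """Capacity left for data once `parity` drives' worth is reserved."""
    return (total_drives - parity) * drive_size_gb

# 4 drives of 100 GB at maximum parity (2): 2 losses tolerated, 200 GB usable
print(max_tolerable_failures(4), usable_capacity(4, 2, 100))
```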

August 17, 2021 · Byung Kyu KIM

MySQL 8 Docker Basics, JSON Type Support

## Running MySQL 8 in Docker, plus backup and restore

### Running MySQL in Docker

docker-compose:

- `data:/var/lib/mysql` : data files
- `conf.d:/etc/mysql/conf.d` : configuration files such as my.cnf
- `root:/root` : used with login-path
- ports
  - `"3380:3306"` : mysql port
  - `"33800:33060"` : mysql-shell (X protocol) port

```yaml
version: '3'
services:
  db:
    image: mysql:8
    container_name: mysql8
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: passwd
      TZ: Asia/Seoul
    ports:
      - "3380:3306"
      - "33800:33060"
    volumes:
      - data:/var/lib/mysql
      - conf.d:/etc/mysql/conf.d
      - root:/root

volumes:
  data:
  conf.d:
  root:
```

```sh
$ docker-compose up -d
Creating network "mysql_default" with the default driver
Creating volume "mysql_data" with default driver
Creating volume "mysql_conf.d" with default driver
Creating volume "mysql_root" with default driver
Pulling db (mysql:8)...
8: Pulling from library/mysql
e1acddbe380c: Pull complete
bed879327370: Pull complete
03285f80bafd: Pull complete
ccc17412a00a: Pull complete
1f556ecc09d1: Pull complete
adc5528e468d: Pull complete
1afc286d5d53: Pull complete
6c724a59adff: Pull complete
0f2345f8b0a3: Pull complete
c8461a25b23b: Pull complete
3adb49279bed: Pull complete
77f22cd6c363: Pull complete
Digest: sha256:d45561a65aba6edac77be36e0a53f0c1fba67b951cb728348522b671ad63f926
Status: Downloaded newer image for mysql:8
Creating mysql8 ... done
```

### Connection test

- Leave out the docker `-t` option (otherwise: `the input device is not a TTY`)

```sh
# echo 'select host, user from mysql.user' | docker exec -i mysql8 mysql -uroot -ppasswd
$ docker exec -i mysql8 mysql -uroot -ppasswd <<< 'select host, user from mysql.user'
mysql: [Warning] Using a password on the command line interface can be insecure.
host    user
%       root
localhost       mysql.infoschema
localhost       mysql.session
localhost       mysql.sys
localhost       root
```

...
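The `conf.d` volume above is where custom `my.cnf` settings land. A minimal illustrative fragment — every setting and the file name are assumptions for the sketch, not values from the post:

```ini
# ./conf.d/my.cnf — read by mysqld from /etc/mysql/conf.d (assumed example)
[mysqld]
character-set-server = utf8mb4
collation-server     = utf8mb4_unicode_ci
max_connections      = 500

[client]
default-character-set = utf8mb4
```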

August 16, 2021 · Byung Kyu KIM

ElasticSearch to MySQL ETL

## Preparation

- `curl` : HTTP-based Elasticsearch query (SQL)
- `jq` : transform the Elasticsearch JSON result (to NDJSON)
- `mysql-shell` : bulk insert into MySQL (a table with a JSON-typed column)

## Elasticsearch Query

curl: run the query and save the result as JSON.

```sh
$ curl -s -XPOST -H 'content-type: application/json' \
  -d '{"query" : "select timestamp, activesession, loadavg, processor from \"mysql-perf-*\" limit 100"}' \
  'http://elasticsearch-server:7200/_sql' > result.json

$ cat result.json | jq .
{
  "columns": [
    { "name": "timestamp", "type": "datetime" },
    { "name": "activesession", "type": "long" },
    { "name": "loadavg", "type": "float" },
    { "name": "processor", "type": "float" }
  ],
  "rows": [
    [ "2021-05-02T03:40:47.000Z", 2, 0.019999999552965164, 0 ],
    [ "2021-06-01T00:04:24.000Z", 4, 2.0399999618530273, 14.579999923706055 ],
    ...
```

## Converting to NDJSON

- `jq -c` : compact instead of pretty-printed output

```sh
$ cat result.json | \
  jq -c ' { "col": [.columns[] | .name], "row" : .rows[] } | [.col, .row] | transpose | map({ (.[0]): .[1] }) | add ' > result_t.json

$ cat result_t.json | jq .
{
  "timestamp": "2021-05-02T03:40:47.000Z",
  "activesession": 2,
  "loadavg": 0.019999999552965164,
  "processor": 0
}
{
  "timestamp": "2021-06-22T05:25:24.000Z",
  "activesession": 3,
  "loadavg": 1.809999942779541,
  "processor": 4.599999904632568
}
{
  "timestamp": "2021-07-01T00:04:21.000Z",
  "activesession": 1,
  "loadavg": 0.2800000011920929,
  "processor": 0.41999998688697815
...
```

## MySQL JSON Import

- mysql-shell
- Creates a `doc` JSON column in the table and loads the JSON data row by row

```sh
# truncate table
$ mysqlsh --sql user@mysql-server:3360/dbname -p<PASSWD> \
  --execute 'truncate table data_table ;'

# import
$ mysqlsh --mysqlx user@mysql-server:3360/dbname -p<PASSWD> \
  --import result_t.json --collection=data_table
```

...
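The jq filter above pairs the column names with each row to build one object per row. For reference, the same transform sketched in Python (sample values taken from the example response):

```python
import json

def sql_result_to_ndjson(result: dict) -> str:
    """Convert an Elasticsearch SQL response ({"columns": [...], "rows": [...]})
    into newline-delimited JSON, one object per row."""
    names = [c["name"] for c in result["columns"]]
    docs = (dict(zip(names, row)) for row in result["rows"])
    return "\n".join(json.dumps(d) for d in docs)

result = {
    "columns": [{"name": "timestamp", "type": "datetime"},
                {"name": "activesession", "type": "long"}],
    "rows": [["2021-05-02T03:40:47.000Z", 2],
             ["2021-06-01T00:04:24.000Z", 4]],
}
print(sql_result_to_ndjson(result))
```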

August 15, 2021 · Byung Kyu KIM

Docker Swarm 101

https://docs.docker.com/engine/swarm/

## Docker Swarm Node

### Initializing the Manager Node

- https://docs.docker.com/engine/reference/commandline/swarm_init/

Initialize a swarm: `docker swarm init`

```sh
$ docker swarm init
Swarm initialized: current node (bvz81updecsj6wjz393c09vti) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx \
    172.17.0.2:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
```

Initialize a swarm with an advertised address: `docker swarm init --advertise-addr <ip|interface>[:port]`

```sh
$ docker swarm init --advertise-addr 192.168.99.121
Swarm initialized: current node (bvz81updecsj6wjz393c09vti) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx \
    172.17.0.2:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
```

### Adding a Node (Join)

- Join a swarm as a node and/or manager
- https://docs.docker.com/engine/reference/commandline/swarm_join/

...
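Once nodes are joined, the usual next step is deploying a replicated service with `docker stack deploy`. A minimal stack-file sketch — the service name, image, ports, and replica count are arbitrary placeholders, not from the post:

```yaml
# Assumed example: docker stack deploy -c stack.yml demo
version: '3'
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 3          # spread 3 tasks across swarm nodes
      restart_policy:
        condition: on-failure
    ports:
      - "8080:80"          # published on the swarm routing mesh
```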

August 12, 2021 · Byung Kyu KIM

Github Actions 101

- A workflow tool provided by GitHub
- Runs on a GitHub-hosted runner or a self-hosted runner
- Pick a template from the Actions tab and describe the tasks in a YAML file
  - Located under the `.github/workflows` directory

## Runner types

- GitHub-hosted runner: runs on MS Azure virtual machines
  - Public repository: free
  - Private repository: 2,000 minutes/month free
- Self-hosted runner: host the runner on your own machine
  - https://help.github.com/en/actions/hosting-your-own-runners/adding-self-hosted-runners{:target="_blank"}

## Actions Basic

- Actions tab
- Workflow syntax for GitHub Actions
  - https://help.github.com/en/actions/reference/workflow-syntax-for-github-actions{:target="_blank"}
- awesome-actions
  - https://github.com/sdras/awesome-actions{:target="_blank"}

### Workflow

- `runs-on:` virtual machine
  - ubuntu, macos, and windows server are provided
  - Default packages and apps come preinstalled: https://github.com/actions/virtual-environments{:target="_blank"}
    - Ubuntu: https://github.com/actions/virtual-environments/blob/master/images/linux/Ubuntu1804-README.md{:target="_blank"}
    - Windows Server: https://github.com/actions/virtual-environments/blob/master/images/win/Windows2019-Readme.md{:target="_blank"}
- steps
  - `uses:` run a predefined action, or set up an app environment through an app integration
    - e.g. `uses: actions/checkout@v2` : run git checkout
    - e.g. `uses: nuget/setup-nuget@v1` : set up nuget
    - e.g. `uses: microsoft/setup-msbuild@v1` : set up msbuild
  - `run:` specify a command to run

```yaml
name: CI  # workflow name

on:  # trigger events
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:  # single job name
    runs-on: ubuntu-latest  # virtual machine

    steps:
      - uses: actions/checkout@v2

      - name: Run a one-line script
        run: echo Hello, world!

      - name: Run a multi-line script
        run: |
          echo Add other actions to build,
          echo test, and deploy your project.
```
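The runner OSes listed above can all be exercised from one workflow with a build matrix (`strategy.matrix`). A minimal sketch; the workflow, job, and step names are placeholders:

```yaml
name: Matrix CI  # assumed example, not from the post

on: [push]

jobs:
  build:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
    runs-on: ${{ matrix.os }}  # one job instance per OS

    steps:
      - uses: actions/checkout@v2
      - name: Show runner
        run: echo "Running on ${{ matrix.os }}"
```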
## Actions example: Docker build & registry push

- The ubuntu-latest image has the Docker daemon enabled
- `secrets` variables: set under [Settings] - [Secrets] (`DOCKERHUB_PASS`)
  - `${{ secrets.DOCKERHUB_PASS }}`

```yaml
name: Docker Image CI

on:
  push:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2

    - name: Build the Docker image
      run: docker build . --tag cdecl/gcc-boost

    - name: docker login
      run: echo '${{ secrets.DOCKERHUB_PASS }}' | docker login -u cdecl --password-stdin

    - name: docker push
      run: docker push cdecl/gcc-boost
```

...

August 12, 2021 · Byung Kyu KIM

Gitlab CI/CD

## Gitlab CI/CD 101

- A workflow tool for CI/CD provided by GitLab
- Runs via Auto DevOps or gitlab-runner
- Configured through Setup CI/CD
  - Described in a `.gitlab-ci.yml` file

## Gitlab-Runner

- gitlab-runner: builds the pipeline based on `.gitlab-ci.yml`
- Shared runners: runners hosted by gitlab.com
- Self-hosted runners: install the runner on a separate machine

### Setting up a Gitlab-Runner (self-hosted)

- Installing the runner
  - https://docs.gitlab.com/runner/install/linux-repository.html{:target="_blank"}
- Registering runners
  - https://docs.gitlab.com/runner/register/index.html{:target="_blank"}

Interactive runner registration:

```sh
$ sudo gitlab-runner register
Runtime platform                                    arch=amd64 os=linux pid=120146 revision=c5874a4b version=12.10.2
Running in system-mode.

Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/):
http://hqgit.inpark.kr/
Please enter the gitlab-ci token for this runner:
xxxxxxxxxxxxxxxxxxxxxxxxxxx
Please enter the gitlab-ci description for this runner:
ci-test runner
Please enter the gitlab-ci tags for this runner (comma separated):
centos24,ci-test,cdecl
Registering runner... succeeded                     runner=WpQDakzK
Please enter the executor: shell, kubernetes, parallels, docker, docker-ssh, ssh, virtualbox, docker+machine, docker-ssh+machine, custom:
shell
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
```
Non-interactive (inline) registration:

```sh
# inline
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "PROJECT_REGISTRATION_TOKEN" \
  --executor "docker" \
  --docker-image alpine:latest \
  --description "docker-runner" \
  --tag-list "docker"

sudo gitlab-runner register \
  --non-interactive \
  --url "http://centos.cdecl.net/" \
  --registration-token "PROJECT_REGISTRATION_TOKEN" \
  --executor "docker" \
  --docker-image alpine \
  --description "docker-runner" \
  --tag-list "docker" \
  --env "DOCKER_TLS_CERTDIR=" \
  --docker-privileged=true \
  --docker-volumes "/ansible:/ansible" \
  --docker-extra-hosts "centos.cdecl.net:192.168.0.20"
```

## Pipeline Configuration Basic

- GitLab CI/CD pipeline configuration reference
  - https://docs.gitlab.com/ee/ci/yaml/{:target="_blank"}

### Pipeline

- Runs `git checkout` by default
- Unlike GitHub Actions, a manual-run button is available

```yaml
image: ubuntu

stages:  # stage definitions
  - build
  - test
  - deploy

before_script:
  # - echo "Before script section"
  - echo "For example you might run an update here or install a build dependency"
  - echo "Or perhaps you might print out some debugging details"

after_script:
  - echo "After script section"
  - echo "For example you might do some cleanup here"

build_stage:
  stage: build
  script:
    - echo "Do your build here"

test_stage1:
  stage: test
  script:
    - echo "Do a test here"
    - echo "For example run a test suite"

test_stage2:
  stage: test
  script:
    - echo "Do another parallel test here"
    - echo "For example run a lint test"

deploy_stage:
  stage: deploy
  script:
    - echo "Do your deploy here"
```

...
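The manual-run capability mentioned above is expressed per job with `when: manual`. A minimal sketch in the same pipeline style; the job name is a placeholder:

```yaml
# Assumed example: a deploy job that waits for a click in the pipeline UI
deploy_prod:
  stage: deploy
  script:
    - echo "Deploy to production"
  when: manual   # shows a play button instead of running automatically
```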

August 12, 2021 · Byung Kyu KIM

Kubernetes 101

## Kubernetes installation and operations 101

### Preparation

Server configuration changes before installing Kubernetes. Reference: https://www.mirantis.com/blog/how-install-kubernetes-kubeadm/{:target="_blank"}

Disable the swap area:

```sh
# Temporary
$ sudo swapoff -a

# Permanent: comment out the swap filesystem entry below
$ sudo vi /etc/fstab
...
# /dev/mapper/kube--master--vg-swap_1 none            swap    sw              0       0
```

Disable SELinux:

```sh
# Temporary
$ sudo setenforce 0

# Permanent
$ sudo vi /etc/sysconfig/selinux
...
SELINUX=disabled
```

Disable the firewall:

```sh
$ sudo systemctl disable firewalld
$ sudo systemctl stop firewalld
```

Enable the bridge network:

```sh
# CentOS
$ sudo vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

# Ubuntu
$ sudo vim /etc/ufw/sysctl.conf
net/bridge/bridge-nf-call-ip6tables = 1
net/bridge/bridge-nf-call-iptables = 1
net/bridge/bridge-nf-call-arptables = 1
```

### Docker Install

- CentOS install: https://docs.docker.com/engine/install/centos/{:target="_blank"}

### Cgroup driver issue

- Recent Kubernetes requires changing the Docker cgroup driver from cgroupfs to systemd
- Otherwise a WARNING occurs on master init and worker join
- https://kubernetes.io/ko/docs/setup/production-environment/container-runtimes/{:target="_blank"}

```sh
kubeadm init --pod-network-cidr 10.244.0.0/16
...
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". ...
...
```
Changing the driver: create `/etc/docker/daemon.json`

```sh
$ cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

# Restart Docker
$ sudo systemctl restart docker

# Verify
$ sudo docker info | grep -i cgroup
 Cgroup Driver: systemd
```

### Installing Kubernetes (kubeadm, kubelet, kubectl)

Reference: https://kubernetes.io/docs/setup/independent/install-kubeadm/{:target="_blank"}

Kubernetes install, based on CentOS 7.

Docker install:

```sh
sudo yum install -y docker
sudo systemctl enable docker && systemctl start docker
sudo usermod -aG docker $USER
```

kubeadm, kubelet, kubectl: add the repo and install the packages:

```sh
$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

$ sudo yum install -y kubelet kubeadm kubectl
$ sudo systemctl enable kubelet && systemctl start kubelet

# Pin a version if the versions do not match
# sudo yum install kubelet-[version] kubeadm-[version] kubectl-[version]
```

kubectl autocompletion:

```sh
# bash
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

# zsh
source <(kubectl completion zsh)
echo "if [ $commands[kubectl] ]; then source <(kubectl completion zsh); fi" >> ~/.zshrc
```

### Master Node init and Worker Node join

#### Master Node setup

Master init: the pod network address range must be set with `--pod-network-cidr 10.244.0.0/16`

```sh
sudo kubeadm init --pod-network-cidr 10.244.0.0/16
```

Using kubectl: run the three lines under "To start using your cluster...":

```sh
[init] Using Kubernetes version: v1.10.5
...
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
...
You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.28.15:6443 --token 1ovd36.ft4mefr909iotg0a --discovery-token-ca-cert-hash sha256:82953a3ed178aa8c511792d0e21d9d3283e7575f3d3350a00bea3e34c2b87d29
```

Checking pod status: coredns STATUS stays Pending (because no overlay network is installed yet)

```sh
$ kubectl get po -A
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-ktvsz        0/1     Pending   0          19s
kube-system   coredns-66bff467f8-nvvjz        0/1     Pending   0          19s
kube-system   etcd-node1                      1/1     Running   0          29s
kube-system   kube-apiserver-node1            1/1     Running   0          29s
kube-system   kube-controller-manager-node1   1/1     Running   0          29s
kube-system   kube-proxy-s582x                1/1     Running   0          19s
kube-system   kube-scheduler-node1            1/1     Running   0          29s
```

#### Overlay network: installing Calico

- Overlay network options: https://kubernetes.io/docs/concepts/cluster-administration/networking/{:target="_blank"}
- Install Calico for on-premises deployments: https://docs.projectcalico.org/getting-started/kubernetes/self-managed-onprem/onpremises{:target="_blank"}

```sh
# Install Calico for on-premises deployments
$ kubectl apply -f https://docs.projectcalico.org/manifests/calico-typha.yaml
```

The coredns pods now run normally:

```sh
$ kubectl get po -A
NAMESPACE     NAME                                       READY   STATUS             RESTARTS   AGE
kube-system   calico-kube-controllers-799fb94867-bcntz   0/1     CrashLoopBackOff   3          2m6s
kube-system   calico-node-jtcmt                          0/1     Running            1          2m7s
kube-system   calico-typha-6bc9dd6468-x2hjj              0/1     Pending            0          2m6s
kube-system   coredns-66bff467f8-ktvsz                   0/1     Running            0          3m23s
kube-system   coredns-66bff467f8-nvvjz                   0/1     Running            0          3m23s
kube-system   etcd-node1                                 1/1     Running            0          3m33s
kube-system   kube-apiserver-node1                       1/1     Running            0          3m33s
kube-system   kube-controller-manager-node1              1/1     Running            0          3m33s
kube-system   kube-proxy-s582x                           1/1     Running            0          3m23s
kube-system   kube-scheduler-node1                       1/1     Running            0          3m33s
```

#### Adding a Worker Node (join)

Run on the worker node:

```sh
# Get the join command
$ kubeadm token create --print-join-command
kubeadm join 192.168.28.15:6443 --token 1ovd36.ft4mefr909iotg0a --discovery-token-ca-cert-hash sha256:82953a3ed178aa8c511792d0e21d9d3283e7575f3d3350a00bea3e34c2b87d29

# Run on the worker node
$ kubeadm join 192.168.28.15:6443 --token 1ovd36.ft4mefr909iotg0a --discovery-token-ca-cert-hash sha256:82953a3ed178aa8c511792d0e21d9d3283e7575f3d3350a00bea3e34c2b87d29
```

Checking node status:

```sh
$ kubectl get node
NAME    STATUS   ROLES    AGE     VERSION
node1   Ready    master   8m50s   v1.18.6
node2   Ready    <none>   16s     v1.18.6
node3   Ready    <none>   16s     v1.18.6
```

### Deploying services: command-line (CLI) based

...
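The `--pod-network-cidr 10.244.0.0/16` passed to kubeadm reserves an address pool that the CNI plugin carves into per-node pod subnets (Flannel defaults to a /24 per node; that per-node size is an assumption about the CNI, not something kubeadm does itself). The arithmetic can be sketched with Python's stdlib:

```python
import ipaddress

# Split the kubeadm pod CIDR into per-node /24 subnets
pod_cidr = ipaddress.ip_network("10.244.0.0/16")
node_subnets = list(pod_cidr.subnets(new_prefix=24))

print(len(node_subnets))   # room for 256 node subnets
print(node_subnets[0])     # first node's pod range: 10.244.0.0/24
```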

August 12, 2021 · Byung Kyu KIM

Ansible-101

## Ansible Install

- CentOS 7
  - Installed on a Python 2.7 base by default
  - If the pywinrm package is needed later, the python2-pip package must also be installed

```sh
sudo yum install ansible
```

- Alternatively, install with pip (per user):

```sh
pip install ansible  # --user
```

## Terminology

- Inventory: the list of remote servers to manage
  - If not specified, the hosts file at the default path is used: `/etc/ansible/hosts`
- Module: how a task is executed
  - https://docs.ansible.com/ansible/latest/modules/modules_by_category.html{:target="_blank"}
  - e.g. command, shell, copy, service
- Playbook: a definition of the task execution process, written as a YAML file
- Idempotence: the property that applying an operation multiple times does not change the result

## Ansible Config

- Path: `/etc/ansible/ansible.cfg`
- Ignore ansible ssh authenticity: skip the host key check on the first ssh connection
  - https://stackoverflow.com/questions/32297456/how-to-ignore-ansible-ssh-authenticity-checking{:target="_blank"}

```ini
[defaults]
host_key_checking = False
```

- Callback plugins: minimal stdout

```ini
[defaults]
stdout_callback = minimal
```

- See also: How to disable strict host key checking in ssh?
  - https://askubuntu.com/questions/87449/how-to-disable-strict-host-key-checking-in-ssh{:target="_blank"}

## Inventory setup

Registering a public key: generate a key and register the public key on each managed server.

```sh
# Generate a key
ssh-keygen

# Register the public key
ssh-copy-id [server-ip]
```

Registering servers in the inventory: `/etc/ansible/hosts`

- If not using the hosts file in the default directory, specify an inventory file with the `-i` option
- https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html#list-of-behavioral-inventory-parameters{:target="_blank"}

```ini
# ansible hosts file example
192.168.1.5

[db]
192.168.1.10

[webservers]
192.168.1.100
192.168.1.110

# when registered in the system hosts file
[apiserver]
apiserver1
apiserver2

# registered by host name
server1 ansible_host=192.168.1.50
server2 ansible_host=192.168.1.51
```

How to target hosts:

- `all` : every server in the inventory
- `webservers` : the servers defined as webservers
- `192.168.1.100` : the 192.168.1.100 server

## Ad-hoc Execution

Inline execution through the `ansible` command. Main arguments:

```
Usage: ansible <host-pattern> [options]

  -a MODULE_ARGS, --args=MODULE_ARGS
                        module arguments
  -e EXTRA_VARS, --extra-vars=EXTRA_VARS
                        set additional variables as key=value or YAML/JSON, if
                        filename prepend with @
  -f FORKS, --forks=FORKS
                        specify number of parallel processes to use (default=5)
  -h, --help            show this help message and exit
  -i INVENTORY, --inventory=INVENTORY, --inventory-file=INVENTORY
                        specify inventory host path or comma separated host
                        list. --inventory-file is deprecated
  -m MODULE_NAME, --module-name=MODULE_NAME
                        module name to execute (default=command)
  -v, --verbose         verbose mode (-vvv for more, -vvvv to enable
                        connection debugging)
  --version             show program's version number, config file location,
                        configured module search path, module location,
                        executable location and exit
  -b, --become          run operations with become (does not imply password
                        prompting)
```

Example: checking hostnames

```sh
# Run against the webservers list
# command module, "hostname" argument
ansible webservers -m command -a "hostname"

# Default module (can be omitted): -m command
ansible webservers -a "hostname"

# All servers
ansible all -a "hostname"
```

Using other modules:

```sh
# ping module
ansible webservers -m ping

# copy module: local → host
ansible webservers -m copy -a "src=file.txt dest=/home/cdecl/web/"

# service module: start the kubelet service
ansible webservers -m service -a "name=kubelet state=started"
```

## Using Playbooks

A way to run a process by describing a series of tasks.

...
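A minimal sketch of the playbook format the section ends on — the host group, package, and file name are placeholders; the `yum` and `service` modules echo the ad-hoc examples above:

```yaml
# site.yml (assumed example) — run with: ansible-playbook site.yml
- hosts: webservers
  become: yes
  tasks:
    - name: Install nginx
      yum:
        name: nginx
        state: present

    - name: Ensure nginx is running
      service:
        name: nginx
        state: started
```

Because both modules are idempotent, rerunning the playbook reports no changes once the package is installed and the service is up.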

August 11, 2021 · Byung Kyu KIM