Server setup

Registering the domain on the Vultr server

GoDaddy nameserver settings

DNS
ns1.vultr.com
ns2.vultr.com

Vultr DNS domain settings

Type  Name    Data              TTL
A             IP ADDRESS        300
A     www     IP ADDRESS        300
A     gitlab  IP ADDRESS        300
CNAME *       DOMAIN NAME       300
MX            mail.DOMAIN NAME

Installing a Let's Encrypt certificate

nginx + CentOS/RHEL 8 certbot

Things to check

Installing certbot

The Certbot snap supports the x86_64, ARMv7, and ARMv8 architectures.
While we strongly recommend that most users install Certbot through the snap,
you can find alternate installation instructions here.

Installing snap on CentOS
dnf install epel-release
dnf upgrade

yum install snapd
systemctl enable --now snapd.socket
ln -s /var/lib/snapd/snap /snap
Log out and back in (or restart the system) to make sure snap's paths have been updated correctly.
# snap install core;
Download snap "core" (10958) from channel "stable"
2021-04-15T20:34:29+09:00 INFO Waiting for automatic snapd restart...
core 16-2.49.2 from Canonical✓ installed
# snap refresh core
snap "core" has no updates available
Installing certbot via snap
# snap install --classic certbot
certbot 1.14.0 from Certbot Project (certbot-eff✓) installed
# ln -s /snap/bin/certbot /usr/bin/certbot
Agree to the terms of service when prompted
Update the nginx configuration
# certbot --nginx

Which names would you like to activate HTTPS for?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: vhost0.aimpugn.me
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel):
Requesting a certificate for vhost0.aimpugn.me
Performing the following challenges:
http-01 challenge for vhost0.aimpugn.me
Waiting for verification...
Cleaning up challenges
Deploying Certificate to VirtualHost /etc/nginx/sites-enabled/aimpugn.me.conf
Redirecting all traffic on port 80 to ssl in /etc/nginx/sites-enabled/aimpugn.me.conf

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Congratulations! You have successfully enabled https://vhost0.aimpugn.me
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   <PATH>
   Your key file has been saved at:
   <PATH>
   Your certificate will expire on 2021-07-14. To obtain a new or
   tweaked version of this certificate in the future, simply run
   certbot again with the "certonly" option. To non-interactively
   renew *all* of your certificates, run "certbot renew"
 - If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le
# nginx -t
# nginx -s reload
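Certbot installed via snap schedules renewal automatically, and Let's Encrypt certificates are valid for 90 days. A small helper to see how many days a deployed certificate has left (a sketch; `cert_days_left` is a name made up here, and it relies on GNU `date` for parsing):

```shell
# cert_days_left CERTFILE: print the number of whole days until an
# X.509 certificate expires (parses openssl's notAfter with GNU date).
cert_days_left() {
    local end now
    end=$(date -d "$(openssl x509 -enddate -noout -in "$1" | cut -d= -f2)" +%s)
    now=$(date +%s)
    echo $(( (end - now) / 86400 ))
}

# e.g. against the certificate file whose <PATH> certbot printed above:
# cert_days_left /path/to/fullchain.pem
```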

Docker + Kubernetes + Helm

Docker

Docker - install on CentOS

yum remove  docker \
            docker-client \
            docker-client-latest \
            docker-common \
            docker-latest \
            docker-latest-logrotate \
            docker-logrotate \
            docker-engine

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# installs the latest version
yum install docker-ce docker-ce-cli containerd.io
# https://kubernetes.io/ko/docs/setup/production-environment/container-runtimes/#containerd
# Changing the container runtime and kubelet to use systemd as the cgroup driver makes the system more stable.
# To configure this for Docker, set native.cgroupdriver=systemd.
mkdir /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

cat /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
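Before restarting the daemon it can help to confirm that daemon.json parses, since a malformed file prevents dockerd from starting at all. A sketch, assuming python3 is on the host (`validate_daemon_json` is a name made up here):

```shell
# validate_daemon_json FILE: succeed only if FILE is well-formed JSON.
validate_daemon_json() {
    python3 -m json.tool "$1" > /dev/null 2>&1
}

# Typical use before the restart below:
# validate_daemon_json /etc/docker/daemon.json && systemctl restart docker
```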

sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker

Kubernetes

Kubernetes overview

Controller Node

Worker Node

Installing kubeadm, kubelet, and kubectl

Package descriptions
kubeadm: the command to bootstrap the cluster
kubelet: the component that runs on every machine in the cluster and performs tasks such as starting pods and containers
kubectl: the command-line utility for talking to the cluster

Installing kubectl

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
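dl.k8s.io also publishes a bare-hex `.sha256` file next to each binary, so the download can be checked before installing. A sketch (`verify_sha256` is a name invented here):

```shell
# verify_sha256 FILE DIGESTFILE: check FILE against a digest file that
# contains only the hex SHA-256 (the format dl.k8s.io publishes).
verify_sha256() {
    echo "$(cat "$2")  $1" | sha256sum --check --status
}

# e.g., after also fetching the digest for the same version:
# curl -LO "https://dl.k8s.io/release/<VERSION>/bin/linux/amd64/kubectl.sha256"
# verify_sha256 kubectl kubectl.sha256 && echo "kubectl: OK"
```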

kubectl version --client
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}

Installing kubeadm

Remove swap space

vi /etc/fstab
# comment out the swap entry
# /dev/mapper/centos-swap ... 
swapoff -a # also turn swap off for the current boot; the fstab change only applies after a reboot

Letting iptables see bridged traffic

lsmod | grep br_netfilter

br_netfilter           24576  0
bridge                188416  1 br_netfilter
sysctl --all | grep bridge
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-filter-pppoe-tagged = 0
net.bridge.bridge-nf-filter-vlan-tagged = 0
net.bridge.bridge-nf-pass-vlan-input-dev = 0
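The module and sysctl state shown above is runtime-only. To persist it across reboots, the upstream kubeadm install guide writes two small config files and reloads with `sysctl --system`:

```
# /etc/modules-load.d/k8s.conf
br_netfilter

# /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
```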
# add the repo
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

# Set SELinux to permissive mode (effectively disabling it)
# Needed to allow containers to access the host filesystem (for example, as required by pod networks)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

sudo systemctl enable --now kubelet

Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.

Configure the cgroup driver used by kubelet on the control plane node

# kubeadm-custom-config.yaml
# kubeadm init --config /path/to/kubeadm-custom-config.yaml 
# https://kubernetes.io/ko/docs/setup/production-environment/container-runtimes/#containerd
# https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/configure-cgroup-driver/
# kubeadm-config.yaml
kind: ClusterConfiguration # required
apiVersion: kubeadm.k8s.io/v1beta2
# kubernetesVersion: v1.21 # fails with: version "v1.21" doesn't match patterns for neither semantic version nor labels (stable, latest, ...)
# see https://github.com/kubernetes/kubernetes/blob/0f1d105f8d3e114f0bf47307513fe519a71351a2/cmd/kubeadm/app/util/version.go#L72
kubernetesVersion: latest-1.21
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
# control plane node
firewall-cmd --zone=public --permanent --add-port=6443/tcp
firewall-cmd --zone=public --permanent --add-port=2379-2380/tcp
firewall-cmd --zone=public --permanent --add-port=10250/tcp
firewall-cmd --zone=public --permanent --add-port=10251/tcp
firewall-cmd --zone=public --permanent --add-port=10252/tcp

systemctl restart firewalld

# worker node
firewall-cmd --zone=public --permanent --add-port=10250/tcp
firewall-cmd --zone=public --permanent --add-port=30000-32767/tcp
systemctl restart firewalld

# an unnecessary rule existed, so remove it
# https://stackoverflow.com/a/47015978
firewall-cmd --zone=public --permanent --remove-rich-rule='rule family="ipv4" source address="172.17.0.0/16" accept'
systemctl stop firewalld
pkill -f firewalld
systemctl start firewalld
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path

kubeadm init

Options
--apiserver-advertise-address
--pod-network-cidr

Why a CIDR? The Container Network Interface (CNI) uses it to create a dedicated virtual network for communication between pods in the cluster.

--service-cidr
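As a quick sanity check on CIDR sizing: the prefix length determines how many pod or service addresses the cluster can hand out. A throwaway helper (`cidr_size` is a name made up here):

```shell
# cidr_size PREFIXLEN: number of IPv4 addresses in a block of that size.
cidr_size() {
    echo $(( 1 << (32 - $1) ))
}

# A /16 pod network (e.g. --pod-network-cidr=10.244.0.0/16) allows 65536
# addresses, while a /24 would allow only 256 for the whole cluster.
```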
Troubleshooting
When "[kubelet-check] Initial timeout of 40s passed" occurs
When "The connection to the server localhost:8080 was refused - did you specify the right host or port?" occurs:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
init result
[aimpugn@vultr ~]$ kubectl cluster-info
Kubernetes control plane is running at https://IP_ADDRESS:6443
CoreDNS is running at https://IP_ADDRESS:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

[aimpugn@vultr ~]$ kubectl get nodes
NAME          STATUS     ROLES                  AGE   VERSION
vultr.guest   NotReady   control-plane,master   12m   v1.21.0

Network

Network overview

Types
container-to-container networking
pod-to-pod networking
pod-to-service networking
external-to-service networking

Pod network plugins

Flannel

Weave

Calico

AWS VPC

Installing a pod network add-on

Weave - Integrating Kubernetes via the Addon

# https://cloud.weave.works/k8s/net?k8s-version=Q2xpZW50IFZlcnNpb246IHZlcnNpb24uSW5mb3tNYWpvcjoiMSIsIE1pbm9yOiIyMSIsIEdpdFZlcnNpb246InYxLjIxLjAiLCBHaXRDb21taXQ6ImNiMzAzZTYxM2ExMjFhMjkzNjRmNzVjYzY3ZDNkNTgwODMzYTc0NzkiLCBHaXRUcmVlU3RhdGU6ImNsZWFuIiwgQnVpbGREYXRlOiIyMDIxLTA0LTA4VDE2OjMxOjIxWiIsIEdvVmVyc2lvbjoiZ28xLjE2LjEiLCBDb21waWxlcjoiZ2MiLCBQbGF0Zm9ybToibGludXgvYW1kNjQifQo=
[root@vultr ~]# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created
[root@vultr ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   coredns-558bd4d5db-9d5b5              1/1     Running   0          5d
kube-system   coredns-558bd4d5db-kpzcd              1/1     Running   0          5d
kube-system   etcd-vultr.guest                      1/1     Running   0          5d
kube-system   kube-apiserver-vultr.guest            1/1     Running   0          5d
kube-system   kube-controller-manager-vultr.guest   1/1     Running   0          5d
kube-system   kube-proxy-tjqwq                      1/1     Running   0          5d
kube-system   kube-scheduler-vultr.guest            1/1     Running   0          5d
kube-system   weave-net-tld6h                       2/2     Running   1          3m19s
[root@vultr ~]# kubectl get nodes
NAME          STATUS   ROLES                  AGE   VERSION
vultr.guest   Ready    control-plane,master   5d    v1.21.0

kubeadm join

How to generate a token
kubeadm token list # list existing tokens
kubeadm token create --print-join-command # issue a new token and print the matching join command
Join the cluster with kubeadm join
kubeadm join <IP_ADDRESS>:6443 --token <TOKEN_ID>.<TOKEN> \
                               --discovery-token-ca-cert-hash <hash-type>:<hex-encoded-value>
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster
# kubectl get nodes
NAME             STATUS   ROLES                  AGE    VERSION
vultr.guest      Ready    control-plane,master   5d     v1.21.0
worker-aimpugn   Ready    <none>                 104s   v1.21.0
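If the join command was lost, the `--discovery-token-ca-cert-hash` value can be recomputed from the cluster CA on the control plane (`kubeadm token create --print-join-command` also prints the whole command). A sketch with openssl (`ca_cert_hash` is a name invented here):

```shell
# ca_cert_hash CERTFILE: print the sha256:<hex> discovery hash kubeadm
# expects, i.e. the SHA-256 of the CA's DER-encoded public key.
ca_cert_hash() {
    echo "sha256:$(openssl x509 -pubkey -noout -in "$1" \
        | openssl pkey -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 \
        | awk '{print $NF}')"
}

# On the control plane:
# ca_cert_hash /etc/kubernetes/pki/ca.crt
```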
kubeadm join troubleshooting
ERROR FileContent--proc-sys-net-ipv4-ip_forward
# fix: enable IPv4 forwarding
sysctl -w net.ipv4.ip_forward=1

[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...

The cluster-info ConfigMap does not yet contain a JWS signature for token ID "TOKEN_ID", will try again

I0426 15:29:47.674860 391575 token.go:221] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "TOKEN_ID", will try again

Kubernetes Networking

Cluster Networking

Ansible

Ansible overview

Installing Ansible on CentOS

# yum install ansible

# ansible --version
ansible 2.9.20
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.6/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.6.8 (default, Aug 24 2020, 17:57:11) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]

Ansible command shell completion

yum install python3-argcomplete

hosts configuration

# /etc/ansible/hosts
# - blank lines are ignored
# - host groups are delimited by `[header]` elements
# - either a `hostname` or an `ip` may be entered
# - a `hostname` or `ip` can be a member of multiple groups

# ungrouped hosts go before any group header
## green.example.com
## blue.example.com
## 192.168.100.1
## 192.168.100.10

## [webservers] # hosts belonging to the `webservers` group
## alpha.example.org http_port=80 maxRequestsPerChild=808
## beta.example.org
## 192.168.1.100
## 192.168.1.110

## www[001:006].example.com for multiple hosts matching a pattern

## [dbservers] database servers belonging to the `dbservers` group
## db01.intranet.mydomain.net
## db02.intranet.mydomain.net
## 10.25.1.56
## db-[99:101]-node.example.com
master.com

[webservers]
worker.domain.com

# the same inventory expressed in YAML format:
all:
  hosts:
    green.example.com:
    blue.example.com:
    192.168.100.1:
    192.168.100.10:
  children:
    webservers:
      hosts:
        alpha.example.org:
          http_port: 80
          maxRequestsPerChild: 808
        beta.example.org:
        192.168.1.100:
        192.168.1.110:
    dbservers:
      hosts:
        db01.intranet.mydomain.net:
        db02.intranet.mydomain.net:
        10.25.1.56:
        db-[99:101]-node.example.com:
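With an inventory like the above in place, a minimal playbook can exercise the `webservers` group (a sketch; `site.yml` is an arbitrary file name, and `ansible.builtin.ping` is Ansible's built-in connectivity-check module):

```yaml
# site.yml - minimal connectivity check against the inventory above
- name: Check connectivity to web servers
  hosts: webservers
  tasks:
    - name: Ping every host in the group
      ansible.builtin.ping:
```

Run it with `ansible-playbook site.yml`; the same check can be done ad hoc with `ansible webservers -m ping`.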

Helm charts

Helm charts overview

Installing Helm charts

# curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
# chmod 700 get_helm.sh
# ./get_helm.sh
Downloading https://get.helm.sh/helm-v3.5.4-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
# helm repo add stable https://charts.helm.sh/stable
"stable" has been added to your repositories
# helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
# helm repo update
# helm install ingress-nginx ingress-nginx/ingress-nginx
NAME: ingress-nginx
LAST DEPLOYED: Wed Apr 28 23:55:09 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w ingress-nginx-controller'

An example Ingress that makes use of the controller:

  apiVersion: networking.k8s.io/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
    name: example
    namespace: foo
  spec:
    rules:
      - host: www.example.com
        http:
          paths:
            - backend:
                serviceName: exampleService
                servicePort: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
        - hosts:
            - www.example.com
          secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls
# kubectl --namespace default get services -o wide -w ingress-nginx-controller
NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
ingress-nginx-controller   LoadBalancer   10.110.64.189   <pending>     80:30687/TCP,443:32008/TCP   45s   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
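On a single VM without a cloud load balancer, the EXTERNAL-IP above can stay `<pending>` indefinitely. One common workaround is to pin the Service to the node's public IP via chart values (a sketch; the IP below is a placeholder, and `controller.service.externalIPs` is an ingress-nginx chart value):

```yaml
# values.yaml - assumes the node's public IP should serve ingress traffic
controller:
  service:
    externalIPs:
      - 203.0.113.10   # placeholder: replace with the server's public IP
```

Applied with `helm upgrade ingress-nginx ingress-nginx/ingress-nginx -f values.yaml`.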

GitLab

GitLab overview

Installing GitLab from the Linux package

Requirements when installing GitLab from the Linux package

OS

Git

git clone https://gitlab.com/gitlab-org/gitaly.git -b 13-11-stable /tmp/gitaly
cd /tmp/gitaly
make git GIT_PREFIX=/usr/local

git --version # git version 2.31.1

GraphicsMagick

dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm -y
dnf install GraphicsMagick

Mail server

dnf install postfix

Exiftool

dnf install perl-Image-ExifTool

Ruby

dnf install gnupg2 curl tar
dnf install @ruby

curl -sSL https://get.rvm.io | bash

usermod -aG rvm aimpugn # add the user to the rvm group

source /etc/profile.d/rvm.sh # reload the system environment variables

rvm reload

rvm requirements # install package requirements

rvm list known # once installation finishes, lists the Ruby versions available to install

rvm install ruby 2.7.3 # 3.0.1 also exists, but as the newest release it may have compatibility issues, so install 2.7.3

Go

go version
go version go1.15.5 linux/amd64

Node

curl -sL "https://rpm.nodesource.com/setup_16.x" | bash - # add the repository

yum install nodejs # run after the repository setup completes

## Run `sudo yum install -y nodejs` to install Node.js 16.x and npm.
## You may also need development tools to build native addons:
sudo yum install gcc-c++ make
## To install the Yarn package manager, run:
curl -sL https://dl.yarnpkg.com/rpm/yarn.repo | sudo tee /etc/yum.repos.d/yarn.repo
sudo yum install yarn

node --version # v16.1.0

System users

adduser -M --system -c 'GitLab' git

postgresql

yum install https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm
Check the installed package information
rpm -qi pgdg-redhat-repo

Name        : pgdg-redhat-repo
Version     : 42.0
Release     : 17
Architecture: noarch
Install Date: 2021-05-10 21:56:50
Group       : Unspecified
Size        : 11735
License     : PostgreSQL
Signature   : DSA/SHA1, 2021-05-05 20:52:04, Key ID 1f16d2e1442df0f8
Source RPM  : pgdg-redhat-repo-42.0-17.src.rpm
Build Date  : 2021-05-05 20:51:18
Build Host  : koji-rhel8-x86-64-pgbuild
Relocations : (not relocatable)
Vendor      : PostgreSQL Global Development Group
URL         : https://yum.postgresql.org
Summary     : PostgreSQL PGDG RPMs- Yum Repository Configuration for Red Hat / CentOS
Description :
This package contains yum configuration for Red Hat Enterprise Linux, CentOS,
and also the GPG key for PGDG RPMs.
Disable the built-in PostgreSQL module
dnf search postgresql12
dnf -qy module disable postgresql
Install postgresql12
dnf install postgresql12 postgresql12-server
# check the PGDATA path
[root@vultr ~]# systemctl show -p Environment postgresql-12.service
Environment=PGDATA=/var/lib/pgsql/12/data/ PG_OOM_ADJUST_FILE=/proc/self/oom_score_adj PG_OOM_ADJUST_VALUE=0

[root@vultr downloads]# /usr/pgsql-12/bin/postgresql-12-setup initdb
Initializing database ... OK
systemctl enable --now postgresql-12
sudo -u postgres psql -c "ALTER USER postgres WITH PASSWORD '<PASSWORD>'"
sudo -u postgres psql -d template1 -c "CREATE USER git CREATEDB;"
sudo -u postgres psql -d template1 -c "CREATE EXTENSION IF NOT EXISTS pg_trgm;"
dnf install postgresql12-contrib
sudo -u postgres psql -d template1 -c "CREATE EXTENSION IF NOT EXISTS btree_gist;"
sudo -u postgres psql -d template1 -c "CREATE DATABASE gitlabhq_production OWNER git;"
$ vi /var/lib/pgsql/12/data/pg_hba.conf

# TYPE DATABASE USER ADDRESS METHOD
local  all      all          md5


$ systemctl restart postgresql-12

-- verify the extensions are enabled (run inside psql)
SELECT name, true AS enabled
FROM pg_available_extensions
WHERE name IN ('pg_trgm', 'btree_gist')
AND installed_version IS NOT NULL;

    name    | enabled
------------+---------
 pg_trgm    | t
 btree_gist | t
(2 rows)

References