Binary Deployment of Kubernetes (k8s)

1. Environment Preparation

Software environment

Software     Version
OS           CentOS Linux release 7.9.2009 (Core) on the VMs; CentOS Linux release 7.7.1908 (Core) in production
docker       19.03.9 (the tarball installed below)
kubernetes   1.18.3

Overall server plan:

Role     IP                Components
master   192.168.188.128   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
node1    192.168.188.129   kubelet, kube-proxy, docker, etcd
node2    192.168.188.130   kubelet, kube-proxy, docker, etcd

VM mapping: w1 = 192.168.188.129 (node1), w2 = 192.168.188.128 (master), w3 = 192.168.188.130 (node2)

Operating system initialization (run on every node; only the hostnamectl line differs per node)

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent
setenforce 0 # temporary

# Disable swap
swapoff -a # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent

# w2 - 192.168.188.128 - master
hostnamectl set-hostname k8s-master

# w1 - 192.168.188.129 - node1
hostnamectl set-hostname k8s-node1

# w3 - 192.168.188.130 - node2
hostnamectl set-hostname k8s-node2

# Add hosts entries on the master
cat >> /etc/hosts << EOF
192.168.188.128 k8s-master
192.168.188.129 k8s-node1
192.168.188.130 k8s-node2
EOF

# Time synchronization
yum install ntpdate -y
ntpdate time.windows.com
# Set the timezone to UTC+8 (the POSIX zone Etc/GMT-8 means UTC+8; the sign is inverted by convention)
sudo cp -a /usr/share/zoneinfo/Etc/GMT-8 /etc/localtime
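A quick sanity check after initialization (a minimal sketch; all standard commands):

# Firewall should be inactive, SELinux permissive (disabled after reboot), swap zeroed
systemctl is-active firewalld
getenforce
free -m | grep -i swap
# The offset in the output should be +0800
date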

2. Deploy the etcd Cluster

Node name   IP
etcd-1      192.168.188.128
etcd-2      192.168.188.129
etcd-3      192.168.188.130

Prepare cfssl, the open-source certificate tool that generates certificates from JSON definitions.

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

The download URLs are on the public internet, and configuring a proxy inside the VMs is troublesome (SSL verification, DNS resolution, and so on), so the files were downloaded on the host and copied in through the shared folder.
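Before generating anything, it is worth confirming the copied binaries actually run:

# Confirm the tools are on PATH and executable
which cfssl cfssljson cfssl-certinfo
cfssl version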

Generate etcd certificates

mkdir -p ~/TLS/{etcd,k8s}
cd ~/TLS/etcd

# Self-signed CA
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json << EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF

# Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

ls *pem

# Issue the etcd HTTPS certificate with the self-signed CA

# Create the certificate signing request file
cat > server-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "192.168.188.128",
    "192.168.188.129",
    "192.168.188.130"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

2023/03/09 18:18:11 [INFO] generate received request
2023/03/09 18:18:11 [INFO] received CSR
2023/03/09 18:18:11 [INFO] generating key: rsa-2048
2023/03/09 18:18:11 [INFO] encoded CSR
2023/03/09 18:18:11 [INFO] signed certificate with serial number 302139856182803477179962090624930630774019179150
2023/03/09 18:18:11 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
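To verify the SANs and expiry of the issued certificate, it can be decoded with cfssl-certinfo (or openssl, whichever is handy):

# Show the certificate's subject, SANs and validity window
cfssl-certinfo -cert server.pem
# Equivalent check with openssl:
openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"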

Deploy the etcd cluster

Download: https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz

Do everything on node 1 first, then copy the results to nodes 2 and 3.

Create the working directory and unpack the binaries

mkdir /opt/etcd/{bin,cfg,ssl} -p
tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
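Optionally confirm the unpacked binaries match the intended release:

/opt/etcd/bin/etcd --version
/opt/etcd/bin/etcdctl version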

Create the etcd configuration file

cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.188.128:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.188.128:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.188.128:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.188.128:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.188.128:2380,etcd-2=https://192.168.188.129:2380,etcd-3=https://192.168.188.130:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

Manage etcd with systemd

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

Copy the certificates generated earlier

cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/

Start and enable on boot

Note: with ETCD_INITIAL_CLUSTER_STATE="new", etcd on the first node waits for the other members to join, so the start command below may appear to hang until nodes 2 and 3 are up — that is expected.

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd

Copy all the files generated on node 1 to nodes 2 and 3

scp -r /opt/etcd/ root@192.168.188.129:/opt/
scp /usr/lib/systemd/system/etcd.service root@192.168.188.129:/usr/lib/systemd/system/
scp -r /opt/etcd/ root@192.168.188.130:/opt/
scp /usr/lib/systemd/system/etcd.service root@192.168.188.130:/usr/lib/systemd/system/

Then on node 2 and node 3, change the node name and the server IPs in etcd.conf, and afterwards run the same daemon-reload/start/enable sequence on each:

vi /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-1" # change: etcd-2 on node 2, etcd-3 on node 3
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.188.129:2380" # change to the current server's IP
ETCD_LISTEN_CLIENT_URLS="https://192.168.188.129:2379" # change to the current server's IP

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.188.129:2380" # change to the current server's IP
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.188.129:2379" # change to the current server's IP
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.188.128:2380,etcd-2=https://192.168.188.129:2380,etcd-3=https://192.168.188.130:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Check cluster health

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.188.128:2379,https://192.168.188.129:2379,https://192.168.188.130:2379" endpoint health
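endpoint health only probes each endpoint; member list additionally confirms that all three members registered with the expected peer URLs (same TLS flags, any one endpoint will do):

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.188.128:2379" member list -w table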

3. Install Docker

cd /mnt/hgfs/tmp
tar zxvf docker-19.03.9.tgz
cp docker/* /usr/bin

Manage Docker with systemd

cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF

Create the configuration file

mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
"log-driver":"json-file",
"log-opts": {"max-size":"500m", "max-file":"3"}
}
EOF

Start and enable on boot

systemctl daemon-reload
systemctl start docker
systemctl enable docker
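A quick check that the daemon picked up daemon.json (not required, just reassuring):

# Should list the aliyuncs mirror and the json-file log driver
docker info | grep -A 1 -i "registry mirrors"
docker info -f '{{.LoggingDriver}}'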

4. Deploy the Master Node

Generate the kube-apiserver certificate

Self-signed certificate authority (CA)

cd /root/TLS/k8s
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

Generate the certificate

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

ls *pem

Issue the kube-apiserver HTTPS certificate with the self-signed CA

Create the certificate signing request file:

cd ~/TLS/k8s
cat > server-csr.json << EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.188.128",
    "192.168.188.129",
    "192.168.188.130",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

Generate the certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

Unpack the binaries

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs} 
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
cp kubectl /usr/bin/
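kubectl works immediately; the server components can only be checked after they are started below:

kubectl version --client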

Deploy kube-apiserver

Create the configuration file

cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://192.168.188.128:2379,https://192.168.188.129:2379,https://192.168.188.130:2379 \\
--bind-address=192.168.188.128 \\
--secure-port=6443 \\
--advertise-address=192.168.188.128 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF

Copy the certificates generated earlier

cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/

Enable the TLS Bootstrapping mechanism

Create the token file referenced in the configuration above:

cat > /opt/kubernetes/cfg/token.csv << EOF
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF

A token can be generated with:

head -c 16 /dev/urandom | od -An -t x | tr -d ' '
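If a fresh token is generated, token.csv must be rewritten with it, and the same value must later go into bootstrap.kubeconfig. A minimal sketch:

# Generate a random token and rewrite token.csv with it
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /opt/kubernetes/cfg/token.csv << EOF
${TOKEN},kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
echo ${TOKEN} # keep this for the TOKEN variable used below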

Manage kube-apiserver with systemd

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

Start and enable on boot

systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver

Authorize the kubelet-bootstrap user to request certificates

kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

Deploy kube-controller-manager

Create the configuration file

cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF

Manage kube-controller-manager with systemd

cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

Start and enable on boot

systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager

Deploy kube-scheduler

Create the configuration file

cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1"
EOF

Manage kube-scheduler with systemd

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF

Start and enable on boot

systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler

Check cluster status

kubectl get cs
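If any component reports Unhealthy, the three systemd units are the first place to look (a small sketch):

for svc in kube-apiserver kube-controller-manager kube-scheduler; do
  echo "${svc}: $(systemctl is-active ${svc})"
done
# Recent log lines from a failing unit:
journalctl -u kube-apiserver --no-pager | tail -n 20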

5. Deploy the Worker Nodes

Create working directories and copy binaries

Create the working directory on every worker node:

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs} 

Copy from the master's server package (here taken from the shared folder):

cd /mnt/hgfs/tmp/kubernetes/server/bin
cp kubelet kube-proxy /opt/kubernetes/bin # local copy

Deploy kubelet

Create the configuration file (three variants below, one per node; run the one whose --hostname-override matches the node)

# On k8s-master:
cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8s-master \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF

# On k8s-node1:
cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8s-node1 \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF

# On k8s-node2:
cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8s-node2 \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF

Configure the parameter file

cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF

Generate the bootstrap.kubeconfig file

KUBE_APISERVER="https://192.168.188.128:6443" # apiserver IP:PORT
TOKEN="c47ffb939f5ca36231d9e3121a252940" # must match token.csv

# Generate the kubelet bootstrap kubeconfig
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials "kubelet-bootstrap" \
--token=${TOKEN} \
--kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user="kubelet-bootstrap" \
--kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

Copy it to the configuration directory:

cp bootstrap.kubeconfig /opt/kubernetes/cfg

Manage kubelet with systemd

cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

Start and enable on boot

systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet

Approve the kubelet certificate request and join the cluster

# List kubelet certificate requests
kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-Hn8Xzb1N8xf2-UQ0qKy3HRnqjsqvVKZXhK8w9R4efng 18s kubelet-bootstrap Pending
# Approve the request
kubectl certificate approve node-csr-Hn8Xzb1N8xf2-UQ0qKy3HRnqjsqvVKZXhK8w9R4efng

# List the nodes
kubectl get node
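When several nodes bootstrap at once, the pending CSRs can be approved in one pass (fine in a lab; review them individually in production):

kubectl get csr -o name | xargs -r kubectl certificate approve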

Deploy kube-proxy

Create the configuration file

cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF

Configure the parameter file

cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master # change to k8s-node1 / k8s-node2 on the other nodes
clusterCIDR: 10.0.0.0/24
EOF

Generate the kube-proxy.kubeconfig file

Generate the kube-proxy certificate:

# Switch to the working directory
cd ~/TLS/k8s

# Create the certificate signing request file
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

ls kube-proxy*pem
kube-proxy-key.pem kube-proxy.pem

Generate the kubeconfig file:

KUBE_APISERVER="https://192.168.188.128:6443"

kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=./kube-proxy.pem \
--client-key=./kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Copy it to the path specified in the configuration:

cp kube-proxy.kubeconfig /opt/kubernetes/cfg/

Manage kube-proxy with systemd

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

Start and enable on boot

systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy

Deploy the CNI Network

First prepare the CNI plugin binaries:

Download: https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz

Unpack them into the default working directory:

mkdir -p /etc/cni/net.d /opt/cni/bin
tar zxvf /mnt/hgfs/tmp/cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin
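A quick listing confirms the plugins landed where kubelet's --network-plugin=cni expects them:

ls /opt/cni/bin
# Expect bridge, flannel, host-local, loopback, portmap, among others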

Deploy the flannel CNI network:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# The default image registry is not reachable from here; switch to a Docker Hub mirror.
sed -i -r "s#quay.io/coreos/flannel:.*-amd64#lizhenliang/flannel:v0.12.0-amd64#g" kube-flannel.yml

In the end the manifest below was downloaded manually (kube-flannel.yml) and applied:

---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - "networking.k8s.io"
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.1.2
        #image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/flannel/flannel:v0.21.3
        #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.3
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.21.3
        #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.3
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
kubectl apply -f kube-flannel.yml

namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
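Watch the DaemonSet pods come up; nodes turn Ready once flannel has written /etc/cni/net.d/10-flannel.conflist on each of them:

kubectl get pods -n kube-flannel -o wide
kubectl get node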

Add a New Worker Node

Copy the deployed node files to the new nodes

scp -r /opt/kubernetes root@192.168.188.129:/opt/

scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.188.129:/usr/lib/systemd/system

scp -r /opt/cni/ root@192.168.188.129:/opt/

scp /opt/kubernetes/ssl/ca.pem root@192.168.188.129:/opt/kubernetes/ssl
######
scp -r /opt/kubernetes root@192.168.188.130:/opt/

scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.188.130:/usr/lib/systemd/system

scp -r /opt/cni/ root@192.168.188.130:/opt/

scp /opt/kubernetes/ssl/ca.pem root@192.168.188.130:/opt/kubernetes/ssl
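The same four copies are needed for every additional node, so a small loop keeps them consistent (a sketch, assuming root SSH access to the listed IPs):

for ip in 192.168.188.129 192.168.188.130; do
  scp -r /opt/kubernetes root@${ip}:/opt/
  scp -r /opt/cni/ root@${ip}:/opt/
  scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@${ip}:/usr/lib/systemd/system/
done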

Delete the kubelet certificate and kubeconfig files

Note: these files were auto-generated when the source node's kubelet registered; each node needs its own, so remove them on the new node and let them be re-issued.

rm /opt/kubernetes/cfg/kubelet.kubeconfig 
rm -f /opt/kubernetes/ssl/kubelet*

Change the hostname overrides

vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-node1

vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-node1

Start and enable on boot

systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
systemctl start kube-proxy
systemctl enable kube-proxy

Approve the new node's kubelet certificate request on the master

kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-4zTjsaVSrhuyhIGqsefxzVoZDCNKei-aE2jyTP81Uro 89s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending

kubectl certificate approve node-csr-4zTjsaVSrhuyhIGqsefxzVoZDCNKei-aE2jyTP81Uro

Check node status

kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready <none> 26h v1.18.3
k8s-node1 Ready <none> 21h v1.13.0
k8s-node2 Ready <none> 21h v1.13.0

Deploy Kuboard and CoreDNS

kubectl apply -f https://kuboard.cn/install-script/kuboard.yaml
kubectl apply -f https://addons.kuboard.cn/metrics-server/0.3.7/metrics-server.yaml

kubectl get pods -l k8s.kuboard.cn/name=kuboard -n kube-system

# Get the token
# If you installed Kubernetes following www.kuboard.cn, run this on the first master node
# https://kuboard.cn/install/install-dashboard.html#%E8%8E%B7%E5%8F%96token
echo $(kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep ^kuboard-user | awk '{print $1}') -o go-template='{{.data.token}}' | base64 -d)

Deploy CoreDNS

CoreDNS provides name resolution for Services inside the cluster.

kubectl apply -f coredns.yaml

kubectl get pods -n kube-system

DNS resolution test:

kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
If you don't see a command prompt, try pressing enter.

/ # nslookup kubernetes
Server: 10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name: kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local

Install Docker Compose

Download the docker-compose binary:

sudo curl -L "https://get.daocloud.io/docker/compose/releases/download/1.27.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

Make it executable:

sudo chmod +x /usr/local/bin/docker-compose

Check the Compose version with docker-compose -v:

[root@k8s-master tmp]# docker-compose -v
docker-compose version 1.27.4, build 40524192

Install Harbor

On the master node:

Offline installer: harbor-offline-installer-v1.5.0.tgz (fetched via Baidu Netdisk)

Unpack it:

tar xzvf harbor-offline-installer-v1.5.0.tgz

Enter the extracted harbor directory and edit the configuration:

vi harbor.cfg

Set hostname to the master's IP address (192.168.188.128) and change the Harbor admin password.

Run:

./prepare

./install.sh

Once it is up, check the containers:

docker-compose ps

Then open the Harbor web UI in a browser at the address configured above.
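This Harbor instance serves plain HTTP, so every Docker host that pushes to or pulls from it must whitelist the registry before docker login will succeed (a sketch; 192.168.188.128 is this deployment's master):

# Merge into /etc/docker/daemon.json on every node that uses the registry
cat > /etc/docker/daemon.json << 'EOF'
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "insecure-registries": ["192.168.188.128"],
  "log-driver": "json-file",
  "log-opts": {"max-size": "500m", "max-file": "3"}
}
EOF
systemctl restart docker
docker login 192.168.188.128 # admin / the password set in harbor.cfg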

Install GitLab

On node1:

mkdir gitlab && cd gitlab
vim docker-compose.yml

version: '3'

services:
  gitlab:
    image: 'twang2218/gitlab-ce-zh'
    container_name: 'gitlab'
    restart: always
    hostname: '192.168.188.129' # the host machine's IP, not the container's (a domain name would be used in production)
    environment:
      TZ: 'Asia/Shanghai'
      GITLAB_OMNIBUS_CONFIG: |
        external_url 'http://192.168.188.129:81' # address of the GitLab web UI (a domain name would be used in production)
        gitlab_rails['gitlab_shell_ssh_port'] = 2222 # SSH clone port
        unicorn['port'] = 8888 # internal GitLab port
    ports:
      - '80:81' # web port
      #- '443:443' # HTTPS port, unused here
      - '2222:22' # SSH checkout port
    volumes:
      - ./etc:/etc/gitlab # GitLab configuration directory
      - ./data:/var/opt/gitlab # GitLab data directory
      - ./logs:/var/log/gitlab # GitLab log directory


Start GitLab

docker-compose up -d

Restart GitLab

docker-compose stop
docker-compose up -d