Bootstrapping the Kubernetes Control Plane
In this section you will deploy the Kubernetes control plane services across three controller nodes and configure them for high availability. You will also create an external load balancer that exposes the Kubernetes API to remote clients. The services installed on each controller node include the Kubernetes API Server, Scheduler, and Controller Manager.

Prerequisites

The commands in this section must be run on every controller node: controller-0, controller-1, and controller-2. You can log in to each controller node with the gcloud command. For example:

gcloud compute ssh controller-0
To speed up the deployment, you can use tmux to log in to all three controller nodes at the same time.
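One way to run each command only once across the three SSH sessions is tmux's synchronize-panes option, which mirrors keystrokes to every pane. A minimal sketch for ~/.tmux.conf (the S key binding is an arbitrary choice, not part of this guide):

```shell
# ~/.tmux.conf: toggle keystroke mirroring across all panes with <prefix> S
bind S set-window-option synchronize-panes
```

Open one pane per controller node, run gcloud compute ssh in each, then use the binding to toggle mirroring on while typing the shared commands and off again for per-node work.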

Provisioning the Kubernetes Control Plane

Create the Kubernetes configuration directory:

sudo mkdir -p /etc/kubernetes/config

Download and install the Kubernetes controller binaries:

wget -q --show-progress --https-only --timestamping \
  "https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kube-apiserver" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kube-controller-manager" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kube-scheduler" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kubectl"

chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/

Configure the Kubernetes API Server

{
  sudo mkdir -p /var/lib/kubernetes/

  sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem \
    encryption-config.yaml /var/lib/kubernetes/
}
The node's internal IP address will be used to advertise the API Server to members of the cluster. First, retrieve the internal IP address of the current compute instance:

INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
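Before writing this value into the systemd unit below, it can be worth confirming the metadata query returned an address in the controllers' 10.240.0.0/24 subnet used throughout this guide. A small optional guard (not part of the original guide; the hard-coded value stands in for the curl result):

```shell
# Guard: fail early if INTERNAL_IP is empty or outside the expected subnet.
INTERNAL_IP="10.240.0.10"   # example value; normally set by the metadata query above
if [[ ${INTERNAL_IP} =~ ^10\.240\.0\.[0-9]+$ ]]; then
  echo "internal IP looks valid: ${INTERNAL_IP}"
else
  echo "unexpected internal IP: '${INTERNAL_IP}'" >&2
  exit 1
fi
```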
Generate the kube-apiserver.service systemd unit file:

cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=${INTERNAL_IP} \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/audit.log \\
  --authorization-mode=Node,RBAC \\
  --bind-address=0.0.0.0 \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --etcd-cafile=/var/lib/kubernetes/ca.pem \\
  --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
  --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
  --etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \\
  --event-ttl=1h \\
  --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
  --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
  --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
  --kubelet-https=true \\
  --runtime-config='api/all=true' \\
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --service-node-port-range=30000-32767 \\
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Configure the Kubernetes Controller Manager

Move the kube-controller-manager kubeconfig into place and generate the kube-controller-manager.service systemd unit file:

sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/

cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --bind-address=0.0.0.0 \\
  --cluster-cidr=10.200.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
  --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
  --leader-elect=true \\
  --root-ca-file=/var/lib/kubernetes/ca.pem \\
  --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --use-service-account-credentials=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Configure the Kubernetes Scheduler

Move the kube-scheduler kubeconfig into place, create its configuration file, and generate the kube-scheduler.service systemd unit file:

sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/

cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
EOF

cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --config=/etc/kubernetes/config/kube-scheduler.yaml \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Start the Controller Services

sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler

Allow up to 10 seconds for the Kubernetes API Server to fully initialize.
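Instead of a fixed pause, the wait can be made deterministic by polling until the API server answers. A small helper sketch (the wait_for name is ours, not from this guide; the commented healthz URL is the endpoint used later in this section):

```shell
# Retry a command once per second until it succeeds or the attempt
# budget runs out; returns the last failure if it never succeeds.
wait_for() {
  local attempts=$1; shift
  local i
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0
    sleep 1
  done
  return 1
}

# Example: poll the API server health endpoint for up to 30 seconds.
# wait_for 30 curl -fsk https://127.0.0.1:6443/healthz
```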

Enable HTTP Health Checks

A Google Network Load Balancer will be used to distribute traffic across the three API servers, allowing each API server to terminate TLS connections and validate client certificates. The network load balancer only supports HTTP health checks, however, so nginx is installed here to proxy HTTP health checks to the API server's HTTPS /healthz endpoint.

The /healthz API server endpoint does not require authentication by default.

sudo apt-get update
sudo apt-get install -y nginx

cat > kubernetes.default.svc.cluster.local <<EOF
server {
  listen      80;
  server_name kubernetes.default.svc.cluster.local;

  location /healthz {
     proxy_pass                    https://127.0.0.1:6443/healthz;
     proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
  }
}
EOF

{
  sudo mv kubernetes.default.svc.cluster.local \
    /etc/nginx/sites-available/kubernetes.default.svc.cluster.local

  sudo ln -s /etc/nginx/sites-available/kubernetes.default.svc.cluster.local /etc/nginx/sites-enabled/
}

sudo systemctl restart nginx
sudo systemctl enable nginx

Verification

kubectl get componentstatuses --kubeconfig admin.kubeconfig

The output will be:

NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}

Test the nginx HTTP health check proxy:

curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
The output will be:

HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Sat, 18 Jul 2020 06:20:48 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 2
Connection: keep-alive
Cache-Control: no-cache, private
X-Content-Type-Options: nosniff

ok
Remember to run the commands above on every controller node: controller-0, controller-1, and controller-2.
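To avoid repeating the per-node verification by hand, it can be scripted. The loop below only prints one gcloud command per controller as a dry run; dropping the echo would actually execute them (this is our convenience sketch, assuming the gcloud SSH access set up earlier in this guide):

```shell
# Dry run: print the nginx health-check command for each controller node.
for instance in controller-0 controller-1 controller-2; do
  echo gcloud compute ssh "${instance}" --command \
    "curl -s -H 'Host: kubernetes.default.svc.cluster.local' http://127.0.0.1/healthz"
done
```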

RBAC for Kubelet Authorization

This section configures RBAC permissions to allow the Kubernetes API Server to access the Kubelet API on each node. Access to the Kubelet API is required for retrieving metrics and logs, and for executing commands in pods.
This guide sets the Kubelet --authorization-mode flag to Webhook. Webhook mode uses the SubjectAccessReview API to determine authorization.
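In Webhook mode, the Kubelet wraps each incoming request's identity and target in a SubjectAccessReview and asks the API server whether it is allowed. A rough, illustrative sketch of such a review object (values are ours; this guide never asks you to create one by hand):

```yaml
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: kubernetes            # identity taken from the client certificate
  resourceAttributes:
    resource: nodes
    subresource: proxy        # e.g. an exec call proxied through the Kubelet
    verb: create
```

The ClusterRole created below grants exactly these nodes/* subresources to the kubernetes user, which is why the review would come back allowed.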
gcloud compute ssh controller-0

Create the system:kube-apiserver-to-kubelet ClusterRole with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
EOF
The Kubernetes API Server authenticates to the Kubelet as the kubernetes user, using the client certificate defined by the --kubelet-client-certificate flag.
Bind the system:kube-apiserver-to-kubelet ClusterRole to the kubernetes user:
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF

The Kubernetes Frontend Load Balancer

This section provisions an external load balancer in front of the Kubernetes API Servers. The kubernetes-the-hard-way static IP address will be attached to the resulting load balancer.
The compute instances created in this guide do not have permission to manage the load balancer, so run the following commands from the machine that was used to create them.
Create the external load balancer network resources:
{
  KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
    --region $(gcloud config get-value compute/region) \
    --format 'value(address)')

  gcloud compute http-health-checks create kubernetes \
    --description "Kubernetes Health Check" \
    --host "kubernetes.default.svc.cluster.local" \
    --request-path "/healthz"

  gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-check \
    --network kubernetes-the-hard-way \
    --source-ranges 209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 \
    --allow tcp

  gcloud compute target-pools create kubernetes-target-pool \
    --http-health-check kubernetes

  gcloud compute target-pools add-instances kubernetes-target-pool \
    --instances controller-0,controller-1,controller-2

  gcloud compute forwarding-rules create kubernetes-forwarding-rule \
    --address ${KUBERNETES_PUBLIC_ADDRESS} \
    --ports 6443 \
    --region $(gcloud config get-value compute/region) \
    --target-pool kubernetes-target-pool
}

Verification

Retrieve the kubernetes-the-hard-way static IP address:

KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region) \
  --format 'value(address)')
Make an HTTP request for the Kubernetes version info:

curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
The result will be:

{
  "major": "1",
  "minor": "18",
  "gitVersion": "v1.18.6",
  "gitCommit": "dff82dc0de47299ab66c83c626e08b245ab19037",
  "gitTreeState": "clean",
  "buildDate": "2020-07-15T16:51:04Z",
  "goVersion": "go1.13.9",
  "compiler": "gc",
  "platform": "linux/amd64"
}