Keepalived-VIP
Kubernetes v1.6 offers three ways to expose a Service:

- 1. L4 LoadBalancer: available only on cloud providers such as GCE and AWS
- 2. Service via NodePort: the same port is opened on every node, and the Service is exposed through it
- 3. L7 Ingress: an Ingress is a LoadBalancer (e.g. nginx, HAProxy, traefik, vulcand) that routes each HTTP/HTTPS request to the corresponding service endpoint
Given these options, why do we still need keepalived?
                                                 ___________________
                                                |                   |
                                        |-------| Host IP: 10.4.0.3 |
                                        |       |___________________|
                                        |
                                        |        ___________________
                                        |       |                   |
Public ----(example.com = 10.4.0.3/4/5)-|-------| Host IP: 10.4.0.4 |
                                        |       |___________________|
                                        |
                                        |        ___________________
                                        |       |                   |
                                        |-------| Host IP: 10.4.0.5 |
                                                |___________________|
Assume Ingress runs on 3 Kubernetes nodes, which expose their 10.4.0.x IPs for load balancing. DNS Round Robin (RR) distributes requests for example.com across these 3 nodes in turn. If 10.4.0.3 goes down, one third of the traffic is still directed to 10.4.0.3, so there is a period of downtime until DNS notices the failure and corrects the record. Strictly speaking, this does not achieve true High Availability (HA).
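The one-third figure can be sketched with a quick simulation (node IPs are taken from the diagram above; the 9-request count is arbitrary):

```shell
#!/usr/bin/env bash
# DNS round-robin hands out the three node IPs in turn. If 10.4.0.3 is
# down, every third request still goes to it until the DNS record is fixed.
nodes=(10.4.0.3 10.4.0.4 10.4.0.5)
down="10.4.0.3"

failed=0
for i in $(seq 0 8); do               # 9 requests, served round-robin
  target=${nodes[$((i % 3))]}
  if [ "$target" = "$down" ]; then
    failed=$((failed + 1))
  fi
done
echo "$failed of 9 requests hit the dead node"   # prints "3 of 9 ..."
```

DNS RR has no health checking, so the dead node keeps receiving its share of requests until the record is updated.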
This is where IPVS can help. The idea is to map a virtual IP (VIP) to each service and expose the VIP outside of the Kubernetes cluster. Consider the following diagram:
                                             ___________________
                                            |                   |
                                            | VIP: 10.4.0.50    |
                                    |-------| Host IP: 10.4.0.3 |
                                    |       | Role: Master      |
                                    |       |___________________|
                                    |
                                    |        ___________________
                                    |       |                   |
                                    |       | VIP: Unassigned   |
Public ----(example.com = 10.4.0.50)|-------| Host IP: 10.4.0.4 |
                                    |       | Role: Slave       |
                                    |       |___________________|
                                    |
                                    |        ___________________
                                    |       |                   |
                                    |       | VIP: Unassigned   |
                                    |-------| Host IP: 10.4.0.5 |
                                            | Role: Slave       |
                                            |___________________|
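The VRRP behaviour in this diagram can be sketched as follows: the live node with the highest priority is elected Master and holds the VIP. The priority numbers here are illustrative, not taken from any real keepalived configuration:

```shell
#!/usr/bin/env bash
# Elect a Master among the live nodes: highest VRRP priority wins.
# Priorities are illustrative; keepalived assigns them per VRRP instance.
declare -A prio=([10.4.0.3]=150 [10.4.0.4]=100 [10.4.0.5]=50)

elect() {
  local best="" best_p=-1 node
  for node in "$@"; do
    if [ "${prio[$node]}" -gt "$best_p" ]; then
      best=$node
      best_p=${prio[$node]}
    fi
  done
  echo "$best"
}

master=$(elect 10.4.0.3 10.4.0.4 10.4.0.5)
echo "VIP 10.4.0.50 held by $master"   # 10.4.0.3 is Master

# 10.4.0.3 fails: the remaining nodes re-elect and the VIP moves over.
master=$(elect 10.4.0.4 10.4.0.5)
echo "VIP 10.4.0.50 held by $master"   # 10.4.0.4 takes over
```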
We can see that only one node is elected Master (chosen via VRRP), and our VIP is 10.4.0.50. If 10.4.0.3 goes down, one of the remaining nodes is elected Master and takes over the VIP, so true HA is achieved.

vip-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: kube-keepalived-vip
rules:
- apiGroups: [""]
  resources:
  - pods
  - nodes
  - endpoints
  - services
  - configmaps
  verbs: ["get", "list", "watch"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-keepalived-vip
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kube-keepalived-vip
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-keepalived-vip
subjects:
- kind: ServiceAccount
  name: kube-keepalived-vip
  namespace: default
clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kube-keepalived-vip
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-keepalived-vip
subjects:
- kind: ServiceAccount
  name: kube-keepalived-vip
  namespace: default
$ kubectl create -f vip-rbac.yaml
$ kubectl create -f clusterrolebinding.yaml
First, create a simple nginx service:
nginx-deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30302
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: nginx
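With the manifest above saved as nginx-deployment.yaml, a typical next step is to apply it and confirm the NodePort answers from outside the cluster. The node IP 10.4.0.3 is taken from the diagrams above; substitute any node in your cluster:

```shell
# Apply the manifest, then check the service through the NodePort.
kubectl create -f nginx-deployment.yaml
kubectl get svc nginx                   # should show 80:30302/TCP
curl http://10.4.0.3:30302/             # nginx welcome page via the NodePort
```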