Keepalived-VIP


Kubernetes can use keepalived to create a virtual IP address (VIP).

In this article, we explain how to use keepalived-vip to configure a VIP for Kubernetes.

Prelude

Kubernetes v1.6 provides three ways to expose a Service:

  1. L4 LoadBalancer: available only on cloud providers such as GCE or AWS.

  2. NodePort: opens a port on every node; requests sent to this port are routed to a randomly selected pod.

  3. L7 Ingress: a load balancer (for example nginx, HAProxy, traefik, vulcand) that routes HTTP/HTTPS requests to the corresponding service endpoints.

So, with all these options available, why do we still need keepalived?


                                                  ___________________
                                                 |                   |
                                           |-----| Host IP: 10.4.0.3 |
                                           |     |___________________|
                                           |
                                           |      ___________________
                                           |     |                   |
Public ----(example.com = 10.4.0.3/4/5)----|-----| Host IP: 10.4.0.4 |
                                           |     |___________________|
                                           |
                                           |      ___________________
                                           |     |                   |
                                           |-----| Host IP: 10.4.0.5 |
                                                 |___________________|

Let's suppose that the Ingress runs on 3 Kubernetes nodes and exposes the nodes' 10.4.0.x IPs for load balancing.

If DNS Round Robin (RR) distributes requests for example.com across these 3 nodes and 10.4.0.3 crashes, one third of the traffic will still be sent to 10.4.0.3, causing downtime until DNS notices the failure and stops resolving to the dead node.

Strictly speaking, this doesn't truly offer High Availability (HA).

Here, IPVS comes to our rescue by associating each service with a VIP and exposing the VIP outside the Kubernetes cluster.

Looking at the diagram below,

                                               ___________________
                                              |                   |
                                              | VIP: 10.4.0.50    |
                                        |-----| Host IP: 10.4.0.3 |
                                        |     | Role: Master      |
                                        |     |___________________|
                                        |
                                        |      ___________________
                                        |     |                   |
                                        |     | VIP: Unassigned   |
Public ----(example.com = 10.4.0.50)----|-----| Host IP: 10.4.0.4 |
                                        |     | Role: Slave       |
                                        |     |___________________|
                                        |
                                        |      ___________________
                                        |     |                   |
                                        |     | VIP: Unassigned   |
                                        |-----| Host IP: 10.4.0.5 |
                                              | Role: Slave       |
                                              |___________________|

Only one node, elected by VRRP, acts as the Master and holds the VIP 10.4.0.50. If 10.4.0.3 fails, one of the remaining nodes is elected as the new Master and takes over the VIP, achieving true HA.

The difference with service-loadbalancer and ingress-nginx

service-loadbalancer and ingress-nginx solve the problem of routing HTTP/HTTPS requests to the backend Pods, but clients still have to reach one of the node IPs first. keepalived-vip complements them by announcing a single VRRP-managed VIP in front of the nodes, so the entry point itself is highly available instead of depending on DNS round robin for failover.

Environment Requirements

All that is needed is to make sure that DaemonSets work normally in the Kubernetes cluster that will run keepalived-vip.

RBAC

Since Kubernetes introduced RBAC in version 1.6, we first need to set up the RBAC rules; for more details on RBAC, please refer to the guide.

vip-rbac.yaml

... (please refer to the original article for the complete code)
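
The full file is not reproduced here, but as a rough sketch, vip-rbac.yaml typically contains a ServiceAccount plus a ClusterRole that lets keepalived-vip read the objects it watches. The name kube-keepalived-vip and the exact rule list below are assumptions, not taken from the original:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-keepalived-vip
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-keepalived-vip
rules:
  # keepalived-vip only needs read access to the objects it turns into keepalived config
  - apiGroups: [""]
    resources: ["pods", "nodes", "endpoints", "services", "configmaps"]
    verbs: ["get", "list", "watch"]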

clusterrolebinding.yaml

... (please refer to the original article for the complete code)
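
Again only as a sketch (reusing the assumed kube-keepalived-vip names from above), clusterrolebinding.yaml simply binds the ClusterRole to the ServiceAccount:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-keepalived-vip
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-keepalived-vip
subjects:
  # grant the role to the service account the DaemonSet will run as
  - kind: ServiceAccount
    name: kube-keepalived-vip
    namespace: default
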
$ kubectl create -f vip-rbac.yaml
$ kubectl create -f clusterrolebinding.yaml

Example

First, create a simple service.

nginx-deployment.yaml

... (please refer to the original article for the complete code)

The key points are that the Pod listens on port 80 and that the Service exposes NodePort 30320.
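
A minimal sketch of what such an nginx-deployment.yaml could look like follows; the image, labels and replica count are assumptions, while container port 80, the Service name nginx and NodePort 30320 are the values the rest of this example relies on:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80      # the pod listens on port 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx                        # referenced from the keepalived config map below
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30320                # the NodePort mentioned above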

$ kubectl create -f nginx-deployment.yaml

Next comes the key part: the config map.

... (please refer to the original article for the complete code)

Note that 10.87.2.50 must be replaced with an unused IP in your own network segment (for example, another 10.87.2.X address). nginx is the name of the Service and can be changed as needed.
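
The config map is just a mapping from the desired VIP to the namespace/name of the Service it should front. A minimal sketch (the config map name vip-configmap is an assumption; it only has to match the flag passed to keepalived-vip later):

apiVersion: v1
kind: ConfigMap
metadata:
  name: vip-configmap
data:
  # format: <VIP>: <namespace>/<service name>
  10.87.2.50: default/nginx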

Confirm that the config map has been created:

... (please refer to the original article)

The next step is to set up keepalived-vip.

... (please refer to the original article for the complete code)
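
As a sketch of what the keepalived-vip DaemonSet usually looks like (the image tag, resource names and namespace are assumptions; the important parts are hostNetwork, the privileged security context, the service account created above and the --services-configmap flag pointing at the config map):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-keepalived-vip
spec:
  selector:
    matchLabels:
      name: kube-keepalived-vip
  template:
    metadata:
      labels:
        name: kube-keepalived-vip
    spec:
      hostNetwork: true                    # keepalived must manage the node's real interfaces
      serviceAccountName: kube-keepalived-vip
      containers:
        - name: kube-keepalived-vip
          image: gcr.io/google_containers/kube-keepalived-vip:0.11   # image and tag assumed
          securityContext:
            privileged: true               # needed to add the VIP and send VRRP traffic
          args:
            - --services-configmap=default/vip-configmap   # namespace/name of the config map above
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: modules
              mountPath: /lib/modules
              readOnly: true
            - name: dev
              mountPath: /dev
      volumes:
        - name: modules
          hostPath:
            path: /lib/modules
        - name: dev
          hostPath:
            path: /dev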

Create the DaemonSet:

... (please refer to the original article)

Check the configuration status:

... (please refer to the original article)

You can pick any one of the Pods and inspect its configuration:

... (please refer to the original article)

Finally, test that it works:

... (please refer to the original article)

10.87.2.50:80 (our virtual IP, which is not the real IP of any node) can now be used to reach the nginx service.

All the code mentioned above can be found in the repositories listed below.

Reference Documents

  • keepalived
  • IPVS - The Linux Virtual Server Project
  • kweisamx/kubernetes-keepalived-vip
  • kubernetes/keepalived-vip