
kube-proxy



kube-proxy runs on every machine, watching the API server for changes to Services and Endpoints. It configures load balancing for Services through mechanisms such as iptables; note that only TCP and UDP are supported.

kube-proxy can run directly on the physical machine, or as a static Pod or a DaemonSet.

There are a few modes of operation available for kube-proxy:

  • userspace: the earliest load-balancing scheme. A proxy listens on a port in user space, and iptables rules redirect all Service traffic to this port; the proxy then load-balances internally to the actual Pods. Its biggest drawback is inefficiency, which creates an obvious performance bottleneck.

  • iptables: the currently recommended scheme, which implements Service load balancing purely with iptables rules. Its problem is that with many Services the number of iptables rules grows very large, and the non-incremental rule updates introduce noticeable latency; at large scale the performance degradation is significant.

  • ipvs: introduced to address the performance problems of iptables mode; experimental support began in v1.8 and it graduated to GA in v1.11. It updates rules incrementally and keeps existing connections intact during Service updates.

  • winuserspace: the same as userspace, but it runs only on Windows nodes.

When using ipvs mode, you will need to preload the kernel modules nf_conntrack_ipv4, ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh, etc., on each Node.
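As a sketch, that preload step can be scripted as below. The module list is taken from the text above; note that on newer kernels (4.19 and later) nf_conntrack_ipv4 has been merged into nf_conntrack, so adjust the list for your kernel version.

```shell
# Attempt to load each IPVS prerequisite module; the fallback message appears
# when this is not run as root on a Linux node. Module names are from the text above.
for mod in nf_conntrack_ipv4 ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh; do
  modprobe "$mod" 2>/dev/null || echo "could not load $mod (requires root on a Linux node)"
done
```

`lsmod | grep ip_vs` can then confirm which modules are actually loaded.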

The diagram below illustrates kube-proxy in iptables mode:

The next diagram shows kube-proxy's NAT control flow:

Note that IPVS mode also uses iptables for tasks like SNAT and IP Masquerading (MASQUERADE), and uses ipset to simplify the management of iptables rules.
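On a node actually running in IPVS mode, this can be observed directly; KUBE-CLUSTER-IP is one of the set names kube-proxy creates (the fallback message below covers hosts where none of this applies):

```shell
# List the KUBE-* ipsets that kube-proxy maintains in IPVS mode; the nat-table
# MASQUERADE rules reference them via "-m set --match-set KUBE-...".
ipset list -n 2>/dev/null | grep '^KUBE-' \
  || echo "no KUBE-* ipsets found (not in ipvs mode, or missing root/ipset)"
```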

Let's look at how to start kube-proxy with this example:

```shell
kube-proxy --kubeconfig=/var/lib/kubelet/kubeconfig --cluster-cidr=10.240.0.0/12 --feature-gates=ExperimentalCriticalPodAnnotation=true --proxy-mode=iptables
```
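Once kube-proxy is running, the active mode can be verified through its local status endpoint; 10249 is kube-proxy's default metrics bind port (the fallback message is for hosts without a reachable kube-proxy):

```shell
# Print the active proxy mode ("iptables", "ipvs", ...) from a node.
curl -s --max-time 2 http://127.0.0.1:10249/proxyMode \
  || echo "kube-proxy metrics endpoint not reachable on this host"
```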

To better understand how kube-proxy works:

kube-proxy watches the API server for changes to Services and Endpoints, and configures load balancing for Services (TCP and UDP only) through one of its proxiers: userspace, iptables, ipvs, or winuserspace.
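In iptables mode, this materializes as a chain of nat-table rules: traffic first hits the KUBE-SERVICES chain, jumps to a per-Service KUBE-SVC-* chain, and finally to a per-endpoint KUBE-SEP-* chain that DNATs to a Pod. On a node this can be inspected as follows (the fallback message covers hosts without the chain):

```shell
# Show the KUBE-SERVICES nat chain programmed by kube-proxy (root required).
iptables -t nat -L KUBE-SERVICES -n 2>/dev/null \
  || echo "KUBE-SERVICES chain not found (requires root on a kube node)"
```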

The images are from cilium/k8s-iptables-diagram and the kube-proxy iptables "nat" control flow diagram.

A detailed explanation of how different kinds of Services work under IPVS mode can be found in "Kube-proxy IPVS mode".

kube-proxy does have shortcomings, however: it supports only TCP and UDP, and provides neither HTTP routing nor a health-check mechanism. These gaps can be closed by deploying a customized Ingress Controller.
