Flannel
Flannel is a virtual networking solution for containers that assigns a subnet to each host, enabling inter-container communication across hosts. It builds on Linux TUN/TAP devices, encapsulates IP packets in UDP to form an overlay network, and uses etcd to store how the network is allotted across machines.
On the control plane, the local flanneld instance synchronizes the subnet information of the local host and of other hosts from a remote etcd cluster and allocates IP addresses to pods. On the data plane, Flannel implements an L3 overlay through pluggable backends, such as UDP encapsulation over a standard TUN device or VXLAN encapsulation over a VXLAN device.
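The encapsulation idea behind the UDP backend can be sketched as follows. This is an illustrative Python model, not flannel's actual implementation: real flanneld reads the inner packet from a TUN device and forwards it over a real UDP socket, but the framing is the same.

```python
import struct

FLANNEL_UDP_PORT = 8285  # default port of flannel's udp backend

def encapsulate(inner_packet: bytes,
                src_port: int = FLANNEL_UDP_PORT,
                dst_port: int = FLANNEL_UDP_PORT) -> bytes:
    """Wrap an inner IP packet in a minimal UDP header (overlay payload)."""
    length = 8 + len(inner_packet)  # UDP header is 8 bytes
    # src port, dst port, length, checksum (0 = unset, allowed for IPv4)
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)
    return header + inner_packet

def decapsulate(datagram: bytes) -> bytes:
    """Strip the 8-byte UDP header to recover the inner IP packet."""
    (length,) = struct.unpack("!H", datagram[4:6])
    return datagram[8:length]

# Round trip: the inner packet travels unchanged inside the outer datagram.
inner = b"\x45\x00" + b"inner-ip-packet"
assert decapsulate(encapsulate(inner)) == inner
```

Because this wrapping and unwrapping happens in user space in the udp backend, each packet crosses the kernel/user boundary twice, which is exactly the performance cost the list below refers to.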
In addition to UDP, Flannel supports a variety of other Backends:
udp: User-space UDP encapsulation, on port 8285 by default. Because packets are encapsulated and decapsulated in user space, performance suffers.
vxlan: VXLAN encapsulation; the VNI (default 1), the UDP port (default 8472), and GBP (Group Based Policy) can be configured.
host-gw: Direct routing, which writes routes for the container networks into the host routing table; only applicable when the hosts are directly reachable at layer 2.
aws-vpc: Creates routes using the Amazon VPC route table, suited for containers running on AWS.
gce: Creates routes in the Google Compute Engine network; all instances must have IP forwarding enabled. Suitable for containers running on GCE.
ali-vpc: Creates routes using the Alibaba Cloud VPC route table, suitable for containers running on Alibaba Cloud.
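The backend is selected in Flannel's network configuration, stored in etcd (or, on Kubernetes, in the kube-flannel ConfigMap). A minimal sketch, in which the network range and option values are illustrative:

```json
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan",
    "VNI": 1,
    "Port": 8472
  }
}
```

Switching backends is a matter of changing `Type` (for example to `host-gw` or `udp`) and supplying that backend's options.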
The CNI flannel plugin translates the Flannel network configuration into a bridge plugin configuration and invokes the bridge plugin to set up networking in the container's netns.
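For example, a Flannel CNI configuration that delegates to the bridge plugin (the values below follow the CNI flannel plugin's documented delegation behavior; the bridge name and MTU are illustrative):

```json
{
  "name": "mynet",
  "type": "flannel",
  "delegate": {
    "bridge": "mynet0",
    "mtu": 1400
  }
}
```

would be expanded by the plugin, using the subnet information flanneld writes to /run/flannel/subnet.env, into a bridge configuration along these lines (the subnet is an example):

```json
{
  "name": "mynet",
  "type": "bridge",
  "bridge": "mynet0",
  "mtu": 1400,
  "isGateway": true,
  "ipMasq": false,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.1.0/24",
    "routes": [
      { "dst": "10.244.0.0/16" }
    ]
  }
}
```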
Before using Flannel, the controller manager must be configured to allocate per-node CIDRs: kube-controller-manager --allocate-node-cidrs=true --cluster-cidr=10.244.0.0/16.
Flannel is typically deployed as a DaemonSet that runs a flanneld container on every node and installs the CNI network plugin configuration.
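A common way to deploy it is to apply the upstream manifest; the URL below is the historical location in the coreos/flannel repository (the project has since moved to flannel-io, so check the project for the current manifest location):

```
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```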
Flanneld automatically connects to the Kubernetes API, configures the local Flannel subnet based on node.Spec.PodCIDR, and sets up the VXLAN device and the routes to other nodes' subnets.
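With the vxlan backend, the result on each host is a flannel.1 VXLAN device plus one route per remote node's subnet. An illustrative routing table (all addresses are examples, not output from a real cluster):

```
# local pod subnet, reached via the cni0 bridge
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1
# remote node subnets, encapsulated via the flannel.1 VXLAN device
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
```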
Easy to configure and convenient to use
Well-integrated with cloud platforms, offering no additional performance loss with VPC solutions
VXLAN mode has poor support for zero-downtime restarts
When running with a backend other than udp, the kernel provides the data path, with flanneld acting as the control plane. As such, flanneld can be restarted (even for an upgrade) without disturbing existing flows. In the case of the vxlan backend, however, the restart must complete within a few seconds, since ARP entries can start to time out and require the flannel daemon to refresh them. Also, to avoid interruptions during restart, the configuration must not be changed (e.g., the VNI or --iface values).