DaemonSet
A DaemonSet ensures that a copy of a specific Pod runs on every Node, which makes it a common way to deploy cluster-wide log collectors, monitoring agents, and other system-level applications. Typical examples include:
Log collection systems, like fluentd or logstash.
System monitors such as Prometheus Node Exporter, collectd, New Relic agent, or Ganglia gmond.
System programs like kube-proxy, kube-dns, glusterd, and ceph.
Here is an example of using Fluentd to collect logs:
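A minimal sketch of such a DaemonSet is shown below; the image tag, namespace, and mounted host paths are illustrative and should be adapted to your environment:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.16   # illustrative image tag
        resources:
          limits:
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log         # collect logs from the host
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```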
From version 1.6 onwards, DaemonSets support rolling updates. You can set the update strategy with `.spec.updateStrategy.type`. Two strategies are currently supported:
OnDelete: The default strategy. After updating the template, a new Pod will only be created once the old one has been manually deleted.
RollingUpdate: After the DaemonSet template has been updated, the old Pod is automatically removed and a new one is created.
The RollingUpdate strategy additionally lets you set:
- `.spec.updateStrategy.rollingUpdate.maxUnavailable`, which defaults to 1
- `.spec.minReadySeconds`, which defaults to 0
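These options can be sketched in a DaemonSet spec as follows (the values shown are the defaults):

```yaml
spec:
  minReadySeconds: 0        # how long a new Pod must be ready before it counts as available
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1     # at most one Pod may be unavailable during the update
```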
From version 1.7 onwards, support for rollback is included.
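Rollback is driven through `kubectl rollout`; the DaemonSet name `fluentd` and the namespace below are placeholders:

```sh
# List the revision history of the DaemonSet
kubectl rollout history daemonset fluentd -n kube-system
# Roll back to the previous revision
kubectl rollout undo daemonset fluentd -n kube-system
# Or roll back to a specific revision
kubectl rollout undo daemonset fluentd -n kube-system --to-revision=2
```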
A DaemonSet ignores a Node's unschedulable status. There are three ways to ensure a Pod runs only on specified Nodes:
nodeSelector: Only schedules on Nodes that match the specific label.
nodeAffinity: A more feature-rich Node selector that, for instance, supports set operations.
podAffinity: Schedules onto Nodes that are already running Pods matching the given criteria.
First, label the node:
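For example (the node name `node-1` is a placeholder):

```sh
kubectl label nodes node-1 disktype=ssd
```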
Then specify `disktype=ssd` as the nodeSelector in the DaemonSet:
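A sketch of the relevant part of the DaemonSet spec:

```yaml
spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd   # only schedule onto Nodes carrying this label
```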
NodeAffinity currently supports both requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution, which represent mandatory and preferred conditions respectively. The following example schedules onto a Node whose label `kubernetes.io/e2e-az-name` has the value e2e-az1 or e2e-az2, and prefers Nodes that also carry the label `another-node-label-key=another-node-label-value`.
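A sketch of such a Pod spec (the pause image is just a placeholder workload):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      # mandatory: Node must be in e2e-az1 or e2e-az2
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/e2e-az-name
            operator: In
            values:
            - e2e-az1
            - e2e-az2
      # preferred: favor Nodes with the additional label
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value
  containers:
  - name: with-node-affinity
    image: k8s.gcr.io/pause:2.0
```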
PodAffinity selects Nodes based on the labels of Pods, scheduling only onto Nodes where a qualifying Pod is already running. It supports podAffinity and podAntiAffinity. This feature can be rather intricate. Take the following example:
- It will be scheduled onto Nodes that are running at least one Pod labeled `security=S1`.
- It will preferably not be scheduled onto Nodes that are running at least one Pod labeled `security=S2`.
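A sketch of a Pod spec expressing both rules; `topologyKey: kubernetes.io/hostname` makes "location" mean the individual Node, and the pause image is a placeholder:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      # mandatory: co-locate with a Pod labeled security=S1
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: kubernetes.io/hostname
    podAntiAffinity:
      # preferred: avoid Nodes running a Pod labeled security=S2
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S2
          topologyKey: kubernetes.io/hostname
  containers:
  - name: with-pod-affinity
    image: k8s.gcr.io/pause:2.0
```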
Besides using a DaemonSet, you can also run a specific Pod on every machine with Static Pods. This requires the kubelet to be started with a manifest directory specified:
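For example (the directory path is a common convention, not a requirement; on newer kubelets the same setting is `staticPodPath` in the kubelet configuration file):

```sh
kubelet --pod-manifest-path=/etc/kubernetes/manifests
```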
Then place the needed Pod definition file into the specified manifest directory.
Note: Static Pods cannot be deleted through the API Server; however, deleting the manifest file automatically removes the corresponding Pod.
Kubernetes version | Deployment version |
---|---|
v1.5-v1.6 | extensions/v1beta1 |
v1.7-v1.15 | apps/v1beta1 |
v1.8-v1.15 | apps/v1beta2 |
v1.9+ | apps/v1 |