Setup Cluster
After obtaining a token from https://discovery.etcd.io/new?size=3, place etcd.yaml on each machine at /etc/kubernetes/manifests/etcd.yaml and replace ${DISCOVERY_TOKEN}, ${NODE_NAME}, and ${NODE_IP}. The kubelet will then bring up the etcd cluster.
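A minimal sketch of such a static pod manifest is shown below; the image tag, data directory, and hostPath are assumptions for this example, while the etcd flags themselves are standard:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: etcd
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: etcd
    image: quay.io/coreos/etcd:v3.3   # assumed image tag
    command:
    - etcd
    - --name=${NODE_NAME}
    - --data-dir=/var/lib/etcd
    - --listen-peer-urls=http://${NODE_IP}:2380
    - --initial-advertise-peer-urls=http://${NODE_IP}:2380
    - --listen-client-urls=http://${NODE_IP}:2379,http://127.0.0.1:2379
    - --advertise-client-urls=http://${NODE_IP}:2379
    - --discovery=${DISCOVERY_TOKEN}   # the full discovery URL returned above
    volumeMounts:
    - name: etcd-data
      mountPath: /var/lib/etcd
  volumes:
  - name: etcd-data
    hostPath:
      path: /var/lib/etcd   # assumed host path
```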
For an etcd cluster running outside the kubelet, refer to the etcd clustering guide to configure cluster mode manually.
Place kube-apiserver.yaml on each Master node's /etc/kubernetes/manifests/ and put the related configuration files into /srv/kubernetes/. The kubelet then automatically creates and launches the apiserver, which requires the following files (a manifest sketch follows the list):
basic_auth.csv - basic authentication username and password
ca.crt - Certificate Authority cert
known_tokens.csv - tokens that specific entities (like the kubelet) can use to communicate with the apiserver
kubecfg.crt - Client certificate, public key
kubecfg.key - Client certificate, private key
server.cert - Server certificate, public key
server.key - Server certificate, private key
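A minimal kube-apiserver.yaml sketch wiring in the files above; the image tag, etcd address, and service CIDR are assumptions for this example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: k8s.gcr.io/kube-apiserver:v1.9.0   # assumed image tag
    command:
    - kube-apiserver
    - --etcd-servers=http://127.0.0.1:2379    # assumed local etcd
    - --service-cluster-ip-range=10.0.0.0/16  # assumed service CIDR
    - --secure-port=6443
    - --client-ca-file=/srv/kubernetes/ca.crt
    - --tls-cert-file=/srv/kubernetes/server.cert
    - --tls-private-key-file=/srv/kubernetes/server.key
    - --token-auth-file=/srv/kubernetes/known_tokens.csv
    - --basic-auth-file=/srv/kubernetes/basic_auth.csv
    volumeMounts:
    - name: srvkube
      mountPath: /srv/kubernetes
      readOnly: true
  volumes:
  - name: srvkube
    hostPath:
      path: /srv/kubernetes
```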
After launching the apiservers, load balancing is crucial. It can be provided by a cloud platform's elastic load balancing service, or by running haproxy/lvs/nginx in front of the master nodes.
On top of that, tools such as Keepalived, OSPF, or Pacemaker can keep the load balancer nodes themselves highly available.
Note:
For large-scale clusters, increase --max-requests-inflight (default 400).
When using nginx, increase proxy_timeout to 10m (see the sketch below).
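For illustration, a minimal nginx TCP (stream) proxy in front of the apiservers might look like the following; the upstream addresses and listen port are placeholders:

```nginx
stream {
    upstream apiserver {
        # Hypothetical master addresses; adjust to your environment.
        server 192.168.0.10:6443;
        server 192.168.0.11:6443;
        server 192.168.0.12:6443;
    }
    server {
        listen 8443;
        proxy_pass apiserver;
        proxy_timeout 10m;   # raised timeout, as noted above
    }
}
```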
It is important to ensure that, at any given moment, only a single instance each of the controller manager and the scheduler is active. This requires leader election, so start them with --leader-elect=true, for example:
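The invocations below are illustrative; the kubeconfig paths are assumptions, and the flag --leader-elect=true is the relevant part:

```sh
kube-scheduler --leader-elect=true --kubeconfig=/srv/kubernetes/scheduler.kubeconfig
kube-controller-manager --leader-elect=true --kubeconfig=/srv/kubernetes/controller-manager.kubeconfig
```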
Placing kube-scheduler.yaml and kube-controller-manager.yaml on each Master node's /etc/kubernetes/manifests/, with the related configuration in /srv/kubernetes/, lets the kubelet automatically create and start kube-scheduler and kube-controller-manager.
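A compact kube-scheduler.yaml sketch, analogous to the manifests above (the image tag and kubeconfig path are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-scheduler
    image: k8s.gcr.io/kube-scheduler:v1.9.0   # assumed image tag
    command:
    - kube-scheduler
    - --leader-elect=true
    - --kubeconfig=/srv/kubernetes/scheduler.kubeconfig   # assumed path
    volumeMounts:
    - name: srvkube
      mountPath: /srv/kubernetes
      readOnly: true
  volumes:
  - name: srvkube
    hostPath:
      path: /srv/kubernetes
```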
kube-dns can be deployed as a Deployment. kubeadm creates it automatically with default settings, but for large-scale clusters its resource limits need to be relaxed, for example:
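A sketch of raised limits for the kubedns container; the exact values are illustrative, not recommendations:

```yaml
resources:
  limits:
    cpu: 1000m      # illustrative; the default limits are much lower
    memory: 512Mi
  requests:
    cpu: 100m
    memory: 256Mi
```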
Additionally, the resources for dnsmasq need to be increased as well, e.g. enlarging the cache size to 10000 and raising the concurrency limit with --dns-forward-max=1000, as sketched below.
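As a sketch, the dnsmasq container's flags could be adjusted like this (the surrounding fields of the actual Deployment are omitted; the flag names are real dnsmasq options, the values illustrative):

```yaml
args:
- --cache-size=10000
- --dns-forward-max=1000
```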
Beyond the configuration above, persistent storage is essential for a highly available Kubernetes cluster.
For clusters deployed on public cloud, consider using persistent storage provided by the cloud platform, like AWS EBS or GCE persistent disk.
For clusters deployed on physical machines, consider network storage options like iSCSI, NFS, Gluster, Ceph, or even RAID.
On Azure, you can use AKS or acs-engine to deploy a Kubernetes cluster. For detailed deployment methods, refer here.
On GCE, you can conveniently deploy a cluster using the bundled cluster scripts:
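For instance, the official kube-up scripts can be driven like this; KUBERNETES_PROVIDER selects the GCE backend:

```sh
export KUBERNETES_PROVIDER=gce
curl -sS https://get.k8s.io | bash
```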
Deploying on AWS is best done using kops.
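A minimal kops workflow might look like the following; the cluster name, zone, and state-store bucket are placeholders:

```sh
export KOPS_STATE_STORE=s3://my-kops-state   # assumed S3 bucket
kops create cluster --zones=us-east-1a k8s.example.com
kops update cluster k8s.example.com --yes
```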
On Linux physical or virtual machines, we recommend using kubeadm or kubespray for Kubernetes cluster deployment.
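With kubeadm, a basic single-master bootstrap is sketched below; the token, master IP, and CA hash are placeholders printed by kubeadm init:

```sh
# On the master:
kubeadm init

# On each worker node, using the values printed by kubeadm init:
kubeadm join --token <token> <master-ip>:6443 --discovery-token-ca-cert-hash sha256:<hash>
```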