Kubernetes Guide
Setup Cluster


The Architecture of a Kubernetes Cluster

The etcd Cluster

After obtaining a token from https://discovery.etcd.io/new?size=3, place etcd.yaml in each machine's /etc/kubernetes/manifests/ directory and substitute ${DISCOVERY_TOKEN}, ${NODE_NAME}, and ${NODE_IP}. The kubelet will then bring up the etcd cluster automatically.
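
A minimal sketch of such an etcd.yaml static pod manifest, assuming an illustrative image tag and host paths (the ${NODE_NAME}, ${NODE_IP}, and ${DISCOVERY_TOKEN} placeholders are the ones mentioned above and must be substituted on each node):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: etcd
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: etcd
    image: quay.io/coreos/etcd:v3.3    # illustrative image tag
    command:
    - etcd
    - --name=${NODE_NAME}
    - --initial-advertise-peer-urls=http://${NODE_IP}:2380
    - --listen-peer-urls=http://${NODE_IP}:2380
    - --listen-client-urls=http://${NODE_IP}:2379,http://127.0.0.1:2379
    - --advertise-client-urls=http://${NODE_IP}:2379
    - --discovery=https://discovery.etcd.io/${DISCOVERY_TOKEN}
    - --data-dir=/var/lib/etcd
    volumeMounts:
    - name: etcd-data
      mountPath: /var/lib/etcd
  volumes:
  - name: etcd-data
    hostPath:
      path: /var/lib/etcd    # assumed data directory on the host
```

Because the manifest lives under /etc/kubernetes/manifests/, the kubelet manages the pod directly without involving the apiserver, which is what allows etcd to start before the control plane exists.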

The kube-apiserver

Place kube-apiserver.yaml in each master node's /etc/kubernetes/manifests/ directory and put the related configuration files into /srv/kubernetes/. The kubelet then automatically creates and launches the apiserver, which requires the following files:

  • basic_auth.csv - basic authentication username and password

  • ca.crt - Certificate Authority cert

  • known_tokens.csv - tokens that specific entities (like the kubelet) can use to communicate with the apiserver

  • kubecfg.crt - Client certificate, public key

  • kubecfg.key - Client certificate, private key

  • server.cert - Server certificate, public key

  • server.key - Server certificate, private key
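
A trimmed kube-apiserver.yaml sketch wiring up the files listed above (the image tag, ${NODE_IP}, service CIDR, and flag set are illustrative assumptions, not an exhaustive configuration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: k8s.gcr.io/kube-apiserver:v1.13.0    # illustrative version
    command:
    - kube-apiserver
    - --advertise-address=${NODE_IP}            # this master's IP
    - --etcd-servers=http://127.0.0.1:2379
    - --client-ca-file=/srv/kubernetes/ca.crt
    - --basic-auth-file=/srv/kubernetes/basic_auth.csv
    - --token-auth-file=/srv/kubernetes/known_tokens.csv
    - --tls-cert-file=/srv/kubernetes/server.cert
    - --tls-private-key-file=/srv/kubernetes/server.key
    - --service-cluster-ip-range=10.0.0.0/16    # assumed service CIDR
    volumeMounts:
    - name: srvkube
      mountPath: /srv/kubernetes
      readOnly: true
  volumes:
  - name: srvkube
    hostPath:
      path: /srv/kubernetes
```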

After the apiservers are up, they should be placed behind a load balancer. This can be a cloud platform's elastic load balancing service, or haproxy/lvs/nginx configured on the master nodes.

In addition, tools such as Keepalived, OSPF, or Pacemaker can be used to keep the load-balancer nodes themselves highly available.
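
As a concrete illustration of the Keepalived option, a minimal VRRP configuration that floats a virtual IP between two load-balancer nodes (the VIP 10.0.0.100, the interface name eth0, and the password are assumptions for the sketch):

```conf
# /etc/keepalived/keepalived.conf on the primary load balancer (illustrative)
vrrp_instance VI_1 {
    state MASTER            # set to BACKUP on the standby node
    interface eth0          # assumed NIC name
    virtual_router_id 51
    priority 100            # use a lower priority on the standby node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s-vip
    }
    virtual_ipaddress {
        10.0.0.100          # assumed VIP that clients use to reach the apiservers
    }
}
```

Clients and kubelets are then configured against the VIP rather than any single master's address, so a load-balancer failure only triggers a VRRP failover instead of an outage.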

Note:

  • For large-scale clusters, increase --max-requests-inflight (the default is 400)

  • When using nginx, increase the proxy timeout, e.g. proxy_timeout: 10m
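
For the nginx case, a sketch of a stream (TCP) configuration balancing three apiservers with the enlarged timeout (the backend addresses are assumptions):

```nginx
# nginx.conf (stream module) -- TCP load balancing for the apiservers; addresses are illustrative
stream {
    upstream apiservers {
        server 10.0.0.11:6443;
        server 10.0.0.12:6443;
        server 10.0.0.13:6443;
    }
    server {
        listen 6443;
        proxy_pass apiservers;
        proxy_timeout 10m;    # the enlarged timeout recommended above
    }
}
```

TCP-level (stream) balancing is used here rather than HTTP proxying, so TLS terminates at the apiservers and client certificate authentication keeps working end to end.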

Controller Manager and Scheduler

At any given moment, only a single active instance each of kube-controller-manager and kube-scheduler may be running, so a leader-election process is required. Start both with --leader-elect=true, for example:

kube-scheduler --master=127.0.0.1:8080 --v=2 --leader-elect=true
kube-controller-manager --master=127.0.0.1:8080 --cluster-cidr=10.245.0.0/16 --allocate-node-cidrs=true --service-account-private-key-file=/srv/kubernetes/server.key --v=2 --leader-elect=true

Placing kube-scheduler.yaml and kube-controller-manager.yaml in each master node's /etc/kubernetes/manifests/ directory, and the related configuration into /srv/kubernetes/, lets the kubelet automatically create and start kube-scheduler and kube-controller-manager.
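
A kube-scheduler.yaml sketch carrying the flags shown above as a static pod (the image tag is an illustrative assumption; kube-controller-manager.yaml follows the same pattern with its own flags):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-scheduler
    image: k8s.gcr.io/kube-scheduler:v1.13.0    # illustrative version
    command:
    - kube-scheduler
    - --master=127.0.0.1:8080    # local apiserver, as in the command line above
    - --leader-elect=true        # enables leader election across masters
    - --v=2
```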

kube-dns

kube-dns can be deployed as a Deployment, which kubeadm creates automatically by default. For large-scale clusters, scale it out and relax its resource limits, for example:

dns_replicas: 6
dns_cpu_limit: 100m
dns_memory_limit: 512Mi
dns_cpu_requests: 70m
dns_memory_requests: 70Mi

Additionally, dnsmasq needs more resources too: for example, enlarge its cache with --cache-size=10000 and raise the concurrency limit with --dns-forward-max=1000.
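
A sketch of how those dnsmasq knobs appear in the kube-dns Deployment, as an excerpt of the dnsmasq-nanny container spec (the image tag and resource values are illustrative assumptions, not a full manifest):

```yaml
# Excerpt of the dnsmasq container in the kube-dns Deployment (values illustrative)
- name: dnsmasq
  image: k8s.gcr.io/k8s-dns-dnsmasq-nanny:1.14.13    # assumed tag
  args:
  - -v=2
  - -logtostderr
  - -configDir=/etc/k8s/dns/dnsmasq-nanny
  - -restartDnsmasq=true
  - --                        # everything after this is passed to dnsmasq itself
  - -k
  - --cache-size=10000        # enlarged cache, as recommended above
  - --dns-forward-max=1000    # enlarged concurrency limit
  resources:
    requests:
      cpu: 150m
      memory: 64Mi
```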

Data Persistence

In addition to the above configurations, persistent storage is essential for a high availability Kubernetes cluster.

  • For clusters deployed on public cloud, consider using persistent storage provided by the cloud platform, like AWS EBS or GCE persistent disk.

  • For clusters deployed on physical machines, consider network storage options like iSCSI, NFS, Gluster, Ceph, or even RAID.
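
For the bare-metal case, an NFS-backed PersistentVolume is one of the simplest options; a sketch assuming a hypothetical NFS server address and export path:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany                     # NFS supports shared read-write access
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.20                 # assumed NFS server address
    path: /exports/k8s                # assumed export path
```

Pods then claim this storage through a PersistentVolumeClaim, keeping application data off the (replaceable) nodes.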

Azure

On Azure, you can use AKS or acs-engine to deploy a Kubernetes cluster; refer to the AKS and acs-engine documentation for detailed deployment steps.

GCE

On GCE, you can conveniently deploy a cluster using the cluster scripts:

# KUBERNETES_PROVIDER: gce, aws, gke, azure-legacy, vsphere, openstack-heat, rackspace or libvirt-coreos
export KUBERNETES_PROVIDER=gce
curl -sS https://get.k8s.io | bash
cd kubernetes
cluster/kube-up.sh

AWS

On AWS, the recommended deployment method is kops.

Physical or Virtual Machines

On Linux physical or virtual machines, kubeadm or kubespray are recommended for deploying the Kubernetes cluster.

Note: for an etcd cluster running outside of the kubelet, refer to the etcd clustering guide to set up cluster mode manually.