Deploy a three-instance kube-controller-manager cluster. After startup, the instances compete in a leader election: one becomes the leader and the others block. When the leader becomes unavailable, the blocked instances elect a new leader, ensuring service availability.
Deployment strategy:
Deployment software plan:
Create certificate signing request:
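A sketch of the request file. The CN and O must be system:kube-controller-manager so that the built-in RBAC bindings apply; the hosts list (127.0.0.1 plus three master IPs) and the names fields below are placeholders to adapt to your environment:

```bash
# The IPs and names fields below are placeholders -- replace them
# with your actual master addresses and organization details.
cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [
    "127.0.0.1",
    "10.0.1.11",
    "10.0.1.12",
    "10.0.1.13"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:kube-controller-manager",
      "OU": "ops"
    }
  ]
}
EOF
```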
Create kube-controller-manager certificate and private key:
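A minimal sketch, assuming the ca.pem / ca-key.pem / ca-config.json files and the "kubernetes" signing profile from the earlier CA setup; adjust paths to your environment:

```bash
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
```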
The following files will be generated as a result:
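With `cfssljson -bare kube-controller-manager`, three files carrying that prefix should appear:

```bash
$ ls kube-controller-manager*
kube-controller-manager.csr  kube-controller-manager-key.pem  kube-controller-manager.pem
```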
kube-controller-manager accesses the apiserver through a kubeconfig file, which supplies the apiserver address, the CA certificate, and the kube-controller-manager client certificate.
First, determine the address at which the apiserver serves external requests (typically the HA VIP or load-balancer address).
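A sketch of building the kubeconfig; the KUBE_APISERVER value is a placeholder for your VIP or load-balancer address:

```bash
# Placeholder address -- point this at your apiserver VIP / load balancer.
export KUBE_APISERVER="https://10.0.1.10:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
```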
Use the service-account key pair generated in the kube-apiserver document: kube-apiserver uses the public key, and kube-controller-manager uses the private key.
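For reference, how the pair is split between the two components (the file paths are placeholders):

```bash
# Placeholder paths -- use the service-account key pair generated in the
# kube-apiserver document.
# On kube-apiserver (verifies ServiceAccount tokens with the public key):
#     --service-account-key-file=/etc/kubernetes/cert/sa.pub
# On kube-controller-manager (signs tokens with the private key):
#     --service-account-private-key-file=/etc/kubernetes/cert/sa-key.pem
```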
kube-controller-manager can be started and stopped as follows:
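Assuming the service is managed by a systemd unit named kube-controller-manager, as in the preceding deployment docs:

```bash
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager   # start
systemctl stop kube-controller-manager    # stop
```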
Check that the process is running normally:
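Some quick checks; note that recent releases serve on secure port 10257, while older releases used the insecure port 10252, so adjust to your version:

```bash
systemctl status kube-controller-manager

# Confirm the listening port (10257 on recent releases).
ss -lntp | grep kube-controller

# The health endpoint should print "ok".
curl -sk https://127.0.0.1:10257/healthz
```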
Check permissions:
The ClusterRole system:kube-controller-manager has very limited permissions: it can only create resource objects such as secrets and serviceaccounts. The permissions each controller actually needs are distributed across the per-controller ClusterRoles system:controller:XXX.
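These are built-in Kubernetes RBAC objects and can be inspected directly:

```bash
kubectl describe clusterrole system:kube-controller-manager

# List the per-controller roles:
kubectl get clusterroles | grep 'system:controller:'
```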
When --use-service-account-credentials=true is added to kube-controller-manager's startup parameters, the main controller creates a ServiceAccount named XXX-controller for each controller, and the built-in ClusterRoleBinding system:controller:XXX grants each XXX-controller ServiceAccount the corresponding ClusterRole system:controller:XXX permissions.
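A quick way to verify this behavior, sketched with standard kubectl queries:

```bash
# With --use-service-account-credentials=true, each controller runs as
# its own ServiceAccount in kube-system:
kubectl get serviceaccounts -n kube-system | grep -- '-controller'

# ...bound by the built-in ClusterRoleBindings:
kubectl get clusterrolebindings | grep 'system:controller:'
```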
Take the deployment controller as an example:
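Its built-in ClusterRole can be inspected directly:

```bash
kubectl describe clusterrole system:controller:deployment-controller
```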
kube-controller-manager is the brain of k8s: most of the controllers live here, and it acts as the cluster's housekeeper. The key configuration items are described below (a consolidated flag sketch follows them):
Enable leader election.
Leader election takes advantage of etcd's strong consistency to pick a single active instance; kube-controller-manager relies on this mechanism to achieve high availability.
High-availability requirement: at least two kube-controller-manager instances.
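To see which instance currently holds the lock (the object name is the standard one kube-controller-manager uses; which resource type applies depends on your version):

```bash
# Recent releases record the holder in a Lease object:
kubectl get lease kube-controller-manager -n kube-system -o yaml

# Older releases annotate an Endpoints object instead:
kubectl get endpoints kube-controller-manager -n kube-system -o yaml
```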
Pod eviction timeout (--pod-eviction-timeout): defaults to 5 minutes, configured here to 3 minutes. The timer starts once the controller detects that a node is down.
Primary eviction rate (--node-eviction-rate): a value of 0.1 means pods are evicted from at most one node every 10 seconds.
When multiple availability zones are enabled, the primary eviction rate only applies while the zone is healthy; when multiple zones are not enabled, the zone is the entire cluster.
Secondary eviction rate (--secondary-node-eviction-rate): likewise 0.1 here, i.e. pods are evicted from one node every 10 seconds once the zone becomes unhealthy.
Threshold defining a "large" cluster (--large-cluster-size-threshold): defaults to 50. When the cluster has fewer nodes than this, the secondary eviction rate is set to 0 and no eviction is performed at all!
A zone is considered unhealthy when more than 55% of its nodes fail (--unhealthy-zone-threshold; at least 3 nodes must be NotReady).
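A minimal sketch of the corresponding startup flags with the values discussed above; the required kubeconfig, certificate, and other flags are omitted:

```bash
# Sketch only -- kubeconfig, certificates and remaining flags omitted.
kube-controller-manager \
  --leader-elect=true \
  --pod-eviction-timeout=3m0s \
  --node-eviction-rate=0.1 \
  --secondary-node-eviction-rate=0.1 \
  --large-cluster-size-threshold=50 \
  --unhealthy-zone-threshold=0.55
```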
Example:
For example, suppose ZoneB has 20 nodes. When more than 20 * 0.55 = 11 nodes go down, the zone is considered unhealthy and the primary eviction rate no longer applies; because the total node count exceeds the large-cluster threshold, the secondary rate takes effect, so pods are evicted from that zone at a rate of one node every 10 seconds.
Q:
A:
Q:
A:
You need to add the following configuration to the startup command: