K8s Series 02 - Deploying a Flannel-Networked K8s Cluster with kubeadm
This article deploys a native Kubernetes v1.23.6 cluster on CentOS 7, using Docker as the container runtime and Flannel as the network component. Since the cluster is mainly for self-study and testing and resources are limited, high-availability deployment is not covered for now.

I have previously written about k8s fundamentals and several cluster-setup schemes; readers who need that background can take a look.

The machines are all 8-core/8 GB virtual machines with 100 GB disks.

All nodes in the same k8s cluster must have unique MAC addresses and product_uuid values; check this before starting cluster initialization.
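A minimal sketch of the checks suggested by the official documentation:

```bash
# verify the MAC addresses are unique across nodes
ip link show
# verify the product_uuid is unique on each node
sudo cat /sys/class/dmi/id/product_uuid
```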

If the cluster nodes have multiple network cards, make sure each node can reach the others through the correct card.

You can choose ntp or chrony for time synchronization according to your own habits; as the upstream time source you can use Alibaba Cloud's ntp1.aliyun.com or the National Time Service Center's ntp.ntsc.ac.cn.
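A minimal chrony sketch, assuming the two time sources above (this overwrites the stock /etc/chrony.conf):

```bash
yum install -y chrony
cat <<EOF > /etc/chrony.conf
server ntp1.aliyun.com iburst
server ntp.ntsc.ac.cn iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
EOF
systemctl enable --now chronyd
# confirm the sources are reachable and selected
chronyc sources -v
```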

Communication within a k8s cluster and service exposure require many ports. For convenience, we simply disable the firewall.
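For example:

```bash
# disable firewalld outright; fine for a lab cluster,
# open the specific ports instead in production
systemctl disable --now firewalld
```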

Here we mainly need to configure the kernel to load br_netfilter and let iptables see bridged IPv4 and IPv6 traffic, so that containers in the cluster can communicate normally.
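The standard settings from the official docs look like this:

```bash
# load br_netfilter now and on every boot
cat <<EOF | tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
modprobe br_netfilter

# let iptables/ip6tables see bridged traffic and enable forwarding
cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system
```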

Although newer k8s versions already support dual-stack networking, this deployment does not involve IPv6 communication, so IPv6 support is turned off.
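One common way to do this on CentOS 7 is via sysctl:

```bash
cat <<EOF | tee /etc/sysctl.d/disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
EOF
sysctl --system
```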

IPVS is a component designed specifically for load-balancing scenarios. The IPVS implementation in kube-proxy improves scalability by reducing its reliance on iptables rules: rather than matching traffic in iptables chains such as PREROUTING, kube-proxy creates a dummy interface named kube-ipvs0 and binds Service IPs to it, so that the kernel's IPVS module takes over the forwarding. As the number of load-balancing rules in a k8s cluster grows, IPVS delivers more efficient forwarding performance than iptables.
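A sketch of preparing the IPVS prerequisites on CentOS 7; note the module names are kernel-dependent:

```bash
yum install -y ipset ipvsadm
# nf_conntrack_ipv4 applies to the CentOS 7 3.10 kernel;
# on kernels >= 4.19 the module is named nf_conntrack instead
cat <<EOF | tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
systemctl restart systemd-modules-load.service
# confirm the modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack
```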

For the detailed official documentation, refer here. Because dockershim was removed in the newly released 1.24, pay attention to the choice of container runtime when installing version ≥ 1.24. The version we are installing here is lower than 1.24, so we continue to use Docker.

For the specifics of installing Docker, refer to my earlier article on the subject; I won't repeat them here.

CentOS 7 uses systemd as its init system and process manager. At boot, systemd generates and uses a root control group (cgroup) and acts as the cgroup manager; it is tightly integrated with cgroups and assigns a cgroup to each systemd unit. We could instead configure the container runtime and the kubelet to use cgroupfs, but that would put two different cgroup managers on the same system, which tends to become unstable. It is better to configure both the container runtime and the kubelet to use systemd as the cgroup driver, giving the system a single manager and more stability. For Docker, this means setting the native.cgroupdriver=systemd parameter.
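A minimal sketch for Docker; note this overwrites any existing /etc/docker/daemon.json, so merge the key in by hand if you already have other settings there:

```bash
cat <<EOF | tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload && systemctl restart docker
# confirm the driver took effect
docker info | grep -i 'cgroup driver'
```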

The k8s project has detailed official documentation on configuring the kubelet's cgroup driver. Note that starting with version 1.22, if the kubelet's cgroup driver is not set manually, kubeadm defaults it to systemd.

A relatively simple way to specify the kubelet's cgroup driver is to add the cgroupDriver field to kubeadm-config.yaml:
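A minimal kubeadm-config.yaml sketch (the v1beta3 API applies to kubeadm 1.22+):

```bash
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.23.6
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
```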

After the cluster is initialized, we can look directly at the ConfigMaps to see the kubeadm-config that was actually applied.
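For example:

```bash
# the ClusterConfiguration kubeadm actually used
kubectl -n kube-system get cm kubeadm-config -o yaml
# the kubelet configuration, including cgroupDriver
# (the name carries a version suffix before 1.24)
kubectl -n kube-system get cm kubelet-config-1.23 -o yaml
```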

Of course, since the version we are installing is newer than 1.22.0 and we are using systemd, there is no need to repeat this configuration.

The kube "three-piece set" consists of kubeadm, kubelet and kubectl, whose roles are as follows:

- kubeadm: the tool that bootstraps and manages the cluster (init, join, upgrades);
- kubelet: the agent that runs on every node, starting pods and containers and keeping them healthy;
- kubectl: the command-line client used to talk to the cluster.

It should be pointed out that kubeadm does not install or manage kubelet or kubectl for you, so you need to make sure their versions match the control-plane version you want kubeadm to deploy; otherwise version skew can lead to unexpected errors.

Installation on CentOS 7 is relatively simple: we can use the official yum repository directly. Note that the SELinux state normally needs to be set at this step, but since we turned SELinux off earlier, we skip it here.
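A sketch of the repo definition from the official docs of the v1.23 era (mainland users often swap the baseurl for the Aliyun mirror at https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/):

```bash
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
# pin the three tools to the cluster version
yum install -y kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6 --disableexcludes=kubernetes
systemctl enable --now kubelet
```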

After all the nodes in the cluster have performed the above three operations, we can start creating the k8s cluster. Since this deployment does not involve high availability, we simply run the initialization on the target master node.
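A minimal initialization sketch, assuming the Aliyun image mirror and Flannel's default pod CIDR of 10.244.0.0/16:

```bash
kubeadm init \
  --kubernetes-version=v1.23.6 \
  --image-repository=registry.aliyuncs.com/google_containers \
  --pod-network-cidr=10.244.0.0/16
```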

At this point, if we check the image list for the corresponding configuration, we will find that the images now come from the Alibaba Cloud mirror registry.
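For example:

```bash
# list the control-plane images kubeadm will pull, assuming the Aliyun mirror
kubeadm config images list \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.23.6
```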

When kubeadm prints "Your Kubernetes control-plane has initialized successfully!", our cluster has been initialized successfully.

Right after a successful initialization we cannot query cluster information immediately; we first need to set up kubeconfig so that kubectl can connect to the apiserver and read cluster state.
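These are the same steps kubeadm prints after a successful init:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```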

After the configuration is complete, we can view cluster information with a few commands.
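For example:

```bash
kubectl cluster-info
kubectl get nodes -o wide
kubectl get pods -A
```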

At this point we add the remaining two nodes as worker nodes to carry the workload: running the join command printed when the cluster initialized successfully on each remaining node will join it to the cluster:
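The join command looks like the following; the address, token and hash are placeholders, so use the values from your own init output:

```bash
kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```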

It doesn't matter if you accidentally failed to save the output of the successful initialization; we can use kubeadm to view existing tokens or generate a new join command.
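For example:

```bash
# list existing bootstrap tokens
kubeadm token list
# create a new token and print the full join command
kubeadm token create --print-join-command
```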

Looking at the cluster nodes again, we can see that two more nodes have appeared, but their status is still NotReady; next we need to deploy the CNI.

Flannel is probably the CNI plugin with the lowest barrier to entry among the many open-source options: deployment is simple, the principle is easy to understand, and documentation is abundant.

In the kube-flannel.yml file, we need to modify a few parameters to adapt it to our cluster:
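A sketch of fetching the manifest and locating the field that usually needs checking; the Network CIDR must match the --pod-network-cidr passed to kubeadm init, and on multi-NIC nodes you may also add an --iface argument to the flanneld container:

```bash
# URL valid at the time of writing; pin a release tag if you prefer
wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
# confirm the Network field matches the pod CIDR (10.244.0.0/16 here)
grep -A 6 'net-conf.json' kube-flannel.yml
```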

We can deploy directly after the modification is completed.
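For example:

```bash
kubectl apply -f kube-flannel.yml
# depending on the manifest version the pods land in kube-system or kube-flannel
kubectl get pods -A -o wide | grep flannel
# once flannel is up, the nodes should flip to Ready
kubectl get nodes
```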

After the cluster deployment is complete, we deploy an nginx in the k8s cluster to check that everything works. First we create a namespace named nginx-quic, then a Deployment named nginx-quic-deployment in that namespace to run the pods, and finally a Service to expose them; for this test we use a NodePort to expose the port.
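A sketch of the three objects; the Service name, replica count, image name and NodePort below are my assumptions, not values from the original:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-quic
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-quic-deployment
  namespace: nginx-quic
spec:
  replicas: 2                    # assumed replica count
  selector:
    matchLabels:
      app: nginx-quic
  template:
    metadata:
      labels:
        app: nginx-quic
    spec:
      containers:
      - name: nginx-quic
        image: nginx-quic:latest # hypothetical image name; substitute the actual nginx-quic image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-quic-service      # assumed Service name
  namespace: nginx-quic
spec:
  type: NodePort
  selector:
    app: nginx-quic
  ports:
  - port: 8080
    targetPort: 80
    nodePort: 30880             # hypothetical NodePort in the 30000-32767 range
EOF
```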

After the deployment is completed, we can directly check the status.
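For example:

```bash
kubectl -n nginx-quic get pods -o wide
kubectl -n nginx-quic get svc
```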

Finally, we test it: by default, the nginx-quic image returns the requesting client's IP and port from inside the nginx container.
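A quick check from any machine that can reach a node:

```bash
# <node-ip> is any cluster node's address; 30880 is the NodePort assumed above
curl http://<node-ip>:30880
```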