I Think We Can Learn K8S (Kubernetes) from This Article!!! (2021-10-28)
Kubernetes is Google's open-source distributed container management platform, designed to make it easier to manage containerized applications on our servers.

Kubernetes is called K8S for short. Why this name? Because K and S are the first and last letters of Kubernetes, and there are eight letters between them. The full name is a mouthful, so it is generally abbreviated to K8S.

Kubernetes is not only a container orchestration tool but also a brand-new distributed architecture built on container technology. On top of Docker, it covers the whole chain from creating an application to running it as a service: application deployment, service exposure, dynamic scaling, and application updates, which makes container cluster management far more convenient.
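To make that concrete, here is a minimal sketch of a K8S Deployment; the name my-app and the nginx image are hypothetical placeholders. Declaring a replica count and a rolling-update strategy is how K8S takes over deployment, scaling, and updating for us:

```yaml
# A minimal, hypothetical Deployment: 3 replicas of one nginx container,
# updated via rolling update so the service stays available.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                 # dynamic scaling: change this and re-apply
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate       # application updates without downtime
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: web
          image: nginx:1.21   # hypothetical image and version
          ports:
            - containerPort: 80
```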

You can look at the picture below first: a single server running mysql, redis, tomcat, nginx, and so on. To set it up, you have to install each piece by hand, one at a time. That seems acceptable; after all, there is only one machine. It is a bit tedious, but nothing that will hold you up.

However, as technology and business needs have grown, a single server can no longer meet our daily requirements. More and more companies need a cluster environment and multi-container deployments. If everything is still deployed one machine at a time, operations staff will go crazy, spending whole days doing nothing but deployments. And sometimes one wrong step means redeploying everything from scratch, which is enough to make you cough up blood, as shown in the figure below:

Now suppose I want to deploy the following machines:

Deploying them one by one would drive anyone insane. When would it ever end? And if it were the 20,000 machines of some tech giant, should we just hand in our resignation letters on the spot? This is exactly what K8S does for us: it makes containers easy to manage, deploys applications automatically, reduces repetitive work, and even self-heals.

K8S also has excellent support for microservices: a microservice can scale up and down as the system's load changes, and K8S's built-in elastic scaling mechanism can handle sudden bursts of traffic.
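As a sketch of that elasticity, assuming the hypothetical my-app Deployment above and a metrics server running in the cluster, a HorizontalPodAutoscaler could look like this:

```yaml
# Hypothetical autoscaler: keep average CPU near 80%, scaling my-app
# between 3 and 10 replicas as traffic spikes and subsides.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```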

Docker Compose is used to manage containers, something like a container housekeeper. When we have dozens of containers or applications to start, doing it manually is very time-consuming; with Docker Compose, a single configuration file does it all for us. But Docker Compose can only manage Docker on the current host; it cannot touch services on other servers. In other words, it is a single-machine tool.
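For comparison, here is a minimal, hypothetical docker-compose.yml; one `docker compose up` starts all of these services, but only on this one machine:

```yaml
# docker-compose.yml (hypothetical): a single command starts every
# service below -- but only on the current host.
services:
  web:
    image: nginx:1.21
    ports:
      - "80:80"
  cache:
    image: redis:6
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder secret, not for real use
```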

Docker Swarm is a tool developed by Docker Inc. for managing Docker containers across a cluster, which makes up for Docker Compose's single-node limitation. Docker Swarm can help us start containers and monitor their state; if a container's service goes down, it can start a new container to replace it, keeping external services available, and it supports load balancing between services. Docker Compose supports none of this.

Kubernetes and Docker Swarm have the same role positioning; that is, they cover the same part of the container field, though naturally with some differing features. Kubernetes is Google's own product and has matured through a great deal of internal practice and experimentation. As a result, Kubernetes has become the leader in container orchestration: its configurability, reliability, and broad community support have all surpassed Docker Swarm, and as a Google open-source project, it works with the entire Google Cloud platform.

The picture below shows a K8S cluster containing three hosts; each square here is a physical or virtual machine. Together these three machines form a complete cluster, and by role they can be divided into two types: the Master (control plane) and the Nodes (workers).

To use a vivid analogy, we can think of a Pod as a pea pod and the containers as the peas inside, living one shared, symbiotic life.

What's in the pod?

How many containers to deploy in one Pod is a choice we make based on the characteristics of our project and how its resources are allocated.

Pause container:

The pause container's full name is the infrastructure container (also called infra). It is started first in every Pod, and the Pod's other containers are derived from it; it is mandatory for every Pod.

The application containers in a Pod share the same set of resources:

In the figure above, without the pause container, if the Nginx and Ghost containers in the Pod wanted to communicate, they would each have to use the other's IP address and port. With the pause container, we can treat the whole Pod as a single unit: Nginx and Ghost can reach each other directly via localhost, the only distinction being the port. That may sound simple, but a lot happens underneath at the network layer to make it work; interested readers can dig into it themselves.
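Here is a minimal sketch of that idea; the names and images are placeholders. Both containers join the pause container's network namespace, so nginx can reach ghost at localhost:2368:

```yaml
# Hypothetical two-container Pod: because both containers share the
# pause container's network namespace, nginx reaches ghost at
# localhost:2368, and they only need to avoid port clashes.
apiVersion: v1
kind: Pod
metadata:
  name: blog
spec:
  containers:
    - name: nginx
      image: nginx:1.21
      ports:
        - containerPort: 80
    - name: ghost
      image: ghost:4        # Ghost listens on 2368 by default
      ports:
        - containerPort: 2368
```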

In Kubernetes, each Pod is assigned its own IP address, but Pods do not interact with each other directly. If they want to communicate over the network, they must go through another component: the Service.

A Service is exactly what it sounds like. In K8S, the Service's main job is to connect Pods on different hosts so that Pods can communicate normally.

We can picture a Service as a domain name: the Pods behind the same Service each have different IP addresses, and the Service is defined by a label selector.

Using NodePort to expose a service externally, you only need to open one real port on every node's host; clients can then reach the internal service through that port on any node.
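Putting those two points together, here is a hedged sketch of a NodePort Service; all names and port numbers are hypothetical. It selects Pods by label and opens port 30080 on every node:

```yaml
# Hypothetical NodePort Service: selects Pods labeled app=my-app and
# forwards node port 30080 -> service port 80 -> container port 80.
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
spec:
  type: NodePort
  selector:
    app: my-app          # the label selector that defines the Service
  ports:
    - port: 80           # the Service's own port
      targetPort: 80     # the container port on each Pod
      nodePort: 30080    # real port opened on every node (30000-32767)
```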

Labels are usually attached to all kinds of objects as key-value pairs. They are descriptive tags and play an important role: when deploying containers, we need to search and filter for the containers we want to operate on. You can think of a Label as an alias for each Pod; only by knowing that name can K8S's master node find the corresponding Pod and operate on it.
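The labels themselves are just key-value entries in an object's metadata. For example (all values hypothetical), a Pod that the Service sketched above would select might look like this:

```yaml
# Hypothetical labeled Pod: the Service's selector (app: my-app)
# matches the first label; the others are extra filtering handles,
# e.g. kubectl get pods -l tier=frontend
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod-1
  labels:
    app: my-app
    tier: frontend
spec:
  containers:
    - name: web
      image: nginx:1.21
```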

The user submits a request to create a replication controller through kubectl, and the request is written into etcd via the API server. The controller manager, watching through the API server, notices the newly created object; after analyzing it, it finds that no corresponding Pod instances exist in the current cluster, so it quickly creates Pod objects according to the replication controller's template and writes them into etcd through the API server.

The Scheduler then discovers these Pods: freshly created, with no node to run on, effectively "unemployed." It immediately runs its elaborate scheduling process, picks a node for each new Pod to settle on, finally giving it a home, and writes that result into etcd through the API server. Afterwards, Kubelet, the housekeeper running on that node, detects the newborn Pod through the API server, starts it according to its definition, and takes care of it for its entire life cycle, right up to the end.
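The replication controller "template" this walkthrough keeps referring to is itself just a declarative YAML object. Here is a minimal, hypothetical sketch (ReplicationController is the older API described in this article; Deployments are its modern successor):

```yaml
# Hypothetical ReplicationController: the controller manager compares
# "replicas: 2" with what exists in etcd and creates any missing Pods
# from the template below.
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-app-rc
spec:
  replicas: 2
  selector:
    app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: web
          image: nginx:1.21
```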

Then we submit, through kubectl, a request to create a new Service mapped to this Pod. The controller manager queries the related Pod instances via the Label selector, generates the Service's endpoint information, and writes it into etcd through the API server. Next, the kube-proxy agent processes running on all nodes query and watch the Service objects and their corresponding endpoint information through the API server, and build a software load balancer that forwards traffic from Service accesses to the backend Pods.

Kube-proxy: a proxy that handles communication between multiple hosts. As mentioned earlier, the Service enables network communication between hosts and containers; technically, this is implemented by kube-proxy. The Service groups Pods logically, while the underlying communication goes through kube-proxy.

Kubelet: executes K8S's instructions on each node and is one of K8S's core components. It is responsible for the full life-cycle management of Pods on its node: creation, modification, monitoring, and deletion. Kubelet also regularly "reports" the node's status information to the API server.

Etcd: persistently stores all of the cluster's resource objects. The API server provides encapsulated interfaces for operating on etcd; these APIs are essentially interfaces for manipulating resource objects and watching for resource changes.

API server: provides the entry point for operating on resource objects; every other component must go through its API to work with resource data. Through "full queries" and "change watches" on the relevant resource data, the related business functions can be carried out in real time.

Scheduler: responsible for scheduling and distributing Pods across the cluster's nodes.

Controller Manager: the cluster's internal management and control center, mainly responsible for automating fault detection and recovery in a Kubernetes cluster. For example, replicating and deleting Pods, creating and updating Endpoints, and discovering, managing, and monitoring Nodes are all handled by the controller manager.

That covers the basics of K8S; if you enjoyed it, remember to follow. Compared with Docker's own tooling, K8S is more fully featured: a relatively mature and well-rounded system honed by Google through extensive practice.

If you have any questions or doubts about K8S, please leave a comment and let me know.

I'm a small farmer, a humble migrant worker. If you found this article helpful, remember that a one-click triple (like, favorite, share) is this small farmer's biggest motivation.