The Linux server is overloaded: understanding Linux server load
What should you do when a Linux server freezes?

First, determine the cause of the freeze. Generally speaking, the most likely cause is excessive system load: a program or application that consumes a large amount of CPU or memory is running.

In that case, press Ctrl+Alt+F1 to switch to a TTY text console, type top at the prompt, and press Enter. You will see which processes and applications are consuming how much of each resource. To stop a runaway program, note its PID and run kill with that PID.

Besides high load, low-level software errors can occasionally cause freezes. If the machine is still slow or unresponsive after you close the offending program, try rebooting it.
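The inspection step above can also be done without the interactive top screen. A minimal sketch, using ps in place of top (the PID 12345 below is a placeholder, not a real process):

```shell
# List the five most CPU-hungry processes (a batch-mode alternative to top)
ps -eo pid,comm,%cpu,%mem --sort=-%cpu | head -n 6

# Then terminate the offender by its PID (12345 is a placeholder):
#   kill 12345        # polite SIGTERM first
#   kill -9 12345     # SIGKILL only if the process ignores SIGTERM
```

Sending SIGTERM first gives the program a chance to shut down cleanly; SIGKILL should be a last resort.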

What are the daily inspection items for a Linux server?

1. uptime - quickly shows the machine's load averages.

2. dmesg - shows recent kernel log messages; dmesg | tail prints the last 10 lines.

3. vmstat - vmstat(8) prints a line of core system metrics per interval, giving a more detailed view of the system's state.

4. mpstat - shows the utilization of each CPU.

5. pidstat - shows per-process CPU usage.

6. iostat - shows disk I/O statistics.

7. free - shows system memory usage.

8. sar - shows, among other things, the throughput of network devices.

9. top - summarizes much of the information reported by the commands above.
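The checklist above can be strung together into a single script. This is only a sketch: mpstat, pidstat, iostat, and sar come from the sysstat package and may not be installed everywhere, so their failures are silenced rather than aborting the check.

```shell
#!/bin/sh
# Quick daily health check (sketch; sysstat tools may be absent)
uptime                              # load averages
dmesg 2>/dev/null | tail -n 10      # last 10 kernel log lines (may need root)
vmstat 1 3 2>/dev/null              # core metrics: 3 samples, 1s apart
mpstat 2>/dev/null                  # per-CPU utilization
iostat 2>/dev/null                  # disk I/O statistics
free -h                             # memory usage, human-readable
```

Running it once a day and comparing against the previous output makes gradual load growth easy to spot.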

How do you balance load across multiple network cards on a Linux server?

◆ Application server load balancing

If the load-balancing layer is moved from the client to an intermediate platform, forming a three-tier structure, client applications need no special modification: the middle-tier application server transparently distributes requests to the appropriate service nodes. The common implementation is reverse proxying. A reverse proxy server forwards requests evenly to multiple back-end servers and can return cached data directly to the client. This acceleration mode improves access speed for static web pages to some extent while also achieving load balancing.

The advantage of a reverse proxy is that it combines load balancing with the proxy server's caching, which benefits performance. But it has problems. First, it is not trivial to develop a reverse proxy for every service. Although the proxy itself can be highly efficient, it must maintain two connections per proxied request, one external and one internal, so under extremely high connection rates the proxy's own load becomes very heavy. A reverse proxy can implement load-balancing strategies optimized for the application protocol, for example always directing each request to the most idle internal server. However, as the number of concurrent connections grows, the proxy's own load grows with it, and eventually the reverse proxy itself becomes the service bottleneck.

◆ Load balancing based on the Domain Name System

NCSA's scalable Web cluster was the earliest Web system to use round-robin DNS. In DNS, multiple addresses are configured under the same name, so each client that queries the name receives one of the addresses; different clients thus reach different servers, achieving load balancing.
This technique has been adopted by many well-known websites, including the early Yahoo site and 163. Round-robin DNS is easy to implement, needing no complex configuration or management, and it runs on virtually any Unix-like system with BIND 8.2 or later, so it is widely used.

DNS load balancing is simple and effective, but it has several problems. First, the name server cannot tell whether a service node is alive: if a node fails, DNS will still resolve the name to it, and those users' requests will fail. Second, because of the DNS Time-To-Live (TTL) value, once the TTL expires other DNS servers must contact the authoritative server again for the address data, and may receive a different IP address. To spread addresses more randomly, the TTL should therefore be as short as possible, so that DNS servers elsewhere refresh their records frequently; but setting the TTL too short sharply increases DNS traffic and creates additional network problems. Finally, round-robin DNS cannot distinguish between servers or reflect their current running state. When using it, one can only try to ensure that different clients receive the different addresses evenly. For example, user A may browse just a few pages while user B downloads heavily; because DNS has no load-aware policy and simply takes turns, it can easily send user A to a lightly loaded site and user B to one that is already overloaded. In terms of dynamic balancing, then, round-robin DNS is far from ideal.
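The round-robin assignment that DNS polling approximates can be sketched in a few lines of portable shell (the server addresses below are made-up examples):

```shell
#!/bin/sh
# Round-robin selection across three back-end servers (made-up IPs).
# Each successive request gets the next address, wrapping around.
i=0
for request in a b c d e f; do
  case $((i % 3)) in
    0) target=10.0.0.1 ;;
    1) target=10.0.0.2 ;;
    2) target=10.0.0.3 ;;
  esac
  echo "request $i -> $target"
  i=$((i + 1))
done
```

Note that this policy is blind to server health and load, which is exactly the weakness of round-robin DNS described above.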
Besides the methods above, the content-switching technology of higher-level protocols, known as URL switching or layer-7 switching, also supports load balancing within the protocol and offers a high-level way to control access traffic. Web content switching examines the HTTP headers and makes load-balancing decisions based on the information in them: for example, it can decide how to serve personal home pages versus image data, using mechanisms such as HTTP's redirection capability.

HTTP runs on top of a TCP connection. The client first connects to the server via TCP, conventionally on port 80, and then sends its HTTP request over that connection. Content switching controls load according to a content policy rather than the TCP port number, so it does not hold up access traffic. Because the load balancer distributes incoming requests to multiple servers, the balancing decision cannot be made when the TCP connection is established; it can only be made after the HTTP request has arrived. When a website receives hundreds or even thousands of hits per second, the latency of TCP connection setup, HTTP header analysis, and request processing becomes critical, and every effort should be made to optimize these stages.

HTTP requests and headers carry a great deal of information useful for load balancing; from them we can learn which URL and page the client is requesting. With this information, the balancer can direct all mirror requests to a mirror server, or route requests whose URLs indicate database queries (for example, CGI calls) to a dedicated high-performance database server.
A network administrator familiar with content switching can also use the cookie field in the HTTP header to improve service for specific customers, and can exploit any other patterns found in HTTP requests to make various routing decisions. Besides the TCP connection-table problem, how to locate the relevant HTTP header information and how quickly the balancing decision is made are the key factors affecting the performance of Web content switching. If the Web servers have been optimized for specialized workloads such as image service, SSL sessions, or database transactions, traffic control at this level can improve overall network performance.

◆ Network access protocol switching

Large networks are generally built from many specialized devices: firewalls, routers, layer-3 and layer-4 switches, load balancers, cache servers, Web servers, and so on. How to combine these devices organically is a key problem that directly affects network performance. Many switches now provide layer-4 switching: they present a single IP address externally and map it to multiple internal IP addresses. For each incoming TCP or UDP connection request, the switch dynamically selects an internal address according to the port number and the configured policy, and forwards the packets there, achieving load balancing.
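As a toy illustration of the URL-based dispatch a layer-7 switch performs, the routing decision can be sketched as a shell function (the backend names are hypothetical placeholders):

```shell
#!/bin/sh
# URL-based dispatch: the core idea behind layer-7 (content) switching.
# Backend names are hypothetical placeholders.
route() {
  case "$1" in
    /images/*)  echo image-server ;;   # mirror/image requests
    /cgi-bin/*) echo db-server ;;      # CGI/database queries
    *)          echo web-server ;;     # everything else
  esac
}
route /images/logo.png    # -> image-server
route /cgi-bin/search     # -> db-server
route /index.html         # -> web-server
```

A real layer-7 device makes the same kind of decision, but only after the TCP handshake completes and the HTTP request line arrives, which is why header-parsing speed matters at high request rates.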

How does Linux judge whether load is high?

Load is an important metric of a Linux machine and directly reflects its current state. If the load is too high, the machine becomes difficult to operate.

High load on Linux mainly comes from three sources: CPU utilization, memory utilization, and I/O. Excessive use of any one of them will cause a sharp rise in server load.

Many commands can show the server load; w and uptime both display it directly.

$ uptime
 12:20:30 up 44 days, 21:46, 2 users, load average: 8.99, 7.55, 5.40

$ w
 12:22:02 up 44 days, 21:48, 2 users, load average: 3.96, 6.28, 5.16

The three load average values correspond to the average load over the last 1, 5, and 15 minutes respectively.
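These same numbers can be read straight from the kernel via /proc/loadavg. A minimal sketch, using the common rule of thumb (an assumption, not a hard rule) that sustained 1-minute load above the number of CPU cores indicates overload:

```shell
#!/bin/sh
# Read the 1-, 5- and 15-minute load averages from /proc/loadavg,
# then compare the 1-minute value against the CPU count.
cores=$(nproc)
read l1 l5 l15 rest < /proc/loadavg
echo "cores=$cores load: 1min=$l1 5min=$l5 15min=$l15"
# Rule-of-thumb threshold: 1-minute load above core count = overloaded
awk -v l="$l1" -v c="$cores" 'BEGIN { exit !(l > c) }' \
  && echo "load is high" || echo "load is ok"
```

Comparing against the core count matters because a load of 8 is alarming on a 2-core box but perfectly healthy on a 32-core one.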