HTTP is short for HyperText Transfer Protocol. It is an application-layer protocol consisting of requests and responses, follows the standard client-server model, and is stateless.
First, two concepts: side effects and idempotence. A side effect is a change to a resource on the server; a search has no side effects, while a registration does. Idempotent means that sending the request M times or N times (M ≠ N, both greater than 1) leaves the server's resources in the same state: registering 10 versus 11 accounts is not idempotent, while updating the same article 10 versus 11 times is. In terms of standard usage, GET is mostly used in side-effect-free, idempotent scenarios such as keyword search, while POST is mostly used in scenarios that have side effects and are not idempotent, such as registration.
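For illustration, here is a minimal sketch using the Fetch API; the /search and /register endpoints are made up for the example.

```typescript
// Hypothetical endpoints, only to illustrate the convention.
async function demo(): Promise<void> {
  // GET: no side effects, idempotent -- repeating the request
  // does not change any resource on the server.
  await fetch('/search?q=http');

  // POST: has a side effect and is not idempotent -- sending it
  // twice creates two accounts.
  await fetch('/register', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ user: 'alice', password: 'secret' }),
  });
}
```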
1xx: indicates that the request has been received and processing continues.
HTTP uses a request-response model and runs on top of TCP. In normal (non-keep-alive) mode, each request/response pair requires establishing a new connection, which is closed immediately after the exchange completes.
When Connection: keep-alive is used (also known as a persistent connection, or connection reuse), the connection between client and server remains valid: the underlying TCP connection is not closed, so subsequent requests to the same server avoid re-establishing a connection.
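As a rough sketch in Node.js, connection reuse can be seen by sharing an http.Agent with keepAlive enabled; the host example.com and the paths are placeholders.

```typescript
import * as http from 'node:http';

// One agent, reused across requests: the underlying TCP connection
// stays open instead of being re-established every time.
const agent = new http.Agent({ keepAlive: true });

function get(path: string): void {
  http.get({ host: 'example.com', path, agent }, (res) => {
    res.resume(); // drain the response so the socket can be reused
    console.log(path, res.statusCode);
  });
}

get('/a');
get('/b'); // reuses the same TCP connection thanks to keep-alive
```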
With pipelining, requests and responses no longer strictly alternate: the client can send several requests in a row and then receive several responses in a row.
When the client sends a request, it declares the data formats it can accept and related constraints; when the server receives the request, it uses this information to decide what kind of data to return.
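A minimal sketch of this negotiation on the server side, assuming a plain Node.js http server: the Accept header sent by the client decides which representation is returned.

```typescript
import * as http from 'node:http';

// The server inspects the Accept header and chooses the response format.
http.createServer((req, res) => {
  const accept = req.headers['accept'] ?? '';
  if (accept.includes('application/json')) {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ message: 'hello' }));
  } else {
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end('<p>hello</p>');
  }
}).listen(3000);
```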
CSP (Content-Security-Policy)
Example:
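For example, a Node.js server might send a policy like the following; the directive values and the CDN host are illustrative.

```typescript
import * as http from 'node:http';

// Illustrative policy: only allow resources from the page's own
// origin, plus scripts from a (hypothetical) trusted CDN.
http.createServer((req, res) => {
  res.setHeader(
    'Content-Security-Policy',
    "default-src 'self'; script-src 'self' https://cdn.example.com"
  );
  res.end('ok');
}).listen(3000);
```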
Compared with HTTP/1.x, HTTP/2 greatly improves web performance.
HTTP/2 transmits data in a binary format instead of the text format of HTTP/1.x, which is more efficient to parse. Multiplexing replaces HTTP/1.x's ordering and blocking mechanisms: all requests to the same domain are completed concurrently over a single TCP connection.
This is the core of all the performance improvements in HTTP/2. Previous versions of HTTP transmitted data as text; HTTP/2 introduces a new encoding mechanism in which all transmitted data is split up and encoded in binary.
In HTTP/1.x, concurrent requests require multiple TCP connections, and to limit resource usage browsers cap this at around 6-8 connections per domain. In HTTP/2, all requests to the same domain share a single TCP connection.
In HTTP/2 there are two very important concepts: frames and streams. A frame is the smallest unit of data transmission, and each frame carries an identifier for the stream it belongs to; a stream is a data flow composed of multiple frames.
Multiplexing means that multiple streams can exist within one TCP connection. In other words, multiple requests can be in flight at once, and the peer can tell which request each frame belongs to from the stream identifier it carries. This technique avoids the head-of-line blocking problem of older HTTP versions and greatly improves transmission performance.
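A sketch of multiplexing from the client side using Node's http2 module; example.com and the paths are placeholders. All three requests travel over the same session (one TCP connection), and the stream identifiers keep the responses apart.

```typescript
import * as http2 from 'node:http2';

// One session (connection) to the origin; several streams
// (requests) are multiplexed over it concurrently.
const client = http2.connect('https://example.com');

let pending = 0;

function request(path: string): void {
  pending++;
  const req = client.request({ ':path': path });
  req.setEncoding('utf8');
  req.on('data', () => { /* consume the body */ });
  req.on('end', () => {
    console.log('finished', path);
    if (--pending === 0) client.close(); // all streams done
  });
  req.end();
}

// All three requests share the same TCP connection; the stream
// identifier in each frame tells the peer which request it belongs to.
request('/');
request('/style.css');
request('/app.js');
```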
In HTTP/1.x, headers are transmitted as text. When a header carries cookies, hundreds to thousands of bytes may need to be retransmitted with every request.
In HTTP/2, the HPACK compression format is used to encode transmitted headers, reducing their size. Both ends maintain an index table of headers that have already appeared; afterwards, only the key of a recorded header needs to be transmitted, and the peer looks up the corresponding value in its table.
In HTTP/2, the server can proactively push additional resources after a single request from the client.
Imagine the following situation: the client is certain to request particular resources. In this case, server push can be used to send those necessary resources to the client in advance, reducing latency.
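A sketch of server push with Node's http2 module, assuming key.pem and cert.pem exist for TLS: when the page is requested, the server pushes the stylesheet it expects the client to need.

```typescript
import * as http2 from 'node:http2';
import * as fs from 'node:fs';

const server = http2.createSecureServer({
  key: fs.readFileSync('key.pem'),
  cert: fs.readFileSync('cert.pem'),
});

server.on('stream', (stream, headers) => {
  if (headers[':path'] === '/') {
    // Push the stylesheet before the client even asks for it.
    stream.pushStream({ ':path': '/style.css' }, (err, pushStream) => {
      if (err) return;
      pushStream.respond({ ':status': 200, 'content-type': 'text/css' });
      pushStream.end('body { margin: 0; }');
    });
    stream.respond({ ':status': 200, 'content-type': 'text/html' });
    stream.end('<link rel="stylesheet" href="/style.css">');
  }
});

server.listen(8443);
```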
Of course, if the browser supports it, you can also use prefetch.
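If you go the prefetch route, a hint can be added from script (or written directly as a link tag in the HTML); the URL below is a placeholder.

```typescript
// Hint the browser to fetch a resource it will likely need soon.
const hint = document.createElement('link');
hint.rel = 'prefetch';
hint.href = '/assets/next-page.js'; // placeholder URL
document.head.appendChild(hint);
```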
The HTTPS handshake process is relatively complicated. You first need to understand two concepts: symmetric encryption and asymmetric encryption.
Symmetric encryption means that both communicating parties use the same key for encryption and decryption. It is simple and performs well, but it cannot solve the problem of delivering the key to the other party securely the first time; the key can easily be intercepted by an attacker.
Asymmetric encryption is more secure, but it is slow and hurts performance.
HTTPS therefore combines the two approaches, using asymmetric encryption to exchange the key for symmetric encryption: the symmetric key is encrypted with the other party's public key and sent over, and the other party decrypts it with its private key to obtain the symmetric key. After that, the two parties communicate using symmetric encryption.
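The idea can be sketched with Node's crypto module: RSA is used only to deliver a random AES key, and the actual data is then encrypted symmetrically. The key sizes and message contents are placeholders.

```typescript
import * as crypto from 'node:crypto';

// The server owns an RSA key pair; the client knows the public key.
const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', {
  modulusLength: 2048,
});

// Client side: pick a random symmetric key and send it encrypted
// with the server's public key.
const aesKey = crypto.randomBytes(32);
const wrappedKey = crypto.publicEncrypt(publicKey, aesKey);

// Server side: recover the symmetric key with the private key.
const recoveredKey = crypto.privateDecrypt(privateKey, wrappedKey);

// From here on, both sides use fast symmetric encryption (AES-256-GCM).
const iv = crypto.randomBytes(12);
const cipher = crypto.createCipheriv('aes-256-gcm', recoveredKey, iv);
const encrypted = Buffer.concat([
  cipher.update('hello over HTTPS'),
  cipher.final(),
]);
console.log(encrypted.toString('hex'));
```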
This, however, introduces a new problem: the man-in-the-middle attack.
If there is a man in the middle between the client and the server, he only needs to replace the public key the two parties exchange with his own to easily decrypt all of their traffic.
At this point, a certificate issued by a trusted third party (a CA) is needed to prove the server's identity. This certificate contains information such as the issuer, the validity period, and the server's public key.
But this alone is not secure enough: if the man in the middle tampers with the certificate, the proof of identity becomes worthless.
So another technique is needed: the digital signature.
A digital signature is created by the CA hashing the certificate contents with its hash algorithm to produce a digest, and then encrypting (signing) that digest with the CA's private key.
When someone presents a certificate, the receiver hashes the certificate contents with the same hash algorithm to produce a digest, decrypts the digital signature with the CA's public key to obtain the digest created by the CA, and compares the two. If they differ, the certificate has been tampered with by a man in the middle.
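The sign-and-verify idea can be sketched with Node's crypto module; in reality the CA's key pair and the certificate contents come from the PKI rather than being generated in code.

```typescript
import * as crypto from 'node:crypto';

// Stand-in for the CA's key pair.
const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', {
  modulusLength: 2048,
});

// Stand-in for the certificate contents.
const certificateContents = 'subject=example.com; publicKey=...; validity=...';

// "CA side": hash the certificate contents and sign the digest
// with the CA's private key.
const signature = crypto.sign('sha256', Buffer.from(certificateContents), privateKey);

// "Client side": hash the received contents again and check the
// signature with the CA's public key; any tampering makes this fail.
const ok = crypto.verify(
  'sha256',
  Buffer.from(certificateContents),
  publicKey,
  signature
);
console.log(ok); // true if the certificate was not tampered with
```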
With all of this in place, the security of the communication can be guaranteed to the greatest extent possible.