How to optimize an ultra-high-concurrency read/write scenario?
A write-heavy, read-light scenario: suppose Didi has 1,000,000 (100w) drivers, whose information is stored in a map<driver_id, driver_info>. Each driver reports its latitude and longitude every 5 seconds, so the write QPS on this map is 200,000/s (20w). The daily order volume is 10,000,000 (1000w), which averages out to roughly 1,000 read QPS. In short: 200k write QPS versus 1k read QPS.

A straightforward implementation protects a single driver-info map with one read-write lock.
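That single-lock implementation can be sketched in Go as follows (the `DriverInfo` fields are assumptions; the article does not define them):

```go
package main

import (
	"fmt"
	"sync"
)

// DriverInfo is a hypothetical record; latitude/longitude fields are assumed.
type DriverInfo struct {
	Lat, Lng float64
}

// driverStore guards one big map with a single RWMutex — the naive
// implementation whose coarse-grained lock becomes the bottleneck.
type driverStore struct {
	mu      sync.RWMutex
	drivers map[int64]DriverInfo
}

func newDriverStore() *driverStore {
	return &driverStore{drivers: make(map[int64]DriverInfo)}
}

// Set takes the write lock: every reporting driver contends here.
func (s *driverStore) Set(id int64, info DriverInfo) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.drivers[id] = info
}

// Get takes the read lock; readers block whenever any write is in flight.
func (s *driverStore) Get(id int64) (DriverInfo, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	info, ok := s.drivers[id]
	return info, ok
}

func main() {
	s := newDriverStore()
	s.Set(42, DriverInfo{Lat: 39.9, Lng: 116.4})
	info, ok := s.Get(42)
	fmt.Println(ok, info.Lat, info.Lng)
}
```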

In the write-heavy, read-light scenario above, this implementation makes the read-write lock the performance bottleneck: the lock granularity is too coarse, and every operation on any driver contends for the same lock.

Optimization approaches:

1. Give each driver's record its own lock and store the records in an array, with array[driver_id] = driver_info. This works when the ID space is small; if the ID space is very large, pre-allocating the array and its locks wastes a lot of memory.
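The per-record-lock idea can be sketched like this (the array capacity and `DriverInfo` fields are illustrative assumptions; note how the up-front allocation is the memory cost the text warns about):

```go
package main

import (
	"fmt"
	"sync"
)

// maxDrivers is an assumed capacity; real driver IDs would need to fit
// below it (or be remapped into this range).
const maxDrivers = 1 << 20

type DriverInfo struct {
	Lat, Lng float64
}

// slot pairs each driver's record with its own lock, so writers for
// different drivers never contend with one another.
type slot struct {
	mu   sync.Mutex
	info DriverInfo
}

// drivers is allocated up front for every possible ID — this is the
// memory overhead that makes the scheme impractical for huge ID spaces.
var drivers = make([]slot, maxDrivers)

func setDriver(id int, info DriverInfo) {
	drivers[id].mu.Lock()
	drivers[id].info = info
	drivers[id].mu.Unlock()
}

func getDriver(id int) DriverInfo {
	drivers[id].mu.Lock()
	defer drivers[id].mu.Unlock()
	return drivers[id].info
}

func main() {
	setDriver(7, DriverInfo{Lat: 31.2, Lng: 121.5})
	fmt.Println(getDriver(7).Lat, getDriver(7).Lng)
}
```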

2. Horizontal partitioning (sharding): hash by driver_id, e.g. modulo 10,000, splitting the drivers into 10,000 (1w) groups, each stored in its own map guarded by its own lock. Contention on any one lock drops by roughly the shard count, so this approach handles large data sets well.
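A minimal sketch of the sharded layout, using the article's modulus of 10,000 (field names are assumptions):

```go
package main

import (
	"fmt"
	"sync"
)

// numShards matches the article's example: driver_id modulo 10,000.
const numShards = 10000

type DriverInfo struct {
	Lat, Lng float64
}

// shard is one of the 1w groups: its own map, its own lock.
type shard struct {
	mu      sync.RWMutex
	drivers map[int64]DriverInfo
}

var shards [numShards]*shard

func init() {
	for i := range shards {
		shards[i] = &shard{drivers: make(map[int64]DriverInfo)}
	}
}

// shardFor routes an id to its group by simple modulus hashing.
func shardFor(id int64) *shard {
	return shards[id%numShards]
}

func setDriver(id int64, info DriverInfo) {
	s := shardFor(id)
	s.mu.Lock()
	s.drivers[id] = info
	s.mu.Unlock()
}

func getDriver(id int64) (DriverInfo, bool) {
	s := shardFor(id)
	s.mu.RLock()
	defer s.mu.RUnlock()
	info, ok := s.drivers[id]
	return info, ok
}

func main() {
	setDriver(123456, DriverInfo{Lat: 22.5, Lng: 114.0})
	info, ok := getDriver(123456)
	fmt.Println(ok, info.Lat)
}
```

Only writers whose IDs fall in the same residue class contend, so the 200k write QPS spreads across 10,000 locks.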

3. Lock-free with a signature check: store the records in a KV map, but make the value more than driver_info alone. First generate a signature (e.g. a checksum) for driver_info, then write signature + driver_info into a fixed-length memory block in two steps and store that block as the value. On read, recompute the signature from the driver_info inside the value and compare it with the stored signature. If they match, the driver_info was not being concurrently modified; if they differ, the read was torn by a concurrent write, and the reader retries.
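The signature check can be sketched as follows. The fixed-length layout (4-byte CRC32 signature followed by a 16-byte payload of two float64s) and the choice of CRC32 are illustrative assumptions, not details from the article:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"hash/crc32"
	"math"
)

// Assumed fixed-length value layout: 4-byte CRC32 signature + 16-byte
// payload (latitude and longitude as float64s).
const payloadLen = 16
const valueLen = 4 + payloadLen

type DriverInfo struct {
	Lat, Lng float64
}

// encode serializes the payload, signs it, and stores the signature
// in front of it inside one fixed-length block.
func encode(info DriverInfo) [valueLen]byte {
	var v [valueLen]byte
	binary.LittleEndian.PutUint64(v[4:], math.Float64bits(info.Lat))
	binary.LittleEndian.PutUint64(v[12:], math.Float64bits(info.Lng))
	sig := crc32.ChecksumIEEE(v[4:])
	binary.LittleEndian.PutUint32(v[0:4], sig)
	return v
}

// decode recomputes the payload's signature; a mismatch means a
// concurrent writer tore the value mid-update, so the caller retries.
func decode(v [valueLen]byte) (DriverInfo, bool) {
	sig := binary.LittleEndian.Uint32(v[0:4])
	if crc32.ChecksumIEEE(v[4:]) != sig {
		return DriverInfo{}, false
	}
	return DriverInfo{
		Lat: math.Float64frombits(binary.LittleEndian.Uint64(v[4:])),
		Lng: math.Float64frombits(binary.LittleEndian.Uint64(v[12:])),
	}, true
}

func main() {
	v := encode(DriverInfo{Lat: 40.0, Lng: 116.3})
	info, ok := decode(v)
	fmt.Println(ok, info.Lat)

	// Simulate a torn read: flip one payload byte, as a concurrent
	// half-finished write would.
	v[5] ^= 0xFF
	_, ok = decode(v)
	fmt.Println(ok)
}
```

Note this trades a lock for a small chance of a failed validation: a reader that sees a mismatched signature simply re-reads, which is cheap when, as here, reads are rare relative to writes.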