The real Redis distributed lock should be implemented like this.
As we all know, a Redis distributed lock can be implemented with the SET command, but is that single command enough? Should the extreme cases also be considered?

If it were only that simple. The distributed locking scheme we use in everyday development may indeed stay fairly basic, depending on the complexity and concurrency of the business.

Let's talk about how to use distributed locks correctly in high concurrency scenarios.

Before formally explaining distributed locks, let's look at several issues that the discussion will revolve around:

Only one thread may read and write a shared resource at a time; that is, in a high-concurrency scenario, we usually need to ensure that only one thread accesses the resource at any moment in order to keep the data correct.

At this point, you need to use distributed locks.

In short, distributed locks are used to control that only one thread can access protected resources at a time.

You can use the SETNX key value command to achieve mutual exclusion.

Explanation: If the key does not exist, set the value to the key, otherwise, do nothing.

The return value of this command has the following two cases: 1 means the key was set and the lock was acquired; 0 means the key already exists and acquisition failed.

Examples are as follows:

The success case:

> SETNX lock:101 1
(integer) 1    # successfully acquired lock:101

The failure case:

> SETNX lock:101 2
(integer) 0    # failed to acquire the lock

As you can see, the client that succeeds can begin operating on the shared resource.

Release the lock promptly after use, so that later clients get the chance to acquire it.

Unlocking is relatively simple. Just use the DEL command to delete this key. As follows:

> DEL lock:101
(integer) 1

The basic usage of such a distributed lock is: acquire with SETNX, operate on the shared resource, then release with DEL.
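The naive flow (SETNX to lock, DEL to unlock) can be sketched in Python. This is only an illustration: Redis is simulated with an in-memory dict, and the `acquire`/`release` helper names are made up for this sketch, not a real client API.

```python
# Stand-in for Redis: a plain dict. The helpers below are illustrative,
# not a real Redis client library.
store = {}

def acquire(key):
    """SETNX semantics: set the key only if it does not already exist."""
    if key in store:
        return False          # someone else holds the lock
    store[key] = 1
    return True

def release(key):
    """DEL semantics: remove the key, releasing the lock."""
    store.pop(key, None)

# Only the first caller gets the lock; the second is rejected.
print(acquire("lock:101"))    # True
print(acquire("lock:101"))    # False
release("lock:101")
print(acquire("lock:101"))    # True
```

With a real Redis server, `acquire` maps to `SETNX` and `release` maps to `DEL`; the failure modes discussed next apply either way.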

Doesn't this look simple? So where is the problem? Bear with my analysis.

First of all, there is the problem that the lock may never be released. Consider this scenario: a client acquires the lock and then crashes, or is blocked indefinitely, before it can execute DEL.

It can be seen that this lock will always be occupied, resulting in other clients not getting this lock.

The intuitive fix is to give the lock an expiration time. For example:

> SETNX lock:101 1
(integer) 1
> EXPIRE lock:101 60    # deleted automatically after 60 seconds
(integer) 1

As you can see, the lock is released automatically after 60 seconds, and other clients can then acquire it.

As can be seen from the above example, locking and setting the expiration time are two operational commands, not atomic operations.

Imagine that there may be such a situation:

For example, the first command succeeds, but an exception occurs before the second command executes; the expiration time is never set, and the lock can never be released.

Fortunately, since Redis 2.6.12 the SET command can do both in one atomic step:

SET keyName value NX PX 30000

Here NX means "set only if the key does not exist", and PX 30000 sets a 30-second expiry in milliseconds.
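The behaviour of SET ... NX PX can be sketched like this. Again Redis is simulated with a dict plus timestamps; `set_nx_px` is a hypothetical helper for illustration, not the redis-py API.

```python
import time

store = {}  # key -> (value, expire_at); in-memory stand-in for Redis

def set_nx_px(key, value, px_ms):
    """SET key value NX PX px_ms: acquire the lock and set its expiry
    in a single step, so no crash can separate the two operations."""
    now = time.monotonic()
    entry = store.get(key)
    if entry is not None and entry[1] > now:
        return False                              # a live key exists: fail
    store[key] = (value, now + px_ms / 1000.0)    # set value and TTL together
    return True

print(set_nx_px("lock:101", "client-A", 30000))   # True: acquired, 30s TTL
print(set_nx_px("lock:101", "client-B", 30000))   # False: still held
```

Note that in the simulation an expired key is simply overwritten on the next acquisition attempt, mirroring how Redis treats an expired key as absent.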

There seems to be nothing wrong with this now. But no, on closer inspection it is still not rigorous enough. Think about it: is it possible to release a lock that you did not acquire yourself?

think ...

Under what circumstances would the lock you release not be your own? For example: client A acquires the lock, its business logic runs longer than the expiry, the lock expires, client B then acquires it, and finally A finishes and blindly executes DEL, deleting B's lock.

So there is a key point to pay attention to: you can only release the lock you applied for.

In a word, whoever tied the bell must untie it: whoever acquired the lock must be the one to release it.

The pseudocode is as follows:

// check that the value matches this client's unique identifier before deleting
if redis.get("lock:101") == uniqueValue then
    redis.del("lock:101")

But GET and DEL here are again two separate commands. A Lua script solves this: Redis executes a script atomically, so the check and the delete become one atomic operation.

-- get the lock's value and compare it with ARGV[1]; only on a match execute DEL
if redis.call("get", KEYS[1]) == ARGV[1] then
    return redis.call("del", KEYS[1])
else
    return 0
end

Using the script above, assign each lock a random string as its "signature" (the value). The lock is deleted only when the deleting client's signature matches the lock's current value.
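The compare-and-delete logic of that Lua script can be mirrored in Python for illustration. Here the in-memory dict stands in for Redis, and the atomicity that the Lua script guarantees is simply assumed; the function names are invented for this sketch.

```python
store = {}  # in-memory stand-in for Redis (illustrative, not a client library)

def acquire(key, token):
    """Acquire the lock, tagging it with the caller's unique token."""
    if key in store:
        return False
    store[key] = token
    return True

def safe_release(key, token):
    """Mirror of the Lua script: delete the key only if its value matches
    the caller's token. In real Redis, the GET + DEL pair must run inside
    one Lua script so that no other client can interleave between them."""
    if store.get(key) == token:
        del store[key]
        return 1
    return 0

acquire("lock:101", "token-A")
print(safe_release("lock:101", "token-B"))  # 0: not the owner, nothing deleted
print(safe_release("lock:101", "token-A"))  # 1: the owner releases successfully
```

A UUID generated per acquisition is a common choice for the token, since it is unique across clients and across retries by the same client.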

Don't panic when you encounter problems. Let's start with the official documents: redis.io/topics/dist …

So far, this revised (optimized) scheme is fairly complete, and it is what most of the industry uses.

Of course, the lock's expiration time cannot be chosen carelessly; it is generally picked from the results of repeated testing. For example, suppose several rounds of stress testing show an average execution time of 300 milliseconds.

Then the lock's expiration time should be enlarged to roughly 3 to 4 times that average execution time.

Some friends may ask: Why is it magnified 3~4 times?

This is about always keeping a margin: the logic inside the lock may involve network I/O, and the network will not always be stable, so some time buffer must be left.

Alternatively: set an expiration time when locking, and have the client start a "daemon thread" that periodically checks the lock's remaining time.

If it is about to expire, but the business logic has not been completed, the lock will be automatically updated and the expiration time will be reset.

You can Google it first. I believe brother Google will tell you that there is a library that encapsulates all of this work. Just use it. It's called Redisson.

When using distributed locks this way, we are effectively adopting an "automatic renewal" scheme to avoid lock expiry. This daemon thread is also commonly called the watchdog thread.
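The watchdog renewal loop can be sketched as follows. This is assumption-heavy toy code: Redis is simulated as a dict of expiry timestamps, the TTL is shortened so the demo finishes quickly, and the helper names are invented (Redisson's real watchdog works against a live server).

```python
import threading
import time

store = {}   # key -> expire_at; in-memory stand-in for Redis (illustrative)
TTL = 0.2    # short TTL so the demo runs fast; a real lock would use seconds

def watchdog(key, stop):
    """Daemon thread: while the business logic runs, keep pushing the
    lock's expiry forward so it never lapses mid-operation."""
    while not stop.is_set():
        if key in store:
            store[key] = time.monotonic() + TTL   # renew (like PEXPIRE)
        stop.wait(TTL / 3)                        # re-check well before expiry

def run_with_lock(key, work_seconds):
    store[key] = time.monotonic() + TTL           # acquire with TTL
    stop = threading.Event()
    t = threading.Thread(target=watchdog, args=(key, stop), daemon=True)
    t.start()
    time.sleep(work_seconds)                      # "business logic"
    alive = store[key] > time.monotonic()         # lock still held?
    stop.set()
    t.join()
    del store[key]                                # release
    return alive

# The work takes 3x the TTL, yet the lock never expires thanks to renewal.
print(run_with_lock("lock:101", 0.6))  # True
```

Without the watchdog thread, the lock would have expired after 0.2 seconds while the 0.6-second business logic was still running.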

This scheme is quite solid; in my view, these optimization points alone already beat a large number of projects in the wild.

For the most demanding scenarios, you can further consider options such as the Redlock algorithm for multi-node Redis deployments.

I won't discuss it here. Interested parties can discuss and communicate together in the comment area.