Distributed coordination technology solves synchronization among multiple processes in a distributed environment, so that they access critical resources in an orderly manner and avoid producing "dirty data". It is a misconception that this can be handled by a simple scheduling algorithm. If all the processes run on one machine, coordinating them is indeed easy; but how do we solve it in a distributed environment?
Let's analyze this picture. Each of the three machines in the picture runs an application, and they are connected through the network to form a single system that provides services to users. To users this architecture is transparent, and we call such a system a distributed system.
Now consider how this distributed system schedules its processes. Suppose a resource is mounted on server 1, and all three processes want to compete for it, but they must not access it at the same time. A "coordinator" is needed so that they obtain the resource in order; this coordinator is the lock. For example, when process 1 wants to use the resource, it first acquires the lock and gains exclusive access; after use it releases the lock so that other processes can acquire it in turn. Such a lock is a distributed lock, and it is the core of distributed coordination technology.
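To make the lock concrete, here is a minimal sketch using ZooKeeper's zkCli shell (ZooKeeper is installed later in this article); the /locks path and node names are hypothetical:
create /locks x    # persistent parent node, created once
create -s -e /locks/lock- x    # each process creates an ephemeral sequential node, e.g. /locks/lock-0000000000
ls /locks    # the process holding the lowest sequence number holds the lock
stat /locks/lock-0000000000 watch    # waiters watch the node ahead of them (3.4 syntax)
delete /locks/lock-0000000000    # release the lock; a crashed session releases it automatically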
Google's Chubby and Apache's ZooKeeper are the best-known distributed coordination systems. Chubby is not open source, so Yahoo later developed ZooKeeper, which imitates Chubby and implements similar distributed coordination functions, and donated it to Apache as an open-source project. Building our systems on ZooKeeper not only leads to fewer bugs but also saves cost.
The advantage of ZooKeeper is that it is a highly available, high-performance, open-source coordination tool. It provides a distributed lock service, and it can also implement configuration maintenance, group services, distributed notification, and distributed message queues. For consistency it uses the ZAB protocol; its data structure is the znode, on which its operation primitives are defined; its notification mechanism is the Watcher; and it enforces strict sequential access control, so the failure of a single node will not bring down the cluster.
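A quick zkCli illustration of znodes and the Watcher mechanism (the /app path and data are made up for illustration):
create /app config-v1    # a znode stores a small piece of data
get /app watch    # read the data and register a one-shot watch (3.4 syntax)
set /app config-v2    # any update fires a WatchedEvent on the registered watcher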
Take master election as an example of how ZooKeeper coordinates:
(1) There are two master candidates, node A and node B. On startup, each registers a node with ZooKeeper: node A registers master-00001 and node B registers master-00002. Node A becomes the master and node B becomes the standby, and the two processes are scheduled this way.
(2) When master node A goes down, the node it registered is automatically deleted; the other nodes sense this and initiate an election. After the election, node B becomes the master and takes over from node A.
(3) When node A recovers, it registers a new node, master-00003, with ZooKeeper. After re-election, node B remains the master and node A becomes the standby, as sketched below.
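A minimal zkCli sketch of this election flow, with hypothetical paths and names mirroring the nodes above (zkCli actually pads sequence numbers with more zeros):
create /election x    # persistent parent node, created once
create -s -e /election/master- A    # node A registers, e.g. master-00001
create -s -e /election/master- B    # node B registers, e.g. master-00002
ls /election    # the lowest sequence number is the master; the others stand by and watch
# if node A's session dies, its ephemeral node is deleted and node B takes over;
# on recovery, node A re-registers and receives a higher number, e.g. master-00003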
Installation and configuration steps:
Test machines: 192.168.10.10, hostname: zk1
192.168.10.11, hostname: zk2
192.168.10.12, hostname: zk3
1. Install JDK
[^_^] ~# tar xf jdk-8u131-linux-x64.tar.gz -C /usr/local/
[^_^] ~# mkdir /usr/local/java
[^_^] ~# mv /usr/local/jdk1.8.0_131 /usr/local/java
[^_^] ~# cat /etc/profile.d/java.sh
JAVA_HOME=/usr/local/java/jdk1.8.0_131
JRE_HOME=/usr/local/java/jdk1.8.0_131/jre
CLASS_PATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
export JAVA_HOME JRE_HOME CLASS_PATH PATH
[^_^] ~# source /etc/profile.d/java.sh
Verify: java -version
2. Install ZooKeeper
[_] ~# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.36.10   zk1
192.168.36.11   zk2
192.168.36.12   zk3
[^_^] ~# cat /etc/profile.d/zk.sh
# Set ZooKeeper environment
export ZOOKEEPER_HOME=/usr/local/zookeeper-3.4.11
export PATH=$PATH:$ZOOKEEPER_HOME/bin:$ZOOKEEPER_HOME/conf
[^_^] ~# source /etc/profile.d/zk.sh
[^_^] ~# cp /usr/local/zookeeper/conf/zoo_sample.cfg /usr/local/zookeeper/conf/zoo.cfg
[^_^] ~# mkdir -p /usr/local/zookeeper/data/log
[^_^] ~# echo "1" > /usr/local/zookeeper/data/myid    # the myid of each of the three ZooKeeper nodes must differ (1, 2, 3)
[^_^] ~# egrep -v "^$|#" /usr/local/zookeeper-3.4.11/conf/zoo.cfg
tickTime=2000
initLimit= 10
syncLimit=5
dataDir=/usr/local/zookeeper/data    # must be the directory that holds the myid file
dataLogDir=/usr/local/zookeeper/data/log
clientPort=2181    # service port
server.1=zk1:2888:3888    # 2888 is the port followers use to connect to the leader; 3888 is the leader-election port
server.2=zk2:2888:3888
server.3=zk3:2888:3888
[^_^] ~# cd /usr/local/zookeeper-3.4.11/bin/
[_] bin# ./zkServer.sh start
[_] bin# jps    # check whether it started successfully
1155 Jps
1093 QuorumPeerMain    # QuorumPeerMain is the ZooKeeper process
[_] bin# ./zkServer.sh status    # check the node's status
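On a healthy three-node ensemble, one server reports leader and the other two report follower; the status output looks roughly like this (paths vary with your installation):
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.11/bin/../conf/zoo.cfg
Mode: follower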
[_] bin# zkCli.sh -server zk1:2181,zk2:2181,zk3:2181    # connect to the zk cluster
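Once connected, a quick smoke test (the /test znode is made up for illustration; output abridged):
[zk: zk1:2181(CONNECTED) 0] create /test hello
Created /test
[zk: zk1:2181(CONNECTED) 1] get /test
hello
Data written through one server is readable from any member of the ensemble, which confirms the cluster is working.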