Distributed Eureka

The CAP theorem has to be mentioned when choosing microservice technology. CAP stands for Consistency, Availability, and Partition tolerance. The theorem proves that a distributed system cannot satisfy consistency, availability, and partition tolerance all at the same time.
Because services distributed across different machine nodes form a cluster that together provides one complete service, the failure of one or a few nodes, or a network partition, must not break the whole service; that is what partition tolerance means. A distributed system has many service nodes, deployed in scattered locations, so node failures and network failures (delay, packet loss) are problems the system is guaranteed to face. Partition tolerance is therefore non-negotiable, and the real design choice is between consistency and availability.
Speaking of availability, how does Eureka guarantee it? The answer is Eureka's self-preservation mode. Eureka uses a client-server architecture: the server, EurekaServer, and the client, EurekaClient. EurekaClient runs inside each service and sends heartbeats to EurekaServer (every 30 s by default) to keep that node's registration information in sync. If a node fails, or a network partition cuts it off, EurekaServer stops receiving its heartbeats; after several heartbeat cycles without recovery (90 s by default), EurekaServer evicts the failed node's registration from the service registry. So by default, a node that stops heartbeating is removed.

There is, however, a situation in which the node should not be removed: a network partition between EurekaServer and EurekaClient while the service node itself is still healthy. Eureka's self-preservation mechanism exists for exactly this case. When heartbeats drop off abnormally, EurekaServer protects the information already in the registry: it stops evicting existing registrations and only continues to accept new ones. The design principle is to prefer keeping all microservice registrations, healthy and unhealthy alike, over blindly deleting nodes that may in fact be healthy. This makes an Eureka cluster more robust and stable. It is also a textbook case of trading consistency for availability: the instance count in the registry may differ from the number of actually healthy instances, the so-called AP choice.
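As a minimal sketch of this setup (assuming Spring Cloud Netflix Eureka; the class name is made up, and the figures quoted in the comments are the framework's documented defaults):

```java
// A minimal Eureka registry server (requires the
// spring-cloud-starter-netflix-eureka-server dependency).
//
// Server-side defaults behind the behavior described above:
//   eureka.server.enable-self-preservation: true   # stop evicting when heartbeats drop abnormally
//   eureka.server.renewal-percent-threshold: 0.85  # self-preservation kicks in below 85% of expected renewals
//
// Client-side defaults (set in each service's application.yml):
//   eureka.instance.lease-renewal-interval-in-seconds: 30    # heartbeat every 30 s
//   eureka.instance.lease-expiration-duration-in-seconds: 90 # evict after 90 s without a heartbeat
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer  // turns this Spring Boot app into an EurekaServer
public class RegistryApplication {
    public static void main(String[] args) {
        SpringApplication.run(RegistryApplication.class, args);
    }
}
```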
Now let's talk about zk's (ZooKeeper's) election mechanism. When the master node faces the same failure conditions described for EurekaServer (network delay, packet loss), a new master is re-elected from the remaining nodes. But zk's master election takes a long time, 30 s to 120 s, and the whole zk cluster is unavailable while the election runs. In a distributed deployment, a master losing contact with the other nodes because of network problems can happen at any time; the cluster does eventually recover, but that long window of unavailability during the election is intolerable. For a function like service registration, availability comes above everything else: a certain degree of inconsistency is acceptable, but an unavailable cluster is absolutely not. Eureka's design recognized this problem: every node is a peer, and the failure of a few nodes does not affect use of the cluster. ZooKeeper, by contrast, prioritizes guaranteeing CP.
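For contrast, here is a hedged sketch of what taking part in a zk master election looks like with Apache Curator's LeaderLatch recipe (the connection string and znode path are made up for illustration). While an election is in flight, await() blocks; that blocking window is exactly the unavailability discussed above.

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class MasterElection {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        // Each candidate creates an ephemeral sequential znode under this path;
        // the node holding the lowest sequence number is the master.
        LeaderLatch latch = new LeaderLatch(client, "/election/master");
        latch.start();

        // Blocks until this node wins the election. If the current master's
        // session is lost, the remaining nodes re-elect -- and during that
        // window callers waiting here see the cluster as unavailable.
        latch.await();
        System.out.println("this node is now the master");

        latch.close();   // relinquish leadership
        client.close();
    }
}
```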
When we query the service list from the registry, we can tolerate inconsistent information, even registration data that is a few minutes stale, but we cannot accept the registry itself going down and becoming unavailable. That is to say, for a registry, availability is more important than data consistency.
In terms of availability, Eureka (ek) beats zk, and for a function like service registration, availability matters more than consistency. But whatever exists has its reason: zk, as a mature, high-performance distributed coordination framework, is very popular, and in some application scenarios zk is a better fit than Eureka.
For example: distributed locks, distributed queues, globally unique IDs, load balancing, consistent configuration management, and so on can all be implemented with zk; a distributed lock is sketched below.
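A distributed lock built on zk with Apache Curator's InterProcessMutex recipe might look like this (the connection string and lock path are hypothetical):

```java
import java.util.concurrent.TimeUnit;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class ZkLockExample {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        // Backed by an ephemeral sequential znode, so the lock is released
        // automatically if the holding process's session dies.
        InterProcessMutex lock = new InterProcessMutex(client, "/locks/inventory");

        if (lock.acquire(5, TimeUnit.SECONDS)) {  // bounded wait to avoid blocking forever
            try {
                // Critical section: at most one process cluster-wide runs this.
                System.out.println("lock acquired, doing exclusive work");
            } finally {
                lock.release();  // always release in finally
            }
        }
        client.close();
    }
}
```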
