Detailed Explanation of Redis Queue Stream and Redis Multithreading (2)

Summary of several implementations of Redis queue

Implementation based on List: LPUSH + BRPOP

This approach is simple, and the latency for consuming messages is close to zero, but it has to deal with the problem of idle connections.

If the consumer thread stays blocked on BRPOP, the Redis client connection becomes an idle connection. If it stays idle for too long, the server will usually close it proactively to free idle resources, and BLPOP/BRPOP will then throw an exception. So when writing the consumer, be careful to catch this exception and retry the blocking call after catching it.
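
A minimal consumer sketch of this retry pattern, assuming the redis-py client and a hypothetical list key named task_queue; the timeout and back-off values are illustrative, not the article's original code:

```python
import time
import redis
from redis.exceptions import ConnectionError, TimeoutError

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def produce(payload: str) -> None:
    # Producer side: push the message onto the head of the list.
    r.lpush("task_queue", payload)

def consume_forever() -> None:
    # Consumer side: block on the tail of the list; retry if the idle
    # connection is closed by the server and an exception is raised.
    while True:
        try:
            item = r.brpop("task_queue", timeout=30)  # returns None on timeout
            if item is None:
                continue  # nothing arrived within the timeout window
            _key, payload = item
            print("processing", payload)
        except (ConnectionError, TimeoutError):
            time.sleep(1)  # back off briefly, then re-issue the blocking call
```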

Other disadvantages include:

Consumer acknowledgement (ACK) is cumbersome: there is no way to guarantee that a consumer actually processed a message after taking it (it may crash or hit a processing error), so you usually have to maintain your own pending list to confirm message processing; a broadcast model such as pub/sub publish/subscribe is not available; messages cannot be consumed repeatedly, since a message is deleted once it is consumed; and consumer groups are not supported.

Implementation based on Sorted-Set

It is mostly used to implement delay queues. Ordinary ordered message queues can also be built this way, but consumers cannot block waiting for messages and can only poll, and duplicate messages are not allowed, since a Sorted-Set member must be unique.
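
A rough delay-queue sketch, assuming redis-py and a hypothetical key named delay_queue; scores are the Unix timestamps at which messages become due, and the polling loop is illustrative:

```python
import time
import redis

r = redis.Redis(decode_responses=True)

def delay(payload: str, delay_seconds: int) -> None:
    # Score is the absolute time at which the message becomes visible.
    r.zadd("delay_queue", {payload: time.time() + delay_seconds})

def poll_forever() -> None:
    while True:
        now = time.time()
        # Fetch the earliest message that is already due.
        due = r.zrangebyscore("delay_queue", 0, now, start=0, num=1)
        if not due:
            time.sleep(0.5)  # nothing due yet; Sorted-Set offers no blocking read
            continue
        payload = due[0]
        # ZREM returns 1 only for the client that actually removed the member,
        # so two pollers never process the same message.
        if r.zrem("delay_queue", payload):
            print("processing", payload)
```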

PUB/SUB, publish/subscribe mode

Advantages:

It is a typical broadcast model: one message can be delivered to multiple consumers; consumers can subscribe to multiple channels at the same time and thus receive several kinds of messages; and messages are pushed in real time, so the publisher does not wait for consumers to read them, and subscribed consumers automatically receive messages published to the channel.

Disadvantages:

Messages cannot be received after the fact: if a client is not online at the moment of publication, the message is lost and cannot be retrieved; there is no guarantee that all consumers receive a message at the same time; and if messages pile up on a consumer client, beyond a certain point the server will forcibly disconnect it, causing unexpected message loss, which typically happens when messages are produced much faster than they are consumed. In short, the Pub/Sub model is not suited to message persistence or message backlog scenarios, but it works well for broadcasting, instant messaging, and real-time feedback.
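
A minimal publish/subscribe sketch, assuming redis-py and a hypothetical channel named news; note that a subscriber only sees messages published while it is connected:

```python
import redis

r = redis.Redis(decode_responses=True)

def publisher() -> None:
    # Fire-and-forget: if no subscriber is online, the message is simply lost.
    r.publish("news", "hello subscribers")

def subscriber() -> None:
    pubsub = r.pubsub()
    pubsub.subscribe("news")          # a client may subscribe to several channels
    for message in pubsub.listen():   # blocks and yields messages as they arrive
        if message["type"] == "message":
            print("received:", message["data"])
```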

Implementation based on Stream type

With Stream, Redis already provides a rudimentary form of message middleware that can be considered for production use. Of course, there is still plenty of work to do before applying it in production; for example, management and monitoring of the message queue have to be built with considerable effort, whereas professional message queues either ship with such tooling or have mature third-party solutions and plug-ins.

Message queue problems

As our usage of Stream above shows, Stream already has the basic elements of a message queue: a producer API, a consumer API, a message broker, a message acknowledgement mechanism, and so on. The problems that arise when using message middleware will therefore also arise here.
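
A compact sketch of those elements, assuming redis-py and hypothetical names mystream and group1; XGROUP CREATE, XADD, XREADGROUP, and XACK are the actual Redis commands involved, while the surrounding loop is illustrative:

```python
import redis

r = redis.Redis(decode_responses=True)

# Create the stream and the consumer group (ignore the error if they already exist).
try:
    r.xgroup_create("mystream", "group1", id="0", mkstream=True)
except redis.ResponseError:
    pass

# Producer API: append a message; Redis assigns the ID when "*" is used.
msg_id = r.xadd("mystream", {"order_id": "42", "action": "pay"})

# Consumer API: ">" means "messages never delivered to this group before".
entries = r.xreadgroup("group1", "consumer-1", {"mystream": ">"}, count=10, block=2000)
for stream_name, messages in entries:
    for message_id, fields in messages:
        print("got", message_id, fields)
        # Acknowledgement mechanism: remove the entry from this consumer's PEL.
        r.xack("mystream", "group1", message_id)
```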

What should I do if there are too many Stream messages?

If too many messages accumulate, won't the Stream's linked list grow very long and blow up memory? The XDEL command does not help immediately either: it does not physically remove the message right away, it only marks it as deleted.

Redis naturally takes this into account, so it provides capped streams: the XADD command accepts a MAXLEN option that trims old messages and ensures the stream's length does not exceed the specified bound.
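
A short illustration of the MAXLEN cap, assuming redis-py; the approximate (~) form is generally cheaper because Redis only trims whole internal nodes:

```python
import redis

r = redis.Redis(decode_responses=True)

# Exact cap: keep at most 1000 entries in the stream.
r.xadd("mystream", {"event": "click"}, maxlen=1000, approximate=False)

# Approximate cap (XADD mystream MAXLEN ~ 1000 ...): faster, since Redis only
# trims when it can drop an entire internal node.
r.xadd("mystream", {"event": "click"}, maxlen=1000, approximate=True)

print(r.xlen("mystream"))  # never grows far beyond the configured cap
```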

What happens if a consumer forgets to ACK a message?

Stream keeps a list of in-flight message IDs, the PEL (Pending Entries List), inside each consumer structure. If a consumer receives and processes messages but never replies with an ACK, its PEL keeps growing, and with many consumer groups the memory occupied by these PELs is multiplied. So messages should be consumed and acknowledged as promptly as possible.
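
A small sketch for inspecting the PEL with XPENDING, assuming redis-py and the same hypothetical mystream/group1 names as above:

```python
import redis

r = redis.Redis(decode_responses=True)

# Summary form: total pending count, smallest/largest pending IDs, per-consumer counts.
summary = r.xpending("mystream", "group1")
print("pending entries:", summary["pending"])

# Detailed form: each entry's ID, owning consumer, idle time, and delivery count.
for entry in r.xpending_range("mystream", "group1", min="-", max="+", count=10):
    print(entry["message_id"], entry["consumer"],
          entry["time_since_delivered"], entry["times_delivered"])
```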

How does PEL avoid message loss?

When a consumer reads a Stream message and the Redis server has already sent the reply, the client may suddenly disconnect and the message appears lost. But the IDs of the delivered messages are already stored in the PEL, so after reconnecting the client can receive the PEL messages again. In this case, however, the start ID passed to XREADGROUP must not be the ">" parameter, but some valid message ID; it is usually set to 0-0, which reads all of this consumer's pending (PEL) messages, while reading with ">" again afterwards resumes delivery of new messages after last_delivered_id.
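
A recovery sketch under the same assumptions (redis-py, mystream, group1): after a reconnect the consumer first replays its own PEL starting from 0-0, then switches back to ">" for new messages:

```python
import redis

r = redis.Redis(decode_responses=True)

def drain_pending(consumer: str) -> None:
    # Start ID "0-0": this consumer's already-delivered but un-ACKed messages.
    pending = r.xreadgroup("group1", consumer, {"mystream": "0-0"})
    for _stream, messages in pending:
        for message_id, fields in messages:
            print("re-processing", message_id, fields)
            r.xack("mystream", "group1", message_id)

def read_new(consumer: str) -> None:
    # Start ID ">": only messages never delivered to this group before.
    for _stream, messages in r.xreadgroup("group1", consumer, {"mystream": ">"}, block=2000) or []:
        for message_id, fields in messages:
            print("processing", message_id, fields)
            r.xack("mystream", "group1", message_id)

drain_pending("consumer-1")  # run once after reconnecting
read_new("consumer-1")
```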

The dead letter problem

If a message cannot be processed by any consumer, meaning it is never XACKed, it stays in the pending list for a long time even as it is transferred from consumer to consumer. Its delivery counter (which can be queried through XPENDING) keeps accumulating; once it exceeds a threshold we have preset, we treat the message as a dead letter (an undeliverable message), and based on that condition we handle it and delete it. Deletion uses XDEL. Note that this command does not remove the entry from the pending list, so if you check the pending list the message will still be there; after executing XDEL, the message should also be XACKed to mark it as processed.
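
A dead-letter sweep sketch under the same assumptions; MAX_DELIVERIES is a hypothetical threshold, and XCLAIM-based retry logic is left out for brevity:

```python
import redis

r = redis.Redis(decode_responses=True)

MAX_DELIVERIES = 5  # hypothetical threshold for declaring a dead letter

def sweep_dead_letters() -> None:
    for entry in r.xpending_range("mystream", "group1", min="-", max="+", count=100):
        if entry["times_delivered"] >= MAX_DELIVERIES:
            dead_id = entry["message_id"]
            # Optionally copy the payload somewhere else here for later inspection.
            r.xack("mystream", "group1", dead_id)  # remove it from the pending list
            r.xdel("mystream", dead_id)            # remove the entry from the stream itself
```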

High availability of Stream

Stream's high availability rests on master-slave replication, which is no different from the replication mechanism of other data structures; in other words, Stream supports high availability in Sentinel and Cluster environments. However, since Redis replicates commands asynchronously, a very small amount of data may be lost during a failover, and the same is true for Redis's other data structures.

Partitioning

The Redis server does not natively support partitioning. If you want partitions, you need to create multiple Streams and have the client use some strategy to route messages to the different Streams.
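
A client-side partitioning sketch, assuming redis-py and hypothetical stream names mystream:0 through mystream:3; the hash-modulo routing mirrors the strategy described in the summary below:

```python
import zlib
import redis

r = redis.Redis(decode_responses=True)
NUM_PARTITIONS = 4  # hypothetical partition count

def partition_for(key: str) -> str:
    # Stable hash of the routing key, modulo the number of streams.
    return f"mystream:{zlib.crc32(key.encode()) % NUM_PARTITIONS}"

def produce(order_id: str, fields: dict) -> None:
    # Messages with the same routing key always land in the same stream,
    # which preserves per-key ordering.
    r.xadd(partition_for(order_id), fields)

produce("order-42", {"action": "pay", "amount": "9.99"})
```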

Stream summary

Stream's consumption model borrows the consumer group concept from Kafka, which makes up for Redis Pub/Sub's inability to persist messages. It differs from Kafka in that Kafka messages can be spread across partitions while a Stream cannot be; if you need partitioning, you have to do it on the client side, providing several Stream names and hashing the message key modulo the number of Streams to pick which one to write to.

In general, for a small or medium-sized project or company that already uses Redis, where the business volume is not very large but message-middleware functionality is needed, Redis Stream is worth considering. If concurrency is high and resources allow, however, it is better to back the business with a professional message queue such as RocketMQ or Kafka.


Origin blog.csdn.net/yaya_jn/article/details/130224172