Message Queue (RabbitMQ) Interview Questions (2021 Edition)


Why use MQ? The core benefits are:

  • Decoupling - producers and consumers no longer need to call each other directly
  • Asynchrony - slow downstream work is taken off the request's critical path
  • Peak shaving - traffic bursts are buffered in the queue instead of overwhelming downstream services

Decoupling: suppose system A pushes data to systems B, C, and D through direct API calls. Whenever one of them (say C) no longer needs the data, or a new system E suddenly does, A has to be modified and redeployed. With MQ, A simply publishes the message to the queue; any system that needs the data subscribes to it, and A no longer needs to know or care who consumes it.

Asynchrony: suppose a request to system A takes 3ms of local processing plus synchronous calls to B, C, and D that take 300ms, 450ms, and 200ms. The user waits 3 + 300 + 450 + 200 = 953ms, nearly 1s. If A instead writes locally in 3ms and publishes a message to MQ in 5ms, the user gets a response in 3 + 5 = 8ms, while B, C, and D consume the message asynchronously.
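The arithmetic can be checked in a few lines (the latencies are the example's hypothetical figures):

```python
# Hypothetical latencies (ms) from the example above.
LOCAL_WRITE = 3
DOWNSTREAM = [300, 450, 200]   # synchronous calls to B, C, D
MQ_PUBLISH = 5                 # time for A to publish one message to MQ

def sync_latency():
    # Without MQ, A waits for its own write plus every downstream call.
    return LOCAL_WRITE + sum(DOWNSTREAM)

def async_latency():
    # With MQ, A waits only for its own write plus the publish.
    return LOCAL_WRITE + MQ_PUBLISH

print(sync_latency())   # 953
print(async_latency())  # 8
```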

Peak shaving: during a traffic spike, requests arrive far faster than the database can handle and would crash it if written directly. With MQ, the burst is buffered in the queue and the consumer pulls messages at a rate the database can sustain; once the peak passes, the backlog drains quickly.

How does RabbitMQ compare with other mainstream MQ products?

ActiveMQ

ActiveMQ is the veteran of the group, but it has not been validated in large-scale throughput scenarios, its community activity has declined, and fewer and fewer companies adopt it for new systems.

RabbitMQ

RabbitMQ has many production success cases and is widely used, especially in small and medium-sized companies.

Its community is active, and bugs in RabbitMQ are discovered and fixed promptly, so the open-source version is stable enough for production.

Its main drawback is that RabbitMQ is written in Erlang; few engineers master Erlang, which makes the source code hard to read, debug, and customize.

RocketMQ

RocketMQ is developed by Alibaba in Java, has been battle-tested under very large traffic, and its Java code base is easy for most teams to read and extend.

Kafka deliberately provides fewer core features than a full-featured MQ, but in exchange delivers extremely high throughput.

Kafka

In the big-data ecosystem, Kafka is the de facto standard for real-time computation and log collection, integrating with Spark Streaming, Storm, and Flink; in those scenarios it is used more as a streaming platform than as a traditional MQ.

How does Kafka compare with ActiveMQ, RabbitMQ, and RocketMQ?

The list below compares ActiveMQ, RabbitMQ, RocketMQ, Kafka, and ZeroMQ (w = 10,000 msg/s):

  • Single-machine throughput: ActiveMQ - lower than RabbitMQ; RabbitMQ - 2.6w/s; RocketMQ - 11.6w/s; Kafka - 17.3w/s; ZeroMQ - 29w/s
  • Development language: ActiveMQ - Java; RabbitMQ - Erlang; RocketMQ - Java; Kafka - Scala/Java; ZeroMQ - C
  • Maintainer: ActiveMQ - Apache; RabbitMQ - Mozilla/Spring; RocketMQ - Alibaba; Kafka - Apache; ZeroMQ - iMatix
  • Subscription forms: ActiveMQ - point-to-point (p2p) and broadcast; RabbitMQ - 4 exchange types: direct, topic, Headers and fanout (fanout being the broadcast mode); RocketMQ - publish/subscribe based on topic/messageTag with pattern matching; Kafka - publish/subscribe based on topic; ZeroMQ - point-to-point (p2p)
  • Clustering: ActiveMQ - simple 'master-standby' clustering with weak high-availability support; RabbitMQ - simple clustering plus a 'replication' (mirrored) mode; RocketMQ - multiple 'Master-Slave' pairs, with the open-source version requiring manual switching of a Slave to Master; Kafka - a native 'Leader-Slave' stateless cluster in which every server acts as both Master and Slave; ZeroMQ - no clustering support

In summary: for new MQ deployments, ActiveMQ is no longer recommended.

RabbitMQ is open source, stable, and actively maintained, but because it is written in Erlang it is hard for Java-centric teams to study and control in depth.

RocketMQ has been donated to Apache and its GitHub community is active; for large companies with strong infrastructure teams it is a very good choice, and for Java shops it is as approachable operationally as RabbitMQ.

For small and medium companies, RabbitMQ is a solid default; large companies can choose between RabbitMQ and RocketMQ based on their engineering strength.

For real-time computation and log collection in big data, Kafka is the industry standard.

How do you guarantee the ordering of messages?

Message ordering matters whenever one message must be consumed before another.

Suppose two messages M1 and M2 must be consumed in order. If M1 is sent to server S1 and M2 to server S2, nothing guarantees that M1 is consumed first; even when both are delivered, differences in network latency and processing speed can let M2 finish before M1.

  • Option 1: keep the producer - MQ server - consumer relationship one-to-one, sending M1 and M2 to the same queue and consuming them with a single consumer. Ordering is guaranteed, but parallelism and throughput drop.
  • Option 2: design around the requirement: most messages do not need strict global ordering, so decompose the problem so that only the few that do are serialized.

For example, attach an increasing ID to every message; the consumer can then detect out-of-order arrivals and reorder (or discard stale) messages by ID.
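One way to recover ordering on the consumer side is a resequencer that buffers out-of-order messages and releases them strictly by sequence ID. A hypothetical sketch:

```python
class Resequencer:
    """Release messages in sequence-ID order even if they arrive shuffled."""
    def __init__(self):
        self.expected = 1   # next sequence ID we are allowed to release
        self.pending = {}   # buffered out-of-order messages

    def accept(self, seq_id, body):
        """Buffer the message; return the messages that become releasable, in order."""
        self.pending[seq_id] = body
        released = []
        while self.expected in self.pending:
            released.append(self.pending.pop(self.expected))
            self.expected += 1
        return released

r = Resequencer()
print(r.accept(2, "M2"))  # [] - M2 must wait for M1
print(r.accept(1, "M1"))  # ['M1', 'M2']
```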

What is RabbitMQ?

RabbitMQ is an open-source message queue written in Erlang that implements the AMQP (Advanced Message Queuing Protocol) standard.

What are the usage scenarios of rabbitmq?

1. Asynchronous communication between services

2. Ordered consumption

3. Scheduled (delayed) tasks

4. Request peak shaving

Basic concepts of RabbitMQ

  • Broker: the message queue server entity
  • Exchange: the message switch; it decides by which rules a message is routed and to which queue
  • Queue: the message carrier; every message is put into one or more queues
  • Binding: binds an exchange to a queue so that messages are routed between them according to a routing key
  • Routing Key: the routing keyword; the exchange delivers messages based on it
  • VHost: virtual host; one broker can host several vhosts to separate the permissions of different users. Each vhost is essentially a mini-RabbitMQ server with its own queues, exchanges, and bindings; RabbitMQ ships with a default vhost
  • Producer: the client program that publishes messages
  • Consumer: the client program that receives messages
  • Channel: a virtual connection established inside a real TCP connection; a client may open many channels, each representing an independent session

An Exchange, a Queue, and a RoutingKey together determine a unique route for a message from the Exchange to the Queue.
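In RabbitMQ, a message published to an exchange reaches a queue via a binding keyed by a routing key. A toy binding table illustrates this (all names are hypothetical; this is not the client API):

```python
from collections import defaultdict

class Broker:
    def __init__(self):
        self.bindings = defaultdict(list)  # (exchange, routing_key) -> [queue names]
        self.queues = defaultdict(list)    # queue name -> delivered messages

    def bind(self, exchange, routing_key, queue):
        self.bindings[(exchange, routing_key)].append(queue)

    def publish(self, exchange, routing_key, body):
        # Exchange + RoutingKey determine which queues receive the message.
        for queue in self.bindings.get((exchange, routing_key), []):
            self.queues[queue].append(body)

b = Broker()
b.bind("orders", "order.created", "billing")
b.bind("orders", "order.created", "shipping")
b.publish("orders", "order.created", "order #1")
b.publish("orders", "order.cancelled", "order #2")  # no binding -> dropped
print(b.queues["billing"])   # ['order #1']
```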

The working patterns of RabbitMQ

1. simple mode (the simplest send/receive pattern)

(1) The producer creates a message and puts it into the queue.

(2) The consumer listens on the queue; whenever there is a message it consumes it, and after the message is taken it is automatically deleted from the queue. Hidden risk: the message may not have been processed correctly by the consumer yet has already disappeared from the queue, so it is lost. You can switch to manual ack instead, but with manual ack you must send the ack promptly after processing, otherwise unacknowledged messages accumulate and can exhaust memory.

2. work mode (competing for resources)

(1) The producer puts messages into the queue, and multiple consumers, consumer 1 and consumer 2, listen on the same queue at once; whichever consumer takes a message owns it, with C1 and C2 competing for the queue's content. Hidden risk: under high concurrency a message could be taken by more than one consumer; a synchronization (syncronize) switch can be set to guarantee that a message is used by only one consumer.

3. publish/subscribe mode (shared message)

(1) Each consumer listens on its own queue.

(2) The producer sends the message to the broker; the exchange forwards a copy to every queue bound to it, so every bound queue receives the message.

4. routing mode

(1) The producer sends the message to the exchange together with a routing string such as (info); the exchange compares the message's routing key against the binding keys and routes the message only to the queue whose key matches, so only the corresponding consumer can consume it.

(2) Routing strings are defined according to business functions.

(3) The application obtains the appropriate routing string from its code logic and puts each message task into the corresponding queue.

(4) Business scenario: error notification (error/EXCEPTION). Using key-based routing, errors raised in a program can be wrapped as messages and pushed into the queue; developers can define their own consumers to receive the errors in real time.

5. topic mode (a variant of routing)

(1) The star and hash characters act as wildcards.

(2) '*' matches exactly one word; '#' matches zero or more words.

(3) This adds fuzzy matching to routing.

(4) The producer hands the message to the exchange.

(5) The exchange pattern-matches the routing key against each binding pattern and delivers the message to the matching queues, whose listening consumers then receive and consume it.

Topic routing is a fuzzy match, similar in spirit to a SQL LIKE query.
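RabbitMQ topic patterns use '*' to match exactly one word and '#' to match zero or more words, over dot-separated routing keys. A simplified matcher (a sketch, not RabbitMQ's implementation):

```python
def topic_match(pattern, routing_key):
    """Match a binding pattern against a routing key, topic-exchange style."""
    p, k = pattern.split("."), routing_key.split(".")

    def match(i, j):
        if i == len(p):
            return j == len(k)
        if p[i] == "#":                   # '#' matches zero or more words
            return any(match(i + 1, j2) for j2 in range(j, len(k) + 1))
        if j == len(k):
            return False
        if p[i] == "*" or p[i] == k[j]:   # '*' matches exactly one word
            return match(i + 1, j + 1)
        return False

    return match(0, 0)

print(topic_match("log.*", "log.error"))       # True
print(topic_match("log.*", "log.error.disk"))  # False
print(topic_match("log.#", "log.error.disk"))  # True
```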

How are messages distributed?

If a queue has at least one consumer subscribed, messages are sent to those consumers; when several consumers subscribe to the same queue, its messages are spread across them and each message goes to exactly one consumer (the worker pattern).

By default, RabbitMQ dispatches messages to the consumers in round-robin order.
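RabbitMQ's default dispatch across multiple consumers on one queue is round-robin; a sketch with hypothetical names:

```python
from itertools import cycle

def dispatch(messages, consumers):
    """Round-robin: each message is delivered to exactly one consumer."""
    assignment = {c: [] for c in consumers}
    turn = cycle(consumers)
    for msg in messages:
        assignment[next(turn)].append(msg)
    return assignment

result = dispatch(["m1", "m2", "m3", "m4", "m5"], ["C1", "C2"])
print(result)  # {'C1': ['m1', 'm3', 'm5'], 'C2': ['m2', 'm4']}
```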

How are messages routed?

Producer -> exchange -> queue(s): when a message is published to an exchange, it carries a routing key; RabbitMQ matches that key against the bindings between the exchange and its queues to decide which queues receive the message.

fanout: every queue bound to the exchange receives the message; the routing key is ignored.

direct: If the routing key exactly matches, the message will be delivered to the corresponding queue

topic: enables messages from different sources to reach the same queue; with a topic exchange you can use wildcards in the binding keys ('*' matches one word, '#' matches zero or more).

What is message transmission based on?

Since the creation and destruction of TCP connections are expensive, and the number of concurrency is limited by system resources, it will cause performance bottlenecks. RabbitMQ uses a channel to transmit data. A channel is a virtual connection established within a real TCP connection, and there is no limit to the number of channels on each TCP connection.

How do you ensure that messages are not consumed repeatedly? In other words, how do you ensure the idempotence of message consumption?

First, why does repeated consumption happen? Under normal circumstances, when a consumer finishes consuming a message it sends an acknowledgement to the message queue; the queue then knows the message has been consumed and deletes it from the queue.

However, because of network failures and similar faults, the acknowledgement may never reach the message queue; the queue, not knowing the message was consumed, distributes it again to another consumer.

The solution is to guarantee the uniqueness of each message and make consumption idempotent, so that even if a message is delivered multiple times, the extra deliveries have no effect.

For example: the data written into the message queue is uniquely marked, and when the message is consumed, it is judged whether it has been consumed according to the unique identifier;

Suppose a system inserts one row into the database for each message it consumes. If the same message is delivered twice, you insert two rows and the data is wrong. But if on the second delivery the consumer first checks whether this message has already been consumed, and simply discards it if so, only one row is kept and the data stays correct.
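A minimal sketch of the unique-ID idea (here the 'seen' set stands in for a durable store such as Redis or a database table; all names are hypothetical):

```python
class IdempotentConsumer:
    def __init__(self):
        self.seen = set()   # stand-in for a durable dedup store (DB/Redis)
        self.rows = []      # stand-in for the database table

    def handle(self, msg_id, body):
        if msg_id in self.seen:    # already consumed: drop the duplicate
            return False
        self.rows.append(body)     # the actual business insert
        self.seen.add(msg_id)      # record only after the insert succeeds
        return True

c = IdempotentConsumer()
c.handle("id-1", "order row")
c.handle("id-1", "order row")   # redelivery of the same message: ignored
print(len(c.rows))  # 1
```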

How to ensure that the message is sent to RabbitMQ correctly? How to ensure that the message recipient consumes the message?

Sender confirmation mode

Set the channel to confirm mode (sender confirmation mode), and all messages published on the channel will be assigned a unique ID.

Once the message is delivered to the destination queue, or after the message is written to disk (a persistent message), the channel will send an acknowledgement to the producer (including the unique ID of the message).

If an internal error in RabbitMQ causes the message to be lost, it sends a nack (not acknowledged) message instead.

The sender confirmation mode is asynchronous, and the producer application can continue to send messages while waiting for the confirmation. When the confirmation message reaches the producer application, the callback method of the producer application will be triggered to process the confirmation message.

Receiver confirmation mechanism

Consumers must confirm each message after receiving it (receiving a message and confirming it are two different operations). Only after the consumer confirms a message can RabbitMQ safely delete it from the queue.

No timeout mechanism is used here: RabbitMQ decides whether a message needs to be resent solely from whether the consumer's connection is interrupted. In other words, as long as the connection stays open, RabbitMQ gives the consumer as long as it needs to process the message. This ensures eventual consistency of the data.

Several special cases are listed below

  • If the consumer receives the message and disconnects or cancels the subscription before the confirmation, RabbitMQ will think that the message has not been distributed, and then redistribute it to the next subscribed consumer. (There may be a hidden danger of repeated consumption of messages, which needs to be deduplicated)
  • If the consumer receives the message but does not confirm the message, and the connection is not disconnected, RabbitMQ considers the consumer to be busy and will not distribute more messages to the consumer.

How to ensure the reliable transmission of RabbitMQ messages?

Unreliable messages may be caused by message loss, hijacking, etc.;

Loss falls into three cases: the producer loses the message, the message queue loses the message, or the consumer loses the message.

Producers lose messages : From the perspective of producers losing data, RabbitMQ provides transaction and confirm modes to ensure that producers do not lose messages;

The transaction mechanism works like this: before sending the message, open a transaction (channel.txSelect()), then send the message; if any exception occurs during sending, roll the transaction back (channel.txRollback()); if the send succeeds, commit it (channel.txCommit()). The disadvantage of this approach is that throughput drops.

In practice the confirm mode is used more often: once a channel enters confirm mode, every message published on it is assigned a unique ID (starting from 1). Once the message has been delivered to all matching queues, RabbitMQ sends an ACK to the producer (containing the message's unique ID), letting the producer know the message arrived at the destination queue correctly.

If RabbitMQ fails to process the message, it sends a Nack instead, and you can retry the operation.
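The confirm flow can be sketched as a simulation (this models the semantics only, not the real client API; all names are hypothetical): the producer remembers each unconfirmed ID and re-publishes on nack.

```python
class ConfirmingProducer:
    def __init__(self, broker_ok):
        self.broker_ok = broker_ok   # simulated broker outcome per message
        self.next_id = 1             # confirm-mode IDs start from 1
        self.unconfirmed = {}        # id -> body, awaiting ack/nack
        self.delivered = []

    def publish(self, body):
        msg_id = self.next_id
        self.next_id += 1
        self.unconfirmed[msg_id] = body
        # The simulated broker replies ack on success, nack on internal error.
        if self.broker_ok(body):
            self.on_ack(msg_id)
        else:
            self.on_nack(msg_id)

    def on_ack(self, msg_id):
        self.delivered.append(self.unconfirmed.pop(msg_id))

    def on_nack(self, msg_id):
        body = self.unconfirmed.pop(msg_id)
        self.publish(body)           # retry the failed message

fail_once = {"m2"}
def broker_ok(body):
    if body in fail_once:
        fail_once.discard(body)      # fail only the first attempt
        return False
    return True

p = ConfirmingProducer(broker_ok)
for m in ["m1", "m2", "m3"]:
    p.publish(m)
print(p.delivered)    # ['m1', 'm2', 'm3']
print(p.unconfirmed)  # {}
```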

Message queue lost data : message persistence.

To deal with the case of data loss in the message queue, the configuration of the persistent disk is generally turned on.

This persistence configuration can be used in conjunction with the confirm mechanism. You can send an Ack signal to the producer after the message is persisted to the disk.

In this way, if rabbitMQ dies before the message is persisted to the disk, the producer will not receive the Ack signal, and the producer will automatically resend it.

So how to persist?

By the way, it is actually very easy, just the following two steps

  1. Set the persistent flag of the queue durable to true, which means it is a persistent queue
  2. When sending a message, set deliveryMode=2

After this setting, even if rabbitMQ hangs, the data can be restored after restarting
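The two settings can be illustrated with a toy simulation (this mimics only the durable/deliveryMode semantics, not RabbitMQ's actual storage; all names are hypothetical): only messages marked persistent in a durable queue survive a "broker restart".

```python
class Broker:
    def __init__(self):
        self.queues = {}   # name -> {'durable': bool, 'messages': [...]}
        self.disk = {}     # what has been persisted

    def queue_declare(self, name, durable):
        self.queues[name] = {"durable": durable, "messages": []}
        if durable:
            self.disk.setdefault(name, [])

    def publish(self, queue, body, delivery_mode=1):
        q = self.queues[queue]
        q["messages"].append(body)
        # Persist only if the queue is durable AND deliveryMode == 2.
        if q["durable"] and delivery_mode == 2:
            self.disk[queue].append(body)

    def restart(self):
        # RAM is wiped; durable queues and their persisted messages come back.
        self.queues = {
            name: {"durable": True, "messages": list(msgs)}
            for name, msgs in self.disk.items()
        }

b = Broker()
b.queue_declare("task_queue", durable=True)
b.publish("task_queue", "persistent job", delivery_mode=2)
b.publish("task_queue", "transient job", delivery_mode=1)
b.restart()
print(b.queues["task_queue"]["messages"])  # ['persistent job']
```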

Consumers lose messages : Consumers usually lose data because they use the automatic message confirmation mode, so you can change to manually confirm the message!

With automatic acknowledgement, as soon as the consumer receives the message, and before it has processed it, the client automatically tells RabbitMQ that the message has been received;

If processing the message fails at this time, the message will be lost;

Solution: After processing the message successfully, manually reply to the confirmation message.
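The manual-ack behaviour can be sketched as a simulation (hypothetical names, not the real client API): a message leaves the queue only after the consumer acks it, so a crash before the ack triggers redelivery.

```python
class Queue:
    def __init__(self, messages):
        self.ready = list(messages)
        self.unacked = {}   # delivery tag -> message held until acked
        self.tag = 0

    def deliver(self):
        body = self.ready.pop(0)
        self.tag += 1
        self.unacked[self.tag] = body   # held, not yet deleted
        return self.tag, body

    def ack(self, tag):
        del self.unacked[tag]           # now it is safe to delete

    def connection_lost(self):
        # Unacked messages are requeued for the next consumer.
        self.ready = list(self.unacked.values()) + self.ready
        self.unacked.clear()

q = Queue(["job-1"])
tag, body = q.deliver()
q.connection_lost()        # consumer died before acking: message requeued
tag, body = q.deliver()    # the same message is delivered again
q.ack(tag)                 # processed successfully this time
print(q.ready, q.unacked)  # [] {}
```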

Why shouldn't you use a persistence mechanism for all messages?

First, it inevitably reduces performance: writing to disk is much slower than writing to RAM, and message throughput can differ by a factor of about 10.

Secondly, when message persistence is combined with RabbitMQ's built-in clustering, an awkward mismatch appears. If a message has the persistent property set but its queue is not durable, then when the queue's owner node fails, messages sent to the queue are blackholed until the queue is rebuilt. If the message is persistent and the queue is durable as well, then when the owner node fails and cannot be restarted, the queue cannot be rebuilt on other nodes; it becomes usable again only after the owner node restarts, and messages sent to the queue during that time are also blackholed.

Therefore, whether to persist messages requires weighing the performance requirements against these problems. If you need a throughput above 100,000 messages per second on a single RabbitMQ server, either use other means to guarantee reliable delivery, or back full persistence with a very fast storage system (for example, SSDs). Another practical principle is to persist only the key messages (judged by business importance), while making sure the volume of key messages does not itself become a performance bottleneck.

How to ensure high availability? RabbitMQ cluster

RabbitMQ is more representative, because it is based on master-slave (non-distributed) for high availability, we will use RabbitMQ as an example to explain how to achieve the first type of MQ high availability. RabbitMQ has three modes: stand-alone mode, normal cluster mode, and mirrored cluster mode.

The stand-alone mode is demo-level; you generally only start it locally to experiment. Nobody uses stand-alone mode in production.

Normal cluster mode means to start multiple instances of RabbitMQ on multiple machines, one for each machine. The queue you create will only be placed on one RabbitMQ instance, but each instance synchronizes the metadata of the queue (metadata can be considered as some configuration information of the queue, through the metadata, you can find the instance where the queue is located). When you consume, if you actually connect to another instance, that instance will pull data from the instance where the queue is located. This solution is mainly to improve throughput, that is, to allow multiple nodes in the cluster to serve the read and write operations of a certain queue.

Mirrored cluster mode: this is RabbitMQ's real high-availability mode. Unlike the normal cluster mode, in mirrored mode the queue you create, both its metadata and its messages, exists on multiple instances: every RabbitMQ node keeps a complete mirror of the queue, meaning all of its data. Every time you write a message to the queue, it is automatically replicated to the queues on the other instances.

RabbitMQ has a very good management console: you add a policy in the background that enables the mirrored-cluster mode, and you can require data to be synchronized to all nodes or to a specified number of nodes. When you create a queue with this policy applied, data is automatically synchronized to the other nodes.

The advantage: if any one of your machines goes down, it doesn't matter; the other nodes still hold the queue's complete data, and consumers can consume from them. The downsides: first, the performance overhead is high, because every message must be synchronized to all machines, putting heavy pressure on network bandwidth; second, there is no real scalability: a RabbitMQ queue's data lives on one node, and under mirroring each node simply holds the queue's complete data as well, so adding nodes does not let the queue grow beyond one machine's capacity.

How to solve the delay and expiration of the message queue? What should I do when the message queue is full? There are millions of news backlogged for several hours, talk about how to solve it?

Message backlog processing method: temporary emergency expansion:

Fix the consumer's problem first so that its consumption speed recovers, then stop all existing consumers.

Create a new topic with 10 times the original partitions, and temporarily create 10 times the original number of queues. Write a temporary distributor program that consumes the backlog and, without doing any time-consuming processing, round-robins the messages evenly into the 10x queues. Then temporarily requisition 10 times the machines to run consumers, each batch consuming one of the temporary queues. This approach temporarily expands queue and consumer resources tenfold, draining the backlog at 10 times the normal speed. Once the backlog is consumed, restore the originally deployed architecture and consume with the original consumer machines.

Message expiry in MQ: suppose you are using RabbitMQ, which supports an expiration time, the TTL. If messages back up in a queue longer than the TTL, RabbitMQ cleans them up and the data is simply gone. That is the second pitfall: the problem is no longer a large backlog in MQ, but a large amount of data being dropped outright. The plan here is batch re-injection. We have handled a similar scene in production: during a large backlog we simply let the expired data be discarded, and then after the peak period, late at night when the users were asleep, we wrote a temporary program to find the lost data piece by piece and refill it into MQ, making up the data lost during the day. There is no other way. Suppose 10,000 orders sat unprocessed in MQ and 1,000 of them were lost: you can only write a program to find those 1,000 orders and manually re-send them into MQ.

The MQ is almost full: if the backlog sits in MQ for a long time without being handled and the MQ is nearly full, what then? Is there another way? Not really; that happens because the first plan executed too slowly. Write a temporary program that consumes messages and simply discards each one, draining the queue quickly, and then use the second plan to re-inject the lost data at night.

Design MQ ideas

For example, for this message queuing system, let's consider it from the following perspectives:

First of all, this MQ must support scalability: when you need capacity quickly, you can increase throughput and storage. How? Design a distributed system along the lines of Kafka: broker -> topic -> partition, with each partition on its own machine storing part of the data. If resources run short, add partitions to the topic, migrate data, and add machines; you can then store more data and serve higher throughput.
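The broker -> topic -> partition idea can be sketched as hash-partitioning messages by key; adding partitions spreads the same data across more machines (a simplified sketch; all names are hypothetical):

```python
from hashlib import md5

def partition_for(key, num_partitions):
    """Deterministically map a message key to a partition."""
    digest = int(md5(key.encode()).hexdigest(), 16)
    return digest % num_partitions

def spread(keys, num_partitions):
    """Show which keys land on which partition."""
    layout = {p: [] for p in range(num_partitions)}
    for k in keys:
        layout[partition_for(k, num_partitions)].append(k)
    return layout

keys = [f"order-{i}" for i in range(1000)]
small = spread(keys, 4)    # 4 partitions on 4 machines
big = spread(keys, 16)     # scale out: 16 partitions on 16 machines
print(sum(len(v) for v in big.values()))  # 1000: same data, spread wider
```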

Secondly, should this MQ's data be persisted to disk? Certainly, so that data is not lost when a process fails. And how should you write it? Sequentially: sequential writes avoid the addressing overhead of random disk I/O, and sequential disk read/write performance is very high. This is Kafka's idea.

Next, consider the availability of your MQ. Refer to Kafka's high-availability guarantees explained in the earlier section on availability: multiple replicas -> leader & follower -> when a broker goes down, a leader is re-elected to continue serving.

Can it support zero data loss? Yes; refer to the Kafka zero-data-loss solution discussed earlier.

Author: ThinkWon Source: thinkwon.blog.csdn.net/article/det...