ActiveMQ usage summary (cluster solution)

1. MQ status

1.1. Overview

ActiveMQ is a popular and powerful open-source message broker produced by Apache. It is a JMS provider implementation that fully supports the JMS 1.1 and J2EE 1.4 specifications.

1.2. Features of ActiveMQ

  • Clients can be written in many languages and protocols. Languages: Java, C, C++, C#, Ruby, Perl, Python, PHP. Application protocols: OpenWire, STOMP, REST, WS-Notification, XMPP, AMQP
  • Full support for the JMS 1.1 and J2EE 1.4 specifications (persistence, XA messages, transactions)
  • Good Spring support: ActiveMQ can easily be embedded into systems that use Spring, and it also supports Spring 2.0 features
  • Tested against common J2EE servers (such as Geronimo, JBoss 4, GlassFish, WebLogic); by configuring a JCA 1.5 resource adaptor, ActiveMQ can be deployed automatically to any compliant J2EE 1.4 commercial server
  • Supports multiple transport protocols: in-VM, TCP, SSL, NIO, UDP, JGroups, JXTA
  • Supports high-speed message persistence via JDBC and a journal
  • High-performance clustering, client-server, and peer-to-peer topologies, guaranteed by design
  • Ajax support
  • Supports integration with Axis
  • The embeddable JMS provider makes testing easy

1.3. Existing solutions

We currently use the officially provided network-of-brokers cluster solution.

1.4. Known problems

The cluster runs an older release (5.6), and TCP half-open connections on the network channels between brokers cause frequent cluster failures.

At present a single MQ instance is used directly for business traffic, and it can support the existing level of concurrency.

What is urgently needed right now is high availability of MQ, that is, a Master-Slave solution.

2. The cluster solution provided by MQ

2.1. Broker-Cluster load-balancing solution

The Broker-Cluster deployment solves the load-balancing problem. In this deployment, the brokers connect to each other over the network and share queues. Suppose queue-A on broker-A holds a message in the pending state but no consumer is connected to broker-A, while a consumer in the cluster is consuming from queue-A via broker-B: broker-B first fetches the message from broker-A over the internal network and then delivers it to its own consumer.

2.1.1. Static Broker-Cluster Deployment

<networkConnectors>
    <networkConnector uri="static:(tcp://0.0.0.0:61617)" duplex="false"/>
</networkConnectors>

2.1.2. Dynamic Broker-Cluster deployment

<networkConnectors>
    <networkConnector uri="multicast://default"
        dynamicOnly="true"
        networkTTL="3"
        prefetchSize="1"
        decreaseNetworkConsumerPriority="true"/>
</networkConnectors>

 

2.2. Master-Slave high availability solution

2.2.1. Shared filesystem Master-Slave deployment method

Master-slave hot backup is achieved mainly by sharing a storage directory. All ActiveMQ brokers continually try to acquire exclusive control of the shared directory; whichever broker obtains control becomes the master.

Among the brokers sharing the same storage directory, the one that starts first acquires control of the directory and becomes the master; the others can only act as slaves.
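As an illustration, every broker in the group points its persistence adapter at the same shared directory; the exclusive file lock on that directory is what decides the master. A minimal sketch (the mount path is a placeholder):

```xml
<!-- activemq.xml on every broker in the group: all point at one shared mount -->
<persistenceAdapter>
    <kahaDB directory="/sharedFileSystem/sharedBrokerData"/>
</persistenceAdapter>
```

The broker that obtains the file lock serves clients; the rest block on the lock and take over automatically when the master releases it.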

 

2.2.2. Shared database Master-Slave deployment method

Similar to the shared filesystem, except that the shared storage medium is changed from a file system to a database.
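A minimal sketch of the corresponding configuration, using a MySQL data source (the host, schema, and credentials are placeholders):

```xml
<!-- Point the broker's persistence at a shared database -->
<persistenceAdapter>
    <jdbcPersistenceAdapter dataSource="#mysql-ds"/>
</persistenceAdapter>

<!-- The data source is defined as a Spring bean elsewhere in activemq.xml -->
<bean id="mysql-ds" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
    <property name="url" value="jdbc:mysql://db-host/activemq?relaxAutoCommit=true"/>
    <property name="username" value="activemq"/>
    <property name="password" value="activemq"/>
</bean>
```

Here the master is whichever broker holds the database lock, so the database becomes both the shared store and the election mechanism.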

2.2.3. Replicated LevelDB Store deployment method

This active-standby method is a new feature introduced in ActiveMQ 5.9. ZooKeeper coordinates the election of one node as the master; the elected master broker starts and accepts client connections.

The other nodes enter slave mode, connect to the master, and synchronize their stored state. Slaves do not accept client connections. All storage operations are replicated to the slaves connected to the master.

If the master dies, the slave with the latest updates becomes the new master. The failed node can then rejoin the network and connect to the new master in slave mode. Every message operation that requires a synchronized disk write waits until the storage update has been replicated to a quorum of nodes before success is reported. So if you configure replicas=3, the quorum size is (3/2)+1=2: the master stores the update and waits for (2-1)=1 slave to store it before reporting success. As for why it is 2-1: as those familiar with ZooKeeper will know, one node exists as a watcher.

To elect a new master, at least a quorum of nodes must remain online so that the node with the latest state can be found; that node becomes the new master. It is therefore recommended to run at least three replica nodes, so that a single node failure does not interrupt service.
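A minimal persistenceAdapter sketch for this mode, assuming a three-node ZooKeeper ensemble (the ZooKeeper addresses and hostname are placeholders):

```xml
<persistenceAdapter>
    <replicatedLevelDB
        directory="activemq-data"
        replicas="3"
        bind="tcp://0.0.0.0:0"
        zkAddress="zk1:2181,zk2:2181,zk3:2181"
        zkPath="/activemq/leveldb-stores"
        hostname="broker1"/>
</persistenceAdapter>
```

The same configuration (with each node's own `hostname`) goes on all three brokers; ZooKeeper then handles master election and failover automatically.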

2.3. Scheme comparison

  • A Broker-Cluster provides both fail-over and load balancing across multiple brokers
  • Master-Slave provides fail-over, but not load balancing
  • In a Broker-Cluster, messages are forwarded between brokers but each message is stored on only one broker; if that broker fails it must be restarted before its messages are available again. In Master-Slave mode, when the master fails, the slave already holds a real-time backup of the messages
  • The JDBC approach is costly and inefficient
  • Pure Master-Slave mode is cumbersome to manage


Therefore, we combine Master-Slave and Broker-Cluster to get a complete solution: the cluster load-balances, and no messages are lost when any broker node goes down.

 

3. Best solution

3.1. Solution overview

Weighing the advantages and disadvantages of the solutions above, we ultimately need both high availability and load balancing. The combined cluster solution, based on ZooKeeper plus network connectors, is shown in the following figure:

 

The cluster connection method is as follows:
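As a sketch of the inter-group links, a network connector from one Master-Slave group to another can use the `masterslave:` URI scheme, so the bridge always follows whichever remote node is currently the master (the hostnames are placeholders):

```xml
<networkConnectors>
    <networkConnector uri="masterslave:(tcp://gb-mq-1:61616,tcp://gb-mq-2:61616,tcp://gb-mq-3:61616)"
        duplex="false"/>
</networkConnectors>
```

With this in place, messages forwarded between the groups survive a master failover on either side.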

 

3.2. Upgrade strategy

Based on an analysis of how MQ is currently used online, high availability must be supported first, so the above solution can be implemented step by step:

1 Upgrade the existing MQ to the latest version, and test and verify compatibility with 5.6.

2 Build the ZooKeeper-based Master-Slave cluster GA-MQ.

3 Establish a network connection between the high-availability cluster GA-MQ and the original MQ.

4 Switch online producers to the new GA-MQ cluster.

5 Switch online consumers to the new GA-MQ cluster.

6 After the messages remaining on the original MQ have been fully processed, shut it down.

7 Build the GB-MQ cluster and partition the load by business.

8 Establish a network connection between GA-MQ and GB-MQ to realize the final solution.
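For steps 4 and 5, clients would be repointed at the new cluster with a `failover:` broker URL, so they automatically reconnect to whichever GA-MQ node is currently the master (the hostnames are placeholders):

```
failover:(tcp://ga-mq-1:61616,tcp://ga-mq-2:61616,tcp://ga-mq-3:61616)?randomize=false
```

Because slaves refuse client connections, the failover transport naturally lands on the current master and retries the others after a failover.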

 

Each of the above steps requires strict testing and verification before the production environment is switched over.
