RabbitMQ Docker Installation and Testing

What is MQ

A message queue is simply a queue structure used to store messages, MQ for short.
Reference:
http://blog.51cto.com/lee90/2058126

Selection

References
+ https://www.sojson.com/blog/48.html
+ https://blog.csdn.net/pkueecser/article/details/50613989

| Feature | ActiveMQ | RabbitMQ | Kafka | NMQ |
| --- | --- | --- | --- | --- |
| PRODUCER-CONSUMER | Supported | Supported | Supported | - |
| PUBLISH-SUBSCRIBE | Supported | Supported | Supported | Supported |
| REQUEST-REPLY | Supported | Supported | - | Not supported |
| API completeness |  |  |  | Low (static configuration) |
| Multi-language support | Supported, Java first | Language-agnostic | Supported, Java first | C/C++ |
| Single-node throughput | Tens of thousands msg/s | Tens of thousands msg/s | Hundreds of thousands msg/s | Tens of thousands msg/s per node |
| Message latency | - | Microsecond level | Millisecond level | - |
| Availability | High (master/slave) | High (master/slave) | Very high (distributed) | Very high (distributed) |
| Message loss | - | Theoretically no loss | - |  |
| Message duplication | - | Controllable | Duplicates possible in theory | - |
| Documentation completeness |  |  |  |  |
| Quick-start guide provided |  |  |  |  |
| First-deployment difficulty | - |  |  |  |

RabbitMQ Setup

Reference
+ https://hub.docker.com/_/rabbitmq/

Installation

docker pull rabbitmq:3.7-alpine

How to run

A few points to note:

  • Set a hostname, because RabbitMQ stores its data under the node name, and the default node name is the hostname;
$ docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3.7-alpine
# for persistence, add the following two volume options at startup
# (to make the messages themselves survive a broker restart, see the Pika sketch after this list)
-v "rabbitmq_log:/var/log/rabbitmq" \
-v "rabbitmq_data:/var/lib/rabbitmq" \

The broker listens on port 5672 by default.

  • Memory limit
    RabbitMQ's vm_memory_high_watermark parameter is controlled by the RABBITMQ_VM_MEMORY_HIGH_WATERMARK environment variable.

    --memory 2048m (and the implied upstream-default RABBITMQ_VM_MEMORY_HIGH_WATERMARK of 40%) will set the effective limit to 819MB (which is 40% of 2048MB).

  • Erlang cookie

    RabbitMQ nodes and the CLI tools (e.g. rabbitmqctl) use a shared secret cookie to decide whether they are allowed to communicate with each other.

The RABBITMQ_ERLANG_COOKIE environment variable sets the cookie; setting RABBITMQ_NODENAME as well makes it easier to invoke rabbitmqctl repeatedly against the target node:

$ docker run -it --rm --link some-rabbit:my-rabbit -e RABBITMQ_ERLANG_COOKIE='secret cookie here' -e RABBITMQ_NODENAME=rabbit@my-rabbit rabbitmq:3 bash
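
The volumes above only preserve RabbitMQ's data directory; for the messages themselves to survive a broker restart, the queue must also be declared durable and the messages published as persistent. A minimal sketch using the Pika client introduced later in this article (the queue name 'task_queue' and the localhost connection are illustrative assumptions):

#!/usr/bin/env python
# Minimal sketch: durable queue + persistent message.
# Assumes a broker on localhost; 'task_queue' is a made-up example name.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

# durable=True makes the queue definition itself survive a broker restart
channel.queue_declare(queue='task_queue', durable=True)

# delivery_mode=2 marks the message as persistent so it is written to disk
channel.basic_publish(exchange='',
                      routing_key='task_queue',
                      body='a persistent message',
                      properties=pika.BasicProperties(delivery_mode=2))
connection.close()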

Management plugin

The RabbitMQ image variant with the management plugin enabled.

Installation

docker pull rabbitmq:3.7-management-alpine

Run

$ docker run -d --hostname my-rabbit --name some-rabbit -p 8080:15672 rabbitmq:3.7-management-alpine

The management UI listens on port 15672 by default (mapped to host port 8080 above). The default account is guest / guest; it can be changed with RABBITMQ_DEFAULT_USER and RABBITMQ_DEFAULT_PASS.
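
To verify the account from an application, a connection can be opened with Pika using explicit credentials. A minimal sketch (it assumes the broker is reachable on localhost:5672 with the default guest / guest account; substitute whatever RABBITMQ_DEFAULT_USER / RABBITMQ_DEFAULT_PASS were set to):

#!/usr/bin/env python
# Minimal connectivity check. Assumes localhost:5672 and the guest/guest account.
import pika

credentials = pika.PlainCredentials('guest', 'guest')   # replace with your own user/password
params = pika.ConnectionParameters(host='localhost', port=5672, credentials=credentials)

connection = pika.BlockingConnection(params)
print("connected:", connection.is_open)
connection.close()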

Configuration

HiPE (High Performance Erlang): as one well-known Erlang expert ("Baye") put it, "Erlang's HiPE is essentially a JIT; according to language benchmarks, enabling HiPE makes pure-Erlang computation 2-3x faster, which is a considerable gain for compute-intensive applications."

Plugins

Additional plugins can be enabled by building your own image from a Dockerfile:

FROM rabbitmq:3.7-management-alpine
RUN rabbitmq-plugins enable --offline rabbitmq_mqtt rabbitmq_federation_management rabbitmq_stomp

Usage

Reference
+ https://www.rabbitmq.com/getstarted.html

Start the service

docker run -d --hostname my-rabbit --name some-rabbit -p 15672:15672 rabbitmq:3.7-management-alpine
docker logs some-rabbit
 node           : rabbit@my-rabbit
 home dir       : /var/lib/rabbitmq
 config file(s) : /etc/rabbitmq/rabbitmq.conf
 cookie hash    : 9dc/v2egQn2ldwnTkb+DSg==
 log(s)         : <stdout>
 database dir   : /var/lib/rabbitmq/mnesia/rabbit@my-rabbit
2018-08-07 10:20:49.914 [info] <0.205.0> Memory high watermark set to 801 MiB (840700723 bytes) of 2004 MiB (2101751808 bytes) total
...
2018-08-07 10:20:49.925 [info] <0.197.0> Node database directory at /var/lib/rabbitmq/mnesia/rabbit@my-rabbit is empty. Assuming we need to join an existing cluster or initialise from scratch...
...
2018-08-07 10:20:50.201 [info] <0.197.0> Adding vhost '/'
2018-08-07 10:20:50.227 [info] <0.415.0> Making sure data directory '/var/lib/rabbitmq/mnesia/rabbit@my-rabbit/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L' for vhost '/' exists
2018-08-07 10:20:50.233 [info] <0.415.0> Starting message stores for vhost '/'
2018-08-07 10:20:50.233 [info] <0.419.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_transient": using rabbit_msg_store_ets_index to provide index
2018-08-07 10:20:50.235 [info] <0.415.0> Started message store of type transient for vhost '/'
2018-08-07 10:20:50.236 [info] <0.422.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent": using rabbit_msg_store_ets_index to provide index
2018-08-07 10:20:50.237 [warning] <0.422.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent": rebuilding indices from scratch
2018-08-07 10:20:50.238 [info] <0.415.0> Started message store of type persistent for vhost '/'
2018-08-07 10:20:50.240 [info] <0.197.0> Creating user 'guest'
2018-08-07 10:20:50.247 [info] <0.197.0> Setting user tags for user 'guest' to [administrator]
2018-08-07 10:20:50.250 [info] <0.197.0> Setting permissions for 'guest' in '/' to '.*', '.*', '.*'
2018-08-07 10:20:50.256 [info] <0.460.0> started TCP Listener on 0.0.0.0:5672
2018-08-07 10:20:50.263 [info] <0.197.0> Setting up a table for connection tracking on this node: 'tracked_connection_on_node_rabbit@my-rabbit'
2018-08-07 10:20:50.269 [info] <0.197.0> Setting up a table for per-vhost connection counting on this node: 'tracked_connection_per_vhost_on_node_rabbit@my-rabbit'

Python publish/subscribe model

Requires the Pika library:

pip3 install pika

Hello World

Publisher

#!/usr/bin/env python
import pika

# establish a connection
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# declare the queue
channel.queue_declare(queue='hello')

# messages are never sent directly to a queue; they always go through an exchange
# (here the default exchange '', which routes by queue name)
channel.basic_publish(exchange='',
                      routing_key='hello',
                      body='Hello World!')
print(" [x] Sent 'Hello World!'")
connection.close()

Consumer


#!/usr/bin/env python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

# queue_declare is idempotent: it can be called any number of times and always refers to the same queue
channel.queue_declare(queue='hello')

def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)

# pika >= 1.0: the queue comes first, the callback is passed as on_message_callback,
# and no_ack has been renamed to auto_ack
channel.basic_consume(queue='hello',
                      on_message_callback=callback,
                      auto_ack=True)

print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()

Binding multiple queues to an exchange

send.py

#!/usr/bin/env python
import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='logs',
                         exchange_type='fanout')

message = ' '.join(sys.argv[1:]) or "info: Hello World!"
channel.basic_publish(exchange='logs',
                      routing_key='',
                      body=message)
print(" [x] Sent %r" % message)
connection.close()

receive.py

#!/usr/bin/env python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='logs',
                         exchange_type='fanout')

# pika >= 1.0 requires the queue argument; '' lets the broker pick a random name for this exclusive queue
result = channel.queue_declare(queue='', exclusive=True)
queue_name = result.method.queue

channel.queue_bind(exchange='logs',
                   queue=queue_name)

print(' [*] Waiting for logs. To exit press CTRL+C')

def callback(ch, method, properties, body):
    print(" [x] %r" % body)

# pika >= 1.0 argument names: on_message_callback and auto_ack (formerly no_ack)
channel.basic_consume(queue=queue_name,
                      on_message_callback=callback,
                      auto_ack=True)

channel.start_consuming()

Basic AMQP API

https://www.rabbitmq.com/amqp-0-9-1-quickref.html

basic
  basic.ack, basic.cancel, basic.consume, basic.deliver, basic.get, basic.nack,
  basic.publish, basic.qos, basic.recover, basic.recover-async, basic.reject, basic.return
channel
  channel.close, channel.flow, channel.open
confirm
  confirm.select
exchange (exchanges / routing)
  exchange.bind, exchange.declare, exchange.delete, exchange.unbind
queue (queues)
  queue.bind, queue.declare, queue.delete, queue.purge, queue.unbind
tx (transactions)
  tx.commit, tx.rollback, tx.select
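
Most of these methods map one-to-one onto Pika channel calls. As a hedged illustration (pika >= 1.0 assumed; the queue name 'task_queue' is made up), basic.qos plus an explicit basic.ack looks like this:

#!/usr/bin/env python
# Sketch of basic.qos (prefetch) and manual basic.ack with pika >= 1.0.
# 'task_queue' is only an example queue name.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='task_queue', durable=True)

# basic.qos: hand this consumer at most one unacknowledged message at a time
channel.basic_qos(prefetch_count=1)

def callback(ch, method, properties, body):
    print(" [x] got %r" % body)
    # basic.ack: acknowledge only after the message has been processed
    ch.basic_ack(delivery_tag=method.delivery_tag)

# auto_ack defaults to False, so unacknowledged messages are redelivered on failure
channel.basic_consume(queue='task_queue', on_message_callback=callback)

channel.start_consuming()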

Permissions

RabbitMQ user role categories:
none, management, policymaker, monitoring, administrator

RabbitMQ keeps two kinds of data:
* Definitions (topology)
  Definitions are stored in an internal database and replicated automatically to all cluster nodes; when one node changes, the others follow. This also means a backup of definitions can be taken from any node.
* Message data
  Message data lives in each node's message store; it is transparent to users and does not need to be managed directly.
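
Roles and permissions can be managed with rabbitmqctl or, since the management plugin is enabled here, through its HTTP API. A hedged sketch using the Python requests library (it assumes the API is on localhost:15672 with the administrator account guest / guest; the user name 'app_user' and its password are made-up examples):

#!/usr/bin/env python
# Sketch: create a user with the 'management' role tag and grant it full
# permissions on the default vhost '/' via the management plugin's HTTP API.
# Host, port, admin account and the new user's name/password are assumptions.
import requests

api = 'http://localhost:15672/api'
admin = ('guest', 'guest')          # an administrator account

# create (or update) the user and assign a role tag
requests.put(f'{api}/users/app_user',
             json={'password': 'app_pass', 'tags': 'management'},
             auth=admin).raise_for_status()

# grant configure/write/read permissions on vhost '/' (URL-encoded as %2F)
requests.put(f'{api}/permissions/%2F/app_user',
             json={'configure': '.*', 'write': '.*', 'read': '.*'},
             auth=admin).raise_for_status()

print('user created and permissions granted')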

Reposted from blog.csdn.net/jamsan_n/article/details/81490581