In my last blog post, I showed how easy it is to deploy a single RabbitMQ node for testing or non-HA queues. In this scenario, we will deploy a cluster of 2 nodes. First, we need to create a wrapper Box built on top of RabbitMQ to create the cluster. Let’s call it RabbitMQ Cluster Node.

RabbitMQ Cluster Node Box

The wrapper Box has the following variables:


Email me if you’d like me to share my RabbitMQ Cluster Node Box with you.

Puppet Scripts

Now we should make sure that the Puppet manifest used to configure RabbitMQ includes the nodename property. In this scenario we will use ip-{{ address.private.replace(".", "-") }} as the hostname. This tells Jinja to replace the dots in the node's private IP address with dashes, which matches the hostname AWS machines are given by default.

# default.pp
node default {
  stage { 'init': before => Stage['main'] }

  class { 'rabbitmq':
    port         => '$rabbitmq',
    ssl_port     => '$ssl_port',
    key_path     => '$SERVER_KEY_PATH',
    cert_path    => '$SERVER_CERT_PATH',
    ca_cert_path => '$CA_CERT_PATH',
    mnesia_base  => '$MNESIA_BASE',
    log_base     => '$LOG_BASE',
    user_name    => '$username',
    password     => '$password',
    version      => '$VERSION',
    node_name    => 'rabbit@ip-{{ address.private.replace(".", "-") }}',
    stage        => 'init'
  }

  rabbitmq::plugin { 'rabbitmq_management':
    ensure  => present,
    require => Class['rabbitmq']
  }

  #if ($username != '')
  rabbitmq_user { '$username':
    ensure   => present,
    admin    => true,
    password => '$password',
    require  => Class['rabbitmq']
  }

  rabbitmq_user_permissions { '$username':
    configure_permission => '.*',
    read_permission      => '.*',
    write_permission     => '.*',
    require              => Rabbitmq_user['$username'],
  }

  #if ($username != 'guest')
  rabbitmq_user { 'guest':
    ensure  => absent,
    require => Class['rabbitmq']
  }
}
We also need to create a post_configure Event with the following script to initialize or join an existing cluster. This is a great example of Box composition: this Box adds the clustering logic on top of the standard RabbitMQ Box without having to modify the original Box.

#!/bin/bash
rabbitmqctl set_parameter federation-upstream {{ instance }} '{"uri":"amqp://{{ upstream.rabbitmq.username }}:{{ upstream.rabbitmq.password }}@{{ upstream.address.private }}:5672","expires":3600000}'
rabbitmqctl set_policy --apply-to exchanges federate-me "^amq\." '{"federation-upstream-set":"all"}'
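The commands above configure the federation upstream and policy. The cluster-join step itself, run from a node that is not the first, could look roughly like the following sketch. The master hostname is a made-up placeholder, and rabbitmqctl is only invoked if it is actually installed:

```shell
#!/bin/bash
# Hypothetical sketch of joining an existing cluster from a second node.
# MASTER_HOST is a made-up placeholder for the first node's private hostname.
MASTER_HOST="ip-10-0-1-25"
JOIN_CMD="rabbitmqctl join_cluster rabbit@${MASTER_HOST}"

if command -v rabbitmqctl >/dev/null 2>&1; then
  rabbitmqctl stop_app       # the app must be stopped before joining
  $JOIN_CMD
  rabbitmqctl start_app
else
  echo "rabbitmqctl not installed; would run: $JOIN_CMD"
fi
```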

Deploying the Cluster

Step 1: Deploying the Cluster in EC2 (AWS)

Click the “New Instance” button and select the “RabbitMQ Node for Cluster” Box.


Create a new profile with AWS, t1.micro as the instance type, and AWS Linux as the image. You will also need to select Automatic security group to ensure Cloud Application Manager opens the correct TCP ports needed for cluster communication. Save the profile.


Fill the COOKIE field with the value you want shared between all the RabbitMQ nodes of the cluster. We will use this convention for naming the environment — i.e., rabbitmq-prod-us-west-1 — and deploy it.
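The COOKIE value ends up as the Erlang cookie, which must be byte-identical on every node in the cluster. A minimal sketch of writing it with the restrictive permissions RabbitMQ expects — a temp file stands in for the usual /var/lib/rabbitmq/.erlang.cookie, and the cookie value is a made-up example:

```shell
# Write the shared Erlang cookie; all nodes must hold the exact same value.
COOKIE="SUPERSECRETCOOKIE"          # example value typed into the COOKIE field
COOKIE_FILE="$(mktemp)"             # stands in for /var/lib/rabbitmq/.erlang.cookie
printf '%s' "$COOKIE" > "$COOKIE_FILE"
chmod 400 "$COOKIE_FILE"            # RabbitMQ refuses a world-readable cookie
cat "$COOKIE_FILE"
```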


Step 2: Deploying the Second Node of the Cluster in EC2 (AWS)

The second node follows much the same steps as the first, with the following differences.

  • Before launching the second node, the master must already be deployed in EC2.
  • When deploying the second node, name the environment rabbitmq-prod-us-west-2 and use the corresponding zone.
  • Select Automatic security group to ensure Cloud Application Manager opens the correct TCP ports needed for cluster communication.
  • Fill the COOKIE field with the one you want to be shared between the two RabbitMQ nodes of the cluster.

Select the binding to the master RabbitMQ node to build the cluster.


Deploy the second node, using the same environment name as the zone profile you are deploying with. Repeat these steps to add more nodes.

Extras: Adding the Recordsets for the RabbitMQ Nodes and Cluster using Route53

  • We could also use Route53 latency-based recordsets to point to the RabbitMQ nodes, load balancing the service instead of connecting directly to each node or running our own load balancer. That way we use only one address to connect to the RabbitMQ cluster, and Route53 (or our own load balancer) decides which node serves each request.
  • Ideally, we would do this with an AWS CloudFormation template or use an NGINX or HAProxy Box as the load balancer, but I’ll save that for another blog post. If you do try an AWS CloudFormation template, you should also configure each of the nodenames in the CloudFormation Box.
  • Another approach is an Admin Box that creates a recordset in Route53 for each of the nodes/instances being deployed.
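As a sketch of the recordset approach, the change batch for one latency-based A record could look like this. The domain, hosted-zone id, region, and IP are all made-up placeholders, so the aws CLI call is only echoed, not executed:

```shell
# Hypothetical latency-based A record for one cluster node (placeholders throughout).
CHANGE_BATCH='{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{
  "Name":"rabbitmq.example.com","Type":"A",
  "SetIdentifier":"rabbitmq-prod-us-west-1","Region":"us-west-1",
  "TTL":60,"ResourceRecords":[{"Value":"10.0.1.25"}]}}]}'

# Echo rather than execute, since credentials and the zone id are placeholders.
echo "Would run: aws route53 change-resource-record-sets" \
     "--hosted-zone-id ZEXAMPLE --change-batch '$CHANGE_BATCH'"
```

One recordset per node, each with its own SetIdentifier and Region, lets Route53 route clients to the lowest-latency node.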

Stay tuned for my next blog post on RabbitMQ Federation.

Want to Learn More About Cloud Application Manager and ElasticKube?

Cloud Application Manager is a powerful, scalable platform for deploying applications into production across any cloud infrastructure – private, public or hosted. It provides interactive visualization to automate application provisioning, including configuration, deployment, scaling, updating and migration of applications in real-time. Offering two approaches to cloud orchestration — Cloud Application Manager and ElasticKube — enterprise IT and developers alike can benefit from multi-cloud flexibility.

Explore ElasticKube by visiting GitHub (curl -s | bash).

Visit the Cloud Application Manager product page to learn more.