How to Configure MariaDB MaxScale Master-Slave with Galera Cluster

In this article we will install a MariaDB Galera cluster with the MaxScale database proxy by MariaDB Corporation. MaxScale is an intelligent database proxy that can route database statements from the cluster to a single server. But unlike HAProxy, MaxScale uses the asynchronous I/O of the Linux kernel, which should help with performance.

The cluster we are going to build will use read-write splitting, meaning that MaxScale sends all writes to the master, which replicates them to all nodes, where they can then be read.

In this article we use node-01 as the master and node-02 / node-03 as slaves. Node-04 will be our MaxScale machine.

Mariadb Maxscale Master Slave Cluster

Main features of MaxScale

a) If any database server fails, a connection will be automatically created to another node
b) Connections can be dynamically added to or removed from a session
c) MaxScale routes client requests to a number of database servers

Installing the cluster

The first thing is to set up your hosts file with the hostnames and private IPs of all your hosts. This lets the nodes communicate over private IPs and avoids the need to encrypt the traffic. The hosts file (/etc/hosts) on all 4 servers needs entries for node-01, node-02, node-03 and node-04.
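The exact addresses depend on your network; as a sketch, assuming private IPs in the 10.0.0.0/24 range (these addresses are illustrative, substitute your own), /etc/hosts would contain:

```
10.0.0.1    node-01
10.0.0.2    node-02
10.0.0.3    node-03
10.0.0.4    node-04
```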

First three will be for Galera cluster, and fourth for MaxScale proxy.

Let's add the key for the MariaDB repository on the first three servers:

apt-key adv --recv-keys --keyserver hkp:// 0xF1656F24C74CD1D8

Then we will add the repository for x86 and POWER little-endian architectures; the right package will be installed according to your architecture.

add-apt-repository 'deb [arch=amd64,i386,ppc64el] xenial main'
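For reference, on Ubuntu 16.04 the two commands above typically look like the following; the keyserver and the mirror URL are assumptions, so pick a MariaDB 10.1 mirror for Xenial from the official download page:

```
apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xF1656F24C74CD1D8
add-apt-repository 'deb [arch=amd64,i386,ppc64el] http://mirror.example.com/mariadb/repo/10.1/ubuntu xenial main'
```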

Update the sources list

apt update

And then install MariaDB:

apt install mariadb-server rsync

Configuring and building the cluster

Next we need to edit the configuration files and build the cluster. Node-01 will bootstrap the cluster and the other nodes will join it. So let's first edit the configuration files on all three nodes.

nano /etc/mysql/my.cnf

There we need to find the [galera] section and change these lines:

[galera]
# Mandatory settings

# Allow server to accept connections on all interfaces.

The last two lines need to have the address and hostname of the current node, so the file above is from node-01. On every node those two lines need to be changed accordingly, while the other lines can stay the same.
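As a sketch, a typical [galera] section for node-01 under MariaDB 10.1 might look like the following; the cluster name is an arbitrary assumption, and the last two lines are the per-node values the text refers to:

```ini
[galera]
# Mandatory settings
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://node-01,node-02,node-03
wsrep_cluster_name=my_cluster        # assumed name, pick your own
wsrep_sst_method=rsync
binlog_format=row
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2

# Allow server to accept connections on all interfaces.
bind-address=0.0.0.0

# Per-node settings: change these on every node
wsrep_node_address=node-01
wsrep_node_name=node-01
```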

After this is done, we need to start the cluster. If the database server happens to be running, stop it on all three nodes.

systemctl stop mysql

On the first node run:
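Assuming MariaDB 10.1 on systemd, the cluster is typically bootstrapped with the helper script shipped with the server (treat this as a sketch if your version differs):

```
galera_new_cluster
```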


On the other two nodes:

systemctl start mysql

Back on the first node, we need to set the root password, so we will run:
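The standard way to do this is the script bundled with the server; assuming a default install, run it and follow the interactive prompts to set the root password and tidy up the defaults:

```
mysql_secure_installation
```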


After you have run that script, you can type this command on any node:

mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"

With all three nodes joined, it should report wsrep_cluster_size with a value of 3.

MaxScale Proxy installation and preparing the cluster

When we have the cluster up and running, we can turn to node-04 to install MaxScale on it. MaxScale is only supported on the x86_64 architecture for now. Let's download the MaxScale deb package with wget:
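The download URL is version-specific; assuming the 2.0.1 package for Ubuntu Xenial from MariaDB's download server (this URL is an assumption, check the MaxScale download page for the current path), the command would look something like:

```
wget https://downloads.mariadb.com/MaxScale/2.0.1/ubuntu/dists/xenial/main/binary-amd64/maxscale-2.0.1-2.ubuntu.xenial.x86_64.deb
```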


Next we will install a dependency:

apt install libcurl3

And then install MaxScale:

dpkg -i maxscale-2.0.1-2.ubuntu.xenial.x86_64.deb

MaxScale has now been installed. Next we need to enter the MySQL prompt on our Galera cluster again to create a maxscale user and grant it enough privileges for MaxScale to operate.

mysql -u root -p

And in the mysql prompt type this, line by line:

CREATE USER 'maxscale'@'%' IDENTIFIED BY 'your-password-here';
Query OK, 0 rows affected (0.00 sec)

GRANT SELECT ON mysql.db TO 'maxscale'@'%';
Query OK, 0 rows affected (0.01 sec)

GRANT SELECT ON mysql.user TO 'maxscale'@'%';
Query OK, 0 rows affected (0.01 sec)

GRANT SHOW DATABASES ON *.* TO 'maxscale'@'%';
Query OK, 0 rows affected (0.01 sec)

Configuring MaxScale

Let's explain how this config file works. The first part, under [maxscale], sets the number of worker threads to 4, turns off logging to /var/log/syslog, turns on logging to /var/log/maxscale, enables warning messages, logging to memory and notice messages, and turns off the log_info and log_debug developer options used for debugging the code.

The next important section is [Galera Monitor]. There we need to concentrate on several lines. The line that says servers= needs to be filled with the names of the servers. These are not hostnames; they are the names MaxScale gives the servers further down in this config file, server1 to server3 in our case. The user is the username we created in the previous section, maxscale in our case, and the password is whatever you set for the maxscale user. The Galera Monitor will pick one of our three nodes as master and the others as slaves: the node with the lowest WSREP_LOCAL_INDEX is selected as master. If the cluster configuration changes, a new selection may happen and a node with a lower index will be selected as master. If you don't want the master to change this way, you can use the option disable_master_failback and set it to 1, as in our config file. This way the master won't change even if a new node with a lower index joins the cluster.

Then we move to the next important section, [RW Split Router]. Here we again enter the names of the three servers and the same user and password as in the [Galera Monitor] section.

Lastly, we need to edit those three servers and enter the names by which MaxScale will see them and the IP addresses it will use to communicate with them. The names will be [server1], [server2] and [server3], and for the IPs we will use the private addresses to avoid having to encrypt the traffic.

Now on the MaxScale server, which is node-04, we are going to configure and start the MaxScale proxy. First let's set up ufw to allow connections on the needed ports:

ufw allow 6603
ufw allow 4442

Then back up the config file:

mv /etc/maxscale.cnf /etc/maxscale.cnf.bk

After the file has been backed up and moved, let's make a new file from scratch:

nano /etc/maxscale.cnf

There, you can use this as a skeleton for the configuration, except of course you need to change the passwords and addresses to match your setup:

[maxscale]
threads=4

[Galera Monitor]
type=monitor

[qla]
type=filter

[fetch]
type=filter

[RW Split Router]
type=service

[CLI]
type=service

[RW Split Listener]
type=listener
service=RW Split Router

[CLI Listener]
type=listener

[server1]
type=server

[server2]
type=server

[server3]
type=server
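Drawing on the explanation in the previous section and MaxScale 2.0 defaults, a fuller version of this skeleton might look like the following sketch. The password, the listener ports (matched here to the ufw rules above) and the filter settings are assumptions; adjust them to your setup:

```ini
[maxscale]
threads=4
syslog=0
maxlog=1
log_warning=1
log_notice=1
log_info=0
log_debug=0

[Galera Monitor]
type=monitor
module=galeramon
servers=server1,server2,server3
user=maxscale
passwd=your-password-here
disable_master_failback=1

[qla]
type=filter
module=qlafilter
options=/tmp/QueryLog

[fetch]
type=filter
module=regexfilter
match=fetch
replace=select

[RW Split Router]
type=service
router=readwritesplit
servers=server1,server2,server3
user=maxscale
passwd=your-password-here

[CLI]
type=service
router=cli

[RW Split Listener]
type=listener
service=RW Split Router
protocol=MySQLClient
port=4442

[CLI Listener]
type=listener
service=CLI
protocol=maxscaled
port=6603

[server1]
type=server
address=node-01
port=3306
protocol=MySQLBackend

[server2]
type=server
address=node-02
port=3306
protocol=MySQLBackend

[server3]
type=server
address=node-03
port=3306
protocol=MySQLBackend
```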

After this has been saved, you can start the maxscale service:

systemctl start maxscale.service

And test whether it is working (the default maxadmin password is mariadb):

maxadmin -pmariadb list servers

It should list server1 through server3, with one node marked as Master and the other two as Slave.

We have successfully installed the MaxScale proxy as a load balancer for our Galera cluster running on 3 Ubuntu 16.04 nodes, with a fourth node for MaxScale. MaxScale is a good solution for large clusters; today we made the smallest possible configuration, but scaling out from here is possible. I hope this article was useful for introducing you to MaxScale configuration. Thank you for reading and have a good day.

Mihajlo Milenovic 3:00 am

About Mihajlo Milenovic

Miki is a long-time GNU/Linux user, Free Software advocate and a freelance system administrator from Serbia. He got introduced to GNU/Linux in 2003 on an old AMD Duron computer, and since then has always been eager to learn new things about the system. Since 2016 he has written for Linoxide to share his experiences with a wider audience.
