Install And Configure Galera MySQL Cluster On CentOS 7 Server


Some overall info about the Galera Cluster solution: Galera Cluster for MySQL is a true multi-master cluster based on synchronous replication. Galera Cluster is an easy-to-use, high-availability solution which provides high system uptime, no data loss and scalability for future growth. ( Source )

For more information, you could take a look at this official Galera Cluster for MySQL introduction video:

https://youtu.be/n8vM_HVnnfc

 


Note! With MariaDB 10.1, the wsrep API for Galera Cluster is included by default. For MariaDB 10.0 and MariaDB 5.5 it is available as a separate download. If you need to do that for an older version, you can follow the steps here. If you want to do this for 10.1+, you can follow the steps at: Galera Cluster & MariaDB 10.1+ On CentOS 7 Server

After a few hours of research I came to the conclusion that most of the existing guides on how to install and configure a Galera Cluster are written for Ubuntu, so I decided to write this step-by-step guide on how to set up a Galera Cluster on a new CentOS 7 server.

- Very important! Before you proceed, please do the following, or your setup might not work at all:

1) Disable SELinux. Set SELINUX=disabled in the /etc/selinux/config file:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#       targeted - Targeted processes are protected,
#       mls - Multi Level Security protection.
SELINUXTYPE=targeted

Reboot your system. After reboot, confirm that the getenforce command returns Disabled:

# getenforce
Disabled
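
As a side note, if you cannot reboot the machine right away, you could temporarily switch SELinux to permissive mode for the current session (the config change above still needs a reboot to fully disable it):

# setenforce 0
# getenforce
Permissive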

2) Disable and stop firewalld. To do that, run the following commands as root:

systemctl disable firewalld
systemctl stop firewalld

Check the status of firewalld:

systemctl status firewalld
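
Disabling the firewall completely keeps this guide simple. If you would rather keep firewalld running, a possible alternative (not covered further in this guide) is to open the ports Galera needs: 3306 for MySQL client connections, 4567 for Galera replication traffic, 4568 for incremental state transfer and 4444 for state snapshot transfer. A rough sketch of that would be:

firewall-cmd --permanent --add-port=3306/tcp
firewall-cmd --permanent --add-port=4567/tcp
firewall-cmd --permanent --add-port=4567/udp
firewall-cmd --permanent --add-port=4568/tcp
firewall-cmd --permanent --add-port=4444/tcp
firewall-cmd --reload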

Alright, now we are ready to begin!

In my case the IP addresses of my 3 machines are:

Server1: 192.168.201.90
Server2: 192.168.201.91
Server3: 192.168.201.92

Make sure to change the IP addresses to the IP addresses of your machines.

As a side note, it is considered good practice to always run an odd number of Galera nodes in order to keep a healthy quorum: with 3 nodes the cluster still has a majority (2 out of 3) if one node fails, while with an even number of nodes a network split can leave two equal halves, neither of which has quorum.

Installation
- Remove the default MariaDB library files on all three servers:

[root@server1 ~]# yum remove mariadb-libs -y

- Once the default MariaDB library files have been removed, create the MariaDB repository file for all three servers:

[root@server1 ~]# vim /etc/yum.repos.d/cluster.repo

[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.0/centos7-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
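
After saving the file, you can optionally confirm that yum picks up the new repository before installing anything:

yum repolist enabled | grep -i mariadb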

- Then install the socat dependency package on all 3 servers:

yum install socat

- Now, to install MariaDB Galera Cluster 10.0 and its dependencies on all the servers, run:

yum install MariaDB-Galera-server MariaDB-client rsync galera
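
If you want to double-check what was pulled in, listing the installed MariaDB packages is a quick way to do it:

rpm -qa | grep -i mariadb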

- Once it is installed, start the MariaDB service on all the servers:

systemctl start mysql

- Note! If you get the following error:

 Failed to start mysql.service: Unit not found

Then please use this command here:

/etc/init.d/mysql start

- Secure the MariaDB installation by running the following command (please set a root password!):

mysql_secure_installation

- Then create the MySQL user that will be used for the replication. You can use these commands:

grant usage on *.* to my_db_user@'%' identified by 'mypassword';
grant all privileges on *.* to my_db_user@'%' identified by 'mypassword';
flush privileges;
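
Granting access from '%' (any host) keeps the example simple. If you prefer to tighten this a bit, you could limit the account to the cluster subnet used in this guide, for example:

grant all privileges on *.* to my_db_user@'192.168.201.%' identified by 'mypassword';
flush privileges;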

- Now you can stop the MariaDB service by running the following command. Do the same for all servers:

systemctl stop mysql

- Note! If you get the following error:

 Failed to stop mysql.service: Unit not found

Then please use this command here:

/etc/init.d/mysql stop

Configuration

We can now start with the Galera cluster configuration by adding the cluster address, cluster name, node address, etc. We need to do that for all servers:

[root@server1 ~]# vim /etc/my.cnf.d/server.cnf
[mariadb-10.0]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
query_cache_size=0
query_cache_type=0
bind-address=0.0.0.0
datadir=/var/lib/mysql
innodb_log_file_size=100M
innodb_file_per_table
innodb_flush_log_at_trx_commit=2
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://192.168.201.90,192.168.201.91,192.168.201.92"
wsrep_cluster_name='galera_cluster'
wsrep_node_address='192.168.201.90'
wsrep_node_name='server1'
wsrep_sst_method=rsync
wsrep_sst_auth=my_db_user:mypassword

To complete the cluster configuration on server2 and server3, do not forget to adjust the wsrep_node_address and wsrep_node_name variables. Before doing that, make sure the MySQL service is stopped on both servers.

Server2:

wsrep_node_address='192.168.201.91'
wsrep_node_name='server2'

Server3:

wsrep_node_address='192.168.201.92'
wsrep_node_name='server3'

Once all the nodes are configured, you can start the first cluster node.

NOTE: This command should be executed only on server1, as it bootstraps the new cluster:

[root@server1 ~]# /etc/init.d/mysql start --wsrep-new-cluster
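
Before starting the remaining nodes, it is worth verifying that the bootstrap node is up and reports a cluster size of 1. Assuming you can log in as root from the shell, a quick check looks like this:

# mysql -e "show status like 'wsrep_cluster_size';"
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 1     |
+--------------------+-------+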

After that, start MySQL on server2 and server3 as well. Run the following command on both of the remaining servers:

systemctl start mysql

- Note! If you get the following error:

 Failed to start mysql.service: Unit not found

Then please use this command here:

/etc/init.d/mysql start

Now check the status of the configured Galera cluster on server1. Repeat the same commands on server2 and server3 to check the status there:

# mysql

MariaDB [(none)]> show status like 'wsrep%';

| wsrep_incoming_addresses      | 192.168.201.90:3306,192.168.201.91:3306,192.168.201.92:3306 |
| wsrep_local_state_comment     | Synced                                                       |
| wsrep_cluster_size            | 3                                                            |
| wsrep_ready                   | ON                                                           |
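
The full 'wsrep%' output contains many more variables than the handful shown above. If you only care about the essentials, you can also query them directly from the shell, for example:

mysql -e "show status where Variable_name in ('wsrep_cluster_size','wsrep_local_state_comment','wsrep_ready','wsrep_incoming_addresses');"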

This is pretty much it! Now you can play around a little bit, for example by creating databases on one server and checking that they show up on the others. Here is an example:

On server1:

# mysql
> create database test1;
> show databases;

+--------------------+
| Database           |
+--------------------+
| test1              |
| information_schema |
| mysql              |
| performance_schema |
+--------------------+

After that, log in to MySQL on server2 and check that the database was replicated there too:

>show databases;

+--------------------+
| Database           |
+--------------------+
| test1              |
| information_schema |
| mysql              |
| performance_schema |
+--------------------+

Log in to MySQL on server3 and check the replication there as well:

>show databases;

+--------------------+
| Database           |
+--------------------+
| test1              |
| information_schema |
| mysql              |
| performance_schema |
+--------------------+

You can create some tables or import some data to fully test the replication. Here is a short, self-contained example (if you followed the steps above, the test1 database already exists, so you can skip straight to the use test1 statement):

MariaDB [(none)]> create database test1;
Query OK, 1 row affected (0.01 sec)

MariaDB [(none)]> use test1;
Database changed

MariaDB [test1]> create table replication(master char(25));
Query OK, 0 rows affected (0.01 sec)

MariaDB [test1]> show tables;
+-----------------+
| Tables_in_test1 |
+-----------------+
| replication     |
+-----------------+
1 row in set (0.00 sec)

You can insert some values from all servers just as a test:

MariaDB [test1]> insert into replication values("server1");
Query OK, 1 row affected (0.00 sec)

MariaDB [test1]> select * from replication;
+---------+
| master  |
+---------+
| server1 |
+---------+
1 row in set (0.00 sec)
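
To see the multi-master behaviour in action, repeat the insert from another node, for example on server2, and then run the select again from any node; both rows should show up everywhere:

MariaDB [test1]> insert into replication values("server2");
Query OK, 1 row affected (0.00 sec)

MariaDB [test1]> select * from replication;
+---------+
| master  |
+---------+
| server1 |
| server2 |
+---------+
2 rows in set (0.00 sec)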

Hope that this helps!

On a side note, if you need to add new nodes to the pool, please keep the following in mind. If you are using rsync or mysqldump for the SST transfer, bringing up the new nodes will lock the databases on the donor nodes, which could bring down heavily used setups. It is therefore strongly recommended to add new nodes when the cluster is under light load. Additionally, you may want to set the load balancer to redirect requests to only 2 of the 3 nodes. You can then use the third node as the donor, which means it will be the only node whose database gets locked while it provides the new nodes with the data.

To set an SST donor, add the following line to the MySQL configuration on the new nodes:

wsrep_sst_donor= donornodename
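
The value has to match the wsrep_node_name of the donor. For example, with the node names used in this guide, a new node that should sync from server3 would get:

wsrep_sst_donor=server3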

( Source 1 )
( Source 2 )

 
