What is DRBD (Distributed Replicated Block Device)?
DRBD (Distributed Replicated Block Device) is a Linux-based software component to mirror or replicate individual storage devices (such as hard disks or partitions) from one node to the other(s) over a network connection. DRBD makes it possible to maintain consistency of data among multiple systems in a network. DRBD also ensures high availability (HA) for Linux applications.
DRBD supports three distinct replication modes, allowing three degrees of replication synchronicity.
- Protocol A: Asynchronous replication protocol.
- Protocol B: Memory synchronous (semi-synchronous) replication protocol.
- Protocol C: Synchronous replication protocol.
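The protocol is selected in the net section of the DRBD configuration, as we will do later in this tutorial with Protocol C. As an illustrative sketch, switching a resource to asynchronous replication only requires changing that single keyword:

net {
    protocol A;    # A: write completes once data reaches the local disk and the local TCP send buffer
    # protocol B;  # B: write completes once data reaches the local disk and the peer node's buffer memory
    # protocol C;  # C: write completes only once data reaches both the local and the remote disk
}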
In this tutorial, we are going to create and configure a DRBD cluster across two servers. Both servers have an empty disk attached as /dev/sdb.
Environment
Servers                    | IP Address   | OS
ylpldrbd01.yallalabs.local | 192.168.1.20 | CentOS 7
ylpldrbd02.yallalabs.local | 192.168.1.21 | CentOS 7
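DRBD matches the on <hostname> entries in its resource file against each node's own hostname, so it helps to have consistent name resolution between the two machines. This is optional but convenient: if you are not relying on DNS, a minimal /etc/hosts sketch for both nodes, using the addresses from the table above, would be:

# vi /etc/hosts
192.168.1.20    ylpldrbd01.yallalabs.local    ylpldrbd01
192.168.1.21    ylpldrbd02.yallalabs.local    ylpldrbd02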
Installing DRBD
1. In order to install DRBD, you will need to enable the ELRepo repository on both nodes, because this software package is not distributed through the standard CentOS and Red Hat Enterprise Linux repositories.
Use the following commands to import the GPG key and install the ELRepo repository on both nodes:
# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
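Optionally, you can confirm that the repository is now available before proceeding:

# yum repolist | grep elrepo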
2. Run the following command on both nodes to install the DRBD software and all the necessary kernel modules:
# yum install drbd90-utils kmod-drbd90
– Once the installation is complete, check whether the kernel module is loaded correctly using this command:
# lsmod | grep -i drbd
If it is not loaded automatically, you can load the module into the kernel on both nodes using the following command:
# modprobe drbd
Note that the modprobe command loads the kernel module only for your current session. In order for it to be loaded during boot, you have to make use of the systemd-modules-load service by creating a file inside /etc/modules-load.d/ so that the DRBD module is loaded properly each time the system boots:
# echo drbd > /etc/modules-load.d/drbd.conf
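To double-check that the configuration takes effect without a reboot, you can optionally restart the systemd-modules-load service and look for the module again:

# systemctl restart systemd-modules-load.service
# lsmod | grep -i drbd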
Configuring DRBD
After having successfully installed DRBD on both nodes, we need to modify the DRBD global and common settings by editing the file /etc/drbd.d/global_common.conf.
1. Let’s back up the original settings on both nodes with the following command:
# mv /etc/drbd.d/global_common.conf /etc/drbd.d/global_common.conf.orig
2. Create a new global_common.conf file on both nodes with the following contents:
# vi /etc/drbd.d/global_common.conf

global {
    usage-count no;
}
common {
    net {
        protocol C;
    }
}
3. Next, we will need to create a new configuration file called /etc/drbd.d/drbd0.res for the new resource named drbd0, with the following contents:
# vi /etc/drbd.d/drbd0.res

resource drbd0 {
    disk /dev/sdb;
    device /dev/drbd0;
    meta-disk internal;
    on ylpldrbd01 {
        address 192.168.1.20:7789;
    }
    on ylpldrbd02 {
        address 192.168.1.21:7789;
    }
}
In the above resource file, we define a new resource named drbd0, where 192.168.1.20 and 192.168.1.21 are the IP addresses of our two nodes and 7789 is the port used for replication traffic; the disk /dev/sdb backs the new device /dev/drbd0.
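Before going further, you can optionally ask drbdadm to parse the configuration and print it back; this catches typos or an "on" entry that does not match the node's hostname:

# drbdadm dump drbd0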
4. Initialize the metadata storage by executing the following command on both nodes:
# drbdadm create-md drbd0
5. Start and enable the DRBD daemon on both nodes:
# systemctl start drbd
# systemctl enable drbd
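You can optionally verify that the daemon came up cleanly on both nodes:

# systemctl status drbd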
6. Let’s define the first node “ylpldrbd01” as the DRBD primary node:
# drbdadm up drbd0
# drbdadm primary drbd0
Note:
If you get an error when promoting the node to primary, use the following command to forcefully make the node primary:
# drbdadm primary drbd0 --force
7. On the secondary node “ylpldrbd02”, run the following command to bring up the drbd0 resource:
# drbdadm up drbd0
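Both nodes should now connect to each other. You can optionally verify the connection state on either node; it should report Connected once the handshake completes:

# drbdadm cstate drbd0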
8. You can check the current status of the synchronization while it is being performed, as shown here:
# cat /proc/drbd
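Note that with the DRBD 9 packages installed here (drbd90), /proc/drbd mainly shows the loaded module version; the per-resource state and synchronization progress are reported by drbdadm, for example:

# drbdadm status drbd0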
9. Adjust the firewall on both nodes so the peer can reach TCP port 7789, replacing ip_address with the peer node’s IP address:
# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="ip_address" port port="7789" protocol="tcp" accept'
# firewall-cmd --reload
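With the addresses from the environment above, for example, each node allows its peer:

On ylpldrbd01 (allow the peer 192.168.1.21):
# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.21" port port="7789" protocol="tcp" accept'

On ylpldrbd02 (allow the peer 192.168.1.20):
# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.20" port port="7789" protocol="tcp" accept'

Then reload the firewall on both nodes:
# firewall-cmd --reload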
Testing DRBD
In order to test DRBD functionality, we need to create a file system, mount the volume, and write some data on the primary node “ylpldrbd01”, and finally switch the primary role to “ylpldrbd02”.
– Run the following commands on the primary node to create an XFS filesystem on /dev/drbd0 and mount it to the /mnt directory:
# mkfs.xfs /dev/drbd0
# mount /dev/drbd0 /mnt
– Create some data using the following command:
# touch /mnt/file{1..5}
# ls -l /mnt/
total 0
-rw-r--r--. 1 root root 0 Sep 22 21:43 file1
-rw-r--r--. 1 root root 0 Sep 22 21:43 file2
-rw-r--r--. 1 root root 0 Sep 22 21:43 file3
-rw-r--r--. 1 root root 0 Sep 22 21:43 file4
-rw-r--r--. 1 root root 0 Sep 22 21:43 file5
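Because DRBD replicates at the block level, these writes are already being mirrored to the secondary node. You can optionally confirm that the local disk is in sync; once the initial synchronization has finished, the disk state should report UpToDate:

# drbdadm dstate drbd0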
– Let’s now switch the primary role from “ylpldrbd01” to the second node “ylpldrbd02” to check whether data replication works.
First, we have to unmount the volume drbd0 on the first DRBD cluster node “ylpldrbd01”:
# umount /mnt
Change the role from primary to secondary on the first DRBD cluster node “ylpldrbd01”:
# drbdadm secondary drbd0
Change the role from secondary to primary on the second DRBD cluster node “ylpldrbd02”:
# drbdadm primary drbd0
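Before mounting, you can optionally confirm that the roles have switched as expected; ylpldrbd01 should now report Secondary and ylpldrbd02 Primary:

# drbdadm role drbd0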
Mount the volume and check whether the data is available:
# mount /dev/drbd0 /mnt
# ls -l /mnt
total 0
-rw-r--r--. 1 root root 0 Sep 22 21:43 file1
-rw-r--r--. 1 root root 0 Sep 22 21:43 file2
-rw-r--r--. 1 root root 0 Sep 22 21:43 file3
-rw-r--r--. 1 root root 0 Sep 22 21:43 file4
-rw-r--r--. 1 root root 0 Sep 22 21:43 file5
We hope this tutorial was helpful. If you need more information or have any questions, just comment below and we will be glad to assist you!
7 comments
Excellent post
Thx Manuel!
So I was just able to sync two nodes using your guide. I am still trying to understand how DRBD works, but during the process I got stuck on a couple of issues. In step 6 and after, I’d get the error:
‘drbd’ not defined in your config(for this host).
In this case ‘drbd’ refers to the resource, which you have defined as ‘drbd0’.
Also, the second issue is the connection part; after doing everything my connection was still in “Connecting” status:
I tried many things, but I think what fixed it was issuing the “drbdadm up drbd0” command on the second node too, which the guide here does not seem to indicate.
Overall, really good post. It helped a lot!
I followed the steps correctly, but after I initiate drbdadm up drbd0 on both nodes it gives me peer-disk:Inconsistent on the Secondary node. Is this normal?
You need to wait until the nodes are replicated and synchronized.
Use the following command to check the status of the DRBD nodes: drbd-overview
We followed the steps mentioned but it looks like it is not replicating any data.
Old Primary, now new Secondary:
# drbd-overview
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.
0:drbd0/0 Connec/StaAlo Second/Unknow UpToDa/DUnkno
New Primary, old Secondary:
# drbd-overview
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.
0:drbd0/0 Connec/StaAlo Primar/Unknow UpToDa/DUnkno
Yes, #drbd-overview will be obsoleted soon.
Please use #drbdadm status; it will give you the node status.
You need to use the Heartbeat package or the PCS package to manage HA across these DRBD nodes. Without that, they will just replicate the data.