Introduction
This how-to walks you through the DRBD configuration and replication process. Distributed Replicated Block Device (DRBD) provides block-level replication between two or more nodes, replacing shared storage by creating a networked mirror. DRBD is used in environments that require systems or data to be highly available.
Prerequisites
* Two servers running the Debian GNU/Linux distribution. Other Linux distributions will also work, but the installation packages may differ.
* Both servers should be cross-connected or have a separate Network Interface for private communication.
* Both servers should have the same partitioning. This walkthrough assumes that both systems have a single /dev/sdb device that will be used as the DRBD volume.
Network:
The first part of the process is ensuring that both nodes can talk to each other. This can be done by configuring both nodes with a static private IP address.
You can modify the network interface file directly. Here is an example of the /etc/network/interfaces file of one of our nodes:
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

auto eth1
iface eth1 inet static
    address 10.0.10.10
    netmask 255.255.255.0

auto vmbr0
iface vmbr0 inet static
    address 172.16.10.10
    netmask 255.255.255.0
    gateway 172.16.10.1
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
In our setup, host01 is configured to use IP 10.0.10.10, and host02 is configured to use IP 10.0.10.11.
After changing the /etc/network/interfaces file, restart networking or bring up the new interface, and ensure both servers can communicate on their new private IPs.
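A quick reachability check like the following can confirm the private link is up. This is a hedged sketch: the peer address is host02's private IP from this walkthrough, so substitute your own addressing.

```shell
# Check that the peer node answers on its private IP (run from host01).
# 10.0.10.11 is host02's address in this setup; adjust for your network.
peer=10.0.10.11
if ping -c 1 -W 2 "$peer" >/dev/null 2>&1; then
    status="peer reachable"
else
    status="peer unreachable"
fi
echo "$status"
```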
Disk for DRBD:
Partitioning
Use parted, where /dev/sdb is the device we want to use:
parted /dev/sdb
Once parted is running, the command below creates your first partition on /dev/sdb, a 100GB volume for our first VM/DRBD device. This partition will be /dev/sdb1. (If the disk has no partition table yet, create one first with mklabel gpt.)
(parted) mkpart primary 0GB 100GB
It’s important to note that the sizes listed are disk locations in gigabytes: this tells parted to create a new partition spanning from the 0GB mark to the 100GB mark on the disk. To add a second partition, your starting location should be 100GB, as shown below:
(parted) mkpart primary 100GB 200GB
If you want to double-check your existing partitions to make sure you are using the right disk locations, run the following and review the results:
(parted) print all
Number  Start   End     Size    File system  Name     Flags
 1      0GB     100GB   100GB                primary
 2      100GB   200GB   100GB                primary
 3      200GB   300GB   100GB                primary
 4      300GB   400GB   100GB                primary
 5      400GB   500GB   100GB                primary
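The five partitions in that printout could also be created non-interactively using parted's script mode (-s). The sketch below only prints the commands it would run, so nothing touches /dev/sdb here; drop the echo to execute them for real.

```shell
# Generate the mkpart commands for five consecutive 100GB partitions.
# This loop prints them rather than running them (no disk is modified).
for i in 0 1 2 3 4; do
    start=$((i * 100)); end=$((start + 100))
    echo "parted -s /dev/sdb mkpart primary ${start}GB ${end}GB"
done
```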
DRBD configuration:
Software installation:
Install the DRBD user tools. On ALL DRBD nodes, run:
apt-get update && apt-get install drbd8-utils
Prepare DRBD configuration:
Replace /etc/drbd.d/global_common.conf with the following content:
global {
    usage-count no;
}
common {
    syncer {
        rate 30M;
        verify-alg md5;
    }
    handlers {
        out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
    }
}
Configuring the rate of synchronization:
A good rule of thumb for this value is about 30% of the available replication bandwidth or I/O throughput, whichever is lower.
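As a worked example of that rule of thumb: on a dedicated gigabit replication link (roughly 125 MB/s of raw throughput), 30% lands close to the 30M used in the syncer section above. The link speed below is an assumed figure for illustration.

```shell
# 30% rule of thumb for the syncer rate, assuming a gigabit link.
link_mbps=1000                  # link speed in megabits per second (assumed)
mb_per_s=$((link_mbps / 8))     # ~125 MB/s of raw throughput
rate=$((mb_per_s * 30 / 100))   # 30% of that: 37 MB/s
echo "rate ${rate}M;"
```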
Create a resource configuration file:
Create a new file, /etc/drbd.d/r1.res, on ALL DRBD nodes (named after the r1 resource it defines):
resource r1 {
    protocol C;
    startup {
        wfc-timeout 0;  # non-zero wfc-timeout can be dangerous
        degr-wfc-timeout 60;
        become-primary-on both;
    }
    net {
        cram-hmac-alg sha1;
        shared-secret "my-secret";
        allow-two-primaries;
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
    }
    on host01 {
        device /dev/drbd1;
        disk /dev/sdb1;
        address 10.0.10.10:8001;
        meta-disk internal;
    }
    on host02 {
        device /dev/drbd1;
        disk /dev/sdb1;
        address 10.0.10.11:8001;
        meta-disk internal;
    }
    disk {
        no-disk-barrier;
        no-disk-flushes;
    }
}
If you add additional resources later, the following fields will need to be updated in each new resource:

resource r1                 --> resource r2

on host01:
    device /dev/drbd1;        --> device /dev/drbd2;
    disk /dev/sdb1;           --> disk /dev/sdb2;
    address 10.0.10.10:8001;  --> address 10.0.10.10:8002;

on host02:
    device /dev/drbd1;        --> device /dev/drbd2;
    disk /dev/sdb1;           --> disk /dev/sdb2;
    address 10.0.10.11:8001;  --> address 10.0.10.11:8002;
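Those substitutions are mechanical, so they can be scripted with sed. The sketch below works on a minimal template under /tmp purely for illustration; real resource files live in /etc/drbd.d/ and contain the full definition shown earlier.

```shell
# Derive an r2 resource file from r1 by bumping the resource name,
# DRBD device, backing disk, and port. /tmp paths are illustrative only.
cat > /tmp/r1.res <<'EOF'
resource r1 {
  on host01 { device /dev/drbd1; disk /dev/sdb1; address 10.0.10.10:8001; }
  on host02 { device /dev/drbd1; disk /dev/sdb1; address 10.0.10.11:8001; }
}
EOF
sed -e 's/resource r1/resource r2/' \
    -e 's/drbd1/drbd2/' -e 's/sdb1/sdb2/' -e 's/8001/8002/' \
    /tmp/r1.res > /tmp/r2.res
grep -c 'drbd2' /tmp/r2.res
```

The grep at the end is a sanity check that both host sections picked up the new device name.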
Bring DRBD Online:
On both servers, start DRBD:
/etc/init.d/drbd start
Now create the device metadata, also on both nodes:
drbdadm create-md r1
Bring the device up, also on both nodes:
drbdadm up r1
Now you can check the current status of the new DRBD volume; it should look like this on both nodes:
host01:~# cat /proc/drbd
version: 8.3.13 (api:88/proto:86-96)
GIT-hash: 83ca112086600faacab2f157bc5a9324f7bd7f77 build by root@sighted, 2012-10-09 12:47:51
 1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:2096348
DRBD has successfully allocated resources and is ready for further configuration. Start the initial synchronization (on ONE node only!):
drbdadm -- --overwrite-data-of-peer primary r1
Wait until the initial sync is finished (depending on the size and speed, this process can take some time):
host01:~# watch cat /proc/drbd
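Watching /proc/drbd works well interactively, but a script can instead poll the ds: field until both sides report UpToDate. The helper name and the optional file argument below are illustrative assumptions (the argument exists so the function can be exercised off a DRBD node); on a real node, call it with no argument.

```shell
# Return success once a DRBD status file shows a completed sync.
# With no argument it reads the real /proc/drbd.
drbd_synced() {
    grep -q 'ds:UpToDate/UpToDate' "${1:-/proc/drbd}"
}

# Demonstration against a sample status line (format as shown in the
# /proc/drbd output elsewhere in this article):
printf ' 1: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----\n' > /tmp/drbd_status
drbd_synced /tmp/drbd_status && echo "sync complete"
```

On a live node, a loop such as `until drbd_synced; do sleep 10; done` would block until the initial sync finishes.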
Once completed, check that DRBD starts in Primary/Primary mode. To do this, stop the DRBD service on both nodes:
/etc/init.d/drbd stop
And start again on both nodes:
/etc/init.d/drbd start
Now, DRBD should be in the Primary/Primary mode:
host01:~# cat /proc/drbd
version: 8.3.13 (api:88/proto:86-96)
GIT-hash: 83ca112086600faacab2f157bc5a9324f7bd7f77 build by root@sighted, 2012-10-09 12:57:41
 1: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
    ns:1192004977 nr:0 dw:1191846322 dr:705864868 al:282022 bm:32 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
At this point, you have completed your DRBD setup, and the DRBD resource can be further configured for use as local storage. Thank you for following along, and feel free to check back with us for further updates or check out related articles like Configuring LVM on DRBD in our blog.
VPS hosting is just one of Atlantic.Net’s many hosting services; we also offer dedicated, managed, and HIPAA-compliant hosting solutions. Contact us today for more information on any of our services!