Setting up RAID 1 (mirror) in Linux Mint (or any distro)

Setting up RAID 1 (mirror) on Linux Mint XFCE (or any Debian-based Linux)

Boot into a Debian-based Linux distro from a live disc or USB drive.
This is done with two 500 GB SATA drives – I'm using mine as a small, cheap file server.

Open terminal
sudo su
dd if=/dev/zero of=/dev/sda bs=1048576
dd if=/dev/zero of=/dev/sdb bs=1048576

The dd commands will wipe everything off both hard drives (zeroing a whole 500 GB drive can take quite a while).
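If you don't want to wait for a full zero pass, a quicker alternative (my suggestion, not part of the original write-up) is to wipe only the filesystem signatures and partition tables with wipefs, which ships with util-linux on most live media:

wipefs --all /dev/sda   # removes filesystem/RAID signatures and the partition table
wipefs --all /dev/sdb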


cfdisk /dev/sda
Create a new 500 MB or 1 GB partition at the beginning (this will become the /boot partition)
Move the selection bar into the Free Space
Create a new partition with the remaining space
Change the type of this partition to FD (Linux raid autodetect).
Write the changes
Quit cfdisk

cfdisk /dev/sdb
Create a new 500 MB or 1 GB partition at the beginning (same size as on /dev/sda)

Move the selection bar into the Free Space
Create a new partition with the remaining space
Change the type of this partition to FD (Linux raid autodetect).
Write the changes
Quit cfdisk
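Before moving on, it's worth confirming that both drives now have the same layout (a quick sanity check, not in the original write-up):

fdisk -l /dev/sda /dev/sdb   # each drive should show a small first partition plus a large type-fd partition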



cfdisk -P s /dev/sda
Run this to get the last sector number of the boot partition (/dev/sda1) – you'll need it later for the dd command that clones the boot area over to /dev/sdb.
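On newer systems cfdisk may no longer accept the -P option (it was dropped when cfdisk was rewritten); if so, fdisk shows the same information:

fdisk -l /dev/sda   # the 'End' column lists the last sector of each partition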




apt-get install mdadm
mdadm --create /dev/md0 --level=raid1 --raid-devices=2 /dev/sda2 /dev/sdb2
mkfs.ext3 /dev/md0
ln /dev/md0 /dev/sde
mkfs.ext3 /dev/sda1
mkfs.ext3 /dev/sdb1


Here, mdadm builds the RAID 1 array out of the two large partitions, the ln line gives the array an ordinary drive name (/dev/sde) so the installer can see it, and the mkfs commands put ext3 filesystems on the array and on the two small boot partitions.
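You can confirm the array came up cleanly before starting the installer (not in the original write-up, just a cheap check):

mdadm --detail /dev/md0   # should report raid1 with sda2 and sdb2 as members
cat /proc/mdstat          # should show md0 as active, possibly still resyncing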
Install the Debian-based Linux
Start the installer; it has an icon on the desktop. When you get to the 'Allocate drive space' dialog, choose the 'Specify partitions manually' option. Click the /dev/sda1 partition, choose 'Change...' and fill out the following options:
Use as: Ext3 journaling file system
Format the partition: No (do not check the checkbox)
Mount point: /boot
Click the /dev/md0 partition, choose 'Change...', and fill it out like this:
Use as: Ext3 journaling file system
Format the partition: No (do not check the checkbox)
Mount point: /
There's also a dropdown for 'Device for boot loader installation'. Make sure that /dev/sda is selected there. Click the 'Install Now' button, and 'Continue' on any confirmation dialogs you might get. Fill out the rest of the install questions and sit back while the OS installs itself.
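For reference, the installed system's /etc/fstab should end up mounting the array at / and the small partition at /boot – roughly like the lines below, although the installer will normally write UUIDs instead of device names:

/dev/md0    /      ext3    errors=remount-ro    0    1
/dev/sda1   /boot  ext3    defaults             0    2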

When the installer is done, do not reboot the system yet (choose 'Continue testing'). Go back to your terminal screen and prepare the boot partition:

mkdir /raid
mount /dev/md0 /raid
mount /dev/sda1 /raid/boot
mount --bind /dev /raid/dev
mount -t devpts devpts /raid/dev/pts
mount -t proc proc /raid/proc
mount -t sysfs sysfs /raid/sys
mount -o bind /etc/resolv.conf /raid/etc/resolv.conf
chroot /raid
apt-get install mdadm

Edit /etc/mdadm/mdadm.conf with nano and remove the / from between md and the 0, so the array name reads /dev/md0 instead of /dev/md/0.
Press Ctrl+X and save the file when prompted.
Then run: mdadm --assemble /dev/md0
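One extra step I'd suggest while still inside the chroot (it's not in the original write-up, so treat it as optional): regenerate the initramfs so the installed system picks up the edited mdadm.conf at boot. The sed line is just a one-shot alternative to the nano edit above; both commands are standard on Debian-based systems:

sed -i 's|/dev/md/0|/dev/md0|' /etc/mdadm/mdadm.conf   # same change as the nano step
update-initramfs -u                                    # rebuild the initramfs with the new config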

Then type exit to leave the chroot, and run:

dd if=/dev/sda of=/dev/sdb count=(last sector # here)

That command copies the MBR and the boot partition over to the sdb drive (use the last sector number you noted earlier from cfdisk -P s as the count), so if sda goes down the system will still start up.
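If you want to double-check the clone, you can mount the copy read-only and make sure the boot files are there (unmount it again before rebooting):

mkdir /mnt/bootcopy
mount -o ro /dev/sdb1 /mnt/bootcopy
ls /mnt/bootcopy          # should show the same kernel, initrd and grub files as /raid/boot
umount /mnt/bootcopy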

Reboot and it should boot up to the OS with no problem

Checking the RAID
A useful command that will tell you the status of the RAID is cat /proc/mdstat
Its output is something like:
md0 : active raid1 sda2[0] sdb2[1]
      62468672 blocks [2/2] [UU]
The UU means both RAID 1 components are 'Up'. If one disappears, its 'U' will change to an underscore ('_').
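For more detail, or if one of the U's does turn into an underscore, mdadm can show the array state and re-add a replacement. A rough sketch of swapping in a new sdb (assuming the replacement drive has been partitioned the same way as the original):

mdadm --detail /dev/md0                                # per-device state, sync progress, etc.
mdadm /dev/md0 --fail /dev/sdb2 --remove /dev/sdb2     # only needed if the old member is still listed
mdadm /dev/md0 --add /dev/sdb2                         # add the replacement, then watch the rebuild in /proc/mdstat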


Most of the info was taken from
http://www.michielovertoom.com/linux/ubuntu-software-raid/

With updates/modifications from me (like size of boot partition)

I got this working in a VM setting after toying with it for 2 days and doing loads of research. Figured it might help someone. Even if it's just to learn how.

~Joshua​

PS: Just got this working on a Dell OptiPlex 330 with two 500 GB hard drives and everything works with Linux Mint 17 XFCE :)
 
Nice write-up!

Just for reference, and because I'm an Arch Linux fanboy :D, my first stop for such information is usually the Arch Wiki. From my experience, Arch has probably the most comprehensive and useful wiki of any distro out there. Here's the Arch Wiki RAID entry:
https://wiki.archlinux.org/index.php/RAID
 
Haha, yeah, I've never really played with Arch. My first Linux experience was Mandrake back almost 12-13 years ago; I never really got into it. I've also tried Puppy Linux, but I think Debian-based is my favorite, mainly because the commands are the same no matter which Deb-flavored Linux you have, which makes doing updates through the terminal quick and easy :p

~Joshua
 
I realize that this is an old topic to which I am replying, but I followed Joshua's instructions to install Linux Mint 17.3 Cinnamon on a pair of 2 TB drives -- except that I used Ext4 for the md0 partition. After a couple of tries where the installer aborted, everything proceeded without an error message, and I carried out all the post-installation steps before rebooting. But when I did reboot, all I got was a blinking cursor.
 

The most common issue is that your bootable partition is located on the raid. You cannot have a raided boot partition.

If you want to make things easy, then do it this way:

1. Install Linux Mint on just one of your two drives.
2. Download and install the 'Raider' software package.
3. Go through the instructions (easy peasy...) and perform the RAID 1 setup.
4. Done.

Here is the link:
http://raider.sourceforge.net/
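For reference, the basic command sequence that comes up later in this thread looks something like the sketch below – check Raider's own documentation for the authoritative procedure:

raider -R1     # first pass, run on the single-drive install
# swap/attach the drives as Raider instructs, then:
raider --run   # second pass, finishes building the RAID 1 mirror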

This is what I use for my raid systems and it works great. If you have any other questions feel free to contact me.

coffee
 
You cannot have a raided boot partition.
I don't know about the specifics for Mint, but Debian has been able to boot entirely from RAID since Lenny (Debian 5, 2009). At the time, you had to specify the RAID set to be loaded early in the boot sequence (kernel module load). I think you still have to do this if retrofitting RAID, but it Just Happens if you install with the RAID already in place.

Personally, I have the system on a small drive (currently a Samsung SSD) and /home on 2x2 TB in RAID 1. The system is backed up to the RAID set with brandysnap in a cron job. When I eventually moved from Wheezy to Jessie (when the SSD went in), the installer recognised the RAID set and I set it for /home use again. Very easy.
 

The problem is, things are always changing rapidly in Linux :) . I took a look at the partitions on my 'main surfing box', which I have set up as a RAID 1 bootable Linux Mint system, and sure enough: no separate partition for /boot. So I stand corrected (or updated).

However, I found the Raider scripts work fantastically and make things a lot easier. You can take pretty much any flavor of Linux installed on a single drive and convert it to a RAID system pretty easily.

Thanks for the correction.
 
I have just wasted an hour trying to do it in a single pass in Mint – stick with your Raider scripts! It's a complete nightmare otherwise, with no working result. I'll stick with Debian! ;)
 
I tried using raider. The first pass (raider -R1) went OK, except that it complained about not being able to create a swap partition (or similar wording). But after I swapped the drives and executed raider --run I got

":: Copying partitions from disk /dev/sda to /dev/sdb
:: Configure partitions
:: Creating raid 1 array /dev/md0 with devices /dev/sdb1
:: FATAL ERROR: /dev/md0 raid array, failed to be created!"

When I looked at the log file (which is no longer available), it said something about the drive not being suitable.
 

What's wrong with the drive? Bad sectors? Wrong size?
 
Both Seagate ST2000DM001; the original one is an older one, the added one is brand new. The FATAL ERROR message comes within just a few seconds of starting raider --run.
 
Both Seagate ST2000DM001; the original one is an older one, the added one is brand new.
Just a semi-educated WAG, but the geometry may be different between the two drives, e.g., the newer one may be Advanced Format (4096 B sectors). If this makes the actual drive capacities different (slightly smaller for the newer one), then copying the partition table will fail due to partition size beyond disk capacity.

If you don't have a pair of identical drives, at least make sure that they are as similar as possible (same sector size). There's a limit to what an automation script can do with edge cases. Check the actual size in bytes (as reported by, e.g., fdisk) and set up the initial partitions on the smaller one.
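A few commands will show whether the two drives really do differ in capacity or sector geometry (run as root; a quick diagnostic, not a fix):

fdisk -l /dev/sda /dev/sdb                      # capacities, sector counts, logical/physical sector sizes
blockdev --getsize64 /dev/sda                   # exact size in bytes
blockdev --getsize64 /dev/sdb
cat /sys/block/sda/queue/physical_block_size    # 512 on older drives, 4096 on Advanced Format
cat /sys/block/sdb/queue/physical_block_size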
 
gparted shows the two drives as having identical characteristics: sector size, number of sectors, etc.
 