Linux softRAID 5 array (mdadm) creation problem - anyone know what to do?

tankman1989

I am creating a software RAID 5 array and getting a strange response from the create command. It lists the sizes of the available partitions, and they are not correct at all. I have five 500 GB drives, as shown in the `fdisk -l` output below; they all have the same number of blocks and the same listed disk size. I just created the partitions in fdisk and all the disks were empty. Can anyone tell me what is going on here?

The create command is at the bottom of the code.

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Device Boot Start End Blocks Id System
/dev/sda1 1 60801 488384001 fd Linux raid autodetect

Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x07df3ce0

Device Boot Start End Blocks Id System
/dev/sdb1 1 60801 488384001 fd Linux raid autodetect

Disk /dev/sdc: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0008c8f9

Device Boot Start End Blocks Id System
/dev/sdc1 1 60801 488384001 fd Linux raid autodetect

Disk /dev/sdd: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x3bf537ab

Device Boot Start End Blocks Id System
/dev/sdd1 1 60801 488384001 fd Linux raid autodetect

Disk /dev/sde: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x44fdfe06

Device Boot Start End Blocks Id System
/dev/sde1 1 60801 488384001 fd Linux raid autodetect

Disk /dev/sdf: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000a4906

root@fserve:/home/mike# mdadm --create --verbose /dev/md0 --level=5 --raid-devices=5 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: /dev/sda1 appears to contain an ext2fs file system
size=488384000K mtime=Fri Mar 23 00:52:34 2012
mdadm: layout defaults to left-symmetric
mdadm: /dev/sdb1 appears to contain an ext2fs file system
size=8032468K mtime=Fri Mar 23 00:55:34 2012
mdadm: layout defaults to left-symmetric
mdadm: /dev/sdc1 appears to contain an ext2fs file system
size=24097468K mtime=Fri Mar 23 00:55:50 2012
mdadm: layout defaults to left-symmetric
mdadm: /dev/sdd1 appears to contain an ext2fs file system
size=488384000K mtime=Fri Mar 23 00:56:01 2012
mdadm: layout defaults to left-symmetric
mdadm: size set to 488382464K
Continue creating array?
 
You need to clear the existing partition info off first. It should happen when you do the original fdisk and set the type to Linux raid autodetect, but it doesn't.

So you need to:

dd if=/dev/zero of=/dev/sda bs=512 count=2048
dd if=/dev/zero of=/dev/sdb bs=512 count=2048
dd if=/dev/zero of=/dev/sdc bs=512 count=2048
dd if=/dev/zero of=/dev/sdd bs=512 count=2048
dd if=/dev/zero of=/dev/sde bs=512 count=2048
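If you'd rather not type the dd line five times, a loop form of the same wipe works too (hypothetical drive list; the leading echo is a dry-run guard, since this is destructive):

```shell
# Dry-run loop over the member drives: prints the dd commands instead of
# running them. Remove the leading "echo" to actually zero the first 1 MiB.
for d in sda sdb sdc sdd sde; do
  echo dd if=/dev/zero of=/dev/$d bs=512 count=2048
done
```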

Then fdisk each drive back to how you have it now. Make sure you clean out your /etc/mdadm.conf before recreating your raid, then redo your mdadm command.

botta bing!

P.S. Make sure you're aligning those partitions. When you enter fdisk, hit u (I think it's u) to switch the units to sectors instead of cylinders. Make sure you're starting at an offset of 2048 sectors. I got roughly 15% more IOPS by aligning my partitions correctly. If this is an up-to-date system, the newer fdisk aligns by default.
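The alignment math itself is simple; here is a sketch of the check, assuming 512-byte logical sectors and a 4096-byte physical sector (the common "advanced format" case):

```shell
# A partition is aligned if its byte offset is a multiple of the physical
# sector size. Start sector 2048 * 512 B = 1 MiB, which divides evenly.
start_sector=2048
logical=512
physical=4096
offset=$(( start_sector * logical ))
if [ $(( offset % physical )) -eq 0 ]; then
  echo "start sector $start_sector is aligned"
else
  echo "start sector $start_sector is NOT aligned"
fi
```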

P.P.S. No idea why it says ext partition. I had new drives and put them in my raid; when I went to rebuild the raid I ran into the same problem. Mine said it had an NTFS partition on it... haha
 
I have set up quite a few raids in Linux and currently run a 1 TB raid on my home/business server. Setting up the raid is not very hard, but it can be confusing when you are first learning. What I can do is list the steps involved so you can go through and make sure everything is good to go.
---------------------------------------------
First things first: make sure the drives have NO partitions on them. You can do this by going into fdisk for each drive and deleting all partitions (the d command in fdisk), then hitting w when you are done to write the changes to disk. What catches some people is that you may need to re-read the partition tables afterwards, otherwise the drives will show up as if you never edited them. To have Linux re-read the tables, issue the following command:

partprobe

or for a particular drive:
partprobe /dev/sdX

Now go into fdisk and make your partitions. On mine I just made MBR partitions, but you'll want a GPT partition table if your raid device is going to be larger than about 2 TB, since MBR can't address beyond that; otherwise stuff like Windows may not recognize the whole array. The choice is up to you. Linux will work with almost any partition type as long as they are all the same (IMHO). Remember to (w)rite your changes in fdisk.

After all the drives are partitioned we will make the raid. In your case you are building a raid5 with 5 disks.

mdadm --create /dev/md0 --level=raid5 --raid-devices=5 --force /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
(gee, what is your boot drive??)
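As a sanity check before you confirm, RAID 5 usable capacity is (n - 1) × member size; with the 488382464K member size mdadm reported in the output above:

```shell
# Usable RAID 5 capacity: one member's worth of space goes to parity.
n=5
member_kib=488382464                  # per-member size from mdadm's output
usable_kib=$(( (n - 1) * member_kib ))
echo "$usable_kib KiB usable"         # about 1.82 TiB
```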

After that is all done, we can format the raid:

mkfs.ext3 /dev/md0

To make it available to mount, add the following to /etc/fstab:
/dev/md0 /mymountpoint ext3 defaults 0 0
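A quick way to eyeball that the fstab line has the six fields mount expects (device, mountpoint, fstype, options, dump, pass); just a sketch using shell word splitting:

```shell
# Split the fstab line into words and name each of the expected fields.
line="/dev/md0 /mymountpoint ext3 defaults 0 0"
set -- $line
echo "$# fields: device=$1 mount=$2 type=$3 opts=$4 dump=$5 pass=$6"
```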

Now mount that sucker!

mount /dev/md0

Let's make sure the raid starts on bootup, otherwise you will have to assemble it manually every time you boot. (Note the config file lives at /etc/mdadm.conf on some distros and /etc/mdadm/mdadm.conf on Debian/Ubuntu.)

mdadm --detail --scan --verbose > /etc/mdadm.conf
---------------------------------------------------------

Below are some helpful commands:

---------------------------------------------------------

To remove a failed drive from the array (mark it failed first with mdadm /dev/md0 -f /dev/sdXX if mdadm hasn't already):

mdadm /dev/md0 -r /dev/sda1

To add a drive to the array:

mdadm /dev/md0 -a /dev/sda1

To remove a spare drive that has failed from the array:

mdadm /dev/md0 --remove /dev/sdXX

To see which drive has failed:

mdadm --detail /dev/md0
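To pull just the failed members out of that fairly long output, something like this works; the here-doc below is hypothetical sample output so the parsing is visible, and in real use you'd pipe `mdadm --detail /dev/md0` in instead:

```shell
# Filter `mdadm --detail`-style output for members in the faulty state.
# sample_detail is a stand-in for the real command's device table.
sample_detail() {
cat <<'EOF'
    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      faulty   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
EOF
}
sample_detail | awk '/faulty/ {print $NF}'   # prints /dev/sdb1
```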

To manually assemble your raid if you rebooted without an mdadm.conf file, or something weird happened:

mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

Manually stop an array (note the capital -S for --stop; lowercase -s means --scan):

mdadm -S /dev/md0
-----------------------------------------------------

Hope this all helped.
 

This looks like it should be in the Wiki or put in the Guides, Tips, and Tricks section as a How To.;)
 
This looks like it should be in the Wiki or put in the Guides, Tips, and Tricks section as a How To.

I would, but I'd clean it up a bit more and be more in-depth if I did. Don't have the time right now; perhaps tonight or tomorrow I could.

thanx!

:)
 
Another command:

mdadm --zero-superblock <dev>

Removes the RAID superblock from the device. Just deleting a RAID array will not remove it, which is why I asked whether the drives had been used before.
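If all five members need clearing, a dry-run loop keeps the typing down (hypothetical device list; remove the echo to actually run it, which destroys the superblocks):

```shell
# Print (dry run) the zero-superblock command for each member partition.
# Drop the leading "echo" to actually clear them.
for p in sda1 sdb1 sdc1 sdd1 sde1; do
  echo mdadm --zero-superblock /dev/$p
done
```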
 