Best RAID / Number of disks - Hyper-V 2012 with 2 VMs

cyabro

Well-Known Member
Reaction score
420
Location
New Zealand
Hey,

A client of ours wants to upgrade their two ageing HP ML330 and ML350 servers.
The ML350 is just over 6 years old and the ML330 3 years.

I have suggested that we replace with one decent server and run the SBS2008 and the 2nd Server 2008 as VMs on the new box.

The new box will have a 6 core Xeon with 32GB RAM and 8x300GB SAS 2.5" drives.
It will also have a true hardware RAID with 512MB cache.

The current ML330 SBS2008 box has a quad core Xeon with 6GB RAM and the ML350 Server 2008 box is also a quad core but with only 4GB RAM.
The SBS2008 box currently only has a RAID1 setup and the ML350 has a RAID1 setup for the system drive and RAID5 for the data.


What do people suggest as being the best way to configure the disks for RAID?
E.g. should I just create one RAID5 setup using all the drives, or should I create two RAID5s with 4 disks in each, or two RAID10s also with 4 disks in each?
Or possibly RAID1 for the Hyper-v 2012 plus the two system drives then RAID5 the remaining 6 disks for data for both operating systems?

The 2nd Server 2008 box is running SQL for their accounting practice software, but I don't think it is very database intensive as most of the time the users are just inputting data.
I am also going to take the two old servers and try to build one machine from whatever I can salvage from both, and use it as a Hyper-V replica box.
 
Boy this could be fun.
I'd certainly opt for RAID1 for the Hypervisor.

The rest depends on what all I'd be doing with the VMs.

If I were to be setting up some database servers, I'd split the other 6 disks into 2 RAID5 (assuming that would give me enough space) so that the DB VMs could split the OS/DB.

If database performance isn't that big of a deal, RAID10 or RAID6 for the whole x8 drives as just one big datastore.

But it does all depend on the intended use (look at the future) and size of the disks.
 
Pair of small drives in RAID 1 for the hypervisor install... and you can use the remaining space for storage too, and even put some disk files on there for each guest's pagefile (so you have virtual memory on a different spindle from the guest OS).

RAID 10 with 6 disks will be freaking fast... with most servers on bare metal I prefer to have 2x separate spindles... RAID 1 for the OS, RAID (something) for the data... so there are separate spindles. With a bunch of drives doing RAID 10, it works quite well to put all of the guests on one large RAID 10 volume (for a setup this small).

So I'd vote RAID 1 for the hypervisor OS, and use up the remaining space for some virtual disk space for the guests (just add another drive letter for them and slap a pagefile on there)... and/or storage (like ISOs and snapshot copies, if you had a big pair for the RAID 1), and do a single large RAID 10 with the remaining 6 drives.

Or you could get creative... do a second RAID 1 with a pair of drives and put each server's boot partition on those, then do a 4-drive RAID 10 with the remaining 4 drives and put each server's data volumes on those. So your host would have RAID 1 (first 2 drives) for the Hyper-V host, another RAID 1 (second 2 drives) for both guest servers' C: partitions, and a third RAID volume (the remaining 4 drives), a RAID 10, for both servers' data. You could span each server's pagefile across all 3 volumes too.
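For a quick sanity check on those two layouts, here's a back-of-envelope sketch in plain Python. The drive counts come from the posts above; the 300 GB drive size is from the original spec, and the helper names are just illustrative.

```python
# Rough usable-capacity comparison of the two layouts discussed above,
# assuming eight 300 GB drives (drive counts per volume are from the post).
DRIVE_GB = 300

def raid1(n):
    # Mirrored span: only half the raw capacity is usable.
    return n // 2 * DRIVE_GB

def raid10(n):
    # Striped mirrors: likewise half the raw capacity is usable.
    return n // 2 * DRIVE_GB

layout_simple = {
    "host OS (RAID1, 2 drives)": raid1(2),
    "all guests (RAID10, 6 drives)": raid10(6),
}
layout_split = {
    "host OS (RAID1, 2 drives)": raid1(2),
    "guest C: volumes (RAID1, 2 drives)": raid1(2),
    "guest data (RAID10, 4 drives)": raid10(4),
}

for name, layout in (("simple", layout_simple), ("split", layout_split)):
    print(name, layout, "total:", sum(layout.values()), "GB")
```

Either way the total usable space works out the same (1200 GB); the difference between the layouts is purely in how the spindles are isolated from each other.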
 
Honestly I would just RAID5 or RAID6 the whole thing and not complicate it further. Do you really need the added level of redundancy and security from putting the OS on a RAID1? As for the talk about never putting databases on a RAID5: I can't confirm which of our systems are set up with RAID5 and which with RAID6, but we have multiple database servers running on VMs and I couldn't point to a drive-performance-related issue on a single one.
 

But have you seen the difference in performance comparing RAID 5/6 to RAID 10?
Read 'n write... there's a performance hit from the parity calculation in 5/6, versus pure striping and mirroring in RAID 10.
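To put a rough number on that parity hit: a common rule of thumb is that each front-end write costs 2 backend I/Os on RAID 1/10, 4 on RAID 5, and 6 on RAID 6. A quick sketch of what that implies (the ~175 IOPS per 15k SAS drive figure is a ballpark assumption, not a measurement):

```python
# Back-of-envelope write IOPS model using the commonly cited write
# penalties: backend I/Os consumed per front-end write.
WRITE_PENALTY = {"RAID0": 1, "RAID1": 2, "RAID10": 2, "RAID5": 4, "RAID6": 6}

def write_iops(level, disks, iops_per_disk=175):
    """Front-end IOPS a pure random-write workload can sustain."""
    raw = disks * iops_per_disk        # total backend IOPS the spindles offer
    return raw / WRITE_PENALTY[level]  # divided by the per-write backend cost

for level in ("RAID10", "RAID5", "RAID6"):
    print(f"{level}: ~{write_iops(level, 6):.0f} write IOPS on 6 disks")
```

On those assumptions the same six spindles deliver roughly twice the random-write throughput on RAID 10 as on RAID 5, and three times that of RAID 6. Reads and cached/sequential workloads narrow the gap, which is part of why light-duty databases can still feel fine on parity RAID.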

While you may not have noticed it... unless you've been able to see and experience the difference several times, sure, you can say you haven't noticed a "problem".

Kinda like if all you've ever driven were 4-cylinder econo-cars... it does the job without complaint. But once you get the chance to drive a 400+ horsepower car... and experience it... now you notice the difference.
 
A vast majority of the servers I manage are EMR servers and some of them are pretty heavy on the DB.
RAID1 for OS and RAID10 for DB Datastore gives killer performance over any other setup I have inherited or built.

In a virtual environment, I try to ape the same setup by placing the OS vdisk on a RAID1 (or 5 or 6) datastore and the DB vdisk on a separate set of disks - be it RAID10, 5, 6, or 1
 
Kinda like if all you've ever driven were 4-cylinder econo-cars... it does the job without complaint. But once you get the chance to drive a 400+ horsepower car... and experience it... now you notice the difference.

So true. So very true. The first time my wife drove my truck, I somehow got stuck with her little Toyota Corolla.

DBs on a RAID 5 are a no-no

I agree. Performance will lag depending on the size of the DB and the amount of traffic. If we are talking a small setting, under 5 people, a RAID 5 with 4+ disks might work, but when you can do a RAID 6 or 10 instead, why bother with a RAID 5?

As a rule of thumb, if I can, I will always RAID 1 an OS. DB's, if I can, I will RAID 10. I will use RAID 5/6 for other data, assuming the customer cares about their data.

As a minimum I always push for at least a RAID 1. And when asked about RAID 0, I always tell them that if you lose 1 drive, you lose everything. I never market or describe a RAID 10 to a customer as a combination of RAID 1 and RAID 0; I tell them it's completely different.
 
All we run are RAID5 and RAID6; we don't have the luxury of wasting the drive space to do RAID10. With the drives we run, and even with a total user count in our user directory of 500+, there is no I/O hit on RAID5 vs RAID6 on any of our database servers. I believe only one database server has enough concurrent users to make I/O take a hit, and that one, I believe, is RAID6. The other database servers have fewer concurrent users and/or actions, which is why there is no problem running RAID5 on them. With the scope I am dealing with, I can say confidently that RAID5 being terrible for database servers is mostly a myth. I am not claiming it isn't inferior, but it takes a lot more to reach the threshold where that inferiority actually shows up than is often insinuated. I have run my own small-scale tests of RAID0 vs RAID1 vs RAID5 vs RAID10 (or 01, I forget which is which), and I can see the difference between them.
 
I am saying I know there are differences, but there is a lot to factor in regarding when and why you will actually need to go with a higher-performance configuration: size of the database, frequency of reads and writes, drive memory caches, RAID controller memory cache, number of concurrent reads and writes. Then there's the cost balance of higher-performance parts vs higher-performance configurations; with RAID10 you lose half the drives, so the more drives you have, the better off you may be investing in better drives and a better controller card and using a lower-performance RAID configuration. I am not saying anyone is wrong that RAID10 or RAID6 performs better than 5; I know this, and any tech should know this. I am saying you may be wrong that RAID5 is unsuitable for the task at hand. I can't even say I am right, as I don't know the specific application of the system or the expected load on it. I merely provide anecdotal evidence to support the claim that a RAID5 can be an acceptable setup for a database system.
 
RAID 5 is acceptable in a DB system, to a point. But a RAID 10 works out a lot better in the long run for DB purposes.

RAID 10, with 6x 1TB HDDs = 3TB of storage, giving up to a 6x read and 3x write speed gain; at least 1 drive failure tolerated (up to 3, if each failure lands in a different mirror pair).

RAID 5, with 6x 1TB HDDs = 5TB of storage, with up to a 5x read speed gain but no real write speed gain because of the parity overhead; exactly 1 drive failure tolerated.

Depending on what the DB is being used for, and how it is being used, I still put my money on a RAID 10. Yeah, the RAID 5 holds more, but I have more fault tolerance on a RAID 10 and better read/write speeds, which means that as the company grows, I can maintain RAID integrity as the array grows.
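The capacity/fault-tolerance trade-off described above can be sketched with the standard RAID capacity formulas (the 1 TB drive size matches the example; nothing here is controller-specific):

```python
# Usable capacity and worst-case failure tolerance for n equal drives,
# using the standard formulas for each RAID level.
def usable_tb(level, n, size_tb=1.0):
    if level == "RAID10":
        return (n // 2) * size_tb   # half the drives hold mirror copies
    if level == "RAID5":
        return (n - 1) * size_tb    # one drive's worth of capacity goes to parity
    if level == "RAID6":
        return (n - 2) * size_tb    # two drives' worth goes to dual parity
    raise ValueError(level)

FAILURES = {"RAID10": "1 guaranteed (more if they hit different mirror pairs)",
            "RAID5": "exactly 1",
            "RAID6": "any 2"}

for level in ("RAID10", "RAID5", "RAID6"):
    print(f"{level}: {usable_tb(level, 6):.0f} TB usable, survives {FAILURES[level]}")
```

So on six 1 TB drives: RAID 10 gives 3 TB, RAID 6 gives 4 TB, RAID 5 gives 5 TB, and the write performance ranks in roughly the opposite order.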
 
Thanks guys, given me something to think about.

The current DB server is running RAID5 over 6 spindles, and the support team for the accounting practice software often have to log in remotely to do upgrades etc. They have said it is one of the fastest setups they've worked on, and that's on a 6-year-old machine.

Whichever way I decide to go, raid10 or raid5, the total storage space will be more than they have currently.

Edit: can you do RAID10 on 6 disks? I thought it only worked with sets of 4?
 
RAID 10 is a minimum of 4 disks. Whatever you have on one span, you have on the other, and so forth.

RAID 10 works like this. I hope this explains it. For the RAID 1 spans, I normally just leave them at 2 to 3 disks each, but you can add more to each span; you end up losing capacity this way, though, so just remember to keep them all the same size/type.

You can also use this for better explanations on some things, as well as formulas for performance and space efficiency, fault tolerance.

Personally, the largest RAIDs I've ever used were an 8-disk RAID 5 and a 12-disk RAID 01, which is similar to a RAID 10 and works out much the same in the end.
 
RAID10 and RAID01 are very similar, but one has slightly better fault tolerance; both need a minimum of 4 drives and require an even number of drives.

RAID10 is a stripe of mirrors. To translate a bit, it works as follows:
Each drive has a mirror drive (RAID1)
Each pair of mirrored drives is striped together (RAID0)

RAID01 is a mirror of stripes. To translate a bit, it works as follows:
Half the drives are striped together (RAID0)
That stripe is mirrored onto a second stripe of the remaining drives (RAID1)

If you want to know about RAID5 and RAID6, well, it's a bit more complicated, but there is lots of information on the web; just do a little research of your own.
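The grouping difference between the two nested levels can be shown with a toy sketch (the disk names are just placeholders):

```python
# Toy illustration of how six disks are grouped under RAID10 vs RAID01.
disks = ["d1", "d2", "d3", "d4", "d5", "d6"]

# RAID10 (stripe of mirrors): pair the disks into mirrors first,
# then stripe data across the mirror pairs.
raid10 = [tuple(disks[i:i + 2]) for i in range(0, len(disks), 2)]
print("RAID10 mirror pairs:", raid10)

# RAID01 (mirror of stripes): stripe across half the disks first,
# then mirror that whole stripe onto the other half.
half = len(disks) // 2
raid01 = (tuple(disks[:half]), tuple(disks[half:]))
print("RAID01 mirrored stripes:", raid01)

# Why RAID10 fares slightly better: after one disk dies, RAID10 only
# loses redundancy within that one mirror pair, whereas RAID01 degrades
# an entire stripe, so any further failure on the surviving stripe is fatal.
```

Same six disks, same usable capacity, but a different failure surface once the array is already degraded.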
 
Just an update on this.
In the end, after doing lots of research, I went with just one RAID10 array using all 8 drives rather than complicating things and grouping drives to create two or more RAID arrays.
Then I just partitioned from there as required.

Performance has been great and no complaints from the client.
 
Doing a similar project as I type this...see pics in the general discussion forum.

Server 2012 Standard
Hyper-V role
Proliant ML350p Gen8
6x core Xeon
32 gigs RAM
8x HDDs...each 300 gig 2.5" hot swap SAS, 15,000 rpm
Pair up front RAID 1 for the Hyper-V host.
remaining 6 are in a RAID 10 volume.
RAID controller with the 1 gig cache option, battery backed.
Have some ISOs stored on the C: drive... and I'll be plopping the C: drive (system volume) VHDs on there. The big data volume VHDs will be on the RAID 10 volume.

Accounting software...I want it on separate volumes for best write performance.
 
Yeah, saw your pr0n post. :)

My client is an accountant as well and they mainly use MYOB AE which is an SQL database.
 