[SOLVED] P2V and RAID reconfig

seedubya

I have acquired a client with an HP ProLiant Gen8e. Nice server and running very well. The OS is Server 2012 R2 Standard. The drive controller is a Smart Array P420 with 6 x 1TB SAS drives connected. The drives are configured in two arrays: a 1TB RAID 1 (2 x 1TB) and a 2.73TB RAID 5 (4 x 1TB). However, only 2TB of the larger array is recognised, since the machine was installed with the BIOS in legacy mode when it should have been in UEFI.
The larger array contains a 931GB boot partition (C: 131GB used), a 1,116GB data partition (D: 579GB used) and then 746.44GB of unallocated space which it can't address.
The smaller array is a single data partition (G: 779GB used). Total data on the server is approx. 1.5TB.
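For reference, a quick way to confirm the 2TB ceiling is the MBR partition style rather than the controller — just a sketch using the built-in Storage cmdlets; the disk number below is an assumption for this box:

Code:
# Show the partition style of each disk - an MBR disk can only address the first 2TB
Get-Disk | Select-Object Number, FriendlyName, PartitionStyle, @{n='SizeGB';e={[math]::Round($_.Size/1GB)}}

# List the partitions on the big RAID 5 logical drive (disk 0 here is an example)
Get-Partition -DiskNumber 0 | Select-Object PartitionNumber, DriveLetter, @{n='SizeGB';e={[math]::Round($_.Size/1GB)}}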

I have been given two days of downtime over a weekend to virtualise all of this and to reconfigure it in the most effective way possible. Ideally I don't want to lose any storage capacity while doing so. I also have a spare server (a T110 with 4TB of storage) that I can use for testing in advance of the project. Backups are currently running on Backup Assist, but I'm just about to move to NetJapan Active Image Protector.

I envisage imaging everything, then doing a P2V to my spare server and going from there. Once I'm sure the restore process is working correctly, I think I should set the BIOS to UEFI, reconfigure the RAID as a 5 x 1TB RAID 5 with a hot spare (or even 4 x 1TB with two spares), reinstall Server 2012 R2 and then mount my VHDX. If I can get to that point I reckon I'm home free, pretty much.
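The re-attach stage on the spare host would look roughly like this with the Hyper-V cmdlets. Only a sketch: the VM name, paths and memory figure are placeholders, and the VHDX itself would come from whatever imaging/P2V tool gets used (Disk2vhd, or the NetJapan restore).

Code:
# On the spare Hyper-V host: create a VM and attach the captured VHDX files.
# Names, paths and memory below are examples only; Generation 1 because the
# source OS was installed in legacy BIOS mode.
New-VM -Name "FS01" -Generation 1 -MemoryStartupBytes 8GB -VHDPath "D:\VMs\FS01\fs01-c.vhdx"
Add-VMHardDiskDrive -VMName "FS01" -Path "D:\VMs\FS01\fs01-d.vhdx"
Start-VM -Name "FS01"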

I'd be very grateful for any help anyone can provide with this to ensure that I don't miss anything. I really only have a week to prep for this.

TIA

Colm
 
I'd reconfigure the second volume...to RAID 10.
I like having servers...especially including hyper-V hosts, to have a RAID 1 up front for the OS volume(s)...and a fast RAID 10 in the rear for the data volumes.
This way, you technically have 2x separate spindles...which allows for MUCH better concurrent hit performance. What the OS and primary pagefile are doing has ZERO impact on the performance of the data volume.

When people do a single RAID 5 or a single RAID 10...and then they partition their server's C and D volumes on that single spindle...concurrent hit performance drops. You really have just 1x large single hard drive. You cannot span the pagefile...and the data volume shares the same spindle as the primary pagefile and the OS/boot partition. Quite bluntly...that sucks. I see so many SBS installs like this, and people complaining about performance. Well...yeah...cuz...it's set up in a poor manner. Don't blame SBS (or insert some other "heavy" server).

So what I do for hypervisors: I make a larger RAID 1 up front. I install the server OS on that with the Hyper-V role, leaving enough room on there for the C-VHD files of the guest(s).
And then on the large RAID 10 (or a couple of large RAID 10 volumes if a larger server)...I put the D-VHD files.
In the guest servers...I span the virtual memory (pagefile.sys) across both C and D (and more) volumes.
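If you'd rather script that inside the guests than click through System Properties, a rough sketch (the sizes in MB are purely illustrative, and it assumes the guest already has the extra volume online; takes effect after a reboot):

Code:
# Inside a guest: turn off automatic pagefile management, then define a pagefile on each volume
Set-CimInstance -Query "SELECT * FROM Win32_ComputerSystem" -Property @{AutomaticManagedPagefile=$false}
New-CimInstance -ClassName Win32_PageFileSetting -Property @{Name='C:\pagefile.sys'; InitialSize=[uint32]2048; MaximumSize=[uint32]4096}
New-CimInstance -ClassName Win32_PageFileSetting -Property @{Name='D:\pagefile.sys'; InitialSize=[uint32]4096; MaximumSize=[uint32]8192}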

RAID 5 was more common years ago, when hard drive space was very pricey for enterprise drives. And it's tolerable as long as you have GOOD RAID controllers...like higher-end HP and Dell controllers with lots of battery-backed cache on them. Read times are good (striped)...write times are slow though. Rebuild times are sorta slow. Granted...performance of all of those improves as you add more drives above the default minimum of 3. RAID 6 lessens the chance of failure...as RAID 5 on cheaper controllers did have a higher risk of data loss when things went wrong. But RAID 6 has poorer performance. RAID 10 is better for less risk, has good read performance, and has good write performance. You lose some total volume compared to the same size/number of drives in RAID 5...but 10 has become more popular since prices of enterprise storage have come way down compared to what they were generations ago.
 
Thanks B. Let me see if I understand what you're saying
- reconfigure the drives into a RAID 1 with 2 of the drives and a RAID 10 with 4 of the drives.
- Install Server 2012 R2 on the RAID 1 and configure it as a Hyper-V host

How then should I configure the available storage for use by my VMs? As I figure it, on my first "spindle" I'll have approx. 930GB, of which, say, 800GB will be free. On my second "spindle" I'll have 1.8TB onto which I'll need to place approx. 1.3TB of data. I don't have much room to play with afterwards.

To add a layer of complexity - the drives are currently de-duplicated natively in 2012.
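Worth checking how much dedup is actually saving before the move, since that affects how much real data has to land on the new volumes — a quick sketch using the built-in Deduplication cmdlets on the current server:

Code:
# Show which volumes are deduplicated and how much space dedup is saving
Get-DedupVolume | Select-Object Volume, SavingsRate, SavedSpace
Get-DedupStatus | Select-Object Volume, OptimizedFilesCount, InPolicyFilesCount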
 
The problem with your 2 terabytes of storage on the data drive may be related to the fact that it is partitioned with MBR instead of GPT. This happens when you create your data partition during the Windows Server install process; it defaults to MBR. If you wanted to keep the server OS intact, you could back up the data drive, repartition the drive as GPT and then restore the data.

Edited to add: I just re-read your post and see your RAID 1 drive is actually a data drive and not the operating system drive. In that case you cannot simply delete the large data partition and repartition as GPT, because that RAID volume also has the OS partition on it. Wipe the whole lot and start again, I say.
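For what it's worth, if the big volume had been a pure data disk, the back-up / repartition-as-GPT / restore route would look roughly like this — only a sketch, the disk number is an assumption and everything on it gets wiped:

Code:
# DESTRUCTIVE - only after the data volume has been backed up.
# Disk 1 is an example; confirm the number with Get-Disk first.
Clear-Disk -Number 1 -RemoveData -Confirm:$false
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter D | Format-Volume -FileSystem NTFS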
 
And I would still go with one big RAID 10 using all drives.


Upon that RAID 10 I would create a 200GB OS logical drive in the HP Smart Array configuration utility and a very big logical data drive. As has already been said above, do not partition off a 200GB C: drive in Windows but instead create a 200GB logical drive in the RAID BIOS. If you're going to be using Hyper-V on the server, don't forget to format the data drive with 64K clusters rather than the default 4K.
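Roughly what that looks like from the HP CLI plus the 64K format — purely a sketch: the controller slot, drive bay IDs and array letter below are placeholders for this particular box, and the same logical drives can just as easily be created in the Smart Storage Administrator GUI:

Code:
# Sketch only - check "hpssacli ctrl all show config" for the real slot and bay IDs. Sizes are in MB.
& hpssacli ctrl slot=0 create type=ld "drives=1I:1:1,1I:1:2,1I:1:3,1I:1:4,1I:1:5,1I:1:6" raid=1+0 size=204800
& hpssacli ctrl slot=0 array A create type=ld raid=1+0    # second LD from the remaining space; array letter may differ

# After installing Windows on the 200GB LD, format the data LD with 64K allocation units for Hyper-V
Format-Volume -DriveLetter D -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "Data"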

I did a lot of research on IOPS and benchmarks when deciding to stick with one big RAID 10 and could not find any conclusive benefit to dedicating two drives in a RAID 1 for the OS. By including those two drives in the big RAID 10 you get flexibility when it comes to using that otherwise empty space, and the two extra drives contribute to overall IOPS performance.

https://community.spiceworks.com/topic/262196-one-big-raid-10-the-new-standard-in-server-storage
 
Finally, when you have virtualised the current server onto your spare hardware and rebuilt the new server using the RAID of choice, you can use Hyper-V replication to synchronise your virtual machine from the spare hardware back to your new server. When the time comes to move the virtual machines you can do a planned failover and move them over in a couple of minutes. You can also choose to keep the replication running in the other direction after the move, so that you have a complete replica of the VM, current to within the last few minutes, that you can start up if your new server blows up.
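The replication and planned-failover flow looks roughly like this from PowerShell — host and VM names are placeholders, and it assumes the replica host has already been enabled as a replication server (firewall rules, authentication etc.):

Code:
# On the spare host: replicate the VM to the rebuilt production host
Enable-VMReplication -VMName "FS01" -ReplicaServerName "NEWHOST" -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "FS01"

# Cutover weekend: planned failover
Stop-VM -Name "FS01"                       # on the spare host
Start-VMFailover -VMName "FS01" -Prepare   # on the spare host (primary)
Start-VMFailover -VMName "FS01"            # on NEWHOST (replica)
Complete-VMFailover -VMName "FS01"         # on NEWHOST
Start-VM -Name "FS01"                      # on NEWHOST

# Optionally reverse replication so the spare keeps an up-to-date copy
Set-VMReplication -VMName "FS01" -Reverse  # run on NEWHOST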
 
What size are the drives? Are they 1TB each?
 
Sorry for the multiple separate posts, but they do contain useful individual points... it is worth investing in a 1GB flash-backed write cache (FBWC) module for your RAID controller if you haven't got one installed. Look up your server's part number on Google and check whether it includes FBWC, or use the HP Array Configuration Utility to see whether your RAID controller has a cache. This will massively increase the performance of disk access.
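You can check from within Windows as well — a sketch assuming the HP CLI is installed and the controller sits in slot 0:

Code:
# Check the controller and its cache module
& hpssacli ctrl all show status
& hpssacli ctrl slot=0 show detail    # look for "Cache Board Present" / "Total Cache Size"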
 
Yeah, that's right. I didn't explain it fully in the description. The install defaulted to non-UEFI and MBR, and it can't be converted without reformatting and changing the BIOS settings.

Interesting article on OBR10, thanks a lot for the material.

The RAID controller has flash cache, I can't remember quite how much at the moment.
 
The OBR10 article compares it to old RAID 5 for the data volume. Naturally performance would be higher.
I still prefer RAID 1 / RAID 10...granted, on smaller servers with lighter file sharing, etc., you don't push it hard so you don't notice a difference. But when you have multiple guest servers, some of them very "heavy" like database engines...that is when the performance difference stands up and reveals itself without question. Granted...my standards are a bit higher; I'm pickier about performance for some larger clients. But if you're talking about a server for a network of, say, 25 or fewer with basic services...yeah...you won't see much difference.
 
The network size varies from about 30 to 45 users - almost all the traffic is file sharing. A couple of small DBs for LOB apps and that's it. Their main LOB application is web based.
 
Very well written post above by YeOldeStonecat.

Some random thoughts of mine regarding RAID.

Safety:
Don't buy drives from the same vendor in bulk; hard drives tend to go bad in batches, so if your array is a RAID 10 it's quite likely that two drives will go bad around the same time.
RAID 10 - it's not as safe as it sounds, because if you lose both drives of the same mirror pair you are toast. Going with RAID 6 costs the same space/drive ratio in a 4-drive array (due to the redundancy factor) but allows the system to keep running through any two drive failures.
My main concern with RAID 6 is the rebuild time.

Speed
Agree with YeOldeStonecat: if speed is the prime requirement, go with RAID 10 (plus a hot spare).

Cost
1) Even though drive costs have gone down, chassis costs are still high and drive slots are at a premium. Unless you go with a Supermicro chassis you are going to max out at a 16-bay chassis when purchasing from Dell or HP, etc.
2) As the storage area grows, justifying 50% overhead for redundancy may be a hard sell to the C-level executives.
 