Noodling a server design

HCHTech

We've got a chance to bid on a pretty beefy server, so I've been thinking about how I would configure the storage.

The current setup is 4 individual servers (each running Server 2008 R2): a DC with Exchange, a file share, and 2 dedicated SQL servers (SQL Std 2008) for LOB apps - one is a kind of practice management software, and the other controls some equipment on the warehouse floor.

I'm thinking we could virtualize everything in a single box and move the 35 employees to hosted Exchange in the process.

Currently, the total of the used space on the OS drives for all 4 servers is 1.3TB.

The total of the used space on the data drives on all 4 servers is 850GB.

I would want to include a virtual workstation for our use on this system as well, so add 120GB to the OS drive total to get an amended total of 1.42TB.

We've had pretty good luck with the last two servers, where we were able to use all SSDs in their construction, so I'd like to start there until someone or something talks me out of it.

I'm thinking 2x240GB drives in a RAID1 (usable capacity 240GB, 1 drive fault tolerance) for the host machine, then 6x1TB drives in a RAID6 for the VMs (usable capacity 4TB, 2 drive fault tolerance), and finally 4x1TB drives in a RAID10 for the data drives (usable capacity 2TB, fault tolerance 1 drive).

I would include 1x240GB drive, and 2x1TB drives as cold spares, so the quote would include 3x240GB drives and 12x1TB drives.
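
Just to sanity-check my own math on the usable capacities, drive counts, and how they compare to current usage, here's a quick throwaway Python sketch (the numbers are the ones above):

# Usable capacity of the proposed arrays (sizes in GB).
def raid1(drives, size):    # mirror: one drive's worth
    return size

def raid6(drives, size):    # double parity: lose two drives' worth
    return (drives - 2) * size

def raid10(drives, size):   # striped mirrors: half the raw space
    return (drives // 2) * size

host_os = raid1(2, 240)      # 240 GB for the host OS
vm_os   = raid6(6, 1000)     # 4000 GB for the VM OS drives
vm_data = raid10(4, 1000)    # 2000 GB for the data drives

print(vm_os / 1420)          # ~2.8x the amended 1.42TB OS usage
print(vm_data / 850)         # ~2.4x the 850GB data usage

# Drives on the quote, including the cold spares:
print(2 + 1, "x 240GB,", 6 + 4 + 2, "x 1TB")   # 3 x 240GB, 12 x 1TB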

This works out to double their current usage plus a cushion. I'm thinking 2x Xeon Silver 4110 (8 core), which would give 32 logical cores (with hyperthreading) to divvy up among the machines, or possibly 2x Xeon Silver 4114 (10 core), which would give 40. The latter would mean pricier server OS licensing, since the 20 physical cores would exceed the 16 covered by the base license. We're probably looking at 96 or 128GB of RAM.
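
And the core/licensing math I'm working from, assuming the Server 2016/2019 Standard per-core model (base license covers 16 physical cores per server, extras sold in 2-core packs - worth verifying against current Microsoft terms):

import math

# Physical vs logical core counts and extra core licenses needed,
# assuming a 16-core base license and 2-core add-on packs (an assumption
# to verify, not quoted from Microsoft here).
def extra_packs(physical_cores, base=16, pack=2):
    return max(0, math.ceil((physical_cores - base) / pack))

for name, cores_per_cpu in [("2x Silver 4110", 8), ("2x Silver 4114", 10)]:
    physical = 2 * cores_per_cpu
    logical = physical * 2   # with hyperthreading
    print(name, "-", physical, "physical /", logical, "logical cores,",
          extra_packs(physical), "extra 2-core packs")
# 2x Silver 4110 - 16 physical / 32 logical cores, 0 extra 2-core packs
# 2x Silver 4114 - 20 physical / 40 logical cores, 2 extra 2-core packs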

I'm still waiting to hear back from the software vendors w/r/t OS compatibility, so I don't know yet if we're talking Server 2016 or 2019.

I'm trying to get this basic configuration nailed down before I look at actual machines. Does this sound about right?
 
Not sure I'm following the RAID setup. What is the reasoning for splitting the "VM drives" and "Data drives" into 2 separate RAID volumes?

Usually this would be done for tiered performance. For example, having a fast SSD-backed volume for SQL, then a slower spinning-disk-backed volume for less demanding things like file shares. As you are going all SSD, this isn't necessary.

Personally I would go:

2x 240GB drives in RAID1 - for the host OS
8x 1TB drives in RAID6 - for everything else

Format the big RAID6 as ReFS and sit all your VHDX files on top.


Processors
I don't think more cores are worth the additional cost here. You already have enough with the 4110 to give each VM 6 cores and still have a couple to spare.

If you do go for an upgrade over the 4110 - I would look towards the Xeon Gold 6134, which is also 8C/16T but with a much higher base/turbo frequency and more than double the L3 cache. Cost won't be far from the 4114 as you won't need additional licencing.


Other notes
Make sure it has plenty of NICs so you aren't pushing 5 VMs' worth of traffic down a single 1Gb link. Ideally at least 6 NICs so you can dedicate one to each VM and one to the host. Either that or look into 10Gb NICs if you have the network infrastructure to support them.


A real life example
We built a server with similar(ish) specs about 10 months ago:
2x Xeon Gold 5122
128GB DDR4 RAM
2x 240GB SSD in RAID1 (the host)
6x 500GB SSD in RAID6 (VHDX storage)

On this server we are running:
- Domain controller
- Terminal server
- Fairly large SQL server with nearly 200GB in various databases
- Print server
- CRM server

It flies.
 
Wow, great comments, thanks. The idea to separate the VM data drives from the OS drives onto different arrays is less a well-thought-out answer and more that that's how most of the ones I've run into were done.

We were planning to quote a 10Gb NIC, and we'll be quoting a new switch with 2 10Gb connections. What are your thoughts about a single 10Gb connection compared to multiple 1Gb connections? It may be that multiple 1Gb cards are better because they can be dedicated to specific VMs.

How are you divvying up the cores in your Xeon 5122 rig? This is always kind of a guess for me. 5122s are 4-core, right? So 2 of them get 16 cores with hyperthreading. A DC doesn't need much juice to do its job, but the SQL rig would benefit from as much as you could give it, I'd imagine.
 
Why all 1TB drives? The cost difference from 1TB to 2TB is like $20. Personally, I would give 2 bids, one for 1TB and one for 2TB. I would push for the 2TB for better future-proofing.
 
Dell T440 or rack.
Dual Xeon 6-core or better, 64-96GB memory
RAID 10 mixed-use SSDs
Redundant PSU
Server 2019 Standard with CALs
Extra Server 2019 license
iDRAC 9

SQL software may need an upgrade unless it's Express.
 
The idea to separate the VM data drives from the OS drives onto different arrays is less a well-thought-out answer and more that that's how most of the ones I've run into were done.

Well, it still makes sense to separate drives at the virtual level. For example, on your file server, create a VHDX for the OS and a separate VHDX for the data shares. This way, if your file server is ever down, you can just attach the data VHDX to another server and share it out from there. Or when it comes to migrating/upgrading, you can just detach the data VHDX from the old server > attach it to the new one.

We were planning to quote a 10Gb NIC, and we'll be quoting a new switch with 2 10Gb connections. What are your thoughts about a single 10Gb connection compared to multiple 1Gb connections? It may be that multiple 1Gb cards are better because they can be dedicated to specific VMs.

10Gb wins hands down on performance.

With 6x 1Gb NICs, sure, you might have 6Gb of bandwidth, but it's not shared efficiently when they are dedicated. Example - a DC will barely use 10% of a 1Gb NIC, but tough, nothing else is getting the remaining 90%. It's wasted.

And with that thought in mind - you may be better off creating a NIC team consisting of 5-6 1Gb NICs. This would alleviate the above issue and give you some redundancy if a link were to fail.

10Gb still wins on performance though, as NIC teaming works more like load balancing: a single flow sticks to one link.
Say you are running a file transfer - it's limited to 1Gb of bandwidth.
However, if you run 5 file transfers simultaneously - they can each have 1Gb of bandwidth.
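
To put rough numbers on it (a toy model only - it assumes each transfer hashes onto a single team member and ignores overhead):

# Per-flow vs aggregate throughput in Gb/s for a 6x1Gb team vs one 10Gb link.
def team(flows, links=6, speed=1):
    return speed, min(flows, links) * speed          # (per flow, aggregate)

def single_10g(flows, speed=10):
    return speed / max(flows, 1), speed              # (per flow, aggregate)

for flows in (1, 5):
    print(flows, "flow(s): team", team(flows), "| 10Gb", single_10g(flows))
# 1 flow:  the team caps it at 1Gb, the 10Gb link gives it the full 10Gb
# 5 flows: team ~1Gb each / 5Gb total, 10Gb link ~2Gb each / 10Gb total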



How are you divvying up the cores in your Xeon 5122 rig?

4 cores - DC
4 cores - Print server
4 cores - CRM
4 cores - Terminal Server
6 cores - SQL

Yes, I'm aware that adds up to 22 :)

You don't have to stick to the physical number. It helps to think of core allocation more like a percentage. Let's say you have 16 cores.
You give VM1 2 cores.
You give VM2 6 cores.
You give VM3 8 cores.

What you are really doing is assigning:
VM1 a maximum 12.5% of the host CPU
VM2 a maximum 37.5% of the host CPU
VM3 a maximum 50% of the host CPU
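
Or, as a quick sketch of the arithmetic:

# Core allocation expressed as a share of the 16-core host.
host_cores = 16
for vm, cores in [("VM1", 2), ("VM2", 6), ("VM3", 8)]:
    print(vm, f"- up to {100 * cores / host_cores:.1f}% of the host CPU")
# VM1 - up to 12.5%, VM2 - up to 37.5%, VM3 - up to 50.0%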

Now you can over-allocate and give every VM 16 cores if you want. But once the host CPU hits 100%, it comes down to "best effort" as to who gets the resources. Latency-sensitive things like SQL really won't like this.

In our setup we use resource control to handle this. The SQL server has a "Virtual Machine Reserve" of 100, which means its resources are guaranteed. The other machines are set to 50.

So if the host was ever pegged at 100% CPU the SQL server would be unaffected. The others would have 50% of their CPU resources guaranteed and have to fight it out for the remaining 50%.
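
Roughly like this (an illustrative model of the behaviour described above, not the actual Hyper-V scheduler; the core counts are our allocations from earlier):

# What each VM is guaranteed when the host is pegged at 100% CPU.
vms = {
    "SQL":      {"cores": 6, "reserve": 100},   # fully guaranteed
    "DC":       {"cores": 4, "reserve": 50},
    "Print":    {"cores": 4, "reserve": 50},
    "CRM":      {"cores": 4, "reserve": 50},
    "Terminal": {"cores": 4, "reserve": 50},
}
for name, vm in vms.items():
    guaranteed = vm["cores"] * vm["reserve"] / 100
    print(f"{name}: {guaranteed:g} cores guaranteed, "
          f"{vm['cores'] - guaranteed:g} cores best-effort")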
 
Given the original situation, I'd probably pass on the 10Gb and go with multiport 1Gb NIC(s), giving each VM its own port. Don't forget to evaluate the wiring as well.
 
Given the original situation, I'd probably pass on the 10Gb and go with multiport 1Gb NIC(s), giving each VM its own port. Don't forget to evaluate the wiring as well.

In our original meeting with the owner and tech guy, I mentioned 10Gb as one of the things you can do today that wasn't possible back when their servers were built. I'm pretty sure that was one of the things that sold them on us. They kept saying "Our current IT company is pretty old-school" and "Our current IT company never recommended virtualization", stuff like that. It's not a bad thing, but I think they are now expecting 10Gb in my quote - haha. Hell, maybe I'll show an "available upgrade" option of 2 10Gb connections - team those and that should remove bandwidth worries for the life of this server, yeah?

You never know about folks. I had a simple desktop quote last week for a single retired guy who browses the web and writes email. He took every upgrade on the quote. "Which one of these choices is faster? I want that one." "Is there a better choice than this?" So he had money to spend and wanted a "damn fast" (his words) computer. I'll admit I prejudged him and initially quoted a modest machine...

In our setup we use resource control to handle this. The SQL server has a "Virtual Machine Reserve" of 100, which means its resources are guaranteed. The other machines are set to 50.

Ooh, I admit to misunderstanding exactly how this setting worked. I've used it, but not quite right. (sneaks off to check the settings on one or two servers). Thanks!
 
I'd definitely do 128 gigs.
And I much prefer multiple NICs... typically we start with a quad-port Intel server NIC.

With spindles I used to do a pair of volumes... so I could have the guests mix and match their system and data volumes across both RAID volumes. I'd have mostly equal-size C and D volumes on the Hyper-V host, put the C drives of a few guests on the first volume, and do the opposite for the others. This way the load is pretty much split across both.

But with SSD servers now I just do one big RAID10 and stick the Hyper-V host OS on it as well as all of the guests' volumes.
 