To virtualize or not....

HCHTech

I'm quoting a server replacement for a small law firm client of mine, currently bringing an SBS2011 server (deployed in mid-2012) in for a landing. There are a dozen workstations, 2 on Win10Pro and the rest on Win7Pro: 2 attorney-owners and 10 support staff.

They don't do anything complicated - there isn't much remote access. We'll be moving to hosted Exchange, so the server will be left with sharing files and running a TimeMatters database.

I'm wondering if it's worth it to look at virtualization. They don't have a need right now for more than one server, but over the lifetime of this hardware, who knows? The incremental costs to do this upfront have to be small, plus it would give us more backup options, for one.

Most of my clients are small so I haven't had the opportunity to do this yet....only my biggest client (45 workstations, 2 locations) has virtualization in place (ESXi - 3 servers and 4 workstation containers), and that was set up by folks I hired because it was too far out of my wheelhouse in 2014. It's been easier than expected to maintain and we've had very few problems.

I don't know - what do you think? Should I wade into these waters with this client? What would you do?
 
You can easily run Server Essentials virtualised, and there are lots of guides on doing it.
 
I'd look at it if getting a new server. At the end of the day it's little different than running on bare metal in terms of setting up the server itself. Most of my experience is with ESXi, but I've set up Hyper-V and it's not too different.
 
I virtualize all workloads now, because image-based backups have a far greater success rate restoring into a virtual framework. The exception is a huge I/O load, but even then I tend to use Hyper-V with Optane/NVMe storage for the database.
 
I did a server replacement just last fall and I should have virtualized it. That's all in hindsight though. But the one factor in my mind that makes the case for virtualizing is doing an image backup. It's going to be extremely easy and you can even do it remotely if need be. Of course, I use VirtualBox and I currently have Windows 7 running on this server replacement in a VM. I just shut down the VM and back up to an appliance. Then just copy the appliance to tape or whatever you use for off-site backups. That alone makes me use virtualization from now on. As a side note, just think if you have to reinstall the whole operating system from scratch. Think of all the configuring you will have to do. With a VM you just restore it.
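For what it's worth, that shutdown-and-export routine can be scripted. Here's a rough sketch driving VirtualBox's VBoxManage CLI from Python; the VM name and output path are made up, and it assumes VBoxManage is on the PATH:

```python
# Rough sketch of the "shut down the VM, export it as an appliance" routine
# described above, driving VirtualBox's VBoxManage CLI from Python.
# VM name and output path are hypothetical; assumes VBoxManage is on PATH.
import subprocess
import time

VM_NAME = "Win7-Server"                   # hypothetical VM name
OVA_PATH = r"D:\backups\Win7-Server.ova"  # hypothetical destination

def running_vms() -> str:
    return subprocess.run(["VBoxManage", "list", "runningvms"],
                          capture_output=True, text=True, check=True).stdout

# Ask the guest to shut down cleanly, then wait until it powers off.
if VM_NAME in running_vms():
    subprocess.run(["VBoxManage", "controlvm", VM_NAME, "acpipowerbutton"], check=True)
    while VM_NAME in running_vms():
        time.sleep(10)

# Export the powered-off VM as an OVA "appliance", ready to copy off-site.
subprocess.run(["VBoxManage", "export", VM_NAME, "--output", OVA_PATH], check=True)
```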

My two cents,
 
I virtualize all workloads now, because image-based backups have a far greater success rate restoring into a virtual framework. The exception is a huge I/O load, but even then I tend to use Hyper-V with Optane/NVMe storage for the database.

Questions, questions:

What software package do you use for the image backups, and what is the typical backup destination?

When configuring the host hardware, do you use a single RAID array or multiple, like you would when not intending to virtualize?

Do you use a RAID setup for the Optane/NVMe storage?

The single ESXi setup I have access to actually uses a RAID1 array of two SD cards for ESXi, then a RAID10 for the rest of the storage that all of the virtual machines run on. When configuring a non-virtual server, I would normally put the OS on one array and the data on another. I guess I'm wondering how the best-practice hardware setup differs from non-virtual environments.
 
Virtualization also makes it very easy to move to new hardware if needed/desired - you just move the VMs, and you're up and running (possibly with minor tweaks depending on what you changed - e.g. on VMware, if you Clone instead of Migrate and want to bring up the clone, you may need to go into the VM's configuration file and tweak the MAC if a new one was assigned).
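For what that configuration-file tweak can look like, here's a rough Python sketch. The path, MAC and exact approach are illustrative assumptions; the ethernet0.* entries are the usual .vmx keys, but double-check your own file (and VMware's rules on which static MACs it accepts) before trusting anything like this:

```python
# Illustrative only: pin the first NIC of a cloned VM back to a known MAC by
# rewriting its .vmx. Path, MAC and approach are assumptions for the sketch;
# VMware restricts which static MACs it accepts, so verify before relying on it.
from pathlib import Path

VMX_PATH = Path("clone01.vmx")        # hypothetical copy of the clone's .vmx
WANTED_MAC = "00:50:56:3f:12:34"      # hypothetical MAC carried over from the original VM

# Drop the auto-generated MAC entries for the first NIC, keep everything else.
kept = [line for line in VMX_PATH.read_text().splitlines()
        if not line.strip().startswith(("ethernet0.generatedAddress",
                                        "ethernet0.addressType"))]

# Re-add the NIC's address settings as a static assignment.
kept += ['ethernet0.addressType = "static"',
         f'ethernet0.address = "{WANTED_MAC}"']

VMX_PATH.write_text("\n".join(kept) + "\n")
```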
 
Questions, questions:

What software package do you use for the image backups, and what is the typical backup destination?

When configuring the host hardware, do you use a single RAID array or multiple, like you would when not intending to virtualize?

Do you use a RAID setup for the Optane/NVMe storage?

The single ESXi setup I have access to actually uses a RAID1 array of two SD cards for ESXi, then a RAID10 for the rest of the storage that all of the virtual machines run on. When configuring a non-virtual server, I would normally put the OS on one array and the data on another. I guess I'm wondering how the best-practice hardware setup differs from non-virtual environments.

I use either Datto or ShadowProtect, depending on the customer's budget, the former being the focal point because that solution solves every issue at once.

ShadowProtect requires a server to store the data on that's capable of acting as an emergency hypervisor to do this work. Again, Datto SIRIS just makes all this easy.

All servers I deploy use RAID 10, usually with SSDs today.

NVMe storage is not RAIDed; in the event of a fault I just restore the database volume to the primary SSD array. Software RAID isn't very reliable and carries a huge performance hit, and this tier exists for performance only. Intel NVMe drives have great tools for monitoring health, so you have plenty of warning to schedule downtime and move the database ahead of a fault. This is also relatively rare; most SQL workloads are nowhere near needing this.

I don't use VMware, I use Hyper-V. The server has a single RAID array and a single partition for the host OS. I prevent the host from blowing up by never over-provisioning the guests.

Best practice? I have no idea... and honestly I don't care. Datto brings the system back online in the event of a fault; I simply reinstall the host server and restore the guest in the event of failure. Because the guest is virtual, I can restore it onto any platform that can make a container to run it. Thanks to Datto or ShadowProtect, that means I can move a VM from Hyper-V to vSphere to XenServer to KVM and back again if I have to.
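To make the "never over-provision" point concrete, here's a back-of-the-envelope sketch with made-up host and guest numbers (not anyone's actual tooling) that totals what the guests are promised against what the host physically has:

```python
# Back-of-the-envelope over-provisioning check with made-up numbers.
# The host figures and guest allocations below are purely illustrative.

HOST_CORES = 16           # physical cores in the host
HOST_RAM_GB = 128         # installed RAM
HOST_RESERVED_RAM_GB = 8  # headroom kept for the Hyper-V host OS itself

guests = {
    # name: (vCPUs, RAM in GB) -- hypothetical allocations
    "DC01":   (2, 8),
    "APP01":  (8, 48),
    "RDSH01": (6, 32),
}

total_vcpu = sum(v for v, _ in guests.values())
total_ram = sum(r for _, r in guests.values())
ram_for_guests = HOST_RAM_GB - HOST_RESERVED_RAM_GB

print(f"vCPUs assigned: {total_vcpu} / {HOST_CORES} physical cores")
print(f"RAM assigned:   {total_ram} / {ram_for_guests} GB available to guests")

if total_ram > ram_for_guests:
    print("RAM is over-provisioned -- trim the guests or add memory.")
if total_vcpu > HOST_CORES:
    print("More vCPUs than physical cores -- fine for light loads, risky for heavy ones.")
```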
 
Server-wise, I virtualise everything that I can now, and have been doing so for the last 4 or 5 years. You will typically come across a few hurdles using virtual servers, especially with LoB software. I find that some of the LoB software vendors state that they don't support virtualisation (even today) but there's usually a workaround. For example, I had some dental software that required a USB hardware key a while ago (posted here), which was resolved by using a USB over IP device (thanks to @YeOldeStonecat's suggestion).

Virtualisation just provides so many potential benefits in terms of redundancy, backups, continuity and scalability. When I first began virtualising systems (probably over 15 years ago now), I was mainly using VMware's various offerings. Now I exclusively use Windows Hyper-V (in both 'core' and 'full' flavours).

My typical virtual budget setup consists of a single physical 2U rack-mount server (used or new, depending on budget) with twin CPUs and plenty of RAM (96GB as a minimum but usually 128GB+). For storage I use SSD only. A couple of 120GB SSDs in RAID 1 for the OS and usually a number of separate volumes in RAID1 or 10 for the VMs, Data, etc. I install Windows Server Standard on the bare metal (or sometimes Hyper-V core if there's more than 1 physical server) and install the Hyper-V role. The only other role I often give the host is that of a file server (sometimes with DFS Replication, if there's more than 1 physical server). Technically, I find this role works better on the host than the VMs and doesn't impact the host's performance. All other roles are provided by the VMs.

Typically I have a virtual DC, with only the AD/DC and DHCP roles installed and at least 1 other virtual server that serves as the main server, hosting all the remaining required roles. In most cases I have a 3rd VM though, which serves as a RD session host for remote users. I usually make the RDSH part of a 'collection', even if there's only one physical server and one RDSH, since it makes replacing/adding physical host servers and RDSH virtual servers simpler later. On larger systems for example, with multiple host servers, I use numerous virtual RDSH servers (depending on the number of users), which gives redundancy and load balancing and enables me to live-migrate VMs between hosts or take RDSH servers down for maintenance without affecting business continuity. Being virtual, I can do all of this remotely of course, including cloning or creating new VMs. Another point to note is that you can add the Hyper-V host server to the domain, even if it hosts the DC VM. You might expect that would cause issues, due to the fact that the host will boot before the VMs, but it works fine.

Most of the systems I install have multiple host servers but, as an example, here's a simple (single physical server) virtualised system I did a few years ago (which I recently talked about here):

[Attachment 7495: diagram of the single-host virtualised setup described above]

I prefer to use separate VMs for the RDSH servers, budget allowing, but the setup above keeps within the 2 VMs allowed on a single Server Standard licence.
 
Most of the systems I install have multiple host servers but, as an example, here's a simple (single physical server) virtualised system I did a few years ago (which I recently talked about here):

Wow - thanks for the detailed post @Moltuae. I will have to think about these options and how best to use the technology here. Licensing is definitely a bit confusing, but you get two guest VM installs as well as the host install (Core or Std with only the HV role) with the standard licensing, so you're just paying for the 16-core minimum for each CPU, right? There are quite a few dollars in storage, though: six enterprise SSDs and three enterprise 1TB drives, not to mention 96GB of ECC RAM. If you don't mind me asking, what did this bill out to the client at, in round dollars (pounds)?

I'll definitely need to pare that back a bit to make this feasible, I think. My client currently has about 280GB of data in the shares and 400GB in everything else (40GB for the Exchange database, WSUS and such, etc.). What is your main use for the single 1TB drive?

I've seen recommendations to include a workstation VM as well - good place to host various things like controllers for the WAPs, etc. I'll have to think about that.

Do you think going back to a single processor would be too much of a performance hit (because you would have fewer cores to assign among the machines)?
 
Licensing is definitely a bit confusing, but you get two guest VM installs as well as the host install (Core or Std with only the HV role) with the standard licensing, so you're just paying for the 16-core minimum for each CPU, right?
To be honest, I'm not that clear on 2016's core licensing regarding VMs yet. I've only done a couple of Server 2016 installs so far. The budget installation I'm referring to, and any similar setups, have all been Server 2012 R2 (Standard). I do need to study 2016's licensing soon. Perhaps someone else could clarify how that works now ...

There are quite a few dollars in storage, though: six enterprise SSDs and three enterprise 1TB drives, not to mention 96GB of ECC RAM. If you don't mind me asking, what did this bill out to the client at, in round dollars (pounds)?
I'll probably get some flak for saying this but, you know what, you don't need enterprise-grade drives for a budget system like this; Samsung consumer-grade SATA SSDs work just great. As long as every volume is RAID1, and the system is going to be regularly backed up and monitored for drive failures, you've got it covered. For volumes that need a bit more performance, use Samsung Pro SSDs and RAID10, budget allowing. I've got lots of server systems with consumer-grade Samsung SSDs installed and I haven't had a single failure yet. Not that it would matter if I did get a failure -- they're all backed up and in RAID1/10 arrays.

The RAM, like the server, was used (and was ordered with it). It was an ex-datacentre server, about 2 years old at the time, with something like 1yr manufacturer's warranty remaining. I think I paid a little over £1K for the server, including CPUs, RAM, RAID controllers, etc (but without drives). I was completely upfront with the customer of course about the SSDs and the used components and I discussed various options with them, including brand new hardware and enterprise grade drives.

I don't have the exact figure for the client's bill because some parts of the installation and setup were spread over several months (I was gradually migrating them to a domain server system from a chaotic mess of desktop PCs and pseudo-servers). I also never give quotes, only estimates, and an approximation of the number of labour hours required. Looking back at old notes and invoices, the 'delivered' cost to the customer for the server, including SSDs and initial setup (but not including software), was in the region of £3,500. There were lots of additional on-going labour charges once it was on site of course though, as I began the process of migrating everything over.

What is your main use for the single 1TB drive?

The 1TB drive is more for my own use than theirs. It's just a cheap HDD, used for temporary storage of things like ISO files, VM clones, temporary drive images/backups and files pending deletion (ie only unimportant stuff). Saves me going to site to plug an external drive in if I need a little extra temporary storage.

I've seen recommendations to include a workstation VM as well - good place to host various things like controllers for the WAPs, etc. I'll have to think about that.
I usually put controllers for WAPs, etc on the main virtual server, or a separate virtual 'admin server' if the budget allows for the extra licences. I also usually have at least one virtual RD session host (as part of a 'collection') which acts as a virtual workstation for remote users, although there wasn't the need/budget for a separate RDSH in this particular installation. I also like to make sure the spec' provides enough spare resources to (temporarily) fire up extra VMs for cloning and for running a virtual desktop OS or two for testing purposes -- particularly useful when you're testing new group policies and you don't want to experiment on one of the workstations (or don't have unattended remote access to them).

Do you think going back to a single processor would be too much of a performance hit (because you would have fewer cores to assign among the machines)?
It depends really on how many VMs you're planning to run and how many cores you think they'll need. You want to avoid overprovisioning and, for performance and future-proofing, the more cores and memory the better. I would work out what's needed and discuss it with your customer. If there's a cheaper single-CPU option that's going to take a big performance hit, let them choose whether they'd prefer to pay more or live with the lower performance.
 
Server 2016's licensing is pretty simple...

Don't think about VMs, think about the platform. You do not license the CPUs in the guests, you license the product for the CPUs in the server.

Server 2016 OEM Standard is licensed for 2 physical CPUs, up to 16 cores, and 2 Server 2016 guests. If you have more CPU cores, or you need more VMs, you need another copy of Standard. This license includes one Hyper-V-role-only install for the host; NO OTHER ROLES ARE ALLOWED. If you install something beyond Hyper-V on the host, you just burned one of the guest seats. This doesn't change if you decide to use another hypervisor; you still license the same way because you need the guests. So with Server 2016 Standard you have 2 production environments and 1 host license. Datacenter works the same way, only with unlimited guests, so considering you "burn" a guest to put things on the host, if you fork over for Datacenter you can do whatever you want.

In my case I have yet to run into a CPU limit; usually I'm buying two copies of Standard 2016 because I want 4 VMs on that host. Two copies of 2016 Standard are licensed for 4 CPUs and 32 cores. These are physical cores, not virtual cores, so hyperthreading doesn't double your count.
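Put as arithmetic, the counting rule described above works out like this. This is only a sketch of the simplified model in this post (the actual 2016 SKUs are sold as core packs with per-CPU minimums), so check the real license terms before quoting a customer:

```python
# Sketch of the Server 2016 Standard counting rule as described above:
# one copy covers 2 physical CPUs, 16 physical cores and 2 guest VMs,
# and you stack additional copies to cover more cores or more guests.
# Simplified model only -- real licensing is sold in core packs with minimums.
import math

def standard_copies_needed(physical_cpus: int, physical_cores: int, guest_vms: int) -> int:
    by_cpus = math.ceil(physical_cpus / 2)
    by_cores = math.ceil(physical_cores / 16)
    by_guests = math.ceil(guest_vms / 2)
    return max(by_cpus, by_cores, by_guests)

# Example from this post: dual-CPU, 16-core host with 4 guest VMs wanted.
print(standard_copies_needed(physical_cpus=2, physical_cores=16, guest_vms=4))  # -> 2 copies
```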

MSSQL Server's per-core licensing works the same way and, for that reason, unless you've got more than 25 users you're better off buying the Standard CAL version and getting an appropriate number of CALs. The per-core licensing for MSSQL is very expensive because it's unlimited users.
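To see roughly where that user-count crossover comes from, here's an illustrative break-even comparison. Every price below is a placeholder, not a current list price, so plug in real figures before deciding:

```python
# Illustrative break-even between SQL Server Standard per-core licensing and
# Server + CAL licensing. All prices are PLACEHOLDERS, not current list prices.

PER_CORE_PACK_PRICE = 3700   # placeholder: price of a 2-core pack
CORE_PACKS_REQUIRED = 2      # 4-core minimum per server -> two 2-core packs
SERVER_LICENSE_PRICE = 900   # placeholder: Server+CAL server license
CAL_PRICE = 200              # placeholder: price per user CAL

per_core_total = PER_CORE_PACK_PRICE * CORE_PACKS_REQUIRED

for users in range(5, 51, 5):
    server_cal_total = SERVER_LICENSE_PRICE + CAL_PRICE * users
    cheaper = "Server+CAL" if server_cal_total < per_core_total else "per-core"
    print(f"{users:>3} users: Server+CAL {server_cal_total:>6} vs per-core {per_core_total:>6} -> {cheaper}")
```

With these placeholder numbers the crossover lands around 30 users, the same ballpark as the 25-user rule of thumb above; the exact point depends entirely on current pricing.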
 
Awesome - lots of food for thought. Looking back, has the server's performance been adequate? Looking at the specs of the X5650, the E5-2670 would be about equivalent and available in today's refurb market, I'd guess.

In the setup using ESXi at my biggest client, they've got a PowerEdge R720 2U server with 2 x E5-2670, which is a 2.6GHz 8-core chip, and 64GB of RAM. They have a second smaller unit, I think it's an R420 1U, used for their backup (used to be called AppAssure, now Quest Rapid Recovery). Anyway, that one has 2 x E5-2430L, which is a 2GHz 6-core chip, and 32GB of RAM.

The main host has their DC & App server VMs and 5 workstation VMs; the 2nd host has the backup DC & the AppAssure VMs. The host OS on all of the units is on a RAID1 of enterprise SD cards. We haven't had any complaints about performance in the 3 years those units have been in service. The hosts all have dual CPUs, so it appears the folks who configured them agree with you about that. There is a twin of the smaller server in one of the owners' homes that the backups replicate to - they spent a few bucks on all of that iron, that's for sure.
 
And balance your choice of CPUs with how you're going to have servers running. Unless you know you have a specific requirement for lots of cores, it may make sense to get processors with fewer but faster cores.

The last good-sized server we got for a client was a T630 in rack configuration, configured for up to 32 2.5" drives because you can't add that capacity later. Processors are E5-2687W (12 cores each) instead of the slightly slower but higher core count alternatives, because I was looking at Server 2016 (or future) licensing even though all that's on it right now is 2008R2 VMs. Only 96GB of RAM, but they're really only using about 60GB on a TS with 32GB and a SQL Server with 20GB (which is larger than their EMR database) plus my problem child data hog with only 4GB of RAM.
 
Ok, so I set up a little lab this weekend. I fired up a server from the bonepile, reformatted the array and installed Hyper-V in core mode. After some futzing around with one of my Win10 bench machines (using the guide I found here), I was able to connect with Hyper-V Manager, set up a VM and successfully install Win10. All in all, it was simple once I got connected.

While core mode would probably use the least amount of resources on the host, I wonder about maintaining that remotely. I think I'd prefer to do a full Server install so I could put the monitoring agent on it. Also in Core mode, installing drivers would seem to be potentially troublesome. The main install (apparently) found all of my hardware ok, but it seems you're taking a lot on faith that way. It's certainly possible my concern is unfounded. For those of you that do virtualization as a matter of course, how do you monitor the host?

Now I think I'll start over and do ESXi so I can compare the installation experience.
 
Ok, so I set up a little lab this weekend. I fired up a server from the bonepile, reformatted the array and installed Hyper-V in core mode. After some futzing around with one of my Win10 bench machines (using the guide I found here), I was able to connect with Hyper-V Manager, set up a VM and successfully install Win10. All in all, it was simple once I got connected.

While core mode would probably use the least amount of resources on the host, I wonder about maintaining that remotely. I think I'd prefer to do a full Server install so I could put the monitoring agent on it. Also in Core mode, installing drivers would seem to be potentially troublesome. The main install (apparently) found all of my hardware ok, but it seems you're taking a lot on faith that way. It's certainly possible my concern is unfounded. For those of you that do virtualization as a matter of course, how do you monitor the host?

Now I think I'll start over and do ESXi so I can compare the installation experience.

I think you've hit the nail on the head there. Hyper-V Core is 'free', but the limitations do cause a problem. For minimal resources, you're probably better off installing the full version in Core or Nano mode. But the question is: does the bare-metal Windows Server OS still not count against your available licences if you install other software on it?

You would be able to monitor Hyper-V Core via WinRM but, yeah, as an MSP you want your monitoring all in one place.
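As a sketch of what WinRM monitoring from an existing toolchain could look like, here's a minimal Python example using the third-party pywinrm package. The host name and credentials are placeholders, and it assumes PowerShell remoting is already enabled on the Hyper-V Core box:

```python
# Minimal sketch: polling a Hyper-V Core host over WinRM with the third-party
# 'pywinrm' package (pip install pywinrm). Host name and credentials are
# placeholders; assumes PS remoting is enabled and the account has admin rights.
import winrm

HOST = "hyperv-core01"  # hypothetical host name

# Adjust transport/auth to suit your environment (NTLM, Kerberos, HTTPS, etc.).
session = winrm.Session(HOST, auth=("administrator", "password"))

# Ask the host for VM name, state and assigned memory via Hyper-V's built-in
# PowerShell module, returned as CSV for easy parsing on the monitoring side.
result = session.run_ps(
    "Get-VM | Select-Object Name, State, MemoryAssigned | ConvertTo-Csv -NoTypeInformation"
)

if result.status_code == 0:
    print(result.std_out.decode())
else:
    print("WinRM query failed:", result.std_err.decode())
```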

I expect that ESXi has some similar challenges, unless you purchase their management software.

Going to try KVM next?
 