Need help setting up VMs on Server 2019 Std

KurtisK

New Member
I just bought a server for a client based on their specs: Dell T440; single Xeon Silver 4214; 64GB RAM; 4 x 600GB 10k SAS in RAID 10; 1 x 600GB 10k SAS for data and log files. The server needs to run IIS and MS SQL Server 2017 for 5-6 users on the local network. The current SQL database is around 2 GB.

I had planned on just running the two roles together on the same Windows instance, but the software provider has come back and insisted the web server and SQL server be run in different VMs. The web server needs a full GUI install, and the SQL server requires mapping physical drives to the virtual server to be used for database and log file storage.

It has been 8-10 years since I last messed with VMs, especially with SQL Server, and I'm not sure which method of creating VMs works best in this situation.

I'm trying to decide whether to spend the weekend retraining myself on VMs and SQL, or to look for a consultant who would set up the VMs for me. IIS and SQL Server don't actually need to be installed; the VMs just need to be set up.

Any suggestions would be greatly appreciated.
 
At work we use Hyper-V. I think if you have Server 2019 Standard you are allowed to run 2 VMs before you have to purchase another server license (unless you buy the Datacenter edition), and the host itself can't be used for anything else. I believe that is supposed to apply whether you use Hyper-V, VMware, etc.

I think there is a free VMware edition, and it seems like there's a free Hyper-V Server hypervisor too, but that one is all command-line based. We use Hyper-V and it works OK for us. In our situation we have 2 Hyper-V boxes and replicate our VMs between them. About once a month I export a copy of each VM to an external hard drive that gets powered off afterwards as well.
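For what it's worth, the monthly export boils down to one Hyper-V cmdlet per VM. A rough sketch of how I do it; the VM names and the E: drive letter are just placeholders for your own:

```
# Export each VM to a dated folder on the external drive
$dest = 'E:\VM-Exports\' + (Get-Date -Format 'yyyy-MM')
foreach ($vm in 'SQL01', 'IIS01') {
    Export-VM -Name $vm -Path $dest   # each VM lands in its own subfolder under $dest
}
```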
 
Yes, 2019 Standard does allow two VMs on top of the bare-metal layer, but the bare-metal install can only have one role: Hyper-V. Of course you'll need CALs for everything.
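Turning the bare-metal install into a Hyper-V-only host is a one-liner, by the way (just a sketch; run it on the host and expect a reboot):

```
# Adds the Hyper-V role plus the management tools, then reboots the host
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
```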

Personally I use the free ESXi for my bare metal, but I would not do that for a customer's production machine.
 
The SQL server requires mapping physical drives to the virtual server to be used for database and log file storage.

Did the software provider ask for this? If so, they also need to brush up on virtual machines. There has been no reason to do this since we got Gen 2 virtual machines and VHDX files with Server 2012 R2.

The reason before 2012 R2 was performance: you got noticeably lower latency by passing a disk directly into a VM compared to creating a VHD on the same disk, so it made sense for things like SQL. On Server 2019, though... there is absolutely no valid reason to do this.

All you will achieve is losing many of the features that make virtualisation great:
- no thin provisioning
- no snapshots/checkpoints on that disk
- no live migrations
- hypervisor-level backup software can't see that disk
Likely many more I'm not thinking of...
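To illustrate the alternative (a sketch only, not the vendor's procedure; the VM name, sizes, paths, and switch name are all placeholders): give the SQL guest a second VHDX instead of a pass-through disk.

```
# Gen 2 VM for SQL with its own OS disk
New-VM -Name 'SQL01' -Generation 2 -MemoryStartupBytes 16GB `
       -NewVHDPath 'D:\Hyper-V\SQL01-os.vhdx' -NewVHDSizeBytes 80GB -SwitchName 'LAN'

# Separate VHDX for databases and logs instead of mapping a physical drive
New-VHD -Path 'D:\Hyper-V\SQL01-data.vhdx' -SizeBytes 200GB -Dynamic
Add-VMHardDiskDrive -VMName 'SQL01' -Path 'D:\Hyper-V\SQL01-data.vhdx'
```

Everything about that data disk (checkpoints, replication, host-level backups) then behaves like any other VHDX.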
 
Yes, the software provider lists a pass-through drive as a requirement. I don't think they have updated their hardware requirements document in quite a while. I guess I'll stick in an extra drive just in case. The IT manager I have been talking with about scheduling the install seems pretty inflexible. Hopefully the tech who actually does the install will be easier to deal with.

Been giving myself a refresher course over the weekend and have decided to stick with Hyper-V. I've noticed some people recommending separate volumes for each VM and a fixed-size VHDX for the SQL server. Any thoughts?
 

The advantage of a fixed-size VHDX is slightly better performance. Very slightly.
The advantage of dynamic is space saving, as it will only expand to the size of your actual data.
If you have the storage capacity available, just use fixed to save any complication.
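If it helps, the difference is just one switch on New-VHD (paths and sizes here are placeholders):

```
New-VHD -Path 'D:\Hyper-V\SQL01-data.vhdx' -SizeBytes 200GB -Fixed    # allocates the full 200GB up front
New-VHD -Path 'D:\Hyper-V\IIS01-os.vhdx' -SizeBytes 80GB -Dynamic     # grows only as data is written
```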

Separate volumes for each VM make no difference if those volumes are on the same underlying disk. It can have some benefits on much larger setups but isn't relevant here.
 
Dynamic disks also snap and merge more quickly... thick provisioning shouldn't be used unless you've got a database workload. But honestly... my heaviest SQL load is thin... and it's been fine. The more rapid snaps have been NICE.

Separate volumes do matter... I always make a D: volume for actual databases. Why? Because if the DB runs rampant, the C: volume stays clean and doesn't nuke the install. It's much easier to sort out a busted SQL server with a full data volume than a full OS volume with a corrupted copy of Windows on it.
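Inside the SQL guest that usually looks something like this (a sketch; disk number 1, the D: letter, and the 64 KB allocation unit are my assumptions, not a vendor requirement):

```
# Bring the second virtual disk online as a dedicated D: data volume
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter D
Format-Volume -DriveLetter D -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel 'SQLData'
```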

They can also give you the ability to bring VMs back online in pieces during a DR event, allowing for partial restoration of services more rapidly than if all the data was on a single disk.

I do NOT recommend directly mapping storage to the SQL load. If your platform requires that kind of performance, install an NVMe adapter and slap in two extremely high-speed SSDs. Use the host OS to mirror them, and put the DB's volume on that disk. If the mirror collapses, you can simply restore that volume to another array to bring the VM back online while you sort out the hardware. You can also use Intel's stupid-fast PCIe cards... I know a shop that does stupidly large Great Plains installs handling IBM levels of transactions like that. The idea of passing drives through... is just old... and stupid.
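One way to do that host-side mirror is Storage Spaces. A sketch only, assuming the two NVMe SSDs show up as blank, poolable disks; the pool, disk, and volume names plus the S: letter are placeholders:

```
# Pool the two blank NVMe drives and carve a mirrored volume out of them
$ssds = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName 'FastPool' -StorageSubSystemFriendlyName 'Windows Storage*' -PhysicalDisks $ssds
New-VirtualDisk -StoragePoolFriendlyName 'FastPool' -FriendlyName 'SQLMirror' -ResiliencySettingName Mirror -UseMaximumSize

# Bring the mirrored disk online as S: and keep the SQL VM's data VHDX there
Get-VirtualDisk -FriendlyName 'SQLMirror' | Get-Disk |
    Initialize-Disk -PassThru |
    New-Partition -UseMaximumSize -DriveLetter S |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel 'SQLVols'
```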
 

I was referring to separate volumes for each VM. For example, having your SQL server .vhdx stored on D: and your IIS server .vhdx stored on E: makes little difference if D: and E: are simply two volumes on the same underlying storage.

Inside the VM, such as in your SQL example - totally agree. Data should be kept separate from the OS.
 

Ahh yes, that makes sense. And you're absolutely correct: carving up an array into multiple volumes on the host doesn't make any sense unless you're using said array for the host OS as well as the guests.
 
I like to have my host on WS Core with C: for the OS, D: for VMs, and P: for the pagefile, all on a RAID 1 array (for small installs, otherwise RAID 6), with the Hyper-V role. Then, depending on the client and their licensing, a minimum of 2 VMs, something like:
VM1: AD DS and file services (file sharing with DFS and no mapped drives).
VM2: apps and databases (you can give the app provider local admin without compromising the domain).

And every single VM with C: for the OS, D: for data and P: for the pagefile.

And the Core host managed with Windows Admin Center - it has proven to be quite handy for me.
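Small addition: if you point Hyper-V's defaults at the D: volume once, every new VM lands there without having to remember it each time. A sketch; the drive letter and folder names are just my convention:

```
# Store VM configs and VHDX files on the dedicated D: volume by default
Set-VMHost -VirtualMachinePath 'D:\Hyper-V' -VirtualHardDiskPath 'D:\Hyper-V\Virtual Hard Disks'
```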
 
Back in the spindle-disk days, I preferred breaking up my RAID volumes on the host. I'd often have a small-to-mid-sized RAID 1 pair to create a volume for the guests' system drives, and then usually a larger RAID 1 or RAID 10 to create a second volume for the guests' data drives. This gave each guest "separate spindles" for their loads to spread across. On larger clients I'd mix and match that. I saw noticeable performance improvements with that setup versus having the hypervisor present one single big RAID 1 volume holding all the guests' drives, since that single volume would have to deal with a higher load of concurrent hits.

But these days with SSDs in servers... they're so bloody fast I just make a single big RAID 1 (for small clients) or a single big RAID 10, either with a hot spare. SSDs aren't that much more expensive anymore, and the performance gain is simply silly to pass up on.
 