Provisioning Server 6

After attending Partner training, I realized that not many people had made the jump to the new version of PVS.  Many were so disappointed with PVS 5.6 that they chose alternate methods of deploying VMs; the maintenance, upgrade, and replication headaches in 5.6 were enough to steer them away from it.

6.0 supports maintenance builds!  They are very easy.  I’m not going to go into a lot of super-technical detail in this post, but the short version is: you boot a VM in maintenance mode from the standard image, and an .avhd differencing disk gets created.  Make your changes and shut down the VM.  Assign the .avhd to a test VM, make sure you have it how you like it, then mark that version as production.  The next time the assigned VMs boot, they’ll get the new version, and the changes in the .avhd are overlaid on the base .vhd as the client machines request their blocks.

PVS 6 will also keep track of the replication status of your vDisk stores for you, so you don’t have to worry about that.  If a PVS server doesn’t have a copy of the vDisk assigned to a target, it will direct the target to another PVS server that does.  Just create a robocopy job to keep the stores in sync (a sketch of one follows below).  I prefer this method, since it provides you with multiple copies of the VHDs, as opposed to shared storage (like a CIFS share), where that shared storage could become a single point of failure.  Sure, you could use DFS, but if you don’t already have DFS configured, you’re adding yet another layer of complexity to address a problem that PVS (and robocopy or VBS or PS or…) can handle for you.
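
As a rough sketch, a job like the one below keeps a second server’s store current.  The store paths, server name, and log location are placeholders (adjust them to your environment; the .pvp properties files live alongside the .vhd/.avhd files in the store):

    # Mirror the local vDisk store to the second PVS server's store.
    # Paths and server name are hypothetical; adjust to your environment.
    # /XO skips a file when the copy at the destination is newer, and
    # /R and /W keep retries short so a locked file doesn't hang the job.
    robocopy D:\vDiskStore \\PVS02\vDiskStore *.vhd *.avhd *.pvp `
        /XO /R:3 /W:5 /LOG:C:\Logs\vdisk-sync.log

Drop that in a script and schedule it with Task Scheduler on each PVS server, and your stores stay in sync without any shared storage in the picture.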

This brings us to a number of points worth noting about PVS.  These have been culled from many different sources, and I apologize for not taking the time to cite each one here.  It’s late, and I’m more interested in getting the information out there.  Rest assured, all of this information is available on the Internet; I’m just doing everyone the favor of compiling a fair amount of it into one place in a short list.

1.  By default, Server 2008 R2 will only cache block-level storage in the System Cache.  This means that if you decide to use the CIFS share method to serve up your VHDs, you’ll need to modify the registry so reads from the share get cached as well (see the registry sketch after this list).
2.  If your PVS server is a VM, disable the memory management features of the hypervisor tools.  Due to the way the System Cache works, that memory is not technically marked as “used”, and is thus available to be reclaimed by other processes.  The balloon driver is one such process, and every cached block it steals has to be read from disk all over again.
3.  The minimum required amount of RAM for a PVS box is 2GB, but that’s a floor, not a target.  The System Cache is where your vDisk blocks live, so if you run the server with less than 16GB you are doing yourself and your target VMs a tremendous disservice.
4.  Use multiple NICs in your PVS box, and consider teaming.
5.  One of the benefits that seems to be often overlooked, especially in the XenDesktop world, is that PVS can cut your storage IOPS in half or better, depending on your read/write mix.  Why?  All the read IOPS are served across the network, not across your storage subsystem; your storage only has to handle the write cache.  But wait, the blocks still have to be read from disk by the PVS box, right?  No.  See System Cache: once a block has been read, subsequent requests come straight from the PVS server’s RAM.
6.  Want PVS for your XenApp deployment, but can’t afford Platinum licensing?  Trade up to XenDesktop Enterprise.  The cost is less than you might think, and PVS is included with XD Enterprise.
7.  If your hypervisor is VMware, make CERTAIN you give each of the target VMs a hard disk, even a small one.  Why?  Because if you don’t, VMware won’t put a SCSI controller in the VM, so when the targets try to boot from the vDisk, they’ll bluescreen from not being able to find the system disk.  Brilliant, huh?  (See the PowerCLI sketch after this list.)
8.  Use dynamic disks for your PVS VHDs.  Why not fixed?  Dynamic is bad, right?  Well, sort of.  There is a very in-depth article written all about this (this one I will link to: http://blogs.citrix.com/2012/02/13/fixed-or-dynamic-vdisks/), but the short version is that the performance hits you would normally take are negated by the way PVS works.  Think about it: the VHD doesn’t grow.  You never write to the VHD after it is initially created and the image is taken from the source machine, so file fragmentation is no longer a concern.  Even read performance isn’t a concern, since all the blocks end up in RAM in the System Cache.  So save yourself some disk space (not to mention time!) and use dynamic.
9.  On the topic of disks, make sure the write cache disks in your target VMs are NOT thin provisioned.  They should be eager zeroed thick, or fixed, or…  Why cripple your write cache when the read side of your infrastructure is so speedy?  (The PowerCLI sketch after this list covers this, too.)
10.  If you are PXE booting physical machines from PVS, make sure to configure PortFast (spanning-tree portfast on Cisco gear) on the ports facing the targets.  How can you tell if you need it?  If a cold boot results in the machines not being able to contact the PXE server, but giving them the old three-finger-salute gets them to boot, STP is probably your problem.  On a warm boot the port never goes down, so it skips the STP listening/learning delay and is ready to go before the PXE client times out.
11.  Use multiple PVS servers for redundancy!  Think about it – if your PVS server goes down, none of the target VMs will function anymore!
12.  Use the PVS PXE service instead of configuring options 66 and 67 in your DHCP scope.  Why?  You can only enter a single IP and filename in DHCP.  By running the PXE service on EACH of your PVS servers, you get PXE redundancy, too.  Imagine: you have 4 PVS servers, but 66 & 67 are configured in DHCP.  If the ONE server specified in DHCP goes down, you may as well not have any PVS servers.  Get it?
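
For item 1, here’s a minimal sketch of the registry change.  The value names are real SMB client tuning parameters under LanmanWorkstation, but the specific values are illustrative; verify both against the current Citrix guidance for CIFS-hosted stores before touching production:

    # Extend the SMB client caching lifetimes on the PVS server so blocks
    # read from the CIFS store stay cached longer.  The 10800-second values
    # are illustrative; confirm against current Citrix guidance first.
    $key = 'HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters'
    New-ItemProperty -Path $key -Name FileInfoCacheLifetime     -Value 10800 -PropertyType DWord -Force | Out-Null
    New-ItemProperty -Path $key -Name DirectoryCacheLifetime    -Value 10800 -PropertyType DWord -Force | Out-Null
    New-ItemProperty -Path $key -Name FileNotFoundCacheLifetime -Value 10800 -PropertyType DWord -Force | Out-Null
    # Reboot (or restart the Workstation service) for the change to take effect.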
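
And for items 7 and 9, a hedged PowerCLI sketch.  The VM name pattern and disk size are made up, and it assumes you’ve already run Connect-VIServer; attaching an eager-zeroed write cache disk to each target kills both birds at once, since adding any hard disk is what makes VMware create the SCSI controller:

    # Assumes the PowerCLI snap-in is loaded and Connect-VIServer has run.
    # VM name pattern and capacity are hypothetical; adjust to your environment.
    Get-VM -Name 'XD-TARGET-*' | ForEach-Object {
        # EagerZeroedThick = no thin provisioning (item 9); adding the disk
        # also gives the VM its SCSI controller (item 7).
        New-HardDisk -VM $_ -CapacityGB 5 -StorageFormat EagerZeroedThick
    }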

There are more, but this should be plenty to get you started. I’m sure Geeks 2 and 3 will chime in with some of their favorite best practices as well, making this a more comprehensive list.

Happy Provisioning,
CG1
