Optimizing Hyper-V performance: Advanced fine-tuning

August 14, 2008


Check out the following optimization guidelines for Hyper-V…

Hyper-V Integration Services
Let's start with a simple, common-sense practice: make sure you're running the latest version of Hyper-V's integration services. This simple setup program installs the latest available drivers for supported guest OSes (and some that are not officially supported), which improves performance when VMs make calls to hardware. Installing it should generally be the first thing you do after setting up a guest OS. Keep in mind that updated versions of the integration services may be released between major Hyper-V releases, so it's worth checking for them periodically.
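If you'd rather script this check than click through the guest, here's a minimal sketch in Python that reads the installed integration services version from the guest's registry. The registry path and value name are assumptions on my part and may differ between Hyper-V releases, so treat this as a starting point rather than a definitive method.

    # Minimal sketch: read the integration services version from inside a Windows guest.
    # Assumption: the version is exposed under this registry key on this Hyper-V release.
    import winreg

    KEY_PATH = r"SOFTWARE\Microsoft\Virtual Machine\Auto"  # assumed location of the version value

    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            version, _ = winreg.QueryValueEx(key, "IntegrationServicesVersion")
            print("Integration services version:", version)
    except OSError:
        print("Version value not found - integration services may not be installed.")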

Use synthetic network drivers
Hyper-V supports two types of virtual network drivers: emulated and synthetic. Emulated drivers provide the highest level of compatibility. Synthetic drivers are far more efficient because they use a dedicated VMBus to communicate between the virtual network interface card (NIC) and the root/parent partition's physical NIC. To verify which drivers are in use from within a Windows guest OS, you can use Device Manager.

The type of network adapter installed can be changed by adjusting the properties of the VM. For changes to take effect, in some cases a VM will need to be shut down or rebooted. The payoff is usually worth it, though: If synthetic drivers are compatible, you'll likely see lower CPU utilization and lower network latency.
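If you want to check a batch of guests without opening Device Manager in each one, a sketch along these lines lists the adapters inside a Windows guest via WMI and guesses the driver type from the device name. It uses the third-party "wmi" Python package, and the name matching is an assumption - the exact device names vary between Hyper-V releases.

    # Sketch: list network adapter names inside a Windows guest and guess the driver type.
    # Assumption: synthetic (VMBus) adapters report "Hyper-V" or "Virtual Machine Bus" in
    # the name, while the emulated legacy adapter appears as an Intel 21140 device.
    import wmi  # third-party package: pip install wmi (pulls in pywin32)

    for nic in wmi.WMI().Win32_NetworkAdapter():
        name = nic.Name or ""
        if not name:
            continue
        if "Hyper-V" in name or "Virtual Machine Bus" in name:
            kind = "synthetic"
        elif "21140" in name:
            kind = "emulated (legacy)"
        else:
            kind = "other"
        print(f"{name}: {kind}")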

Increasing network capacity
Network performance is important for various types of applications and services. Whether you're running one VM or a few, you can often get by with a single physical NIC on the host server. But if many VMs compete for resources, or if network-layer security is to be implemented at the physical level, consider adding multiple gigabit Ethernet NICs to the host server. NICs that support features such as TCP offloading can also improve performance by handling protocol overhead at the network interface level. Just be sure that the feature is enabled in the adapter's driver in the root/parent partition.
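As a quick way to see the global TCP settings (including the offload state) in the root/parent partition, a sketch like the following just shells out to netsh on the host; whether offload is actually used still depends on the NIC and its driver.

    # Sketch: display global TCP settings, including offload state, on the host.
    # "netsh int tcp show global" reports items such as Chimney Offload State on
    # Windows Server 2008.
    import subprocess

    print(subprocess.check_output(["netsh", "int", "tcp", "show", "global"], text=True))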

Another key practice is to segregate VMs onto separate virtual switches whenever possible. Each virtual switch can be bound to a different physical NIC port on the host, allowing VMs to be compartmentalized for security and performance reasons. VLAN tagging can also be used to segregate traffic for different groups of VMs that share the same virtual switch.
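To get a quick inventory of the virtual switches defined on a host, you can query Hyper-V's WMI provider. The sketch below (again using the third-party "wmi" package) assumes the root\virtualization namespace and Msvm_VirtualSwitch class of the original Windows Server 2008 provider; later Hyper-V versions expose a different namespace.

    # Sketch: list virtual switches on a Hyper-V host through the WMI provider.
    # Namespace/class names are those of the Windows Server 2008 Hyper-V provider.
    import wmi

    virt = wmi.WMI(namespace=r"root\virtualization")
    for switch in virt.Msvm_VirtualSwitch():
        print(switch.ElementName)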

Minimize OS overhead
A potential drawback of running a full operating system on a virtualization host is OS overhead. You can deploy Hyper-V on a minimal, slimmed-down version of Windows Server 2008 by using the Server Core installation option. This configuration lacks the standard graphical administrative tools, but it also avoids a lot of OS overhead, reduces the security "surface area" of the server, and removes many services and processes that might compete for resources. It's essentially a stripped-down version of the Windows OS that's optimized for specific tasks. You'll need to use remote management tools from another Windows machine to manage Hyper-V, but the performance benefits often make it worth the effort.

Virtual CPUs and multiprocessor cores
Hyper-V supports up to four virtual CPUs for Windows Server 2008 guest OSes and up to two virtual CPUs for various other supported OSes. That raises the question: when should you use this feature? Many applications and services are designed to run in a single-threaded manner, which leads to the common sight of two CPUs on a server each sitting at 50% utilization while a single application is working flat out. At the level of the guest OS and the hypervisor itself, spreading CPU calls across processor cores can be expensive and complicated. The bottom line is that you should assign multiple virtual CPUs only to VMs whose applications and services can actually benefit from them.
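One way to see whether a workload can actually use more than one virtual CPU is to watch per-core utilization inside the guest. A minimal sketch using the third-party psutil package (my choice of tool, not something the article prescribes): a single-threaded application will typically pin one virtual CPU while the others stay mostly idle.

    # Sketch: sample per-CPU utilization inside a guest to spot single-threaded workloads.
    # Requires the third-party psutil package.
    import psutil

    for _ in range(5):
        # cpu_percent(interval=1) blocks for one second and returns one value per CPU
        per_cpu = psutil.cpu_percent(interval=1, percpu=True)
        print(" ".join(f"cpu{i}: {pct:5.1f}%" for i, pct in enumerate(per_cpu)))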

Memory matters
A good rule of thumb is to allocate as much memory to a VM as you would for the same workload running on a physical machine, but that doesn't mean you should waste physical memory. If you have a good idea of how much RAM is required to run the guest OS plus all of the applications and services the workload requires, start there. Then add a small amount of extra memory for virtualization-related overhead (an additional 64 MB is usually plenty).
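As a worked example of that rule of thumb (the numbers are illustrative, not from the article):

    # Sketch: back-of-the-envelope memory sizing for one VM (illustrative numbers).
    guest_os_mb = 512                # baseline for the guest OS itself
    workload_mb = 1536               # what the applications/services would need on physical hardware
    virtualization_overhead_mb = 64  # small cushion for virtualization overhead

    vm_memory_mb = guest_os_mb + workload_mb + virtualization_overhead_mb
    print(f"Assign roughly {vm_memory_mb} MB to this VM")  # 2112 MB in this example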

A lack of available memory can create numerous problems, such as excessive paging within a guest OS. This can be confusing, because it might initially look like a disk I/O performance problem when the root cause is actually that too little memory has been assigned to the VM. It's important to monitor the needs of your applications and services, which is most easily done from within a VM, before you make sweeping changes throughout a data center.
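To confirm from inside the guest that memory pressure, rather than the disk subsystem, is the culprit, a sketch like this checks physical memory and pagefile usage with the third-party psutil package; sustained heavy pagefile use alongside busy disks usually points at an undersized memory allocation.

    # Sketch: check memory and pagefile pressure inside a guest (third-party psutil package).
    import psutil

    vmem = psutil.virtual_memory()
    swap = psutil.swap_memory()

    print(f"Physical memory used: {vmem.percent:.0f}%")
    print(f"Pagefile used:        {swap.percent:.0f}%")

    # Heavy, sustained pagefile use suggests the VM simply has too little memory assigned,
    # even though the symptom may first show up as poor disk I/O performance.
    if swap.percent > 50:
        print("High paging activity - consider assigning more memory to this VM.")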

SCSI and disk performance
Disk I/O performance is a common bottleneck for many types of VMs. You can attach virtual hard disks (VHDs) to a Hyper-V VM using either virtual IDE (Integrated Drive Electronics) or virtual SCSI controllers. IDE controllers are the default because they provide the highest level of compatibility across a broad range of guest OSes, but SCSI controllers reduce CPU overhead and allow a virtual SCSI bus to handle multiple transactions simultaneously. If your workload is disk-intensive, consider using only virtual SCSI controllers if the guest OS supports that configuration. If that's not possible, add additional SCSI-attached VHDs (preferably ones stored on separate physical spindles or arrays on the host server).
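A crude way to compare your IDE- and SCSI-attached VHDs is to time a large sequential write on each volume from inside the guest. The drive letters in the sketch below are placeholders for volumes you've attached through each controller type, and a simple buffered write like this is only a rough indicator, not a proper benchmark.

    # Sketch: time a sequential write to two volumes to compare virtual disk controllers.
    # D: and E: are placeholders for VHDs attached via virtual IDE and SCSI respectively.
    import os
    import time

    def time_write(path, size_mb=256):
        """Write size_mb megabytes sequentially and return throughput in MB/s."""
        chunk = b"\0" * (1024 * 1024)
        start = time.time()
        with open(path, "wb") as f:
            for _ in range(size_mb):
                f.write(chunk)
            f.flush()
            os.fsync(f.fileno())  # make sure the data actually reaches the virtual disk
        elapsed = time.time() - start
        os.remove(path)
        return size_mb / elapsed

    ide_path = r"D:\ide_test.bin"    # placeholder: volume on an IDE-attached VHD
    scsi_path = r"E:\scsi_test.bin"  # placeholder: volume on a SCSI-attached VHD

    print(f"IDE-attached VHD:  {time_write(ide_path):.1f} MB/s")
    print(f"SCSI-attached VHD: {time_write(scsi_path):.1f} MB/s")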

For more information on this, read the source article – http://searchservervirtualization.techtarget.com/tip/0,289483,sid94_gci1321718,00.html
