Saturday, July 24, 2010

The factors that influence “How many Cluster Shared Volumes [CSV] in a Cluster & how many VHDs per CSV”

Cluster Shared Volumes provide many benefits over a traditional cluster physical disk resource; however, designing and implementing CSVs needs significant planning to get the maximum benefit out of them. The questions that quickly come up during planning are: how many CSV LUNs to span across the cluster nodes, how many VHDs per CSV, which VHDs to club together on the same CSV, and what is the optimum size of a CSV LUN. By their very nature, CSVs in redirected access mode [whether planned or unplanned] bring performance down considerably, so we should make sure SMB I/O happens for the minimum possible time. In today’s blog I won’t be answering these questions, as there is no “one size fits all” answer, but we will touch on all the points that help you figure out the right sizing for your IT environment.


1. You may want to place OS and database/log VHDs on separate CSV LUNs. You may even want to place VHDs for different workloads, e.g. SQL database VHDs and Exchange VHDs, on separate CSV LUNs. You can also use CSV in conjunction with pass-through disks if you need to, as mentioned here.

2. Know how many IOPS your CSV LUN can handle. You may want to get an approximate IOPS estimate for all the VHDs you are planning to place on a specific CSV LUN, so that you can take an informed decision on the number of VHDs that CSV can handle [a rough back-of-the-envelope sketch appears after this list].

3. You may want to make sure that the average disk queue length and disk latency values of the CSV LUN stay within the permissible range after placing the VHDs. You may want to check with your SAN vendor, leverage storage performance monitoring tools, and plan the RAID configuration depending on the capabilities of the storage.

4. While calculating IOPS we need to consider not only application-generated IOPS but also maintenance jobs like antivirus scanning, defragmentation, backup, etc. You may also want to consider your backup strategy and whether your backup vendor uses a software or hardware VSS provider. To see why it is recommended to use a VSS hardware provider along with CSV, and the impact of using the software shadow copy provider, check here and here.

5. While deciding the size of the CSV LUN, you also need to consider the time chkdsk will take to finish. In Server 2008 R2, the NTFS self-healing thread and improvements in chkdsk and defrag improve the customer experience. To understand how chkdsk works, and how its run time depends on the volume size, the number and size of files, and the extent of corruption, check here [chkdsk in Server 2008 and above is better than in earlier operating systems, but you still cannot predict how long it will take].

6. While planning the size of the CSV LUN, you also need to plan for VM snapshots, and for how much free space should be left available on the CSV after placing the VHDs [see the sizing sketch after this list].

7. The Cluster Shared Volumes performance counters, along with the other performance counters, will also help you in planning/sizing. You can see how much direct read/write I/O is happening from the nodes, and in the same fashion you can monitor the metadata/redirected I/O for all the placed VMs. You may also want to spread ownership of your CSV LUNs uniformly across all cluster nodes, assuming all nodes have the same compute resources.
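
To make item 2 a bit more concrete, here is a minimal back-of-the-envelope sketch in Python. It is purely illustrative: the per-VHD IOPS figures, the 20% maintenance overhead, and the LUN capability number are assumptions you would replace with values from perfmon and your storage vendor.

```python
# Rough IOPS planning for a single CSV LUN (illustrative numbers only).

# Assumed/estimated peak IOPS per VHD, gathered from perfmon or vendor sizing guides.
vhd_iops = {
    "sql-data.vhd":    900,
    "sql-logs.vhd":    400,
    "exchange-db.vhd": 700,
    "web-os.vhd":      150,
}

maintenance_overhead = 0.20   # headroom for antivirus scans, defrag, backup etc. (assumption)
lun_capable_iops = 3000       # what the SAN/RAID set can sustain (ask your storage vendor)

app_total = sum(vhd_iops.values())
planned_total = app_total * (1 + maintenance_overhead)

print(f"Application IOPS: {app_total}, with maintenance headroom: {planned_total:.0f}")
if planned_total > lun_capable_iops:
    print("This CSV LUN is oversubscribed - move some VHDs to another CSV.")
else:
    print(f"Utilisation: {planned_total / lun_capable_iops:.0%} of the LUN's capability.")
```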
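Similarly, for item 6, a hypothetical sizing calculation can help decide how big the CSV LUN needs to be once snapshots and free space are accounted for. Again this is only a sketch; the snapshot reserve and free-space percentages are assumptions, not recommendations, and should reflect your own policy.

```python
# Rough CSV LUN sizing (illustrative; replace the reserves with your own policy).

vhd_sizes_gb = [80, 150, 200, 60]   # maximum sizes of the VHDs planned for this CSV

snapshot_reserve = 0.15    # space kept aside for VM snapshots/differencing disks (assumption)
free_space_reserve = 0.20  # free space to keep on the volume after placement (assumption)

vhd_total = sum(vhd_sizes_gb)
required = vhd_total * (1 + snapshot_reserve) / (1 - free_space_reserve)

print(f"VHDs: {vhd_total} GB, suggested CSV LUN size: {required:.0f} GB")
```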

I hope this article helps you in planning your CSV design and enables you to reap the maximum benefit from Cluster Shared Volumes and Hyper-V host clustering.

GAURAV ANAND

Blog is based on my Personal understanding of the Technologies mentioned above and information provided is AS IS.

Wednesday, July 21, 2010

How the Dynamic Memory feature of Server 2008 R2 SP1 works

Microsoft first mentioned Dynamic Memory at the 2008 PDC conference, and it seemed it would be part of Server 2008 R2, but the feature was delayed and arrived in Server 2008 R2 SP1, whose public beta was released just a few days back. Microsoft is planning to ship SP1 in the first quarter of next year. Dynamic Memory allows memory on a host machine to be pooled and dynamically distributed to virtual machines as necessary. Memory is dynamically added or removed based on current workloads, and is done so without service interruption. At a high level, Hyper-V Dynamic Memory is a memory management enhancement for Hyper-V, designed for production use, that enables customers to achieve higher consolidation/VM density ratios. We will enable Dynamic Memory today and dig deep into what it is and how it differs from VMware's implementation of memory overcommit.

Let's get started by installing SP1.




Once it is done, you will see the build reported as 7601, Service Pack 1.



However, if you are trying to install SP1 on a Server Core machine, you may have to uninstall Chinese or other language packs, as they are not part of the SP1 package; you will have to use Lpksetup.exe, as mentioned here.



Once you have installed SP1 and rebooted the host machine, you can see the Dynamic Memory settings in the properties of the VM.


Although you can enable Dynamic Memory by selecting the radio button, remember that until you install Windows 7 SP1, Windows Server 2008 R2 SP1, or the updated Hyper-V Integration Components for these or earlier operating systems inside the guest, your VM will only use the startup RAM and will not grow dynamically. So let's say you have a Windows Server 2003 SP2 machine: until you install the latest Hyper-V Integration Components, you will not be able to use the Dynamic Memory feature, and your VM will only get the startup RAM. In other words, the Dynamic Memory settings for the virtual machine can be configured but they don't do anything; a virtual machine that doesn't have the latest Integration Components can only have a fixed amount of memory assigned to it.


Once that is done and the VM is rebooted, you can use the Dynamic Memory feature and it will look like the screenshot below. A key point to remember is that once a virtual machine has been configured to use Dynamic Memory by installing the latest Integration Components in the guest operating system, the virtual machine will no longer work on pre-SP1 hosts and cannot be moved to such hosts. You have to be careful if the VMs are highly available, as in that scenario you need to enable Dynamic Memory on all cluster hosts; otherwise, on hosts without Dynamic Memory support, VMs will have access to the startup RAM only.


To understand more about Hyper-V Dynamic Memory, and before reading ahead, please read this Microsoft whitepaper [recommended].

The two new groups of performance counters for monitoring Dynamic Memory are Hyper-V Dynamic Memory Balancer and Hyper-V Dynamic Memory VM. I enabled those and checked the results, and at the same time I opened Msinfo32 on the host to see how much memory is available to the host after memory has been consumed by the VMs. You can see very well from the perfmon counters that when we choose a buffer we are actually choosing an acceptable memory pressure value for our VM: if my buffer is 80%, the pressure value will be around 20%. I also found that Domain Controller VMs need more startup RAM than recommended by Microsoft in the above-mentioned whitepaper [that's what I found during my testing]. You may notice that the amount of RAM reported by Task Manager in the guest operating system does not decrease when a virtual machine uses less RAM, so all the procedures/rules for correctly configuring a machine for capturing memory dumps will need to be revisited [http://support.microsoft.com/kb/969028 & http://support.microsoft.com/kb/254649], as they depend on the physical RAM available.
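
As a quick illustration of the buffer/pressure relationship described above, here is a tiny Python sketch. It simply encodes the rule of thumb from my observation (buffer 80% gives pressure around 20%); it is not an official formula and the function name is invented for illustration.

```python
# Illustrative only: the approximate buffer -> pressure relationship observed above.

def approx_target_pressure(buffer_percent: float) -> float:
    """Rule of thumb from the observation in this post, not an official Hyper-V formula."""
    return 100.0 - buffer_percent

for buf in (20, 50, 80):
    print(f"buffer {buf}% -> expected pressure around {approx_target_pressure(buf):.0f}%")
```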


Now let's try to understand how Dynamic Memory works from a 100-foot view.



The host has a parent partition that can be configured to provide resources to guest operating systems executing in the child partitions by using virtualization service providers (VSPs). Broadly, the VSPs can be used to multiplex the interfaces to the hardware resources by way of virtualization service clients (VSCs). A dynamic memory virtualization service provider (DMVSP) can be used to adjust the amount of memory accessible to a child partition. Broadly, the DMVSP can commit and de-commit memory to partitions using one or more techniques. The DMVSP can be associated with one or more virtualization service clients, namely dynamic memory virtualization service clients (DMVSCs). Broadly, the DMVSCs provide information to the DMVSP. Each DMVSC can also help commit and de-commit memory from the partition it operates within. The DMVSCs and DMVSP communicate by way of a virtualization bus, the VMBus. The VM worker process works in conjunction with a virtualization infrastructure driver (VID), which can allocate memory to a child partition. Each guest operating system includes a memory manager, which allocates memory to applications at their request and frees the memory when it is no longer needed by the applications. The memory addresses that memory managers actually manipulate are guest physical addresses (GPAs), allocated to the guest operating system by the VID. The guest physical addresses in turn can be backed by system physical addresses (SPAs), i.e., system memory addresses that are managed by the hypervisor. The GPAs and SPAs can be arranged into memory blocks. In operation, when a guest operating system stores data in GPA block 1, the data may actually be stored in a different SPA, such as block 6, on the system [see below].
Memory status for a guest operating system can be obtained, and this memory status identifies how the performance of the guest is affected by the amount of memory that is available. It can be calculated during the runtime of the guest operating system by the DMVSC, and this information can then be sent to the DMVSP. The memory status information can include a series of values which identify the level of memory pressure that the guest OS is experiencing. As the guest operating system becomes more stressed, i.e., as the amount of memory required to efficiently execute the current workload increases, the DMVSC revises the value and communicates it to the DMVSP. Based on the obtained memory status, the amount of guest physical addresses reported to the memory manager of the guest operating system can be adjusted. That is, the DMVSP can adjust the amount of address space that is detected by the memory manager and can commit or de-commit memory based on the memory pressure that the guest OS is experiencing; e.g., if the guest operating system is stressed, memory can be committed.

The memory manager can be configured to support dynamic addition of memory to a running system. The DMVSC can access a hot-add interface of the memory manager and can send a message to the operating system that describes the hot-added GPAs. The memory manager then makes the new memory available to the guest operating system, drivers, and applications. For example, the DMVSC can receive the hot-added memory addresses from the DMVSP after the VID generates the relationships between GPAs and SPAs.
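
To tie the pieces above together, here is a small, purely conceptual Python sketch of the feedback loop: the DMVSC reports pressure, the DMVSP decides whether to commit or de-commit, and committed GPA blocks are backed by whatever SPA blocks the VID hands out. All class/function names, block sizes, and thresholds are invented for illustration; the real DMVSC/DMVSP/VID interaction is far more involved.

```python
# Conceptual model of the Dynamic Memory feedback loop (invented names and thresholds).

import random

class GuestModel:
    """Toy stand-in for a guest whose DMVSC reports memory pressure."""
    def __init__(self, startup_mb: int):
        self.assigned_mb = startup_mb   # GPAs currently visible to the guest memory manager
        self.gpa_to_spa = {}            # GPA block -> SPA block backing it (toy mapping)

    def report_pressure(self, demand_mb: int) -> float:
        # Pressure rises as the workload's demand approaches what is assigned.
        return demand_mb / self.assigned_mb * 100

def balance(guest: GuestModel, demand_mb: int, free_spa_blocks: list, block_mb: int = 128):
    """Toy 'DMVSP': commit a block when pressure is high, de-commit when it is low."""
    pressure = guest.report_pressure(demand_mb)
    if pressure > 90 and free_spa_blocks:
        spa = free_spa_blocks.pop()                     # the 'VID' hands out an SPA block
        gpa = max(guest.gpa_to_spa, default=0) + 1      # hot-added GPA block number
        guest.gpa_to_spa[gpa] = spa
        guest.assigned_mb += block_mb
    elif pressure < 50 and guest.gpa_to_spa:
        gpa, spa = guest.gpa_to_spa.popitem()           # de-commit: return the SPA block
        free_spa_blocks.append(spa)
        guest.assigned_mb -= block_mb
    return pressure

guest = GuestModel(startup_mb=512)
host_free_blocks = list(range(1, 17))                   # 16 spare SPA blocks on the host

for _ in range(10):
    demand = random.randint(300, 900)                   # fluctuating guest workload
    p = balance(guest, demand, host_free_blocks)
    print(f"demand={demand} MB  pressure={p:.0f}%  assigned={guest.assigned_mb} MB")
```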

Microsoft's implementation is different from VMware's in that Microsoft does not use anything similar to the vSphere host swap file; moreover, the guest OS does the paging when it needs to, not the hypervisor, so the guest OS knows very well which pages to swap and which not to. Also, the buffer option available in Dynamic Memory is not available in VMware's implementation. To read more about it, please check here.

Another point to keep in mind is that you may see zero or negative values for the memory buffer in the Hyper-V console; this means that the VM does not have the memory it needs and the guest OS is facing memory pressure.

By default, when we enable Dynamic Memory, some amount of memory is also reserved for the host OS.
I can see an approximate difference of 1 GB between the Available Memory [Hyper-V Dynamic Memory Balancer] perfmon counter and the host's Msinfo32 output.

The product is still in beta and the RTM version may take another 4-6 months, so a lot of things may change by RTM. That said, I will definitely say that Microsoft has done a great job: this was a much-awaited feature, and it gives users an opportunity to better optimize physical resources and increase VM density on the host. To see how Dynamic Memory complements Failover Clustering, check here. I hope you enjoyed the blog and that it gave you some insight into Microsoft's Dynamic Memory implementation, and yes, your valuable time is highly appreciated.


GAURAV ANAND

Blog is based on my Personal understanding of the Technologies mentioned above and information provided is AS IS.