Tuesday, December 22, 2009

Virtualization 2.0 and Intel Virtualization Technology (VT)

Introduction:-
Virtualization is one of the hottest technologies in IT infrastructure today. According to Gartner, “Virtualization is the highest impact trend changing infrastructure and operations through 2012. It will change how you manage, how and what you buy, how you deploy, how you plan, and how you charge.” Several studies by the research firm IDC support this claim. The firm reports that 22 percent of servers today are virtualized and expects that number to grow to 45 percent over the next 12 to 18 months. Another IDC study predicts that the number of logical servers generated on virtualized servers will surpass the number of non-virtualized physical server units by 2010.

Historically limited to mainframe environments, virtualization is now being rapidly adopted on Intel architecture-based platforms, enabled by virtualization software and Intel's advances in both multi-core processing and a suite of virtualization technologies known as Intel Virtualization Technology (Intel VT). The first virtualization implementations on Intel platforms focused primarily on server consolidation (using multiple virtual machines to run multiple applications on one physical server). This consolidation has greatly benefited data centers by increasing server utilization and easing deployment of systems in data center environments.

Virtualization 2.0 focuses on increasing service efficiency through flexible resource management. In the near future, this usage model will become critical to data centers, allowing IT managers to use virtualization to deliver high-availability solutions with the agility to address disaster recovery and real-time workload balancing, so they can respond to the expected and the unexpected.

Consolidation will continue:-
Consolidation, the usage model labeled in Figure 2 as Virtualization 1.0 and the earliest driver for virtualization in traditional IT deployments, came about as data center managers looked for ways to improve server utilization and lessen the impact of rising energy costs. It continues to be a primary and valuable usage model for small and large businesses alike, and it has proven to be a real cost saver. A recent IDC study found that 88 percent of U.S.-based organizations using virtualization for consolidation saved at least 20 percent of capital expenditures (CAPEX) by adopting virtualization technologies, and overall x86 utilization rose from 35 percent before virtualization to 52 percent with it. IT organizations around the world still have much more to gain from further utilization improvements through consolidation.

Driving existing and future virtualization usage models:-
For Virtualization 1.0 where the desired outcome is primarily consolidation, IT needs servers with performance tuned for virtualization. Anticipating these needs, Intel delivered the following technologies:

• Virtualization hardware assist in server processors. Intel introduced this technology in 2005 in both Intel Itanium processors for mission-critical servers and Intel Xeon processors. (A quick way to confirm a host actually exposes it is shown in the sketch after this list.)
• Unparalleled power-efficient performance. Intel Xeon processors based on the Intel Core microarchitecture (introduced in the second quarter of 2006) and Intel's hafnium-based 45nm Hi-k silicon process technology (introduced in the second half of 2007) have set new standards in power-efficient performance for server processors. Current Intel Core microarchitecture-based Intel Xeon processor-based servers achieve the top industry-standard power-efficiency benchmark results (July 2008). By rapidly ramping up processor capacity and performance over the last few years, Intel has been able to fulfill IT needs for servers capable of improving performance while hosting many guests. Today's Intel Xeon processors deliver up to 6.36 times better performance/watt than single-core processors, and quad-core processors provide twice the performance of dual-core processors for better TCO.
• Reliability. Intel Xeon processor-based platforms include best-in-class RAS capabilities that increase data availability and reliability, which is essential for deploying more VMs per server with confidence. These processors provide features designed to improve reliability and recovery speed. Examples include improved Error Correcting Code (ECC) coverage for the system bus and cache, new memory mirroring, fully buffered DIMM technology, and hot-pluggable component support. Intel's X8 Single Device Data Correction (X8 SDDC), for instance, allows IT to handle the failure of an entire DRAM device on the fly by removing a single DRAM from the memory map and recovering its data into a new device.
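
As a quick aside, before relying on hardware assist it is worth confirming that a given host actually advertises it. The snippet below is a minimal sketch, assuming a Linux host with the usual /proc/cpuinfo layout; it only checks that the CPU reports the vmx flag (Intel VT-x) and does not tell you whether the BIOS has left the feature disabled.

# Minimal sketch: check whether a Linux host advertises Intel VT-x (the "vmx"
# CPU flag) before planning a hypervisor deployment. Illustrative only; the
# BIOS may still have the feature disabled even when the flag is present.

def cpu_flags(path="/proc/cpuinfo"):
    """Return the set of CPU feature flags reported by the first core."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

if __name__ == "__main__":
    flags = cpu_flags()
    if "vmx" in flags:
        print("Intel VT-x hardware assist is advertised by the CPU.")
    else:
        print("No vmx flag found; hardware assist may be absent or disabled in the BIOS.")
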
A final enabling ingredient for this first stage of virtualization was Intel's collaboration and continued support in the development of a strong ecosystem. An important part of that support was Intel VT, the suite of virtualization technologies that makes it easier for software providers to develop a robust hypervisor and bring solutions to market faster. This has enabled a wealth of virtualization software that takes advantage of these platform-centric capabilities to better help IT meet its needs.


The transition to Virtualization 2.0:-
The success of consolidation deployments, combined with software evolution and Intel's continued advances in processor performance, energy efficiency, and virtualization technologies, is now enabling many IT organizations to take the next step: using virtualization to improve their operational efficiency. The time has come to ask more of virtualization and give virtualized data centers the opportunity to increase service levels and deliver major gains in business agility. Virtualization 2.0 focuses on precisely that by enabling flexible resource management.

Organizations worldwide are already beginning to take advantage of this model. The 2007 IDC study, for example, showed that 50 percent of all VMware ESX users had adopted the VMotion* capability. This technology enables live migration: moving guests from one physical server to another with no impact on the end user's experience. By giving IT managers the ability to move guests on the fly, live migration makes it easier to balance workloads and to manage planned and unplanned downtime more efficiently.
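
VMotion itself is VMware-specific, but the mechanics of a live migration are easy to picture with any hypervisor that supports it. The sketch below uses the open-source libvirt/KVM stack rather than VMware's API, purely as an illustration; the host URIs and guest name are placeholder assumptions.

# Illustrative live migration with libvirt/KVM (not VMware VMotion itself).
# Hostnames and the guest name are placeholders; requires the libvirt-python package.
import libvirt

SRC_URI = "qemu+ssh://source-host/system"       # assumption: SSH-reachable KVM hosts
DST_URI = "qemu+ssh://destination-host/system"
GUEST = "web-vm-01"                             # hypothetical guest name

src = libvirt.open(SRC_URI)
dst = libvirt.open(DST_URI)

dom = src.lookupByName(GUEST)
# VIR_MIGRATE_LIVE keeps the guest running while its memory is copied across.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)   # (dconn, flags, dname, uri, bandwidth)

print(GUEST + " is now running on " + DST_URI)
src.close()
dst.close()
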
This next phase, focused on flexible resource management, will require an infrastructure that supports:

• Flexible workload management for easier load balancing across different generations of Intel® Xeon® processor-based servers
• I/O tuned for virtualization to enable more efficient migration and greater I/O throughput capacity
• Hardware and software compatibility that enables the new usage models and provides the confidence that ‘it just works’

Flexible workload management:-
Dynamic load balancing requires the ability to easily move workloads across multiple generations of processors without disrupting services. Performing live migrations from a newer generation processor with a newer instruction set to an older generation processor with an older instruction set carries the risk of unexpected behaviors in the guest. In 2007 Intel helped solve this problem by developing Intel Virtualization Technology (Intel VT) FlexMigration. By allowing virtual machine monitor (VMM) software to report a consistent set of available instructions to guest software running within a hypervisor, this technology broadens the live migration compatibility pool across multiple generations of Intel Xeon processors in the data center. This also reduces the challenges to IT in deploying new generations of hardware, enabling faster utilization of servers with new performance capabilities as they become available.
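
FlexMigration is implemented in hardware and exposed through the VMM, but the underlying idea, advertising to guests only the instruction-set features common to every host in the migration pool, can be sketched in a few lines. The per-host feature sets below are hypothetical examples.

# Sketch of the idea behind FlexMigration-style compatibility pools: a cluster
# exposes to guests only the CPU features common to every host, so a guest
# started anywhere can be live-migrated anywhere. Host flag sets are hypothetical.

host_features = {
    "xeon-5100-host": {"sse2", "sse3", "ssse3", "vmx"},
    "xeon-5400-host": {"sse2", "sse3", "ssse3", "sse4_1", "vmx"},
    "xeon-5500-host": {"sse2", "sse3", "ssse3", "sse4_1", "sse4_2", "vmx"},
}

# Features safe to advertise to guests = intersection across all hosts.
baseline = set.intersection(*host_features.values())
print("Guest-visible feature baseline:", sorted(baseline))

# Anything outside the baseline must be masked on the newer hosts.
for host, feats in host_features.items():
    print(host, "masks:", sorted(feats - baseline))
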

Accelerating I/O performance and enabling more efficient migration:-
Virtualization solutions are inherently challenged in the area of network I/O because the guests on a host server all end up sharing the same I/O resources. Moreover, many I/O resources are emulated in software for consistency and decision-making (e.g., network packet routing from the shared I/O resource is often done in software). Intel improves availability through a number of technologies that accelerate I/O performance. This enhances the ability to deploy I/O intensive workloads (beyond simple consolidation) and increases efficiency in Virtualization 2.0 usage models such as load balancing, high availability, and disaster recovery (all of which extensively rely on data transfer over the network).

Intel’s I/O technologies for improving data transfer include:
• Intel Virtualization Technology (Intel VT) for Connectivity (Intel VT-c) provides I/O innovations such as Virtual Machine Device Queues (VMDq), which offloads routine I/O tasks to network silicon to free up CPU cycles for applications and delivers over 2x throughput gains on 10 GbE.
• Intel Virtualization Technology (Intel VT) for Directed I/O (Intel VT-d) delivers scalable I/O performance through direct assignment (e.g., assigning a network interface card to a guest) and enables single-root I/O virtualization (SR-IOV) for sharing devices natively among multiple guest systems.

Centralized storage is a key aspect of Virtualization 2.0 usage models. Load balancing, high availability, and disaster recovery all rely on a VM's ability to migrate efficiently from one physical system to another while having constant access to data storage for continued operation. Simplifying the fabric and providing a cost-effective means to deploy storage area networks (SANs) and LANs are therefore key requirements for Virtualization 2.0. Intel products address this need for more cost-effective SAN and LAN fabric through support of Fibre Channel over Ethernet (FCoE). Intel also provides leadership in important I/O virtualization standards designed to improve I/O and fabric performance throughout the industry: it is working on FCoE through the T11 standards body of the American National Standards Institute (ANSI), and plays important roles in the IEEE work on Enhanced Ethernet and in the PCI-SIG* IOV specifications.
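
Returning to VT-d for a moment: on a Linux host you can quickly check whether the IOMMU needed for direct device assignment is actually active. The sketch below is an assumption about a typical Linux setup (IOMMU groups exposed under /sys/kernel/iommu_groups), not an Intel tool.

# Minimal sketch: check whether an IOMMU (e.g. Intel VT-d) is active on a Linux
# host, a prerequisite for direct device assignment to guests. Assumes a kernel
# that exposes /sys/kernel/iommu_groups when the IOMMU is enabled.
import os

IOMMU_GROUPS = "/sys/kernel/iommu_groups"

groups = os.listdir(IOMMU_GROUPS) if os.path.isdir(IOMMU_GROUPS) else []
if groups:
    print("IOMMU is active: %d IOMMU groups found; "
          "direct device assignment (VT-d) should be possible." % len(groups))
else:
    print("No IOMMU groups found; VT-d may be disabled in the BIOS "
          "or missing from the kernel command line (e.g. intel_iommu=on).")
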

Hardware-software compatibility:-
Through its rich partnerships in the virtualization ecosystem, Intel is able to ensure that its products and those from virtualization providers are well suited to Virtualization 2.0 usage models. A recent example is a 2007 collaboration between Intel and VMware that enhanced how Intel VT FlexMigration and Enhanced VMotion worked together. Intel is also working with several virtualization software solution partners to enable platform capabilities that are important for Virtualization 2.0 usage models such as efficient power management. Usage models such as high availability require headroom build-out so that there are enough backup systems to run the workload in case the primary system or software fails. Efficient power management of this headroom is critical for data centers and Intel is working with its virtualization software partners to enable such power management capabilities as power monitoring and system power-capping through hardware technologies provided on the platform.

Furthering virtualization’s role in the data center:-
On the horizon is Virtualization 3.0, where adaptive continuity takes flexible resource management to the next level. Hardware will provide a more resilient infrastructure and the instrumentation that enables automation software to make balancing decisions in real time. Predictive decision-making will readjust loads automatically based on changing workload requirements and/or data center demands, such as power, server resource changes, software failures, or other factors. Intel VT is thus a path toward an automated infrastructure where workloads can be dynamically moved and scaled across the data center depending on customer demand, resource requirements, and service-level assurance requirements including performance, I/O, and/or power. Virtualization 2.0 is the next step on that path.


References:- Intel, Gartner, IDC

Sunday, December 13, 2009

Pros and Cons of Bundling Hardware and Software (Virtual Computing Environment)

Buying hardware and software together for virtualization will save organizations time and money, according to Cisco Systems, EMC and VMware. The three vendors have formed the Virtual Computing Environment (VCE) coalition, through which they will sell prepackaged bundles of servers, networking equipment and software for virtualization, storage, security and management. Key components of the bundles include the Cisco Unified Computing System and VMware vSphere.
In this post I am trying to answer the question: what are the pros and cons of bundling hardware and software together for virtualization, and will this approach succeed in the market?
Pros:-
1. VCE will enhance partners' ability to recommend and implement preconfigured, tested and validated solutions with one support organization. This should accelerate the adoption of virtualized solutions and move toward the goal of 100% virtualized environments. Partners of these companies will have advanced training and expertise in implementing the solutions.
2. Prepackaged server virtualization bundles might succeed -- at least until the external cloud offerings mature -- in the small and medium-sized business category, where disparate hardware is not as much a factor, and support staff may have lower skill levels. By offering preconfigured bundles, administration becomes the focus -- not architecting the virtual environment. There would be money to be made in support contracts in this area as well.
3. Some experts take a decidedly positive view of VCE. Consider the possible situations below:
  • Environments with no experience and no virtual infrastructure can easily purchase a single SKU and immediately get started. What arrives is a hardware/software combo that guarantees them a certain level of pre-tested service. For this group, much of the risk of implementation failure is transferred to the manufacturer in exchange for a slightly increased "integration" cost.
  • Mature environments with greater experience and existing infrastructure also benefit. For these groups, smart prepackaging enables modularization. Need more horsepower for virtual machines? Buy another single SKU and scale your environment by a known and predefined unit of additional resources.
  • This future is an obvious evolution of how we already buy server hardware today. No one builds their own servers anymore. Instead we select from slightly more expensive, pre-engineered server specs that have been designed for a specific use. As virtualization becomes more mainstream, we'll see just these kinds of hardware plus virtual software combos from our existing and trusted manufacturers.
Cons:-
1. VCE is creating a lot of confusion in the marketplace at this time. There are some worthy competitors to this coalition, and they will not go down without a fight. As consultants, we need to recognize our customers' needs and substitute another technology if it is more appropriate for a given customer. The venture may well be judged successful in the future, but not without challenges as the competitors offer their own solutions.
2. Large-scale, prepackaged bundles like the Virtual Computing Environment will have a tough time gaining influence in large, established datacenters. Bundled hardware and software may not align with established vendor standards or administrative skill sets, and that could reduce operational efficiency.
3. VCE can be a good fit if the requirements of an environment match the VCE offering. VCE is one prepackaged virtualization solution; another is Avaya's Aura System Platform, where the virtualization technology delivered is a customized hypervisor that will not fit within a mainstream virtualized infrastructure. While these scenarios are different, they share the same attribute: prepackaged offerings may introduce dependencies.

So will VCE hamper competition in the virtualization/datacenter market? Will it be appreciated for being a one-stop shopping experience for sales, integration and support? Isn't the concept of a hypervisor supposed to be that it is hardware agnostic? By creating these types of targeted alliances with hardware or software vendors, will there be polarization of supported configurations? These questions are worth discussing, and hopefully time will provide the answers.

Thursday, December 10, 2009

Virtualized Storage - Get all the features of a SAN without paying for a SAN

We all know the benefits of virtualization and consolidation in the server area. Similarly, you can achieve much more productivity, efficiency, TCO and ROI by consolidating your local DAS storage: putting it in a central location and provisioning it from there leads to less wastage and better utilization, along with better storage management, deduplication and capacity management. However, not every enterprise business can afford a SAN. This does not mean they would not benefit from one, but traditional Fibre Channel storage is very costly and requires a dedicated storage area network comprising dual FC HBA cards on all hosts, switches, dedicated storage arrays like HP EVA/XP or EMC CLARiiON/Symmetrix, and trained administrative staff. So what should you propose when you know that the enterprise you are designing a solution for, or perhaps your own enterprise IT cost center, will not fund a budget for a SAN?

Get all the features of a SAN without paying for a SAN

The last few years have seen considerable growth in the use of iSCSI technology as an answer to costly traditional SAN storage. The benefit of iSCSI is that it does not require a dedicated, costly fabric of switches and HBAs, because it utilizes your existing Ethernet network. Data blocks flow over the existing Ethernet infrastructure on a dedicated segment without interrupting regular packet traffic, which leads to cost savings and saves you from allocating a huge budget for SAN infrastructure. You can use iSCSI storage in many scalable ways depending on your needs: for example Microsoft Storage Server acting as a NAS box, StarWind iSCSI target software, HP Lefthand or Dell EqualLogic (the last two fall into the category of premium iSCSI SANs). In this article I am going to talk about the HP Lefthand iSCSI-based SAN and the features and advantages it has over traditional SAN storage. Dell EqualLogic and HP Lefthand have very similar features and are competitors in the market.
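
In practice, attaching a Linux host to an iSCSI target, whether it is a Lefthand or EqualLogic array or a software target, usually comes down to two open-iscsi commands. The sketch below simply wraps them from Python; the portal address and target IQN are placeholders.

# Sketch: discover and log in to an iSCSI target from a Linux initiator using
# the standard open-iscsi tool (iscsiadm). Portal address and IQN are placeholders.
import subprocess

PORTAL = "192.168.10.50"                                   # hypothetical iSCSI portal (cluster VIP)
TARGET = "iqn.2003-10.com.lefthandnetworks:mg1:vol1"       # hypothetical target IQN

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Discover the targets offered by the portal.
run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])

# 2. Log in to the specific target; a new /dev/sdX block device appears afterwards.
run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"])
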


While traditional Fibre Channel SANs require a separate physical infrastructure for storage networks, HP P4000 SANs go wherever your Ethernet network reaches. The use of iSCSI technology, SCSI over standard Internet Protocol (IP), reduces costs and helps IT organizations realize the vision of connecting every server to high-performance, shared, block-based storage. A few features of HP Lefthand P4000 SANs which make them so attractive are listed below:

  • Storage clustering and built-in synchronous mirroring
  • Network RAID
  • Thin provisioning, snapshots and SmartClones
  • Remote Copy
  • Deduplication
  • Performance that increases in lock step with storage capacity
  • An HP P4000 device-specific module (DSM) for the Microsoft Windows Multipath I/O (MPIO) iSCSI plug-in
  • Certified interoperability with Microsoft applications including Microsoft Exchange, Microsoft SQL Server, Microsoft SharePoint and Microsoft Hyper-V
  • Certified integration with VMware vSphere software
  • Support for VMware Site Recovery Manager, to respond quickly and accurately to disasters that are geographic in scope, and for Microsoft Cluster Shared Volumes in multi-site clustering
The best feature of HP Lefthand storage is its ability to scale without reducing performance. With traditional storage you can add disk enclosures but you cannot add controllers, cache or processing power, which means that as you keep adding disk enclosures performance dips; that is not the case for an HP Lefthand SAN.

If at any point you need more capacity, you can provision more storage nodes without affecting performance, in contrast to traditional storage. Each storage node contributes its own disk drives, RAID controller, cache, memory, CPU, and networking resources to the cluster.



Another great feature is LUN thin provisioning together with snapshots and SmartClones, which help deduplicate data, reduce storage consumption and increase storage utilization efficiency. A fully provisioned volume has its blocks pre-allocated, while a thin-provisioned volume has none. Thin provisioning, combined with the ability to scale storage clusters dynamically, allows customers to purchase only the storage they need today and to add more storage to the cluster as application data grows. It eliminates the need for up-front capacity reservations, helping to raise utilization levels, efficiency, and ROI, all while reducing energy consumption and carbon footprint. This feature also helps you provision for disaster recovery and take smart backups using Microsoft VSS, as the HP Lefthand SAN is fully integrated with and capable of taking advantage of VSS functionality. The SmartClone feature uses the snapshot mechanism to clone volumes instantly for use by new virtual or physical servers. It turns any volume or snapshot into one or many full, permanent, read-write volumes. Volume clones use copy-on-write semantics to avoid copying or duplicating data, making SmartClone an instant, space-efficient mechanism that also helps to increase storage utilization and improve storage ROI.
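
The copy-on-write idea behind SmartClone is simple to illustrate: a clone starts out sharing all of its parent's blocks and allocates a private copy of a block only when that block is written. The sketch below is a conceptual toy model of those semantics, not HP's implementation.

# Conceptual sketch of copy-on-write cloning (the idea behind SmartClone-style
# volume clones): a clone shares the parent's blocks until a block is written.
# Not HP's implementation, just an illustration of the semantics.

class Volume:
    def __init__(self, size_blocks):
        self.blocks = {}                      # block index -> data (sparse map)
        self.size_blocks = size_blocks
        self.parent = None

    def clone(self):
        """Instant, space-efficient clone: no data is copied up front."""
        c = Volume(self.size_blocks)
        c.parent = self
        return c

    def read(self, idx):
        if idx in self.blocks:
            return self.blocks[idx]
        return self.parent.read(idx) if self.parent else b"\x00"

    def write(self, idx, data):
        self.blocks[idx] = data               # only now does the clone own this block

base = Volume(size_blocks=1000)
base.write(0, b"golden image boot block")

vm1 = base.clone()                            # instant clone for a new VM
print(vm1.read(0))                            # shared with the parent
vm1.write(0, b"vm1 private data")             # diverges only for the written block
print(vm1.read(0), base.read(0))              # clone and parent now differ on block 0 only
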

Network RAID is built-in synchronous mirroring that protects data and allows configuration of availability levels on a per-volume basis rather than a per-storage-system basis. Network RAID dictates how a logical volume’s blocks are laid out across the cluster, providing reliability that can be configured on a per-volume basis to best meet application and data requirements. Depending on a logical volume’s Network RAID level, 1, 2, 3, or 4 copies of each of the volume’s data blocks are synchronously replicated and striped across the storage nodes in a cluster. Network RAID is a per-volume attribute, so changing a volume’s RAID level is a simple operation that does not cause any interruption in service. When one or more new storage nodes are added to a cluster, Network RAID re-arranges the striping and replication scheme for the volumes to include the new nodes. Unlike traditional storage products, HP P4000 SANs can do this while remaining continuously online.
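
The placement idea is straightforward to picture: each logical block is striped across the cluster and synchronously replicated to one or more of the following nodes, so losing a node still leaves a copy of every block. The sketch below is a deliberate simplification of that scheme, not HP's actual layout algorithm.

# Simplified sketch of Network RAID-style placement: each logical block of a
# volume is striped across the storage nodes and synchronously replicated to the
# next (replicas - 1) nodes. Illustrates the concept, not HP's actual layout.

def placement(block_idx, nodes, replicas):
    """Return the list of nodes holding copies of a given block."""
    start = block_idx % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(replicas)]

nodes = ["node1", "node2", "node3", "node4"]   # hypothetical 4-node cluster
replicas = 2                                   # two synchronous copies of every block

for block in range(6):
    print("block", block, "->", placement(block, nodes, replicas))

# Adding a node changes the layout; the real system re-stripes while staying online.
nodes.append("node5")
print("after adding node5, block 0 ->", placement(0, nodes, replicas))
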

At the end of all this discussion, performance is the most important point, and 10-Gigabit Ethernet connectivity to the storage nodes eliminates the network as a source of bottlenecks. Delivering performance far superior to 2, 4, and even 8 Gbps Fibre Channel networks, optional dual 10 Gigabit Ethernet interfaces on each storage node deliver up to 20 Gbps of storage bandwidth per node. This dispels the myth that FC-based storage always provides better performance than iSCSI-based storage. The most important point, however, is the cost benefit of using iSCSI storage, and you can see here what people are saying about it: iSCSI-based solutions are reported to cost around a quarter of traditional enterprise storage solutions, and there are many other cost benefits, as mentioned here and here. Another interesting option is mixing your FC storage network with an iSCSI storage network to get the best of both worlds: some of your workloads do not really require high performance and throughput and can be directed to lower-cost iSCSI storage, while others run on high-end FC enterprise storage, saving costs and delivering better ROI. I hope you found this article interesting and a pleasure to read; once again, thanks for your time.


For more information on HP Lefthand storage, please refer to the HP website.

GAURAV ANAND

Friday, December 4, 2009

How to scan your IT environment's health and fix problems before they happen

There are many times when we run into a problem and wish we had had some clue about the issue so we could have fixed it, or taken steps to resolve it, before it became a big menace, caused downtime and triggered a chain of events. As it is said, "Precaution is better than cure", and if you did not take precautions, i.e. did not follow best practices while implementing your IT environment, then early diagnosis is better than late. It is always good to have tools handy which let you diagnose your IT environment. Microsoft has recently released a very handy tool, based on their system essentials platform, which can be combined with other tools for early diagnosis and resolution of issues.

The Microsoft IT Environment Health Scanner is a diagnostic tool designed for administrators of small or medium-sized networks (recommended: up to 20 servers and up to 500 client computers) who want to assess the overall health of their network infrastructure. The tool identifies common problems that can prevent your network environment from functioning properly, as well as problems that can interfere with infrastructure upgrades, deployments, and migration. When run from a computer with the proper network access, the tool takes a few minutes to scan your IT environment, perform more than 100 separate checks, and collect and analyze information about the following (a quick self-test sketch follows this list):

• Configuration of sites and subnets in Active Directory
• Replication of Active Directory, the file system, and SYSVOL shared folders
• Name resolution by the Domain Name System (DNS)
• Configuration of the network adapters of all domain controllers, DNS servers, and e-mail servers running Microsoft Exchange Server
• Health of the domain controllers
• Configuration of the Network Time Protocol (NTP) for all domain controllers
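
Two of the checks above, DNS name resolution and time synchronization, are easy to reproduce as a quick ad-hoc self-test while you wait for a full scan. The sketch below uses the third-party ntplib package, and the host names are placeholders.

# Quick self-test sketch for two of the checks above: DNS name resolution and
# NTP time offset. Host names are placeholders; ntplib is a third-party package.
import socket
import ntplib

HOSTS_TO_RESOLVE = ["dc01.example.local", "mail01.example.local"]   # hypothetical
NTP_SERVER = "dc01.example.local"            # the PDC emulator is the usual time source

for host in HOSTS_TO_RESOLVE:
    try:
        print(host, "resolves to", socket.gethostbyname(host))
    except socket.gaierror as err:
        print("DNS problem:", host, "does not resolve:", err)

try:
    offset = ntplib.NTPClient().request(NTP_SERVER, version=3).offset
    print("Clock offset against %s: %.3f s" % (NTP_SERVER, offset))
    if abs(offset) > 300:
        print("Offset exceeds 5 minutes; Kerberos authentication will start failing.")
except Exception as err:
    print("NTP check failed:", err)
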

If a problem is found, the tool describes the problem, indicates its severity, and links you to guidance on the Microsoft Web site (such as a Knowledge Base article) to help you resolve it. You can save or print a report for later review. The tool does not change anything on your computer or your network. It supports Windows Server 2003 Service Pack 2, Windows Server 2008, Windows Vista Service Pack 1 and Windows XP Service Pack 2, but not Windows Server 2008 R2 yet.


After running this tool, if you want more information from a particular server, you may like to run Microsoft MPS Reports on the server(s) to gather more information and log files, depending on the issue you find, as seen here. A few other interesting tools that do a similar job and come in handy are given below:

Microsoft Active Directory Topology Diagrammer reads an Active Directory configuration using ActiveX Data Objects (ADO), and then automatically generates a Visio diagram of your Active Directory and/or your Exchange 200x Server topology. The diagrams include domains, sites, servers, administrative groups, routing groups and connectors, and can be changed manually in Visio if needed.
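
What the Topology Diagrammer automates is essentially a walk of the configuration partition of Active Directory. As a hedged illustration of that, the sketch below enumerates sites and domain controllers with the third-party ldap3 package; the server, credentials and base DN are placeholders for a hypothetical example.local forest.

# Sketch: enumerate Active Directory sites and domain controllers by reading the
# configuration partition over LDAP (the data the Topology Diagrammer diagrams).
# Uses the third-party ldap3 package; server, credentials and base DN are placeholders.
from ldap3 import Server, Connection, SUBTREE

server = Server("dc01.example.local")
conn = Connection(server, user="admin@example.local", password="********", auto_bind=True)

config_base = "CN=Configuration,DC=example,DC=local"

# Sites are 'site' objects under CN=Sites in the configuration partition.
conn.search("CN=Sites," + config_base, "(objectClass=site)",
            search_scope=SUBTREE, attributes=["name"])
print("Sites:", [entry.name.value for entry in conn.entries])

# Domain controllers appear as 'server' objects under each site's Servers container.
conn.search("CN=Sites," + config_base, "(objectClass=server)",
            search_scope=SUBTREE, attributes=["name", "dNSHostName"])
for entry in conn.entries:
    print("DC", entry.name.value, "->", entry.dNSHostName.value)
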


FRSDiag provides a graphical interface to help troubleshoot and diagnose problems with the File Replication Service (FRS). FRS is used to replicate files and folders in the SYSVOL file share on domain controllers and files in Distributed File System (DFS) targets. FRSDiag helps to gather snapshot information about the service, perform automated tests against that data, and compile an overview of possible problems that may exist in the environment.


Group Policy Inventory (GPInventory.exe) allows administrators to collect Group Policy and other information from any number of computers in their network by running multiple Resultant Set of Policy (RSOP) or Windows Management Instrumentation (WMI) queries. The query results can be exported to either an XML or a text file and analyzed in Excel. It can also be used to find computers that have not downloaded and applied new GPOs.

ISA Server Best Practices Analyzer (BPA) is a diagnostic tool that automatically performs specific tests on configuration data collected on the local ISA Server computer from the ISA Server hierarchy of administration COM objects, Windows Management Instrumentation (WMI) classes, the system registry, files on disk, and the Domain Name System (DNS) settings.


BPA2Visio generates a Microsoft Office Visio 2003 or Visio 2007 diagram of your network topology as seen from an ISA Server computer or any Windows computer based on output from the ISA Server Best Practices Analyzer Tool.


SQL Server 2005 Best Practices Analyzer (BPA) gathers data from Microsoft Windows and SQL Server configuration settings. BPA uses a predefined list of SQL Server 2005 recommendations and best practices to determine whether there are potential issues in the database environment. In Windows Server 2008 R2 there are built-in Best Practices Analyzers for the following roles (a small automation sketch follows the list):

• Active Directory Certificate Services
• Active Directory Domain Services
• DNS Server
• Web Server (IIS)
• Remote Desktop Services
• Failover Clustering
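
On Windows Server 2008 R2 these built-in analyzers can also be driven from the BestPractices PowerShell module rather than Server Manager. The sketch below simply shells out to those cmdlets from Python; the DNS model ID shown is an assumption, so list the installed IDs with Get-BpaModel first.

# Sketch: run a built-in Best Practices Analyzer scan on Windows Server 2008 R2
# from Python by shelling out to the BestPractices PowerShell module.
# The model ID below is an assumption; list the real IDs with Get-BpaModel first.
import subprocess

def powershell(command):
    """Run a PowerShell command and return its text output."""
    result = subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True)
    return result.stdout

# 1. Discover which BPA models are installed on this server.
print(powershell("Import-Module BestPractices; Get-BpaModel | Select-Object Id"))

# 2. Run a scan for one role and list the non-informational findings.
MODEL_ID = "Microsoft/Windows/DNSServer"   # assumed ID; verify with Get-BpaModel
print(powershell(
    "Import-Module BestPractices; "
    "Invoke-BpaModel -ModelId '" + MODEL_ID + "'; "
    "Get-BpaResult -ModelId '" + MODEL_ID + "' | "
    "Where-Object { $_.Severity -ne 'Information' } | "
    "Format-Table Title, Severity -AutoSize"))
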
 
These utilities help administrators reduce best-practice violations by scanning one or more roles installed on their servers, reporting the violations, and introducing change management for them. If you have a topology diagram of your Active Directory or your IT environment, it always helps in understanding the problem and the design changes required. So if you have not yet created a topology diagram, this is the time to do it, and don't forget to baseline your servers with the above-mentioned utilities. Thanks once again for your time and for sticking with the blog. Hope you find this helpful.

GAURAV ANAND