Tuesday, December 22, 2009

Virtualization 2.0 and Intel Virtualization Technology (VT)

Virtualization is one of the hottest technologies in IT infrastructure today. According to Gartner, “Virtualization is the highest impact trend changing infrastructure and operations through 2012. It will change how you manage, how and what you buy, how you deploy, how you plan, and how you charge.” Several studies by the research firm IDC support this claim. The firm reports that 22 percent of servers today are virtualized and expects that number to grow to 45 percent over the next 12 to 18 months. Another IDC study predicts that the number of logical servers generated on virtualized servers will surpass the number of non-virtualized physical server units by 2010.

Historically limited to mainframe environments, virtualization’s rapid adoption on Intel architecture-based platforms is being enabled by virtualization software and Intel’s advances in both multi-core processing and a suite of virtualization technologies known as Intel Virtualization Technology (Intel VT). The first virtualization implementations on Intel platforms primarily focused on server consolidation (using multiple virtual machines to run multiple applications on one physical server). This consolidation has greatly benefited data centers by increasing server utilization and easing the deployment of systems in data center environments.

Virtualization 2.0 focuses on increasing service efficiency through flexible resource management. In the near future, this usage model will become absolutely critical to data centers, allowing IT managers to use virtualization to deliver high-availability solutions with the agility to address disaster recovery and real-time workload balancing, so they can respond to the expected and the unexpected.

Consolidation will continue:-
Consolidation, the usage model labeled in Figure 2 as Virtualization 1.0 and the earliest driver for virtualization in traditional IT deployments, came as a result of data center managers looking for ways to improve server utilization and lessen the impact of rising energy costs. This continues to be a primary and valuable usage model for small and large businesses alike. Consolidation using virtualization has proven to be a real cost saver. A recent IDC study found 88 percent of U.S.-based organizations using virtualization for consolidation saved at least 20 percent of capital expenditures (CAPEX) by adopting virtualization technologies. Overall x86 utilization rose from 35 percent before virtualization to 52 percent with virtualization. IT organizations around the world still have much more to gain from further utilization improvements through consolidation.

Driving existing and future virtualization usage models:-
For Virtualization 1.0 where the desired outcome is primarily consolidation, IT needs servers with performance tuned for virtualization. Anticipating these needs, Intel delivered the following technologies:

• Virtualization hardware-assist in server processors. Intel introduced this technology in 2005 in both Intel Itanium processors for mission-critical servers and Intel Xeon processors.
• Unparalleled power-efficient performance. Intel Xeon processors based on the Intel Core microarchitecture (introduced in the second quarter of 2006) and Intel’s hafnium-based 45nm Hi-k silicon process technology (introduced in the second half of 2007) have set new standards in power-efficient performance for server processors. Current Intel Core microarchitecture-based Intel Xeon processor-based servers achieve the top industry-standard power-efficiency benchmark results (July 2008). By rapidly ramping up processor capacity and performance over the last few years, Intel has been able to fulfill IT needs for servers capable of improving performance while hosting many guests. Today’s Intel Xeon processors deliver up to 6.36 times better performance/watt than single-core processors. Quad-core processors also provide twice the performance of dual-core processors for better TCO.
• Reliability. Intel Xeon processor-based platforms include best-in-class RAS capabilities that increase data availability and reliability, which is essential for deploying more VMs per server with confidence. These processors provide features designed to improve reliability and recovery speed. Examples include improved Error Correcting Code (ECC) coverage for the system bus and cache, new memory mirroring, fully buffered DIMM technology, and hot-pluggable component support. Intel’s X8 Single Device Data Correction (X8 SDDC), for instance, allows IT to fix the failure of an entire DRAM device on the fly by removing a single DRAM from the memory map and recovering its data onto a new device.
A final enabling ingredient for this first stage of virtualization was Intel’s collaboration and continued support in the development of a strong ecosystem. An important part of that support was Intel VT–the suite of virtualization technologies that make it easier for software providers to develop a robust hypervisor and bring solutions to market faster. This has enabled a wealth of virtualization software that takes advantage of these platform-centric capabilities and solutions to better help IT meet their needs.

The transition to Virtualization 2.0:-
The success of consolidation deployments, combined with software evolution and Intel’s continued advancements in processor performance, energy efficiency, and virtualization technologies, is now enabling many IT organizations to take the next step: using virtualization to improve their operational efficiency. The time has come to ask more of virtualization and give virtualized data centers the opportunity to increase service levels and deliver major business agility advancements. Virtualization 2.0 focuses precisely on that by enabling flexible resource management.

Organizations worldwide are already beginning to take advantage of this model. The 2007 IDC study, for example, showed that 50 percent of all VMware ESX users had adopted VMotion* capability. This technology enables live migration, moving guests from one physical server to another with no impact on end users’ experience. By giving IT managers the ability to move guests on the fly, live migration makes it easier to balance workloads and manage planned and unplanned downtime more efficiently.
This next phase, focused on flexible resource management, will require an infrastructure that supports:

• Flexible workload management for easier load balancing across different generations of Intel® Xeon® processor-based servers
• I/O tuned for virtualization to enable more efficient migration and greater I/O throughput capacity.
• Hardware and software compatibility that enables the new usage models and provides the confidence that ‘it just works’.

Flexible workload management:-
Dynamic load balancing requires the ability to easily move workloads across multiple generations of processors without disrupting services. Performing live migrations from a newer generation processor with a newer instruction set to an older generation processor with an older instruction set carries the risk of unexpected behaviors in the guest. In 2007 Intel helped solve this problem by developing Intel Virtualization Technology (Intel VT) FlexMigration. By allowing virtual machine monitor (VMM) software to report a consistent set of available instructions to guest software running within a hypervisor, this technology broadens the live migration compatibility pool across multiple generations of Intel Xeon processors in the data center. This also reduces the challenges to IT in deploying new generations of hardware, enabling faster utilization of servers with new performance capabilities as they become available.
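The core idea behind FlexMigration can be sketched in a few lines: the hypervisor computes the intersection of the CPU feature sets across every host in the migration pool and exposes only that common subset to guests, so no guest ever depends on an instruction an older host lacks. The sketch below is purely illustrative (the pool contents and function are invented for this example); the feature names are real CPUID flags, but real hypervisors do this masking in hardware-assisted firmware paths, not in Python.

```python
# Illustrative sketch of the feature-baseline idea behind FlexMigration.
# The hypervisor reports only the intersection of available features,
# so a guest started on a newer host can still migrate to an older one.

def migration_baseline(host_features):
    """Return the feature set common to every host in the pool."""
    pools = iter(host_features.values())
    baseline = set(next(pools))
    for features in pools:
        baseline &= set(features)
    return baseline

# Hypothetical two-generation pool (flag names are real CPUID flags).
pool = {
    "xeon-5100": {"sse2", "sse3", "ssse3", "vmx"},            # 2006-era
    "xeon-5400": {"sse2", "sse3", "ssse3", "sse4_1", "vmx"},  # 2007-era
}

# Guests in this pool would see only the common subset; sse4_1 is hidden.
print(sorted(migration_baseline(pool)))
```

The cost of this approach, as the text implies, is that newer hosts give up their newest instructions while older hosts remain in the pool.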

Accelerating I/O performance and enabling more efficient migration:-
Virtualization solutions are inherently challenged in the area of network I/O because the guests on a host server all end up sharing the same I/O resources. Moreover, many I/O resources are emulated in software for consistency and decision-making (e.g., network packet routing from the shared I/O resource is often done in software). Intel improves availability through a number of technologies that accelerate I/O performance. This enhances the ability to deploy I/O intensive workloads (beyond simple consolidation) and increases efficiency in Virtualization 2.0 usage models such as load balancing, high availability, and disaster recovery (all of which extensively rely on data transfer over the network).
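To see why software-emulated I/O costs CPU cycles, consider the routine sorting work a hypervisor must do when guests share one NIC: every incoming frame has to be classified and queued for the right guest. The toy sketch below (invented names, not real hypervisor code) shows that per-frame work in software; VMDq, described next, moves exactly this classification into the network silicon.

```python
# Toy illustration of software packet routing for a shared NIC:
# the host must inspect each frame's destination MAC and sort it
# into the owning guest's queue, burning CPU cycles per frame.
from collections import defaultdict

def route_frames(frames, mac_to_guest):
    """Sort (dest_mac, payload) frames into per-guest queues in software."""
    queues = defaultdict(list)
    for dest_mac, payload in frames:
        guest = mac_to_guest.get(dest_mac)
        if guest is not None:          # drop frames for unknown MACs
            queues[guest].append(payload)
    return queues

# Hypothetical guests and traffic.
mac_to_guest = {"00:16:3e:00:00:01": "vm1", "00:16:3e:00:00:02": "vm2"}
frames = [("00:16:3e:00:00:01", "pkt-a"),
          ("00:16:3e:00:00:02", "pkt-b"),
          ("00:16:3e:00:00:01", "pkt-c")]

queues = route_frames(frames, mac_to_guest)
print(dict(queues))
```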

Intel’s I/O technologies for improving data transfer include:
• Intel Virtualization Technology (Intel VT) for Connectivity (Intel VT-c) provides unique I/O innovations like Virtual Machine Device Queues (VMDq), which offloads routine I/O sorting tasks to the network silicon to free up CPU cycles for applications and delivers over 2x throughput gains on 10 GbE.
• Intel Virtualization Technology (Intel VT) for Directed I/O (Intel VT-d) delivers scalable I/O performance through direct assignment (e.g., assigning a network interface card to a guest) and enables single-root input/output virtualization (IOV) for sharing devices natively with multiple guest systems.

Centralized storage is a key aspect of Virtualization 2.0 usage models. Usage models like load balancing, high availability, and disaster recovery rely on a VM’s ability to efficiently migrate from one physical system to another while having constant access to data storage for continued operation. Thus, simplifying the fabric and providing a cost-effective means to deploy storage area networks (SANs) and LANs are key requirements for Virtualization 2.0. Intel products address this need for more cost-effective SAN and LAN fabric through support of Fibre Channel over Ethernet (FCoE). Intel also provides leadership in important I/O virtualization standards designed to improve I/O and fabric performance throughout the industry. Intel is working on FCoE through the T11 standards body of the American National Standards Institute (ANSI), as well as playing important roles in the IEEE work on Enhanced Ethernet and the PCI-SIG* IOV specifications.

Hardware-software compatibility:-
Through its rich partnerships in the virtualization ecosystem, Intel is able to ensure that its products and those from virtualization providers are well suited to Virtualization 2.0 usage models. A recent example is a 2007 collaboration between Intel and VMware that enhanced how Intel VT FlexMigration and Enhanced VMotion work together. Intel is also working with several virtualization software partners to enable platform capabilities that are important for Virtualization 2.0 usage models, such as efficient power management. Usage models such as high availability require headroom build-out so that there are enough backup systems to run the workload if the primary system or software fails. Efficient power management of this headroom is critical for data centers, and Intel is working with its virtualization software partners to enable power management capabilities such as power monitoring and system power-capping through hardware technologies provided on the platform.

Furthering virtualization’s role in the data center:-
On the horizon is Virtualization 3.0, where adaptive continuity takes flexible resource management to the next level. Hardware will provide a more resilient infrastructure and the instrumentation that enables automation software to make balancing decisions in real time. Predictive decision-making will readjust loads automatically based on changing workload requirements and/or data center demands, such as power, server resource changes, software failures, or other factors. Intel VT is thus a path toward an automated infrastructure where workloads can be dynamically moved and scaled across the data center depending on customer demand, resource requirements, and service-level assurance requirements including performance, I/O, and/or power. Virtualization 2.0 is the next step on that path.

References:- Intel, Gartner, IDC

Sunday, December 13, 2009

Pros and Cons of Bundling Hardware and Software (Virtual Computing Environment)

Buying hardware and software together for virtualization will save organizations time and money, according to Cisco Systems, EMC and VMware. The three vendors have formed the Virtual Computing Environment (VCE) coalition, through which they will sell prepackaged bundles of servers, networking equipment and software for virtualization, storage, security and management. Key components of the bundles include the Cisco Unified Computing System and VMware vSphere.
In this post, I try to answer the question: what are the pros and cons of bundling hardware and software together for virtualization, and will this approach succeed in the market?
Pros:-
1. VCE will enhance partners' ability to recommend and implement preconfigured, tested and validated solutions with one support organization. This should accelerate the adoption of virtualized solutions and move toward the goal of 100% virtualized environments. Partners of these companies will have advanced training and expertise in implementing the solutions.
2. Prepackaged server virtualization bundles might succeed -- at least until the external cloud offerings mature -- in the small and medium-sized business category, where disparate hardware is not as much a factor, and support staff may have lower skill levels. By offering preconfigured bundles, administration becomes the focus -- not architecting the virtual environment. There would be money to be made in support contracts in this area as well.
3. Some experts take a decidedly positive view of VCE. Consider the following situations:
  • Environments with no experience and no virtual infrastructure can easily purchase a single SKU and immediately get started. What arrives is a hardware/software combo that guarantees them a certain level of pre-tested service. For this group, much of the risk of implementation failure is transferred to the manufacturer in exchange for a slightly increased "integration" cost.
  • Mature environments with greater experience and existing infrastructure also benefit. For these groups, smart prepackaging enables modularization. Need more horsepower for virtual machines? Buy another single SKU and scale your environment by a known and predefined unit of additional resources.
  • This future is an obvious evolution of how we already buy server hardware today. No one builds their own servers anymore. Instead we select from slightly more expensive, pre-engineered server specs that have been designed for a specific use. As virtualization becomes more mainstream, we'll see just these kinds of hardware plus virtual software combos from our existing and trusted manufacturers.
Cons:-
1. VCE is creating a lot of confusion in the marketplace at this time. There are some worthy competitors to this coalition, and they will not go down without a fight. As consultants, we need to recognize our customers' needs and substitute another technology when it is appropriate for the customer. The venture may yet prove successful, but not without challenges as competitors offer their own solutions.
2. Large-scale, prepackaged bundles like the Virtual Computing Environment will have a tough time gaining influence in large, established datacenters. Bundled hardware and software may not be in line with an organization's established vendor standards or administrative skill sets, and that could reduce operational efficiency.
3. VCE can be a good fit if the requirements for each environment match the VCE offering. VCE is one prepackaged virtualization solution. Another type of prepackaged virtualization offering comes from Avaya with the Aura System Platform. In that case, the virtualization technology delivered is a customized hypervisor that will not fit within a mainstream virtualized infrastructure. While these scenarios differ, they share the same attribute: prepackaged offerings may introduce dependencies.

So will VCE hamper competition in the virtualization/datacenter market? Will it be appreciated as a one-stop shopping experience for sales, integration and support? Isn't a hypervisor supposed to be hardware agnostic? By creating these types of targeted alliances with hardware or software vendors, will there be polarization of supported configurations? These questions are worth discussing, and hopefully time will provide the answers.

Thursday, December 10, 2009

Virtualized Storage - Get all the features of a SAN without paying for a SAN

We all know the benefits of virtualization and consolidation on the server side. Similarly, you can achieve much better productivity, efficiency, TCO, and ROI by consolidating your local DAS storage: putting it in a central location and provisioning it from there leads to less wastage and higher utilization, with better storage management, deduplication, and capacity management. However, not every enterprise can afford a SAN. This does not imply that they won't benefit from one, but traditional Fibre Channel storage is very costly and requires a dedicated storage area network comprising dual FC HBA cards on all hosts, switches, dedicated storage arrays like HP EVA/XP or EMC Clariion/Symmetrix, and trained administrative staff. So what should you propose when you know that the enterprise you are designing a solution for, or perhaps your own enterprise IT cost center, will not fund a budget for a SAN?

Get all the features of a SAN without paying for a SAN

The last few years have seen considerable growth in the use of iSCSI technology as an answer to costly traditional SAN storage. The benefit of iSCSI is that it does not require a dedicated, costly fabric of switches and HBAs; it utilizes your existing Ethernet network. Data blocks flow over the existing Ethernet infrastructure on a dedicated network without interrupting other packet traffic, which leads to cost savings and saves you from allocating a huge budget for SAN infrastructure. You can use iSCSI storage in many scalable ways depending on your needs: for example, Microsoft Storage Server acting as a NAS box, StarWind iSCSI target software, HP LeftHand, and Dell EqualLogic (the latter two fall under the category of premium iSCSI SANs). In this article I am going to talk about the HP LeftHand iSCSI-based SAN and the features and advantages it has over traditional SAN storage. Dell EqualLogic and HP LeftHand have very similar features and are competitors in the market.

While traditional Fibre Channel SANs require a separate physical infrastructure for storage networks, HP P4000 SANs go wherever your Ethernet network reaches. The use of iSCSI technology, SCSI over standard Internet Protocol (IP), reduces costs and helps IT organizations realize the vision of connecting every server to high-performance, shared, block-based storage. A few features of HP LeftHand P4000 SANs that make them so attractive are listed below:

  • Storage clustering and built-in synchronous mirroring
  • Network RAID
  • Thin provisioning, snapshots, and SmartClones
  • Remote Copy
  • Deduplication
  • Performance that increases in lockstep with storage capacity
  • HP P4000 device-specific module (DSM) for the Microsoft Windows Multipath I/O (MPIO) iSCSI plug-in
  • Certified for interoperability with Microsoft applications including Microsoft Exchange, Microsoft SQL Server, Microsoft SharePoint, and Microsoft Hyper-V
  • Certified to integrate with VMware vSphere software
  • Works with VMware Site Recovery Manager to respond quickly and accurately to disasters that are geographic in scope, and supports Microsoft Cluster Shared Volumes in multi-site clustering
The best feature of HP LeftHand storage is its scalability without a drop in performance. With traditional storage you can add disk enclosures but cannot add controllers, cache, or processing power, which means that as you keep adding disk enclosures, performance dips; this is not the case for an HP LeftHand SAN.

If at any point you need more capacity, you can provision more storage nodes without affecting performance, as opposed to traditional storage. Each storage node contributes its own disk drives, RAID controller, cache, memory, CPU, and networking resources to the cluster.

Another great feature is LUN thin provisioning and snapshots/SmartClones, which help with deduplication, reducing storage consumption and increasing storage utilization efficiency. A fully provisioned volume has its blocks pre-allocated, while a thin-provisioned volume has none. Thin provisioning, combined with the ability to scale storage clusters dynamically, allows customers to purchase only the storage they need today and to add more storage to the cluster as application data grows. Thin provisioning eliminates the need for up-front capacity reservations, helping to raise utilization levels, efficiency, and ROI, all while reducing energy consumption and carbon footprint. This feature also enables you to provision disaster recovery and take smart backups using Microsoft VSS, as the HP LeftHand SAN is fully integrated with and capable of taking advantage of VSS functionality.

The SmartClone feature uses the snapshot mechanism to clone volumes instantly for use by new virtual or physical servers. The feature turns any volume or snapshot into one or many full, permanent, read-write volumes. Volume clones use copy-on-write semantics to avoid copying or duplicating data, making SmartClone an instant, space-efficient mechanism that also helps to increase storage utilization and improve storage ROI.
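The copy-on-write semantics behind clone volumes can be sketched in a few lines. This is an illustrative toy model, not HP's implementation: a clone starts with no blocks of its own, reads fall through to the parent volume, and space is allocated only when a block is first written.

```python
# Minimal sketch of copy-on-write clone semantics (illustrative only).

class Volume:
    def __init__(self, parent=None):
        self.parent = parent
        self.blocks = {}               # only locally written blocks

    def write(self, lba, data):
        self.blocks[lba] = data        # allocate on first write

    def read(self, lba):
        if lba in self.blocks:
            return self.blocks[lba]
        if self.parent is not None:
            return self.parent.read(lba)   # fall through to parent
        return b"\x00"                 # unwritten thin block reads as zero

    def allocated(self):
        return len(self.blocks)        # space actually consumed

base = Volume()
base.write(0, b"boot")
base.write(1, b"data")

clone = Volume(parent=base)            # instant: no data is copied
clone.write(1, b"DATA")                # only this block consumes new space

print(clone.read(0), clone.read(1), clone.allocated())
```

Note how the clone consumes space for exactly one block despite presenting the parent's full contents, which is why such clones are both instant and space-efficient.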

Network RAID is built-in synchronous mirroring that protects data and allows configuration of availability levels on a per-volume basis rather than a per-storage-system basis. Network RAID dictates how a logical volume’s blocks are laid out across the cluster, providing reliability that can be configured on a per-volume basis to best meet application and data requirements. Depending on a logical volume’s Network RAID level, 1, 2, 3, or 4 copies of each of the volume’s data blocks are synchronously replicated and striped across the storage nodes in a cluster. Network RAID is a per-volume attribute, so changing a volume’s RAID level is a simple operation that does not cause any interruption in service. When one or more new storage nodes are added to a cluster, Network RAID re-arranges the striping and replication scheme for the volumes to include the new nodes. Unlike traditional storage products, HP P4000 SANs can do this while remaining continuously online.
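The striping-plus-replication layout described above can be illustrated with a small sketch. This is not HP's actual placement algorithm, just the general idea: stripe blocks round-robin across the cluster and place each block's synchronous copies on consecutive nodes, so the loss of any single node leaves every block readable elsewhere.

```python
# Illustrative sketch of a Network-RAID-style layout (not HP's algorithm):
# blocks are striped across nodes, and `replicas` copies of each block
# land on consecutive nodes in the cluster.

def place_blocks(num_blocks, nodes, replicas):
    """Map block index -> list of nodes holding a synchronous copy."""
    layout = {}
    for block in range(num_blocks):
        start = block % len(nodes)                     # striping
        layout[block] = [nodes[(start + r) % len(nodes)]
                         for r in range(replicas)]     # replication
    return layout

nodes = ["node-a", "node-b", "node-c"]
layout = place_blocks(num_blocks=6, nodes=nodes, replicas=2)

# With two copies per block, every block survives the loss of any one node.
for failed in nodes:
    assert all(any(n != failed for n in copies)
               for copies in layout.values())
print(layout[0], layout[1])
```

Raising the replica count per volume trades capacity for availability, which is exactly the per-volume choice Network RAID exposes.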

In the end, performance is the most important consideration, and 10-Gigabit Ethernet connectivity to the storage nodes eliminates the network as a source of bottlenecks. Delivering performance superior to 2, 4, and even 8 Gbps Fibre Channel networks, optional dual 10 Gigabit Ethernet interfaces on each storage node provide up to 20 Gbps of storage bandwidth per node. This dispels the myth that FC-based storage always outperforms iSCSI-based storage.

The most important point, however, is the cost benefit of using iSCSI storage: iSCSI-based solutions can cost around a quarter of traditional enterprise storage solutions, and there are many other cost benefits besides. Another interesting option is mixing your FC storage network with an iSCSI storage network to get the best of both worlds: workloads that do not really require high performance and throughput can be directed to lower-cost iSCSI storage, and the rest to high-end FC enterprise storage, saving costs and bringing better ROI. I hope you found this article an interesting read, and once again, thanks for your time.

For more information on HP LeftHand networks, please refer to the HP website.


Friday, December 4, 2009

How to scan your IT Environment Health and fix problems before they happen

There are many times when we run into a problem and wish we had gotten some clue about the issue so we could have fixed it, or taken steps to resolve it, before it became a big menace, caused downtime, and triggered a chain of events. As the saying goes, "Precaution is better than cure", and if you did not take precautions, i.e., did not follow best practices while implementing your IT environment, then early diagnosis is better than late. It is always good to have tools handy that let you diagnose your IT environment. Microsoft has recently released a very handy tool based on its system essentials platform which can be combined with other tools for early diagnosis and resolution of issues.

The Microsoft IT Environment Health Scanner is a diagnostic tool that is designed for administrators of small or medium-sized networks (recommended up to 20 servers and up to 500 client computers) who want to assess the overall health of their network infrastructure. The tool identifies common problems that can prevent your network environment from functioning properly as well as problems that can interfere with infrastructure upgrades, deployments, and migration. When run from a computer with the proper network access, the tool takes a few minutes to scan your IT environment, perform more than 100 separate checks, and collect and analyze information about the following:

  • Configuration of sites and subnets in Active Directory
  • Replication of Active Directory, the file system, and SYSVOL shared folders
  • Name resolution by the Domain Name System (DNS)
  • Configuration of the network adapters of all domain controllers, DNS servers, and e-mail servers running Microsoft Exchange Server
  • Health of the domain controllers
  • Configuration of the Network Time Protocol (NTP) for all domain controllers
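To give a feel for what such a scan does under the hood, here is a trivial, purely illustrative sketch of one class of check, name resolution, reduced to a pass/fail probe. The real tool performs more than 100 checks against AD, DNS, replication, and NTP; this hypothetical function only shows the general shape of an infrastructure health check.

```python
# Hypothetical, minimal health-check probe: does a host name resolve?
import socket

def check_dns(hostname):
    """Return (ok, detail) for a simple name-resolution health check."""
    try:
        address = socket.gethostbyname(hostname)
        return True, f"{hostname} resolves to {address}"
    except socket.gaierror as err:
        return False, f"{hostname} failed to resolve: {err}"

ok, detail = check_dns("localhost")
print("PASS" if ok else "FAIL", "-", detail)
```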


If a problem is found, the tool describes the problem, indicates its severity, and links you to guidance on the Microsoft Web site (such as a Knowledge Base article) to help you resolve it. You can save or print a report for later review. The tool does not change anything on your computer or your network. It supports Windows Server 2003 Service Pack 2, Windows Server 2008, Windows Vista Service Pack 1, and Windows XP Service Pack 2, but not Windows Server 2008 R2 yet.

After running this tool, if you want more information from a particular server, you can run Microsoft MPS Reports on the server(s) to gather more information and log files, depending on the issue you encounter. A few other interesting tools that do a similar job and come in handy are given below:

Microsoft Active Directory Topology Diagrammer reads an Active Directory configuration using ActiveX Data Objects (ADO), and then automatically generates a Visio diagram of your Active Directory and/or your Exchange 200x Server topology. The diagrams include domains, sites, servers, administrative groups, routing groups, and connectors, and can be changed manually in Visio if needed.

FRSDiag provides a graphical interface to help troubleshoot and diagnose problems with the File Replication Service (FRS). FRS is used to replicate files and folders in the SYSVOL file share on domain controllers and files in Distributed File System (DFS) targets. FRSDiag helps to gather snap-shot information about the service, perform automated tests against that data, and compile an overview of possible problems that may exist in the environment.

Group Policy Inventory (GPInventory.exe) allows administrators to collect Group Policy and other information from any number of computers in their network by running multiple Resultant Set of Policy (RSoP) or Windows Management Instrumentation (WMI) queries. The query results can be exported to either an XML or a text file and can be analyzed in Excel. It can also be used to find computers that have not downloaded and applied new GPOs.

ISA Server Best Practices Analyzer (BPA) is a diagnostic tool that automatically performs specific tests on configuration data collected on the local ISA Server computer from the ISA Server hierarchy of administration COM objects, Windows Management Instrumentation (WMI) classes, the system registry, files on disk, and the Domain Name System (DNS) settings.

BPA2Visio generates a Microsoft Office Visio 2003 or Visio 2007 diagram of your network topology as seen from an ISA Server computer or any Windows computer based on output from the ISA Server Best Practices Analyzer Tool.

SQL Server 2005 Best Practices Analyzer (BPA) gathers data from Microsoft Windows and SQL Server configuration settings. BPA uses a predefined list of SQL Server 2005 recommendations and best practices to determine whether there are potential issues in the database environment. In Windows Server 2008 R2, there are built-in Best Practices Analyzers for the following roles:

  • Active Directory Certificate Services
  • Active Directory Domain Services
  • DNS Server
  • Web Server (IIS)
  • Remote Desktop Services
  • Failover Clustering
These utilities enable administrators to reduce best-practice violations by scanning one or more roles installed on their servers, reporting violations to the administrator, and introducing change management for them. A topology diagram of your Active Directory or wider IT environment always helps in understanding problems and the design changes required, so if you have not yet created one, this is the time to do it; and don't forget to baseline your servers with the utilities mentioned above. Thanks once again for your time and for sticking with the blog. I hope you find this helpful.


Tuesday, November 24, 2009

Why Google Chrome operating system won't be a Success

Well, Google has released a preview of its Chrome operating system (OS) and also shared the source code with developers. Google made the early code of Chrome OS available to the open source community and claims external developers will have the same access to the code as internal Google developers. Chrome OS is meant for netbooks, which are seen predominantly as secondary computers. The significant part of this announcement is Google's entry into the operating system market, which is dominated by Microsoft, followed by Apple. It is too early to comment on the success of Chrome OS, as it is due to ship in 2010, but the notable part is that its business design is cloud-based: almost all your user data will live on the Google cloud or the web, for example Gmail, Google Apps, YouTube, Orkut, messengers, Picasa, Twitter, online songs, and so on.

Chrome OS will run with a Linux kernel as its base which will boot directly into the Chrome Web browser and is aimed primarily at netbooks which will run on both x86 and ARM processors. It will not be designed to have local storage; all data will be stored in the cloud. Google will not entice developers to build software to run on the Chrome OS; instead, they want them to build Web apps that will run on any standards-based browser. The three most important features will be “speed, simplicity and security,” according to Google. Announced Chrome OS hardware partners: Acer, Adobe, ASUS, Freescale, Hewlett-Packard, Lenovo, Qualcomm, Texas Instruments, and Toshiba. Netbooks running Chrome OS will be available in the second half of 2010.

The best part here is that Google is providing Chrome OS for free, which means you don't have to pay anything for it beyond the netbook price. Google claims that Chrome OS will boot in a few seconds and users will be ready to browse the web far faster than on netbooks running Windows XP or Windows 7, which is true. What Google does not highlight is that netbooks meant for Chrome OS will be costlier, as they will require solid-state drives [something like flash drives]. Another drawback is that, because these drives are much more expensive, vendors will only provide small capacities [very little data can be kept on the local machine], so chances are slim that you could put Windows XP or Windows 7 on your Chrome netbook if you ever wanted to switch sides [vendor lock-in].

Another point Google makes is that users need not worry about software updates, as they do with Windows operating systems. Well, Microsoft operating systems can also be set to update automatically, so that is nothing new. For more details on the Google Chrome OS security model, please check here: http://blogs.zdnet.com/security/?p=4969

Google says that users need not worry about the data on their machines. That is obvious, because you will hardly keep any data on your netbook; most of it will sit in Google's data centers. What Google misses is users' private data, which they will not want to keep on the web, for obvious reasons.

I personally believe that Google Chrome OS will not be very successful, and here are the reasons:

  • Bad market timing: Windows 7 and Apple's Snow Leopard will have consumed the market by mid-2010.
  • Vendor/hardware lock-in: Users will have to get specific hardware to install Chrome OS on a netbook.
  • Data privacy: Users will not want to store their personal and private data on Google's cloud.
  • No cost benefit to the user: Though Chrome OS will be provided free of cost, the hardware needed to run it will be costlier than today's netbooks.
  • Netbook backlash: This factor is common to all netbook operating system vendors but affects Google the most because of its timing. The consumer backlash against netbooks has already begun, and by the time we see Chrome OS netbooks from Google's hardware partners in the second half of 2010, the netbook phenomenon will either have retreated into the background or morphed into something better. Google will then have to scramble to make Chrome OS available on a wider variety of notebook computers as well as netbooks, which will be a big challenge given that its OS model is web based. Netbooks are very low-end machines, best used in the education sector and not really as business machines. In fact they are more of a secondary machine; you are expected to have a primary machine. The biggest drawback of Chrome OS is that all apps will be web based and you will not be able to easily install applications on your own machine. Netbooks have disappointed, and many consumers regret buying them (verified by a recent NPD survey: http://gigaom.com/2009/06/23/as-small-notebooks-netbooks-largely-dash-expectations/ )

  • Restriction to web apps: Users will not be able to install the native, non-web apps they use every day; Google is promoting only web-based apps.

However, it will be interesting to see the Google brand name behind Linux's entry onto the desktop. But if Google really wants to emerge in this market, it has to go beyond netbooks. I hope you find this interesting, and thanks for your time.



Sunday, November 22, 2009

Information Technology (IT):- A Business Partner, not just a Cost Center (Based on an IDC Survey)

Hello Everybody!
This is the first time I am writing a blog on IT infrastructure, though I have been working in the IT infrastructure domain for quite some time now. My expertise spans the technical, sales and consulting arenas, so I feel this knowledge sharing will be interactive and helpful for all of us who work in IT infrastructure, or more broadly in IT overall.
Today we look at how IT has evolved over time, from being just a cost center to a business partner today. The basis of this information is a survey conducted by IDC across multiple MNCs in various vertical sectors such as insurance, manufacturing, professional services and telecoms in Japan, Australia, India and the People's Republic of China (PRC).
The key findings of this IDC survey include:-
1. IT has evolved from a cost center to a business enabler, and in many organizations today, IT assumes a strategic role of a business partner. With IT ranking high on the corporate agenda, a CIO’s role has also become more strategic and critical for organizations that use technology as a key business differentiator.
2. In countries like Australia, the PRC and India, select industry sectors are witnessing double-digit growth. In order to sustain their growth, bottom line and cost competitiveness in the market, organizations expect to use IT more aggressively to cut operational costs while improving employee efficiency and productivity. CxOs are expected to stand shoulder to shoulder with business leaders in attaining both top-line and bottom-line revenue targets.
3. In order to stay competitive, organizations are leveraging IT to enhance their operational efficiency, reduce the go-to-market period and improve customer service, while keeping costs under control. CxOs are also challenged to simplify IT while keeping the IT ecosystem agile and within budget. The increase in power and cooling costs, and growing environmental concern over rising carbon footprints, are also pushing organizations to embark on "Green" initiatives. Therefore, CIOs are increasingly looking to leverage their IT vendors' global expertise in simplifying IT to meet their business and IT goals.

Business Priorities Driving IT Initiatives:-

IDC's survey shows that CEOs are focused on growing the business by driving product innovation and improving customer care, while ensuring regulatory compliance. Improving organizational productivity also features strongly in the CEOs' agenda. The IT organization is expected to be more nimble and innovative in supporting the business goals. For organizations in India, Australia and Japan, enhancement in customer care service is one of the leading initiatives on the CEOs' agenda. It is worth noting that regulatory compliance is the most important
initiative for CEOs in India.

About 80% of the respondents said that they would use and rely on technology “more aggressively” to achieve their company’s priorities. The country most likely to
use technology more aggressively is Japan, as almost 90% of the companies responded that
they would do so.

Top Priorities for Key IT Projects:-

Across the surveyed countries, respondents said the top priorities of their key IT projects are to support the business, to ensure IT security, and to reduce the total cost of ownership. In addition, projects on virtualization and related tasks of upgrading and consolidating IT systems are also top priorities. The survey also revealed that the business priorities of respondents from India and Japan are more varied, and did not concentrate on a select few.

Growing Complexities in the IT Ecosystem:-

CxOs from the PRC, Australia and Japan, unanimously rated “achieving business agility” as the key challenge, whereas the India CxOs gave similar priorities to “finding and retaining talent” and “assessing and quickly absorbing mergers and acquisitions." CIOs across all four countries agreed that having too many disparate systems and the lack of an integrated IT environment posed the topmost challenge with respect to the impact of technology use on the organization's ability to compete more effectively in the marketplace.

Overcoming Challenges by Simplifying IT:-

The CxOs face significant difficulties in achieving business agility amidst expanding business
requirements. CIOs are adopting service-oriented architecture (SOA) to standardize and automate their business and IT processes in order to simplify their IT. The consolidation of hardware down to a select few vendors and reducing the number of applications are also key initiatives.

Emerging Themes: Green IT, Virtualization and Web 2.0:-

When asked about the technology areas that can affect their organizations' competitiveness, CxOs believe that Green IT, datacenter transformation and the use of new media to reach customers are the top three. It is noteworthy that for emerging economies like India and the PRC, Green IT has the topmost priority among the technology areas affecting competitiveness. Green IT may help organizations in these countries reduce the cost of power and cooling, as phenomenal growth in both countries leads to increased power usage.

Across the region, virtualization is currently applied mainly to servers, storage and
networks, as shown in figure below. The degree of adoption is mixed, indicating that these organizations still have some ways to go towards leveraging virtualization as a tool to improve IT utilization and drive consolidation.

The performance of the IT department and the CIO are measured on how effectively they support the lines of business, with projects being completed on time. Ultimately, all efforts must continue to support and advance the company's strategy in a continuing cycle where IT creates value and projects stay within budget.

Top business priorities for the companies surveyed are to drive business growth through faster product innovation and better customer care, while ensuring regulatory compliance. The companies expect IT to play a more strategic and responsive role in supporting business initiatives.
 Thus the top priorities for CIOs are to support the lines of business, ensure security, and reduce the total cost of IT ownership. They expect to use technology aggressively to help achieve competitiveness, but also recognize the ongoing struggle of simplifying IT systems to achieve business agility. All this has to be achieved without appreciable budget increases.
 CIOs in the survey openly admit that they have disparate and complex IT environments, and they are turning to vendors who understand their business to co-create solutions that are open and flexible. CIOs are measured on how well they support the organization's business priorities and how well they deliver projects on time and within budget. They also said they would like their vendors to help them achieve these goals.
 Green IT is an increasingly important factor on the CxO's agenda due to the potential cost savings or regulatory compliance requirements, and will increase in importance in the near future. CxOs are concerned about reducing energy consumption, reducing environmental impact through recycling and reducing carbon footprint, all driven by cost savings and regulatory pressures.
 Virtualization is widely adopted by organizations in Japan and China, but not so in India and Australia. There are opportunities for companies in India and Australia to achieve better efficiencies by virtualizing more of their servers, storage, clients and applications.
 Vendors are expected to support the CxO’s Green initiatives by providing more eco-friendly IT solutions as well as making the functionalities of existing IT systems environmentally friendly. CxOs expect these to be achieved with no increase in IT budgets.
 The business landscape is changing at such breakneck speed that, in order to stay competitive, organizations are leveraging IT to enhance their operational efficiency, reduce the go-to-market period and improve customer service, while keeping costs under control. As a result, IT infrastructures are becoming more complex, and CxOs are challenged to simplify IT while keeping the IT ecosystem agile and within budget. Due to the increase in power and cooling costs, and the growing push to reduce carbon emissions, organizations are embarking on Green initiatives. To overcome these challenges, CIOs are increasingly looking to leverage their IT vendors' global expertise in simplifying IT to meet their business and IT goals.

Friday, November 20, 2009

The most common poor design that affects server uptime and creates troubleshooting bottlenecks in IT enterprises

The most important thing for an IT enterprise is to ensure its servers remain up and running, and one of the most common causes of server downtime is the blue screen, more commonly known as the Blue Screen of Death. I have seen hundreds of big enterprises with huge mission-critical servers, running applications that must be highly available, built on poor designs. Sometimes these servers are highly available cluster nodes, sometimes single standalone servers; either way, the infamous blue screen can strike at any time. So what happens when a server starts blue screening? What should we do? It is commonly said that 90% of blue screens are caused by buggy drivers and the other 10% by hardware issues. As a first step, you can check what the stop code is and what it refers to. You can update drivers and the BIOS, but you need to be careful about compatibility issues. The next step is to capture a memory dump, analyze it, and prepare a conclusive action plan to resolve the issue. That is where the design flaw appears.

Most IT enterprises do not plan for this scenario, which disrupts server uptime, and sometimes the after-effects of this bad design turn into a troubleshooting bottleneck. Microsoft operating systems offer IT administrators three kinds of memory dumps: small, kernel and full (complete) memory dumps.

A small memory dump is much smaller than the other two kinds of kernel-mode crash dump files. It is exactly 64 KB on a 32-bit machine and 128 KB on a 64-bit machine, and requires only 64 KB/128 KB of pagefile space on the boot drive. This dump file includes the following:

1. The bug check message and parameters, as well as other blue-screen data.
2. The processor context (PRCB) for the processor that crashed.
3. The process information and kernel context (EPROCESS) for the process that crashed.
4. The thread information and kernel context (ETHREAD) for the thread that crashed.
5. The kernel-mode call stack for the thread that crashed. If this is longer than 16 KB, only the topmost 16 KB will be included.
6. A list of loaded drivers.
7. A list of loaded and unloaded modules.
8. The debugger data block. This contains basic debugging information about the system.

This kind of dump file can be useful when space is greatly limited. However, due to the limited amount of information included, errors that were not directly caused by the thread executing at the time of the crash may not be discovered by analyzing this file. If a second bug check occurs and a second small memory dump file is created, the previous file is preserved. Each additional file is given a distinct name containing the date of the crash encoded in the filename. For example, mini022900-01.dmp is the first memory dump file generated on February 29, 2000. A list of all small memory dump files is kept in the directory %SystemRoot%\Minidump.
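As a quick illustration of that naming scheme, here is a small Python sketch that decodes the crash date from a minidump filename. The helper name is my own, not part of any Microsoft tooling:

```python
from datetime import date, datetime

def minidump_date(filename: str) -> date:
    """Decode the crash date from a minidump filename like 'mini022900-01.dmp'.

    The six digits after 'mini' encode month, day and two-digit year; the
    number after the dash is a per-day sequence counter.
    """
    stem = filename[len("mini"):filename.index("-")]
    return datetime.strptime(stem, "%m%d%y").date()
```

Note that strptime's %y directive pivots two-digit years into 1969-2068, which happens to match the era these filenames come from.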

Unfortunately, stack traces reported by WinDbg, especially those involving third-party components, are often incomplete and sometimes not even correct. They can also point to stable drivers when the system failure followed slowly accumulated corruption caused by some intermediate driver or a combination of drivers. In other words, small memory dumps are helpful but not always reliable enough to draw conclusions from.

Kernel dumps almost always capture the relevant information required in case of a blue screen. They do not contain user-mode data, but that is not required most of the time. Since kernel-mode memory on a 32-bit machine is at most about 2 GB, it is easy to capture a kernel dump on 32-bit machines. The real problem lies with 64-bit operating systems: customers generally do not create a big enough page file on the C: drive, or do not have the required free space on the boot drive (C:). The same applies to full memory dumps, which are required when your server hard hangs or freezes.

Here is a public article from Microsoft explaining that even if you point the dump file at the D: or E: drive, you still need free space on the boot volume C:, or at least a page file on C: of at least RAM + 1 MB (for a kernel dump) or 1.5 × RAM (for a full dump). The Microsoft article is not very clear at the beginning and can confuse the audience, but it does help. Although you can change the location of the dump file using Control Panel, Windows always writes the debugging information to the pagefile on the %SYSTEMROOT% partition first, and then moves the dump file to the path specified. A kernel dump is not always as big as your RAM, but you cannot exactly predict its size because it depends on the amount of kernel-mode memory in use by the operating system and drivers, and this becomes more complex in a 64-bit environment.

Please review the following articles to plan your C boot drive and page file size on servers in your Enterprise.

886429 What to consider when you configure a new location for memory dump files in Windows Server 2003

141468 Additional Pagefile Created Setting Up Memory Dump

Another article on how to determine the appropriate page file size for 64-bit versions of Windows Server 2003 or Windows XP

For business-critical 64-bit servers whose business processes require the server to capture physical memory dumps for analysis, the traditional rule is that the page file should be at least the size of physical RAM plus 1 MB, or 1.5 times physical RAM. This makes sure the free disk space on the operating system partition is large enough to hold the OS, hotfixes, installed applications, installed services, a dump file, and the page file. On a server that has 32 GB of memory, drive C: may have to be 86 to 90 GB: 32 GB for the memory dump, 48 GB for the page file (1.5 times the physical memory), 4 GB for the operating system, and 2 to 4 GB for applications, installed services, temp files, and so on. Remember that a driver or kernel-mode service leak could consume all free physical RAM. Therefore, a Windows Server 2003 x64 SP1-based server with 32 GB of RAM could produce a 32 GB kernel memory dump file, where you would expect only a 1 to 2 GB dump file in 32-bit mode. This behavior occurs because of the greatly increased memory pools.
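The arithmetic above can be captured in a tiny sketch. This is only a back-of-the-envelope estimator based on the rule of thumb in this article; the default OS and application allowances are illustrative assumptions, not Microsoft guidance:

```python
def boot_volume_estimate_gb(ram_gb: float, os_gb: float = 4, apps_gb: float = 4) -> float:
    """Rough C: drive sizing for a 64-bit server that must capture a full memory dump.

    Rule of thumb: a page file of 1.5x physical RAM, plus room for a dump file
    that can approach RAM in size, the OS, and installed applications.
    """
    pagefile_gb = 1.5 * ram_gb   # 1.5x RAM for the page file
    dump_gb = ram_gb             # worst case: dump as large as physical RAM
    return dump_gb + pagefile_gb + os_gb + apps_gb
```

For the 32 GB server above this gives roughly 88 GB, squarely inside the 86 to 90 GB range quoted.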

130536 Windows does not save memory dump file after a crash

So if you are already stuck with this issue, or if your IT enterprise has servers configured such that dumps cannot be captured, we can either increase the free space on boot volume C: [something not supported by Microsoft] so the above-mentioned articles can be followed, or reduce RAM by using the /maxmem switch in boot.ini (reducing RAM will not always be feasible in production environments). Another option is to try a live debug session by engaging Microsoft Customer Support; however, the customer needs to set the machine up for live debugging.

None of the above applies to Windows Server 2008, which has a new dedicated dump file feature: http://support.microsoft.com/kb/957517

In Windows Vista and Windows Server 2008, the paging file does not have to be on the same partition as the partition on which the operating system is installed. To put a paging file on another partition, you must create a new registry entry named DedicatedDumpFile. You can also define the size of the paging file by using a new registry entry that is named DumpFileSize. By using the DedicatedDumpFile registry entry in Windows Server 2008 and in Windows Vista, a user can configure a registry setting to store a dump file in a location that is not on the startup volume.

Location: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CrashControl
Name: DedicatedDumpFile
Type: REG_SZ
Value: A dedicated dump file together with a full path

Location: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CrashControl
Name: DumpFileSize
Type: REG_DWORD
Value: The dump file size in megabytes.
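To make those two settings concrete, here is a hedged Python sketch that renders them as a .reg file you could review before importing with regedit. Writing DumpFileSize as a hex REG_DWORD and doubling backslashes per .reg escaping rules are my assumptions; verify against KB 957517 before using:

```python
def crashcontrol_reg(dump_path: str, size_mb: int) -> str:
    """Render a .reg file setting DedicatedDumpFile and DumpFileSize.

    Assumes DumpFileSize is a REG_DWORD (megabytes, written as hex) and
    doubles backslashes in the path per .reg file escaping rules.
    """
    key = r"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CrashControl"
    escaped = dump_path.replace("\\", "\\\\")
    return (
        "Windows Registry Editor Version 5.00\r\n\r\n"
        f"[{key}]\r\n"
        f'"DedicatedDumpFile"="{escaped}"\r\n'
        f'"DumpFileSize"=dword:{size_mb:08x}\r\n'
    )
```

Generating the file rather than editing the registry directly leaves an auditable artifact you can diff and roll back.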

Please review the following articles to plan dump captures on servers in your Enterprise.

How to generate a kernel or a complete memory dump file in Windows Server 2008

Dedicated dump files are unexpectedly truncated to 4 GB on a computer that is running Windows Server 2008 or Windows Vista and that has more than 4 GB of physical memory: http://support.microsoft.com/kb/950858

I hope you guys find this a pleasure to read and that it helps. Many thanks for your time, and stay tuned to the blog for more interesting upcoming topics.


Friday, November 13, 2009

Converged Infrastructure and changing market trends

Well, after Cisco revealed its partnership with VMware and EMC, competitors like HP, IBM, Sun and Dell have to do something to stay ahead in the market. We can all see that a new market is emerging: converged infrastructure, which is about selling everything that is required in one box. The one-stop shop is becoming the trend of the day. With the rise of virtualized and consolidated data centers and cloud computing, demand for this kind of converged infrastructure is growing, and companies are getting ready. Cisco, EMC and VMware announced their new partnership under the umbrella of a new venture company, Alpine. It is very clear that HP has its BladeSystem Matrix solution to counter them, but HP's ProCurve networking business will get a big push from the 3Com acquisition, as it fills a gaping hole (core switching in the data center network) that HP really needed to fill. This will also help customers who were looking for an alternative to Cisco. Cisco presently owns about 52 percent of the networking market, with HP at 11 percent and 3Com at 9 percent; the acquisition will make HP a 20 percent market-share holder and a significant challenger to Cisco. 3Com's big presence in the Chinese market will also be a boon for HP. HP, which has bought more than 30 companies since Chief Executive Mark Hurd arrived in 2005, is a major player in personal computers, servers, IT services (having also acquired EDS) and printers, and has become a one-stop shop for all the servers, storage arrays, switches and software any data center needs.


Sunday, November 8, 2009

Designing disaster tolerant Multi Site High Availability Solution integrating Microsoft failover clustering

Today we are going to talk about the design concepts of a highly available disaster recovery solution based on Microsoft clustering, and the various options available in the market for geographic clusters. The purpose of this article is to give an insight into the design considerations of a geographic cluster. Why do we need a disaster-tolerant solution, and does it cover backup requirements? Disaster tolerance is the ability to restore applications and data services within a reasonable period of time after a disaster. Most think of fire, flood and earthquake as disasters, but a disaster can be any event that unexpectedly interrupts service or corrupts data in an entire data center. It does not remove the need for an effective backup solution for data or application recovery on the cluster: backup solutions enable us to go back in time for restoration, while high availability/clustering solutions ensure that applications and data services stay up and running year-round. The very essence of a geographic cluster is that data on site A needs to be replicated to site B to counter any disaster at site A, and vice versa. The maximum distance between nodes in the cluster determines the data replication and networking technology.

The questions you need to ask before designing are:

1. Which applications are you going to run on the cluster nodes, and what kind of I/O will they do? What kind of data loss or lag can these applications and the business sustain? Many applications can recover from crash-consistent states; very few can recover from out-of-order I/O operation sequences.

2. How far apart will the cluster nodes be, and will the solution consist of two or multiple sites? Depending on the answer to question 1, you have to choose between synchronous and asynchronous replication.
3. What will be the medium of data replication: Fibre Channel, LAN or WAN?
4. Which cluster extension would you like to use for a Microsoft failover clustering multi-site cluster? There are various solutions in the market, such as the HP CLX extension and EMC Cluster Enabler. This can be influenced by the storage solution you have, such as HP EVA/XP or EMC Symmetrix/CLARiiON.
Well, this article focuses on multi-site clustering for Microsoft failover clustering; however, other solutions in the market can be leveraged, for example VMware vCenter Site Recovery Manager, IBM GPFS clusters using PowerHA SystemMirror for AIX Enterprise Edition, Veritas Cluster Server, HP PolyServe and Metrocluster.

In a two-site configuration, the nodes in site A are connected directly to the storage in site A, and the nodes in site B are connected directly to the storage in site B. The nodes in site A can continue without accessing the storage on site B, and vice versa. Storage-fabric replication [HP Continuous Access or EMC SRDF] or host-based software [Double-Take or Microsoft Exchange CCR] provides a way to mirror or replicate data between the sites so that each site has a copy of the data. In Windows Server 2008 failover clustering, the concept of quorum has changed entirely: quorum now translates to a majority of votes. Prior to this, Server 2003 MNS clusters were used; as the name suggests, majority node sets also worked on the concept of node majority, with the added benefit of a file share witness that can provide an additional vote if required to achieve quorum.
The essence of the solution is that we need to replicate our storage LUNs from site A to the storage LUNs of site B. This replication can be synchronous or asynchronous, and it may run from site A LUNs to site B LUNs or from site B LUNs to site A LUNs. This automatic LUN replication behavior can be controlled by a cluster extension: if site A is active, replication runs from site A LUNs to site B LUNs, and vice versa. It is recommended to put the file share witness in a third site.
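To illustrate why the third-site witness matters, here is a toy Python model of the majority-of-votes rule (my own sketch, not Microsoft's actual algorithm). Each node contributes one vote and the file share witness contributes one more:

```python
def has_quorum(votes_online: int, total_votes: int) -> bool:
    """A partition keeps the cluster running only if it holds strictly
    more than half of all configured votes."""
    return votes_online > total_votes // 2

# Four nodes (two per site) plus a file share witness in a third site: 5 votes.
# If the inter-site link fails, the site that can still reach the witness
# holds 3 of 5 votes and survives; the isolated site holds 2 and stops.
```

Without the witness (4 votes total), a clean 2-2 split leaves neither side with a majority, which is exactly the split-brain stalemate the third-site witness avoids.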
*A major improvement to clustering in Windows Server 2008 is that cluster nodes can now reside on different subnets. As opposed to previous versions of clustering (as in Windows Server 2003 and Windows 2000 Server), cluster nodes in Windows Server 2008 can communicate across network routers. This means that you no longer have to stretch virtual local area networks (VLANs) to connect geographically separated cluster nodes, greatly reducing the complexity and cost of setting up and maintaining multi-site clusters. One consideration for subnet-spanning clusters is client response time: client computers cannot see a failed-over workload any faster than the DNS servers can update one another to point clients to the new server hosting that workload. For this reason, VLANs can make sense when keeping workload downtime to an absolute minimum is your highest priority.

Difference between synchronous and asynchronous data replication:
Synchronous replication is when an application performs an operation on one node at one site, and then that operation is not completed until the change has been made on the other sites. So, synchronous data replication holds the promise of no data loss in the event of failover for multi-site clusters that can take advantage of it. Using synchronous, block-level replication as an example, if an application at Site A writes a block of data to a disk mirrored to Site B, the input/output (I/O) operation will not be completed until the change has been made to both the disk on Site A and the disk on Site B. In general, synchronous data replication is best for multi-site clusters that can rely on high-bandwidth, low-latency connections. Typically, this will limit the application of synchronous data replication to geographically dispersed clusters whose nodes are separated by shorter distances. While synchronous data replication protects against data loss in the event of failover for multi-site clusters, it comes at the cost of the latencies of application write and acknowledgement times impacting application performance. Because of this potential latency, synchronous replication can slow or otherwise detract from application performance for your users.
Asynchronous replication is when a change is made to the data on Site A and that change eventually makes it to Site B. Multi-site clusters using asynchronous data replication can generally stretch over greater geographical distances with no significant application performance impact. In asynchronous replication, if an application at Site A writes a block of data to a disk mirrored to Site B, then the I/O operation is complete as soon as the change is made to the disk at Site A. The replication software transfers the change to Site B (in the background) and eventually makes that change to Site B. With asynchronous replication, the data at Site B can be out of date with respect to Site A at any point in time. This is because a node may fail after it has written an application transaction to storage locally but before it has successfully replicated that transaction to the other site or sites in the cluster; if that site goes down, the application failing over to another node will be unaware that the lost transaction ever took place. Preserving the order of application operations written to storage is also an issue with asynchronous data replication. Different vendors implement asynchronous replication in different ways. Some preserve the order of operations and others do not.
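The trade-off described above can be sketched in a few lines of Python. This toy model (my own illustration, not vendor code) shows the one observable difference: with synchronous mirroring the remote copy is always current when a write completes, while with asynchronous mirroring it can lag until the background transfer runs:

```python
class MirroredVolume:
    """Toy two-site mirror: site A takes writes, site B holds the replica."""

    def __init__(self, synchronous: bool):
        self.synchronous = synchronous
        self.site_a: list[str] = []
        self.site_b: list[str] = []
        self.in_flight: list[str] = []  # async changes not yet shipped to B

    def write(self, block: str) -> None:
        self.site_a.append(block)
        if self.synchronous:
            # I/O completes only after both sites have committed the block
            self.site_b.append(block)
        else:
            # acknowledged immediately; shipped to site B in the background
            self.in_flight.append(block)

    def replicate(self) -> None:
        """Background transfer; a disaster before this runs loses in_flight."""
        self.site_b.extend(self.in_flight)
        self.in_flight.clear()
```

Anything still in `in_flight` when site A is lost is exactly the data-loss window that synchronous replication closes, at the cost of adding inter-site latency to every write.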

*Excerpt from Microsoft Windows Server 2008 Multi-Site Clustering Technical Decision-Maker White Paper

The HP CLX extension is responsible for monitoring and recovering disk-pair synchronization at the application level and for offloading data replication tasks from the host, using storage software like Command View EVA/XP. CLX automates the time-consuming, labor-intensive processes required to verify the status of the storage as well as the server cluster, allowing the correct failover and failback decisions to be made to minimize downtime. It automatically manages recovery without human intervention. For more information, please refer to http://h18000.www1.hp.com/products/quickspecs/12728_div/12728_div.pdf
Similarly, we can use the EMC Cluster Enabler extension for 2008 failover clusters; it does the same job for EMC CLARiiON/Symmetrix storage as HP CLX does for EVA/XP storage. For more details on the EMC cluster extension and RecoverPoint solution, please refer to the following link.
So, in a failover scenario, resources will move to the other site and start using the storage at the disaster recovery site, and the LUN replication direction will be reversed by the cluster extension without any manual intervention. The file share witness quorum model helps retain cluster quorum [vote majority] in split-brain scenarios, so resources remain highly available even if network communication breaks between the two sites (as long as one of the nodes can access the file share witness in the third site). I hope this talk has given you an insight into the high-level design of multi-site clusters based on Microsoft failover clustering and what needs to be considered in the design process. Thanks for your time, and stay tuned to the blog for more interesting upcoming topics.

Saturday, November 7, 2009

Cloud Infrastructure

We are all highly excited about the news of Cisco and EMC announcing their joint venture to provide Vblock as the new solution for ready-to-go internal and external cloud infrastructures. It will definitely bring more competition into the cloud infrastructure world, which should benefit customers. However, before I traverse deep into the seas of cloud infrastructure, it is better to understand what a cloud is, because it is a very hyped technology term.

As per the National Institute of Standards and Technology's Information Technology Laboratory, cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is composed of five essential characteristics, three service models, and four deployment models.

Essential Characteristics:

On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service’s provider.
Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).
Resource pooling. The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.
Rapid elasticity. Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.
Measured Service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
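The rapid-elasticity and measured-service characteristics work together: metering supplies a utilization signal, and elasticity is the automatic grow/shrink decision made against it. A minimal illustrative sketch (all names and thresholds here are hypothetical, not from any real cloud API):

```python
def scale_decision(current_instances, cpu_utilization,
                   scale_out_at=0.75, scale_in_at=0.25, min_instances=1):
    """Return the new instance count for one autoscaling step.

    cpu_utilization is the metered average (0.0 to 1.0) across the
    pool; the decision grows the pool under load and shrinks it when
    idle, never dropping below a floor.
    """
    if cpu_utilization > scale_out_at:
        return current_instances + 1                      # scale out
    if cpu_utilization < scale_in_at:
        return max(min_instances, current_instances - 1)  # scale in
    return current_instances                              # steady state

print(scale_decision(4, 0.90))  # busy pool grows: 5
print(scale_decision(4, 0.10))  # idle pool shrinks: 3
print(scale_decision(1, 0.10))  # never below the floor: 1
```

Real providers add hysteresis, cooldown periods, and step sizes, but the core loop is exactly this: meter, compare, resize.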

Service Models:

Cloud Software as a Service (SaaS). The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
Cloud Platform as a Service (PaaS). The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.
Cloud Infrastructure as a Service (IaaS). The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).

Deployment Models:

Private cloud. The cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on premise or off premise.
Community cloud. The cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on premise or off premise.
Public cloud. The cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.
Hybrid cloud. The cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds).
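Cloud bursting, mentioned above as the glue of a hybrid cloud, is easy to sketch: serve demand from the private cloud while it has capacity, and overflow anything beyond that to the public cloud. (A toy illustration with hypothetical names, not any vendor's API.)

```python
def route_requests(demand, private_capacity):
    """Split demand between private and public clouds (cloud bursting).

    Returns (private_load, burst_to_public): the private cloud absorbs
    what it can; only the overflow is sent to the public provider.
    """
    private_load = min(demand, private_capacity)
    burst_to_public = demand - private_load
    return private_load, burst_to_public

print(route_requests(800, 1000))   # fits privately: (800, 0)
print(route_requests(1500, 1000))  # bursts 500 to public: (1000, 500)
```

The economics follow directly: you size the private cloud for baseline load and pay the public provider only for the bursts.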

OK, so now we all know what a cloud is and how confusing the term can be, but we need to keep in mind that this could be the technology of the future and could shape how outsourcing businesses evolve in the coming years. One of the best examples of this today is the Amazon cloud [ http://aws.amazon.com/ec2/ ].

Coming back to where we started, let us assess the different ready-to-go cloud infrastructure solutions available in the market today.

HP BladeSystem Matrix:

The HP BladeSystem Matrix is a converged infrastructure platform designed to simplify the deployment of applications and business services by delivering IT capacity through pools of readily deployed resources. The goal of Matrix is to accelerate provisioning, optimize IT capacity across physical and virtual environments, and ensure predictable delivery and service levels. BladeSystem Matrix integrates proven HP BladeSystem technologies, including Virtual Connect, Insight Dynamics software, Fibre Channel SAN storage such as the EVA4400, and standard ProLiant and Integrity blade servers, with HP Services for streamlined implementation and support.
The use of HP Virtual Connect technology allows blades to be added, replaced, and recovered through software, saving the valuable time of LAN, SAN, and server administrators. Changes can be made in a matter of minutes by one person working at a single console. In a racked, stacked, and wired environment, the same changes might require involvement from four organizations and take weeks to complete, incurring significant labor costs for physically moving resources for re-configuration.
HP claims that the BladeSystem Matrix system is offered at a list price 15 percent lower than the cost of buying the components individually and building your own solution.

BladeSystem Matrix allows you to consolidate Ethernet network equipment by a 4-to-1 ratio, while tripling the number of network interface controllers (NICs) per server. This level of consolidation is made possible by the included HP Virtual Connect Flex-10 Ethernet module. It flexibly allocates the bandwidth of a 10 Gb Ethernet network port across four NIC connections to best meet the needs of your applications and virtual machine channels. With Flex-10 technology at work, you can avoid purchasing additional costly NICs, switches, and cables while concurrently increasing bandwidth. You can either use the EVA4400 that can come along with the BladeSystem Matrix solution or use the BladeSystem Matrix with an existing supported SAN. The BladeSystem Matrix can scale to 1,000 blades or virtual machines, managed as a single domain. Finally, with built-in power capping control, customers can significantly lower their power and cooling costs, to the point of even extending the life of data center facilities.
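The Flex-10 idea of carving one 10 Gb port into four NIC connections comes down to a bandwidth-partitioning check, which can be sketched as follows (a conceptual illustration only, not HP's actual configuration interface):

```python
def allocate_flex_nics(allocations_gb, port_bandwidth_gb=10, nic_count=4):
    """Validate a split of one 10 Gb port across four FlexNIC slots.

    Each entry in allocations_gb is the bandwidth (in Gb/s) assigned to
    one NIC; the total must fit within the physical port.
    """
    if len(allocations_gb) != nic_count:
        raise ValueError(f"expected {nic_count} NIC allocations")
    if sum(allocations_gb) > port_bandwidth_gb:
        raise ValueError("allocations exceed port bandwidth")
    return dict(enumerate(allocations_gb))

# e.g. heavy VM traffic on NIC 0, lighter shares for the rest
print(allocate_flex_nics([6, 2, 1, 1]))  # {0: 6, 1: 2, 2: 1, 3: 1}
```

The point of the hardware feature is that this split is reconfigurable in software, so no extra physical NICs or cables are needed when traffic patterns change.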

IBM CloudBurst:
CloudBurst is self-contained, with software, hardware, storage, networking, and management packaged in one box, and each IBM CloudBurst package includes IBM implementation services so you can make it operational in your environment quickly. It is modular, with the capability to be automatically expanded and scaled. It provides advanced analytics, leveraging historical and real-time data for autonomic operations. And it is virtualized across servers, networks, and storage resources. IBM CloudBurst is a quick start to cloud computing: simply roll it into your data center to quickly see the benefits.

Built on the IBM System x BladeCenter® platform, IBM CloudBurst provides pre-installed, fully integrated service management capabilities across hardware, middleware and applications. Expanded features and benefits for this new release include:

*Delivery of integrated IBM Tivoli Usage and Accounting capability to help enable chargeback for cloud services to optimize system usage.

*Enhanced service management capability delivered via IBM Tivoli Service Automation Manager V7.2 to support new levels of ease of use.

*Integration with Tivoli Monitoring for Energy Management that enables monitoring and management of energy usage of IT and facility resources, which can assist with efforts to optimize energy consumption for higher efficiency of resources, in an effort to help lower operating cost.

*Optional high availability using Tivoli System Automation and VMware High Availability that can provide protection against unplanned blade outages and can help simplify virtual machine mobility during planned changes.

*Optional secure cloud management server with IBM Proventia Virtualized Network Security platform. IBM Proventia protects the CloudBurst production cloud with Virtual Patch, Threat Detection and Prevention, Proventia Content Analysis, Proventia Web Application Security, and Network Policy enforcement.

EMC-CISCO-VMware VBlock:
The Virtual Computing Environment coalition has introduced Acadia, a Cisco and EMC joint venture to build, operate, and transfer Vblock infrastructure to organizations that want to accelerate their journey to pervasive virtualization and private cloud computing while reducing their operating expenses. Acadia expects to begin customer operations in the first quarter of calendar year 2010. Because the Vblock architecture relies heavily on Intel Xeon® processors and other Intel data center technology, Intel will join the Acadia effort as a minority investor to facilitate and accelerate customer adoption of the latest Intel technology for servers, storage, and networking.
The following family of Vblock Infrastructure Packages is being offered by the Virtual Computing Environment coalition:

Vblock 2 is a high-end configuration supporting 3,000 to 6,000 virtual machines that is completely extensible to meet the most demanding IT needs of large enterprises and service providers. Designed for large-scale and 'green field' virtualization, Vblock 2 takes advantage of Cisco's Unified Computing System (UCS), Nexus 1000V and Multilayer Director Switches (MDS), EMC's Symmetrix V-Max storage (secured by RSA), and the VMware vSphere platform.

Vblock 1 is a mid-sized configuration supporting 800 to 3,000 virtual machines, delivering a broad range of IT capabilities to organizations of all sizes. Designed for consolidation and optimization initiatives, Vblock 1 consists of a repeatable model leveraging Cisco's UCS, Nexus 1000V and MDS, EMC's CLARiiON storage (secured by RSA), and the VMware vSphere platform.

Vblock 0 will be an entry-level configuration available in 2010, supporting 300 to 800 virtual machines, for the first time bringing the benefits of private clouds within reach of medium-sized businesses, small data centers or organizations, and test and development use by channel partners, systems integrators, service providers, ISVs, and customers. Vblock 0 also consists of a repeatable model leveraging Cisco's UCS and Nexus 1000V, EMC's Unified Storage (secured by RSA), and the VMware vSphere platform.

So, we see that these are the key players in the cloud infrastructure market, and this new release of the Vblock solution will bring more visibility and adoption to clouds while at the same time benefiting customers by introducing competition among these key players, and hence a race for the better converged solution.

Wednesday, October 21, 2009

Challenges of Virtual Infrastructure Deployment and Management

Almost every IT manager gets excited by the thought of a virtualized IT infrastructure where he can manage and provision his IT needs from his workstation, within minutes instead of days. Virtualization has become a hot-selling and much-talked-about technology, and every IT infrastructure sales consultant claims that it will bring ROI and reduce IT costs, carbon footprint, data center space, and IT administrative work. Virtualization should bring down the time needed to deploy new servers, services, and applications. The market is hot not only for hypervisor-based virtualization solutions but also for hardware virtualization, for example the BladeSystem Matrix solution by HP and CloudBurst by IBM. This also creates a market for management software for virtualized data centers. Some such software is provided by the hypervisor vendors themselves, such as Microsoft's System Center Virtual Machine Manager and VMware's vCenter. However, there are many more tools available from third-party vendors which can help in quick design, deployment, documentation, and management of a virtualized IT infrastructure. To read more about these tools, please refer to the link below.


The biggest challenge in deploying a virtualized solution is to identify what kind of virtualization an organization requires. It could vary from hardware virtualization, application virtualization, and client desktop virtualization to server virtualization. Once that has been identified, you need to consider whether virtualization will be a feasible way to achieve the required goals. For example, if you are considering server virtualization, you must make sure that the applications running on those servers will deliver the required performance in a virtualized environment and will be supported by the application vendor. Another example is hardware virtualization, i.e. consolidating all the hardware, network, and storage resources into one big pool and then allocating and deallocating them based on IT infrastructure requirements. In such a scenario you need to do optimal capacity planning. It has been observed that when all the resources are readily available and you need not go to management for server, hardware, and network acquisitions, your combined infrastructure resource pool exhausts more quickly, because it is very easy to provision new server, storage, or network resources. Similarly, in the case of application virtualization, you need to do an assessment of users' roaming behaviour and your LAN traffic.
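The pool-exhaustion point can be made concrete with a small sketch: track allocations against total capacity and project how many more requests the pool can absorb, which is the essence of the capacity planning step (hypothetical names, illustrative only):

```python
class ResourcePool:
    """Toy pool of consolidated capacity (e.g. vCPUs or GB of storage)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0

    def provision(self, amount):
        """Self-service provisioning: fast, and it burns the pool fast."""
        if self.used + amount > self.capacity:
            raise RuntimeError("pool exhausted: plan more capacity")
        self.used += amount

    def steps_until_exhausted(self, avg_request):
        """Rough forecast: how many more average-sized provisioning
        requests the remaining headroom can absorb."""
        return (self.capacity - self.used) // avg_request

pool = ResourcePool(capacity=100)
pool.provision(40)  # easy self-service requests arrive quickly...
pool.provision(40)
print(pool.steps_until_exhausted(10))  # ...and only 2 average requests remain
```

Tracking a forecast like this (rather than just current usage) is what turns "the pool ran out" surprises into a planned acquisition cycle.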

Even when you have deployed a virtual infrastructure, the bigger challenge that lies ahead is its effective management. Enterprises need to make sure that their infrastructure management team is trained on virtualization technologies. They need to baseline their virtualized environment and do optimal future capacity planning and security hardening. The key to a successful virtualized solution is to identify why you need to virtualize and what your goals are.