Tuesday, November 24, 2009

Why the Google Chrome Operating System Won't Be a Success

Well, Google has released a preview of its Chrome operating system (OS) and has also shared the source code with developers. Google made the early code of Chrome OS available to the open source community and claims that external developers will have the same access to the code as internal Google developers. Chrome OS is meant for netbooks, which are seen predominantly as secondary computers. The significant part of this announcement is that it marks Google's entry into the operating system market, which is very much dominated by Microsoft, followed by Apple. Although it is too early to comment on the success of Chrome OS, since it is only due to ship in 2010, its defining characteristic is that its design is cloud based: almost all of your user data will live on Google's cloud or on the web, for example Gmail, Google Apps, YouTube, Orkut, messengers, Picasa, Twitter, online music and so on.



Chrome OS will run with a Linux kernel as its base, will boot directly into the Chrome web browser, and is aimed primarily at netbooks running both x86 and ARM processors. It is not designed around local storage; all data will be stored in the cloud. Google will not entice developers to build software to run on Chrome OS; instead, it wants them to build web apps that will run on any standards-based browser. The three most important features will be "speed, simplicity and security," according to Google. Announced Chrome OS hardware partners include Acer, Adobe, ASUS, Freescale, Hewlett-Packard, Lenovo, Qualcomm, Texas Instruments, and Toshiba. Netbooks running Chrome OS will be available in the second half of 2010.


The best part here is that Google is providing Chrome OS for free, which means you don't have to pay anything for it beyond the netbook price. Google claims that Chrome OS will boot in a few seconds and that users will be ready to browse the web much sooner than on netbooks running Windows XP or Windows 7, which is true. What Google does not highlight is that netbooks meant for Chrome OS will be costlier, because they will require solid-state drives [something like flash drives]. Another drawback is that, since these drives are much more expensive, vendors will ship only small capacities [very little data can be kept on the local machine], so the chances are slim that you could install Windows XP or Windows 7 on your Chrome netbook if you ever wanted to switch sides [vendor lock-in].


Another point that Google makes is that users need not worry about software updates, as they do with Windows operating systems. Well, Microsoft operating systems can also be set to update automatically, so that is nothing new. For more details on the Google Chrome OS security model, please check here: http://blogs.zdnet.com/security/?p=4969


Google says that users need not worry about the data on their machines. That is obvious, because you will hardly keep any data on your netbook; most of it will sit in Google's data centers. What Google misses is the question of users' private data, which they will not want to keep on the web, for obvious reasons.

I personally believe that Google Chrome OS will not be very successful, and here are the reasons:

  • Bad market timing: Windows 7 and Apple's Snow Leopard will have captured the market by mid-2010.
  • Vendor/hardware lock-in: Users will have to buy specific hardware to run Chrome OS on a netbook.
  • Data privacy: Users will not want to store their personal and private data on the Google cloud.
  • No cost benefit to the user: Though Chrome OS will be provided free of cost, the hardware needed to run it will be costlier than today's netbooks.
  • Netbook backlash: This factor is common to all operating system vendors targeting netbooks, but it affects Google the most because of its timing. The consumer backlash against netbooks has already begun, and by the time we see Chrome OS netbooks from Google's hardware partners in the second half of 2010, the netbook phenomenon will either have retreated into the background or morphed into something better. Google will then have to scramble to make Chrome OS available on a wider variety of notebook computers as well as netbooks, which will be a big challenge for Google because its OS model is web based. Netbooks are very low-end machines, best suited to the education sector rather than to business use; in fact, they are more of a secondary machine, and you are expected to have a primary one. The biggest drawback of Chrome OS is that all apps will be web based, so you will not easily be able to install applications on your own machine. Netbooks are terrible, and a lot of consumers regret buying them (verified by a recent NPD survey: http://gigaom.com/2009/06/23/as-small-notebooks-netbooks-largely-dash-expectations/ )

  • Restriction to web apps: Users will not be able to install the native, non-web apps they use every day, as Google is only promoting web-based apps.


However, it will be interesting to see the Google brand name behind Linux's entry onto the desktop. But if Google really wants to succeed in this market, it has to go beyond netbooks. I hope you find this interesting, and thanks for your time.


http://code.google.com/chromium/

GAURAV ANAND

Sunday, November 22, 2009

Information Technology (IT): A Business Partner, Not Just a Cost Center (Based on an IDC Survey)

Hello Everybody!
This is the first time I am writing a blog post on IT infrastructure, though I have been working in the IT infrastructure domain for quite some time now. My expertise spans the technical, sales and consulting arenas, so I feel this knowledge sharing will be more interactive and more helpful for all of us who work in the IT infrastructure domain, or more broadly in IT overall.
Today let us look at how IT has evolved over time, from being just a cost center to being a business partner today. The basis for this information is a survey conducted by IDC across multiple MNCs in various vertical sectors such as insurance, manufacturing, professional services and telecoms, in Japan, Australia, India and the People's Republic of China (PRC).
The key findings of this IDC survey include:
1. IT has evolved from a cost center to a business enabler, and in many organizations today, IT assumes a strategic role of a business partner. With IT ranking high on the corporate agenda, a CIO’s role has also become more strategic and critical for organizations that use technology as a key business differentiator.
2. In countries like Australia, the PRC and India, select industry sectors are witnessing double-digit growth. In order to sustain their growth, bottom line and cost competitiveness in the market, organizations are expecting to use IT more aggressively to cut down operational costs while improving employee efficiency and productivity. CxOs are expected to join shoulders with business leaders in attaining both top-line and bottom-line revenue targets.
3. In order to stay competitive, organizations are leveraging IT to enhance their operational efficiency, reduce the go-to-market period and improve customer service, while keeping costs under control. CxOs are also challenged to simplify IT while keeping the IT ecosystem agile and within budget. The increase in power and cooling costs, and the growing environmental concerns of a rising carbon footprint, are also pushing organizations to embark on "Green" initiatives. Therefore, CIOs are increasingly looking to leverage their IT vendors' global expertise in simplifying IT to meet their business and IT goals.

Business Priorities Driving IT Initiative:-


IDC's survey shows that CEOs are focused on growing the business by driving product innovation and improving customer care, while ensuring regulatory compliance. Improving organizational productivity also features strongly in the CEOs' agenda. The IT organization is expected to be more nimble and innovative in supporting the business goals. For organizations in India, Australia and Japan, enhancement of customer care service is one of the leading initiatives on the CEOs' agenda. It is worth noting that regulatory compliance is the most important initiative for CEOs in India.

About 80% of the respondents said that they would use and rely on technology "more aggressively" to achieve their company's priorities. The country most likely to use technology more aggressively is Japan, where almost 90% of the companies responded that they would do so.

Top Priorities for Key IT Projects:-

Across the surveyed countries, respondents said the top priorities of their key IT projects are to support the business, to ensure IT security, and to reduce the total cost of ownership. In addition, projects on virtualization and related tasks of upgrading and consolidating IT systems are also top priorities. The survey also revealed that the business priorities of respondents from India and Japan are more varied, and did not concentrate on a select few.
Growing Complexities in the IT Ecosystem:-

CxOs from the PRC, Australia and Japan, unanimously rated “achieving business agility” as the key challenge, whereas the India CxOs gave similar priorities to “finding and retaining talent” and “assessing and quickly absorbing mergers and acquisitions." CIOs across all four countries agreed that having too many disparate systems and the lack of an integrated IT environment posed the topmost challenge with respect to the impact of technology use on the organization's ability to compete more effectively in the marketplace.

Overcoming Challenges by Simplifying IT:-

The CxOs face significant difficulties in achieving business agility amidst expanding business requirements. CIOs are adopting service-oriented architecture (SOA) to standardize and automate their business and IT processes in order to simplify their IT. The consolidation of hardware down to a select few vendors and reducing the number of applications are also key initiatives.

Emerging Themes: Green IT, Virtualization and Web 2.0:-

When asked about the technology areas that can affect their organizations' competitiveness, CxOs believe that Green IT, datacenter transformation and the usage of new media to reach customers are the top three technology areas that will define their competitiveness. It is noteworthy that for emerging economies like India and the PRC, Green IT has the topmost priority among technology areas that can affect the organizations' competitiveness. Green IT may help organizations in these countries reduce the cost of power and cooling, as phenomenal growth in both countries leads to an increase in power usage.

Across the region, virtualization is currently applied mainly to servers, storage and networks. The degree of adoption is mixed, indicating that these organizations still have some way to go toward leveraging virtualization as a tool to improve IT utilization and drive consolidation.

The performance of the IT department and the CIO are measured on how effectively they support the lines of business, with projects being completed on time. Ultimately, all efforts must continue to support and advance the company's strategy in a continuing cycle where IT creates value and projects stay within budget.
CONCLUSION:-

Top business priorities for the companies surveyed are to drive business growth through faster product innovation and better customer care, while ensuring regulatory compliance. The companies expect IT to play a more strategic and responsive role in supporting business initiatives.
Thus the top priorities for CIOs are to support the lines of business, ensure security, and reduce the total cost of IT ownership. They expect to use technology aggressively to help them achieve competitiveness, but also recognize the ongoing struggle of simplifying IT systems to achieve business agility. All of this has to be achieved without appreciable budget increases.
CIOs in the survey openly admit that they have a disparate and complex IT environment, and they are turning to vendors who understand their business to co-create solutions that are open and flexible. CIOs are measured on how well they support the organization's business priorities and how well they deliver these projects on time and within budget. They also said they would like their vendors to help them achieve these goals.
 Green IT is an increasingly important factor on the CxO's agenda due to the potential cost savings or regulatory compliance requirements, and will increase in importance in the near future. CxOs are concerned about reducing energy consumption, reducing environmental impact through recycling and reducing carbon footprint, all driven by cost savings and regulatory pressures.
 Virtualization is widely adopted by organizations in Japan and China, but not so in India and Australia. There are opportunities for companies in India and Australia to achieve better efficiencies by virtualizing more of their servers, storage, clients and applications.
 Vendors are expected to support the CxO’s Green initiatives by providing more eco-friendly IT solutions as well as making the functionalities of existing IT systems environmentally friendly. CxOs expect these to be achieved with no increase in IT budgets.
 The business landscape is changing at such break-neck speed that in order to stay competitive, organizations are leveraging on IT to enhance their operational efficiency, reduce the go-to-market period and improve customer service, while keeping costs under control. As a result, IT infrastructures are becoming more complex, and CxOs are challenged to simplify IT while keeping the IT ecosystem agile and within budget. Due to the increase in power and cooling costs, and the growing push to reduce carbon emissions, organizations are embarking on Green initiatives. To overcome these challenges, CIOs are increasingly looking to leverage their IT vendors' global expertise in simplifying IT to meet their business and IT goals.








Friday, November 20, 2009

The most common poor design that affects server uptime and creates troubleshooting bottlenecks in IT enterprises

The most important thing for an IT enterprise is to ensure that its servers remain up and running, and one of the most common causes of server downtime is the blue screen, more commonly known as the Blue Screen of Death. I have seen hundreds of big enterprises with huge, mission-critical servers running applications that need to be highly available, yet built on poor design. Sometimes these servers are highly available cluster nodes and sometimes single stand-alone servers, but the famous Blue Screen of Death can strike any of them at any time. So what happens if a server starts blue screening, and what should we do? It is often said that about 90% of blue screens are caused by buggy drivers and the other 10% by hardware issues. As a first step, you can check what the stop code is and what that stop code refers to. You can update drivers and the BIOS, but you need to be careful about compatibility issues. The next step is to grab a memory dump so that, after analysis, you can prepare a conclusive action plan to resolve the issue. That is where the design flaw appears.



Most IT enterprises do not plan for this scenario, which always leads to disruption of server uptime, and sometimes the after-effects of this bad design create a troubleshooting bottleneck. Microsoft operating systems let an IT administrator configure three kinds of memory dumps: small, kernel and complete (full) memory dump.


A small memory dump is much smaller than the other two kinds of kernel-mode crash dump files. It is exactly 64 KB in size on a 32-bit machine and 128 KB on a 64-bit machine, and requires only 64 KB/128 KB of page file space on the boot drive. This dump file includes the following:


1. The bug check message and parameters, as well as other blue-screen data.
2. The processor context (PRCB) for the processor that crashed.
3. The process information and kernel context (EPROCESS) for the process that crashed.
4. The thread information and kernel context (ETHREAD) for the thread that crashed.
5. The kernel-mode call stack for the thread that crashed. If this is longer than 16 KB, only the topmost 16 KB will be included.
6. A list of loaded drivers.
7. A list of loaded modules and unloaded modules.
8. The debugger data block. This contains basic debugging information about the system.

This kind of dump file can be useful when space is greatly limited. However, due to the limited amount of information included, errors that were not directly caused by the thread executing at the time of the crash may not be discovered by an analysis of this file. If a second bug check occurs and a second small memory dump file is created, the previous file is preserved. Each additional file is given a distinct name, which encodes the date of the crash in the filename. For example, mini022900-01.dmp is the first memory dump file generated on February 29, 2000. A list of all small memory dump files is kept in the directory %SystemRoot%\Minidump.
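As a quick illustration of that naming convention, here is a minimal Python sketch (an illustrative script of my own, not a Microsoft tool) that lists the files in %SystemRoot%\Minidump and decodes the crash date encoded in each mini*.dmp filename:

```python
import os
import re
from datetime import datetime

# Default small-dump location, as described above: %SystemRoot%\Minidump
minidump_dir = os.path.join(os.environ.get("SystemRoot", r"C:\Windows"), "Minidump")

# Filenames follow the pattern miniMMDDYY-NN.dmp, e.g. mini022900-01.dmp
pattern = re.compile(r"mini(\d{2})(\d{2})(\d{2})-(\d{2})\.dmp", re.IGNORECASE)

for name in sorted(os.listdir(minidump_dir)):
    match = pattern.match(name)
    if not match:
        continue
    month, day, year, seq = match.groups()
    crash_date = datetime.strptime(month + day + year, "%m%d%y").date()
    size_kb = os.path.getsize(os.path.join(minidump_dir, name)) // 1024
    print(f"{name}: crash on {crash_date}, dump #{int(seq)} that day, {size_kb} KB")
```

Running it on a machine that has blue screened gives a quick inventory of which crashes you have dumps for, before you open any of them in WinDbg.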

Unfortunately, stack traces reported by WinDbg, especially those involving third-party components, are usually incomplete and sometimes not even correct. They can also point to stable drivers when the system failure followed slowly accumulated corruption caused by some intermediate driver or a combination of drivers. In other words, small memory dumps are helpful but not always reliable enough to draw conclusions from.

Kernel dumps almost always capture the information needed to analyze a blue screen. They do not contain user-mode data, but that is rarely required. Because kernel-mode address space on a 32-bit machine can be at most about 2 GB, it is easy to capture a kernel dump on 32-bit machines. The real problem lies with 64-bit operating systems: customers generally do not create a large enough page file on the C drive, or do not have the required free space on the boot drive, i.e. C:. The same problem arises with full memory dumps, which are required when your server hard-hangs or freezes.

Here is a public article from Microsoft explaining that even if you point the dump file to the D or E drive, you still need free space on boot volume C, or at least a page file on C of at least RAM + 1 MB (for a kernel dump) or 1.5 x RAM (for a full dump). (The Microsoft article is not very clear at the beginning and can confuse the audience, but it does help.) Although you can change the location of the dump file using Control Panel, Windows always writes the debugging information to the page file on the %SYSTEMROOT% partition first, and then moves the dump file to the path specified. A kernel dump is not always as big as your RAM, but you cannot exactly predict its size, because it depends on the amount of kernel-mode memory in use by the operating system and drivers, and this becomes even harder to predict in a 64-bit environment.

Please review the following articles to plan your C boot drive and page file size on servers in your Enterprise.

886429 What to consider when you configure a new location for memory dump files in Windows Server 2003
http://support.microsoft.com/default.aspx?scid=kb;EN-US;886429


141468 Additional Pagefile Created Setting Up Memory Dump
http://support.microsoft.com/default.aspx?scid=kb;EN-US;141468


Another article on how to determine the appropriate page file size for 64-bit versions of Windows Server 2003 or Windows XP
http://support.microsoft.com/kb/889654

For business-critical 64-bit servers where business processes require the server to capture physical memory dumps for analysis, the traditional model is a page file of at least the size of physical RAM plus 1 MB, or 1.5 times the physical RAM. This ensures that the free disk space of the operating system partition is large enough to hold the OS, hotfixes, installed applications, installed services, a dump file, and the page file. On a server that has 32 GB of memory, drive C may have to be at least 86 to 90 GB: 32 GB for the memory dump, 48 GB for the page file (1.5 times the physical memory), 4 GB for the operating system, and 2 to 4 GB for applications, installed services, temp files, and so on. Remember that a driver or kernel-mode service leak could consume all free physical RAM. Therefore, a Windows Server 2003 x64 SP1-based server with 32 GB of RAM could produce a 32 GB kernel memory dump file, where you would expect only a 1 to 2 GB dump file in 32-bit mode. This behavior occurs because of the greatly increased kernel memory pools.
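To make that arithmetic easy to reuse, here is a minimal sizing sketch based on the rules just described (page file of roughly 1.5 times RAM, room for a dump file of up to RAM size, plus allowances for the OS and applications); the 4 GB OS and 4 GB application allowances are simply the rough figures from the example above, not fixed requirements:

```python
def boot_volume_estimate_gb(ram_gb, os_gb=4, apps_gb=4):
    """Rough C: drive space estimate for a server that must capture memory dumps.

    Rule of thumb from the text above: a page file of about 1.5 x RAM, a dump
    file that can approach the size of RAM, plus space for the OS, hotfixes,
    installed applications, services and temp files.
    """
    page_file_gb = 1.5 * ram_gb
    dump_file_gb = ram_gb
    return page_file_gb + dump_file_gb + os_gb + apps_gb

# The 32 GB example from the text: 48 + 32 + 4 + 4 = 88 GB, i.e. in the 86-90 GB range.
print(boot_volume_estimate_gb(32))
```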

130536 Windows does not save memory dump file after a crash
http://support.microsoft.com/default.aspx?scid=kb;EN-US;130536

So if you are already stuck with this issue, or your IT enterprise has servers configured such that dumps cannot be captured, you can either increase the free space on boot volume C:\ [something not supported by Microsoft] so as to follow the above-mentioned articles, or reduce the RAM by using the /maxmem switch in boot.ini [reducing RAM will not always be a feasible option in production environments]. Another option is to attempt a live debug by engaging Microsoft Customer Support; however, the customer needs to set the machine up for live debugging.

Everything I said above does not apply to Windows Server 2008 [Windows Server 2008 has a new feature, the dedicated dump file: http://support.microsoft.com/kb/957517 ].

In Windows Vista and Windows Server 2008, the paging file does not have to be on the same partition as the partition on which the operating system is installed. To put a paging file on another partition, you must create a new registry entry named DedicatedDumpFile. You can also define the size of the paging file by using a new registry entry that is named DumpFileSize. By using the DedicatedDumpFile registry entry in Windows Server 2008 and in Windows Vista, a user can configure a registry setting to store a dump file in a location that is not on the startup volume.

Location: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CrashControl
Name: DedicatedDumpFile
Type: REG_SZ
Value: A dedicated dump file together with a full path

Location: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CrashControl
Name: DumpFileSize
Type: REG_DWORD
Value: The dump file size in megabytes.
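Purely as an illustration, the sketch below sets those two values with Python's winreg module. The key path, value names and types come straight from the table above; the dump file path and the 16 GB size cap are made-up example values, the script must run with administrative privileges, and you should of course try it on a test machine before touching production servers:

```python
import winreg

CRASH_CONTROL = r"SYSTEM\CurrentControlSet\Control\CrashControl"

# Example values only: a dedicated dump file on a non-boot volume and a
# 16384 MB (16 GB) size cap. Adjust both for your own environment.
dedicated_dump_path = r"E:\Dumps\DedicatedDumpFile.sys"
dump_file_size_mb = 16384

# Requires elevation; a reboot is needed for the new settings to take effect.
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, CRASH_CONTROL, 0,
                    winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "DedicatedDumpFile", 0, winreg.REG_SZ,
                      dedicated_dump_path)
    winreg.SetValueEx(key, "DumpFileSize", 0, winreg.REG_DWORD,
                      dump_file_size_mb)

print("CrashControl values written; reboot the server for them to take effect.")
```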

Please review the following articles to plan dump captures on servers in your Enterprise.


How to generate a kernel or a complete memory dump file in Windows Server 2008
http://support.microsoft.com/kb/969028


Dedicated dump files are unexpectedly truncated to 4 GB on a computer that is running Windows Server 2008 or Windows Vista and that has more than 4 GB of physical memory
http://support.microsoft.com/kb/950858

I hope you find this a pleasant read and that it helps. Many thanks for your time, and stay tuned to the blog for more interesting upcoming topics.

GAURAV ANAND

Friday, November 13, 2009

Converged Infrastructure and changing market trends

Well, after Cisco revealed its partnership with VMware and EMC, competitors like HP, IBM, Sun and Dell have to do something to stay ahead in the market. We can all clearly see that a new market is emerging: the market for converged infrastructure, which is about selling everything that is required in one box. The one-stop shop is becoming the trend of the day. With the rise of virtualized and consolidated data centers and cloud computing, demand for this kind of converged infrastructure is growing, and companies are getting ready. Cisco, EMC and VMware announced their new partnership under the umbrella of a new joint venture company, Acadia. It is very clear that HP has its BladeSystem Matrix solution to counter them, but HP's ProCurve networking business will get a big push from the acquisition of 3Com, as it fills a gaping hole, core switching in the data center network, which HP really needed to fill. This will also help customers who were looking for an alternative to Cisco. Cisco presently owns about 52 percent of the networking market, with HP at 11 percent and 3Com at 9 percent; the acquisition will make HP a 20 percent market share holder and a significant challenger to Cisco. 3Com's big presence in the Chinese market will also be a boon for HP. HP, which has bought more than 30 companies since Chief Executive Mark Hurd arrived in 2005, is a major player in personal computers, servers, IT services [it also acquired EDS] and printers, and has become a one-stop shop for all the servers, storage arrays, switches and software any data center needs.

GAURAV ANAND

Sunday, November 8, 2009

Designing a disaster-tolerant multi-site high availability solution integrating Microsoft failover clustering



Today we are going to talk about the design concepts of a highly available disaster recovery solution based on Microsoft clustering, and the various options available in the market for geographic clusters. The purpose of this article is to give an insight into the design considerations of a geographic cluster. Why do we need a disaster-tolerant solution, and does it cover backup requirements? Disaster tolerance is the ability to restore applications and data services within a reasonable period of time after a disaster. Most people think of fire, flood and earthquake as disasters, but a disaster can be any event that unexpectedly interrupts service or corrupts data in an entire data center. Disaster tolerance does not remove the need for an effective backup solution for data or application recovery on the cluster: backup solutions let us go back in time for restoration, while high availability and clustering solutions ensure that applications and data services stay up and running around the year. The very essence of a geographic cluster is that data on site A needs to be replicated to site B to counter any disaster on site A, and vice versa. The maximum distance between nodes in the cluster determines the data replication and networking technology.

The questions you need to ask pre design are:

1. Which applications are you going to run on the cluster nodes, and what kind of I/O will they generate? What kind of data loss or lag can these applications, and the business, sustain? Many applications can recover from crash-consistent states; very few can recover from out-of-order I/O operation sequences.

2. How far apart will the cluster nodes be, and will the solution consist of two or multiple sites? Depending on the answer to the first question, you have to choose between synchronous and asynchronous replication.
3. What will be the medium of data replication: Fibre Channel, LAN or WAN?
4. Which cluster extension will you use for a Microsoft failover clustering multi-site cluster? There are various solutions in the market, such as the HP CLX extension and EMC Cluster Enabler, and the choice may be influenced by the storage solution you have, such as HP EVA/XP or EMC Symmetrix/CLARiiON.
Well, this article focuses on a multi-site clustering solution for Microsoft failover clustering; however, there are other solutions in the market that can be leveraged, for example VMware vCenter Site Recovery Manager, IBM GPFS clusters using PowerHA SystemMirror for AIX Enterprise Edition, Veritas Cluster Server, HP PolyServe and Metroclusters.

In a two-site configuration, the nodes in site A are connected directly to the storage in site A, and the nodes in site B are connected directly to the storage in site B; the nodes in site A can continue without accessing the storage in site B, and vice versa. Either storage-fabric replication [HP Continuous Access or EMC SRDF] or host-based software [Double-Take or Microsoft Exchange CCR] provides a way to mirror or replicate data between the sites so that each site has a copy of the data. In Windows Server 2008 failover clustering, the concept of quorum has changed entirely, and quorum now translates to a majority of votes. Prior to this, Windows Server 2003 MNS (Majority Node Set) clusters were used; as the name suggests, majority node sets also worked on the concept of node majority, with the added benefit of a file share witness, which can provide an additional vote if required to achieve quorum.
The essence of the solution is that we need to replicate our storage LUNs from site A to the storage LUNs of site B. This replication can be either synchronous or asynchronous, and it may run from the site A LUNs to the site B LUNs or from the site B LUNs to the site A LUNs. The replication direction can be controlled automatically by a cluster extension: if site A is active, replication runs from the site A LUNs to the site B LUNs, and vice versa. It is recommended to put the file share witness in a third site.
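To make the "majority of votes" idea concrete, here is a small illustrative sketch (my own toy example, not any vendor's implementation) of vote counting for a two-site cluster with a file share witness in a third site; the partition that can still see a strict majority of the configured votes retains quorum and keeps its resources online:

```python
def has_quorum(total_votes, reachable_votes):
    """A partition retains quorum only if it can see a strict majority of votes."""
    return reachable_votes > total_votes // 2

# Example: 2 nodes in site A + 2 nodes in site B + 1 file share witness
# in a third site = 5 configured votes in total.
total_votes = 5

# The inter-site link breaks, but site A can still reach the witness:
# site A sees 2 node votes + 1 witness vote, site B sees only its own 2.
print(has_quorum(total_votes, reachable_votes=3))  # True  -> site A stays online
print(has_quorum(total_votes, reachable_votes=2))  # False -> site B halts its cluster service
```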
*A major improvement to clustering in Windows Server 2008 is that cluster nodes can now reside on different subnets. As opposed to previous versions of clustering (Windows Server 2003 and Windows 2000 Server), cluster nodes in Windows Server 2008 can communicate across network routers. This means that you no longer have to stretch virtual local area networks (VLANs) to connect geographically separated cluster nodes, greatly reducing the complexity and cost of setting up and maintaining multi-site clusters. One consideration for subnet-spanning clusters is client response time: client computers cannot see a failed-over workload any faster than the DNS servers can update one another to point clients to the new server hosting that workload. For this reason, VLANs can still make sense when keeping workload downtime to an absolute minimum is your highest priority.

Difference between synchronous and asynchronous data replication:
Synchronous replication is when an application performs an operation on one node at one site, and then that operation is not completed until the change has been made on the other sites. So, synchronous data replication holds the promise of no data loss in the event of failover for multi-site clusters that can take advantage of it. Using synchronous, block-level replication as an example, if an application at Site A writes a block of data to a disk mirrored to Site B, the input/output (I/O) operation will not be completed until the change has been made to both the disk on Site A and the disk on Site B. In general, synchronous data replication is best for multi-site clusters that can rely on high-bandwidth, low-latency connections. Typically, this will limit the application of synchronous data replication to geographically dispersed clusters whose nodes are separated by shorter distances. While synchronous data replication protects against data loss in the event of failover for multi-site clusters, it comes at the cost of the latencies of application write and acknowledgement times impacting application performance. Because of this potential latency, synchronous replication can slow or otherwise detract from application performance for your users.
Asynchronous replication is when a change is made to the data on Site A and that change eventually makes it to Site B. Multi-site clusters using asynchronous data replication can generally stretch over greater geographical distances with no significant application performance impact. In asynchronous replication, if an application at Site A writes a block of data to a disk mirrored to Site B, then the I/O operation is complete as soon as the change is made to the disk at Site A. The replication software transfers the change to Site B (in the background) and eventually makes that change to Site B. With asynchronous replication, the data at Site B can be out of date with respect to Site A at any point in time. This is because a node may fail after it has written an application transaction to storage locally but before it has successfully replicated that transaction to the other site or sites in the cluster; if that site goes down, the application failing over to another node will be unaware that the lost transaction ever took place. Preserving the order of application operations written to storage is also an issue with asynchronous data replication. Different vendors implement asynchronous replication in different ways. Some preserve the order of operations and others do not.

*Excerpt from Microsoft Windows Server 2008 Multi-Site Clustering Technical Decision-Maker White Paper
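Purely to illustrate the semantics described above (this is a toy model of my own, not any replication product), the sketch below contrasts the two approaches: the synchronous write does not complete until both sites hold the block, while the asynchronous write completes as soon as the local site holds it and the change is shipped to the remote site in the background:

```python
import queue
import threading

class TwoSiteVolume:
    """Toy model of a volume mirrored between site A and site B."""

    def __init__(self):
        self.site_a = {}               # block id -> data at site A
        self.site_b = {}               # block id -> data at site B
        self._pending = queue.Queue()  # changes not yet applied at site B
        threading.Thread(target=self._replicate, daemon=True).start()

    def write_sync(self, block, data):
        # I/O completes only once BOTH sites have the block: no data loss on
        # failover, but every write pays the inter-site round-trip latency.
        self.site_a[block] = data
        self.site_b[block] = data

    def write_async(self, block, data):
        # I/O completes as soon as site A has the block; site B catches up
        # later, so it may be out of date if site A fails in the meantime.
        self.site_a[block] = data
        self._pending.put((block, data))

    def _replicate(self):
        while True:
            block, data = self._pending.get()
            self.site_b[block] = data  # applied eventually, in the background
```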

The HP CLX extension is responsible for monitoring and recovering disk-pair synchronization at the application level and for offloading data replication tasks from the host using storage software such as Command View EVA/XP. CLX automates the time-consuming, labor-intensive processes required to verify the status of the storage as well as the server cluster, allowing the correct failover and failback decisions to be made to minimize downtime. It automatically manages recovery without human intervention. For more information, please refer to http://h18000.www1.hp.com/products/quickspecs/12728_div/12728_div.pdf
Similarly, we can use the EMC Cluster Enabler extension for Windows Server 2008 failover clusters; it does the same job for EMC CLARiiON/Symmetrix storage that HP CLX does for EVA/XP storage. For more details on the EMC cluster extension and the RecoverPoint solution, please refer to the EMC documentation.
So, in a failover scenario, resources will move to the other site and start using the storage at the disaster recovery site, and the LUN replication direction will be reversed by the cluster extension without any manual intervention. The file share witness quorum model helps retain cluster quorum [vote majority] in split-brain scenarios, so resources remain highly available even if network communication breaks between the two sites [as long as one of the nodes can still access the file share witness in the third site]. I hope this discussion has given you an insight into the high-level design of multi-site clusters based on Microsoft failover clustering and what needs to be considered in the design process. Thanks for your time, and stay tuned to the blog for more interesting upcoming topics.
GAURAV ANAND

Saturday, November 7, 2009

Cloud Infrastructure





We are all highly excited about the news of Cisco and EMC announcing their joint venture to provide Vblock as a new solution for ready-to-go internal and external cloud infrastructures. It will definitely bring more competition into the cloud infrastructure world, which should benefit customers. However, before I dive deep into the seas of cloud infrastructure, it is better to understand what a cloud is, because it is a much-hyped emerging technology term.

As per National Institute of Standards and Technology, Information Technology Laboratory, Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is composed of five essential characteristics, three service models, and four deployment models.

Essential Characteristics:

On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service’s provider.
Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).
Resource pooling. The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.
Rapid elasticity. Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.
Measured Service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.

Service Models:

Cloud Software as a Service (SaaS). The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
Cloud Platform as a Service (PaaS). The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.
Cloud Infrastructure as a Service (IaaS). The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).

Deployment Models:

Private cloud. The cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on premise or off premise.
Community cloud. The cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on premise or off premise.
Public cloud. The cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.
Hybrid cloud. The cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds).

OK, now that we all know what a cloud is, and how confusing the term can be, we need to keep in mind that this could be the technology of the future and could shape how outsourcing businesses evolve in the coming years. One of the best examples of this is the Amazon cloud [ http://aws.amazon.com/ec2/ ].

Coming back to where we started, we need to assess the different ready-to-go cloud infrastructure solutions available in the market today.

HP Blade Matrix:

The HP BladeSystem Matrix is a converged infrastructure platform designed to simplify the deployment of applications and business services by delivering IT capacity through pools of readily deployed resources. The goal of Matrix is to accelerate provisioning, optimize IT capacity across physical and virtual environments and to ensure predictable delivery and service levels. BladeSystem Matrix integrates proven HP BladeSystem technologies including Virtual Connect, Insight Dynamics software, Fibre Channel SAN like the EVA4400, and standard ProLiant and Integrity blade servers with HP Services for streamlined implementation and support.
The use of HP Virtual Connect technology allows blades to be added, replaced, and recovered through software, saving the valuable time of LAN, SAN, and server administrators. Changes can be made in a matter of minutes by one person working at a single console. In a racked, stacked, and wired environment, the same changes might require involvement from four organizations and take weeks to complete, incurring significant labor costs for physically moving resources for re-configuration.
HP Claims that BladeSystem Matrix system is offered at a list price that is 15 percent lower than the cost of buying the components individually and building your own solution.

BladeSystem Matrix allows you to consolidate Ethernet network equipment by a 4-to-1 ratio while tripling the number of network interface controllers (NICs) per server. This level of consolidation is made possible by the included HP Virtual Connect Flex-10 Ethernet module, which flexibly allocates the bandwidth of a 10 Gb Ethernet network port across four NIC connections to best meet the needs of your applications and virtual machine channels. With Flex-10 technology at work, you can avoid purchasing additional costly NICs, switches, and cables while concurrently increasing bandwidth. You can either use the EVA4400 that can come along with the BladeSystem Matrix solution, or use BladeSystem Matrix with an existing supported SAN. BladeSystem Matrix can scale to 1,000 blades or virtual machines, managed as a single domain. Finally, with built-in power-capping control, customers can significantly lower their power and cooling costs, to the point of even extending the life of data center facilities.

IBM CloudBurst:
IBM CloudBurst is self-contained, with software, hardware, storage, networking and management packaged in one box, and each IBM CloudBurst package includes IBM implementation services, so you can make it operational in your environment quickly. It is modular, with the capability to be automatically expanded and scaled. It provides advanced analytics, leveraging historical and real-time data for autonomic operations. And it is virtualized across servers, networks and storage resources. IBM CloudBurst is a quick start to cloud computing: simply roll it into your data center to quickly see the benefits of cloud computing.

Built on the IBM System x BladeCenter® platform, IBM CloudBurst provides pre-installed, fully integrated service management capabilities across hardware, middleware and applications. Expanded features and benefits for this new release include:

*Delivery of integrated IBM Tivoli Usage and Accounting capability to help enable chargeback for cloud services to optimize system usage.

*Enhanced service management capability delivered via IBM Tivoli Service Automation Manager V7.2 to support new levels of ease of use.

*Integration with Tivoli Monitoring for Energy Management that enables monitoring and management of energy usage of IT and facility resources, which can assist with efforts to optimize energy consumption for higher efficiency of resources, in an effort to help lower operating cost.

*Optional high availability using Tivoli systems automation and VMWare high availability that can provide protection against unplanned blade outages and can help simplify virtual machine mobility during planned changes.

*Optional secure cloud management server with IBM Proventia Virtualized Network Security platform. IBM Proventia protects the CloudBurst production cloud with Virtual Patch, Threat Detection and Prevention, Proventia Content Analysis, Proventia Web Application Security, and Network Policy enforcement.

EMC-CISCO-VMware VBlock:
The Virtual Computing Environment coalition has introduced Acadia — a Cisco and EMC solutions joint venture to build, operate, and transfer Vblock infrastructure to organizations that want to accelerate their journey to pervasive virtualization and private cloud computing while reducing their operating expenses. Acadia expects to begin customer operations in the first calendar quarter of calendar year 2010. Because the Vblock architecture relies heavily on Intel Xeon® processors and other Intel data center technology, Intel will join the Acadia effort as a minority investor to facilitate and accelerate customer adoption of the latest Intel technology for servers, storage, and networking.
The following family of Vblock Infrastructure Packages is being offered by the Virtual Computing Environment coalition:

Vblock 2 is a high-end configuration supporting up to 3,000-6,000 virtual machines that is completely extensible to meet the most demanding IT needs of large enterprises and service providers. Designed for large-scale and 'green field' virtualization, Vblock 2 takes advantage of Cisco's Unified Computing System (UCS), Nexus 1000v and Multilayer Directional Switches (MDS), EMC's Symmetrix V-Max storage (secured by RSA), and the VMware vSphere platform.

Vblock 1 is a mid-sized configuration supporting 800 up to 3,000 virtual machines to deliver a broad range of IT capabilities to organizations of all sizes. Designed for consolidation and optimization initiatives, Vblock 1 is comprised of a repeatable model leveraging Cisco's UCS, Nexus 1000v and MDS, EMC's CLARiiON storage (secured by RSA), and the VMware vSphere platform.

Vblock 0 will be an entry-level configuration available in 2010, supporting 300 up to 800 virtual machines, for the first time bringing the benefits of private clouds within reach of medium-sized businesses, small data centers or organizations, and for test and development by channel partners, systems integrators, service providers, ISVs, and customers. Vblock 0 is also comprised of a repeatable model leveraging Cisco's UCS and Nexus 1000v, EMC's Unified Storage (secured by RSA), and the VMware vSphere platform.

So, we see that these are the key players in the cloud infrastructure market. The new Vblock release will give clouds more visibility and drive their adoption, and at the same time it will benefit customers by introducing competition among these key players, and hence a race for better converged solutions.
GAURAV ANAND