Get your head around the concepts and problems of unified computing.
Since joining Egenera, I've been championing what's now termed Converged Infrastructure (aka unified computing). It's an exciting and important part of IT management, as demonstrated by the fact that every major vendor now offers some form of the technology. But it sometimes takes a while for folks (my analyst friends included) to get their heads around it.
PART 1: What is Converged Infrastructure, and how it will change data centre management
Converged Infrastructure and Unified Computing are both terms referring to technology in which the complete server profile, including I/O (NICs, HBAs, KVM), networking (VLANs, IP load balancing), and storage connectivity (LUN mapping, switch control), is abstracted and defined/configured in software. The result is a pool of physical servers, network resources and storage resources that can be assigned on demand. This lets IT operators rapidly re-purpose servers or entire environments without physically reconfiguring I/O components by hand, and without requiring hypervisors. It massively reduces the quantity and expense of physical I/O and networking components, as well as the time required to configure them. The result is an elegant, simple-to-manage approach to data centre infrastructure administration.
From an architectural perspective, this approach may also be referred to as a compute fabric or Processing Area Network. Because physical CPU state is completely abstracted away, the CPUs become stateless and can therefore be reassigned extremely easily, creating a fabric of components, analogous to how SANs assign logical storage LUNs. Through I/O virtualisation, the data and storage transports can also be converged, further simplifying the physical network infrastructure down to a single wire.
The result is a wire-once set of pooled bare-metal CPUs and network resources that can be assigned on demand, with their logical configurations and network connections defined instantly.
Another useful resource is a whitepaper commissioned by HP and written by Michelle Bailey at IDC. In it, she defines a converged system:
The term converged system refers to a new set of enterprise products that package server, storage, and networking architectures together as a single unit and utilise built-in service-oriented management tools for the purpose of driving efficiencies in time to deployment and simplifying ongoing operations. Within a converged system, each of the compute, storage, and network devices are aware of each other and are tuned for higher performance than if constructed in a purely modular architecture. While a converged system may be constructed of modular components that can be swapped in and out as scaling requires, ultimately the entire system is integrated at either the hardware layer or the software layer.
A Converged Infrastructure is different from, but analogous to, hypervisor-based server virtualisation. Think of hypervisors as operating above the CPU, abstracting software (applications and O/S) from the CPU; think of a Converged Infrastructure as operating below the CPU, abstracting network and storage connections. Note, however, that Converged Infrastructure doesn't operate via a software layer the way a hypervisor does, and it is possible whether or not server virtualisation is present.
Converged Infrastructure and server virtualisation can complement each other, producing significant cost and operational benefits. For example, consider a physical host failure where the entire machine, network and storage configuration needs to be replicated on a new physical server. Using Converged Infrastructure, IT Ops can quickly replace the failed machine with a spare bare-metal server: a new host can be created on the fly, all the way down to the same NIC, HBA and networking configurations as the original.
A Converged Infrastructure can re-create a physical server (or virtual host), along with its networking and storage configuration, on any cold bare-metal server. It can also re-create an entire environment of servers on bare-metal infrastructure at a different location. Thus it is particularly well-suited to providing both high availability (HA) and disaster recovery (DR) in mixed physical/virtual environments, eliminating the need for complex clustering solutions. In doing so, a single Converged Infrastructure system can replace numerous point-products for physical/virtual server management, network management, I/O management, configuration management, HA and DR.
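The failover scenario just described can be reduced to a very small sketch. Everything below - the dictionary layout, node names, and `fail_over` function - is invented for illustration; it only shows the shape of the operation, not any real product's behaviour.

```python
# Minimal failover sketch: on host failure, re-bind the full logical server
# identity (MACs, WWNs, VLANs, boot LUN) to a spare bare-metal node.
# All names and data structures here are hypothetical.

def fail_over(profiles, free_pool, failed_node):
    """Move every profile bound to failed_node onto a spare from the free pool."""
    for profile in profiles:
        if profile["node"] == failed_node:
            spare = free_pool.pop(0)   # take a cold bare-metal server
            profile["node"] = spare    # identity follows the profile, not the box
            # ...the fabric re-maps I/O and the boot LUN; the OS boots unchanged
    return profiles

profiles = [{"name": "db01", "node": "blade-02",
             "macs": ["00:1c:42:00:00:09"], "boot_lun": "LUN-9"}]
free_pool = ["blade-11", "blade-12"]
fail_over(profiles, free_pool, "blade-02")
# db01 is now bound to blade-11 with the same logical configuration
```

Note what is absent: no clustering agent, no per-host reconfiguration, no recabling. The "move" is a change to a record, which is why the article can claim this replaces dedicated HA and DR point-products.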
Simplifying management for the other half of the data centre
Just as server virtualisation has grown to become the dominant data centre management approach for software, converged infrastructure is poised to become the dominant management approach for the other 50 percent of the data centre: its infrastructure.
However, adoption will take place gradually, for a few reasons:
IT can only absorb so much at once. Most often, converged infrastructure is adopted once IT has come up the maturity curve, having cut its teeth on OS virtualisation. Once that initiative is under way, IT begins looking for other sources of cost take-out... and the data centre infrastructure is the logical next step.
Converged infrastructure is still relatively new. While the market considers OS virtualisation to be relatively mature, converged infrastructure is less well understood.
But there is one universal approach that can overcome these hesitations -- money. So, in my next installment, I'll do a deeper dive into the really fantastic economics and cost take-out opportunities of converging infrastructure...
PART 2: Converged Infrastructure's Cost Advantages
Let me go a bit deeper and explain the source of the capital and operational improvements Converged Infrastructure offers, and why it's such a compelling opportunity to pursue.
First, the most important distinction between converged infrastructure and the old way of doing business is that the management, as well as the technology, is converged. Consider how many point-products you currently use for infrastructure management.
The diagram below has resonated with customers and analysts alike. It highlights, albeit in a stylised fashion, just how many point-products an average-sized IT department is using. This has a clear impact on:
- Operational complexity: coordinating tool use, procedures, interdependencies and fault-tracking
- Operational cost: the raw expense of acquiring the tools and then maintaining them annually
- Capital cost: if you count all of the separate hardware components they're trying to manage
That last bullet, about hardware components, is worth drilling into, because every physical infrastructure component in the old way of doing things has a cost. I mean I/O components like NICs and HBAs, not to mention switches, load balancers and cables.
What might be possible if you could virtualise all of the physical infrastructure components, and then have a single tool to manipulate them logically?
Well, then you'd be able to throw out roughly 80 percent of the physical components (and their associated costs) and reduce operational complexity by roughly the same amount.
In the same way that the software domain has been virtualised by the hypervisor, the infrastructure world can be virtualised with I/O virtualisation and converged networking. Once the I/O and network are virtualised, they can be composed and recomposed on demand. This eliminates a large number of components needed for infrastructure provisioning, scaling, and even failover/clustering (more on this later). And if you can logically redefine server and infrastructure profiles, you can also create simplified disaster recovery tools.
In all, we can go from roughly a dozen point-products down to just 2-3 (see diagram below). Now: what's the impact on costs?
On the capital cost side, consolidating I/O literally means fewer NICs, and the elimination of most HBAs, since they can be virtualised too. Consolidating I/O also implies a converged transport, meaning fewer cables (typically only 1 per server, or 2 if teamed/redundant). A converged transport also allows for fewer switches on the network. And remember that with fewer moving (physical) parts, you also have to purchase fewer software tools and licenses. See diagram on facing page.
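The component arithmetic alone makes the capital-cost point. The per-server figures below are illustrative assumptions, not vendor data or measurements: a traditional rack server often carries a couple of NICs, a couple of HBAs, and several cables, while a converged-transport server needs only one or two cables.

```python
# Back-of-the-envelope I/O component count for a hypothetical 40-server rack.
# All per-server figures are assumptions chosen for illustration.
servers = 40

traditional = servers * (2 + 2 + 6)   # 2 NICs + 2 HBAs + 6 cables per server
converged   = servers * (0 + 0 + 2)   # NICs/HBAs virtualised; 2 redundant cables

reduction = 1 - converged / traditional
print(f"{traditional} components vs {converged}: {reduction:.0%} fewer")
```

Under these (assumed) numbers, roughly 80 percent of the physical I/O components disappear, which is the same order of magnitude as the figure cited earlier in this part.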
On the operational cost side, there are the benefits of simpler management, less on-the-floor maintenance, and even lower power consumption. With fewer physical components and a more virtual infrastructure, entire server configurations can be created more simply, often with only a single management tool. That means creating and assigning NICs, HBAs, ports, addresses and world-wide names. It means creating segregated VLAN networks, and creating and assigning data and storage switches. And it means automatically creating and assigning boot LUNs. The server configuration is just what you're used to, except it's defined in software, and all from a single unified management console. The result: buying, integrating and maintaining less software.
Ever wonder why converged infrastructure is developing such a following? It's because physical simplicity breeds operational efficiency. And that means much less sustained cost and effort, and an easier time at your job.
PART 3: Converged Infrastructure: What it Is, and What it Isn't
In my two earlier posts, I first took a stab at an overview of converged infrastructure and how it will change IT management; in the second installment, I looked a bit closer at its cost advantages. But one thing I neglected was to define what's meant by converged infrastructure (BTW, Cisco terms it Unified Computing). Even more important, I also feel the need to highlight what converged infrastructure is not. Plus, there are vendor instances where The Emperor Has No Clothes -- i.e. where some marketers claim they suddenly have converged infrastructure when in fact they are vending the same old products. Why split hairs over definitions? Because true converged infrastructure / unified computing has architectural, operational, and capital cost advantages over traditional IT approaches. (AKA: don't buy the used car just because the paint is nice.)
Defining terms - in the public domain
Obviously, it can't hurt to see how the vendors describe their own offerings... here goes:
Cisco's Definition (via webopedia): "...simplifies traditional architectures and dramatically reduce the number of devices that must be purchased, cabled, configured, powered, cooled, and secured in the data centre. The Cisco Unified Computing System is a next-generation data centre platform that unites compute, network, storage access, and virtualisation into a cohesive system..."
Egenera's Definition: "A technology where CPU allocation, data I/O, storage I/O, network configurations, and storage connections are all logically defined and configured in software. This approach allows IT operators to rapidly re-purpose CPUs without having to physically reconfigure each of the I/O components and the associated network by hand, and without needing a hypervisor."
HP's Definition: "HP Converged Infrastructure is built on a next-generation IT architecture based on standards that combines virtualised compute, storage and networks with facilities into a single shared-services environment optimised for any workload."
Defining terms - by using attributes
Empirically, converged infrastructure needs two main attributes to live up to its name: it should reduce the quantity and complexity of physical IT infrastructure, and it should reduce the quantity and complexity of IT operations management tools. So let's be specific:
Ability to reduce quantity and complexity of physical infrastructure:
- virtualise I/O, reducing physical I/O components (e.g. eliminate NICs and HBAs)
- leverage converged networking, reducing physical cabling and eliminating recabling
- reduce overall quantity of servers, (e.g. ability to use free pools of servers to repurpose for scaling, failure, disaster recovery, etc.)
Ability to reduce quantity and complexity of operations/management tools:
- be agnostic with respect to the software payload (e.g. O/S independent)
- fewer point-products, less paging between tool windows (BTW, this is possible because so much of the infrastructure becomes virtual and therefore more easily logically manipulated)
- reduce/eliminate the silos of visualising & managing physical vs virtual servers, physical networks vs virtual networks
- simplified higher-level services, such as providing fail-over, scaling-out, replication, disaster recovery, etc.
To sum up so far, if you're shopping for this stuff, you need to:
a) Look for the ability to virtualise infrastructure as well as software
b) Look for fewer point products and less windowing
c) Look for more services (e.g. HA, DR) baked-into the product.
Beware.... when the Emperor Has No Clothes...
In closing, I'll also share my pet peeve: when vendors whitewash their products to fit the latest trend. I'll not name names, but beware of the following when it's labeled converged infrastructure:
- If the vendor says "Heterogeneous Automation" - that's different. For example, it could easily be scripted run-book automation, which doesn't reduce physical complexity in the least.
- If the vendor says "Product Bundle, single SKU" - same as above. Shrink-wrapped does not equal converged.
- If the vendor says "Pre-Integrated" - this may simplify installation, but it guarantees neither physical nor operational simplicity.
Thanks for reading the series so far. I'm pondering a fourth-and-final installment on where this whole virtualisation and converged infrastructure thing is taking us - a look at possible future directions.
Ken Oestreich is a marketing and product management veteran in the enterprise IT and data centre space, with a career spanning start-ups to established vendors. Ken currently works as Vice President - Product Marketing at Egenera.
This article is published with prior permission.