Virtualization offers companies significant advantages, letting them use IT resources more efficiently and streamline operations to work smarter, not harder, in a data-heavy, digitally connected world.
In the old days, managers strung Ethernet cabling or other connectivity between physical computer towers and monitors so that workstations could communicate, sending files and messages back and forth. These days, many businesses use "virtual machines": hardware resources sliced and diced into sets of virtual devices by top-level controllers called hypervisors. That means a workstation's functionality is determined not by the specs built into a device at the factory, but by the resources a tech professional has provisioned for it.
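As a rough mental model, and not any vendor's actual API, the hypervisor's core job can be sketched in a few lines: a fixed pool of physical resources, carved into virtual machines on request. The class and method names below are invented for illustration.

```python
# Toy sketch of hypervisor-style provisioning: a host's fixed pool of
# resources is partitioned into VMs. Illustrative only; real hypervisors
# also schedule, isolate and overcommit these resources.

class Host:
    """A physical server with a fixed pool of CPU cores and memory (GB)."""
    def __init__(self, cores, memory_gb):
        self.free_cores = cores
        self.free_memory_gb = memory_gb
        self.vms = {}

    def provision_vm(self, name, cores, memory_gb):
        """Carve a slice of the host's free resources into a new VM."""
        if cores > self.free_cores or memory_gb > self.free_memory_gb:
            raise ValueError(f"not enough free resources for {name}")
        self.free_cores -= cores
        self.free_memory_gb -= memory_gb
        self.vms[name] = {"cores": cores, "memory_gb": memory_gb}
        return self.vms[name]

host = Host(cores=32, memory_gb=128)
host.provision_vm("web-01", cores=4, memory_gb=16)
host.provision_vm("db-01", cores=8, memory_gb=64)
print(host.free_cores, host.free_memory_gb)  # 20 48
```

The point of the sketch is the shift it captures: a "workstation" is now just an entry in a table, resized by an administrator rather than fixed at the factory.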
VMware has become, in some ways, synonymous with virtualization, partly through its branding (since "VM" is the abbreviation for virtual machine, VMware looks, at first glance, like the originator of VMs) and partly through capturing market share in this still-maturing industry.
Look on the web, and you'll see videos of smiling administrators talking about their triumphs over business problems using VMware products or other virtualization resources. But what do you have to get through to really achieve those results?
Getting Into the Literature
On a very basic level, getting conversant with virtualization starts with reading vendor literature and discovering what the products can do for your company. Taking VMware as an example, we can see that today's vendors offer a staggering number of services, each with its own proprietary name and functionality, presented in a kind of text-heavy alphabet soup that is often thrown at readers of introductory business resources.
For example, take VMware vRealize Automation, a single component of the cloud services the company offers.
Here is a top-level description of the service, direct from VMware's literature:
- "VMware vRealize Automation empowers IT to accelerate the delivery and ongoing management of personalized, business-relevant infrastructure, application and custom services while improving overall IT efficiency. Policy-based governance and logical application modeling assures that multi-vendor, multi-cloud services are delivered at the right size and service level for the task and (sic) that needs to be performed. Full lifecycle management assures resources are maintained a (sic) peak operating efficiency and release automation allows multiple application deployments to be kept in-synch through the development and deployment process. vRealize Automation turns IT into business enablers."
The wording of this is going to be pretty tough for anyone who is not already deeply involved in using these services. We might know, for example, that "policy-based governance" means applying consistent rules to resources and data through a "policy" set by a CIO, but we still don't have an inkling, at least not from this resource, of how that, or any other part of this, actually gets done.
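To make the jargon slightly less abstract, here is one plausible reading of "policy-based governance" as a sketch. Every rule, name and limit below is invented for illustration; real products express policies through their own configuration and UIs, but the underlying idea is the same: a central rule set that every provisioning request is checked against.

```python
# Hypothetical sketch of policy-based governance: requests for resources
# are approved or rejected against rules set centrally (e.g., by the
# CIO's office). The limits here are made up for the example.

POLICY = {
    "max_cores": 8,           # no single VM may exceed 8 vCPUs
    "max_memory_gb": 32,      # ...or 32 GB of RAM
    "allowed_tiers": {"dev", "test", "prod"},
}

def approve_request(request, policy=POLICY):
    """Return (approved, reason). Applying the same rules to every
    request is what makes the governance consistent."""
    if request["cores"] > policy["max_cores"]:
        return False, "too many cores"
    if request["memory_gb"] > policy["max_memory_gb"]:
        return False, "too much memory"
    if request["tier"] not in policy["allowed_tiers"]:
        return False, "unknown service tier"
    return True, "ok"

print(approve_request({"cores": 4, "memory_gb": 16, "tier": "dev"}))   # (True, 'ok')
print(approve_request({"cores": 16, "memory_gb": 16, "tier": "dev"}))  # (False, 'too many cores')
```

Whether vRealize Automation works anything like this internally is exactly the kind of question the marketing copy leaves unanswered.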
What this boils down to is that the first challenge, and in a way the first gatekeeper, is researching the services. Executives, or anyone else trying to make practical decisions, have to be fully up on the jargon and the technology, or they're likely to get lost in a sea of words.
Configuring and Managing Network Endpoints
Another big challenge in actually implementing virtualization involves those workstations mentioned above: the hardware components that were, and still are, the "workhorses" of enterprise hardware architectures, and the parts that end users see, hear and touch.
Businesses can reinvent their logical devices all they want, but they still need an end-user interface: a screen, a keyboard and something people can sit down at. Linking these endpoints to the virtualized network, however, can cause big headaches for administrators.
Anna Cmaylo is a sales and marketing associate at Stratodesk in Klagenfurt, Austria. The company has developed a product called NoTouch Desktop to help handle endpoint setups.
"When you think about an average organization that is transitioning to VDI," Cmaylo says,
"Despite choosing an effective solution for the server environment and infrastructure (VMware), the question of endpoint management often results in unforeseen headaches. A diverse hardware environment (various vendors, some old, some new, some desktop PCs, some thin clients, different versions of Windows, etc.) means that finding the right configuration for the clients can be cumbersome."
Cmaylo adds that in some cases companies actually lose functionality, because they feel the need to get rid of old desktop PCs or other hardware they see as outdated or obsolete. One problem, she says, is that these older machines may actually be more capable than newer, more expensive equipment. And upgrading hardware is expensive, which is one of the problems virtualization was supposed to solve in the first place.
Companies like Stratodesk develop products to help make endpoint administration consistent, so that instead of cobbling together old monitors and new thin client systems, administrators can take a more global approach.
Dealing with Storage
Another problem with virtualization setups is storage.
Parag Patkar is the CEO of Virtunet Systems, a company with five years of experience helping clients deal with I/O issues. Responding to our questions about virtualization hardships, Patkar explains how storage can be a significant bottleneck in these systems, and offers several reasons why.
One is that, in general, storage speeds haven't kept pace with the advances in CPU, memory and network transfer speeds that support today's virtualized systems.
Storage controllers, said Patkar, can also slow down traffic, and increased input/output to networked storage from big data centers can put a strain on servers.
Patkar also points to what he calls an "I/O blender" — "In virtualized servers, by the time sequential I/O from each VM gets through the hypervisor, it gets interspersed with I/O from other VMs on the same server," says Patkar, "and so the I/O pattern out of the virtualized server becomes mostly random. Since disk performance deteriorates rapidly with random I/O, this reduces storage performance further."
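The blender effect Patkar describes is easy to demonstrate with a toy simulation (this is our illustration, not vendor code): each VM issues perfectly sequential reads, but once the hypervisor interleaves the streams, almost no two adjacent requests hitting storage are consecutive.

```python
# Toy simulation of the "I/O blender": per-VM sequential reads become a
# near-random pattern once the hypervisor interleaves the streams.

def vm_stream(vm_id, start_block, n):
    """Sequential I/O from one VM: n consecutive disk-block addresses."""
    return [(vm_id, start_block + i) for i in range(n)]

def sequential_fraction(requests):
    """Count adjacent request pairs that target consecutive blocks."""
    blocks = [block for _, block in requests]
    hits = sum(1 for a, b in zip(blocks, blocks[1:]) if b == a + 1)
    return hits, len(blocks) - 1

# Three VMs, each reading five consecutive blocks in its own disk region.
streams = [vm_stream(vm, vm * 10_000, 5) for vm in range(3)]

# Served one VM at a time (no blending), the pattern is mostly sequential.
unblended = [req for stream in streams for req in stream]

# Interleaved by the hypervisor, the sequentiality disappears.
blended = [req for batch in zip(*streams) for req in batch]

print(sequential_fraction(unblended))  # (12, 14)
print(sequential_fraction(blended))    # (0, 14)
```

On spinning disks, where seeks are expensive, that loss of sequentiality is exactly why per-VM performance can crater even when each guest thinks it is doing well-behaved I/O.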
These issues are just some of what unwary businesspeople can look forward to when dipping a toe into the waters of virtualization. From provisioning resources to the back and forth between vendors and clients, network virtualization is not child's play. It's a new, and in some ways unwritten, industry, with plenty of puzzles for VM masters to figure out.