As energy costs rise and office space shrinks, energy- and space-saving solutions are at an absolute premium. In terms of pure economics, a wholesale migration to a virtualized environment looks like an obvious move. Overall, virtualization has been met with enthusiasm in the IT industry. There are still a few wrinkles, but it’s the boundless potential that has people really excited. Here we take a look at the pros and cons and let you decide.
A Little History of Virtualization
According to VMware’s official website, the practice of virtualization began in the 1960s, when IBM attempted to better partition mainframe computers in an effort to increase CPU utilization. The end result was a mainframe that could perform multiple operations simultaneously. With the onset of the 1980s and ’90s, the x86 architecture became the architecture of choice as distributed computing took hold within the IT industry. The proliferation of the x86 architecture effectively caused a mass exodus from virtualization as the client-server model began a rapid rise in popularity.
In 1998, VMware was founded by a group of researchers from the University of California, Berkeley, who were attempting to address some of the shortfalls of the x86 architecture. Chief among these was poor CPU utilization: in many x86 deployments, CPU utilization averages between 10 and 15 percent of total capacity. One of the primary reasons for this is the practice of dedicating one physical machine to each server role in order to maximize the performance of each individual server. This did enhance performance, but at the cost of hardware efficiency.
The Advantages of Virtualization
There’s no question that virtualization has become wildly popular within the IT industry, but why? Some of the more obvious reasons involve increased CPU utilization, better use of physical space, and the ability to standardize server builds. In terms of CPU utilization, more servers on one physical machine typically translates into more work performed by the CPU. So rather than receiving all Web traffic on one machine, all SMTP traffic on another, and all FTP traffic on yet another, it is possible to receive all of that traffic on one physical machine, thereby increasing CPU utilization. Doing this successfully, however, requires some discretion when placing multiple virtual machines on one host, as overloading the host can degrade performance.
The CPU utilization provided by virtualization indirectly affects space utilization. Keeping in mind the above-mentioned scenario where multiple servers are placed on one physical machine, it stands to reason that with virtualization, fewer physical machines are needed, and that as a result, less space is consumed.
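The consolidation arithmetic above can be sketched in a few lines. This is an illustrative example only, not anything from the article: the server names and the 70 percent safety ceiling are assumptions, while the 10-15 percent idle-utilization figures echo the numbers cited earlier.

```python
# Sketch: pack lightly loaded servers onto as few physical hosts as
# possible. Loads are fractions of one host's CPU capacity; the 0.70
# ceiling is an assumed headroom limit, not a published best practice.

def consolidation_plan(loads, ceiling=0.70):
    """Greedy first-fit packing of per-server CPU loads onto hosts."""
    hosts = []  # summed load per physical host
    for load in sorted(loads.values(), reverse=True):
        for i, used in enumerate(hosts):
            if used + load <= ceiling:
                hosts[i] += load
                break
        else:
            hosts.append(load)  # no host had room; provision another
    return hosts

# Three dedicated servers, each idling in the 10-15% range the article cites.
loads = {"web": 0.12, "smtp": 0.10, "ftp": 0.15}
hosts = consolidation_plan(loads)
print(f"physical machines needed: {len(hosts)} (was {len(loads)})")
print(f"host utilization: {[round(u, 2) for u in hosts]}")
# -> one host running at roughly 37% instead of three at 10-15% each
```

The same packing logic explains the space savings in the paragraph above: fewer physical machines means fewer rack units consumed.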
Finally, virtualization lends itself rather easily to the concepts of cloning, ghosting, snapshots, and any other type of replicating software currently available. The value in this is derived from the leeway it provides a system administrator for creating images of any operating system on the network. Creating custom images allows a system administrator to create a default build that can be replicated throughout the network. The time this saves when configuring additional servers is invaluable. (To learn more, check out Server Virtualization: 5 Best Practices.)
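The golden-image workflow described above can be illustrated with a small sketch. Note the hedging: the build fields and hostnames here are hypothetical, and real tooling (VMware templates, linked clones, snapshot utilities) operates on disk images rather than Python dictionaries.

```python
# Sketch: a "default build" (golden image) that an administrator clones
# and lightly customizes per server, rather than configuring each server
# from scratch. All fields and names below are invented for illustration.
import copy

GOLDEN_IMAGE = {
    "os": "linux",
    "patch_level": "2024-01",
    "packages": ["sshd", "ntp", "monitoring-agent"],
}

def clone_image(hostname, **overrides):
    """Copy the default build, then apply per-server settings."""
    image = copy.deepcopy(GOLDEN_IMAGE)  # never mutate the master image
    image["hostname"] = hostname
    image.update(overrides)
    return image

web = clone_image("web01", packages=GOLDEN_IMAGE["packages"] + ["httpd"])
mail = clone_image("smtp01", packages=GOLDEN_IMAGE["packages"] + ["postfix"])
print(web["hostname"], web["packages"])
```

The time savings the article mentions come from exactly this pattern: the baseline is built once, and each new server inherits it plus a handful of overrides.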
The Disadvantages of Virtualization
The majority of the established disadvantages of virtualization pertain primarily to security. The first, and perhaps most important, is the single point of failure. Put simply, if an organization’s Web server, SMTP server, and any other servers all run on the same physical machine, an enterprising young hacker only needs to perform a denial-of-service attack on that host to disable multiple servers within the network’s infrastructure. Shutting down an organization’s Web server can be devastating in and of itself, but taking out several servers at once can be positively catastrophic.
Second, a common security practice is to place an intrusion detection system (IDS) on multiple network interfaces within a given network. If configured properly, the IDS can be a useful tool for studying trends, heuristics, and other activity within the network. However, this becomes next to impossible in a virtualized environment where multiple operating systems share one host machine, because an IDS can only monitor physical network interfaces. In other words, the IDS works like a charm when monitoring ingress and egress traffic on the physical interface, but when traffic moves between virtual servers on the same host, the IDS hears nothing but crickets in the forest. (For related reading, check out The Dark Side of the Cloud.)
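The blind spot described above can be made concrete with a short sketch. This is a simplified model, not an IDS implementation: the VM names are hypothetical, and the rule it encodes is simply that traffic crosses the physical interface only when at least one endpoint lives outside the host.

```python
# Sketch: which flows an IDS tapping the physical NIC can see. Traffic
# between two VMs on the same host stays on the virtual switch and never
# reaches the physical interface, so the sensor never observes it.

HOST_VMS = {"web-vm", "smtp-vm", "ftp-vm"}  # guests on one physical machine

def visible_to_physical_ids(src, dst):
    """A flow crosses the physical NIC only if an endpoint is external."""
    return src not in HOST_VMS or dst not in HOST_VMS

flows = [
    ("internet-client", "web-vm"),   # ingress: IDS sees it
    ("web-vm", "smtp-vm"),           # inter-VM: the blind spot
    ("smtp-vm", "external-relay"),   # egress: IDS sees it
]
for src, dst in flows:
    status = "visible" if visible_to_physical_ids(src, dst) else "blind spot"
    print(f"{src} -> {dst}: {status}")
```

In practice, administrators work around this by mirroring virtual-switch ports to a sensor or running a virtual IDS appliance on the host, though neither option existed in early virtualized deployments.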
A Must for IT
The effort by most companies to stay afloat technologically has given birth to an insatiable thirst for more capacity and more performance. Given the need to do more with less, it should come as no surprise that virtualization has quickly become a staple of system administration. Until the next innovation in CPU architecture takes the IT world by storm, virtualization will continue to be regarded as an absolute must for any reputable IT network.