One of the frustrations of going virtual for applications and desktops is performance. No one wants to wait longer than a second or two for an application to appear after launch. As users, we expect our applications to appear immediately after we double-click the icon. We don’t realize what goes on in the background to deliver those applications from servers, through firewalls and load balancers, over the air or through the wires to our desktops and mobile devices, nor do we care. Our collective patience has worn thin with promises of better, faster, more secure technology, and it’s time for a “put up or shut up” moment from vendors and support staff alike. In turn, vendors and support staff share our pain and have responded with acceleration technologies that deliver performance at or near locally installed levels.
For users, it’s all about speed, but architects, system administrators and CIOs aren’t only looking for faster response to user double-clicks; they’re also looking for scalability, improved security and longer technology life expectancies than ever before. In the end, users are vendors’ and support staff’s harshest critics, and for that reason an exploration of virtual application techniques and technologies is at hand. This article examines five ways to accelerate virtual applications. The five solutions are in no particular order, but all focus on one of three key areas for optimization and acceleration: infrastructure, application code and bandwidth.
WAN and LAN Optimization
You can think of WAN and LAN optimization as a bandwidth solution, where the ultimate goal is to move more data over a network pipeline more efficiently. Because application performance is so critical to end users, some ingenious methods have emerged for delivering more content in less time, such as creating a content delivery network (CDN) that essentially moves the data closer to the consumer, or end user. Moving the data closer to the user decreases latency because the data traverses fewer “hops,” or networks, to arrive at its destination. Most cloud service providers already have CDNs in place to help application owners deliver distributed content close to its consumers.
Load balancing optimizes bandwidth by spreading client requests among multiple servers or multiple locations to better share the burden of application delivery. Load balancers enhance application delivery speed by removing the traffic jams that occur when many user requests hit a single application. They also increase reliability by routing each request to a server that isn’t overburdened with other requests.
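The rotation scheme at the heart of many load balancers can be sketched in a few lines. This is a minimal round-robin illustration, not a production balancer, and the server names are hypothetical:

```python
from itertools import cycle

# Hypothetical pool of application servers; the names are illustrative only.
SERVERS = ["app-01.example.com", "app-02.example.com", "app-03.example.com"]

class RoundRobinBalancer:
    """Distributes incoming requests evenly across a server pool."""

    def __init__(self, servers):
        self._pool = cycle(servers)

    def pick_server(self):
        # Each call hands the next request to the next server in rotation,
        # so no single machine absorbs the full request load.
        return next(self._pool)

balancer = RoundRobinBalancer(SERVERS)
assignments = [balancer.pick_server() for _ in range(6)]
print(assignments)
```

Real load balancers layer health checks, weighting and session affinity on top of this basic rotation, but the goal is the same: keep any one server from becoming the traffic jam.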
Increasing raw bandwidth between applications and clients seems like an obvious enhancement to make to speed application delivery. Who can argue that a gigabit network connection between the application infrastructure and the client computer is a bad thing? Even a poorly designed and conceived application will receive a significant performance boost by increasing bandwidth between the source and the target.
SSDs and Flash Arrays
SSDs and flash arrays seem to be the new “go to” technology for any kind of application performance enhancement. It is true that solid-state storage is far faster than spinning disks, but it’s also significantly more expensive. The solution may well be to use SSDs differently – as a cache for “hot” data instead of for data at rest. SSDs can deliver data far more quickly than spinning disks can, but some of that efficiency is lost in translation over the network and through various networking components. However, if one uses SSDs as a so-called “flash cache” on which to store frequently requested data, the results are impressive. Intel reports up to “12 times more performance on transactional database processing and up to 36 times faster processing of I/O intensive virtualized workloads.”
Using SSDs for data caching makes sense because of the speed at which cached data can be retrieved and placed into memory. And if SSDs are used purely for caching, significantly fewer of them have to be purchased to achieve the resulting performance boost. (For more on storage, see How to Optimize Your Enterprise Storage Solution.)
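The hot-data idea can be modeled with a small least-recently-used (LRU) cache standing in for the fast SSD tier in front of a slower backing store. This is a toy sketch of the tiering concept, not any vendor’s implementation; the block names and capacity are made up:

```python
from collections import OrderedDict

class FlashCache:
    """Toy model of a small, fast cache tier (the 'SSD') in front of a
    larger, slower backing store (the 'spinning disk'). LRU eviction
    keeps only the hot working set on the fast tier."""

    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.backing = backing_store   # slow tier: a dict standing in for disk
        self.cache = OrderedDict()     # fast tier
        self.hits = 0
        self.misses = 0

    def read(self, key):
        if key in self.cache:
            self.hits += 1
            self.cache.move_to_end(key)      # mark block as recently used
            return self.cache[key]
        self.misses += 1
        value = self.backing[key]            # slow-path read from "disk"
        self.cache[key] = value
        if len(self.cache) > self.capacity:  # evict the coldest block
            self.cache.popitem(last=False)
        return value

disk = {f"block{i}": f"data{i}" for i in range(100)}
cache = FlashCache(capacity=4, backing_store=disk)
for key in ["block1", "block2", "block1", "block3", "block1"]:
    cache.read(key)
print(cache.hits, cache.misses)  # the hot block1 hits the fast tier twice
```

In practice, storage controllers and operating systems do this transparently at the block level; the payoff is the same – hot data is served from the fast, expensive tier while cold data stays on cheap spinning disks.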
Virtual GPUs
Ask anyone who uses a CAD program, video editing software or even a project management application where he or she wants those applications loaded and you’ll hear a chorus of “locally.” Turning these graphics-intensive applications loose in a virtual environment spelled disaster until the release of virtual graphics processing unit (GPU) technology.
Virtual GPUs finally allow any workload to be placed into a virtual machine. The old-school CAD holdouts have now been assimilated, as have video editors and graphic designers. Even those who work in three dimensions now have a virtual presence thanks to virtual GPUs.
What makes this technology possible is that special GPU boards, compatible with virtual machine host systems, are installed in those hosts; their hardware attributes are then abstracted, or virtualized, so that virtual machines can use them.
Performance Optimized Software
Angry and frustrated system administrators will often tell you that fixing code isn’t their job. The ubiquitous problem, however, is that developers may be top-notch at programming an application yet have no clue – or desire to obtain a clue – about optimizing code for performance. Often the attitude is that more RAM, faster disks or more powerful CPUs will fix any performance-related issues in the code, and that’s true to some extent. In reality, though, fixing code is far less expensive and far easier than rebuilding an infrastructure simply to accelerate poorly written applications.
There are those, such as computer pioneer Donald Knuth, who said of optimizing computer code, “If you optimize everything, you will always be unhappy.” Mr. Knuth’s opinions notwithstanding, a balanced amount of code optimization should be performed and tolerated. But what about commercial programs that you purchase and deploy to your users? For example, the evergreen Microsoft Office suite is a standard set of applications that system administrators must make available to both local and remote users.
In the case of commercial programs that administrators have no leverage over, they must apply a multilayered performance enhancement strategy. Caching common application bits will be the administrator’s greatest tool for speeding up the delivery of large applications to users.
Any time that you read or hear the terms preloading, preprocessing or precompiling, the writer or speaker is most likely referring to some sort of caching. Application caching usually refers to the loading of certain static and some dynamic pieces of content into a memory buffer so that it’s easily retrievable upon request. The only bits delivered all the way through the pipeline are those that have to do specifically with the user or other time- or session-dependent data. Everything else is cached into memory.
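The static-versus-session split described above can be sketched as follows. The asset names and the loader function are hypothetical; the point is that content shared by all users is fetched once and then served from memory, while user-specific bits are generated on every request:

```python
# Static assets shared by all users are loaded once into memory;
# only user/session-specific data is generated per request.
STATIC_CACHE = {}

def load_static_asset(name):
    # Stand-in for an expensive fetch from disk or an upstream server.
    return f"<contents of {name}>"

def get_asset(name):
    if name not in STATIC_CACHE:
        STATIC_CACHE[name] = load_static_asset(name)  # populate on first request
    return STATIC_CACHE[name]  # every later request is a cheap memory read

def render_page(user):
    header = get_asset("header.html")    # cached, common to all users
    footer = get_asset("footer.html")    # cached, common to all users
    greeting = f"Welcome back, {user}!"  # session-dependent, never cached
    return "\n".join([header, greeting, footer])

print(render_page("alice"))
print(render_page("bob"))  # static parts come straight from memory
```

Application delivery platforms apply the same split at much larger scale, caching whole application binaries and images rather than page fragments, but the cost model is identical: pay the slow fetch once, then serve from memory.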
Caching puts less stress on storage, on network bandwidth and on CPUs. The data waits in memory until called upon and then proceeds on its much shorter journey to the end user. Most technologies combine caching with location to deliver content more quickly. In other words, common data – data common to all users – is placed into the aforementioned CDNs and then delivered to users who are close to the requested data. Some solutions go so far as to cache data locally at remote or satellite sites so that those common bits reside where they’re consumed and don’t have to be pulled fresh across the WAN or an internet link.
Caching is often a preferred application acceleration method because it’s far less costly than comparably performing solutions that rely on infrastructure enhancements. (To learn more about caching, see Which Write is Right? A Look at I/O Caching Methods.)
Perhaps the basic rule of thumb when attempting to optimize or accelerate virtual applications in any environment is to first try caching and then supplement that strategy with other technologies. Caching is the least expensive solution and is the least invasive one as well. The best advice is to purchase plenty of RAM for memory caching and SSDs for hot data caching. Do try to keep costs manageable, but remember that when you spend money on infrastructure and on software, you can amortize it over the life of the technology and spread it out on a per user basis to make it easier for management to digest. In the end, keep your users productive and happy and they’ll keep you gainfully employed.