Since the dawn of the cloud era, the enterprise has looked forward to seamlessly offloading excess workloads to third-party virtual infrastructure – a practice known as cloud bursting. But while technologically possible, this prize remains perpetually out of reach as a practical matter, even in hybrid environments that are supposed to provide robust connectivity between on-premises and remote data centers.

It turns out that the obstacles to this level of functionality are more formidable than initially thought, and even the use cases are not that strong, given the wildly different operating environments that characterize traditional and cloud-based architectures.

Performance Costs?

For one thing, says Gartner analyst Lauren Nelson, bursting places significant strain on both internal and external networks, very little of which has been abstracted to the point that it can support highly dynamic workflows. This means that to implement an effective bursting environment, most networks must be overprovisioned to handle peak loads, which drives up costs and leaves much of the bandwidth idle during normal operating periods. For this reason, many enterprises opt for a hosted private cloud, which provides the same level of performance and isolation as an on-premises data center but can more easily burst workloads onto the provider’s public resources. (For more on different types of cloud services, see Public, Private and Hybrid Clouds: What's the Difference?)
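The overprovisioning trade-off comes down to simple arithmetic: pay for peak capacity all the time, or pay for baseline capacity plus metered cloud resources during the spike. The sketch below makes that comparison concrete; all prices and load figures are hypothetical, chosen purely to illustrate the calculation.

```python
# Back-of-the-envelope comparison: provisioning on-premises for peak load
# versus provisioning for the baseline and bursting the overflow to a
# public cloud. Every number here is hypothetical.

def on_prem_peak_cost(peak_units, unit_cost_per_month):
    """Cost of permanently provisioning for the peak load."""
    return peak_units * unit_cost_per_month

def burst_cost(baseline_units, unit_cost_per_month,
               burst_unit_hours, cloud_cost_per_unit_hour):
    """Cost of provisioning for the baseline and renting the spike."""
    return (baseline_units * unit_cost_per_month
            + burst_unit_hours * cloud_cost_per_unit_hour)

# Example: baseline of 40 units, peak of 100 (60 extra units),
# with the spike lasting 72 hours in a month.
peak = on_prem_peak_cost(100, 500)         # 100 * 500 = 50,000
burst = burst_cost(40, 500, 60 * 72, 1.25) # 20,000 + 5,400 = 25,400
print(peak, burst)
```

The gap narrows as spikes grow longer or more frequent, which is one reason the economics of bursting are so sensitive to workload shape.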

Still, issues like interoperability and integration get in the way of completely seamless bursting. Like the enterprise data center, most cloud facilities feature a collection of hardware, software, virtualization and other solutions – even those built around customized platforms and open reference architectures. Every time one platform needs to query another or convert data from one format to another, a small amount of latency is introduced, and this can become noticeable to users as workloads increase and resource consumption starts to scale.

Even when workloads are successfully pushed across these technological divides, performance can vary dramatically from cloud to cloud. A key problem, says Kaseya’s Mike Puglia, is the fact that traditional data center applications are not designed to run in dynamic cloud environments, and vice versa. So even within the same application, dataflows internal to the data center may move much more quickly than those that must traverse the WAN to reach the cloud and back. And since most organizations lack visibility into their cloud provider’s infrastructure, it can be difficult, if not impossible, to determine exactly where the bottlenecks are and how to resolve them.
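Without visibility into the provider's side, about the best an organization can do is instrument its own request path and compare stage-by-stage timings. The sketch below shows the general idea; the stage names and the stand-in work are hypothetical placeholders for real on-prem processing and a real WAN round trip.

```python
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def span(name):
    """Record the wall-clock duration of a named stage of a request."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = timings.get(name, 0.0) + time.perf_counter() - start

# Hypothetical stages of a request that bursts to the cloud:
with span("local_processing"):
    sum(range(100_000))   # stand-in for on-prem work
with span("wan_round_trip"):
    time.sleep(0.05)      # stand-in for the WAN hop to the provider and back

slowest = max(timings, key=timings.get)
print(slowest)  # identifies which stage dominates the response time
```

Even this coarse breakdown tells you whether the bottleneck is on your side of the WAN or the provider's, which is often the most an enterprise can establish.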

Predictable Workloads Help

The news is not all bad, however. As tech writer Tyler Keen recently noted, bursting is a lot easier if you know when and by how much your workload will spike. An e-commerce environment that sees heavy traffic during the holidays, for example, can use a pre-configured cloud environment that dynamically scales to desired levels. In many cases, the environment is already linked to a limited cloud presence, so the enterprise is not exactly “bursting” data but consuming more of the provider’s resources than normal. To accomplish this, of course, application software will have to be tailored to support multi-instance environments, and this becomes more complicated as the app comes to rely on multiple third-party services.
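A predictable spike means the scaling decision can be made from the calendar rather than from live metrics. The following sketch shows one minimal way to express such a policy; the dates, instance counts and names are illustrative assumptions, not any particular provider's API.

```python
from datetime import date

# Hypothetical calendar-based scaling policy for a workload with a
# known holiday peak. All windows and counts are illustrative only.
HOLIDAY_WINDOWS = [(date(2024, 11, 25), date(2024, 12, 26))]
BASELINE_INSTANCES = 4
PEAK_INSTANCES = 20

def desired_instances(today):
    """Return the instance count the pre-configured environment should run."""
    for start, end in HOLIDAY_WINDOWS:
        if start <= today <= end:
            return PEAK_INSTANCES
    return BASELINE_INSTANCES

print(desired_instances(date(2024, 12, 1)))  # inside the holiday window
print(desired_instances(date(2024, 7, 1)))   # normal operating period
```

In practice a policy like this would feed an orchestrator's scaling hook, and it only works if the application itself tolerates running as many interchangeable instances.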

But shouldn’t all of these issues fade away with the rise of virtual networking and the software-defined data center (SDDC)? Perhaps not entirely, says Dave Cope, senior director of market development for Cisco CloudCenter. While these and other developments certainly help, the real breakthrough will come from abstraction at the application level and the development of cloud-independent application profiles. This will deliver a central point of visibility and control, allowing the enterprise to manage its workflows regardless of where or how they are supported. Even as the app transitions between public, private and hybrid resources, users see a consistent interface while the app itself is continuously integrated and upgraded through advanced DevOps processes. (To learn more about software-defined data centers, see The Software-Defined Data Center: What's Real and What's Not.)
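The core idea behind a cloud-independent profile is that the application declares what it needs, while per-cloud adapters decide how to satisfy that declaration. The sketch below illustrates the pattern; the class, field and adapter names are hypothetical and do not correspond to CloudCenter or any real product.

```python
from dataclasses import dataclass, field

@dataclass
class AppProfile:
    """Cloud-independent declaration of what an application requires."""
    name: str
    cpu_cores: int
    memory_gb: int
    services: list = field(default_factory=list)  # e.g. ["queue", "object-store"]

def deploy(profile, adapter):
    """Hand the same profile to any target-specific adapter."""
    return adapter(profile)

# Two stand-in adapters, one per target environment:
def public_cloud_adapter(profile):
    return f"{profile.name}: {profile.cpu_cores} vCPU on public cloud"

def on_prem_adapter(profile):
    return f"{profile.name}: {profile.cpu_cores} cores on-prem"

web = AppProfile("storefront", cpu_cores=8, memory_gb=32, services=["queue"])
print(deploy(web, public_cloud_adapter))
print(deploy(web, on_prem_adapter))
```

Because the profile never mentions a specific infrastructure, the same declaration can follow the app across public, private and hybrid resources, which is the portability the approach promises.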

This approach also allows the enterprise to become more cloud-like without the costly, time-consuming process of converting legacy infrastructure into private clouds. Using an application-centric management and orchestration platform, organizations can convert their entire application portfolio to a consumption-based services model that maintains performance and consistency across any and all infrastructure configurations. This is a tall order for many enterprises, however, as it fundamentally shifts the relationship between infrastructure, applications, data, users and even the business model itself.

The Future of Bursting

Nevertheless, this is exactly the journey that today’s enterprise faces as it confronts the realities of digital transformation and the rise of the service-driven economy. Today’s data users have little patience for latency, service interruptions or other excuses that prevent them from getting what they want when they want it, but traditional data center infrastructure is not flexible enough to support this level of functionality, while the cloud does not always represent the most cost-effective solution.

At this point, a fully seamless, distributed architecture is still a work in progress, but with the major technology limitations clearly identified, it isn’t a stretch to envision an environment in which data and applications will one day freely traverse multiple resource configurations and dynamically self-assemble their optimal support infrastructure.

Once that is accomplished, bursting data from one set of resources to another should be a snap.