An In-Depth Look at Cloud Bursting
Cloud bursting is still a work in progress, but it is getting closer to practical reality.

Since the dawn of the cloud era, the enterprise has looked forward to seamlessly offloading excess workloads to third-party virtual infrastructure – a practice known as cloud bursting. But while technologically possible, this prize remains perpetually out of reach as a practical matter, even in hybrid environments that are supposed to provide robust connectivity between on-premises and remote data centers.
It turns out that the obstacles to this level of functionality are more formidable than initially thought, and even the use cases are not that strong given the wildly different operating environments found in traditional and cloud-based architectures.
Performance Costs?
For one thing, says Gartner analyst Lauren Nelson, bursting places significant strain on both internal and external networks, few of which have been abstracted to the point that they can support highly dynamic workflows. This means that to implement an effective bursting environment, most networks must be overprovisioned to handle peak loads, which drives up costs and leaves much of the bandwidth idle during normal operating periods. For this reason, many enterprises opt for a hosted private cloud, which provides the same level of performance and isolation as an on-premises data center but can more easily burst workloads onto the provider’s public resources. (For more on different types of cloud services, see Public, Private and Hybrid Clouds: What's the Difference?)
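A rough back-of-the-envelope comparison shows why that overprovisioning stings. The figures below are purely illustrative assumptions, not vendor pricing:

```python
# Illustrative cost comparison: provisioning a network for peak load 24/7
# versus owning baseline capacity and paying a premium to burst.
# All figures are hypothetical assumptions, not real pricing.

PEAK_GBPS = 10                 # bandwidth needed during the busiest periods
BASELINE_GBPS = 2              # bandwidth needed the rest of the time
PEAK_HOURS_PER_MONTH = 30      # how long the spikes actually last
HOURS_PER_MONTH = 730

OWNED_RATE = 0.50              # $/Gbps-hour, amortized always-on capacity
BURST_RATE = 1.25              # $/Gbps-hour, on-demand burst premium

# Option 1: overprovision for peak around the clock.
overprovisioned = PEAK_GBPS * HOURS_PER_MONTH * OWNED_RATE

# Option 2: own the baseline, pay the burst premium only during spikes.
burst = (BASELINE_GBPS * HOURS_PER_MONTH * OWNED_RATE
         + (PEAK_GBPS - BASELINE_GBPS) * PEAK_HOURS_PER_MONTH * BURST_RATE)

print(f"Provisioned for peak: ${overprovisioned:,.0f}/month")   # $3,650
print(f"Baseline plus burst:  ${burst:,.0f}/month")             # $1,030
```

Even at a steep burst premium, paying for peak capacity only when it is actually used can be far cheaper, which is exactly why the bursting model remains so appealing despite the obstacles.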
Still, issues like interoperability and integration get in the way of completely seamless bursting. Like the enterprise data center, most cloud facilities feature a mix of hardware, software, virtualization and other solutions – even those built around customized platforms and open reference architectures. Every time one platform must query another, or data must be converted from one format to another, a small amount of lag is introduced, and this becomes noticeable to users as workloads grow and resource consumption starts to scale.
Even when workloads are successfully pushed across these disparate environments, performance can vary dramatically from cloud to cloud. A key problem, says Kaseya’s Mike Puglia, is that traditional data center applications are not designed to run in dynamic cloud environments, and vice versa. So even within the same application, dataflows internal to the data center may move much more quickly than those that must traverse the WAN to reach the cloud and back. And since most organizations lack visibility into their cloud provider’s infrastructure, it can be difficult, if not impossible, to determine exactly where the bottlenecks are and how to resolve them.
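In practice, teams without provider-side visibility often fall back on crude end-to-end probes from the application itself. A minimal sketch, assuming hypothetical on-premises and cloud health-check endpoints:

```python
# Compare round-trip latency to an on-premises service and its cloud-hosted
# counterpart. The endpoint URLs are hypothetical placeholders.
import time
import urllib.request

ENDPOINTS = {
    "on-prem": "http://app.internal.example/healthz",
    "cloud": "https://app.cloud.example/healthz",
}

def mean_rtt_ms(url: str, samples: int = 5) -> float:
    """Average round-trip time for a simple GET, in milliseconds."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
        total += time.perf_counter() - start
    return total / samples * 1000

for name, url in ENDPOINTS.items():
    print(f"{name}: {mean_rtt_ms(url):.1f} ms")
```

A probe like this says nothing about where inside the provider's stack the time goes, but it at least makes the WAN penalty visible before workloads are committed to the far side of it.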
Predictable Workloads Help
The news is not all bad, however. As tech writer Tyler Keen noted recently, bursting is a lot easier if you know when and by how much your workload will spike. An e-commerce environment that sees heavy traffic during the holidays, for example, can utilize a pre-configured cloud environment that dynamically scales to desired levels. In many cases, the environment is already linked to a limited cloud presence, so the enterprise is not exactly “bursting” data but consuming more of the provider’s resources than normal. To accomplish this, of course, application software will have to be tailored to support multi-instance environments, and this becomes more complicated as the app comes to rely upon multiple third-party services.
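For a case like holiday traffic, that pre-configuration often takes the form of a scheduled scaling rule. Here is a sketch using AWS Auto Scaling through boto3 – the group name, dates and fleet sizes are all assumptions for illustration:

```python
# Pre-configure a seasonal scale-out and scale-in for a predictable spike.
# The group name, dates and sizes are illustrative assumptions.
from datetime import datetime, timezone
import boto3

autoscaling = boto3.client("autoscaling")

# Grow the fleet ahead of the holiday rush...
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="ecommerce-web",
    ScheduledActionName="holiday-scale-out",
    StartTime=datetime(2024, 11, 25, tzinfo=timezone.utc),
    MinSize=10,
    MaxSize=40,
    DesiredCapacity=20,
)

# ...and shrink it back once traffic subsides.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="ecommerce-web",
    ScheduledActionName="holiday-scale-in",
    StartTime=datetime(2025, 1, 6, tzinfo=timezone.utc),
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
)
```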
But shouldn’t all of these issues fade away with the rise of virtual networking and the software-defined data center (SDDC)? Perhaps not entirely, says Dave Cope, senior director of market development for Cisco CloudCenter. While these and other developments certainly help, the real breakthrough will come from abstraction at the application level and the development of cloud-independent application profiles. These would provide a central point of visibility and control, allowing the enterprise to manage its workflows regardless of where or how they are supported. As the app transitions between public, private and hybrid resources, users see a consistent interface even as the app itself is continuously integrated and upgraded through advanced DevOps processes. (To learn more about software-defined data centers, see The Software-Defined Data Center: What's Real and What's Not.)
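What a cloud-independent application profile might look like is easier to see in code. A minimal sketch follows – the fields and deployment targets are illustrative assumptions, not Cisco CloudCenter's actual schema:

```python
# A cloud-independent application profile: the app declares what it needs,
# and a per-cloud adapter decides how to satisfy it. Field names and targets
# are illustrative assumptions, not any vendor's actual schema.
from dataclasses import dataclass, field

@dataclass
class AppProfile:
    name: str
    cpu_cores: int
    memory_gb: int
    min_instances: int
    max_instances: int
    services: list[str] = field(default_factory=list)  # e.g. database, cache

def deploy(profile: AppProfile, target: str) -> None:
    """Translate one abstract profile into a deployment on a concrete target."""
    # A real orchestrator would map the profile onto VM shapes, container
    # specs or PaaS settings appropriate to each environment.
    print(f"{profile.name} -> {target}: {profile.cpu_cores} vCPU / "
          f"{profile.memory_gb} GB, {profile.min_instances}-"
          f"{profile.max_instances} instances, services={profile.services}")

storefront = AppProfile("storefront", cpu_cores=4, memory_gb=16,
                        min_instances=2, max_instances=20,
                        services=["postgres", "redis"])

# The same profile drives every environment the app may burst across.
for target in ("on-prem-vmware", "aws", "azure"):
    deploy(storefront, target)
```

Because the profile, not the infrastructure, becomes the unit of management, the same definition can follow the app wherever it runs.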
This approach also allows the enterprise to become more cloud-like without the costly, time-consuming process of converting legacy infrastructure into private clouds. Using an application-centric management and orchestration platform, organizations can convert their entire application portfolio to a consumption-based services model that maintains performance and consistency across any and all infrastructure configurations. This is a tall order for many enterprises, however, as it fundamentally shifts the relationship between infrastructure, applications, data, users and even the business model itself.
The Future of Bursting
Nevertheless, this is exactly the journey that today’s enterprise faces as it confronts the realities of digital transformation and the rise of the service-driven economy. Today’s data users have little patience for latency, service interruptions or other excuses that keep them from getting what they want when they want it, but traditional data center infrastructure is not flexible enough to support this level of functionality, while the cloud does not always represent the most cost-effective solution.
At this point, a fully seamless, distributed architecture is still a work in progress, but with the major technology limitations clearly identified, it isn’t a stretch to envision an environment in which data and applications will one day freely traverse multiple resource configurations and dynamically self-assemble their optimal support infrastructure.
Once that is accomplished, bursting workloads from one set of resources to another should be a snap.
Written by Arthur Cole | Contributor

Arthur Cole is a freelance technology journalist who has been covering IT and enterprise developments for more than 20 years. He contributes to a wide variety of leading technology web sites, including IT Business Edge, Enterprise Networking Planet, Point B and Beyond and multiple vendor services.