The Top 3 Challenges for Implementing Public Cloud
Organizations should consider these points before implementing public cloud.

Deploying resources on the public cloud is incredibly easy – so easy, in fact, that even business managers can do it. But deploying resources and managing them are very different things, and most organizations are quickly discovering that as their data environments scale, so do the challenges.
Most of the issues that arise in the public cloud can be summed up under the mantle of shadow IT – the practice by which users create, and often abandon, resources without IT’s authorization or even knowledge. This can result in lost or uncoordinated data, cost overruns, security risks and a wealth of other problems. (To learn about different types of cloud services, see Public, Private and Hybrid Clouds: What's the Difference?)
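Catching these orphaned resources early is largely a matter of routine auditing. The sketch below is one minimal illustration, assuming an AWS account, the boto3 SDK and a hypothetical convention that every instance carries an "Owner" tag; the tag name and the policy itself are examples for the sake of the sketch, not a recommendation for any particular provider.

```python
# Minimal sketch: flag EC2 instances that lack an "Owner" tag, a common
# symptom of shadow IT. Assumes AWS credentials are configured; the
# "Owner" tag convention is an example, not a standard.
import boto3

def find_untagged_instances(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    orphans = []
    # describe_instances is paginated; walk every page
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                if "Owner" not in tags:
                    orphans.append(instance["InstanceId"])
    return orphans

if __name__ == "__main__":
    for instance_id in find_untagged_instances():
        print(f"No owner recorded for {instance_id} -- follow up with the team")
```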
But even when everything is on the up and up, the enterprise can still run into trouble simply because cloud resources are not consumed, managed or utilized the same way as local data center resources. Here, then, are the top three challenges that tend to prevent cloud infrastructure from achieving its maximum value:
Compliance
According to Dereje Yimam and Eduardo B. Fernandez, technology researchers at Florida Atlantic University, maintaining compliance in the cloud is problematic for a number of reasons. For one thing, there is a distinct lack of common cloud reference architectures. This does not completely derail compliance efforts, but it makes them considerably harder than they should be. With such a wide variety of architectural styles across multiple cloud providers, the enterprise struggles to maintain compliance across distributed workloads and finds it difficult to assess individual providers’ strengths and weaknesses before, or even after, data has been migrated.
Compliance can also be hampered by an inability to maintain full access and control over cloud-based environments. Most organizations that are subject to strict compliance rules will undoubtedly spell their requirements out in the service-level agreement, but without direct access to underlying infrastructure, enforcement of these requirements is a matter of trust, and violations are often detected only after data has been breached. (For more on compliance, see Beyond Governance and Compliance: Why IT Security Risk Is What Matters.)
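Some organizations respond by independently verifying whatever settings the provider’s APIs do expose, rather than relying on the service-level agreement alone. The following is a rough sketch along those lines, assuming AWS S3 and the boto3 SDK; the two checks shown (default encryption and a public access block) are illustrative stand-ins for whatever a given compliance regime actually mandates.

```python
# Rough sketch: verify two illustrative compliance settings on S3 buckets --
# default encryption and a public access block. The choice of checks is an
# assumption for the example; real requirements come from the SLA/regulation.
import boto3
from botocore.exceptions import ClientError

def audit_buckets():
    s3 = boto3.client("s3")
    findings = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_bucket_encryption(Bucket=name)
        except ClientError:
            findings.append((name, "no default encryption configured"))
        try:
            block = s3.get_public_access_block(Bucket=name)
            cfg = block["PublicAccessBlockConfiguration"]
            if not all(cfg.values()):
                findings.append((name, "public access not fully blocked"))
        except ClientError:
            findings.append((name, "no public access block configured"))
    return findings

if __name__ == "__main__":
    for bucket_name, issue in audit_buckets():
        print(f"{bucket_name}: {issue}")
```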
The enterprise should also be aware that the public cloud faces unique security threats that don’t exist, or at least are greatly diminished, in local infrastructure. Most cloud workloads are hosted on highly partitioned, but nonetheless shared, hardware, so one user’s problem can impact another. And since cloud resources are often provisioned by people who simply want to get their work done, security is not always a high priority. However, one up-and-coming option – autonomic virtual monitoring – can help mitigate this risk.
Costs
It may seem strange to list this as a challenge, given that the cloud generally supports data loads at a fraction of the cost of a traditional data center, but as experience grows, so does the realization that the sub-penny-per-GB come-on is rarely the whole story.
In many cases, the cloud’s rapid and easy scalability is the primary cost driver. When coupled with its self-service provisioning options, hosted environments can quickly scale up and out to extreme levels, ultimately pushing operational costs beyond the capital expenses of owned and operated data facilities. This trend is most often observed in technology startups, which launch on full cloud infrastructure but eventually start building their own IT as their business grows.
Enterprise executives should also realize that even though resources are cheaper in the cloud, management costs are not. No matter where an app is hosted, it still requires a technician to monitor and maintain it, which means labor costs tend to scale as cloud deployments become more prevalent. This is one of the reasons why many enterprise workloads are being handed over to managed service providers, which provide not just the infrastructure to support applications and data, but the people to oversee them. Of course, this level of service also comes at higher price points than basic cloud.
At the same time, cost comparisons between cloud and in-house infrastructure often fail to take into consideration items like connectivity, customization, backup and recovery, and a range of other factors. In most cases, the cloud still provides a lower-cost option, but the savings are not nearly as dramatic as the initial sales pitch suggests and, as mentioned above, these costs can quickly scale up. Public cloud management software can help streamline operations and ensure a more successful, less costly cloud implementation.
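A quick back-of-the-envelope comparison shows why the headline rate misleads. Every figure in the sketch below is a made-up placeholder rather than any provider’s actual pricing; the point is simply that backup copies, egress and management labor can dwarf the raw storage line.

```python
# Back-of-the-envelope monthly cost sketch. Every number here is a
# made-up placeholder for illustration, not a real provider's price list.
STORAGE_TB = 50                  # data kept in the cloud
STORAGE_PER_GB = 0.023           # headline "pennies per GB" rate
EGRESS_TB = 10                   # data pulled back out each month
EGRESS_PER_GB = 0.09             # outbound transfer charge
BACKUP_COPIES = 2                # extra copies kept for recovery
ADMIN_HOURS = 40                 # monitoring / maintenance labor per month
ADMIN_RATE = 75                  # fully loaded hourly cost

storage = STORAGE_TB * 1024 * STORAGE_PER_GB * (1 + BACKUP_COPIES)
egress = EGRESS_TB * 1024 * EGRESS_PER_GB
labor = ADMIN_HOURS * ADMIN_RATE

print(f"Headline storage only: ${STORAGE_TB * 1024 * STORAGE_PER_GB:,.0f}")
print(f"With backups, egress and labor: ${storage + egress + labor:,.0f}")
```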
Performance
Performance in the cloud is difficult to measure because the metrics can vary widely across CPU, memory, networking and other elements. Most enterprises are challenged enough just keeping track of their own diverse infrastructure, let alone resources that may be distributed across a number of third-party systems and providers.
Compounding the problem is a lack of visibility into cloud infrastructure, which makes it difficult to assess the performance characteristics of various workloads as well as the resource consumption patterns of the hosted environment. Without this insight, the enterprise has no way of knowing whether it is getting optimal support from the resources it is paying for, nor any clear way of improving its configurations or processes to adjust to changing business requirements. Ultimately, this lack of visibility into cloud infrastructure forces the enterprise to gauge performance at the application layer, which generally does not reveal problems until users are already aware of them.
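In practice, gauging performance at the application layer often amounts to probing the application from the outside. The snippet below is a minimal sketch of such a probe using only the Python standard library; the URL, sample count and threshold are placeholders, and a real monitoring setup would track these numbers over time.

```python
# Minimal application-layer probe: time a handful of HTTP requests and
# report average and worst-case latency. URL, sample count and threshold
# are placeholders -- real monitoring would track trends over time.
import time
import urllib.request

URL = "https://example.com/health"   # hypothetical health endpoint
SAMPLES = 10
THRESHOLD_MS = 500

latencies = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=5) as response:
        response.read()
    latencies.append((time.perf_counter() - start) * 1000)

avg = sum(latencies) / len(latencies)
worst = max(latencies)
print(f"avg {avg:.0f} ms, worst {worst:.0f} ms")
if worst > THRESHOLD_MS:
    print("Latency above threshold -- but the metric says nothing about why")
```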
So what is to be done about these challenges? Increasingly, the enterprise is turning toward automation to give the cloud environment a high degree of autonomy when it comes to building and maintaining the data ecosystem. As workloads become more complex and in need of faster and more dynamic support, operations will involve too many touch points for even an army of IT administrators to handle. As today’s automated platforms evolve through artificial intelligence and machine learning, the enterprise will find that its clouds become increasingly efficient and effective simply by operating as needed.
It’s been a tenet of technological advancement that for every challenge there is a solution. These days, the enterprise often has a plethora of solutions to choose from, which itself can be a challenge when it comes to consistently deploying the right one. But with the broad federation of cloud infrastructure and the increased prevalence of automated, abstracted architectures, most organizations will find that wrong turns in the cloud can be quickly corrected, while successful solutions can be expanded and improved with far fewer complications than in traditional data architectures.
Written by Arthur Cole | Contributor

Arthur Cole is a freelance technology journalist who has been covering IT and enterprise developments for more than 20 years. He contributes to a wide variety of leading technology websites, including IT Business Edge, Enterprise Networking Planet, Point B and Beyond, and multiple vendor services.