Turbonomic: Bringing Autonomics to Virtualization
VMTurbo is now Turbonomic, with a focus on autonomic systems that can manage themselves and make adjustments as needed.
In a dramatic move to highlight some very interesting new software capabilities, VMTurbo officially announced in August that its name would become Turbonomic. The new name represents the three main pillars of the company's product: speed, economy and autonomy. It also points to a major shift happening in technology toward autonomic systems and management.
Turbonomic and Autonomics
Not sure what autonomic computing is? Think of the autonomic nervous system, the human regulatory system that helps keep us alive. It's the part of the peripheral nervous system that manages vitals such as heart rate, respiratory rate and pupillary dilation. In an autonomic system, everything is maintained and regulated automatically – you don't remind yourself to breathe, or decide that you're going to get a rush of adrenaline in response to a conflict or challenge; your body senses what's required and makes the necessary adjustments to maintain the system. Just as the autonomic nervous system does its work without any conscious input, Turbonomic regulates data center management without constant human intervention. (To learn more about autonomics and Turbonomic, see Autonomic Systems and Elevating Humans from Being Middleware: Q&A with Ben Nye, CEO of Turbonomic.)
Automation for a New Age
For a number of years, professionals have been talking about things like resource allocation for virtual machines, workload sharing and other types of management tasks necessary in a complex hardware and software environment. Over that time, the assumption has been that better leadership and decision-making by human administrators would be the key to progress in this field.
Turbonomic challenges that assumption by taking humans out of the decision loop. By abstracting workloads and building decision automation systems, Turbonomic essentially creates a virtual marketplace where applications, virtual machines, and hardware and software elements metaphorically buy and sell resources.
Instead of waiting for the resources that they need, virtual machines make independent decisions to fulfill their demands, which assures performance and self-regulates the system to make everything run much more smoothly. (For more on automation, see Why Automation Is the New Reality in Big Data Initiatives.)
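The marketplace idea above can be sketched in a few lines of code. This is a hypothetical illustration, not Turbonomic's actual algorithm: each host "sells" capacity at a price that rises steeply as it fills up, and each VM "buys" from the cheapest host, so load naturally spreads away from congested hardware. All names and the pricing formula are assumptions for illustration.

```python
# Toy economic-scheduling sketch (illustrative only, not Turbonomic's
# implementation): hosts quote a price that grows with utilization,
# and each VM greedily buys capacity from the cheapest host.

def price(used: float, capacity: float) -> float:
    """Price rises sharply as a host approaches full utilization."""
    utilization = used / capacity
    return 1.0 / (1.0 - utilization) if utilization < 1.0 else float("inf")

def place(vms: dict, hosts: dict) -> dict:
    """Place each VM on whichever host would quote the lowest price."""
    placement = {}
    for vm, demand in vms.items():
        cheapest = min(
            hosts,
            key=lambda h: price(hosts[h]["used"] + demand,
                                hosts[h]["capacity"]),
        )
        hosts[cheapest]["used"] += demand  # the "purchase" consumes capacity
        placement[vm] = cheapest
    return placement

hosts = {"host-a": {"used": 6.0, "capacity": 8.0},
         "host-b": {"used": 2.0, "capacity": 8.0}}
vms = {"vm-1": 1.0, "vm-2": 1.0}
print(place(vms, hosts))  # both VMs land on the less-loaded host-b
```

Because congested hosts quote higher prices, the market self-regulates: no central scheduler has to notice a bottleneck, the "economy" routes demand around it.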
There is a lot of detail built into Turbonomic's functionality, but at the bottom of it is the big secret: engineers can build proactive, anticipatory systems that make their own decisions in real time. Turbonomic isn't reactive, it's proactive, and it automates many of the labor-intensive processes that administrators have taken for granted throughout the era of virtualization.
Correlation to Network Security Innovations
In a way, what's going on with Turbonomic is a little bit like the innovations that technology leaders have made in the network security sector.
Throughout the security community, as cyberattacks ramp up and new types of hacking appear on the horizon, there's been a major trend toward defenses that go beyond the perimeter – multi-segmented network security monitoring systems that do more than a firewall or antivirus program can. There are many ways to build these systems, but they all revolve around the same central principle: businesses can't afford to simply wait for an attack; they must preempt the digital threats that security vulnerabilities invite.
In the same way, Turbonomic goes beyond linear administration and programmable task management by bringing new automation to processes that, on their own, are relatively new. Administrators who are familiar with scanning software tools for things like CPU bottlenecks or trying to push resources to starving virtual machines know that these things take work. Turbonomic removes the human middleware from the equation, increasing efficiency, speed and resource management.
Making VM handling, resource allocation and workloads autonomic is a big shift, in part because it moves the central control beyond basic human management.
Investing in Cloud and On-Premises Systems
The development of Turbonomic also takes place in the context of the rapid sea change toward cloud-based systems.
Companies are investing in on-premises cloud and hybrid virtualization systems, but placing much of their emphasis on the cloud (or on cloud connection points). Turbonomic makes cost fully transparent through its interface, and because it can manage both on-premises and cloud-based systems, it helps businesses bring real-time decision-making to platforms like AWS and Azure, resolving cost and performance issues along the way.
Essentially, Turbonomic offers administrators a choice. They can sign off on decisions about reconfiguring, moving or suspending workloads, decommissioning hardware and other critical supply-and-demand adjustments, or they can let Turbonomic run on “autopilot,” addressing these issues on its own. In that way, Turbonomic is really a kind of autonomic system for the cloud, and it arrives at exactly the right time.
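The two operating modes described above can be sketched as a simple dispatch policy. This is a hypothetical illustration of the pattern, not Turbonomic's API: in "manual" mode, proposed actions queue for an administrator's sign-off; in "autopilot" mode, they execute immediately.

```python
# Illustrative sketch (not Turbonomic's API) of manual-approval vs.
# autopilot handling of proposed actions such as moving a workload
# or decommissioning hardware.

def run_actions(actions: list, mode: str, execute) -> list:
    """Execute actions immediately in autopilot mode; otherwise
    return them as a pending queue awaiting human approval."""
    pending = []
    for action in actions:
        if mode == "autopilot":
            execute(action)
        else:
            pending.append(action)  # held for administrator sign-off
    return pending

executed = []
pending = run_actions(["move vm-1", "suspend vm-2"], "autopilot",
                      executed.append)
print(executed, pending)
```

In autopilot mode every action executes and nothing is left pending; in manual mode the queue is returned untouched, leaving the final call to a human.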
Check out the range of resources and guidance at the Turbonomic website to learn more about how this product is helping VM admins and others to break out of the box, and realize a whole new way of system administration.
This content was brought to you by our partner, Turbonomic.