IBM started the move toward autonomic computing in 2001, when its engineers saw the need for smart systems that could monitor, repair and manage themselves to a high degree. In 2004, IBM Press published the 336-page “Autonomic Computing” book, which described systems that “install, heal, protect themselves, and adapt to your needs – automatically.” The purpose of autonomic computing is to decrease human management and lower the maintenance costs associated with break/fix, patch management, restarting services and problem reporting. Removing human intervention promised to reduce costs, improve service levels and simplify management.
The term autonomic means involuntary or unconscious and refers to the autonomic nervous system, which controls breathing, pupil dilation and contraction, and other neuromuscular reflexes. The theory is that a computer system can keep its normal operations running at peak efficiency because in-memory monitors, scheduled actions and periodic housekeeping tasks take place in the background. One such autonomic system that system administrators have put into practice for decades is the daily backup. Scheduled backups run independently of all other system processes, restart if interrupted and report their results automatically.
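To make that pattern concrete, here is a minimal Python sketch of such a job. The paths, the rsync invocation and the operator address are illustrative assumptions, not a prescribed implementation; the point is the autonomic behavior: run unattended, restart after an interruption and report the outcome automatically.

```python
import smtplib
import subprocess
import time
from email.message import EmailMessage

SOURCE = "/var/data/"        # hypothetical directory to back up
DEST = "/mnt/backup/daily/"  # hypothetical backup target
MAX_RETRIES = 3

def run_backup() -> bool:
    """Run one backup pass; rsync with --partial is resumable, so a
    retry continues where an interrupted run left off."""
    result = subprocess.run(["rsync", "-a", "--partial", SOURCE, DEST])
    return result.returncode == 0

def report(status: str) -> None:
    """Automatic reporting: mail the outcome to an operator mailbox
    (assumes a local SMTP relay; the addresses are hypothetical)."""
    msg = EmailMessage()
    msg["Subject"] = f"Nightly backup: {status}"
    msg["From"] = "backup@localhost"
    msg["To"] = "ops@example.com"
    msg.set_content(f"Backup finished with status: {status}")
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    for attempt in range(MAX_RETRIES):
        if run_backup():
            report("success")
            break
        time.sleep(60)  # back off, then restart the interrupted run
    else:
        report("FAILED after retries")  # escalate to a human only when self-repair fails
```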
The idea of systems that are self-healing, self-managing and self-monitoring is not new. Fiction writer Edward Ellis proposed a steam-powered mechanical man in his 1868 novel, “The Steam Man of the Prairies,” and Karel Čapek coined the term “robot” in “Rossum’s Universal Robots” in 1921. The revival and excitement surrounding autonomic computing in the early part of the 21st century waned a bit with the widespread adoption of virtualization and cloud computing, but interest in self-managing systems is now returning. (To learn more about autonomic systems, see Autonomic Systems and Elevating Humans from Being Middleware: Q&A with Ben Nye, CEO of Turbonomic.)
The Need for Automation
Industries and businesses need to run lean in order to compete in a global marketplace. The days of niche markets and limited competition are gone. Vendors have to respond to this economic pressure by developing automated systems that hide complexity from the end user. The problem is that the easier a technology is for the end user, the more complex it is on the back end. Vendors now have entire teams working on making systems more autonomous and easier to deploy and use.
The short-lived but highly prized (and highly priced) International Journal of Autonomic Computing solicited articles on the following topics:
- Autonomic programming models, tools and environments
- Autonomic resource scheduling and management
- Autonomic middleware and toolkits
- Autonomic monitoring and management
- Autonomic policy and QoS/IT system management
- Autonomic performance evaluation and modeling
- Autonomic architectures and mathematical foundations
- Autonomic application integration issues
- Autonomic grids and services
- Autonomic access control and security issues
- Ontology programming for self-configuration
- Sensor array structure and signal processing
- Intelligent control and pattern recognition
- Autonomic applications, system solutions, case studies
- Convergence of web services, grid and autonomic computing
New technologies and trends responsible for the rekindled interest in autonomic computing include big data analytics, the internet of things (IoT), cloud computing and the commoditization of IT labor. The greatest driver in autonomic computing research and development is the need to save money on labor: if systems can self-optimize, self-manage, self-monitor, self-heal and self-protect, then the need for human intervention decreases significantly, and so do the associated maintenance and management costs.
Read: Turbonomic: Bringing Autonomics to Virtualization
The Automation Dilemma
Business owners want automation because it lowers costs; the flip side is an assumed loss of control. Automation isn’t about removing control, however. It’s about increasing efficiency and eliminating some of the human error introduced into operating system and software maintenance.
Jake Smith, director of Data Center Solutions and Technologies at Intel Corporation, wrote in 2009 for the Intel IT Peer Network, “Autonomic controls are in place today, machine to machine computer architectures are here today, scalable compute engines are here today. Are they perfect? No. Are they effective? Yes.” Autonomic controls are a good solution in that they deliver more insight, more control and greater efficiency than manual controls do.
Smith adds, “When executed properly, Autonomic controls should be able to deliver 20-25 percent performance and efficiency increases with each new generation of Moore’s law. In some cases … these increases have been over 150 percent in virtualization performance, these increases will be a combination of software architecture enhancement and silicon optimization.”
The takeaway from this discussion is that no matter how autonomous a system is, you can’t completely remove the human factor from that system. For example, if a piece of hardware fails, the system will send alerts to a human who must repair or replace the failed component. Through clever programming, operating systems and software can be made to self-heal, but hardware still requires a human touch. (For more on automation, see Why Automation Is the New Reality in Big Data Initiatives.)
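A minimal sketch of that division of labor might look like the watchdog below, which assumes systemd-managed services and smartmontools for disk health; the service names and device paths are hypothetical. Software faults are healed automatically, while hardware faults are only reported, because a person must replace the part.

```python
import subprocess

SERVICES = ["nginx", "postgresql"]  # hypothetical systemd-managed services
DISKS = ["/dev/sda"]                # hypothetical disks monitored via SMART

def service_healthy(name: str) -> bool:
    # `systemctl is-active --quiet` exits 0 only when the unit is running.
    return subprocess.run(["systemctl", "is-active", "--quiet", name]).returncode == 0

def heal_service(name: str) -> None:
    # Software fault: self-heal by restarting the service, no human needed.
    subprocess.run(["systemctl", "restart", name])

def disk_healthy(dev: str) -> bool:
    # `smartctl -H` (smartmontools) exits non-zero when SMART health checks fail.
    return subprocess.run(["smartctl", "-H", dev], capture_output=True).returncode == 0

def alert_human(message: str) -> None:
    # Hardware fault: only a person can swap the component, so escalate it.
    print(f"ALERT: {message}")  # stand-in for paging or e-mailing an operator

if __name__ == "__main__":
    for svc in SERVICES:
        if not service_healthy(svc):
            heal_service(svc)
    for disk in DISKS:
        if not disk_healthy(disk):
            alert_human(f"{disk} is failing SMART checks and needs replacement")
```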
Autonomic infrastructures, greater virtualization and elastic computing increase efficiency and reduce the need for continuous administrator involvement, but they will never eliminate it entirely.
The Autonomous Future
The future of autonomic computing is cloudy at best. Researchers and industry analysts are unsure where it is headed, except to say that more systems will use automatic or autonomic controls. Most agree that creating autonomous systems is an extremely complex undertaking, made more so by the ever-increasing complexity of business computing environments.
The goal of autonomic computing is to add sufficient intelligence to systems to allow them to adapt to changes, dynamically protect themselves, automatically apply patches and fixes, and alert a human when things go really awry. At least one company has created a virtual system administrator suite that acts as a system watchdog and requires very little human interaction. The suite provides a single management console from which a human system administrator can monitor and manage hundreds of systems with a few mouse clicks.
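The classic architecture for this kind of intelligence is the monitor-analyze-plan-execute (MAPE) control loop over a shared knowledge base, described in IBM’s autonomic computing blueprint. The toy Python sketch below shows the shape of one such loop; the random metric, the CPU threshold and the “scale out” action are stand-in assumptions, not any vendor’s implementation.

```python
import random
import time
from typing import Optional

KNOWLEDGE = {"cpu_threshold": 0.85, "history": []}  # shared knowledge base

def monitor() -> float:
    # Stand-in sensor; a real loop would read OS, hypervisor or application metrics.
    return random.random()

def analyze(reading: float) -> bool:
    # Compare the observation against policy held in the knowledge base.
    KNOWLEDGE["history"].append(reading)
    return reading > KNOWLEDGE["cpu_threshold"]

def plan(symptom: bool) -> Optional[str]:
    # Choose a corrective change when a symptom is detected.
    return "scale_out" if symptom else None

def execute(action: Optional[str]) -> None:
    # Effector: apply the change to the managed resource (stubbed here).
    if action == "scale_out":
        print("executing plan: add capacity")

if __name__ == "__main__":
    for _ in range(5):  # one autonomic cycle per tick
        execute(plan(analyze(monitor())))
        time.sleep(1)
```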
The autonomic computing frenzy that IBM started in the early days of this century has evolved into a series of smaller steps toward the reality of fully autonomous infrastructures. The interest in autonomic computing hasn’t declined a great deal, but it has changed from a research novelty into a business necessity. Vendors continue to build autonomy into their systems on an evolutionary scale.
This content was brought to you by our partner, Turbonomic.