6 Things Many CIOs Don't Understand About Data Centers
Data centers are rapidly evolving. CIOs should understand these points in order to effectively manage their data centers.

The world of information technology is changing. But then, the evolution of IT has been quick and transformative from the beginning. No one who runs a data center can afford to be stuck in the past. The innovations of the last few years are making old models for IT management seem like ancient history. It’s not your father’s data center.
Of course, no one knows everything about the IT business. But it might help to address a few things that every chief information officer (CIO) should know – and some may still not understand – about data centers. (To learn more about the role of CIO, see Reality Check: What's the Difference Between a CTO and CIO?)
Total Cost of Ownership
GE’s Harry Handlin suggests that it’s easy to miscalculate the total cost of ownership (TCO) of a data center. TCO equals capital expenditures (CapEx) plus all operating expenditures (OpEx) over time. He believes one of the most common miscalculations involves power consumption. In his article on the Data Center Knowledge website, he offers several arguments and illustrations showing how energy efficiency can yield significant savings under the TCO model.
Ongoing Operations, a solutions provider for credit unions, provides a breakdown of factors to be considered in assessing TCO. These include construction and maintenance costs, equipment, labor and power.
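As a rough illustration of that arithmetic, here is a minimal Python sketch. Every figure and category name below is a hypothetical placeholder rather than a number from Handlin, Ongoing Operations or any other source; the point is simply that TCO is CapEx plus the sum of annual OpEx over the planning horizon.

```python
# Hypothetical TCO sketch: TCO = CapEx + (annual OpEx x years).
# All figures below are illustrative placeholders, not real quotes.

def total_cost_of_ownership(capex, annual_opex, years):
    """Return TCO given one-time CapEx and a dict of annual OpEx items."""
    return capex + sum(annual_opex.values()) * years

capex = 5_000_000  # construction and initial equipment (hypothetical)
annual_opex = {
    "power": 400_000,             # the line energy efficiency moves most
    "labor": 600_000,
    "maintenance": 250_000,
    "equipment_refresh": 300_000,
}

tco = total_cost_of_ownership(capex, annual_opex, years=10)
print(f"10-year TCO: ${tco:,}")  # 10-year TCO: $20,500,000
```

Even in a toy model like this, power is a recurring line that compounds year after year, which is why Handlin argues that energy efficiency can swing the TCO picture so dramatically.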
Jonathan Koomey of Uptime Institute offers a white paper detailing the Institute’s model for determining TCO for data centers. He says that “previous TCO calculation efforts for data centers … have been laudable, but generally have been incomplete and imperfectly documented.”
Whatever the method, it’s important for CIOs to find a way to understand the TCO of the data centers they control. TCO spans the entire life of the data center, from initial construction through decommissioning.
The Needs of the Data Center
Those in charge of data centers should also be aware of the needs that may arise. TechRepublic has identified “10 critical elements of an efficient data center”:
- Environmental controls
- Security
- Accountability
- Policies
- Redundancy
- Monitoring
- Scalability
- Change management
- Organization
- Documentation
It’s a big list, and a lot of issues to manage. These items are all part of the traditional data center. But what about the data center of the future? Bob Fortna helps us out there with his article on the GCN website. The data center of the future will need to:
- Deploy software-defined networking for greater agility.
- Automate processes for better security and control.
- Adhere to open standards for cost-effective innovation.
- Use analytics to address the needs of users.
- Ensure high performance and scalability to accommodate growing needs.
We noted earlier that the data center is changing. Any CIO worth their salt will stay abreast of the latest technology trends.
Break/Fix: An Outdated Model
The break/fix model has been around since time immemorial. Handymen have always been able to pick up an extra buck when things fall apart. The traditional practice was to wait for something to fail before getting help. After all, why pay for someone to just stand around?
This reactive model may have worked in the past. But today’s IT managers know that proactivity, automation and autonomous systems are far more efficient. Robert Peretson gives “17 Reasons Why Providing Break/Fix Support Will Kill Your IT Support Business.” He suggests that a managed service model – “proactive maintenance at a flat fee” – should be standard practice now. It will benefit both the CIO and those who provide technical support for the infrastructure.
The Business Bee website deals with the question “Managed IT Services vs. Break Fix: Which is Best for Your Small Business?” Their answer came in the form of more questions: “Can you afford to put off maintenance of your system and risk a full-fledged IT fiasco? Or are you willing to pay a monthly fee to possibly keep the problem from occurring in the first place?” They suggest that it could be worth letting IT professionals handle technical issues for a monthly fee while you go about your core business.
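To make that trade-off concrete, here is a toy comparison in Python. Every number is invented for illustration: the expected annual cost of break/fix (incident rate times repair and downtime costs) versus a flat managed-service fee.

```python
# Toy break/fix vs. managed-service comparison. Every figure is a
# hypothetical placeholder; substitute your own incident history and quotes.

incidents_per_year = 6               # historical failure rate (hypothetical)
repair_cost_per_incident = 2_500     # emergency call-out fee (hypothetical)
downtime_cost_per_incident = 4_000   # lost productivity per outage (hypothetical)

managed_monthly_fee = 3_000          # flat-fee proactive maintenance quote

break_fix_annual = incidents_per_year * (repair_cost_per_incident
                                         + downtime_cost_per_incident)
managed_annual = managed_monthly_fee * 12

print(f"Break/fix expected annual cost: ${break_fix_annual:,}")  # $39,000
print(f"Managed service annual cost:    ${managed_annual:,}")    # $36,000
```

With these made-up numbers the flat fee comes out ahead, but the deeper point is that the break/fix line is a guess that grows with every surprise outage, while the managed-service line is predictable.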
Convergence, Hyperconvergence and Superconvergence
It’s becoming ever clearer that the equipment footprint of the data center is shrinking. I’ve been writing about this trend here at Techopedia for several months. New products are now combining compute, storage and network all in one box. Add to that virtualization and cloud computing, along with easy management through a single pane of glass, and it becomes obvious that the very physical dimensions of the data center are not what they used to be. Who knows how many virtual devices might be hiding in the depths of a single piece of equipment?
That raises the question of how to transition from legacy equipment to this newly converged infrastructure. Every piece of equipment will eventually reach the end of its life cycle. Rather than replacing it one-for-one, managers should consider how new virtual machines, software-defined networking (SDN) or network functions virtualization (NFV) might take over the functions of the original device, as the sketch below illustrates. (To learn more about the different levels of convergence, see What is the difference between convergence, hyperconvergence and superconvergence in cloud computing?)
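One way to start that planning is a simple end-of-life inventory pass. The sketch below is purely hypothetical, not any vendor’s API: it walks a device list and flags anything past end-of-life along with a candidate virtual successor for its function.

```python
# Hypothetical end-of-life planning pass: map each legacy device's
# function to a candidate virtualized replacement (VM, SDN or NFV).

from datetime import date

# Illustrative mapping from legacy function to a virtual successor.
VIRTUAL_SUCCESSOR = {
    "firewall": "NFV virtual firewall",
    "load_balancer": "NFV virtual load balancer",
    "switch": "SDN virtual switch",
    "app_server": "virtual machine",
}

inventory = [  # (device name, function, end-of-life date); all hypothetical
    ("fw-edge-01", "firewall", date(2024, 6, 30)),
    ("lb-core-02", "load_balancer", date(2027, 1, 15)),
    ("sw-rack-07", "switch", date(2023, 12, 1)),
]

for name, function, eol in inventory:
    if eol <= date.today():
        print(f"{name}: past end of life; consider a {VIRTUAL_SUCCESSOR[function]}")
    else:
        print(f"{name}: supported until {eol}")
```

The value of even a crude pass like this is that refresh decisions become migration decisions: each retirement is an opportunity to fold one more function into the converged stack rather than buying another box.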
The Complexities of Old and New Technologies
Technology may be getting easier for the user these days, but that does not make it less complex. More mature technologies are becoming embedded within the unified environment. All the switching, routing and computing technologies that required dedicated experts in times past are now in the hands of a new generation of engineers and technicians. What happens when some of those technologies require expert support?
Newer technologies, such as virtualization and the cloud, have challenges of their own. It may be true that we still have not worked out all the bugs, and not everyone is up to speed. In time, automation will lead to autonomous systems. The nature of the data center and the skills required to support it will continue to evolve. CIOs need to be ready for anything.
Data Center Management: Software and Best Practices
Now let’s suppose that the evolving data center is in full swing. Everything is running, with all the bells and whistles. There still remains one of the most important aspects of operating a data center: management. The traditional data center used SNMP-based managed objects that lit up like a Christmas tree on the big screens of the network operations center. Those systems are still in use, but data center management is evolving too.
Along with device management and the FCAPS model (fault, configuration, accounting, performance and security management), now let’s bring in analytics, artificial intelligence and other cutting-edge technologies. Superconvergence makes it easier to see everything in a single pane of glass. CIOs need to keep up with advances in IT management as well as other areas of the data center.
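To ground that in something concrete, here is a bare-bones polling loop of the kind those big screens are built on. The `poll_metric` function is a hypothetical stand-in for a real SNMP or telemetry client; this is a sketch of the pattern, not any particular vendor’s API.

```python
# Minimal monitoring-loop sketch. poll_metric() is a hypothetical stand-in
# for whatever SNMP or telemetry client your management platform provides.

import random
import time

def poll_metric(device: str, metric: str) -> float:
    """Hypothetical poller; replace with a real SNMP/telemetry call."""
    return random.uniform(20.0, 95.0)  # simulated utilization percentage

THRESHOLD = 85.0  # alert threshold, chosen arbitrarily for this sketch

def check_devices(devices):
    for device in devices:
        value = poll_metric(device, "cpu_utilization")
        status = "ALERT" if value > THRESHOLD else "ok"
        print(f"{device}: cpu={value:.1f}% [{status}]")

if __name__ == "__main__":
    for _ in range(3):  # a few polling cycles for illustration
        check_devices(["core-sw-01", "fw-edge-01", "hci-node-03"])
        time.sleep(1)   # production systems poll on longer intervals
```

Analytics and AI enter the picture one layer up: instead of a fixed threshold, the system learns each device’s normal behavior and flags deviations, which is exactly the kind of advance CIOs should be tracking.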
Conclusion
The work of an IT manager is never done. CIOs should be proactive, manage change and provide good oversight. We’re not yet at the point where we can turn on the machines and put them on autopilot. Manufacturers continue to make machines more intelligent and resilient, but they still need us. Thank goodness.
Written by David Scott Brown | Contributor

David Scott Brown has more than 15 years’ experience as a freelance network engineer. He has worked in both fixed-line and wireless environments across a wide variety of technologies in Europe and America. David is an avid reader and an experienced writer.