If every year gets its own buzzword, then 2016 deserves the title, “The Year of the Software-Defined Data Center (SDDC).” In fact, if you Googled it right now, you would find more than a few articles and blog posts dating back to last fall with that very headline.
No one has formally defined the SDDC, of course, so people are free to declare whatever they wish. But the tech industry has gone through the peaks and troughs of hype and disillusionment too many times to count, so it’s probably a good idea to take a hard look at the SDDC to determine what is real right now, what is still under development and what is imaginary.
The SDDC itself is most certainly real at this point. Now that software-defined networking (SDN) and network functions virtualization (NFV) have severed the last link between virtual architectures and physical infrastructure, all the pieces are in place to start hosting end-to-end data environments completely in software. But according to market research firm 451 Research, while more than two-thirds of large organizations say they will increase spending on software-defined infrastructure this year, those who are not ready to pull the trigger just yet cite the immaturity of SDI products and a lack of staff expertise as major inhibitors. This suggests that while the SDDC has moved from mere concept to functioning platform, many practical hurdles remain on the road to full-scale production deployment. And even if these were worked out tomorrow, few in the enterprise industry would know how to work in this new environment. (To learn more about software-defined technology, check out 10 Tech Acronyms You Must Know.)
Rush to Market
This is what development is all about, of course, and enterprise vendors of all sorts are rushing SDDC products to market at a rapid clip. Leading names like IBM and VMware, which recently launched a partnership to support the vSphere, NSX and Virtual SAN platforms in the IBM Cloud, know that scalable, virtualized resources are growing more prevalent every day, so it pays to have an SDDC solution at the ready now to catch the first wave of commercial deployments as it emerges. There is also the fact that the SDDC will drive demand for scalable cloud resources – particularly from startups that may opt to host their entire data operation in the cloud – and this would be a tremendous boon as providers ranging from Microsoft and Oracle to Google and Amazon vie for enterprise workloads. (Disclosure: I provide web content services for IBM.)
The value proposition of such a data environment is hard to overstate as well. As Pluribus Networks’ Mark Harris noted on RCR Wireless recently, the real prize here is not the technology itself, but the ability to quickly align data resources with changing business requirements. This was one of the original promises of virtualization, but it quickly turned out that consolidation in the server farm only led to demand for more storage and networking. With a full SDDC stack, however, entire compute architectures can be provisioned, scaled up and down, migrated across clouds, integrated with other environments and ultimately decommissioned, all through a simple user interface. For a generation of IT techs who while away their time patching networks, manually load-balancing servers and provisioning new storage, this is like stepping off a horse-drawn carriage and climbing aboard a jet airplane. (To learn about some hindrances to virtualization, see 5 Things That Can Bog Down Virtual Infrastructure.)
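To make that lifecycle concrete, here is a minimal, purely illustrative Python sketch of what driving an entire environment through a software-defined control plane might look like. The EnvironmentSpec class, the SDDCClient and its methods are hypothetical placeholders, not any vendor’s actual API; the point is simply that provision, scale, migrate and decommission become calls into software rather than manual projects.

```python
from dataclasses import dataclass, field


@dataclass
class EnvironmentSpec:
    """Declarative description of a complete compute environment (illustrative)."""
    name: str
    vcpus: int
    memory_gb: int
    storage_tb: int
    networks: list = field(default_factory=lambda: ["app-tier", "db-tier"])


class SDDCClient:
    """Hypothetical control-plane client; real SDDC platforms expose
    comparable lifecycle operations through their own APIs and portals."""

    def provision(self, spec: EnvironmentSpec) -> str:
        print(f"Provisioning {spec.name}: {spec.vcpus} vCPUs, "
              f"{spec.memory_gb} GB RAM, {spec.storage_tb} TB storage")
        return spec.name  # handle to the running environment

    def scale(self, env: str, vcpus: int) -> None:
        print(f"Scaling {env} to {vcpus} vCPUs")

    def migrate(self, env: str, target_cloud: str) -> None:
        print(f"Migrating {env} to {target_cloud}")

    def decommission(self, env: str) -> None:
        print(f"Decommissioning {env}")


if __name__ == "__main__":
    client = SDDCClient()
    env = client.provision(EnvironmentSpec("analytics", vcpus=64,
                                           memory_gb=512, storage_tb=20))
    client.scale(env, vcpus=128)          # scale up for peak demand
    client.migrate(env, "partner-cloud")  # move across clouds
    client.decommission(env)              # tear down when finished
```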
Still, don’t expect to walk into the office one day and find yourself surrounded by a digital data center. Forrester’s Robert Stroud says that while defining infrastructure as code should be a key component of the enterprise agenda, it’s important to remember that the SDDC is an evolving architecture – even an “operational philosophy” – not a product to be purchased and deployed. There are fundamental architectures to construct first, then a multi-layered virtual stack and finally an end-to-end orchestration and automation layer, and all of this will have to be developed, deployed, integrated and then scaled into production environments.
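Stroud’s “infrastructure as code” point is easier to see with a small example. The sketch below assumes a hypothetical desired-state document and a toy reconcile function: the stack is described as data, and an orchestration layer is responsible for making reality match that description. None of the field names correspond to a real product’s schema.

```python
# Hypothetical desired-state document: the environment is written down as data,
# and an orchestration layer works out how to make reality match it.
desired_state = {
    "compute": {"clusters": 2, "hosts_per_cluster": 8},
    "storage": {"policy": "replicated", "capacity_tb": 100},
    "network": {"segments": ["web", "app", "db"], "firewall": "zero-trust"},
}


def reconcile(current: dict, desired: dict) -> list:
    """Return the changes needed to move the current state toward the desired
    one. A real orchestrator would translate each change into API calls
    against the compute, storage and network controllers."""
    changes = []
    for layer, want in desired.items():
        have = current.get(layer, {})
        for key, value in want.items():
            if have.get(key) != value:
                changes.append((layer, key, value))
    return changes


if __name__ == "__main__":
    current_state = {"compute": {"clusters": 1, "hosts_per_cluster": 8}}
    for layer, key, value in reconcile(current_state, desired_state):
        print(f"plan: set {layer}.{key} -> {value}")
```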
The key stumbling block remains the automation and orchestration layer. Enterprise Feature’s Caitlin Winter noted last fall that while a single-vendor compute/storage/networking environment can support a fair amount of automation, problems arise when you try to link these automated domains using APIs and master policy management tools on the application layer. And the complexity only compounds when you introduce a multi-vendor environment to the mix. VMware is probably further ahead than most in this area, primarily due to its desire to deploy vCloud Air as an extension of the data center, but the industry as a whole will need to develop a wealth of new standards before the SDDC can scale seamlessly across distributed cloud environments.
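Winter’s point about stitching automated domains together can be sketched in code as well. The adapter classes and policy below are invented for illustration only; the practical difficulty is that each vendor’s API models these concepts differently, which is exactly where the missing standards bite.

```python
from abc import ABC, abstractmethod


class DomainAdapter(ABC):
    """Common interface a master policy tool would need each automated
    domain (compute, storage, network) to expose. Hypothetical."""

    @abstractmethod
    def apply_policy(self, policy: dict) -> None: ...


class ComputeDomain(DomainAdapter):
    def apply_policy(self, policy: dict) -> None:
        print(f"compute: enforcing {policy['name']} via vendor A's API")


class NetworkDomain(DomainAdapter):
    def apply_policy(self, policy: dict) -> None:
        print(f"network: enforcing {policy['name']} via vendor B's API")


def push_policy(domains: list, policy: dict) -> None:
    # The orchestration layer's job: translate one business-level policy
    # into consistent calls across every automated domain.
    for domain in domains:
        domain.apply_policy(policy)


if __name__ == "__main__":
    policy = {"name": "gold-tier", "latency_ms": 5, "replicas": 3}
    push_policy([ComputeDomain(), NetworkDomain()], policy)
```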
A Pleasant Dream
But even if many of these issues get worked out over the coming year, it is highly unlikely that the enterprise will have access to an intelligent, autonomous, overarching data environment in which optimal service is delivered with a few mouse clicks. That may sound like a dream, but in reality it will likely bump up against all manner of regulatory, compliance and residency requirements, not to mention the fear of opening data to a wealth of new attack vectors.
So at the moment, the SDDC has just barely emerged from test environments. By the end of this year, we should start to see initial production deployments at limited scale, and then broader rollouts throughout the rest of the decade as organizations seek to capitalize on big data and the internet of things.
No matter how it evolves, however, the SDDC should be robust enough to tackle virtually any workload the enterprise can throw at it, putting a high-performance computer in the hands of every knowledge worker.