If you’ve heard anything about virtualization, the practice of abstracting workloads from the bare metal and provisioning them with virtual resources, you’ve probably heard about containers. If you’ve read enough to figure out how IT containers differ from, say, shipping containers, you know a little bit about their structure and makeup.
A container is a virtualization resource that shares the host operating system’s kernel with other containers, rather than running a full operating system of its own. It generally requires less overhead to set up and run than a virtual machine, and it has some other key benefits as well. Platforms such as Docker and Kubernetes are allowing companies to build and scale applications in new and exciting ways.
Why are containers so popular, and how do they contribute to efficiency and enhanced operations? Here are some insights from pioneers of containerization who have put this philosophy to work in their companies and organizations. (Read also: How Containers Help Enterprise Applications.)
Encapsulation, Microservices and Artifacts
One of the points you hear most often from engineers who are enthusiastic about containers is that a container can house a full codebase, along with all of its dependencies, ready to be deployed.
Using a static file called a container image, engineers can bundle system libraries and other resources with all or part of an application. This in turn drives the creation and delivery of microservices, in which different containers host different functions that can be composed into an agile ecosystem.
“We believe the container, or rather, the container image, is the new software delivery artifact,” says Chris Ciborowski, CEO of NebulaWorks, who has been working with containers since their early days in the 2000s. “What do I mean by that, and why?
“A delivery artifact is the executable version of a developer’s application that is ready to be deployed. In the past, this included just the executable code itself, which left resolving runtime dependencies to operations. By leveraging the container image, developers can include all of their dependencies, greatly reducing the chance of runtime failure due to human error during application deployment.”
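To make that concrete, here is a minimal sketch of what such a delivery artifact might look like as a container image definition. It assumes a hypothetical Python web service with its dependencies listed in a requirements.txt file; the file names, base image, and commands are illustrative, not details from NebulaWorks or any other company quoted here.

```dockerfile
# Illustrative Dockerfile: the image built from this file is the delivery
# artifact, carrying the application code AND its runtime dependencies.
FROM python:3.11-slim

WORKDIR /app

# Bake the dependency list into the image, so operations never has to
# resolve runtime dependencies on the target host.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy in the application code itself.
COPY . .

# The command the container runs when it starts.
CMD ["python", "app.py"]
```

Building this with a command like docker build -t myapp:1.0 . produces an image that can be handed to operations as-is, which is the portability the rest of this article keeps coming back to.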
“Containers, which allow organizations to easily migrate both applications and their dependencies between machines, make a lot of sense for organizations that do in-house software development,” says Peter Tsai, a senior technology analyst at Spiceworks, pointing out that containers are still a relatively new technology.
“Third-party solutions for containers aren’t as robust as they are in the virtualization environment. According to Spiceworks data, in 2018 only 19 percent of organizations were using containers, although that number was expected to grow to 35 percent by 2020.”
Scott Buchanan, VP of Marketing at Heptio, explains this in the form of a helpful logistics analogy.
“Think about moving,” Buchanan says. “You’re going to need a lot of cardboard boxes. So, you deploy a bunch of them throughout your home, and then you fill them with all the stuff that matters to you: applications. Instead of taping them shut and losing access to your possessions, they stay open so you can re-organize your stuff between boxes as needed. And, when you need to move those boxes, it’s a lot simpler than putting your house on wheels. Those cardboard boxes are containers, and they offer you the portability to move your stuff between locations, including public and private clouds.”
The DevOps Philosophy
Containers are also helping companies pursue DevOps, something of a holy grail in enterprise technology: the idea of bridging the development and operations departments so that teams collaborate better, which strengthens the delivery pipeline and creates a more agile release process. (Read also: DevOps Managers Explain What They Do.)
“Not only do Devs gain a benefit – so do operations,” Ciborowski explains, describing some of this DevOps functionality. “Since the container image is portable, operations teams can run the container image on ANY host that has a compliant container runtime – like Docker – and, as adoption grows, leverage orchestration tools like Kubernetes for nearly ANY application stack, across ANY infrastructure type – for example, on-premises and cloud.”
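As a sketch of what that looks like in practice, the Kubernetes manifest below runs a hypothetical image on whatever infrastructure the cluster sits on, on-premises or in a public cloud. The names and registry (myapp, registry.example.com) are placeholders, not configuration from any company quoted here.

```yaml
# Illustrative Kubernetes Deployment: the same portable container image
# runs unchanged wherever a compliant cluster is available.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                      # run three identical copies of the container
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0   # the delivery artifact from earlier
          ports:
            - containerPort: 8080
```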
CEO Ali Golshan of StackRox further explains some of the DevOps philosophy inherent in container design, describing how containerization can help to enhance a pipeline.
“Containerization enables organizations to release applications and introduce new functionality for customers much faster,” Golshan says. “Because containers isolate code into smaller units, developers can work more independently to improve functionality. Container technology also reduces the testing burden, which speeds introduction of software, because developers can test just the new code, confident that they haven’t ‘broken’ another part of the application.”
ConDati’s Dan Bartow describes how Kubernetes containerization helped his company to evolve.
“Before Kubernetes, we had to manually shell into each environment and manually do upgrades by pulling new containers, stopping old ones, starting new ones, and repeat that manually for every customer,” Bartow says.
“Kubernetes turned hours and hours of work on release days into just a few minutes. With a couple of clicks, we can do a rolling restart upgrade of every container on any or all environments. This happens seamlessly.”
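The kind of rolling upgrade Bartow describes corresponds to a handful of standard kubectl commands. The deployment and image names below are hypothetical placeholders, and this is a generic sketch rather than ConDati’s actual workflow.

```bash
# Point the deployment at the new image; Kubernetes replaces containers
# gradually, starting new ones before stopping the old ones.
kubectl set image deployment/myapp myapp=registry.example.com/myapp:1.1

# Watch the rolling upgrade complete across the environment.
kubectl rollout status deployment/myapp

# If something goes wrong, roll back to the previous version.
kubectl rollout undo deployment/myapp
```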
When these kinds of capabilities help developers work more closely with operations teams and break down barriers between departments, they enable a stronger DevOps model and make the firm more competitive in its industry.
Security
In addition to everything that containers promise in terms of functionality, they also have some important security benefits. Golshan has a lot to say about how a “thin attack surface” in container deployment reduces risk.
“The attack surface with containers gets both simplified and complicated,” Golshan says. “On the one hand, each ‘chunk’ of code is smaller, reducing the attack surface. Plus, containers come with a lot of declarative information about how they should be configured, labeled, and used, which can improve security.”
And, he adds, that’s not all.
“On the other hand, containers introduce new attack surfaces in two ways. Ephemerality is one element. Because containers routinely come and go, it’s OK to take drastic security measures such as killing a container if it acts ‘incorrectly.’ But that ephemerality also means attackers can cover their tracks more easily and thwart forensics by launching an attack, pulling data, and then killing the container when they’re done.
“The second element of broader attack surface comes with other elements of the ecosystem – most notably the orchestrator. Orchestrators provide organizations with a way to scale the creation, deployment, and management of containers, but the industry has seen multiple attacks and vulnerabilities tied to the orchestrator. Tesla saw its Kubernetes infrastructure compromised in a way that allowed attackers to mine cryptocurrency, and a report detailed how an attacker could have compromised Shopify’s Kubernetes clusters.”
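The declarative information Golshan mentions lives in the same manifests that deploy the containers. As an illustration, here is a minimal pod specification that states its security expectations up front; the names and image are hypothetical placeholders.

```yaml
# Illustrative pod spec fragment: security intent is declared alongside the workload.
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    app: myapp                           # labels document how the container should be used
spec:
  containers:
    - name: myapp
      image: registry.example.com/myapp:1.0
      securityContext:
        runAsNonRoot: true               # refuse to start if the image would run as root
        allowPrivilegeEscalation: false  # block setuid-style privilege gains
        readOnlyRootFilesystem: true     # keep the container's filesystem immutable at runtime
```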
In Bartow’s case, a third-party security audit confirmed that the smaller attack surface of containers is a plus for ConDati.
“We’ve just completed a third party penetration test … the first we’ve done, and they told us verbatim that we have a ‘small attack surface,’” Bartow says. “Kubernetes is a huge part of why that is true.”
All of this points toward big potential for containers in tomorrow’s business IT world. Consider how these core benefits could apply to any cutting-edge business model.