Technology pundits have been predicting the end of IT infrastructure for some time, at least in terms of something the enterprise has to worry about. But the rise of serverless computing has pushed the conversation to a whole new level. (For the basics on serverless, check out Serverless Computing 101.)
The question is certainly valid. Why would anyone go through the time, trouble and expense of building their own compute infrastructure when they can simply lease the abstract resources they need, only for the duration they need them?
But as with any technology, serverless has its good points and its bad points, which means it provides optimal support for some applications, middling support for others and weak support for others still.
First, the good points. According to Israeli entrepreneurial firm YL Ventures, serverless computing is the next phase of infrastructure as a service, in which runtimes and operational management functions become the focus of virtualization. This is why it is sometimes called function as a service, since it allows users to execute a given task without worrying about provisioning servers, virtual machines or any other underlying compute resources. The key advantages are improved agility and scalability, as well as more accurate cost/consumption models and even improved security, particularly against DDoS attacks. (For a new method of fighting DDoS attacks, see Will Blockchain Technology Make DDoS Attacks Obsolete?)
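To make the function-as-a-service idea concrete, here is a minimal sketch of an event-driven handler in the style of AWS Lambda's Python runtime. The event fields and handler name are illustrative assumptions; the point is that the developer writes only this function, while the platform provisions, scales and patches the servers that run it.

```python
# Minimal function-as-a-service handler sketch (AWS Lambda-style Python).
# The platform invokes this once per event and bills only for execution
# time; no servers or virtual machines are provisioned by the developer.

def handler(event, context):
    """Respond to a single incoming event (the "task" the user executes)."""
    name = event.get("name", "world")  # "name" is an illustrative event field
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

When no events arrive, nothing runs and nothing is billed, which is the basis of the more accurate cost/consumption models mentioned above.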
For these reasons, the firm says, serverless has the potential to revolutionize the way complex software is developed, deployed and managed, which in turn will alter the way the enterprise creates and supports key applications for an increasingly service-driven economy. Emerging initiatives like DevOps and the internet of things, in fact, will likely get a significant boost in terms of both functionality and cost-savings through serverless computing.
One of the leading champions of serverless is Netflix. With more than 100 million subscribers streaming data-heavy video content, the company recently completed the migration of its content delivery platform to the cloud. It is now using the AWS Lambda service for media files, backup, instance deployments and to support monitoring software. Sure, the company could house all of this on internal infrastructure, but the capital costs alone would be astronomical, not to mention the army of technicians needed to maintain anything close to operational efficiency.
Donna Malayeri, program manager for Microsoft’s Azure Functions, also notes that the latest iterations of serverless technology remove a number of key obstacles that had hampered adoption at the outset. These include more robust support for debugging and monitoring, as well as support for local virtual machines that allows enterprises to embrace on-premises development experiences, a must-have for companies building private and hybrid clouds. With serverless, all the enterprise needs to worry about is its code and how it is triggered; the underlying platform takes care of all the rest.
Still, says Tech Republic’s Matt Asay, not all of the drawbacks of serverless computing have been resolved. For one thing, the technology makes it easier than ever to create code, host it on a serverless resource and then forget about it. This, in turn, leads to unnecessary resource consumption and expanded attack vectors that can be exploited to insert malicious code into the enterprise data environment. At the same time, serverless has the potential to increase dependency on a single provider, as it becomes easier to launch new code on the same platform that supports existing code. In both of these cases, however, it is important to note that the problems do not reside in the serverless solution itself, but in the way the enterprise chooses to manage it.
Beyond the positive and negative aspects of serverless computing, there are still many unknowns as to exactly how it will integrate into the overall data ecosystem. According to game developer Michael Churchman, the use cases for serverless are still largely undefined and seem mainly confined to high-volume backend processes and real-time data streaming. These are important functions, but they represent only a tiny portion of the full enterprise workload.
Another big question is whether serverless should integrate with or replace legacy infrastructure. The temptation will be to utilize the resources that cost the least and provide the highest level of performance. But determining that on a case-by-case basis can be difficult, particularly when the services being supported start interacting with each other in novel and unpredictable ways.
As a third-party solution, serverless also faces the same application and service performance challenges as any cloud offering. An SLA is fine for spelling out the remedies for lost or diminished service, but it cannot guarantee uptime. When deciding whether or not to go serverless for any given application, make sure to carefully assess the real-world consequences of downtime.
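One practical way to soften those downtime consequences is to wrap calls to a serverless endpoint in client-side retries with a degraded fallback, such as serving cached data. A minimal sketch, assuming a generic remote call rather than any specific provider's API (the function names and retry parameters are illustrative):

```python
import time

def call_with_fallback(fn, retries=3, base_delay=0.5, fallback=None):
    """Retry a flaky remote call with exponential backoff, then fall back
    to a degraded response (e.g. cached data) rather than failing outright."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:  # retries exhausted
                if fallback is not None:
                    return fallback()
                raise
            # back off before the next attempt: base_delay, 2x, 4x, ...
            time.sleep(base_delay * (2 ** attempt))
```

The same assessment the SLA cannot make for you, what a user should see when the service is down, is encoded here as the `fallback` argument.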
The relationship between serverless computing and other emerging technologies, namely containers, is also largely unknown. Many people feel that serverless represents the end of containers before they even make substantial headway into the enterprise data environment. Churchman argues, however, that serverless and containers actually complement each other, with serverless resources acting as an external service that does not necessarily need to be closely integrated into the application’s main container ecosystem.
As with any emerging technology, the enterprise should embrace serverless with a degree of caution and a clear idea of what it hopes to gain from this new environment. Only through careful and well-planned adoption will organizations be able to reduce the risk of entrusting key functions to a still-developing third-party data solution while at the same time enhancing the rewards of a new, more agile operating environment.