What is Serverless Computing?
In the rapidly changing world of IT, "serverless computing" has become an important and integral term. Some think of serverless computing as just another flavor of cloud services, but it's really broader than that. Serverless computing is a good moniker for much of what has driven enterprise IT into the future – the idea that instead of running vital business applications on in-house servers, companies can simply order up functionality, in this case computing functionality, as a service. This places serverless computing squarely in the "as a service" family, alongside the software-as-a-service (SaaS) models that have revolutionized enterprise vendor options. So serverless computing is cloud and kin to SaaS, but it's also more: For example, the moves toward network virtualization and the use of containers to package and isolate applications have a lot to do with serverless computing as well.
One good definition of serverless computing is as follows: Serverless computing is a scenario where the buyer "provides only application logic" and is not responsible for or even privy to infrastructure issues. At its most basic level, serverless computing is another tendril of the rapidly emerging model of "on-demand services" – companies don't have to worry about storing and maintaining server farms, keeping servers cool, or provisioning them in key ways. They just order functionality from afar, and use it to seamlessly integrate application functions.
In this sense, serverless computing is really a bellwether of our world and the way that enterprise IT has evolved. It's quintessential outsourcing, and the agile design of elastic and scalable systems that businesses can use to compete. When you have dynamic needs, serverless computing can provide dynamic responses. It's a booming field, and one that's getting a lot of attention in tech press.
To really get a sense of what serverless computing is and what it represents, it's important to note that in reality, serverless computing is more than just "not having a server" – many different flavors of technology that replace bare-metal machines with virtualized systems can get rid of the company's responsibility for housing servers. The difference is that, with the most popular serverless computing services, you don't "rent a virtual server" – instead, you rent each little instance in which a server would run code. That's quite a different model and one that merits a lot of research and brainstorming before enterprise adoption.
The Context of Serverless Computing
Again, serverless computing is a broad term. But part of why it's broad relates to the broader context in which these services exist.
As Moore's law kept making hardware smaller, more portable and more agile, smart engineers found ways to capitalize on that functionality by untethering hardware from wires and cables.
The internet was famously started as a military project, and most of us are familiar with that, but fewer users have thought about the process by which cloud and virtual services emerged. Vendors got smart about using that global web to deliver data. It started with protocols like FTP and secure virtual private network tunnels, but soon, the IT community found out how easy it is to deliver all sorts of services right through the "net."
Because of this abundance of new digital possibilities, serverless computing exists as part of a wide menu of options. If you can simply order a whole server's performance over the web, why would you instead order tiny individual function executions and their runtimes? The answer lies with the specific kinds of use cases to which serverless computing is applied.
One excellent example is covered well in a piece at The New Stack, which shows one of the most interesting uses for triggering code through events. In general, image processing is both a mainstay of modern ML/AI work, and one of the task areas to which serverless computing is most commonly applied.
In the specific example talked about by writer Mark Boyd, a drone goes up and takes aerial pictures, and a serverless computing model triggers responses. When you think about the vast capacity to mine and deliver actionable data through aerial photos, you start to see how serverless-driven notifications and updates can be applied to nearly any field, from water and sewer administration to public safety to real estate.
Image processing is just one of many uses for serverless computing, but it's one with a whole host of specific applications and benefits to end users. It also illustrates what serverless computing is "good for" and why, as one Twitter poster put it (cited in the above New Stack article), it's valuable to enterprises that with serverless computing you can "buy the hamburger instead of buying the cow."
A 'Pay As You Go' Model
Another key way to define serverless computing is that it is typically provided in a model that is different from, say, a standard network virtualization service. In many VM scenarios, customers pay for computing per unit, per virtualized server or workstation. In serverless computing, it's usually more of a "per-function" or "pay as you go" system – like so many other pay-as-you-go services, there's a meter on runtimes and app functions to build a cost model.
One prominent example is AWS Lambda, a serverless computing service from one of the dominant giants of today's tech world, and arguably, the single company (aside from perhaps Microsoft) to move most aggressively into the realm of on-demand services. Amazon is, of course, a household name in both consumer retail and the tech industry, and so its serverless computing offering is sometimes thought to be relatively synonymous with the general world of serverless computing vendor provisions.
So, in AWS Lambda, you can see the pricing models that are so much a part of how serverless computing works. With Lambda, users pay per individual function request and for the amount of time it takes to execute a piece of code. This isn't provisioning a virtual machine with a certain amount of CPU and memory and paying that way, as customers do with other types of similar services. It's paying only for what you need, for systems that may not be working full time to drive business. Maybe the activity that the physical server would have supported is only a trickle of user events such as new user registrations or notifications. Maybe that small stream of events just needs a modest amount of virtual outsourced support. It's these kinds of projects that really thrive on a serverless computing model. It's instructive to note that, for example, Amazon offers a "free tier" for Lambda, giving away up to a million requests per month and 400,000 GB-seconds of compute time – for free!
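To make the metering idea concrete, here's a rough sketch of how such a bill might be estimated. The rates and the free-tier thresholds below are illustrative placeholders (real Lambda pricing varies by region and changes over time), but the structure – per-request charges plus compute time billed in GB-seconds – follows the model described above.

```python
def monthly_function_cost(requests, avg_duration_ms, memory_mb,
                          price_per_million_requests=0.20,   # illustrative rate
                          price_per_gb_second=0.0000166667,  # illustrative rate
                          free_requests=1_000_000,
                          free_gb_seconds=400_000):
    """Estimate one month's bill under a pay-per-use, Lambda-style model."""
    # Compute time is metered in GB-seconds: run time x allocated memory.
    gb_seconds = requests * (avg_duration_ms / 1000) * (memory_mb / 1024)
    # Only usage above the free tier is billable.
    billable_requests = max(0, requests - free_requests)
    billable_gb_seconds = max(0, gb_seconds - free_gb_seconds)
    return ((billable_requests / 1_000_000) * price_per_million_requests
            + billable_gb_seconds * price_per_gb_second)

# A trickle of traffic fits entirely inside the free tier:
print(monthly_function_cost(500_000, 100, 128))  # → 0.0
```

Note how a workload that would justify a whole rented server (millions of requests) still only pays for the slice of compute it actually consumes – that is the "hamburger, not the cow" economics in numbers.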
Much has been made of this free offer, for a reason – in many cases, it can be an optimized way to provide for given amounts of computing resources related to a range of business projects. It's a much different model than a lot of the cloud and virtualization deals that offer a "service" with uptime and other 24/7 provisions. In the case of serverless computing, the vendor is just running a piece of code in a given programming language, e.g., Java or Python.
What Can You Do with Serverless Computing?
Another way to look at the definition and value of serverless computing is to think about what's possible using this type of service. And one specific way to do that is to look at how it works within a given vendor's framework.
In Amazon's semi-walled garden, where you can get platform outsourcing with AWS, there's a service called Amazon S3, a type of object storage. Data and digital items get put into S3 "buckets" and they stay there for evaluation and retrieval. There are all sorts of tools aimed at optimizing this method of storage and working with data flows into and out of S3.
One thing that customers can do is use Lambda to work with S3 events. Suppose something happens in a bucket. A customer's developers can use Lambda to automatically act on those events (see this list of Lambda use cases for a description of how this might work).
This leads to a bigger conversation about how serverless computing works. Typically, you will have "triggers" that drive function requests. These can be based on events inside an app, or inside of a service like S3. They can also be done over HTTPS, where people talk about using "webhooks" as triggers. Or, Lambda or other serverless computing services can work off of scheduled events.
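As a concrete sketch, here is what a minimal Lambda-style handler for an S3 "object created" trigger might look like. The nested Records/s3 layout mirrors the documented S3 event notification format, but the bucket name, object key, and the "processing" step are all hypothetical stand-ins.

```python
# A minimal sketch of a function handler for an S3 "object created" trigger.
# The Records/s3 event layout mirrors the S3 notification format; the
# processing step is a placeholder for real work (resizing an image, etc.).
def handler(event, context=None):
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # A real function would fetch the object here (e.g., via an SDK)
        # and act on it; this sketch just notes which object fired the trigger.
        processed.append(f"s3://{bucket}/{key}")
    return {"statusCode": 200, "processed": processed}

# Simulating the trigger locally with a hand-built event:
fake_event = {"Records": [
    {"s3": {"bucket": {"name": "drone-photos"},
            "object": {"key": "flight-042/shot-001.jpg"}}}
]}
print(handler(fake_event))
```

The key point is what's absent: there is no server setup, no polling loop, no process management – the vendor invokes the handler when the event fires, and the customer pays only for that invocation.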
In any case, the triggers drive the process of requesting an execution from the serverless computing service. Thinking of it this way, it's easier to imagine the overall role of serverless computing. The serverless computing vendor is making available what many call "function as a service" – a service that will run a piece of code on demand and return a result, without the customer ever managing the underlying servers.
Those who are relatively unfamiliar with serverless computing will get a lot of orienting examples and other helpful details by following how companies are using Lambda – but that's not the only service available. A wider discussion should center around what companies are choosing to use, and to what ends, in any given industry.
Webhooks and the Future of the Net
To look more closely at certain popular functions of serverless computing, take "webhooks," which are commonly described as a kind of "callback" for the web: a URL registered with a service, which that service calls with an HTTP request when a given event occurs – again, a trigger leading to a response. To put it another way, the webhook functions on a somewhat "biological" model, with stimulus leading to response.
Webhooks make things happen on the internet. They are, again, "triggered" by user events or other real-time events. The response may be delayed, and, depending on the webhook, may take many different forms. Some webhooks will trigger bug-fixing programs or other maintenance actions in response to an event. Others will link one event to another – for instance, when someone posts a comment in a forum, the webhook may trigger a notification to some other user or set of users somewhere else on the global web, showing that the comment was posted. The purposes of webhooks are quite diverse, but the idea is that a webhook "makes something happen when something happens."
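To make the stimulus-response idea concrete, here is a small sketch of the logic a forum's comment webhook receiver might run when a delivery arrives. The event name, payload fields, and notification format are all hypothetical; a real receiver would also sit behind an HTTP endpoint that accepts the POST body.

```python
import json

def handle_comment_webhook(payload_json):
    """Turn a (hypothetical) comment-posted webhook payload into notifications."""
    payload = json.loads(payload_json)
    if payload.get("event") != "comment.posted":
        return []  # ignore event types this hook doesn't care about
    author = payload["author"]
    # Stimulus -> response: notify every thread subscriber except the author.
    return [f"notify {user}: new comment by {author}"
            for user in payload.get("subscribers", [])
            if user != author]

# Simulating the HTTP POST body a webhook delivery would carry:
body = json.dumps({"event": "comment.posted", "author": "alice",
                   "subscribers": ["alice", "bob", "carol"]})
print(handle_comment_webhook(body))
```

Hosted as a serverless function, this logic runs only when a comment is actually posted – there is no always-on listener to pay for between events.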
Webhooks are one of the most popular applications of serverless computing, and they are likely to dramatically change how we view connected IT in the IoT age. The internet of things (IoT), still in its infancy, is starting to redefine what talks to us. Instead of sitting at desktop or laptop workstations, we may be commanding, or taking commands from, our refrigerator, smart TV, or even toaster or countertop. But serverless computing will help to determine what happens when we do this – what we say to all of our smart appliances and how they respond.
We can already see intrepid users monkeying around with these types of technologies, as in this compelling read at ZDNet. Other examples abound: With a serverless computing code snippet, you can set notifications for managing your home's ambient temperature, or make sure you know when the toast pops up, or, (in an example given by the above writer) make an animatronic rabbit wiggle its ear when the stock market changes.
It's that open frontier of ideas that shows how vital serverless computing can be to a cutting-edge distributed architecture like an IoT system. It's a guidepost to what this model can provide in the future.
How Serverless Serves Multitenancy - and Why That's a Win
Along with the ability to deliver all sorts of nifty new functions in an enterprise architecture, serverless computing is exciting to businesses for another reason – one that has to do with that model of buying the hamburger – the optimized provision of computing.
As background, the nascent cloud industry was split between "public" and "private" cloud models. In private cloud, the vendor set up a specific walled network system just to house a single client's data. In public systems, which were often referred to as "multitenant," the vendor kept multiple accounts in the same architecture.
The trade-off was between security and cost, and in the end, if you ask a lot of experts, cost won. That's partly because it's relatively easy to provide capable security and segment customer data with public cloud. The other reason is that private systems are often prohibitively expensive or not worth the money. To put it simply, there is hacking potential even in private cloud, and considering the ROI, many companies ultimately opted for a multitenant approach, rewarding the vendors that chose that direction.
While offering microservices based on individual function requests and triggers, smart serverless computing vendors serve a lot of different customers at once. That saves them enormous amounts of money, savings that they may pass on to the customer.
At TechCrunch, Anshu Sharma provides a good read on this contrast, calling serverless computing "the new multi-tenancy" and suggesting that the savings are going to propel "public" serverless computing into the mainstream.
"You truly pay for only what you need," writes Sharma, echoing one of the most full-throated value propositions for a new and emerging tech model. Sharma goes on to talk about applications to CRM and other sectors that are so vital to competing in today's business world.
With all of the above in mind, there is a lot of excitement about serverless computing floating around. In fact, some of it might seem hyperbolic – until you think about just how quickly cloud and other past innovations took off.
Over at InfoWorld, Matt Asay made a compelling case just this past summer that a lack of serverless computing services could hurt one of the biggest brands in tech.
Google has become a true giant. Over time, through its innovations, acquisitions, and dominant search technology, the company has cornered the market on certain tech segments. But now, Asay and others argue that Google could lose an important race – the race to offer serverless computing options, where AWS Lambda and Azure Functions are competing plays to bring these new models to customers.
Ticking off major commercial brands that have jumped to serverless computing, Asay underscores the idea that containerization, as new as it is, can in some ways be made obsolete by newer serverless computing technologies. Naming the technologies that serverless computing is replacing is one more way to really understand what it is and how it is being used.
What's next for serverless computing? That's up to the adopters.
You could Google "Lambda and Alexa" to get an idea of applications to smart home technologies, or "serverless and manufacturing" to see how the same technologies are being applied to that brave new world of production robotics and data-driven industrial processes. For instance, in a survey of a Fortune 500 company that makes heavy equipment, a Flux7 case study goes over some of the benefits of serverless design. "Managing security, risk and compliance" might not have sounded like something a production company would have been doing in the 1980s, but today, the analysis of how audit and notification tools power business, even a very physical business, shows how dependent all fields are on software, and how serverless can help.
There's also the promise of more "out of the box" functionality as a result of serverless technologies: At the Next Platform, Doug Vanderweide makes the case for a coming egalitarianism in which regularly defined functions will "make everyone a programmer" and possibly erode that guild of smart techies that control comfortable positions in corporate structures.
Is that a good or a bad thing? It really depends – but consider an analogy: early automation in HTML tools made web design easier, and that accessibility aided business. In the words of some consultants, if you don't need a data scientist to do analysis – if anyone can perform analysis and create reports – you're generally better off.
Look for serverless computing to enable many of these changes in the future, as it emerges to provide alternatives for purchasing server performance in big blocks.