FaaS: From server farms to serverless computing

The cost and time-to-market appeal of serverless computing is too significant for IT departments to ignore.

Serverless computing is an approach to deploying and running code in the cloud in which the billable unit of computing is a carefully demarcated function rather than a "server" component such as a Virtual Machine (VM), an operating system or an application server. Because of this focus on functions, serverless computing is often called Function as a Service (FaaS).

The serverless model is disruptive above all because it is supremely cost-effective. How? By making the function the only billable asset and abstracting it from all the scaffolding software and hardware that supports it, the model frees customers to pay only for raw, business-purposed computational resources.

This approach in effect reimagines how software is designed, implemented and operated. And in true affirmation of its possibilities, Gartner advises that "technology and service product managers must prepare for…(this) change."

80 years of history

Interestingly, there is a strong historical connection between functions and the name "Lambda" that Amazon uses to brand its serverless offering. A lambda function (or lambda abstraction) is a nameless function definition that can be treated as a value and, thus, used to create more complex higher-order functions. The concept itself owes its origins to American mathematician and logician Alonzo Church's famed lambda calculus work from 1936.

In mathematics and pure functional programming, functions are strict mappings of values from a domain to a range (or co-domain). This means that, given the same input, a function must always produce the same output. For example, an expression that takes a customer's date of birth and the current year as inputs and produces the customer's age as an output is a function, since the same inputs always yield the same result. On the other hand, an expression that returns the current temperature of a given city is not a function, since the result can vary every time it is evaluated. Many programming languages therefore call their units of execution "methods" or "procedures", since they are not pure functions in the strict, formal sense.
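
A minimal TypeScript sketch of the distinction (the weather endpoint is hypothetical):

```typescript
// Pure function: the output depends only on the inputs, so the same
// (dateOfBirth, currentYear) pair always yields the same age.
function age(dateOfBirth: Date, currentYear: number): number {
  return currentYear - dateOfBirth.getFullYear();
}

// Not a function in the formal sense: the result depends on hidden, changing
// state (whatever the weather service reports at the moment of the call).
async function currentTemperature(city: string): Promise<number> {
  const response = await fetch(`https://weather.example.com/${city}`); // hypothetical endpoint
  const body = await response.json();
  return body.temperatureCelsius;
}
```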

In that context, a function implemented in AWS Lambda or Azure Functions behaves much more like a regular method or procedure, since multiple invocations can produce different values. However, the vendors' slight misuse of the word "function" arguably captures an aspiration about the behavioral properties a proper function should possess. Properties such as statelessness and idempotency make functions behave closer to what their name formally represents, and these properties are essential for the massive parallelism and scaling of function-based solutions.
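
As an illustration of why those properties matter, here is a sketch of a stateless, idempotent handler in TypeScript; the event shape, the premium formula and the in-memory store are illustrative assumptions (a real function would use a durable store, since instances do not share memory):

```typescript
interface PremiumRequest {
  requestId: string;     // supplied by the caller and used as an idempotency key
  customerAge: number;
  coverageAmount: number;
}

// Stand-in for a durable store (e.g. a table keyed by requestId).
const processed = new Map<string, number>();

// Stateless: everything the computation needs arrives in the event.
// Idempotent: replaying the same requestId returns the recorded result
// instead of repeating the work or its side effects.
export function handler(event: PremiumRequest): number {
  const existing = processed.get(event.requestId);
  if (existing !== undefined) {
    return existing;
  }
  const premium = event.coverageAmount * (event.customerAge > 60 ? 0.03 : 0.01);
  processed.set(event.requestId, premium);
  return premium;
}
```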

Greater computational density, lower costs

Since the first appearance of time-sharing systems in the sixties, we have been trying to make a single computer serve multiple users. Multitenancy is a concept almost as old as computing itself. The objective is economic: to find the most efficient way to allocate and utilize a scarce resource.

From an economic perspective, and setting aside concerns such as security, isolation and memory protection, mechanisms such as virtualization and containerization make it possible to maximize the amount of business logic that can be derived from a capital asset whose cost is relatively fixed over time (accounting for depreciation). Actual utilization (e.g. keeping the CPU busy and allocating memory) adds comparatively little marginal cost, mainly energy and, where applicable, network bandwidth.

In an IaaS offering, the customer pays for the "share" of a machine that is instantiated, but whether 0 percent or 100 percent of the computational power within that share is used does not radically alter the cost. It is like booking a three-bed hotel room: the room must be paid for in full even if only a single guest checks in and sleeps in one bed.

PaaS and container models achieve greater rationalization by making each share of computational capacity even smaller. A virtual machine may represent, say, one-eighth of a physical machine, whereas a PaaS application instance or container can be an order of magnitude smaller. But the computer's resources are still not fully utilized: no matter how small the computing shares are, the "long tail" of them remains, on average, underutilized.

In the serverless model, the asset that is subject to costing and billing is the most fundamental, abstract and indivisible unit of computation: a function, a mechanism that takes a value A and produces a value B in return. Serverless computing is therefore the ultimate model for the hyper-rationalization of computing resources.

Architecture

However, "serverless", or without a "server", does not mean that the server infrastructure (machines, VMs, containers, application servers, etc.) disappears; rather, that infrastructure is abstracted away from the problem at hand, which is executing a unit of computation.

Challenges like the number of threads, the number of instances, auto-scaling orchestration and so on relate to the management of components in the "server" space: How many containers do I need? How many VMs should I spin up to support those containers? Do I need Kubernetes for orchestration? These are all problems of scaffolding management, unrelated to running the relevant business logic. In a serverless model, deciding what computing resources (and which deployment model) are required to run a function is the cloud provider's business rather than the customer's.

We have already said that the entire server stack is put away for all functional purposes. But there is another fundamental element of abstraction: the separation of concerns between the implementation of the function as a general construct that receives data as input and produces data as output, and the "binding" of that function to a transport layer such as HTTP or a messaging interface.
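
A minimal TypeScript sketch of that separation, assuming a Node.js runtime and simplified event shapes: the core function knows nothing about HTTP or queues, and thin adapters bind it to each transport.

```typescript
// Core logic: plain data in, plain data out, with no knowledge of any transport.
export function calculateAge(input: { birthYear: number; currentYear: number }): { age: number } {
  return { age: input.currentYear - input.birthYear };
}

// HTTP binding: unwrap the request body, call the core function, wrap the response.
export async function httpHandler(req: { body: string }): Promise<{ statusCode: number; body: string }> {
  const result = calculateAge(JSON.parse(req.body));
  return { statusCode: 200, body: JSON.stringify(result) };
}

// Messaging binding: the same core function reused for a queue-triggered invocation.
export async function queueHandler(message: { birthYear: number; currentYear: number }): Promise<void> {
  console.log(`Computed age: ${calculateAge(message).age}`);
}
```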

Security, logging, and debugging concerns are typically addressed using tooling native to the runtime environment offered by the serverless vendor. This requires a few considerations if we want to create serverless functions that can be migrated from one provider to another without significant changes.

For example, both Azure Functions and AWS Lambda support the .NET platform; however, on Azure, log statements use the TraceWriter class, whereas AWS Lambda uses the LambdaLogger class. For this reason, it is a good idea to keep the function implementation outside the entry handler and perform logging against a proxy interface whose calls are then dispatched to the serverless platform's appropriate logging class.
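
The same pattern can be sketched in TypeScript against the Node.js runtimes of the two platforms (the handler signatures are simplified; the .NET classes above would be wrapped the same way): the function implementation logs only against a small proxy interface, and each entry handler supplies a platform-specific implementation.

```typescript
// Proxy interface: the function implementation depends only on this.
interface Logger {
  info(message: string): void;
}

// Portable function implementation, kept outside any entry handler.
function calculatePremium(age: number, coverage: number, log: Logger): number {
  const premium = coverage * (age > 60 ? 0.03 : 0.01); // illustrative formula
  log.info(`Premium computed: ${premium}`);
  return premium;
}

// AWS Lambda (Node.js) entry handler: console output is captured by CloudWatch Logs.
export async function lambdaHandler(event: { age: number; coverage: number }) {
  const log: Logger = { info: (m) => console.log(m) };
  return { premium: calculatePremium(event.age, event.coverage, log) };
}

// Azure Functions (Node.js) entry handler: the runtime passes a context with a log method.
export async function azureHandler(
  context: { log: (m: string) => void; res?: { body: unknown } },
  req: { body: { age: number; coverage: number } }
) {
  const log: Logger = { info: (m) => context.log(m) };
  context.res = { body: { premium: calculatePremium(req.body.age, req.body.coverage, log) } };
}
```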

One last aspect to consider is that serverless computing is most appropriate for use cases that involve raw, pure business logic, such as calculating the premium for an insurance policy. Technical, multi-step and heavily algorithmic use cases, such as compressing a video file, are not ideal for this computing model.

What the future holds

Currently, the serverless paradigm is implemented on top of conventional stacks such as containers and popular language runtimes, for example V8 (Node.js), .NET and Java. There is still a good amount of "abstraction leak", in the sense that developers can still get programmatic access to the abstraction layers below (the container system, the system APIs, etc.) and implement behavior in a non-idiomatic fashion. I predict that future serverless platforms will offer either a custom language or stripped-down conventional languages (the approach currently used by Google App Engine) to execute functions in a pure, sandboxed environment without abstraction leaks. Removing this unnecessary boilerplate should further increase computing density and, thus, lower costs.

The cost and time-to-market appeal of serverless computing is indeed too significant for IT departments to ignore.

The author is VP, Enterprise Architecture at Mphasis. 

Disclaimer: This article is published as part of the IDG Contributor Network. The views expressed in this article are solely those of the contributing authors and not of IDG Media and its editor(s).