Serverless computing, or simply "Serverless", is a hot topic in the software engineering world. The "Big Three" cloud vendors (Amazon, Google, and Microsoft) are investing heavily in Serverless, and plenty of books, open-source projects, conferences, and software vendors are dedicated to the subject. But what is Serverless, and why is it (or isn't it) worth considering? In this article we hope to shed some light on these questions.
To begin with, we'll take a look at the "what" of Serverless. We'll get into the advantages and disadvantages of the approach later.
What is Serverless?
Serverless is a cloud-computing execution model in which the cloud provider dynamically manages the allocation and provisioning of servers. A serverless application runs in stateless compute containers that are event-triggered, short-lived (they may last for only a single invocation), and fully managed by the cloud provider. Pricing is based on the number of executions rather than on pre-purchased compute capacity. In short, it may well be the ideal framework for that project you have been planning for a long time.
Most cloud providers have invested heavily in serverless, and that is a lot of money. Given the intense promotion and the genuine value on offer, you can safely assume that serverless will be one of the most widely used cloud services in the future. Here are some of the currently available offerings: AWS Lambda, Google Cloud Functions, Azure Functions, IBM OpenWhisk, Alibaba Function Compute, Iron Functions, Auth0 Webtask, Oracle Fn Project, and Kubeless.
Until recently, your applications ran on servers that you had to patch, update, and watch over day and night because of all the unthinkable errors that could break your production. As long as you managed those servers, the full responsibility for keeping them in working order was yours. Serverless is different: you no longer need to worry about the underlying servers, because you no longer manage them; that responsibility is taken on by the cloud vendors. Still, despite the appealing features of Serverless, in some cases a traditional architecture is the better choice.
A Couple of Examples
- Create Audiobooks with Amazon Polly and AWS Batch is a wonderful example of an asynchronous job queue that leverages several AWS serverless services, including Lambda, Batch, and Polly.
- LambStatus, a host-it-yourself status monitor for your websites, is another example of a serverless application.
Unpacking “Function as a Service”
FaaS is a way of implementing Serverless architectures in which developers deploy an individual function or a piece of business logic. Functions typically start within milliseconds (~100 ms for AWS Lambda) and handle individual requests within a timeout imposed by most cloud providers (300 seconds on AWS Lambda at the time of writing).
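As a concrete sketch, here is what such an individually deployable unit can look like, using AWS Lambda's Python handler signature (`handler(event, context)`); the `name` payload field and the response shape are purely illustrative:

```python
import json

def handler(event, context):
    """Entry point the platform invokes once per request.

    `event` carries the request payload; `context` exposes runtime
    metadata such as the request id and remaining execution time.
    """
    name = event.get("name", "world")  # illustrative payload field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

During development you can exercise the handler locally by calling it with a sample event, e.g. `handler({"name": "Ada"}, None)`.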
Principles of FaaS:
- Servers fully managed by the provider
- Billing based on requests
- Event-driven and instantaneously scalable
Key properties of FaaS:
Independent, server-side, logical functions
FaaS functions resemble the functions you’re used to writing in everyday code: small, distinct units of logic that take input arguments, process the input, and return a result.
As far as Serverless is concerned, everything is stateless. You can’t save a file to disk during one execution of your function and expect it to be there on the next: any two requests to the same function could run in completely different containers.
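A small sketch makes the contrast clear. The first handler below relies on module-level state, which survives only as long as one container does; the second follows the safe stateless pattern, where all state travels in with the event (or lives in an external store) and out with the response. Both function names are illustrative:

```python
# Module-level state lives only as long as this container does. A second
# request may land on a fresh container where the counter is zero again.
invocation_count = 0  # unreliable across containers

def stateful_handler(event, context):
    global invocation_count
    invocation_count += 1  # only valid within ONE container's lifetime
    return {"count": invocation_count}

def stateless_handler(event, context):
    # Safe pattern: state arrives with the event (or comes from an
    # external store such as a database) and is returned in the response.
    previous = event.get("count", 0)
    return {"count": previous + 1}
```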
FaaS functions are designed to spin up quickly, do their work, and shut down again. They do not linger when unused: the underlying containers are kept alive only while there is work to do.
Although functions can be invoked directly, they are usually triggered by events from other cloud services, such as HTTP requests, new database entries, or incoming message notifications. FaaS functions are often used, and thought of, as the glue between services in a cloud environment.
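For instance, a function can be wired to fire whenever a new object lands in a storage bucket. The sketch below parses an event in the shape of an Amazon S3 notification (`Records` → `s3` → `bucket`/`object`); the bucket and key names in the usage example are placeholders:

```python
def on_upload(event, context):
    """Triggered when a new object is written to a storage bucket.

    Walks the notification records and collects (bucket, key) pairs,
    which downstream logic could then fetch and process.
    """
    uploads = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        uploads.append((s3["bucket"]["name"], s3["object"]["key"]))
    return uploads
```

A local invocation with a sample event, such as `on_upload({"Records": [{"s3": {"bucket": {"name": "audiobooks"}, "object": {"key": "chapter1.txt"}}}]}, None)`, returns the list of newly uploaded objects.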
Scalable by default
Because functions are stateless, multiple containers can be initialized on demand, allowing as many copies of a function as needed to run (in parallel, if necessary) to continually service all incoming requests.
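You can picture the platform's scaling as one worker per in-flight request. This rough local simulation (a thread pool standing in for the provider's containers; names are illustrative) works precisely because each invocation shares no state with any other:

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id: int) -> str:
    # Each invocation is independent: no shared state, so the platform
    # is free to run any number of copies side by side.
    return f"processed {request_id}"

def serve(requests: list[int]) -> list[str]:
    # Stand-in for the platform fanning requests out across containers.
    with ThreadPoolExecutor(max_workers=len(requests)) as pool:
        return list(pool.map(handle_request, requests))
```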
Fully managed by a Cloud vendor
AWS Lambda, Azure Functions, IBM OpenWhisk, and Google Cloud Functions are the best-known FaaS solutions on the market. Each offering typically supports a range of languages and runtimes, e.g. Node.js, Python, .NET Core, and Java.
Benefits of Serverless Computing
So far, we've mostly stuck to defining and explaining what Serverless architectures have come to mean. We will now discuss some of the benefits and drawbacks of this way of designing and deploying applications. You should never decide to use Serverless without significant thought and a careful weighing of the advantages and disadvantages.
Reduced operational cost
Serverless, simply put, is an outsourcing solution. It lets you pay someone else to manage the servers, databases, and even application logic that you might otherwise have to manage yourself. Because you're using a predefined service that many other people use alongside you, the economies of scale work in your favor: you pay less for your managed database because the vendor is running thousands of very similar databases.
FaaS: Scaling costs
One of the biggest benefits of Serverless FaaS is that, as mentioned above, scaling is completely automatic, elastic, and managed by the provider. There are several advantages to this, but on the basic infrastructure side the biggest is that you pay only for the compute you actually use, down to a 100 ms boundary in the case of AWS Lambda. Depending on your traffic scale and shape, this can be a significant economic win.
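The billing model is easy to estimate. The sketch below rounds duration up to the 100 ms boundary mentioned above and charges per GB-second plus per request; the default prices are illustrative figures, not current vendor list prices, so check your provider's pricing page:

```python
import math

def lambda_cost(invocations: int, duration_ms: float, memory_mb: int,
                price_per_gb_second: float = 0.0000166667,
                price_per_million_requests: float = 0.20) -> float:
    """Estimate a FaaS compute bill (illustrative prices, not a quote)."""
    # Duration is rounded UP to the nearest 100 ms billing boundary.
    billed_ms = math.ceil(duration_ms / 100) * 100
    gb_seconds = invocations * (billed_ms / 1000) * (memory_mb / 1024)
    compute_charge = gb_seconds * price_per_gb_second
    request_charge = (invocations / 1_000_000) * price_per_million_requests
    return compute_charge + request_charge
```

With these sample figures, a million invocations of a 128 MB function running 120 ms each (billed as 200 ms) cost well under a dollar, which illustrates why low or bursty traffic is where the model shines.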
Easier operational management
Serverless computing reduces your liability: there is no backend infrastructure for you to be responsible for, which means little to no system administration. The service can be set up much faster because it is scalable by default, and you do not have to worry about the number of concurrent requests. This also fosters innovation on the developer's side.
From the user's perspective, if businesses use this competitive edge to ship features faster, customers are bound to receive new features sooner than before. Additionally, such apps are likely to offer client-side caching, giving users a better offline experience.
Over the past twenty years or so, there has been a large increase in the number and size of data centers across the world. The physical resources required to build these centers, together with their energy requirements, are so large that Apple, Google, and other tech giants in the same league talk about hosting some of their data centers near sources of renewable energy in order to reduce the fossil-fuel impact of such sites.
Cloud infrastructure has likely helped reduce this impact already, since companies can "buy" more servers on demand, only when they actually need them, rather than provisioning every possibly necessary server long in advance. In this way, serverless computing can help conserve energy and, more broadly, the environment.
Drawbacks of Serverless Computing
With an outsourcing strategy, you give up control of part of your system to a third party. That lack of control may lead to problems such as system downtime, unexpected limits, cost changes, loss of functionality, forced API upgrades, and more.
Similarly, vendor lock-in means placing greater trust in a third-party provider, and that additional exposure carries risks of its own: security risk, disaster-recovery risk, and unpredictable cost, since the number of executions is not known in advance.
From a developer's perspective, serverless applications have their own set of drawbacks. The technology is still immature, which results in component fragmentation and unclear best practices. There is also architectural complexity, and extra discipline is needed to keep functions from sprawling in a disorderly way. Multi-tenancy is another concern: neighboring projects may hog the resources of the shared system behind the scenes. Local testing is tricky as well, since the cloud services a function depends on are hard to replicate on a developer's machine. Finally, unless it is architected carefully, an app can end up with a poor user experience due to increased request latency.
The Future of Serverless
After taking a peek into the world of Serverless architectures, we can now discuss a few areas where the Serverless world may develop in the coming months and years.
Mitigating the drawbacks
Serverless is still relatively new in the world of technology. That said, the most important developments in Serverless will be those that mitigate its inherent disadvantages and eliminate, or at least soften, its drawbacks in applications.
More permissions to manage
In the serverless world, we have many distinct functions, each offering its own set of services and responsibilities, with its own stand-alone storage and state management. This results in hundreds of interfaces and can lead to situations where certain functions receive more permissions than they should. For example, a function written only to run calculations or send out emails may end up with access to database resources. To avoid this, go through each function and decide what it actually needs to do, then follow the principle of least privilege, so that each function can do no more than the specific task it is responsible for. Any extra permissions only widen the potential attack surface. Functions should also be continuously scanned for suspicious activity.
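A least-privilege policy for the email-sending example above might look like the following sketch, expressed as an IAM-style policy document (`Version`/`Statement`/`Effect`/`Action`/`Resource` follow the standard IAM JSON layout; the account id, region, and domain are placeholders). The small helper is a hypothetical check, not a real IAM API:

```python
# Least-privilege policy for an email-sending function: it may call
# ses:SendEmail and nothing else -- in particular, no database actions.
send_email_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ses:SendEmail"],
            "Resource": "arn:aws:ses:us-east-1:123456789012:identity/example.com",
        }
    ],
}

def allows(policy: dict, action: str) -> bool:
    """Tiny illustrative check: does any Allow statement name this action?"""
    return any(
        stmt["Effect"] == "Allow" and action in stmt["Action"]
        for stmt in policy["Statement"]
    )
```

With this policy, `allows(send_email_policy, "dynamodb:GetItem")` is false: the calculation-or-email function from the paragraph above simply cannot touch the database.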
Serverless architecture is definitely quite exciting, but it comes with a set of limitations and drawbacks. The validity and success of an architecture depend on the business requirements, and by no means only on the technology used. Likewise, serverless has its own range of benefits when used in the right place. So take a look at the great thing that is Serverless, and also take a peek into what Serverless looks like from the inside, before making a decision.