Interpreting the basic concepts and principles of Serverless

The novel The Three-Body Problem depicts the interior of a future spacecraft: technology is so advanced that the complex details of its facilities are hidden. No equipment is visible inside the ship, yet when needed, seats, tables, and other facilities appear for people to use. Serverless is a similar idea. It hides the complex details of service operation, maintenance, and physical infrastructure inside the cloud platform, exposing only a set of interfaces to developers. Through these interfaces, a developer can start a service at any time; after it finishes running, the resources it occupied are reclaimed to await the next call. From an ordinary user's perspective, all software services are already serverless, because users never need to understand the mechanisms operating behind them.

For any service we encounter, we can judge whether it follows a serverless architecture by asking a few questions:

  1. How many machines does the service run on?
  2. Where are those machines deployed?
  3. What operating system do they run?
  4. What software is installed on them?

If we cannot answer these questions clearly, then the service we are using has a serverless architecture.

Serverless and the front end

With the popularity of cloud computing, serverless architecture has gradually become an important piece of infrastructure for front-end developers. Front-end applications have grown more and more complex in recent years, shifting from focusing purely on the user interface to taking on more business logic. As the BFF (Backend for Frontend) layer has become popular, each business tends to maintain its own BFF layer. While convenient, this also creates new problems:

  1. Higher operation and maintenance costs
  2. Server-side resource utilization efficiency is not high
  3. A lot of basic logic may need to be implemented multiple times independently

How to solve these problems is something front-end developers need to think about in their daily work. When the BFF layer is moved to a serverless architecture, the problems above are largely resolved, letting us focus more on solving business problems.

Front end and full stack

As front-end architecture continues to evolve, replacing monolithic application architectures with serverless has become a trend. Front-end developers can use Serverless to make up for their weaknesses and greatly reduce server operation and maintenance costs.

In fact, in many companies, although front-end engineers are indispensable, their position is quite passive; they self-deprecatingly call themselves "tool people". Even those dissatisfied with this status quo find it hard to change things in the prevailing environment. A major reason is that front-end engineers are usually "too far away" from the business, while back-end engineers rarely need to know how the front end displays or interacts; they only need to understand the business logic, and that alone gives them enough of a voice.

With the continuous evolution of front-end architecture and changes in the front-end/back-end cooperation model, Serverless will let the front end take on more upper-level business logic instead of just writing simple pages. Seen this way, the challenge for front-end engineers is to understand business processes in depth and grasp the overall picture. That is what makes a true full-stack engineer.

The evolution of *aaS

Serverless is an abstract concept; it does not prescribe a single implementation, and it keeps developing in practice. So how does it relate to PaaS (Platform as a Service) and BaaS (Backend as a Service), which are often mentioned in the industry?

In the early IT era, deploying an application typically required the following steps:

  1. Buy a server
  2. Install the operating system
  3. Install dependent software, such as MySQL and Nginx
  4. Deploy the application: copy the code to the server

Deploying an application this way takes a long time, and the time cost is very high. The cloud computing technology widely used today solves this problem well.

Cloud computing provides three service models: IaaS, PaaS, and SaaS.

IaaS (Infrastructure as a Service) provides basic processing, storage, network connectivity, and other fundamental computing resources, on which users can deploy operating systems directly. Customers can deploy and run their own services without purchasing or renting physical servers.

PaaS (Platform as a Service) builds on the infrastructure to further provide computing platforms and solution services, for example database services, cache services, and message queue services.

SaaS (Software as a Service) goes one step further and provides out-of-the-box software services. The software needs no installation; users consume it directly through a client. SaaS services can solve business scenarios directly, whereas with PaaS we still need to implement the business logic on top.

These three aaS models serve users at different layers, so users can choose whatever fits their scenario. Cloud computing encapsulates computing resources through this layered model, letting users access them on demand.

With the development of container technology, a new service model has emerged on top of IaaS: CaaS (Container as a Service). Cloud providers shifted from offering virtual machines to offering containers. Through container orchestration services, developers can build and deploy containerized applications via description files.

If CaaS is an evolution of IaaS capabilities, then BaaS is an extension of capabilities on top of PaaS. We often use third-party services to replace certain technical functions in an application. These third-party services are generally provided as APIs that scale automatically; for developers they require no operation and maintenance, so they are serverless services. By these characteristics, BaaS and PaaS are not very different, but they target different audiences: BaaS is oriented directly toward the terminal, such as mobile apps and Web sites, and developers can use these BaaS capabilities directly from the terminal.

The above mainly introduces the evolution of cloud computing services. Measured against the serverless criteria given at the beginning, all of these service models have some characteristics of a serverless architecture. The CNCF (Cloud Native Computing Foundation) serverless white paper clearly defines the capabilities serverless should provide: a serverless computing platform should include one or both of the following:

  1. Function as a Service (FaaS): event-driven computing services.
  2. Backend as a Service (BaaS): third-party services, exposed directly through APIs, that replace certain core capabilities in an application.
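As a sketch of the first capability, a FaaS-style function is simply code written against an event and a context; the field names below are illustrative and do not belong to any particular vendor's API:

```javascript
// A minimal FaaS-style handler: the platform passes in an event (the trigger
// payload) and a context (runtime metadata). Field names are illustrative.
function handler(event, context) {
  return {
    message: `received ${event.type} event`,
    requestId: context.requestId,
  };
}

// Example invocation, as a platform might perform it:
console.log(handler({ type: 'http' }, { requestId: 'req-1' }));
// → { message: 'received http event', requestId: 'req-1' }
```

The platform, not the developer, decides when and where this handler runs; the developer only supplies the function body.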


The following introduces FaaS, a serverless technology that integrates closely with the front end.

Based on the event-driven idea, FaaS lets developers run code at the granularity of a single function, triggered by HTTP requests or other events. Developers only write business code, without attending to server resources. Billing changes from renting virtual machines by the month to paying for actual consumption, which reduces operation, maintenance, and rental costs and greatly improves hardware utilization. Deployment efficiency also improves: releasing a new function only requires bringing that one function online.

In the FaaS event processing model, an event source (such as an HTTP request, a timer, or a message queue) triggers the platform to start a function instance, execute it, and return the result; idle instances are then reclaimed.
Compared with the traditional R&D model, FaaS brings the following advantages:

  1. Higher R&D efficiency. Traditionally we must handle both business implementation and technical architecture; the goal is the business, yet we keep attending to architecture along the way. FaaS provides both a function runtime and a function scheduling mechanism, letting developers focus on the business and improving R&D efficiency.
  2. Lower deployment costs. With FaaS, after writing a function, a developer only needs the web console or a simple command-line tool to deploy it.
  3. Lower operation and maintenance costs. Thanks to Serverless's elastic scaling, we do not need to watch server load, and almost all the work of ensuring availability can be omitted.
  4. Lower learning costs. Just as a driver need not understand the engine and a photographer need not study optics, we can deploy business functions directly through FaaS.
  5. Lower server costs. Services based on virtual machines or containers are billed from the moment resources are allocated, whether or not they are used. FaaS is billed by call volume and execution time, which saves a great deal of cost.
  6. More flexible deployment. Each function is released and controlled independently; releasing a new version starts a new instance rather than overwriting the previous one, so existing functionality is unaffected, and multiple deployment environments and grayscale traffic splitting become easy to implement.
  7. Higher system security. Because Serverless has no concept of a server, R&D and operations staff never log in to one; with the server's "door" closed, attacks become harder.


FaaS also has some drawbacks:

  1. Platform learning costs. FaaS is a relatively new architecture that still lacks documentation, examples, and best practices, and each vendor's platform implementation differs, which raises the learning cost for developers.
  2. Higher debugging costs. A function cannot simply be run locally; we must either reproduce the same container environment locally or debug remotely. Both are troublesome, and this remains a problem to be solved.
  3. Potential performance issues. After a service has gone uncalled for a while, its function instances are scaled down to zero. When a new request arrives, a container must be started and the function deployed before it can execute. This 0-to-1 process is called a cold start, and depending on the language runtime it takes anywhere from about 10 ms to 5 s.
  4. Vendor lock-in. Because FaaS is a new cloud service model with no unified standard, each vendor implements it differently, so we cannot easily migrate from one vendor's platform to another.

Implementing a simple FaaS

The sections above covered the basic concepts. Let's take FaaS as an example and see how to implement a simple FaaS based on Node.js.

Among current FaaS implementations, the most common approach is container-level isolation based on Docker, which can also isolate and limit system resources. The other is process-based isolation, which is more portable and flexible, but offers weaker isolation than containers.

This section takes the implementation based on process isolation as an example.

  1. Sandbox environment. In an operating system, each process has its own memory space, and processes cannot access memory allocated to one another; this prevents process A from writing data into process B. In Node.js, the main process can listen for function invocation requests; when a request arrives, it starts a child process to execute the function, receives the result, and finally returns it to the client. For security, the code is executed with the vm2 module, because the vm module built into Node.js is not absolutely safe.
```javascript
// child.js: the child process. After receiving the code, execute it in a
// new sandbox environment and return the execution result.
const process = require('process');
const { VM } = require('vm2');

process.on('message', (data) => {
  const fnIIFE = `(${data.fn})()`;
  const result = new VM().run(fnIIFE);
  process.send({ result });
  process.exit();
});
```

```javascript
// index.js: the main process. Read the function code from a file, fork a
// child process, and hand the function over for execution.
const fs = require('fs');
const child_process = require('child_process');

const child = child_process.fork('./child.js');
child.on('message', (data) => {
  console.log('function result', data.result);
});

const fn = fs.readFileSync('./func.js', { encoding: 'utf8' });
child.send({ action: 'run', fn });
```

```javascript
// func.js: the function code, an arrow function that the child wraps in an
// immediately invoked expression
(event, context) => {
  return { message: 'function is running', status: 'ok' };
}
```
  2. Add an HTTP service. In a production environment, for the function to serve external callers, it also needs a Web API. With it, the service can execute different function code depending on the user's request path and return the result to the client.
```javascript
// child.js: unchanged from the previous step
const process = require('process');
const { VM } = require('vm2');

process.on('message', (data) => {
  const fnIIFE = `(${data.fn})()`;
  const result = new VM().run(fnIIFE);
  process.send({ result });
  process.exit();
});
```

```javascript
// index.js: the main process. Use Koa to provide HTTP service; when a
// request arrives, read the function code matching the request path and
// execute it in a child process.
const fs = require('fs');
const child_process = require('child_process');
const Koa = require('koa');

const app = new Koa();
app.use(async (ctx) => {
  ctx.response.body = await run(ctx.request.path);
});
app.listen(3000);

async function run(path) {
  return new Promise((resolve, reject) => {
    const child = child_process.fork('./child.js');
    child.on('message', resolve);
    try {
      const fn = fs.readFileSync(`./${path}.js`, { encoding: 'utf8' });
      child.send({ action: 'run', fn });
    } catch (error) {
      if (error.code === 'ENOENT') {
        return resolve('function not found');
      }
      reject(error.toString());
    }
  });
}
```

```javascript
// func.js: function code 1
(event, context) => {
  return { message: 'function is running', status: 'ok' };
}

// func2.js: function code 2
(event, context) => {
  return { name: 'func2' };
}
```

At this point, a basic FaaS capability based on process isolation is complete. On this foundation you can go further: improving performance, enforcing function execution timeouts, and limiting function resources (e.g. with cgroups).
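A function execution timeout, one of those follow-up issues, can be sketched as a race between the invocation and a timer. This is a minimal illustration; in the process-isolation setup, the timeout path would also kill the forked child process:

```javascript
// Rejects if the function invocation does not settle within timeoutMs.
// In a process-isolation setup, the timeout handler would also kill the
// forked child process; that part is omitted here.
function invokeWithTimeout(invoke, timeoutMs) {
  return Promise.race([
    invoke(),
    new Promise((_, reject) =>
      setTimeout(() => reject(new Error('function execution timed out')), timeoutMs)
    ),
  ]);
}

// A fast function resolves normally; one that never settles is rejected.
invokeWithTimeout(async () => 'ok', 100).then((v) => console.log(v)); // prints "ok"
invokeWithTimeout(() => new Promise(() => {}), 50).catch((e) => console.log(e.message));
```

The race pattern keeps the platform responsive even when user code misbehaves, which is exactly why real FaaS platforms impose hard execution limits.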


Serverless is a concept. Keep in mind the questions used to judge whether a service is serverless.

Any solution keeps evolving. To thoroughly understand what problem a technology solves, study the background it emerged from and connect the clues.

Front-end engineers should learn more about the business in their work. Only with enough of a voice in the business can you become a true full-stack engineer.

Abstracting and encapsulating complex things is a pattern found in many fields, not unique to the computer world.

Only by continually handing over control can we focus more on the business. Handing over control also buys us a degree of protection: after handing over control of our servers, for example, we no longer need to operate and maintain them. In effect, we form a contract with the service provider that makes everything run more efficiently.