
‘No server, no bottleneck’: serverless computing to scale with large volumes of incoming sensor data

Tuesday, June 28, 2022
Stefanos Peros
Software engineer

In recent years, the ever-growing number of interconnected devices that generate and transmit information across the Internet has given rise to new challenges in managing large volumes of incoming data while keeping services available and responsive. This is especially true in Internet of Things (IoT) use cases, where thousands of sensors may be measuring and transmitting data simultaneously through their sensor network gateway(s). Furthermore, incoming sensor data is typically encoded to cope with the bandwidth limitations of wireless sensor networks, and often needs to be aggregated before processing. As a result, incoming sensor data must be pre-processed before it reaches the backend: decoded and routed to aggregators that transform it into a usable format.

Traditionally, a server connects to the gateway and listens for incoming sensor messages, which it then decodes and forwards to the aggregators. Connecting to the gateway can be achieved through various technologies, depending on the gateway’s technical specifications; common options include RabbitMQ, WebSockets, CoAP, and MQTT. Such a centralised server setup can certainly get the job done for small to medium-sized IoT infrastructures, typically consisting of tens to hundreds of IoT devices that transmit every few minutes. However, as we experienced first-hand with our client, a central server is bound to become a bottleneck once the number of connected IoT devices grows beyond a few hundred.
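To make the traditional setup concrete, here is a minimal sketch of such a central consumer in Python using the pika RabbitMQ client. The host, queue name, and the decode/forward helpers are placeholders for illustration, not our client’s actual setup:

```python
import pika  # RabbitMQ client library

def decode(body: bytes) -> bytes:
    # Placeholder: real sensor payloads use a compact binary encoding.
    return body

def forward_to_aggregator(reading) -> None:
    # Placeholder: forward the decoded reading to the right aggregator.
    print(reading)

# Hypothetical gateway host and queue name.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="gateway.example.com"))
channel = connection.channel()
channel.queue_declare(queue="sensor-data", durable=True)

def on_message(ch, method, properties, body):
    forward_to_aggregator(decode(body))
    ch.basic_ack(delivery_tag=method.delivery_tag)

# A single consumer loop like this is exactly the central bottleneck described above.
channel.basic_consume(queue="sensor-data", on_message_callback=on_message)
channel.start_consuming()
```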

Naturally, an obvious solution would be to move to a distributed server setup, for example a region-based one. However, this would introduce additional overhead in terms of infrastructure management and configuration, e.g. ensuring that each gateway is associated with the right server at all times. While this is practically feasible and not uncommon, we did not want to introduce additional distribution to an already heavily distributed infrastructure consisting of thousands of IoT devices across the entire world. So, what if we just removed the server instead? After all: no server, no bottleneck, no problem, right? No, that’s not crazy talk: it’s serverless computing!

Serverless computing, illustrated in Figure 1, is a cloud computing execution model that supports running code, managing data, and integrating applications without the hassle of managing servers. While technically there are still servers, they are fully managed by the cloud provider, and users are charged purely on usage rather than for a fixed amount of bandwidth or number of servers. Most importantly, it features automatic scaling out of the box, so this cutting-edge cloud solution definitely piqued our interest for the problem at hand! Google Cloud Platform, Microsoft Azure, and Amazon Web Services (AWS) each provide their own serverless computing solutions. In our use case, we opted for Amazon’s AWS Lambda service, for reasons that will become clear below.

Figure 1: serverless computing illustration

In serverless computing, application business logic is written inside functions, which are called Lambda functions in the context of AWS. Once the business logic is in place, developers can build and deploy their serverless application using either the web console or the AWS SAM CLI tool. SAM CLI takes care of provisioning all the cloud resources used by the application, such as, in our use case, Amazon MQ: a managed message broker service for RabbitMQ. Once deployed, a Lambda function remains idle until a trigger occurs, as specified in a configuration file prior to deployment. Triggers can vary from HTTP requests to the arrival of a message (or a batch of messages) in a message queue. In our use case, the batch functionality provided by AWS made it stand out against its competitors, since batching enabled us to easily and efficiently process groups of sensor messages from RabbitMQ.
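As an illustration, a SAM template can declare an Amazon MQ (RabbitMQ) trigger with batching roughly as follows. The function name, ARNs, queue name, and batch size below are placeholder values, not our production configuration:

```yaml
Resources:
  SensorDecoderFunction:           # hypothetical function name
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/
      Handler: app.handler
      Runtime: python3.9
      Events:
        SensorMessages:
          Type: MQ                 # Amazon MQ event source
          Properties:
            Broker: arn:aws:mq:eu-west-1:111122223333:broker:sensors:b-0000  # placeholder ARN
            Queues:
              - sensor-messages    # placeholder queue name
            SourceAccessConfigurations:
              - Type: BASIC_AUTH
                URI: arn:aws:secretsmanager:eu-west-1:111122223333:secret:mq-creds  # placeholder
            BatchSize: 100         # deliver messages to the function in batches
```

Deploying with `sam build` and `sam deploy` then creates the event source mapping, so Lambda polls the broker and invokes the function with batches of messages rather than one message at a time.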

Without support for batches, we would instead have been restricted to reacting to the arrival of each individual message, leading to practically infeasible costs given the thousands of IoT devices sending data. Figure 2 illustrates how AWS Lambda works in the context of a simple example application for uploading images. When a user takes a picture and uploads it to cloud storage (Amazon S3), the Lambda function is triggered to resize the image to fit mobile, web, and tablet formats. When the function has finished resizing, AWS frees up the allocated resources. In the context of our IoT use case, Lambda functions instead decode a batch of messages and add relevant routing keys to efficiently forward them to their respective aggregators, as sketched below.

Figure 2: AWS Lambda example application.
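For our IoT pipeline, the corresponding handler could look roughly like this sketch. The event shape (rmqMessagesByQueue with base64-encoded message bodies) is what Lambda passes for Amazon MQ RabbitMQ triggers; the decode and publish helpers and the routing-key scheme are hypothetical stand-ins for our actual logic:

```python
import base64
import json

def decode(raw: bytes) -> dict:
    # Placeholder: real payloads use a compact binary sensor encoding.
    return json.loads(raw)

def publish(routing_key: str, reading: dict) -> None:
    # Placeholder: in practice, publish back to RabbitMQ under the routing key.
    print(routing_key, reading)

def handler(event, context):
    """Invoked with a batch of RabbitMQ messages from Amazon MQ."""
    for queue, messages in event["rmqMessagesByQueue"].items():
        for message in messages:
            raw = base64.b64decode(message["data"])  # bodies arrive base64-encoded
            reading = decode(raw)
            routing_key = f"aggregator.{reading['sensor_type']}"  # hypothetical scheme
            publish(routing_key, reading)
```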

Finally, we discuss some of the lessons learned by going serverless. Not only did it completely solve our scaling problem, but it did so in both directions. IoT devices are notorious for frequently going offline, often due to the harsh environments in which they are deployed, resulting in temporarily smaller volumes of incoming sensor data. By going serverless, auto-scaling ensures not only that our application can keep up with growing sensor data volumes, but also that fewer resources are used should incoming messages temporarily drop off due to sensor failures.

Most importantly, this happens without requiring any manual intervention or infrastructure management whatsoever. Another important advantage of auto-scaling is that customers pay only for the resources they actually use, which also reduces monthly operational costs by eliminating charges for idle capacity. While this all sounds very positive, there are still some considerations to weigh before transitioning to serverless: you become tightly coupled to a particular cloud vendor, and you entrust them with hosting your data and meeting all of your availability and security requirements. Furthermore, debugging and testing serverless applications brings additional complexity, as it typically involves an extra deployment step to a sandbox cloud environment to verify the correct integration among the various cloud services used by the application.

Are you considering going serverless? Hopefully, sharing our experiences has helped you identify potential benefits for your own use cases, along with what to consider before making the jump. In any case, keep an eye on serverless computing: it is definitely a hot topic that fits right into this increasingly service-oriented digital world!

Want to know more?

Want to discuss serverless computing with Stefanos? Fire him an email at 'stefanos.peros@panenco.com' with your serverless questions or to schedule a serverless chat!
