AWS API Performance: Lambda, Containers, and API Gateway's Direct Route

You've built a fantastic API endpoint on AWS, and now the big question looms: how fast is it, really? And more importantly, could it be faster? This is a question that often pops up, especially when you're looking at different ways to deploy your services. I recently found myself pondering this very thing, wondering how a direct integration from API Gateway to a service like SNS would stack up against the more traditional Lambda approach, or even running your workload in containers.

It's easy to have hunches, right? My initial thought was that cutting out a middleman, like Lambda, would naturally lead to a snappier response. But as we all know, reality can be a bit more nuanced. So, to get a clearer picture, I decided to put three common AWS API deployment patterns through a performance test.

The Contenders

We're looking at three distinct architectures, all designed to perform the same basic task: receive an HTTP POST request and forward its payload to an AWS SNS topic. The goal here is to isolate the performance differences to the architecture itself, not the application logic.

  1. API Gateway + Lambda: This is a classic serverless setup. A request hits API Gateway, which then triggers a Lambda function. The Lambda function does the heavy lifting of sending the data to SNS and then returns a response. It's a well-trodden path, offering flexibility and scalability.

  2. API Gateway Service Proxy: Here's where we try to shave off some latency. Instead of invoking Lambda, API Gateway is configured to integrate directly with SNS. This bypasses the Lambda execution environment entirely, aiming for a more streamlined flow. It's a neat trick, but not every AWS service supports this direct integration, and any request shaping has to happen in API Gateway mapping templates rather than in code.

  3. Containers on AWS Fargate: For this test, we're bringing Docker into the mix. Requests are routed through an Application Load Balancer (ALB) to containers running on AWS Fargate. The application within the container then forwards the payload to SNS. Fargate offers a way to run containers without managing underlying EC2 instances, providing a managed container experience.
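For option 2, the "code" lives in API Gateway configuration rather than a function. A common way to wire the SNS integration is a request mapping template that rewrites the incoming JSON body into an SNS `Publish` form post. The template below is a sketch of that pattern (the ARN is a placeholder), paired with a `Content-Type` override to `application/x-www-form-urlencoded`:

```
Action=Publish&TopicArn=$util.urlEncode('arn:aws:sns:us-east-1:123456789012:demo-topic')&Message=$util.urlEncode($input.body)
```

This is exactly the trade-off mentioned above: everything the Lambda function did in code now has to be expressed in mapping-template syntax.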
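To make option 1 concrete, here's a minimal sketch of the kind of Lambda handler involved: it takes an API Gateway proxy event and publishes the body to SNS. The topic ARN and the `publish` injection point are my own assumptions for illustration; in a real deployment the ARN would come from the function's environment variables.

```python
import json
import os

# Hypothetical topic ARN; in practice this is injected via Lambda env vars.
TOPIC_ARN = os.environ.get(
    "TOPIC_ARN", "arn:aws:sns:us-east-1:123456789012:demo-topic")

def build_publish_params(event):
    """Turn an API Gateway proxy event into SNS Publish parameters."""
    body = event.get("body") or "{}"
    return {"TopicArn": TOPIC_ARN, "Message": body}

def lambda_handler(event, context, publish=None):
    """Entry point. `publish` is injectable so the handler can be unit-tested
    without AWS credentials; inside Lambda it defaults to boto3's SNS client."""
    if publish is None:
        import boto3  # deferred so local tests don't require boto3
        publish = boto3.client("sns").publish
    resp = publish(**build_publish_params(event))
    return {
        "statusCode": 200,
        "body": json.dumps({"messageId": resp["MessageId"]}),
    }
```

The injectable `publish` parameter is just a testing convenience; the forwarding logic itself is all this architecture asks of the function.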

The Bake-Off

After setting up these three architectures, the real fun began: throwing requests at them. I started with a smaller batch of about 2,000 requests to get a feel for initial performance and to make sure everything was firing correctly. That initial run established a baseline of roughly 40 requests per second across the board.
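For anyone wanting to reproduce a run like this, here's a small stdlib-only load-test sketch: it fires N POSTs at an endpoint with a fixed concurrency and reports latency percentiles and throughput. The endpoint URL is a placeholder, and the injectable `send` callable is my own addition so the harness can be exercised without a live API.

```python
import json
import statistics
import time
from concurrent.futures import ThreadPoolExecutor
from urllib import request

# Hypothetical endpoint; substitute your own API Gateway or ALB URL.
ENDPOINT = "https://example.execute-api.us-east-1.amazonaws.com/prod/publish"

def post_once(send=None):
    """Send one JSON POST and return its latency in seconds.

    `send` is injectable for testing; by default it does a real HTTP POST."""
    if send is None:
        def send(payload):
            req = request.Request(
                ENDPOINT, data=payload,
                headers={"Content-Type": "application/json"})
            request.urlopen(req).read()
    payload = json.dumps({"hello": "world"}).encode()
    start = time.perf_counter()
    send(payload)
    return time.perf_counter() - start

def run_test(n=2000, concurrency=40, send=None):
    """Fire n requests at the given concurrency and summarize latencies."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(lambda _: post_once(send), range(n)))
    wall = time.perf_counter() - start
    return {
        "p50": statistics.median(latencies),
        "p95": latencies[int(0.95 * (len(latencies) - 1))],
        "rps": n / wall,
    }
```

A dedicated tool would give cleaner numbers, but a sketch like this is enough to compare the three architectures under identical conditions.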

Then came the main event: a larger test with 15,000 requests, designed to simulate a more sustained load and see how each architecture performed once it was 'warmed up.' The results, as they started to come in, offered some interesting insights into the trade-offs between these deployment strategies. The direct API Gateway-to-SNS integration did show a performance edge by cutting out the Lambda invocation entirely, but the overall picture is more nuanced: each option has its own strengths and ideal use cases. The containerized approach, while more involved to set up initially, offers a different kind of flexibility and control.
