Serverless: The Promise and Reality

Serverless computing promises automatic scaling, zero infrastructure management, and pay-per-use pricing. These promises are real — for the right workloads. For the wrong workloads, serverless introduces latency (cold starts), vendor lock-in, and higher costs than always-on servers. At Nexis Limited, we use serverless selectively — for specific use cases where it genuinely outperforms traditional deployment.

How Serverless Works

Serverless functions (AWS Lambda, Google Cloud Functions, Azure Functions) run your code in response to events — HTTP requests, queue messages, file uploads, or scheduled triggers. The cloud provider manages the server infrastructure, operating system, and runtime. You deploy code and pay only for the compute time consumed during function execution.
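The core contract can be sketched in a few lines. This follows the AWS Lambda Python runtime's handler signature; the event shape varies by trigger (HTTP request, queue message, file upload, timer), and the payload below is purely illustrative:

```python
# The core contract of a serverless function: the platform hands your
# handler one event per invocation and manages everything underneath.
# (Signature follows the AWS Lambda Python runtime; the "source" field
# here is an illustrative stand-in for a real event payload.)
def handler(event, context):
    return {"source": event.get("source", "unknown"), "ok": True}

# Locally it is just a function call; in the cloud, the platform calls it
# once per event and bills only for the time the call takes.
result = handler({"source": "scheduled-trigger"}, None)
```

Because the handler is a plain function, it can be unit-tested locally with no cloud infrastructure at all.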

When Serverless Excels

Variable Traffic with Idle Periods

Services that handle bursts of traffic with long idle periods benefit most from serverless. A webhook handler that processes 100 events during business hours and zero at night costs nothing when idle. An equivalent always-on server incurs the same cost 24/7 regardless of traffic.

Event Processing

Serverless is a natural fit for processing events from queues, S3 uploads, database changes, or IoT devices: each event triggers a function invocation that processes it independently. Image resizing, video transcoding, and log processing are classic serverless use cases.
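The independent-processing pattern looks like this in practice. A sketch of an SQS-style batch handler, where the record structure follows the Lambda SQS event format and the per-record work is a placeholder:

```python
import json

# Sketch of a queue-triggered processor: each invocation receives a
# batch of records and handles each one independently, so throughput
# scales by running more invocations in parallel.
def handler(event, context):
    processed = []
    for record in event.get("Records", []):
        payload = json.loads(record["body"])
        # Real work would go here: resize an image, transcode a video,
        # parse a log line. Here we just record the item's id.
        processed.append(payload["id"])
    return {"processed": processed}

batch = {"Records": [{"body": json.dumps({"id": 1})},
                     {"body": json.dumps({"id": 2})}]}
result = handler(batch, None)
```

Because each record is handled independently, a failed record can be retried without re-running the whole batch (with SQS, via per-message visibility timeouts).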

Scheduled Tasks

Cron-like tasks that run periodically — daily report generation, data cleanup, health checks. Serverless eliminates the need for a dedicated server or container just for scheduled jobs.
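A scheduled function separates the schedule from the work: the schedule lives in platform configuration (for example, an EventBridge rule such as `cron(0 2 * * ? *)` for 02:00 UTC daily), while the handler only does the job. The cleanup logic below is an illustrative stub:

```python
import datetime

# Sketch of a scheduled cleanup handler. The cron schedule is configured
# in the platform (e.g., an EventBridge rule), not in the code; the
# function body only performs the work when triggered.
RETENTION_DAYS = 30  # illustrative retention policy

def handler(event, context):
    cutoff = (datetime.datetime.now(datetime.timezone.utc)
              - datetime.timedelta(days=RETENTION_DAYS))
    # A real task would delete records older than `cutoff` here.
    return {"deleted_before": cutoff.isoformat(), "status": "ok"}

result = handler({}, None)
```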

API Backends for Low-Traffic Services

REST APIs that serve low-to-moderate traffic (under 1000 requests per minute) can be cost-effective as serverless functions, especially when combined with API Gateway for routing and authentication.
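A small API backend often reduces to a single function that dispatches on method and path. This sketch assumes an API Gateway proxy-style event (`httpMethod`, `path`); the route table and handlers are illustrative:

```python
import json

# Minimal router for an API Gateway proxy-style event. In this pattern,
# one function serves the whole API and API Gateway handles TLS, routing
# to the function, and (optionally) authentication in front of it.
ROUTES = {
    ("GET", "/health"): lambda event: {"status": "ok"},
    ("GET", "/items"):  lambda event: {"items": []},
}

def handler(event, context):
    route = ROUTES.get((event["httpMethod"], event["path"]))
    if route is None:
        return {"statusCode": 404,
                "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(route(event))}

resp = handler({"httpMethod": "GET", "path": "/health"}, None)
```

At low traffic this whole API costs cents per month; the trade-off is that every request may pay the cold-start penalty discussed below.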

When Serverless Falls Short

Cold Starts

When a function has not been invoked recently, the cloud provider must provision a new execution environment. This cold start adds 100ms to several seconds of latency, depending on the runtime and function size. For latency-sensitive applications, cold starts can be unacceptable; mitigations such as provisioned concurrency exist, but they reintroduce an always-on cost that undercuts the pay-per-use model.

Long-Running Processes

Serverless functions have execution time limits (15 minutes for Lambda). Long-running processes — data migrations, ML model training, video encoding — cannot run as serverless functions without breaking them into smaller steps.
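The usual workaround is checkpointing: each invocation processes one chunk and returns a cursor, and an orchestrator (a queue, Step Functions, or a self-invoking chain) resumes from that cursor. A sketch with illustrative names, where the local `while` loop stands in for the orchestrator:

```python
# Fitting a long job inside an execution-time limit: process one chunk
# per invocation and return a cursor so the next invocation resumes
# where this one stopped. CHUNK_SIZE and the job shape are illustrative.
CHUNK_SIZE = 100

def handler(event, context):
    offset = event.get("offset", 0)
    total = event["total"]
    end = min(offset + CHUNK_SIZE, total)
    # Process records [offset, end) here, within the time limit.
    return {"offset": end, "total": total, "done": end >= total}

# Locally, a loop plays the role of the orchestrator chaining invocations.
state = {"total": 250}
while not state.get("done"):
    state = handler(state, None)
```

The key design constraint is that each step must be idempotent, since orchestrators retry failed steps.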

High-Traffic Services

At sustained high traffic (thousands of requests per second), serverless becomes significantly more expensive than reserved instances or containers. The per-invocation pricing that makes low traffic cheap makes high traffic expensive.

WebSocket Connections

Serverless functions are stateless and short-lived, making persistent WebSocket connections impractical. AWS API Gateway does support WebSockets, but the programming model differs substantially from a traditional WebSocket server: connection state must be stored externally, and each message is handled by a separate function invocation.

Cost Comparison

Serverless is cheaper at low utilization and more expensive at high utilization. The crossover point depends on your traffic pattern, but generally: if your service runs at over 20-30% utilization consistently, containers or VMs are cheaper. If utilization is below 20% with significant idle periods, serverless is cheaper.
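The crossover is easy to estimate on the back of an envelope. The prices below are illustrative placeholders, not current cloud list prices (check your provider's pricing page), but the shape of the comparison holds:

```python
# Back-of-envelope cost crossover. All prices are assumed placeholders.
PRICE_PER_GB_SECOND = 0.0000167    # serverless compute, per GB-second
PRICE_PER_MILLION_REQUESTS = 0.20  # serverless per-request fee
SERVER_MONTHLY = 30.0              # always-on server, ~1 GB RAM

def serverless_monthly_cost(requests_per_month, avg_ms, memory_gb=1.0):
    """Compute + request cost for a month of invocations."""
    gb_seconds = requests_per_month * (avg_ms / 1000) * memory_gb
    return (gb_seconds * PRICE_PER_GB_SECOND
            + requests_per_month / 1_000_000 * PRICE_PER_MILLION_REQUESTS)

# Low traffic: 100k requests/month at 100 ms each -> well under $1.
low = serverless_monthly_cost(100_000, 100)
# High traffic: 100M requests/month at 100 ms each -> far above the server.
high = serverless_monthly_cost(100_000_000, 100)
```

With these placeholder prices, the low-traffic service costs pennies while the high-traffic one costs several times the always-on server, which is the crossover behavior described above.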

Conclusion

Serverless is a tool, not a religion. Use it for event processing, scheduled tasks, and variable-traffic services where the pay-per-use model and automatic scaling provide clear benefits. Use containers or VMs for high-traffic services, latency-sensitive applications, and long-running processes. Many production systems use both — serverless for appropriate workloads alongside traditional deployment for the rest.

Designing your cloud architecture? Our team helps you choose the right deployment model for each workload.