I was recently putting together a payment processing flow. The kind of form where you collect user info, maybe some additional details from a third party, process a deposit via Authorize.Net, send confirmation emails. Nothing exotic.
I started with the obviously modern choice: AWS Lambda, API Gateway, DynamoDB. Serverless. Pay-per-invocation. No servers to patch. The architecture diagrams looked beautiful. I had reCAPTCHA v3 for bot protection, Accept.js tokenizing cards client-side so we’d never touch PAN data, SES for transactional emails.
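For flavor, here's roughly the shape of the write path I was building. This is a from-memory sketch, not the production code: the table name and key shapes are illustrative, and the actual Authorize.Net charge call is elided.

```ruby
# Sketch of the serverless write path: API Gateway proxy event in,
# DynamoDB put_item out. Names here are illustrative.
require "json"
require "time"
require "aws-sdk-dynamodb"

DYNAMO = Aws::DynamoDB::Client.new

def handler(event:, context:)
  payload = JSON.parse(event["body"])

  # Accept.js tokenized the card in the browser, so only the opaque
  # nonce reaches this function -- never PAN data.
  # (Authorize.Net charge call elided.)
  DYNAMO.put_item(
    table_name: ENV.fetch("PAYMENTS_TABLE"),
    item: {
      "pk" => "USER##{payload['email']}",
      "sk" => "PAYMENT##{Time.now.utc.iso8601}",
      "amount" => payload["amount"],
      "opaque_data" => payload["opaque_data"]
    }
  )

  { statusCode: 202, body: JSON.generate(status: "accepted") }
end
```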
I was deep into DynamoDB’s single-table design — partition keys, sort keys, the whole pk = "USER#jane@example.com" AND sk BEGINS_WITH "PAYMENT#" song and dance — when one of my engineers asked: “Why aren’t we just using Rails and Postgres? That’s what we know.”
Full stop.
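For context, here's what that song and dance looks like in code, a sketch against the same hypothetical table:

```ruby
# All of one user's payments, via a key condition on pk and sk.
require "aws-sdk-dynamodb"

client = Aws::DynamoDB::Client.new
resp = client.query(
  table_name: ENV.fetch("PAYMENTS_TABLE"),
  key_condition_expression: "pk = :pk AND begins_with(sk, :prefix)",
  expression_attribute_values: {
    ":pk"     => "USER#jane@example.com",
    ":prefix" => "PAYMENT#"
  }
)
resp.items.each { |item| puts item["sk"] }
```

Every question you want to ask of the data needs a key design like this, decided up front. In Postgres it's a WHERE clause you write whenever you think of it.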
Hear me out. Lambda and DynamoDB excel in the right context — a truly isolated microservice with unpredictable traffic, a team that lives in AWS. But I’d just spent hours building infrastructure that my team would need to learn — CloudWatch, SAM templates, NoSQL access patterns, a deployment model completely foreign to our muscle memory — all to process maybe a few dozen transactions a day.
Classic engineering manager brain: chase the shiny new architecture first, think about your team’s actual capabilities later.
So I made a comparison. Not the vibes-based “serverless is the future” kind, but an honest matrix of what matters when you’re actually shipping software:
Team Familiarity: Lambda was a hard no. Rails was home turf.
Operational Burden: Lambda wins here, technically. No servers. But “no servers” also means “no rails console when something goes sideways at 2am.”
Time to Add Features: Need an admin dashboard in Rails? Drop in rails_admin and you're done in ten minutes (see the sketch after this list). Need one in Lambda? You're building a whole separate React app, or wiring up some AWS console nonsense, or — let's be honest — manually querying DynamoDB from the CLI like some kind of animal.
Debugging: CloudWatch Logs vs a real stack trace in Honeybadger. No contest.
Cost at Low Volume: Lambda technically wins. Pennies vs a small EC2 instance or ECS task. But your team’s time isn’t free, and “figure out why this Lambda timed out” eats hours that “check the stack trace in Honeybadger” simply doesn’t.
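To make the admin-dashboard comparison concrete, the entire Rails side of it looks like this (assuming a stock Rails app; the /admin path is just a convention):

```ruby
# Gemfile
gem "rails_admin"

# config/routes.rb
Rails.application.routes.draw do
  mount RailsAdmin::Engine => "/admin", as: "rails_admin"
end
```

Run the install generator, restart, and you have CRUD screens over every model.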
Here’s what we’ve learned by 2026: serverless is a tool. A good one, in the right context. Not a universal upgrade. But the hype cycle that told every team to abandon their boring Rails apps for a constellation of Lambda functions did real damage to real codebases.
The serverless dream was that infrastructure would disappear. What actually happened is that infrastructure multiplied — it’s just invisible until something breaks, and then you’re debugging three services, two queues, and a step function at 3am with nothing but CloudWatch and prayer.
Meanwhile, the Rails app has a stack trace, a console, a debugger, and someone on your team who actually knows how it works.
A middle ground is emerging. Neon is serverless Postgres — scales to zero, branches like git, AES-256 encryption at rest, HIPAA compliant if you need it. You get the operational simplicity of serverless with the data model your team already understands. Pair it with a Rails app on ECS or even a simple EC2 instance and you’ve got something that looks modern without requiring a PhD in distributed systems to debug.
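Pointing Rails at Neon is the same as pointing it at any Postgres, which is the whole appeal. A minimal database.yml sketch, assuming the connection string Neon gives you lives in DATABASE_URL:

```yaml
# config/database.yml
production:
  adapter: postgresql
  # Neon hands you a standard Postgres URL; sslmode=require in that
  # URL is about the only Neon-specific detail.
  url: <%= ENV["DATABASE_URL"] %>
```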
The “just use Postgres” crowd won. Not because relational databases are inherently better, but because most applications are relational. Users have payments. Payments belong to users. That’s a foreign key, not an access pattern you need to carefully model in a partition key schema.
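In ActiveRecord that relationship is two declarations, and every query you haven't thought of yet comes along for free:

```ruby
class User < ApplicationRecord
  has_many :payments
end

class Payment < ApplicationRecord
  belongs_to :user
end

# The question you think of six months from now is a one-liner,
# not a new partition key design:
User.find_by(email: "jane@example.com").payments.where("created_at > ?", 30.days.ago)
```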
DynamoDB is incredible for what it’s designed for. But most CRUD apps aren’t that.
I rebuilt those architecture diagrams in a Rails context. The payment flow actually got simpler — fewer hops, fewer services, more of the logic living in one place where you can reason about it. The data model went from “here’s how you query a single-table design” to “here’s three tables with foreign keys, you know how this works.”
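And that whole data model fits in one migration. The table and column names here are my illustration, not the real schema:

```ruby
class CreatePaymentTables < ActiveRecord::Migration[7.1]
  def change
    create_table :users do |t|
      t.string :email, null: false, index: { unique: true }
      t.timestamps
    end

    create_table :payments do |t|
      t.references :user, null: false, foreign_key: true
      t.integer :amount_cents, null: false
      t.string :authorize_net_transaction_id
      t.timestamps
    end

    # The "additional details from a third party" from earlier.
    create_table :third_party_details do |t|
      t.references :user, null: false, foreign_key: true
      t.jsonb :payload
      t.timestamps
    end
  end
end
```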
The security model stayed basically the same. reCAPTCHA on the client, Rack::Attack for rate limiting, CSRF protection that Rails gives you for free, Accept.js so card data never hits your server, Postgres encryption at rest. PCI SAQ A-EP compliance either way.
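The Rack::Attack piece is one initializer; the path and limits below are placeholders, not our production values:

```ruby
# config/initializers/rack_attack.rb
# Throttle payment attempts per IP so a bot that slips past
# reCAPTCHA still can't hammer Authorize.Net.
class Rack::Attack
  throttle("payments/ip", limit: 5, period: 60) do |req|
    req.ip if req.post? && req.path == "/payments"
  end
end
```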
What changed was maintainability. The Lambda version required understanding AWS. The Rails version required understanding Rails. One of these the team already knew.
Learn new things. But adopt new infrastructure because you need it, not because it looks good on a diagram or because thought leaders are tweeting about it.
If you’re a Rails shop with a working deployment pipeline, good observability, and developers who can debug production issues quickly — that’s not technical debt. That’s organizational capital. Throwing it away to chase a new paradigm is a decision with real costs, and those costs don’t show up in your AWS bill.
Build something that works with the tools your team knows. You can always migrate later when you actually hit the scaling problems that justify the complexity.
But you probably won’t hit them. Most of us won’t. And that’s fine.