Postgres vs Aurora vs CockroachDB for Production SaaS in 2026
Three databases that all speak a Postgres-compatible wire protocol, three completely different operational models, three completely different bills. The decision is not actually about features. It is about one question: do you need multi-region writes? The rest follows from there.
The one question that decides everything
Strip the marketing away and the choice between these three databases is not about Postgres compatibility, ACID guarantees, or vector search. All three handle the table-stakes things. The question that actually matters is:
Do you need multi-region active-active writes?
If the answer is no, pick RDS Postgres or Aurora Postgres. If the answer is yes and you cannot tolerate even a few seconds of failover delay, CockroachDB exists for that exact reason and the others do not.
Most SaaS products in 2026 do not need multi-region writes. They need fast reads close to users (which a CDN and read replicas handle) and a single writer in their primary region. If you are in this category, save yourself the operational complexity and pick managed Postgres.
RDS Postgres: the safe default that gets every team to $50M ARR
RDS Postgres is plain PostgreSQL with AWS handling backups, patching, and failover. Same Postgres you would run on a VM, just managed. As of Postgres 16 and 17, it has every feature you actually need: logical replication, JSONB, full-text search, partitioning, parallel query, and a 60+ extension ecosystem (pg_stat_statements, pg_cron, pg_partman, TimescaleDB, pgvector, etc.).
Pricing for a typical 100K MAU SaaS:
- Primary instance: db.r6g.2xlarge (8 vCPU, 64 GB RAM) Multi-AZ = ~$1,200/month
- Read replica: db.r6g.xlarge = ~$300/month
- Storage: 500 GB gp3 = ~$60/month
- Backups: 500 GB at $0.095/GB/month = ~$50/month
- Total: ~$1,600/month
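The line items above can be sanity-checked with a few lines of arithmetic. Prices are the article's illustrative figures, not live AWS rates:

```python
def monthly_total(line_items: dict[str, float]) -> float:
    """Sum monthly line items, rounded to the nearest dollar."""
    return round(sum(line_items.values()))

# Illustrative figures from the estimate above, not a live quote.
rds = {
    "primary db.r6g.2xlarge Multi-AZ": 1200,
    "read replica db.r6g.xlarge": 300,
    "500 GB gp3 storage": 60,
    "500 GB backups @ $0.095/GB": 500 * 0.095,  # ~$48
}
print(monthly_total(rds))  # → 1608, i.e. the article's ~$1,600/month
```

Keeping the model in code makes it trivial to re-run when you resize the instance or when AWS changes a rate.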
What works:
- Boring. Predictable. Postgres is decades old and almost every problem has a Stack Overflow answer.
- Tooling ecosystem is massive: pgAdmin, DBeaver, dbt, Hasura, Supabase, every ORM ever written.
- You can hire engineers who already know Postgres without retraining them.
- Migration path away from RDS is real (just stand up Postgres on EC2 or move to another cloud).
What hurts:
- Failover takes 60 to 120 seconds. For most SaaS that is fine. For some it is not.
- Storage can only grow. Shrinking a volume means migrating to a new, smaller instance via pg_dump or logical replication.
- Read replicas have replication lag (typically 100ms to 5s) and cannot accept writes.
- Postgres major version upgrades require some planning. Patches are painless.
If you are starting a new SaaS in 2026 and you do not have a strong reason to pick something else, RDS Postgres is the right answer.
Aurora Postgres: same Postgres, completely different storage engine
Aurora is AWS's Postgres-compatible database where they replaced the storage layer with a distributed system that automatically handles replication across 6 copies in 3 availability zones. The wire protocol and SQL surface are identical to Postgres. The internals are not.
Pricing for the same workload:
- Primary writer: db.r6g.2xlarge = ~$1,400/month
- Reader replica: db.r6g.xlarge = ~$700/month
- Storage: pay-per-GB, $0.10/GB/month = ~$50 to $80/month
- I/O: $0.20 per million requests, can add up fast for write-heavy workloads = $200 to $1,500/month depending on traffic
- Backups: same as storage
- Total: ~$2,400 to $4,000/month
Aurora Serverless v2 changes the math: you pay for ACUs (Aurora Capacity Units, roughly 2 GB RAM + matching CPU) per second, scaling automatically. Good for spiky workloads, expensive for steady-state.
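A rough break-even sketch for that math, assuming the commonly cited ~$0.12 per ACU-hour us-east-1 rate (an assumption; check current regional pricing) and the provisioned writer cost from the estimate above:

```python
ACU_HOUR = 0.12    # assumed Serverless v2 rate, USD per ACU-hour
PROVISIONED = 1400 # db.r6g.2xlarge writer from the Aurora estimate above
HOURS = 730        # hours in an average month

def serverless_monthly(avg_acus: float) -> float:
    """Monthly Serverless v2 compute cost at a given average ACU load."""
    return avg_acus * ACU_HOUR * HOURS

# A steady 16 ACUs (~32 GB RAM) costs about the same as the provisioned writer:
print(round(serverless_monthly(16)))  # → 1402

# Average utilization at which the two options cost the same:
break_even_acus = PROVISIONED / (ACU_HOUR * HOURS)
print(round(break_even_acus, 1))      # → 16.0
```

The takeaway: Serverless v2 only saves money if your average load sits well below the instance you would otherwise provision, i.e. the workload genuinely idles.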
What Aurora gives you that RDS does not:
- Faster failover. 30 seconds typical, sometimes 10 seconds. Storage is shared, so the new writer just takes over.
- Up to 15 read replicas served off the shared storage volume, typically with sub-100ms lag, versus the seconds of streaming-replication lag you can see on RDS.
- Storage that auto-scales. No volume size to manage.
- Aurora Global Database. Read replicas in other regions with sub-second cross-region replication.
- Faster recovery from crashes. The storage layer applies redo continuously, so a restart does not wait on a long log replay.
What Aurora costs you beyond money:
- I/O charges. A write-heavy workload can pay $1,000+/month just in I/O on Aurora Standard that would be free on RDS gp3 storage. (The I/O-Optimized cluster configuration removes per-request charges in exchange for higher instance and storage prices.)
- Lock-in. Aurora is AWS-only. Migrating off requires logical replication or pg_dump.
- Some Postgres extensions are not supported. AWS publishes the list, but it changes.
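To size your exposure to the I/O line item before committing, estimate it from a sustained request rate. The $0.20-per-million rate is from the pricing sketch above; the request rates below are hypothetical:

```python
IO_PRICE_PER_MILLION = 0.20     # Aurora Standard I/O rate used above
SECONDS_PER_MONTH = 30 * 24 * 3600

def aurora_io_monthly(avg_io_per_sec: float) -> float:
    """Monthly I/O bill for a sustained average request rate."""
    requests = avg_io_per_sec * SECONDS_PER_MONTH
    return requests / 1_000_000 * IO_PRICE_PER_MILLION

# A sustained 500 I/O requests per second:
print(round(aurora_io_monthly(500)))   # → 259
# ~2,000 sustained requests/sec crosses the $1,000/month mark:
print(round(aurora_io_monthly(2000)))  # → 1037
```

Compare the result against the I/O-Optimized price premium for your instance class before picking a cluster configuration.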
Pick Aurora if: failover speed matters, you have predictable steady-state traffic, you are already deep in AWS, or you need Aurora Global for cross-region reads.
CockroachDB: distributed SQL designed for the specific problem managed Postgres cannot solve
CockroachDB is a distributed SQL database that runs across multiple regions with synchronous replication. Every node can accept writes. The system uses the Raft consensus protocol per data range to ensure consistency. It speaks the Postgres wire protocol but is not actually Postgres internally.
The problem CockroachDB exists to solve:
- You have customers in three continents.
- Each region needs to write to the database with low latency.
- You cannot tolerate a regional outage taking down writes globally.
- You also need strong consistency, not eventual consistency, because money is involved.
If that is not your problem, you should not be using CockroachDB.
Pricing for the same workload (CockroachDB Cloud Dedicated):
- 3-node cluster, 8 vCPU per node, multi-region: ~$3,000 to $5,000/month
- Storage: $0.50/GB/month = ~$250/month for 500 GB
- Egress: significant if cross-region traffic is heavy
- Total for single-region production: ~$3,500 to $6,000/month
- Total for 3-region: ~$10,000 to $20,000/month
What works:
- Multi-region active-active writes with strong consistency. Outside Spanner-class systems, nobody else does this without significant tradeoffs.
- Survives an entire region going offline without dropping writes.
- Horizontal scaling that actually works for writes, not just reads.
- Postgres-compatible wire protocol (mostly).
What hurts:
- Not actually Postgres. Some queries that work in Postgres fail in CockroachDB, RETURNING behaves differently in places, and some isolation-level subtleties differ.
- Cross-region writes are slow because of consensus. If you write to a row whose leader is in another region, that write takes 100ms+. Plan your data layout around this.
- Operational complexity is real even with managed CockroachDB Cloud. Tuning is non-trivial.
- Far smaller ecosystem than Postgres. Many tools need adapters or do not work.
- Training cost for your team. Most engineers have not used distributed SQL before.
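The cross-region consensus cost above can be sketched with a toy latency model: the client pays a hop to the leaseholder, and the leaseholder then waits for one more replica ack to reach a 2-of-3 Raft quorum. Region names and RTT figures are illustrative, and real latencies depend on leaseholder placement and replication settings:

```python
RTT_MS = {  # assumed round-trip times between regions, milliseconds
    ("us-east", "eu-west"): 80,
    ("eu-west", "ap-south"): 120,
    ("us-east", "ap-south"): 190,
}

def rtt(a: str, b: str) -> float:
    """Symmetric RTT lookup; zero within a region."""
    if a == b:
        return 0.0
    return RTT_MS.get((a, b), RTT_MS.get((b, a), 0.0))

def write_latency_ms(client: str, leaseholder: str, replicas: list[str]) -> float:
    """Client -> leaseholder hop, plus the leaseholder waiting for the
    nearest other replica: one ack completes a 2-of-3 quorum."""
    to_leader = rtt(client, leaseholder)
    acks = sorted(rtt(leaseholder, r) for r in replicas if r != leaseholder)
    quorum_wait = acks[0] if acks else 0.0
    return to_leader + quorum_wait

regions = ["us-east", "eu-west", "ap-south"]
# A US client writing to a row led from eu-west pays the ocean crossing twice:
print(write_latency_ms("us-east", "eu-west", regions))  # → 160 ms
```

This is why "plan your data layout around this" matters: pinning a row's leaseholder to the region that writes it most keeps the first term near zero.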
The decision matrix
| Use case | Pick |
|---|---|
| SaaS at <$50M ARR, single region, 99.9% SLA | RDS Postgres |
| SaaS where 60-second failover is unacceptable | Aurora Postgres |
| SaaS that needs cross-region read replicas | Aurora Global |
| SaaS that needs multi-region active-active writes | CockroachDB |
| SaaS where regulatory data residency requires regional sharding | CockroachDB (with careful config) or sharded Postgres |
| Small team, no DBA, wants to ignore the database | RDS Postgres |
| Heavy write workload with predictable traffic | RDS Postgres (Aurora I/O charges add up fast) |
| Spiky write workload with idle periods | Aurora Serverless v2 |
The migration trap
The most expensive mistake teams make is picking a database for a future scale they will not actually hit. Building on CockroachDB at 100 MAU because you "might need multi-region someday" is a six-figure operational tax for years before you hit the scale where it pays off.
The reverse mistake is also expensive. Picking RDS Postgres when you genuinely have global active-active needs forces you into application-level sharding, which is operationally painful and slow to evolve.
The right path is usually:
- Start on RDS Postgres. Get to product-market fit.
- If write traffic gets heavy or you need faster failover, migrate to Aurora Postgres. Migration is straightforward.
- If at $50M+ ARR you genuinely need multi-region writes, evaluate CockroachDB or Spanner. By then you have the engineering capacity to handle the complexity.
Operational red flags to watch for in any of these
Whichever you pick, monitor these things or you will get surprised:
- Long-running transactions. Vacuum stalls. Replication lag spikes. Indexes stop being used because the planner sees stale stats.
- Connection pool exhaustion. Postgres connections are expensive. Use PgBouncer or RDS Proxy. Set max_connections lower than you think you need.
- Failed vacuum on huge tables. Watch pg_stat_user_tables.n_dead_tup. If it grows without bound, you have an autovacuum problem.
- Slow queries that became slow only after the table grew. Review pg_stat_statements weekly. Anything in the top 10 by total time deserves a look.
- Replication slots that never get cleaned up. A dead consumer will fill your disk with WAL segments. pg_replication_slots is your friend.
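The dead-tuple check above maps directly to Postgres's default autovacuum trigger, which fires when n_dead_tup exceeds autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor × (estimated live rows). A sketch using n_live_tup as a stand-in for the planner's estimate:

```python
def autovacuum_due(n_live_tup: int, n_dead_tup: int,
                   threshold: int = 50, scale_factor: float = 0.2) -> bool:
    """Mirror Postgres's default trigger: autovacuum vacuums a table when
    n_dead_tup > autovacuum_vacuum_threshold
               + autovacuum_vacuum_scale_factor * live rows."""
    return n_dead_tup > threshold + scale_factor * n_live_tup

# On a 100M-row table the default 20% scale factor means autovacuum waits
# for ~20M dead tuples before acting:
print(autovacuum_due(100_000_000, 5_000_000))   # → False: 5M dead rows, no vacuum yet
print(autovacuum_due(100_000_000, 25_000_000))  # → True
```

This is why big tables often need a per-table `autovacuum_vacuum_scale_factor` far below the default: waiting for 20% bloat on a huge table means a long, disruptive vacuum when it finally runs.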
Test your queries before you ship them
Use our free SQL Formatter to clean up complex queries before review, our JSON Formatter to parse JSONB output, and our Regex Tester for migration patterns.
The bottom line
The choice between RDS Postgres, Aurora Postgres, and CockroachDB is not actually about features. It is about whether you need multi-region writes (CockroachDB), faster failover and storage scaling (Aurora), or boring predictable Postgres that any engineer knows (RDS).
Start with RDS unless you have a clear reason not to. Move to Aurora when failover speed or storage scaling becomes painful. Move to CockroachDB only when single-region writes genuinely cannot meet your latency or availability requirements, and accept that you are signing up for a different operational model.
Related reading: SQL Index Optimization, Database Normalization Guide, SQL Joins Explained, NoSQL vs SQL Comparison, and AWS Security Checklist for Production.