Imagine we’re building a simple e-commerce site, “SimpleStore.” The initial planning meeting identifies our needs:
- A database for users, products, and orders. (Easy: Postgres)
- A way to send confirmation emails when an order is placed. (Okay, add a RabbitMQ job queue).
- A cache for the homepage’s “Top 10 Products.” (Fine, add Redis).
- A full-text search bar. (Ugh. Add Elasticsearch).
- A nightly job to aggregate sales reports. (Spin up a Cron server).
- A new AI feature to find “similar” products. (The VCs will love this! Add Pinecone).
Before writing a single feature, our architecture diagram is a mess. We have six different systems to provision, monitor, secure, and scale. We have what the team at Supabase calls “dotted line complexity”—the invisible, brittle connections between these systems that will inevitably break in production.
This is the case for collapsing your stack. Not by returning to a monolith, but by realizing that the “boring” tool you started with, PostgreSQL, can replace almost all of it.
Let’s rebuild “SimpleStore” and see how.
A Cohesive Example: Building “SimpleStore” with Just Postgres
Instead of a sprawl, we’ll build each feature by extending Postgres. The magic isn’t just in replacing tools; it’s in how the connections between them become simple and atomic.
1. The Core: The Order and the Email (The ACID Test)
This is the most critical link. In a sprawled stack, you’d have this dreaded code:
# The "Dotted Line" Failure Point
try:
order = db.create_order(...) # Step 1: Postgres COMMIT
queue.send_email_job(...) # Step 2: RabbitMQ PUSH
except:
# What if Step 2 fails? The user paid but gets no email.
# What if Step 1 fails? The code is a mess.
handle_complex_rollback()
With Postgres, this is one atomic unit. We’ll use the FOR UPDATE SKIP LOCKED pattern to create a powerful job queue inside the database.
Step 1: Create the tables
CREATE TABLE orders (
id SERIAL PRIMARY KEY,
product_id INT,
user_id INT,
created_at TIMESTAMPTZ DEFAULT NOW()
);
CREATE TABLE jobs (
id SERIAL PRIMARY KEY,
queue TEXT DEFAULT 'default',
payload JSONB,
status TEXT DEFAULT 'queued', -- queued, running, failed
run_at TIMESTAMPTZ DEFAULT NOW()
);
Step 2: The Magic (One Transaction)
Now, our application logic becomes beautifully simple:
BEGIN;

-- Insert the order and use the returned ID to create a job
-- IN THE SAME TRANSACTION, via a data-modifying CTE.
WITH new_order AS (
    INSERT INTO orders (product_id, user_id)
    VALUES (123, 456)
    RETURNING id
)
INSERT INTO jobs (queue, payload)
SELECT 'emails',
       jsonb_build_object('type', 'order_confirmation', 'order_id', id)
FROM new_order;

COMMIT;
This entire block either succeeds or fails together. It is impossible to create an order without its corresponding email job. We have just achieved perfect data integrity, something that is incredibly difficult with separate systems.
A Go or Python worker can now pull from this queue with a simple, highly concurrent query:
SELECT id, payload FROM jobs WHERE status = 'queued' ORDER BY run_at LIMIT 1 FOR UPDATE SKIP LOCKED;
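Here’s a minimal sketch of such a worker in Python (assuming the psycopg 3 driver and a hypothetical send_order_confirmation() helper). Because the claim and the cleanup happen in one transaction, a worker that crashes mid-job simply releases its lock and the job returns to the queue:

import time
import psycopg  # assumption: the psycopg 3 driver; any Postgres client works similarly

CLAIM_SQL = """
    SELECT id, payload FROM jobs
    WHERE status = 'queued'        -- add "AND run_at <= NOW()" if you schedule jobs ahead
    ORDER BY run_at
    LIMIT 1
    FOR UPDATE SKIP LOCKED
"""

def run_worker(dsn: str) -> None:
    with psycopg.connect(dsn) as conn:
        while True:
            with conn.transaction():                  # BEGIN ... COMMIT around claim + work
                row = conn.execute(CLAIM_SQL).fetchone()
                if row is None:
                    time.sleep(1)                     # queue is empty; poll again
                    continue
                job_id, payload = row
                send_order_confirmation(payload)      # hypothetical email-sending helper
                conn.execute("DELETE FROM jobs WHERE id = %s", (job_id,))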
| Feature | PostgreSQL (The “Boring” Way) | Dedicated Stack (RabbitMQ / Kafka) |
|---|---|---|
| Data Integrity | Perfect (Atomic). The job and the order are in one transaction. | Poor (Eventual). Requires complex two-phase commits or retry logic. |
| Complexity | Low. It’s just another table in your schema. | High. A separate, complex system to manage, monitor, and scale. |
| Throughput | Moderate. Excellent for most apps, but not Kafka-scale. | Extremely High. Built for massive event streaming. |
| Verdict | Wins for 90% of apps. The trade-off for lower peak throughput is massive gains in simplicity and reliability. | |
2. The Homepage: Caching Top Products (Replacing Redis)
Our homepage needs to show the Top 10 products. This query is slow, so we need a cache. Instead of adding Redis, we’ll use an UNLOGGED TABLE.
CREATE UNLOGGED TABLE cache_top_products (
id INT PRIMARY KEY,
name TEXT,
sales_count BIGINT,
cached_at TIMESTAMPTZ
);
An UNLOGGED table does not write to the Write-Ahead Log (WAL). This makes writes incredibly fast. The catch? If the server crashes, the table is automatically truncated.
This sounds just like the durability model of Redis! It’s the perfect, high-speed, non-durable store for transient data.
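Refreshing the cache is just another query. A minimal sketch, assuming psycopg and that sales counts come from the orders table above (run it from a worker, a timer, or pg_cron, which we’ll meet in a moment):

import psycopg  # assumption: psycopg 3; products is assumed to have id and name columns

REFRESH_SQL = """
    INSERT INTO cache_top_products (id, name, sales_count, cached_at)
    SELECT p.id, p.name, COUNT(*) AS sales_count, NOW()
    FROM orders o
    JOIN products p ON p.id = o.product_id
    GROUP BY p.id, p.name
    ORDER BY sales_count DESC
    LIMIT 10
"""

def refresh_top_products(dsn: str) -> None:
    with psycopg.connect(dsn) as conn:
        with conn.transaction():                  # wipe and rebuild atomically
            conn.execute("TRUNCATE cache_top_products")
            conn.execute(REFRESH_SQL)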
| Feature | PostgreSQL (UNLOGGED TABLE) | Dedicated Stack (Redis) |
|---|---|---|
| Speed | Very Fast. Not as fast as in-memory, but avoids network I/O. | Extremely Fast. In-memory, sub-millisecond latency. |
| Durability | None (by design). Wiped on crash. | None (by design). Wiped on crash (unless persistence is on). |
| Simplicity | High. It’s just a SQL table. No new clients, ports, or auth. | Low. A separate service to manage, secure, and connect to. |
| Verdict | Wins for most caching. You trade a few microseconds of latency for a huge reduction in stack complexity. | |
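One practical consequence of the crash-truncation behavior: the application should treat an empty cache as a miss and rebuild it. A minimal read path, reusing the refresh_top_products() helper sketched above:

import psycopg  # assumption: psycopg 3, plus refresh_top_products() from the previous sketch

def top_products(dsn: str) -> list[tuple]:
    query = ("SELECT id, name, sales_count FROM cache_top_products "
             "ORDER BY sales_count DESC")
    with psycopg.connect(dsn) as conn:
        rows = conn.execute(query).fetchall()
        if not rows:
            # Cold cache (fresh deploy, or the UNLOGGED table was truncated by a
            # crash): rebuild it once, then read again.
            refresh_top_products(dsn)
            rows = conn.execute(query).fetchall()
        return rows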
3. The Search Bar & Reports (Replacing Elasticsearch & Cron)
We can continue this pattern for our other features:
- Search Bar: Instead of Elasticsearch, we use Postgres’s built-in Full-Text Search.
-- Add a tsvector column for product descriptions, plus a GIN index to make it searchable
ALTER TABLE products ADD COLUMN search_vector tsvector;
CREATE INDEX products_search_idx ON products USING GIN (search_vector);
-- Backfill existing rows, then keep the column updated with a trigger
UPDATE products SET search_vector = to_tsvector('english', description);
CREATE TRIGGER products_search_vector_update
BEFORE INSERT OR UPDATE ON products FOR EACH ROW
EXECUTE FUNCTION tsvector_update_trigger(search_vector, 'pg_catalog.english', description);
-- Search is now a simple, fast, indexed query
SELECT * FROM products WHERE search_vector @@ to_tsquery('english', 'shiny & leather');
- Nightly Reports: Instead of a cron server (a single point of failure), we use the pg_cron extension.
-- Run a job every night at 3 AM
SELECT cron.schedule('nightly-sales-report', '0 3 * * *',
$$ CALL generate_sales_report(); $$
);
The best part? If you have a High-Availability (HA) Postgres setup, your cron job is *also* HA. It's no longer a fragile script on one server.
The All-Important Caveat: The “Good Enough” Trap
This approach is pragmatic, not dogmatic. The “Postgres for Everything” mindset doesn’t mean never using another tool. It means you add a new tool only when Postgres is no longer “good enough.”
The “Boring Technology” choice wins when it’s 80% as good as the “Best” tool, but 10x simpler to operate.
A perfect, modern example is pgvector vs. a Dedicated Vector DB (like Pinecone or Milvus).
- When pgvector is “Good Enough”: You have 50,000 product vectors for your “similar items” feature. pgvector will handle this beautifully. You can JOIN user data with vector data in one query, and the simplicity is a massive win. (A minimal sketch of such a query follows this list.)
- When pgvector Breaks Down: You are building the next ChatGPT and need to query 100 million vectors with 99.9% recall and 20ms latency. pgvector will fail. It wasn’t built for this. Its HNSW index isn’t as optimized, and its query planner wasn’t designed for vector-first workloads. At this scale, the “dotted line” to a dedicated, purpose-built Vector DB is not a liability; it is a necessity.
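To make the “good enough” case concrete, here’s a minimal sketch of the similar-items lookup. It assumes the pgvector extension is installed, an embedding column added with something like ALTER TABLE products ADD COLUMN embedding vector(384), and the psycopg driver:

import psycopg  # assumption: psycopg 3; pgvector needs no special client-side support here

SIMILAR_SQL = """
    SELECT p.id, p.name
    FROM products p
    ORDER BY p.embedding <-> %s::vector   -- L2 distance to the query embedding
    LIMIT 5
"""

def similar_products(dsn: str, query_embedding: list[float]) -> list[tuple]:
    # pgvector accepts a '[0.1,0.2,...]' text literal cast to ::vector,
    # so the embedding can be passed as a plain string parameter.
    vec = "[" + ",".join(str(x) for x in query_embedding) + "]"
    with psycopg.connect(dsn) as conn:
        return conn.execute(SIMILAR_SQL, (vec,)).fetchall()

Because the vectors live next to the rest of the schema, this same query can JOIN against users or orders to filter or personalize results in a single round trip.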
Conclusion: Your Job Is to Fight Complexity
We collapsed our “SimpleStore” stack from six complex services into one robust database.
The benefits are not just theoretical; they are immediate:
- Atomic Integrity: You gain ACID guarantees across your entire workflow (e.g., Orders + Jobs).
- Reduced Cognitive Load: A new developer doesn’t need to learn six systems. They just need to know SQL.
- Lower Operational Cost: You monitor, back up, and secure one thing.
- Faster Development: You can stop writing “glue code” for the dotted lines and start building features.
“PostgreSQL for Everything” isn’t a silver bullet. It’s a maxim against premature optimization and over-engineering. It’s a reminder that your primary job isn’t just to add technology, but to cull complexity. And Postgres is the best tool for that job.