# D1 Database: SQLite at the Edge with Cloudflare
A year ago, most developers I know in Spain followed the same pattern: Vercel for the frontend, a centralized database (typically PostgreSQL on some European server), and hope that latency wouldn't be a problem.
Then Cloudflare D1 arrived.
## The Problem Nobody Wanted to Admit
Traditional architecture has a silent flaw: centralization kills speed. Every database query travels from the edge (where your user is) to a central server. In Spain, if your database is in Madrid and your user is in Barcelona, it's fast. But what if your user is in São Paulo?
Developers know it. Business people don't understand it. And customers just see that the application is slow.
Cloudflare D1 solves this in a way that seemed impossible a few years ago: SQLite distributed at the edge.
## What D1 Really Is
D1 is SQLite running on Cloudflare's edge servers. It's not a NoSQL database. It's not some weird abstraction. It's SQLite, the same engine that powers millions of mobile applications.
The magic is that:
- **Generous free tier**: You get access to a SQLite database without paying initially
- **Read replicas at no cost**: You can replicate your data across multiple edge regions without extra charges
- **Minimal latency**: Your code executes queries milliseconds away from the user
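Here's what that looks like in practice, as a minimal sketch. The binding name (`DB`) and the `users` table are assumptions for illustration; in a real project the binding is declared in `wrangler.toml` and Cloudflare injects it into `env`:

```javascript
// Minimal sketch: querying D1 from a Worker.
// `db` is the D1 binding Cloudflare injects into `env`.
export async function getActiveUsers(db) {
  const { results } = await db
    .prepare('SELECT id, email FROM users WHERE active = ?')
    .bind(1)
    .all();
  return results;
}

export default {
  async fetch(request, env) {
    // env.DB is a hypothetical binding name declared in wrangler.toml
    const users = await getActiveUsers(env.DB);
    return Response.json(users);
  },
};
```

That's the whole API surface for a read: prepare, bind, all. No connection pool, no driver, no server to keep alive.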
## Why I Changed My Thinking
Six months ago, I was building a SaaS with traditional patterns: a centralized PostgreSQL database, Supabase as middleware, and Vercel on the frontend.
It worked. But every time I scaled to new markets, latency increased.
Then I tested D1 with a small experiment: I migrated one user table to SQLite at the edge.
The result was so obvious I was surprised I hadn't seen it before: queries went from 200-300ms to 10-50ms.
It's not magic. It's physics. Distance is the enemy of speed.
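You can sanity-check this with a back-of-envelope calculation. Light in fiber travels at roughly 200,000 km/s (about two thirds of c), and the distances below are rough great-circle figures, not real route lengths, so these are optimistic lower bounds:

```javascript
// Minimum round-trip time imposed by distance alone.
// ~200,000 km/s in fiber = 200 km per millisecond.
const FIBER_SPEED_KM_PER_MS = 200;

function minRoundTripMs(distanceKm) {
  return (2 * distanceKm) / FIBER_SPEED_KM_PER_MS;
}

console.log(minRoundTripMs(500));  // Madrid -> Barcelona (~500 km): 5 ms
console.log(minRoundTripMs(8000)); // Madrid -> Sao Paulo (~8,000 km): 80 ms
```

And that's before queuing, routing, TLS handshakes, and the query itself. Distance sets the floor; everything else only adds to it.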
## The Per-Tenant Pattern That Finally Makes Sense
Now here's where D1 gets interesting for real applications.
In a traditional multi-tenant architecture, all your customers share the same database. It's resource-efficient, but has problems:
- One slow customer can affect everyone else
- Data is centralized, vulnerable to privacy issues
- Scalability is a bottleneck
With D1, you can do something different: one SQLite per tenant.
Yes, it sounds crazy. But look at the code:
```javascript
// Cloudflare Worker
export default {
  async fetch(request, env) {
    // The raw fetch handler has no request.params; derive the
    // tenant from the URL, e.g. /tenants/:tenantId/users
    const url = new URL(request.url);
    const tenantId = url.pathname.split('/')[2];

    // Each tenant has its own D1 database, bound in wrangler.toml
    // (here as TENANT_<id>) and exposed as a property on env
    const db = env[`TENANT_${tenantId}`];

    const result = await db
      .prepare('SELECT * FROM users WHERE active = ?')
      .bind(true)
      .all();

    return Response.json(result.results);
  },
};
```
Each tenant gets:
- **Data isolation**: Their data doesn't mix with others
- **Predictable performance**: No resource competition
- **Simple scalability**: Adding a new tenant just means creating a new SQLite database
- **Regulatory compliance**: Easier to comply with GDPR when data is separated
In Spain, where GDPR is law, this is especially relevant. Your customers can demand their data be completely separate, and D1 makes it possible without complexity.
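One thing the pattern needs in practice: since the tenant id comes from the URL, you shouldn't use it to index into `env` unchecked. A hypothetical helper (the `TENANT_` prefix is an assumption about how the bindings are named in `wrangler.toml`):

```javascript
// Validates the tenant id before building a binding name, so a
// crafted URL can't reach an arbitrary property on env.
function resolveTenantDb(env, tenantId) {
  if (!/^[a-z0-9_]{1,32}$/.test(tenantId)) {
    throw new Error(`Invalid tenant id: ${tenantId}`);
  }
  const db = env[`TENANT_${tenantId}`];
  if (!db) {
    throw new Error(`No database bound for tenant: ${tenantId}`);
  }
  return db;
}
```

Two lines of validation, and the "one database per tenant" lookup is safe to expose on a public route.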
## The Reality: When to Use D1
It's not a universal solution. D1 shines when:
- **You need global low latency**: Users distributed across multiple continents
- **Your data is small to medium**: SQLite isn't a massive database
- **Read patterns dominate**: D1 is excellent for reads, writes are slower
- **You want operational simplicity**: No server management
Where NOT to use D1:
- **Massive data**: If your table has billions of records, it's not for you
- **Write-intensive workloads**: If you need thousands of writes per second, PostgreSQL is better
- **Complex transactions**: SQLite has concurrency limitations on writes
- **Real-time analytics**: Not optimized for heavy analytical queries
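On the transactions point, it's worth knowing what D1 gives you instead: there are no interactive `BEGIN`/`COMMIT` transactions, but `db.batch()` runs a list of prepared statements together and commits them atomically. A sketch with hypothetical table and column names:

```javascript
// D1's substitute for a multi-statement transaction: the batched
// statements execute together and either all commit or all roll back.
export async function transferCredits(db, fromId, toId, amount) {
  return db.batch([
    db.prepare('UPDATE accounts SET credits = credits - ? WHERE id = ?')
      .bind(amount, fromId),
    db.prepare('UPDATE accounts SET credits = credits + ? WHERE id = ?')
      .bind(amount, toId),
  ]);
}
```

For many CRUD workloads this is enough; if your logic needs to read inside a transaction and branch on the result, that's when D1 stops being the right tool.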
## Real Code
Here's a practical example I used in production:
```javascript
// Validate user access with D1
export async function validateUserAccess(request, env, tenantId) {
  // Per-tenant binding, declared in wrangler.toml as TENANT_<id>
  const db = env[`TENANT_${tenantId}`];
  const userId = new URL(request.url).searchParams.get('user_id');

  const user = await db
    .prepare('SELECT id, role, active FROM users WHERE id = ?')
    .bind(userId)
    .first();

  if (!user || !user.active) {
    return new Response('Unauthorized', { status: 401 });
  }

  return user;
}

// Fan a write out to per-region tenant databases
export async function syncToReplicas(env, tenantId, data) {
  const regions = ['us', 'eu', 'apac'];

  await Promise.all(
    regions.map((region) =>
      env[`TENANT_${tenantId}_${region}`]
        .prepare('INSERT OR REPLACE INTO data VALUES (?, ?)')
        .bind(data.id, JSON.stringify(data))
        .run()
    )
  );
}
```
This is what blew my mind: for reads, you often don't even need that fan-out. D1 can replicate data to edge locations for you, so the manual pattern above is only worth it when you want explicit control over exactly which regions hold a tenant's data.
## The Mental Shift
D1 forced me to think differently about architecture:
**Before**: Centralize everything, optimize for one database, accept latency.

**Now**: Distribute data to where it's used, think in per-tenant patterns, optimize for proximity.
It's the same shift that happened with CDNs a decade ago. At first it seemed unnecessary. Now it's standard.
D1 is at that point now.
## The Uncomfortable Question
Why isn't everyone using D1?
Because it's relatively new. Because many developers don't know SQLite well. Because inertia is strong: "PostgreSQL works, why change?"
But if you're building something new, especially a SaaS with global users, ignoring D1 is ignoring free performance.
## Takeaway
Cloudflare D1 isn't a silver bullet. But for applications where latency is critical and data is distributed, it's probably the best decision you can make today.
Best part: the free tier is generous. Try it. Migrate one table. Measure the difference.
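"Measure the difference" can be as simple as timing the same query against both backends. A minimal sketch, where `runQuery` stands in for whatever client you're comparing (a D1 binding, a `pg` pool, anything that returns a promise):

```javascript
// Time a query N times and report the median, which is less
// noisy than the mean for latency numbers.
async function measureMs(runQuery, iterations = 20) {
  const samples = [];
  for (let i = 0; i < iterations; i++) {
    const start = Date.now();
    await runQuery();
    samples.push(Date.now() - start);
  }
  samples.sort((a, b) => a - b);
  return samples[Math.floor(samples.length / 2)];
}
```

Run it once against your centralized database and once against the migrated D1 table, from the same Worker, and you have your answer in numbers instead of vibes.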
Physics doesn't lie. Less distance = more speed.
---
Are you using D1? Or still on centralized PostgreSQL? Tell me in what context and why. Real patterns interest me more than theories.
