
Hugo
Actually, the title of this post should have been:
How to use Cloudflare D1 and Cloudflare KV with NuxtHub to deploy a drinking game on Cloudflare Workers?
But it was a bit long. And for many people, the title would have made no sense.
Well, at the same time, the current title doesn't make much more sense. But maybe a drinking game will attract some readers.
This is where I explain why I created this application. You can go straight to the technical part, but if, like me, you like to understand what the technical part is for, you can stay for a few more lines :)
It's often said that the two most difficult problems in computer science are cache invalidation and naming things.
But, if I may say so, I'd say this quote is due for an update.
In 2024, finding a free domain name is a nightmare. Everything is taken, even names that make no sense at all.
That's not to say that everything has already been created: many of these domain names aren't even in use. That's also, and above all, the fault of domain squatters.
In short, it's hell, because so many people have decided that the best way to make money on the Internet is to buy domain names at random, hoping to resell them later.
There's not much I can do about it, so I might as well laugh. So I turned it into a drinking game.
Please note that, obviously, I don't encourage the consumption of alcoholic beverages, so you can drink anything instead. I've heard that kumis or chicha are good candidates...
If I'm going to make a quick application, I might as well learn something along the way. So I decided to use some of Cloudflare's experimental features, in particular D1.
On top of that, I chose to use NuxtHub, which provides an abstraction layer for manipulating D1 and Cloudflare's key value store from within a Nuxt application.
Traditionally, a user browsing a website triggers API calls to a server. That server can sometimes be far away from the user, and in that case, requests take a long time.
This is called latency.
sequenceDiagram
    participant User
    participant Website
    participant API
    User->>Website: Request for website (fast)
    User->>API: API call (slow)
Here the call to the website is fast, but the API call and response are slow, because the API is far from the user.
If you're browsing a site from Japan, but every piece of information displayed on the page has to be fetched from a database in France, that's quite a few kilometers to cover.
Each round trip between France and Japan takes an estimated 200 to 300 ms. If there are a lot of API calls, this quickly adds up and creates a less-than-pleasant browsing experience.

Well, you could say that 300 ms is still acceptable, and that's more or less true. But I haven't even counted processing time and rendering time, not to mention that the network might have problems, or that the web application might be a bit sloppy, with lots of sequential calls. As a result, the experience can be terribly slow.
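To see how sequential calls make things worse, here is a small simulation. The 250 ms delay stands in for the France–Japan round trip mentioned above; the endpoint names and the `apiCall` helper are made up for the illustration:

```typescript
// Simulated round trip of ~250 ms between the user and a faraway API (assumption for illustration).
const LATENCY_MS = 250

function apiCall(name: string): Promise<string> {
  return new Promise(resolve => setTimeout(() => resolve(name), LATENCY_MS))
}

// Three sequential calls: the latencies add up (~750 ms in total).
async function sequential(): Promise<number> {
  const start = Date.now()
  await apiCall('profile')
  await apiCall('settings')
  await apiCall('notifications')
  return Date.now() - start
}

// The same three calls in parallel: roughly one round trip (~250 ms in total).
async function parallel(): Promise<number> {
  const start = Date.now()
  const times = await Promise.all([apiCall('profile'), apiCall('settings'), apiCall('notifications')])
  console.log(`completed ${times.length} calls in parallel`)
  return Date.now() - start
}

async function main() {
  console.log(`sequential: ${await sequential()} ms, parallel: ${await parallel()} ms`)
}
main()
```

Parallelizing helps, but even one round trip is still 250 ms if the server is on the other side of the planet, which is the problem the Edge tries to solve.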
That's why we've been using CDNs for static resources for a very long time: images, JavaScript files, fonts, etc.
A CDN is a content delivery network.
It's a set of servers distributed around the world, each containing a copy of the resource to be distributed.
So if the user viewing an image is in Japan, the CDN will serve it from a server in Japan to optimize loading.

But when it comes to application servers, things are more complex.
An application server performs processing. For a very long time, this type of server lived in a single place, France for example, and the user in Japan had to wait for the data to come back from France.
Following the same principle as CDNs, some companies, such as Cloudflare, have begun to offer the option of moving processing and data close to the user, to the Edge.
This is called Edge computing.
The principle is far from new. In the data industry, we've been talking about Data Locality for several years now.
Dnsdrink is a Nuxt application deployed on the Edge.
It uses Cloudflare, and more precisely Cloudflare Workers.
Workers are "servers", but not in the traditional sense.
A Worker executes code on the point of presence closest to the end user, to avoid latency. That's what Edge computing is all about.
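A Worker boils down to a fetch handler. Here is a minimal sketch (the signature is simplified: the real Cloudflare Workers runtime also passes `env` and `ctx`, and expects the handler to be the module's default export):

```typescript
// Minimal sketch of a Worker-style handler. In a real Worker this object would be
// the module's default export; it is kept as a plain constant here so it can be
// invoked directly for demonstration.
const worker = {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url)
    // This code runs at the point of presence closest to the user who made the request.
    return new Response(`Hello from the Edge, path: ${url.pathname}`)
  },
}

// Local usage example: invoke the handler directly with a standard Request.
worker.fetch(new Request('https://dnsdrink.example/check')).then(async (res) => {
  console.log(await res.text())
})
```

Because the handler only uses the standard `Request`/`Response` web APIs, the same code can be exercised locally before being deployed to the Edge.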
Okay, but what about databases?
This is where two Cloudflare features come in: D1, a distributed SQL database, and KV, a key-value store.
The main concern when distributing a database on the Edge is data consistency.
For reads, this is easy: data can be replicated anywhere in the world, and Edge servers can serve local data. Well, "easy" is an understatement; we're happy to delegate it to Cloudflare.
For reads, a dnsdrink user in Japan will have the same latency as a user in France for those database accesses.
Writes, on the other hand, are "eventually consistent": the data is replicated worldwide, but not instantaneously. It's a compromise between consistency and performance.
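A toy model makes the trade-off concrete. This is only an illustration of eventual consistency in general, not how D1 actually replicates; the 100 ms replication delay is an arbitrary assumption:

```typescript
// Toy model of eventual consistency: a write hits the primary immediately and
// reaches a read replica only after a replication delay.
class ReplicatedStore {
  private primary = new Map<string, string>()
  private replica = new Map<string, string>()

  constructor(private replicationDelayMs: number) {}

  write(key: string, value: string): void {
    this.primary.set(key, value)
    // The replica catches up later: until then, readers may see stale data.
    setTimeout(() => this.replica.set(key, value), this.replicationDelayMs)
  }

  readFromReplica(key: string): string | undefined {
    return this.replica.get(key)
  }
}

const store = new ReplicatedStore(100)
store.write('dnsdrink.com', 'taken')
console.log(store.readFromReplica('dnsdrink.com')) // undefined: not replicated yet
setTimeout(() => {
  console.log(store.readFromReplica('dnsdrink.com')) // 'taken' once replication has caught up
}, 200)
```

The window where the replica answers with stale data is the price paid for keeping reads local and fast.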
If you want to understand the exact mechanism, I invite you to read their blog post on the subject.
It's not magic, though.
NuxtHub is two things: a Nuxt module that exposes Cloudflare's primitives (database, key-value store, blob storage, cache) as composables, and a platform for deploying and administering your application on Cloudflare.
And it's all these features that I wanted to use with dnsdrink.
The application itself isn't very complex. You can view the source code on Github.
Let's take a look at the Nuxt configuration:
export default defineNuxtConfig({
  modules: ['@nuxthub/core', '@nuxtjs/tailwindcss', '@nuxt/image'],
  hub: {
    database: true,
    kv: true,
    blob: true,
    cache: true,
  },
  nitro: {
    scheduledTasks: {
      '* * * * *': ['schema'],
      '0 0 1 * *': ['purge'],
    },
    experimental: {
      tasks: true,
    },
  },
})
Note the use of the @nuxthub/core module, the features enabled under hub, and the Nitro tasks.
Nitro tasks are not specific to NuxtHub and, to be honest, I haven't managed to get them to work on Cloudflare; they only work on my local machine. Basically, they let you schedule tasks to run at regular intervals, such as database cleanup.
I didn't dig too deeply into this because it's not particularly important to me. But here's what a task looks like:
export default defineTask({
  meta: {
    name: 'purge',
    description: 'Purge old quota entries from previous month',
  },
  async run({ payload, context }) {
    const db = hubDatabase()
    await db.prepare('DELETE FROM quota WHERE strftime(\'%Y-%m\', created_at) < strftime(\'%Y-%m\', \'now\')').run()
    return { result: 'Success' }
  },
})
In the file index.post.ts, you can see some example calls.
async function checkQuotaExceeded() {
  const db = hubDatabase()
  // check if the number of requests exceeds the quota for the current month
  const quota = 9500
  const count = await db.prepare('SELECT COUNT(*) as count FROM quota WHERE strftime(\'%Y-%m\', created_at) = strftime(\'%Y-%m\', \'now\')').first('count') as number
  if (count >= quota) {
    consola.warn('Quota exceeded')
    return true
  }
  return false
}
...
// check if the result is already in cache
const responseFromCache = await hubKV().get(domain)
if (responseFromCache) {
  return responseFromCache
}
...
// store the result in cache
await hubKV().set(domain, summary, { expirationTtl: 60 * 60 * 24 * 30 }) // 30 days
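These two snippets form a classic cache-aside pattern. Here is a self-contained sketch of the same logic, with an in-memory `MemoryKV` class standing in for `hubKV()` (in production, Cloudflare KV enforces the TTL itself), and a hypothetical `lookupDomain` playing the role of the expensive work:

```typescript
// Cache-aside sketch. `MemoryKV` is a stand-in for hubKV(): the real store is
// Cloudflare KV, where TTLs are enforced by the platform rather than checked on read.
type Entry = { value: string, expiresAt: number }

class MemoryKV {
  private store = new Map<string, Entry>()

  async get(key: string): Promise<string | null> {
    const entry = this.store.get(key)
    if (!entry || entry.expiresAt < Date.now()) return null // missing or expired
    return entry.value
  }

  async set(key: string, value: string, opts: { expirationTtl: number }): Promise<void> {
    this.store.set(key, { value, expiresAt: Date.now() + opts.expirationTtl * 1000 })
  }
}

const kv = new MemoryKV()

// Hypothetical expensive operation (in dnsdrink, the actual domain checks).
async function lookupDomain(domain: string): Promise<string> {
  return `summary for ${domain}`
}

async function cachedLookup(domain: string): Promise<string> {
  const cached = await kv.get(domain)
  if (cached) return cached // cache hit: no expensive work
  const summary = await lookupDomain(domain) // cache miss: compute...
  await kv.set(domain, summary, { expirationTtl: 60 * 60 * 24 * 30 }) // ...and keep for 30 days
  return summary
}
```

The first call for a domain pays the full cost; every call within the next 30 days is served from the key-value store, close to the user.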
The dev experience is quite pleasant: locally, you get a SQLite database and a mini key-value store, so everything runs smoothly.
Deployment on Cloudflare is equally straightforward.
I wasn't able to get the tasks to work, but I suppose with a bit of searching I could have.
For full-stack applications on Cloudflare, it's a pretty good option. It's certainly less polished in terms of features and administration than Supabase, but it's still very clean and could become a serious contender in the future (?).
Personally, I'm not going to abandon my preferred stack with Kotlin, because I'm still more productive with it. But keep in mind that this architecture is very inexpensive: dnsdrink costs me 0. So it's a serious option, and depending on my needs, I'll reuse this stack without hesitation. And maybe one day my stack will be able to run on the Edge?