I've recently been building a blogging platform, writizzy.com, with the ambition of offering a European alternative to US platforms like Medium, Substack, Hashnode, and others.
Now you might be thinking, "Aren't blogs kind of dead since YouTube, TikTok, Instagram came along?" I don't think so, but I'll get back to that.
However, blogs do have one major weakness compared to all those platforms: discoverability. What makes these platforms successful is... their content recommendation algorithms.
Yes, I know—that's also what we criticize them for. But without algorithms, nobody would ever discover Mike's video about his passion for ant-keeping. And that might be a shame.
The thing is, recommendation comes with platform responsibility for suggested content, which means moderation. And so far, no platform has nailed this. Between YouTube's puritanical overzealousness, X's normalization of conspiracy theories, and Shein selling questionable products, it's clear this problem is far from solved.
So what do we do? Is it doomed to fail?
In this post, I'll cover the health of blogs, why discoverability matters, different approaches to discovery, content recommendation platforms, and moderation. Fair warning: I don't have all the answers—I'm actively working through this myself. But that's exactly why I'd love to discuss it.
Blogs Are Doing Just Fine
Surprise: blogs are actually thriving. According to OptinMonster, there are over 600 million blogs worldwide. More than 409 million people read over 20 billion pages monthly on wordpress.com alone, with WordPress still powering 40% of all websites. Another source claims that 83% of internet users (4.4 billion people) regularly read blog posts.
So blogging is very much alive, though there's definitely been a shift toward video consumption.
Today, 82% of global internet traffic is video, and many people now turn to TikTok, Instagram, and similar platforms for quick answers—whether it's recipes, DIY tutorials, or even tech topics. That said, written content has clear advantages: it's easier to create and update. I do both—videos and blog posts. Writing is obviously much faster than producing a video, plus I can update an article after publishing, which I can't do with a video. That's a significant advantage for people who don't have the energy to film themselves, or simply don't want to show their face.
Yes, video will keep dominating entertainment and "brain off" moments, but blogs will remain more accessible and better suited for specialized, easily-updated content.
However—and this is where video platforms win—the big difference is content recommendation. And that's what makes the blog model fragile.
Discoverability: Just Vanity Metrics?
Discoverability is a broad topic, because the first question is: does it even matter when you're writing a blog?
For some, the answer is clearly yes—people who've built media outlets, paid newsletters, and monetize their traffic. Examples: Pragmatic Engineer, Ali Abdaal.
At the other end of the spectrum, it's purely personal—a journal where whatever happens, happens. Discoverability might even be actively discouraged, like n.survol.fr, which publishes no sitemap and makes exploring the site intentionally difficult.
And between these extremes lies a whole spectrum: people working on personal branding (so 2010s), others using blogs for influence, weekend hobbyist bloggers, etc.
I'm somewhere in that gray zone. I don't monetize my blog, which I've been running since 2001, but I'll admit I'd find it a bit sad if nobody read it. So I pay attention. Even though I use it as a kind of personal digital memory, my original motivation back in the early 2000s was sharing tutorials and experiences. And if absolutely nobody reads them, I'd probably put in less effort. That's precisely why I added YouTube videos a year ago—to scale up.
I understand this isn't everyone's goal, but personally, I'm trying to have an impact—at my own scale—on understanding tech topics and their influence on business and society. I'm not selling anything; I'm trying to contribute something.
Here's the thing: when I post a YouTube video, I average several thousand views, with my personal best at 35K so far. (For example: as I write this, I published a video this morning and it already has 3.6K views in under 7 hours.)
On the blog, views range from 50 to 12,000, with most posts hovering around 1K. And that's for an established blog with decent domain authority and reasonable SEO—these aren't even bad numbers.
Again, maybe some people are totally against tracking these "vanity metrics," but I won't pretend I'm not competitive, and I'd bet some blogs would be more active if they were more widely read.
When content isn't read, it's not necessarily because it's bad—it's because it's hard to find. Without active promotion, nobody reads a blog post. SEO for individual bloggers is a nearly lost battle against platforms and professional sites with better authority or marketing budgets.
That's why 90% of bloggers use social media to promote their posts.
And naturally, while building Writizzy, I'm wondering: what if we could do better? What if we could enable content discovery through a community of readers?
Algorithms: The Platform Solution
This is exactly where platforms come in. Instagram, YouTube, TikTok, X—they all work the same way. They create personalized content feeds, trying to maximize time spent on the platform by continuously serving content tailored to each user.
The idea is to leverage content produced by all users to determine the right audience for any new video.
When I publish on YouTube, I make zero promotional effort—YouTube handles it. It identifies the right audience, shows them the video, measures reactions (clicks, watch time, likes, comments, etc.), validates the audience, and repeats.
That's what we call an algorithm.
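To make that loop concrete, here it is in its most naive possible form. This is my caricature, not YouTube's actual system: the audience segments, sample sizes, and threshold are all invented for illustration.

```python
import random

def simulate_reaction(affinity: float) -> bool:
    """Stand-in for a real measurement: did this viewer click, watch, like?"""
    return random.random() < affinity

def find_audience(segments: dict[str, float], rounds: int = 3, threshold: float = 0.1) -> list[str]:
    """Test the content on each candidate segment, drop the ones that don't
    react, and widen the sample for the survivors on the next round."""
    candidates = list(segments)
    sample_size = 100
    for _ in range(rounds):
        candidates = [
            name for name in candidates
            if sum(simulate_reaction(segments[name]) for _ in range(sample_size)) / sample_size >= threshold
        ]
        sample_size *= 10  # "validated" segments get shown the content more widely
    return candidates

print(find_audience({"ant-keeping fans": 0.4, "cooking fans": 0.02, "tech watchers": 0.15}))
```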
But these algorithms have plenty of flaws. They can be optimized to favor negative engagement (like X, which amplifies controversial content), or even favor certain political viewpoints (X again, implicated in several recent election interference cases). They're also criticized for creating filter bubbles that trap us in belief patterns—though I'd argue our own confirmation biases already do that on their own. They can also lock us into infinite scrolling, rewarded by small dopamine hits, while we passively accept ads scattered throughout.
Yes, algorithms have a bad reputation, but in practice, they're the main reason we stay on these platforms. They let us discover available content, and for creators, they're what prevents total anonymity.
Since I'm convinced discoverability matters for blogs, one question emerges: how do we give control back to users?
Taking Back Control
Before going further, I should note that not all algorithms are opaque and complex. There are "lightweight" alternatives, like Hacker News, which ranks purely by vote count and freshness. Bearblog uses the same approach, displaying its formula at the bottom of the page:
# This page is ranked according to the following algorithm:
Score = log10(U) + (S / (B * 86,400))
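Translated into code, with my own reading of the variables (U as the upvote count, S as the post's publication time in seconds since the Unix epoch, B as how many days of freshness are worth one order of magnitude of upvotes; the snippet above doesn't define them, so treat these as assumptions), it's essentially a one-liner:

```python
import math
import time

def score(upvotes: int, published_at: float, b_days: float = 1.0) -> float:
    # Assumed meanings: U = upvotes, S = publication timestamp in seconds,
    # B = days of freshness worth one order of magnitude of upvotes.
    return math.log10(max(upvotes, 1)) + published_at / (b_days * 86_400)

now = time.time()
# True: with B = 1, three days of freshness outweighs 100x the upvotes.
print(score(5, now) > score(500, now - 3 * 86_400))
```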
Other solutions offer simple chronological sorting. That's Mastodon's approach—posts sorted purely by date. I find this too minimalist.
What really interests me is whether there's a virtuous approach. How do we preserve recommendation quality without imposing unilateral choices? This is exactly what this blog post addresses, making an important observation:
"You can pay money and advertise to women of color between 40–60 in Seattle, but you can't choose to read perspectives from those women."
The post highlights a solution, an MIT research project: Gobo (unfortunately inactive as I write this), which lets you aggregate data from platforms and apply your own filters. Similar projects include Youchoose and Tournesol, all sharing the same goal: empowering users.
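What appeals to me in the Gobo idea is how little machinery the core needs: aggregate posts from wherever, then let each reader compose their own filters. Here's a toy version of the idea (the fields and filters are made up for the example, not anything these projects actually ship):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Post:
    source: str   # "rss", "mastodon", "bluesky"...
    author: str
    text: str
    likes: int

Filter = Callable[[Post], bool]   # a predicate the reader picks or writes

def my_feed(posts: list[Post], filters: list[Filter]) -> list[Post]:
    """Keep only the posts that pass every filter the reader selected."""
    return [p for p in posts if all(f(p) for f in filters)]

no_viral_bait: Filter = lambda p: p.likes < 10_000
no_crypto: Filter = lambda p: "crypto" not in p.text.lower()

posts = [
    Post("rss", "alice", "A long essay on ant-keeping", likes=12),
    Post("bluesky", "bob", "Crypto will fix everything", likes=50_000),
]
print(my_feed(posts, [no_viral_bait, no_crypto]))  # only alice's post survives
```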
While these initiatives remain niche, the most polished and widely-used implementation is probably Bluesky, which lets anyone choose algorithms that can be created by other users.
If I were to build a content recommendation system for Writizzy, this would be the approach that appeals to me most.
However, when I discussed this with Thomas (who also works on Writizzy), it became clear that offering a content feed quickly raises two other problems:
- How do you start with only 140 users, of which maybe 10-15% are truly active? This is called the Cold Start problem.
- Moderation.
I'll skip the first topic—it's not what this post is about. But the second is serious.
Moderation
The moment you create a page aggregating content from multiple users, the risk of inappropriate content exists. It could be adult content, spam, scams, racist abuse, etc.
Writizzy already has responsibilities under the European DSA (Digital Services Act). I must provide a reporting mechanism and act on reports of illegal content. Note: as a host, I'm not required to proactively monitor—only to respond to reports. As long as blogs remain separate, it's still manageable. Impact is limited to the individual's blog.
But once a feed exists, the impact multiplies—that's the whole point—and so does the pressure on moderation. And it's far from simple, because you have to judge whether content is illegal, and that judgment isn't always clear-cut. What's acceptable varies.
Where's the line between satire and insult? Between political criticism and defamation? How do you detect and handle fake news? What's the line between pornographic nudity and art (Courbet's L'Origine du monde, for instance)?
I've tried to catalog different moderation methods to see what might work for Writizzy:
Manual Moderation
Two main categories here: manual moderation by one or more super-admins, and mass outsourced moderation. Small-scale manual moderation is what you see on Mastodon, with the obvious bias of moderator subjectivity and resulting conflicts between instances with different political positions. I'm not infallible; I don't want to be responsible for moderating all published content. And it won't scale.
Then there's mass moderation, often outsourced to low-wage countries by major platforms like Meta, TikTok, or OpenAI. It's far from pleasant work, and abuses have been documented extensively. It's unsuitable for Writizzy—economically and ethically.
Community Moderation
This relies mainly on user reports. X's Community Notes fall into this category, as do Wikipedia's discussion threads. It's obviously the cheapest approach, but it can be gamed if groups coordinate to censor content. This system can be improved with reputation points awarded by the community. That's Reddit and Hacker News Karma, or Stack Overflow reputation scores. This mechanism could make sense for Writizzy. However, community moderation is reactive—meaning the damage is done; content has already been exposed before being flagged.
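To make the karma idea concrete, here is roughly what reputation-weighted flagging could look like. The weights and threshold are invented; nothing here is how Reddit or Stack Overflow actually compute things.

```python
HIDE_THRESHOLD = 5.0   # invented value: total weighted reports needed to hide a post

def report_weight(reporter_karma: int) -> float:
    """A brand-new account counts for little; an established one counts more, capped."""
    return min(3.0, 0.5 + reporter_karma / 100)

def should_hide(reporter_karmas: list[int]) -> bool:
    """True once the combined weight of the reports crosses the threshold."""
    return sum(report_weight(k) for k in reporter_karmas) >= HIDE_THRESHOLD

print(should_hide([0, 0, 5]))         # False: three fresh accounts aren't enough
print(should_hide([250, 400, 120]))   # True: three trusted readers flagged it
```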
Automated Moderation
This can involve keyword detection, user profiling (new accounts, posting patterns, etc.), or nudity detection algorithms for images. It's the easiest method to implement. You can adjust tolerance levels. This is where AI could shine for understanding text, but it's far from foolproof—people use word substitutions, altered spelling, emojis representing concepts, or algorithms simply perform worse in certain languages.
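For a sense of what "easiest to implement, adjustable tolerance" means in practice, here is the crudest possible version. The blocklist, weights, and threshold are placeholders, and this is exactly the kind of filter that word substitutions and altered spelling defeat.

```python
import re

BLOCKLIST = {"free money": 3.0, "miracle cure": 2.0}   # placeholder phrases and weights

def suspicion(text: str, account_age_days: int, posts_last_hour: int) -> float:
    """Keyword hits plus account heuristics produce a suspicion score."""
    score = 0.0
    lowered = text.lower()
    for phrase, weight in BLOCKLIST.items():
        score += weight * len(re.findall(re.escape(phrase), lowered))
    if account_age_days < 2:
        score += 1.5       # brand-new account
    if posts_last_hour > 10:
        score += 2.0       # posting in bursts
    return score

TOLERANCE = 3.0            # the adjustable knob: lower means stricter
print(suspicion("Get free money now!!!", account_age_days=0, posts_last_hour=20) > TOLERANCE)  # True
```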
My conclusion from this mini-study is fairly obvious: you'd need automated detection upfront, then a reporting system enhanced by Karma scores downstream, and finally manual super-admin intervention as a last resort.
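Chained together, the decision layer could look something like this. The inputs are the outputs of the two sketches above, and every threshold is illustrative, not a real Writizzy rule.

```python
from enum import Enum

class Decision(Enum):
    PUBLISH = "publish"
    HIDE = "hide"
    ESCALATE = "escalate_to_admin"

def moderate(automated_score: float, weighted_reports: float) -> Decision:
    """Automated pre-check first, community reports second, humans last."""
    if automated_score > 5.0:       # blatant, machine-detectable abuse
        return Decision.HIDE
    if automated_score > 3.0:       # borderline: a human should look at it
        return Decision.ESCALATE
    if weighted_reports >= 5.0:     # the community flagged it after publication
        return Decision.ESCALATE
    return Decision.PUBLISH

print(moderate(automated_score=1.0, weighted_reports=0.0))   # Decision.PUBLISH
print(moderate(automated_score=1.0, weighted_reports=7.7))   # Decision.ESCALATE
```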
Yet these systems exist and platforms are still criticized, because all moderation is imperfect. There's always the central question of interpretation: what's legal or not? And that interpretation varies by country and culture.
This brings us to another approach—Bluesky's again—where one of the fundamental principles is decentralization, including moderation via labelers.
Decentralized Moderation
Bluesky's moderation has two levels:
- Base moderation, handled by Bluesky itself. This first layer uses Ozone, an open-source moderation tool. It's configurable so users can set their own tolerance levels. This step catches universally unacceptable content (child abuse, incitement to violence, etc.) for which the platform can be held legally responsible.
- Optional moderation provided by independent labelers. These labelers can be built with an SDK and offered to users, who can subscribe to the ones that match the moderation they expect.
Once again, it comes back to the same idea: giving users control. (Even if 90% will probably keep the default settings.)
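Stripped of the AT Protocol specifics (which I won't pretend to reproduce here), the labeler model boils down to this: services attach labels to content, and each user decides what each label means for them. Something like:

```python
from enum import Enum

class Action(Enum):
    SHOW = "show"
    WARN = "warn"    # show behind a click-through warning
    HIDE = "hide"

def resolve(labels: set[str], policy: dict[str, Action]) -> Action:
    """Apply the user's own per-label policy; unknown labels default to SHOW."""
    actions = {policy.get(label, Action.SHOW) for label in labels}
    if Action.HIDE in actions:
        return Action.HIDE
    if Action.WARN in actions:
        return Action.WARN
    return Action.SHOW

# Labels attached by the labelers this user subscribes to (illustrative values).
labels_on_post = {"nudity", "spoiler"}
my_policy = {"nudity": Action.WARN, "gore": Action.HIDE, "spoiler": Action.HIDE}
print(resolve(labels_on_post, my_policy))   # Action.HIDE: the spoiler rule wins
```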
Alternatives for Discoverability
What's certain is that this topic is complex, and I completely understand Thomas's reluctance to venture into it. So we discussed other approaches. Rather than aggregating content, we could start with content curation. It's much simpler—we could manually select content to highlight each week or month.
We could also consider smaller-scale recommendations:
- Related content suggestions at the bottom of articles (see the sketch below)
- Similar newsletter suggestions when users subscribe
Or we could focus on automating cross-posting: RSS and newsletters today, automatic distribution to ATProto, Bluesky, Nostr tomorrow?
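Of these, the related-content suggestion needs the least machinery: plain word overlap between posts already goes a long way. A minimal sketch follows; a real version would use TF-IDF or embeddings, and the sample posts are invented.

```python
import re
from collections import Counter

def tokens(text: str) -> Counter:
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (sum(v * v for v in a.values()) ** 0.5) * (sum(v * v for v in b.values()) ** 0.5)
    return dot / norm if norm else 0.0

def related(current: str, others: dict[str, str], k: int = 3) -> list[str]:
    """Rank other posts by bag-of-words similarity to the current one."""
    cur = tokens(current)
    return sorted(others, key=lambda title: cosine(cur, tokens(others[title])), reverse=True)[:k]

posts = {
    "Moderation on small platforms": "reports, labels and moderation for a small community",
    "My sourdough recipe": "flour, water, salt and a lot of patience",
}
print(related("why moderation and community reports are hard", posts, k=1))
```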
There are other possibilities, and no decisions have been made yet.
What would you do? If discoverability matters to you, what would be your ideal solution? Do you use Medium's or dev.to's recommendations? Are you already cross-posting to federated networks?