The Myth of Neutrality: Why Tech is Political
Technology is supposed to be neutral. It's only how we use it that determines whether it becomes beneficial or harmful to society.
By extension, the people who create technology should be neutral too, and remain so: they shouldn't step outside the technical sphere to discuss politics, for example.
That's an instruction I've heard recently.
Essentially, software engineers should stay in the basement discussing performance and video games, while the adults talk upstairs.
To be honest, I believed this statement in the past. Today, it bothers me, and I'd like to discuss it.
Because tech is anything but apolitical.
TIP
In this text, "tech" refers to the software industry as a whole. Much of what I'm saying could probably apply to all technologies, but I prefer to focus on what I know best.
Technology is about making choices
First, let's be clear: politics isn't just about speaking at a podium in parliament or handing out leaflets in the street.
Politics is about all the decisions we make every day to try to live with others.
At an individual level, this translates into our actions: who we vote for, what we consume (and therefore who we finance), but also how we divide tasks at work and at home, or how we interact in shared spaces.
In short, it's about the choices we make.
And creating technologies involves making many choices.
Who is it for? What problem does it solve? What compromises were made?
Software design choices are therefore "political" in the sense that they affect the people who make up society.
For example, did you know that in 2020, IBM, Amazon, and Microsoft suspended or restricted police use of their facial recognition systems in the United States, after researchers showed that the technology had markedly higher error rates on Black faces, errors that have led to people being wrongfully arrested?
Have you heard about TikTok's Glow Look beauty filter, designed to "enhance" people's appearance, but which had the unfortunate tendency to lighten skin tones and make eyes appear less hooded?
Choices were made that had consequences for people, such as unjust arrests in the first example.
You might object that this is mere carelessness: proof that tech isn't perfect and has bugs, but nothing that really contradicts the idea that tech is neutral.
That's debatable. How was the training data selected? Was it a budget constraint? Were the right control measures put in place?
There are many questions we could ask, but indeed, in these specific cases, it's open to discussion.
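To make the training-data question concrete, here is a minimal, entirely hypothetical sketch. The groups, feature values, and proportions are invented for illustration: a toy one-feature classifier is "trained" on a dataset where group A makes up 95% of the samples and group B's positives look different from group A's. The learned threshold fits the majority group, and the minority group ends up with a far higher error rate, even though no line of the code contains a "bug."

```python
import random

random.seed(0)

# Hypothetical data: positives in group A cluster around 2.0,
# positives in group B around 1.0 (a shift the model barely sees
# during training), negatives around 0.0 for both groups.
def sample(group, n):
    out = []
    for _ in range(n):
        label = random.randint(0, 1)
        center = (2.0 if group == "A" else 1.0) if label else 0.0
        out.append((random.gauss(center, 0.4), label))
    return out

train = sample("A", 950) + sample("B", 50)  # 95% / 5% split

# "Train" the simplest possible classifier: pick the threshold
# (among observed feature values) that minimizes training error.
candidates = [x for x, _ in train]
threshold = min(
    candidates,
    key=lambda t: sum((x >= t) != bool(y) for x, y in train),
)

def error_rate(data):
    """Fraction of samples the threshold classifier gets wrong."""
    return sum((x >= threshold) != bool(y) for x, y in data) / len(data)

test_a, test_b = sample("A", 1000), sample("B", 1000)
print(f"error on group A: {error_rate(test_a):.1%}")
print(f"error on group B: {error_rate(test_b):.1%}")
```

The gap comes purely from which data was collected and in what proportions, which is exactly why "how was the training data selected?" is a political question and not only an engineering one.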
However, we can also generalize, because these examples are just the tip of the iceberg. Let's talk about the digital divide. When we design an application or online service, we make a choice, consciously or not, about who will be able to access it and who will be excluded.
Think about government websites that become exclusively digital: a person without equipment or computer skills is effectively excluded from an essential service. Is this just a "bug" or a political choice to favor certain users at the expense of others? When we decide not to invest in digital support for vulnerable populations, that's also a political choice.
Similarly, when a company chooses to create an app that requires the latest $1,000 smartphone rather than a lightweight version accessible on more modest devices, this technical decision directly impacts who can or cannot use the service. Compatibility constraints, data consumption, interface complexity—all these seemingly technical aspects are actually choices that determine who is included and who is excluded.
The digital divide isn't just an unfortunate consequence of progress; it's also the result of decisions made by designers.
I hope we can at least agree that the consequences of technology can be societal, and therefore political.
But more than that, the problems I cite aren't just a matter of bugs; there is often intentionality in the choices that are made. To the point where sometimes, the link between technology and politics is much clearer.
Tech as a political and economic weapon
Here's a response suggested by Grok, the AI of X (formerly Twitter), below a message from Volodymyr Zelenskyy on X.
This response echoes the rhetoric of X's owner, Elon Musk, against Zelenskyy, asking what territory Ukraine is willing to cede for peace.
We've already seen opinion-manipulation operations on social media: campaigns that use bots at scale to spread false information or to create the impression that a particular opinion is the majority view.
But this time, it's a bit different: the social network's owner is modifying his own tool, turning it into a media outlet with an editorial line that helps him disseminate his message.
X is no longer just used as a weapon by various groups.
X becomes a weapon for its owner's use.
An owner whose connection to recent manipulation campaigns supporting the AfD, Germany's far-right party, has already been documented by Arte. An owner who, in any case, makes no secret of publicly financing and supporting that party.
TikTok is not innocent either, also flagged for a suspiciously high proportion of pro-AfD content during elections.
Very recently, Russia was called out for operations aimed at seeding propaganda into major AI assistants like ChatGPT, Perplexity, and Claude, reportedly by flooding the web with content that the models' training and retrieval pipelines pick up.
Russia is also involved in manipulations using the Meta/Facebook network.
In short, social networks are a battlefield, and Europe is perhaps a bit too passive on the subject. No discussion seems to be open about a ban or at least an obligation for more active moderation.
A discussion that is happening in the US regarding the ban of TikTok.
A discussion that happened in Brazil and led to the ban of X for several weeks.
But for many, social media is harmless, or at most, it's a game we have to play, where we're all supposedly on equal footing—a kind of modern agora.
Yes, except that entry into this agora is regulated, some have the right to use loudspeakers, and the walls are covered with slogans and advertisements.
And yet examples of technology used as a weapon are legion. I could have talked about Stuxnet, the American-Israeli virus that delayed Iran's nuclear program, or simply the Allied codebreaking effort at Bletchley Park during World War II, which broke the German Enigma cipher in order to read secret military messages.
Hard to talk about the neutrality of technology here.
Technology that has, in any case, long been financed by the military. The first computers and the Internet were born as military projects: ENIAC was built to compute artillery firing tables, and ARPANET was funded by the US Department of Defense.
And since we're talking about funding, here again, it's easy to show that tech is not neutral.
Who decides?
One might think that innovation is born by chance from an apple falling from a tree or a bathtub overflowing.
It's a bit less glamorous than that.
To develop technologies, we need funding, and this funding follows a logic that is either capitalistic or political.
Capitalistic, because investment funds follow "investment theses" that steer money toward certain sectors rather than others, the ones deemed most promising in terms of future returns. There's a fashion effect in which kinds of companies get backed; it isn't just free innovation. Some inventions that could benefit humanity will never make it past the idea stage for lack of funding. Conversely, put "AI" in your project description today, and you'll get funded.
But political too, because a state can decide to invest in specific areas. When Emmanuel Macron talks about the envelope of 800 billion euros envisaged to rearm Europe, chances are that this money includes a budget for research, particularly in AI.
When Shenzhen became the Chinese Silicon Valley in the space of about 40 years, it was not the result of chance or the "invisible hand of the market." It was state investment.
So we find the same "trends" in the funding of public research. A research project will be funded more or less easily depending on whether it follows certain major themes.
So here again, saying that tech is neutral is completely forgetting how it's financed.
Tech that is consciously political
What if I told you that the link between politics and tech is historical anyway?
If we follow the book "Tech: How Silicon Valley Remakes the World", startup culture, open source, transhumanism, techno-solutionism, and libertarianism all appear as indirect outgrowths of the counterculture of the 1960s and 70s. All these movements aim to change our relationship with the world.
Tech, through this prism, has always been political in the sense that it has always had ambitions to change society.
And now?
Technology is not neutral: not in its funding, which selects what gets built and what doesn't; not in its impacts on society; not in its demonstrated use as a weapon. And not least because it never aspired to be neutral: historically, it has always been designed to "change the world", an expression that is painful to hear when repeated for every trivial innovation, but historically accurate nonetheless.
And that's what bothers me today when someone tries to impose neutrality on me as a "technician." I am not neutral. My actions, the actions of my industry, have consequences for society.
You are not neutral either. Neutrality is already a choice. And society is built by the choices made by each of us.
For example, being on a social network financed by a far-right supporter is a choice. Buying a consumer good or watching a television channel whose profits finance the far right is a choice.
We all vote for the world we want with our purchasing power or our attention time.
At a time when much of American tech is lining up behind Trump and Musk, and some French tech entrepreneurs continue to admire Musk, perhaps it's time to realize that the injunction to neutrality, imposed on those who think tech shouldn't belong only to libertarians like Musk, Thiel, or DHH, often comes from discreet supporters who aren't too bothered by the situation.
And as for me, working in tech, my role isn't just to know which technologies to use, in what context, or how they compare.
My role, and that of everyone working in tech, is to understand the consequences. It's also about knowing how to explain the choices, the compromises that were made, and ultimately, what world we're trying to build.
It's up to the people who make these choices to talk about them, to inform. Because they are the ones best able to understand the stakes.
Because those who control information and data will be the ones who, tomorrow, will have the necessary influence over all aspects of our lives. And THAT is not neutral.