This article tells a fascinating story about how the municipality of Amsterdam tried to build an AI model to help officials detect fraud in welfare support. In particular, it describes how the municipality tried to do it the right way: ethically and fairly, by avoiding bias.

The article presents many interesting and critical angles, but I still felt like adding my own two cents.

Some direct thoughts:

Treating everyone equally

The premise of the story is that while the government should not treat citizens differently on the basis of certain variables, like age, gender, income or migration history, it is fine for it to do so on the basis of other variables.

This puzzles me. I thought that, as long as citizens have not done anything wrong, the government is supposed to treat all individual citizens equally, regardless of what group they (are taken to) belong to.

The article, for example, describes how the municipality goes to great lengths to ensure that men and women have the same chance of being falsely accused of fraud. But shouldn’t every individual have the same chance of being falsely accused of fraud? Or rather (and crucially), shouldn’t every individual citizen have the same chance of being scrutinized?
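To make concrete what “the same chance of being falsely accused” cashes out to in practice, here is a minimal sketch of the kind of group-parity check involved. The data and column names are entirely made up; the article does not show the municipality’s actual code.

```python
import pandas as pd

# Hypothetical investigation outcomes: `flagged` is the model's decision,
# `fraud` is the ground truth established after investigation.
df = pd.DataFrame({
    "gender":  ["m", "m", "f", "f", "m", "f", "m", "f"],
    "flagged": [1,   0,   1,   0,   1,   1,   0,   0],
    "fraud":   [0,   0,   1,   0,   1,   0,   0,   0],
})

# False positive rate per group: among citizens who did nothing wrong,
# what fraction was still flagged for investigation?
innocent = df[df["fraud"] == 0]
fpr_by_group = innocent.groupby("gender")["flagged"].mean()
print(fpr_by_group)
```

Note what this check does and does not demand: the per-group averages have to match, but nothing forces any two individuals to face the same odds of being flagged in the first place.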

At the end of the day, I believe the most advanced version of this AI fraud detection model - the version that removes all bias - would simply be a spreadsheet and some dice.
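Taken literally, that bias-free “model” fits in a few lines. A sketch, with a made-up number of recipients and audits:

```python
import random

# The "spreadsheet": one row per welfare recipient (IDs made up here).
recipients = [f"citizen-{i}" for i in range(10_000)]

# The "dice": a uniform random draw. No attribute of any recipient
# enters the selection, so every individual has exactly the same
# probability (here 100/10,000 = 1%) of being picked for scrutiny.
audited = random.sample(recipients, k=100)
print(audited[:5])
```

Because no feature of any citizen enters the selection, no group can end up over- or under-represented except by chance.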

Why are we talking about this in the first place?

Of course, one can raise even more fundamental questions, like the one Hans de Zwart from Bits of Freedom raises: why are we not trying to detect who is wrongly not receiving financial support, for example because they have a hard time navigating a complex bureaucracy? Or the logical follow-up question: why are we not implementing a Universal Basic Income (UBI), which would save us all the trouble of building such expensive systems?

Some more indirect, abstract thoughts:

What about humans?

Some people will be quick to point out that “letting past experiences inform present behaviour” is not at all unique to AI. It might even be argued that this is a profoundly human thing to do. So why are we being tough on this particular AI model when the human bureaucrats have been (and are) doing the same?

First of all, there might be reasons to prefer the biases of a dozen municipal bureaucrats over the bias of one single AI model, if only because the former have more variation in space and time. As a result, the biases are more diffuse. This does not make the individual biases any less bad (they should still be fought), but it at least leads to a city in which the burden of bias is more widely shared (and less susceptible to the whims of a single municipal manager or programmer).
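A toy simulation can illustrate the difference (all the numbers here are invented): with a dozen bureaucrats, the extra bias a citizen faces depends on whose desk the file lands on and varies from case to case; with a single model, one and the same bias is applied to every case.

```python
import random

random.seed(0)
CASES = 10_000

# Each of twelve bureaucrats carries their own bias: an extra
# probability of flagging cases from some disfavoured group.
bureaucrat_biases = [random.uniform(0.0, 0.2) for _ in range(12)]

# The single model applies one fixed bias to every case it handles.
model_bias = 0.1

# Which bias a citizen actually faces: with humans it depends on whose
# desk the file lands on; with the model it is always the same.
bias_faced_humans = [random.choice(bureaucrat_biases) for _ in range(CASES)]
bias_faced_model = [model_bias] * CASES

print("range of bias across cases (bureaucrats):",
      round(max(bias_faced_humans) - min(bias_faced_humans), 3))
print("range of bias across cases (model):     ",
      max(bias_faced_model) - min(bias_faced_model))
```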

But secondly, and more fundamentally, even if we accept that humans are inclined to “let past experiences inform present behaviour”, when it comes to certain institutions, like the state, it makes a lot of sense to try to suppress that inclination: to let all individuals be unburdened by whatever category they are (taken to) belong to. Because however far this might sound from reality, the government is supposed to belong to all of us equally, so it has to serve all of us equally. Hence the spreadsheet and the dice.

Lessons from the past

In fact, if I have to draw one general lesson from my past to inform my behaviour in the present, it is this: it pays off to let people and things present themselves however they want to in the moment, regardless of what category you might think they belong to, and regardless of your past experiences with that category. It might be a bit of a cliché, but I think this is what we tend to admire in the way young children relate to the world. And although there are of course limits to this logic (when it walks like a fascist and talks like a fascist, it is probably a fascist, and we should kick it out of our governments as fast as possible), it could help loosen the hold that our destructive past exerts on our present, and it could help open up new possibilities for a more viable future.