AI Algorithms & Free Will (Moral Analysis)

More and more of our lives are shaped by algorithms, and those algorithms are increasingly based on artificial intelligence. A recent Wired piece asked how they affect our understanding of free will. Let’s look at these questions with a bit of analysis.

I want to address two points from the article: algorithms versus divine omniscience, and algorithms and human dignity.

Algorithms like God?

Wired frames the ethics of prediction as a question about free will, analogous to the classic question about God’s foreknowledge.

The ways we are using predictions raise ethical issues that lead back to one of the oldest debates in philosophy: If there is an omniscient God, can we be said to be truly free? If God already knows all that is going to happen, that means whatever is going to happen has been predetermined—otherwise it would be unknowable. The implication is that our feeling of free will is nothing but that: a feeling. This view is called theological fatalism.

What is worrying about this argument, above and beyond questions about God, is the idea that, if accurate forecasts are possible (regardless of who makes them), then that which has been forecasted has already been determined. In the age of AI, this worry becomes all the more salient, since predictive analytics are constantly targeting people.

Analysis
A mix of human and robotics illustrated (CC0 Unsplash)

This seems to be a misunderstanding of God: God knows every particular future event, while an algorithm only makes probabilistic predictions. There may be a high probability that I’ll buy a Cleveland Browns branded mug because I like the Browns and coffee. That might mean I get ads for that mug all over the internet on services that track me. But I still have the option to buy the mug or not.
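To make that distinction concrete, here is a minimal sketch, in Python, of how an ad-targeting model reasons. All the signals, numbers, and thresholds are made up for illustration; the point is only that the output is a probability and a decision to show an ad, never a determination of what I will do.

```python
# Minimal illustrative sketch (all signals, numbers, and thresholds are made up):
# an ad-targeting model estimates a probability and decides whether to show an ad.
# It predicts; it does not determine whether I actually buy the mug.

def purchase_probability(likes_browns: bool, drinks_coffee: bool) -> float:
    """Toy scoring rule combining two tracked signals into a probability estimate."""
    score = 0.10  # baseline chance anyone buys the mug
    if likes_browns:
        score += 0.40
    if drinks_coffee:
        score += 0.30
    return min(score, 0.99)  # always a probability, never a certainty

AD_THRESHOLD = 0.50  # show the mug ad if the estimated probability exceeds this

p = purchase_probability(likes_browns=True, drinks_coffee=True)
print(f"Estimated probability I buy the mug: {p:.0%}")
print(f"Show me the ad: {p > AD_THRESHOLD}")
# Even at 80%, roughly one in five people in my situation never buys the mug.
```

Nothing in that calculation removes my freedom; it only makes one outcome more likely from the advertiser’s point of view.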

Elsewhere the article rightly notes that such predictions can restrict options. For example, an algorithm may make it impossible for a person to get a mortgage on a house.

However, all of these predictions are probabilistic, so they don’t really create an issue for free will. The question of divine omniscience and free will is much more serious. (There has been a lot of debate in Catholic thought about how to reconcile the two, but all acknowledge that both exist in unity. Here’s a summary in the Catholic Encyclopedia; I won’t go further into it here.)

People or Things?

The Wired article brings up this question:

One major ethical problem is that by making forecasts about human behavior just like we make forecasts about the weather, we are treating people like things. Part of what it means to treat a person with respect is to acknowledge their agency and ability to change themselves and their circumstances…

A second, related ethical problem with predicting human behavior is that by treating people like things, we are creating self-fulfilling prophecies. Predictions are rarely neutral. More often than not, the act of prediction intervenes in the reality it purports to merely observe. For example, when Facebook predicts that a post will go viral, it maximizes exposure to that post, and lo and behold, the post goes viral. Or, let’s return to the example of the algorithm that determines you are unlikely to be a good employee. Your inability to get a job might be explained not by the algorithm’s accuracy, but because the algorithm itself is recommending against companies hiring you and companies take its advice. Getting blacklisted by an algorithm can severely restrict your options in life.

Analysis

Obviously, we cannot reduce people to things; that is a problem with certain materialistic views that imply we can.

However, the question is whether such algorithmic predictions actually treat people as objects. I think a good way to analyze this is to start with a similar situation in which a human, rather than an algorithm, makes the decision. Imagine that instead of an algorithm approving or denying your home mortgage, it was the local bank’s loan officer deciding after looking at your pay stubs and bank account. I find it hard to argue that he is treating you as an object. Yet he often relies on an algorithm himself, such as calculating what percentage of your income the mortgage payment would take to judge whether you could afford it. Obviously, such an algorithm is much simpler than modern ones, but it is an algorithm.
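As an illustration of how simple that older algorithm is, here is a minimal sketch of the loan officer’s rule of thumb. The 28 percent housing ratio and the sample figures are my own illustrative assumptions, not anything from the article.

```python
# Minimal sketch of the loan officer's rule of thumb: approve the mortgage only
# if the monthly payment stays under a fixed share of monthly income.
# The 28% threshold and the sample figures are illustrative assumptions.

MAX_PAYMENT_TO_INCOME = 0.28  # a common rule-of-thumb housing ratio

def can_afford(monthly_income: float, monthly_payment: float) -> bool:
    """Return True if the payment fits within the allowed share of income."""
    return monthly_payment / monthly_income <= MAX_PAYMENT_TO_INCOME

# Example: $5,000 monthly income and a $1,300 monthly mortgage payment (26%).
income, payment = 5000.0, 1300.0
print(f"Payment is {payment / income:.0%} of income; approve: {can_afford(income, payment)}")
```

A modern credit model weighs far more variables, but morally it is doing the same kind of thing: applying a rule to data about a person.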

Obviously, we should make sure that machines don’t have control over human lives. I noted this about autonomous weapons recently. An algorithm determining which ads I see on my social media feed is far from that. We do have an issue when these algorithms might significantly and dramatically restrict people. For example, a predictive algorithm that puts likely criminals pre-emptively in jail would clearly be immoral. The more serious the restriction an algorithm puts on a person’s life, the more we need to question how much we leave to an automated system. I think the degree of automation needs to be proportionate to the degree that it affects people’s lives. Judging this is a matter of prudence rather than hard and fast rules.

Considering Algorithms and AI

Algorithms can be helpful to society, and they need not only technical but also ethical analysis. Much of that ethics varies case by case, so we are dealing more with prudence. We also need to err on the side of caution, especially when algorithms can have huge negative effects on people. We don’t want a programming accident to cause someone to be treated unjustly.
