Thursday, August 11, 2022

If AI Is Predicting Your Future, Are You Still Free?


AS YOU READ these words, there are likely dozens of algorithms making predictions about you. It was probably an algorithm that determined that you would be exposed to this article because it predicted you would read it. Algorithmic predictions can determine whether you get a loan or a job or an apartment or insurance, and much more.

These predictive analytics are conquering more and more spheres of life. And yet no one has asked your permission to make such forecasts. No governmental agency is supervising them. No one is informing you about the prophecies that determine your fate. Even worse, a search through academic literature for the ethics of prediction shows it is an underexplored field of knowledge. As a society, we haven’t thought through the ethical implications of making predictions about people—beings who are supposed to be infused with agency and free will.

Defying the odds is at the heart of what it means to be human. Our greatest heroes are those who defied their odds: Abraham Lincoln, Mahatma Gandhi, Marie Curie, Helen Keller, Rosa Parks, Nelson Mandela, and beyond. They all succeeded wildly beyond expectations. Every schoolteacher knows kids who have achieved more than the hand they were dealt would suggest. In addition to improving everyone’s baseline, we want a society that allows and stimulates actions that defy the odds. Yet the more we use AI to categorize people, predict their future, and treat them accordingly, the more we narrow human agency, which will in turn expose us to uncharted risks.

HUMAN BEINGS HAVE been using prediction since before the Oracle of Delphi. Wars were waged on the basis of such predictions. In more recent decades, prediction has been used to inform practices such as setting insurance premiums. Those forecasts tended to be about large groups of people—for example, how many people out of 100,000 will crash their cars. Some of those individuals would be more careful and lucky than others, but premiums were roughly homogeneous (except for broad categories like age groups) under the assumption that pooling risks allows the higher costs of the less careful and lucky to be offset by the relatively lower costs of the careful and lucky. The larger the pool, the more predictable and stable premiums were.
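The stabilizing effect of pooling can be illustrated with a quick simulation (a hypothetical sketch with made-up numbers, not actuarial practice): as the pool grows, the claim cost per member converges toward its expected value, so a single shared premium becomes predictable.

```python
import random

random.seed(42)

def average_claim_cost(pool_size, crash_prob=0.05, crash_cost=10_000):
    """Simulate one year of claims and return the cost per pool member.

    crash_prob and crash_cost are illustrative values, not real data.
    """
    total = sum(crash_cost for _ in range(pool_size)
                if random.random() < crash_prob)
    return total / pool_size

# Small pools swing wildly year to year; large pools hover near the
# expected cost per member (0.05 * 10,000 = 500).
for size in (10, 1_000, 100_000):
    print(size, average_claim_cost(size))
```

This is the law of large numbers at work: the insurer does not need to know which individual will crash, only roughly how many will.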

Today, prediction is mostly done through machine learning algorithms that use statistics to fill in the blanks of the unknown. Text algorithms use enormous language databases to predict the most plausible ending to a string of words. Game algorithms use data from past games to predict the best possible next move. And algorithms that are applied to human behavior use historical data to infer our future: what we are going to buy, whether we are planning to change jobs, whether we are going to get sick, whether we are going to commit a crime or crash our car. Under such a model, insurance is no longer about pooling risk from large sets of people. Rather, predictions have become individualized, and you are increasingly paying your own way, according to your personal risk scores—which raises a new set of ethical concerns.
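The statistical idea behind text prediction can be shown in miniature (a toy sketch: real language models are vastly larger and more sophisticated, but the principle of choosing the most plausible continuation from past data is the same).

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the "enormous language databases"
# that real text algorithms are trained on.
corpus = "the dog chased the cat and the cat chased the mouse".split()

# Count which word follows which (bigram statistics).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most plausible next word."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat", the most frequent follower of "the"
```

The prediction is nothing more than a frequency count over the past: the model fills in the blank with whatever followed most often before.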

An important characteristic of predictions is that they do not describe reality. Forecasting is about the future, not the present, and the future is something that has yet to become real. A prediction is a guess, and all sorts of subjective assessments and biases regarding risk and values are built into it. There can be forecasts that are more or less accurate, to be sure, but the relationship between probability and actuality is much more tenuous and ethically problematic than some assume.

Institutions today, however, often try to pass off predictions as if they were a model of objective reality. And even when AI’s forecasts are merely probabilistic, they are often interpreted as deterministic in practice—partly because human beings are bad at understanding probability and partly because the incentives around avoiding risk end up reinforcing the prediction. (For example, if someone is predicted to be 75 percent likely to be a bad employee, companies will not want to take the risk of hiring them when they have candidates with a lower risk score.)

The ways we are using predictions raise ethical issues that lead back to one of the oldest debates in philosophy: If there is an omniscient God, can we be said to be truly free? If God already knows all that is going to happen, that means whatever is going to happen has been predetermined—otherwise it would be unknowable. The implication is that our feeling of free will is nothing but that: a feeling. This view is called theological fatalism.

What is worrying about this argument, above and beyond questions about God, is the idea that, if accurate forecasts are possible (regardless of who makes them), then that which has been forecasted has already been determined. In the age of AI, this worry becomes all the more salient, since predictive analytics are constantly targeting people.

ONE MAJOR ETHICAL problem is that by making forecasts about human behavior just like we make forecasts about the weather, we are treating people like things. Part of what it means to treat a person with respect is to acknowledge their agency and ability to change themselves and their circumstances. If we decide that we know what someone’s future will be before it arrives, and treat them accordingly, we are not giving them the opportunity to act freely and defy the odds of that prediction.

A second, related ethical problem with predicting human behavior is that by treating people like things, we are creating self-fulfilling prophecies. Predictions are rarely neutral. More often than not, the act of prediction intervenes in the reality it purports to merely observe. For example, when Facebook predicts that a post will go viral, it maximizes exposure to that post, and lo and behold, the post goes viral. Or, let’s return to the example of the algorithm that determines you are unlikely to be a good employee. Your inability to get a job might be explained not by the algorithm’s accuracy, but because the algorithm itself is recommending against companies hiring you and companies take its advice. Getting blacklisted by an algorithm can severely restrict your options in life.
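The hiring feedback loop described above can be sketched abstractly (all names and numbers here are hypothetical, chosen only to show the mechanism): once companies defer to the score, the score decides the outcome before ability is ever observed, and the prophecy fulfills itself.

```python
def hiring_outcome(true_ability, algo_score, companies_follow_algo):
    """Whether a candidate is hired, with or without algorithmic gatekeeping.

    true_ability and algo_score are hypothetical values in [0, 1];
    0.5 is an arbitrary illustrative cutoff.
    """
    if companies_follow_algo and algo_score < 0.5:
        return False  # blacklisted before ability is ever observed
    return true_ability >= 0.5

# A capable candidate (ability 0.9) whom the algorithm mis-scores (0.3):
print(hiring_outcome(0.9, 0.3, companies_follow_algo=False))  # True: hired on merit
print(hiring_outcome(0.9, 0.3, companies_follow_algo=True))   # False: the score decides
```

In the second case the rejection looks like confirmation of the algorithm’s accuracy, when in fact the algorithm caused the outcome it predicted.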

The philosophers who were concerned with theological fatalism in the past worried that if God is omniscient and omnipotent, then it’s hard not to blame God for evil. As David Hume wrote, “To reconcile the […] contingency of human actions with prescience […] and yet free the Deity from being the author of sin, has been found hitherto to exceed all the power of philosophy.” In the case of AI, if predictive analytics are partly creating the reality they purport to predict, then they are partly responsible for the negative trends we are experiencing in the digital age, from increasing inequality to polarization, misinformation, and harm to children and teenagers.
