Most people prefer artificial intelligence (AI), not humans, to make major decisions about the distribution of financial resources, despite being happier when humans make such decisions.
The majority (64%) of participants in a new study, published June 20 in the journal Public Choice, favored algorithms over people to decide how much money they earned for completing a set of tasks.
Participants in the game were motivated not just by their own interests, the scientists said, but also by ideals about fairness. They tolerated deviations between an AI’s decision and their own interests so long as the decision followed a recognized fairness principle, but when the decision didn’t align with common principles of fairness, they reacted very negatively.
Despite generally preferring AI decision-makers over human ones, participants were happier with the decisions people made than with those made by AI agents. Curiously, this held regardless of how “fair” or “correct” the decision itself was.
The study found that people are open to the idea of AI making decisions thanks to its perceived lack of bias, ability to explain decisions and high performance. Whether AI systems actually live up to those perceptions was irrelevant. Wolfgang Luhan, professor of behavioural economics at the U.K.’s University of Portsmouth, called the transparency and accountability of algorithms in moral decision-making contexts “vital.”
Because fairness is a social construct, in which individual notions are embedded in a shared set of definitions, the researchers said people may conclude that algorithms trained on large amounts of fairness-related data represent what is fair better than any single human decision-maker.
The experiment set out to answer several questions the scientists considered critical as society outsources more decision-making to AI: whether those affected by a decision would prefer humans or computers to make it, and how people feel about a decision depending on who, or what, made it.
“The question of people’s perception of and attitude towards algorithmic decisions and AI in general has become more important recently, with many industry leaders warning of the dangers of the threat AI poses and calls for regulation,” the scientists said in the study.
The study focused on redistributive decisions because of their prevalence in politics and the economy. Unlike AI prediction tasks, these decisions are regarded as essentially moral or ethical in nature: there is no objectively or factually “correct” answer, only outcomes that depend on each participant’s definition of “fair.”
The experiment was conducted online: human and AI decision-makers redistributed the earnings from three tasks between two players. The researchers believed that, regardless of whether a decision suited the individuals involved, knowing an AI had made it, and believing the decision to be fair, made the outcome easier to accept.
The researchers also believe that people consider algorithms used in social or “human” tasks to lack subjective judgment, making them seem more objective and therefore more likely to be correct.
The researchers said their findings will be an important part of any discussion about how open society is to AI decision-making. They added that the results make them optimistic about the future, because AI-generated decisions tended to align with the preferences of those affected.