(Graphic: John McCann/M&G)
Discussions about advances in artificial intelligence (AI) are romanticised to the point that the technology is presented as an all-knowing oracle and saviour.
This appears to be a marketing strategy by big tech, and we are concerned about how uncritically these technologies are used.
There are stories aplenty of private individuals, government agencies and businesses turning to AI’s large language models to solve emerging and long-standing problems. These technologies are thought to possess a form of omniscience that allows them to “know” everything that needs to be known, and beyond.
One example of these technologies is the much-celebrated ChatGPT.
Many university students call on ChatGPT to intercede for them in their assignments and other research tasks. In fact, one of the authors of this article has encountered this tendency in his work with students at the University of Johannesburg.
In most instances, the students’ work shows an uncritical use of this technology that writes like a human.
But this intercession goes beyond the academic space to the daily use by individuals in different domains.
Users of this technology often fail to understand how ChatGPT works, and they are unaware of the dangers lurking behind it.
More often than not, the information it produces is riddled with inaccuracies, a problem that contradicts the omniscience the technology is supposed to have.
But there is an even more serious danger embedded in the technology, and it poses a challenge to the saviour-like reverence it is accorded.
The tool is inherently biased and discriminatory.
A common way it discriminates against people is by attributing certain societal problems to particular groups, reproducing historical ills such as racism and sexism.
For example, AI technology used in the United States’ justice system was found to discriminate against black Americans, predicting that they were more prone to commit crimes or to reoffend than their white counterparts. Rigorous research later showed that this was not the case.
Another incident that exposed the bias embedded in AI systems occurred when Amazon’s hiring system was found to discriminate against women: it automatically rejected applications that indicated the applicant was a woman.
AI technologies that can write sentences and read human languages — called large language models — have been shown to perpetuate overt and covert forms of racism.
We may be quick to object that these cases apply to the US and other societies in the Global North, but we must remember that South Africa is a multiracial society that has experienced racial segregation, bias and discrimination.
The Global North also produces most of the technologies we use in South Africa. If these tools are problematic there, it follows that they will display the same problematic nature wherever they are used, in this case along racial or gender lines.
For those of us who work with these novel, state-of-the-art tools, algorithmic discrimination is becoming deeply concerning, given the large-scale adoption of AI in almost every facet of our society.
It is pertinent to note that discrimination is not a recent societal issue; in the context of South Africa especially, we are familiar with this social ill. Discrimination, which includes overt and covert racism, bias and the subjugation of some members of the population, particularly black people and women, has permeated society both historically and in the present.
Discriminatory actions in the form of racism have been discussed in academic literature, social activism and formal and informal storytelling.
Furthermore, public policies and other national and international documents from the United Nations, European Union, African Union and others have sought to mitigate discriminatory and racist ideologies and actions.
One would suppose that with the end of apartheid, and of colonialism in general, as well as the US Civil Rights Movement, social injustices in the form of discrimination and racism would have ended.
But these social ills persist and resist being mitigated; like the mythical cat, they seem to have nine lives.
It has become evident that these issues will probably be around for a very long time.
This is because a new form of racial discrimination has emerged, complicating the problem. The racist on our stoep is no longer only a human being overtly perpetrating racist acts, but AI, especially the AI that writes and speaks like humans.
The large language models perpetuating covert kinds of racism include the much-vaunted GPT-2, GPT-3.5 and GPT-4; the latter two power ChatGPT, which is what the students of one of the authors are using to do their assignments.
Do not get us wrong: large language models are valuable social technologies.
They are trained to process and generate text across several applications to assist in domains such as healthcare, justice and education.
They construct texts, summarise documents, filter job and funding applications, assist with sentencing in the judicial system and help decide who gets healthcare resources.
But let us not be deceived by the supposed oracle-like or saintly nature of large language models; they also have a concerning dark side.
A recent study, Dialect Prejudice Predicts AI Decisions About People’s Character, Employability and Criminality, by Valentin Hofmann, Pratyusha Ria Kalluri, Dan Jurafsky and Sharese King, found that large language models perpetuate covert discrimination and racism, based on language, towards black people, especially African-Americans, and women.
These large language models associate descriptions such as “dirty” and “ugly”, and jobs such as cook, security guard and domestic worker, with African-American English speakers and women, while associating those who speak Standard American English with more “professional” jobs.
Worse still, large language models recommend the death penalty for African-American English speakers at a higher rate than for Standard American English speakers convicted of the same crime.
Why are these issues of algorithmic discrimination, bias and racism important, and why should we care?
As South Africans, given our racial, cultural and linguistic diversity, we should be worried about these technologies.
There is a growing push to use AI systems in almost every facet of our society, from healthcare to education and the justice system, and large language models are particularly relevant in areas such as employment and judicial decision-making.
However, large language models are not innocuous; like other machine learning systems, they come with human bias, stereotypes and prejudice encoded in the training datasets.
This leads these models to discriminate along racial and gender lines against minorities.
The racism and discrimination embedded in large language models are not overt, as most previous forms of racism were; they are covert, operating in a colour-blind way.
In one experiment, a large language model interpreted a black person’s alibi as more criminal, and the person as less educated and less trustworthy, when they used African-American English.
Additionally, large language models assign less prestigious jobs to African-American English speakers than to Standard American English speakers.
This is not to insinuate anything, but suppose that, of South Africa’s 11 official languages, one were deemed superior. It is obvious that only those who speak that “superior” language would be valued when large language models are used to make decisions that concern South Africans.
It has become self-evident, through the new racism on our stoep, that racism is not ending anytime soon, because it keeps resurfacing in different and more covert forms. Given South Africa’s engagement with state-of-the-art emerging technologies, what are the implications of racist technologies for a country that already has a history marked by racism and subjugation?
It is imperative that, as a society, we reflect on the role these technologies play in advancing racism and sexism in our contemporary epoch, and that we work to ensure they do not become the new racists and sexists. When we call on these tools to save us, we must be critical of the information they provide and alert to the fact that this oracle does not love us all equally: it appears to love a certain race more than others and to prefer a certain gender over others.
Until we fix the social issues embedded in these technologies through ethical programming, we must be cautious about calling on them to provide quick fixes for our societies.
Edmund Terem Ugar is a PhD candidate in the department of philosophy and a researcher at the Centre for Africa-China Studies at the University of Johannesburg.
Zizipho Masiza is a researcher and operations strategist at the Centre for Africa-China Studies at the University of Johannesburg.