Arabella Lewis

At Face Value: How AI Technology Is Worsening Systemic Oppression, Yet May Also Offer a Solution

Image: Rich Smith

How AI amplifies our own racial bias in the news and social media

AI

Both overtly and implicitly, algorithms are controlling our lives. From location-based dinner suggestions on our smartphones to ‘Discover Weekly’ on Spotify, recommender systems, and the algorithms that drive them, permeate nearly every online network, tracking and influencing our online behaviour. The omnipresence of algorithms in our daily lives has brought about increased awareness and scepticism of where these suggestions come from, prompting us to ask: ‘how does Spotify know me so well?’ and ‘is my phone listening in on my conversations?’.

Although such suspicions can border on conspiracy theory, the sceptics have a point. Algorithms continually absorb our online data and record our every move: which sites we search, which news outlets we rely on and in what format, how long we spend reading a page, and so on. From this seemingly meaningless data, algorithms can estimate with remarkable accuracy our age, gender, political viewpoint, education level, location (if your privacy settings allow it) and even our race. The result is a detailed profile of you as an individual, which is then used to tailor advertising and content personally to you.
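
To make this concrete, here is a minimal sketch in Python of how such inference might work: a simple classifier trained to guess an age bracket from behavioural signals. Everything here, the feature names, the synthetic data and the model choice, is an illustrative assumption, not any real platform’s pipeline.

```python
# Minimal sketch: inferring a profile attribute from behavioural signals.
# All data is synthetic and the features are illustrative assumptions;
# real platforms use far richer signals and far larger models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Hypothetical per-user behavioural features:
# average seconds per article, share of visits to news sites, share to video sites.
X = np.column_stack([
    rng.normal(90, 30, n),   # avg_read_seconds
    rng.uniform(0, 1, n),    # news_share
    rng.uniform(0, 1, n),    # video_share
])

# Synthetic "ground truth": pretend older users read longer and watch less video.
y = ((X[:, 0] > 100) & (X[:, 2] < 0.5)).astype(int)  # 1 = "over 40" (toy label)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"toy age-bracket accuracy: {model.score(X_test, y_test):.2f}")
```

The point is not the specific model but that mundane signals, how long we linger and what we click, are statistically predictive of sensitive attributes.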

News and Race

The recent upsurge in support for the longstanding, worldwide Black Lives Matter movement, sparked by the brutal death of George Floyd in police custody, has raised further awareness of the racial biases prevalent in our society. More than ever, people are becoming conscious not only of how minority ethnicities are disproportionately represented at an institutional level, but also of how they are often negatively portrayed by the media.

Although it is widely known that social media apps use algorithms to tailor content to their users, it is often overlooked that authoritative sources of information, such as digital news outlets, also dictate which stories we are presented with. Recently, news articles have resurfaced that reveal the racial stereotypes inherent in our society. The contrasting media treatment of Kate Middleton and Meghan Markle during pregnancy emphasises this point. The Daily Mail depicted Kate Middleton ‘tenderly cradling’ her bump as the ideal British woman, whereas images of Meghan Markle in a similar pose were spun to ask whether it was ‘Pride, Vanity, Acting – or a new age bonding technique?’.

I am not claiming that the journalist who wrote the Meghan article is actively racist, but the contrast makes one thing clear: we cannot take the content we are presented with online at face value, especially in the news. We must be constantly aware of the implicit racial bias within us, even if we are outwardly anti-racist.

More recently, racial bias in the news has been exposed through journalistic coverage of the Black Lives Matter protests across the UK. Despite thousands flocking to beaches nationwide over the past few months, news outlets have been quick to fear-monger by claiming that any potential increase in coronavirus cases would be undeniably linked to the protests.

Online media

Online media is intrinsically linked with ‘confirmation bias’, a cognitive tendency common to all humans. This subconscious bias leads us to look for evidence which confirms what we already believe, to see facts and statements that further support our predetermined ideologies, and to disregard any evidence that supports a different view.

In this way, we are continually choosing what we see online, which in turn shapes our algorithmic recommendations. As Nick Diakopoulos observes of algorithms in journalism, “[algorithms] maximize for clicks by excluding other kinds of content, helping reinforce an existing worldview by diminishing a reader’s chance of encountering content outside of what they already know and believe.” This ‘filter bubble’ (a term coined in Eli Pariser’s 2011 book) generates a dangerous feedback loop which feeds your inherent biases, good or bad. Strikingly, this means that everyone, depending on the data algorithms have collected about them, receives a different version of the news, leading to slanted accounts of events and potential racial biases without the reader even being aware of it.
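
The feedback loop Pariser describes can be simulated in a few lines. The following toy sketch, with invented topics and a crude click model of my own devising, shows a recommender that upweights whatever a reader clicks, so the mix of stories shown narrows toward their existing leaning.

```python
# Toy simulation of a 'filter bubble' feedback loop: a recommender that
# upweights whatever the reader clicks, so the content mix narrows over time.
# The topics and the click model are illustrative assumptions, not a real system.
import random

random.seed(42)
topics = ["politics_left", "politics_right", "sport", "science", "culture"]
weights = {t: 1.0 for t in topics}   # the recommender starts neutral
user_preference = "politics_left"    # the reader's existing leaning

for step in range(500):
    # Recommend a story in proportion to the current weights.
    shown = random.choices(topics, weights=[weights[t] for t in topics])[0]
    # Confirmation bias: the reader reliably clicks what matches their view,
    # and only occasionally clicks anything else.
    clicked = (shown == user_preference) or random.random() < 0.1
    if clicked:
        weights[shown] *= 1.05       # the algorithm learns from the click

total = sum(weights.values())
for t in topics:
    print(f"{t:>15}: {weights[t] / total:.0%} of recommendations")
```

Neither party intends the outcome: the reader clicks naturally, the recommender optimises honestly, and the bubble emerges from the loop between them.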

AI’s racial bias

Sandra Wachter of Oxford University claims that algorithms “reflect the inequalities of our society.” An algorithm’s main function is to learn, but humans are the ones teaching it. Our online behaviour, whether we listen solely to white musicians on Spotify, where we choose to shop, or what news we click on, is merely reflected back at us like a mirror. Algorithms, therefore, are not racist unless we teach them to be.
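
This ‘mirror’ effect can be shown with a deliberately naive sketch: a recommender that simply ranks artists by past play counts (the artists and counts below are invented) will reproduce whatever skew exists in our listening habits.

```python
# A naive 'most played' recommender trained on a skewed listening history
# simply reproduces that skew. Artists and play counts are invented.
from collections import Counter

play_log = (
    ["artist_A"] * 450 + ["artist_B"] * 270 + ["artist_C"] * 180  # majority of plays
    + ["artist_X"] * 60 + ["artist_Y"] * 40                       # rarely played
)

recommender = Counter(play_log)
print("top recommendations:", [artist for artist, _ in recommender.most_common(3)])
# The model has not 'chosen' a bias; it is mirroring the data it was given.
```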

A pioneering critic of AI bias is Joy Buolamwini, who, after her research exposed how poorly facial recognition systems perform for darker-skinned women, founded the Algorithmic Justice League and the Safe Face Pledge to combat racial prejudice in recognition technology. Buolamwini criticises “the coded gaze” and encourages more BIPOC people to study coding: “who codes matters.” She believes that bias in Artificial Intelligence tends to “most adversely affect the people who are rarely in positions to develop technology.”
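
A disaggregated audit of the kind behind Buolamwini’s Gender Shades study can be sketched as follows. The predictions here are synthetic, with error rates loosely imitating the published pattern, but the method, reporting accuracy per demographic subgroup rather than one flattering overall figure, is the substance of the critique.

```python
# Sketch of a per-subgroup audit: break a classifier's accuracy down by
# demographic group instead of reporting a single overall number.
# Groups and outcomes are synthetic; the error rates are illustrative
# assumptions loosely echoing the pattern Gender Shades reported.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
groups = rng.choice(["lighter_male", "lighter_female",
                     "darker_male", "darker_female"], size=n)

# Assumed per-group error rates for this toy audit.
error_rate = {"lighter_male": 0.01, "lighter_female": 0.07,
              "darker_male": 0.12, "darker_female": 0.35}
correct = np.array([rng.random() > error_rate[g] for g in groups])

print(f"overall accuracy: {correct.mean():.0%}   <- hides the disparity")
for g in sorted(set(groups)):
    mask = groups == g
    print(f"{g:>15}: accuracy {correct[mask].mean():.0%} (n={mask.sum()})")
```

A single headline accuracy can look respectable while one subgroup quietly bears almost all of the errors; the audit only works if you look at the breakdown.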

Recognising bias within ourselves

In any field of work, we must be aware of our own personal biases. As an ethnographer, I have to make my analysis of events reflexive: I must be aware of the power dynamics at play between myself and my interviewees, the different knowledge relationships, and my position as either outside or inside the cultural environment (the ‘etic’ and ‘emic’ perspectives).

Many content creators have little recognition of how their own identities and predetermined value judgements about certain events, most significantly with regard to race, affect the stories they tell or the algorithms they create. Too often, news articles focus on selective facts, and those training algorithms leave whole demographics out of their data, either of which can generate significant biases.

We therefore need an ethical shift in AI systems, one that fosters greater awareness of our own inherent biases. We do not query often enough our positions of power, or whether our gender, age, race, upbringing or education have shaped how we see the world. We should use the partiality flaws of algorithms as a means to educate ourselves about our own biases, and take this opportunity to ensure no underlying bias is left unchecked or unnoticed.

-

Websites which mitigate media bias:

Edited by Ellie Muir
