Evie Mithen
3 August 2024
If you had asked me at the start of this year what I thought about the future of Artificial Intelligence, I probably would have excused myself to the bathroom. Discussions with fellow law students about AI robots taking away all our job prospects typically filled me with such strong existential dread that I would feel compelled to steer the conversation as far away from the topic as possible.
In fact, for most of my life, I was harbouring an unconscious bias. Whenever I heard talk of ‘tech’ and ‘big data’, I would shudder internally and once again push for a change of topic.
The negative associations I had meant I didn’t want a bar of any of this ‘AI’ stuff, so I remained quite wilfully ignorant as to how it worked or what it really meant.
Luckily, my perspective was forever changed for the better when I attended Tracey Spicer’s book launch.
Tracey was not a male, middle-aged billionaire making an attempt at relatability in a t-shirt and a pair of jeans (I’m looking at you, Zuckerberg). Instead, Tracey is a warm, positive and extremely intelligent woman. In a cosy Readings bookshop in Carlton, Tracey broke down tech-heavy concepts in a way that was compelling, with real-life anecdotes that everyone in the room could relate to.
But most crucially, Tracey made me realise why I needed to learn about this topic. She confronted me with a disturbing reality: AI technology could be insidiously damaging decades of social progress under our noses.
‘Man-Made’
Tracey’s book ‘Man-Made: How the bias of the past is being built into the future’ is a brilliant breakdown of the history of Artificial Intelligence. In it, she shares how some of the world’s first and most exceptional coders, mathematicians and inventors were women who were viewed only as the ‘secretaries’ of their male counterparts, and were subsequently erased from the history books. She explains how such misogyny is still so stickily baked into our society – from Siri and Alexa (our very own domesticated housewives), to the increasing rates of image-based abuse that predominantly target women.
Here are a few examples from Tracey’s book that really spell it out.
As law students, we are all far too familiar with the subject of property. However, something you may not know is that an increasing number of banks are using trained algorithms to assess and grant home loan applications. In the US, Black applicants are 80% more likely, Native American applicants 70% more likely, and Latino applicants 40% more likely to be denied a home loan than a white applicant with the same financial history.
A similar thing can happen when an AI algorithm assesses your credit card application. For instance, researcher Genevieve Smith and her husband both applied for the same credit card. Beyond their gender, the only difference between their applications was that Genevieve had a slightly better credit record. The algorithm approved both applications – yet it set her spending limit at almost half the amount granted to her husband.
At this point, you might be wondering: if the assessment tool is an algorithm, then isn’t the risk of racialised or gendered prejudice from a human bank employee taken out of the equation? How can this be happening?
The thing is, like regular employees, these AI bots start out as trainees: they have to be taught how to do their job. In other words, the algorithm first needs to be trained before it can be used in real-life situations. To do this, banks feed the bot records that can date back thirty or even fifty years. And if a bot is learning anything from home loan or credit card applications from the 1970s, it’s that minorities were granted them least of all. The bot then ingrains that ‘bias’ within its own programming. This is why, when a person from a minority group applies for a home loan today, the algorithm has been trained to be more sceptical.
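To see how this happens in practice, here is a minimal sketch in Python – my own illustration with invented numbers, not any real bank’s system – of a model trained on biased historical approvals:

```python
# A minimal sketch (not any bank's actual system) of how a model
# trained on biased historical decisions reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic 'historical' applications: a financial score, plus a flag
# for whether the applicant belongs to a minority group.
income = rng.normal(50, 15, n)
minority = rng.integers(0, 2, n)

# The historical approvals were biased: equally creditworthy minority
# applicants were approved far less often.
approved = income + rng.normal(0, 5, n) - 12 * minority > 45

# Train on the biased history, as a bank might with decades of records.
model = LogisticRegression().fit(np.column_stack([income, minority]), approved)

# Two applicants with identical finances, differing only by group flag:
print(model.predict_proba([[55, 0], [55, 1]])[:, 1])
# The second (minority) applicant gets a markedly lower approval
# probability, despite an identical financial history.
```

The model never sees anyone’s prejudice directly; it simply learns that applicants from the flagged group were approved less often in the past, and carries that pattern forward into every new decision.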
Here’s another example from Tracey’s book. In 2021, researchers experimented with an image-generating algorithm to see how it viewed men compared to women. They fed the algorithm headshots of men and women, then instructed it to finish each picture by generating the rest of the body. Does it surprise you that the algorithm would take a headshot of a man and put him in a nice suit? Probably not. But what if I told you that when the researchers gave it a headshot of US Congresswoman Alexandria Ocasio-Cortez, it generated an image of her in a bikini? Well, it did. In fact, the algorithm put women in low-cut tops and bikinis more than half the time. It seems that Artificial Intelligence has a hypersexualised view of women – unfortunately, it really is just reflecting what it has been exposed to in our real world.
If you are reading this now and starting to feel sick, don’t worry. I was lucky enough to have a chat with Tracey herself, who gave some brilliant insight into how we can get started on sorting all of this out:
Interview with Tracey
Many students at Melbourne Law School will go on to be the future legislators and policy advisers of our country. How crucial is legislation going to be in controlling the future of AI?
We need legislation as soon as possible, in Australia and in jurisdictions around the world. AI poses dangers in many areas, including bias and discrimination, breach of copyright, and invasion of privacy. Some argue it also poses an existential threat. By way of analogy, we are at the point in history between the invention of the car and the advent of the seatbelt. We need guardrails and ‘regulatory sandboxes’ to protect the public.
The cover of your book is an AI-generated image of a female robot, created through an art generator that uses prompts submitted by users. Did you encounter any biases in the initial images produced that you had to work around?
Yes. I wanted an image of a strong robot woman looking to the future with concern but hope. The generative AI program, Midjourney, created a sexualised robot woman with a tiny waist, huge breasts, and massive biceps. We had to play around with the prompts to create the cover that eventually made it to print. A neurodiverse film-maker and advocate recently highlighted biases in image generators around disability. He asked for images showing someone with autism. They were all young and male, but – most worryingly – looked depressed, sad, or even distraught. This is a deficit model of disability, one that ignores the positives that diverse thinking can bring to society and the workplace.
AI technology is not a future threat – it is already ingrained in our everyday lives. What are some tech habits people can adopt to help combat the biases within AI data?
Critical thinking is key. While we wait for governments to regulate the industry through practices like the auditing of data and algorithms, we can do what’s called ‘machine teaching’. For example, if you’re asking ChatGPT to write a story about an engineer and a childcare worker, tell it to make the engineer female and the childcare worker male. This is called ‘intentional bias’: it flips the stereotypes, which are deeply embedded in historical datasets. We can also use our power as consumers. Catch a Shebah or a taxi instead of an Uber; change the voice of Siri or Alexa in your home from female to male.
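(A quick aside from me: you can even do Tracey’s prompt flip programmatically. Here is a minimal sketch – my illustration, not hers – assuming the OpenAI Python client; the same wording works typed straight into ChatGPT.)

```python
# A minimal sketch of 'machine teaching' via an intentionally flipped
# prompt. Assumes the OpenAI Python client (pip install openai), an
# OPENAI_API_KEY in your environment, and an illustrative model name.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Write a short story about an engineer and a childcare worker. "
    "Make the engineer a woman and the childcare worker a man."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```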
Students are increasingly using ChatGPT as a tool to assist their studies. Is ChatGPT beset with harmful biases? If so, how should students be approaching it?
ChatGPT is riddled with biases, partly because it was predominantly tested on young white men before being released to market. Consequently, its default is always white, male, cisgendered, heteronormative, ‘able-bodied’, young, and urban. Its creators scraped the internet for its huge dataset, gobbling up conspiracy theories and misinformation on the way. The chatbot is known to ‘hallucinate’, making up information while pretending it’s from legitimate sources. (It’s been known to fabricate case law, for example.)
My best advice is to get into the habit of checking this information with a second and third source, to ensure its veracity. And challenge the chatbot on its biases, in order to teach it to be better.
In your book, you mention that only 4.9% of employees at Meta identify as Black. This is a staggering statistic. What are the key consequences of having an overrepresentation of cis-white male employees within the tech industry?
It was not too long ago that both Facebook and Google Photos mistakenly labelled pictures of Black people as “primates” and “gorillas”. We know that bias is built into these machines via a three-step process. It begins with historical data, is exacerbated by the unconscious bias of the programmers, then deepens independently through machine learning. If the overwhelming majority of programmers are cis-white males, this is the type of bias that will continue being evident in the algorithms. One of the experts I interviewed for Man-Made said, “An algorithm is an opinion expressed in code”. There’s also the bigger problem of neo-colonialism. The capital and wealth in AI sit in the Global North, while the low-paid workers live in the Global South. Over time, this will undoubtedly exacerbate the gap between rich and poor.
I imagine that the process of writing this book was, at times, an enraging and unsettling experience. Despite this, what are you hopeful about when looking to the future?
I’m a glass half full kinda gal. The EU’s AI Act provides an excellent template for legislation and regulation around the world. In New York, any company using algorithms for hiring is required to conduct regular audits for bias. This is a step in the right direction. Forward-thinking companies like Textio are producing apps that can remove bias and add inclusivity into job ads. And the Australian government is about to announce AI regulation, based on wide-ranging discussions following responses to an AI discussion paper. To paraphrase the Director of Cybernetics at the Australian National University, Distinguished Professor Genevieve Bell, we have an obligation to speak of an equitable future. But we must disrupt the present to make that future a reality.
Conclusion
While the current stats on AI can be a dizzying mix of infuriating and depressing, I feel reassured knowing that Tracey, and so many other inspiring women like her, are advocating in this space. I also feel empowered knowing there are so many ways I can make a difference too, which turns my anxiety into motivation. Artificial Intelligence is no longer something I fear but cannot grasp. Its future is not out of our control, and there are clear steps forward to make it better. So go change your Siri to a male voice on your iPhone, and get him to order you a copy of Tracey’s book.
