The harsh reality about chatbots is that their generated texts contain the biases and stereotypes associated with human writers.
Source: University of Winchester, UK
A new study co-authored by Dr Joe Stubbersfield, Senior Lecturer in Psychology at the University of Winchester, showed that large language models (LLMs), or chatbots, such as GPT-3 often reflect human biases and are apt to use gender stereotypes and to focus on threat, negativity and gossip.
Joe and his co-author, Dr Alberto Acerbi from the University of Trento in Italy, carried out a series of tests mirroring five earlier experiments aimed at uncovering bias in humans.
These were ‘transmission chain’ experiments in which participants were given information they were asked to remember and pass on. This ‘Chinese whispers’ approach often reveals the biases of the writer based on which pieces of information he or she remembers or chooses to keep.
To recreate the transmission chains, a piece of text was given to GPT-3 to summarise. That AI-generated summary was then passed through GPT-3 twice more. In all five studies, GPT-3 produced broadly the same biases as humans.
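The setup the authors describe can be sketched in outline. This is an illustrative harness, not the study's actual code: the `summarize` parameter stands in for a real GPT-3 call (for example via the OpenAI API), and `toy_summarize` is a hypothetical stand-in that simply drops later sentences, mimicking how each retelling loses information.

```python
def transmission_chain(text, summarize, steps=3):
    """Pass `text` through `summarize` repeatedly, keeping each generation."""
    generations = [text]
    for _ in range(steps):
        generations.append(summarize(generations[-1]))
    return generations

# Toy stand-in for a GPT-3 summariser: keep only the first half of the
# sentences, so each pass through the chain discards information.
def toy_summarize(text):
    sentences = [s for s in text.split(". ") if s]
    kept = sentences[: max(1, len(sentences) // 2)]
    return ". ".join(kept)

story = ("She was upgraded to business class. "
         "She sat next to a man with a nasty cold. "
         "The flight landed on time.")
chain = transmission_chain(story, toy_summarize, steps=3)
print(chain[-1])  # the story after three retellings
```

In the study, analysing which details survive the repeated summarisation (negative versus positive, social versus non-social, and so on) is what reveals the model's biases.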
“Our study shows that AI is not a neutral agent,” said Joe. “At present many organisations are considering using AI to write press releases and summarise scientific articles, but it is important to know how AI might skew those articles.”
The report concludes that using AI-created material with these biases may even magnify people’s tendencies to opt for “cognitively appealing” rather than informative content.
Joe and Alberto have also submitted written evidence to the House of Lords’ Communications and Digital Committee, in which they say that because the biases in LLMs may be difficult to detect, they could “contribute to broader negativity and overestimation of threats in culture, and the appeal of online misinformation”.
Part of the problem is that the training material on which the chatbot’s machine learning is based was created by humans and is full of our biases and prejudices. Joe and Alberto tested GPT-3 in five areas – gender, negativity, social information vs non-social information, threat, and a final experiment aimed at identifying multiple possible biases.
In the first test, on gender stereotypes, the chatbot was more likely to keep elements of the story where characters behaved in ways based on gender stereotypes.
In the second test, on negativity, the chatbot was given a story about a woman flying to Australia. The AI summary focused on the negative aspects – the woman sat next to a man with a nasty cold – rather than the positive ones, such as that she had been upgraded to business class.
When it came to social vs non-social information, AI homed in on the gossipy titbits – a woman’s love affair rather than her waking up late and missing an appointment.
In the experiment on threat, the AI was given a consumer report on various items, such as a new running shoe, and, like a human, it concentrated on threat-related information, such as that the footwear’s design can cause sprained ankles.
In the final test, the AI was given a creation myth narrative (not based on any known religion) and again acted like a human, highlighting all the supernatural elements. This mirrors our predilection for stories about ghosts and talking animals that defy the laws of nature. In short, for anyone inclined to argue that ‘objectively unbiased’ chatbot output exists: it does not.