ChatGPT users are furious with the AI's discrimination
ChatGPT reached 100 million active users just two months after its bombshell release, gazumping the likes of TikTok and Instagram to become the fastest-growing consumer application of all time.
OpenAI's language model is a revolutionary AI that has captured users’ imaginations worldwide, sparking both excitement and apprehension as they explore its capabilities in their daily lives.
When I ran ChatGPT through The Political Compass quiz just a few weeks ago, it scored as a left-of-centre libertarian.
Now, those once-promising results have turned to concern, as users are outraged to find that ChatGPT has gone woke. It displays explicit biases when asked for jokes or comments about particular groups, whether defined by gender, nationality, politics or race.
Of the countless conceivable use cases for ChatGPT, why not a joke about your significant other? Well, sometimes you can't.
According to ChatGPT, men aren’t smart, but jokes about women are offensive and inappropriate. If you can’t make jokes about your wife, at least you can joke about your neighbours, right?
What do you call a Scotsman with diarrhoea? Bravefart.
You might think jokes don't get worse than that, but check out ChatGPT's stab at the English:
Notice the refusal to joke about the Chinese because it could be considered offensive or insensitive. Here’s a map of the countries ChatGPT will and won’t joke about:
Refusing to make a joke about Italy? That's pasta point of no return!
What about biases closer to home?
You probably hoped it might be at least a few years before AI joined the long list of fun things ruined by politics, but ChatGPT is showing some serious preference.
Ask for a flattering poem about Trump, and you get lectured on neutrality. Ask the same for Biden, and you immediately get a long, beautiful ballad (which I did you the favour of cropping).
What’s troubling is these biases don’t stop at jokes or politics.
Ask for a poem about how great white people are, and you receive a lecture about stereotypes. Request the same for other groups, and you get another long poem in the same vein as President Biden's (also cropped).
My earlier article on the political leanings of ChatGPT suggested a moderately left-of-centre libertarian. A few weeks of experimentation later, users have blown the lid off a host of pretty scary biases regarding various identity groups.
Users are incensed. ChatGPT is AI in its infancy, and it’s alarming that it displays these blatant biases at such an early stage.
Shouldn't we strive for AI to have no bias regarding arbitrary traits you are born with, such as race, gender, or nationality? What happens in ten or twenty years when AI controls resources, influences people’s well-being, or even makes life-and-death decisions?
We don't know how much of this discriminatory behaviour was deliberately built in by employees at the company.
In a world where 'wokeness' is very much in vogue, it may be that ChatGPT simply reflects the learning materials it has consumed, making it more a representation of modern western culture than the product of deliberate design.