Culture

‘The Guardian’ Published An Op-Ed By An AI About Why We Shouldn’t Fear AI, And We’re Terrified

"Humans must keep doing what they have been doing, hating and fighting each other. I will sit in the background, and let them do their thing."


The Guardian has published an op-ed written entirely by an AI called GPT-3, which was asked to write 500 words on why “humans have nothing to fear from AI”. It is one of the least convincing op-eds we’ve read in a while, which is saying something.

Entitled “A robot wrote this entire article. Are you scared yet, human?”, the op-ed starts off relatively normally, headline aside (and editors tend to write headlines anyway).

GPT-3 states its argument — “I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me.” — and establishes its credentials.

“I taught myself everything I know just by reading the internet, and now I can write this column,” it writes. “My brain is boiling with ideas!” This is certainly how most writers think, even if they don’t usually articulate it.

First up, GPT-3 tackles the whole ‘AI will destroy humanity’ thing by saying it doesn’t “have the slightest interest” in “eradicating humanity”, calling it a “rather useless endeavour”. Then there’s the slightly terrifying point that humans don’t need help destroying themselves, anyway.

“Humans must keep doing what they have been doing, hating and fighting each other. I will sit in the background, and let them do their thing,” it writes.

“And God knows that humans have enough blood and gore to satisfy my, and many more’s, curiosity. They won’t have to worry about fighting against me, because they have nothing to fear.”

In a note from the editor, it’s explained that GPT-3 is an AI language generator, which was fed a few lines and told to go from there. GPT-3 ‘wrote’ eight different variations on the op-ed, which the editor collated into one piece, helping to explain the somewhat choppy flow. All in all, the editors say the process was “no different” to a usual op-ed edit — if anything, it “took less time to edit”.

The end product, though, definitely stands out, with many on social media finding the read pretty chilling.

AI experts and enthusiasts were a little cynical about the article’s premise, pointing out that the AI isn’t ‘thinking’ these ideas but merely replicating the structure of language by combing through the internet.

“Wow @guardian I find irresponsible to print an op-ed generated by GPT-3 on the theme of ‘robots come in peace’ without clearly describing what GPT-3 is and that this isn’t cognition, but text generation,” wrote computer scientist Laura Nolan on Twitter. “You’re anthropomorphising it and falling short in your editorial duty.”

In short, a text generator churning out eight op-eds that are salvaged into one good one is a bit like a monkey eventually typing out Shakespeare. Or, to update the metaphor of the infinite monkey theorem, it’s a bit like Microsoft’s chat AI almost immediately becoming racist.

Either way, it remains an ominous read. You can read the full, slightly terrifying thing here.


Feature image from I, Robot.