
AI is making literary leaps – now we need the rules to catch up

John Naughton
A row over the release of a new language model highlights how ethics and the law are lagging behind
Sat 2 Nov 2019 12.00 EDT

Last February, OpenAI, an artificial intelligence research group based in San Francisco, announced that it had been training an AI language model called GPT-2, and that it now “generates coherent paragraphs of text, achieves state-of-the-art performance on many language-modelling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarisation – all without task-specific training”.
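The “all without task-specific training” claim is the striking part: GPT-2 was never trained on, say, a summarisation dataset, yet OpenAI’s paper reports it can be coaxed into summarising simply by appending “TL;DR:” to a passage and letting it continue. As a rough illustration of that zero-shot trick – a sketch of my own, assuming the small checkpoint OpenAI did release and the Hugging Face transformers library, neither of which is part of OpenAI’s announcement:

```python
# A sketch of the zero-shot summarisation trick reported in the GPT-2
# paper: append "TL;DR:" and let the model continue. Assumes the small
# public "gpt2" checkpoint via Hugging Face transformers
# (pip install transformers torch); output from this 124M-parameter
# model is rough at best.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled output reproducible for the example
generator = pipeline("text-generation", model="gpt2")

article = (
    "OpenAI, an artificial intelligence research group based in San "
    "Francisco, announced that it had trained a large language model "
    "but would not release it, citing concerns about malicious use. "
    "Many researchers objected that withholding the model prevents "
    "independent verification of OpenAI's claims."
)

# The appended "TL;DR:" acts as the task prompt; no summarisation
# training is involved anywhere.
result = generator(article + "\nTL;DR:", max_new_tokens=40,
                   do_sample=True, top_k=50)
print(result[0]["generated_text"])
```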

If true, this would be a big deal. But, said OpenAI, “due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.”

Given that OpenAI describes itself as a research institute dedicated to “discovering and enacting the path to safe artificial general intelligence”, this cautious approach to releasing a potentially powerful and disruptive tool into the wild seemed appropriate. But it appears to have enraged many researchers in the AI field, for whom “release early and release often” is a kind of mantra. After all, without full disclosure – of program code, training dataset, neural network weights, etc – how could independent researchers decide whether the claims made by OpenAI about its system were valid? The replicability of experiments is a cornerstone of the scientific method, so the fact that some academic fields may be experiencing a “replication crisis” (a large number of studies proving difficult or impossible to reproduce) is worrying. We don’t want the same to happen to AI.

On the other hand, the world is now suffering the consequences of tech companies like Facebook, Google, Twitter, LinkedIn, Uber and co designing algorithms to increase “user engagement” and releasing them on an unsuspecting world with apparently no thought for their unintended consequences. And we now know that some AI technologies – for example, generative adversarial networks – are being used to generate increasingly convincing deepfake videos.

If the row over GPT-2 has had one useful outcome, it is a growing realisation that the AI research community needs to come up with an agreed set of norms about what constitutes responsible publication (and therefore release). At the moment, as Prof Rebecca Crootof points out in an illuminating analysis on the Lawfare blog, there is no agreement about AI researchers’ publication obligations. And of all the proliferating “ethical” AI guidelines, only a few explicitly acknowledge that there may be times when limited release is appropriate. For now, the law has little to say about any of this – so we’re currently at the same stage as we were when governments first started thinking about regulating medicinal drugs.

In the case of GPT-2, my hunch is that fears about its pathogenic propensities may be overdone – not because it doesn’t work, but because humans have long experience of dealing with print fakery. Ever since Gutenberg, people have been printing falsehoods and purporting to be someone else. But over the centuries, we’ve developed ways of spotting fakes. Accordingly, machine-generated text poses less of a problem than video deepfakes.

GPT-2’s capabilities are undoubtedly impressive, though.

In a fascinating essay, “I, Language Robot”, the neuroscientist and writer Patrick House reports on his experience of working alongside OpenAI’s language model, which produces style-matched prose in response to any written prompt it is fed.
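House had access to the full model; readers can get a feel for the same prompt-and-continue loop in miniature. A minimal sketch, assuming the publicly released small checkpoint and the Hugging Face transformers library – the checkpoint name and sampling parameters below are illustrative choices, not House’s actual setup:

```python
# Prompt-conditioned generation with the small public GPT-2 checkpoint.
# Assumes pip install transformers torch; not the full model House used.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Any written prompt; the model continues it in a matching style.
prompt = "Do you remember the exact moment you decided to be a writer?"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sampling (rather than greedy decoding) gives varied, prose-like output.
output = model.generate(
    input_ids,
    max_length=120,                        # total length, prompt included
    do_sample=True,                        # sample from the distribution
    top_k=50,                              # only the 50 most likely tokens
    top_p=0.95,                            # nucleus sampling
    pad_token_id=tokenizer.eos_token_id,   # silence the padding warning
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```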

It was, he recounts, an instructive exercise. “The language robot and I never wrote the same story,” he recalls. “It became, eventually, a writing tool, both as a curator to new ideas and a splint for bad ones. When, after a few months, I lost access to it, my brain felt like it had been lesioned. I missed it. I had found myself jumping into its head, asking what it might write as one would a human writing partner, which made me wonder: should I give it the same theory-of-mind, inductive respect as I give to other thinking things? Could I even help but do otherwise?”

The most fascinating part of House’s report deals with his idea of asking the robot the same questions that George Plimpton asked Ernest Hemingway in his famous Paris Review interview in 1958.

In reply to the question whether it remembered the exact moment when it decided to be a writer, it replied: “I don’t remember ever wanting not to be a writer. Writing had always been my way out. My escape. When I was a little kid, my sister was always coming home from school with homework assignments and I knew if I got hers and mine done early I could spend the next day’s lunch hour looking at books. It was the first time that I actually thought that being a writer might happen.”

And when asked what it did when it got stuck, it replied: “It’s hard to say, but I would say that it would be better not to have a muse at all, to do all that writing you think is going to be good without any conscious control over what you do. It’s better to be totally in charge of your own actions.”

At this point, the reader gets that eerie, uncanny-valley feeling: this is almost, but not quite, authentic. But the technology is getting there. Why, any day now it’ll be writing newspaper columns.

What I’m reading

The web ain’t the world
A good report by the Reuters Institute at the University of Oxford challenges conventional wisdom by finding that most people still get their news from offline sources.

Culinary conditioning
TheConversation.com has an intriguing essay – “How steak became manly and salads became feminine” – by Yale historian Paul Freedman.

It’s a bot’s world
Renee DiResta has written an insightful piece on the algorithmic public sphere called “There are bots. Look around” at Ribbonfarm.com.



Comments


    “In the case of GPT-2, my hunch is that fears about its pathogenic propensities may be overdone – not because it doesn’t work, but because humans have long experience of dealing with print fakery.”

    All the alarm about FAKE NEWS suggests otherwise.

    In this case, the researchers are probably correct. There is a great danger that this would be applied more for evil purposes (flooding social media with even more false content, but stuff that appears to be of higher quality than the common or garden troll could produce) than for good.

    However, if this model is able to summarise effectively, it would have a great future when applied to journalism. I wonder how difficult it would be to take a news article, fact-check it, eliminate all hyperbole, adjectives, mistaken conclusions and false comparisons, and then produce a few sentences that leave readers with all the salient facts in a small fraction of the verbiage. Ultimately it might reduce a 24-hour news cycle to “nothing important happened today”.
    And imagine how helpful it would be to literature. We could reduce Shakespeare’s works to a page or two per play, War and Peace to a short story and Harry Potter to a couple of sentences.
    With a bit more effort it might even summarise the sum of human endeavour to “mostly harmless”.

    This does sound a bit like what Orwell described in 1984, with machines making books for the proles.

      Agreed. The impressive thing is that Orwell was predicting this in 1948, when hardly anyone had SEEN a computer, let alone used one. And the few who had used one were using them for calculating tasks. Even rudimentary email and word processing were decades away, as was networking. So his forecast of machine-written and widely disseminated novels was extraordinarily prescient.

      He also predicted TV screens that watch you at home and let a central controller communicate back to you – basically a webcam experience.
