Siri, What is Reading?

Pretty much everything I have done has been driven by a single question: what does it mean to understand something?

I remember, at nine, sitting at the back of the class with a friend writing dictionaries of nonsensical languages. As an undergraduate I fell in love with conceptual art, and as a postgraduate I studied mediaeval taxonomy and how it shaped people’s ability to imagine themselves and their world. When I became a writer, I wrote a novel consisting of nothing but a list of numbers, inviting readers to “understand” what I had written by composing meaning from them. Now I teach narrative analysis to intelligence analysts, treating causation as a narrative process that offers extra clues as to which apparent patterns in a dataset might be indicative of human behaviour, and which are just pretty swirls in the sand.

So it was incredibly exciting to host Kinga Jentetics, CEO of the publishing platform PublishDrive and a Forbes 30 Under 30 media entrepreneur, for the event Writing with Robots. PublishDrive are notable for their use of artificial intelligence to aid the business of publishing. The AI they have developed, Savant, helps to find better ways of marketing books by reading them and making judgements about what kinds of books they are and who might enjoy them.

The question of what it means to “read” is one that the book world is increasingly asking itself as audiobooks become ever more popular. In many ways, one might say this is simply an industry catching up with its own ableism (blind people have been reading in audio format for a very long time). But the idea of an AI “reading” a book and making judgements about it really does make us stop and think.

Savant is an interesting test case for what it means to “read” because he is a form of Turing test. He (after Gina Neff’s talk on the gendering of AI, we questioned Kinga on why Savant is a “he”, but she and her team insist he is “he”) demonstrates that he has “read” a book by giving outputs. The meaningfulness of those outputs serves, for those who follow the Turing Test model, as a way of proving or falsifying the “understanding” he has of the book.

That led us to some fascinating discussions of what outputs might demonstrate understanding. Could an AI solve the oldest and hardest question of all – “what is literary fiction?” – for example? Could it make judgements on the quality of a book? Interestingly, Savant is already able to analyse sentence structure, and this gives him some insights that feel uncanny. For example, he is developing an understanding of how emotion is conveyed. This is similar to the work being undertaken by the giant storytelling platform Wattpad, who use AI to predict whether a book will make great TV. Their AI develops an emotional map of a book, using language and structure (macro and micro) to follow the emotional rhythms they consider essential to creating stories with a hook.
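For the curious, here is a minimal sketch of what such an emotional map might look like in code: a sliding window moves across the text, and each window is scored against a small valence lexicon. To be clear, this is not Wattpad’s or Savant’s actual method; the lexicon, function name, and parameters below are purely illustrative.

```python
# A minimal sketch of an "emotional map": slide a window across the text and
# score each window against a tiny valence lexicon. Real systems use far
# richer sentiment models; everything here is a toy for illustration.

TOY_LEXICON = {
    "joy": 1.0, "love": 1.0, "hope": 0.5,
    "fear": -0.8, "grief": -1.0, "dark": -0.5,
}

def emotional_arc(words, window=50, step=25):
    """Return (word position, mean valence) points tracing the emotional rhythm."""
    arc = []
    for start in range(0, max(1, len(words) - window + 1), step):
        chunk = words[start:start + window]
        score = sum(TOY_LEXICON.get(w.lower(), 0.0) for w in chunk) / window
        arc.append((start, score))
    return arc

if __name__ == "__main__":
    text = "hope and love gave way to fear and grief in the dark final act"
    for pos, valence in emotional_arc(text.split(), window=6, step=3):
        print(f"word {pos:>2}: valence {valence:+.2f}")
```

Plotted over a whole novel, points like these trace the kind of rising-and-falling arc the Wattpad team describe following.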

The question was also raised as to whether an AI could understand allegory and metaphor. Many books are not “about” what an analysis of the words might suggest. Rather, the meaning (as when science fiction is used to explore political ideas or to serve as an allegory for political systems) relies on high levels of implicature. This raised the concern that if it did become possible for an “innocent” AI to decode allegory as a way of categorising books, that development could have devastating political consequences. Underground movements in oppressive or totalitarian regimes often rely on codes and allegories to communicate (one thinks of Polari, the secret language developed by the gay community in Britain). If an AI could reliably scan social media for coded language, that would be a crushing tool of oppression.

The final element of understanding, of course, is that of doing. As a writer, this excites me. The conversation unearthed some very legitimate concerns. Machine learning is very good at placing literature into an existing framework, but many important works seek to break the rules creatively. Would an increasing use of AI in marketing disincentivise such rule-breaking? But overall, I am positive. Just as AlphaGo, the artificial intelligence that beat the greatest human exponent of the world’s most complex game, has enhanced human understanding of Go, so literary AIs may eventually teach us things about storytelling. In particular, where we currently “feel” that something works or doesn’t, AI may help us understand why. If the question of whether AI will ever write great literature interests you, do come to our conference this October, where Marcus du Sautoy will be talking about just that.


To keep up to date with all Futures Thinking Network events, please email futuresthinking@torch.ox.ac.uk to join our mailing list.

Dan Holloway is a co-convenor of the Futures Thinking Network, News Editor of the Alliance of Independent Authors, and CEO of the spinout company Rogue Interrobang.