Text, Trust, and the Turing Test

Are writers’ jobs about to be lost to computers? Could IBM’s Watson become the next Pulitzer Prize winner? It seems likely, according to a pair of related articles making the rounds in content strategy forums: first, an essay asking, Can an Algorithm Write a Better News Story Than a Human Reporter?, and then this notice about a somewhat different application of algorithmic writing, How Algorithmically Created Content will Transform Publishing. Hinting at how automated tools have displaced manual jobs in some industries, these articles read like wake-up calls for technical and other content writers.

The issue goes back to a principle called the Turing test: “…a test to see if a computer can trick a person into believing that the computer is a person too.” (http://simple.wikipedia.org/wiki/Turing_test). Algorithmic writing (not to be confused with robotic writing or automated penmanship) aims to produce, economically, content that is readable and useful, although not necessarily insightful, for a human reader. Let’s compare the two writing approaches mentioned above against that definition:

Automated sports journalism looks like writing because it is a tight collection of related facts and snippets cohesively joined by a template that understands effective sentence and news story structure. It passes the Turing test for human-like interaction from the sentence level on up. Perhaps not the kind of human I would choose to hold a conversation with, but seemingly authentic.
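
To make the template idea concrete, here is a minimal sketch in Python, assuming a hypothetical box score and made-up phrasing rules; it illustrates the general technique, not any news vendor’s actual system:

def pick_verb(margin):
    """Choose a verb whose intensity matches the margin of victory."""
    if margin >= 20:
        return "routed"
    if margin >= 10:
        return "handily beat"
    return "edged out"

def recap(game):
    """Join game facts into a sentence using a news-style template."""
    margin = game["winner_score"] - game["loser_score"]
    return (
        f'{game["winner"]} {pick_verb(margin)} {game["loser"]} '
        f'{game["winner_score"]}-{game["loser_score"]} on {game["date"]}, '
        f'led by {game["top_player"]} with {game["top_points"]} points.'
    )

print(recap({
    "winner": "Madison", "loser": "Jefferson",
    "winner_score": 78, "loser_score": 61,
    "date": "Friday", "top_player": "J. Alvarez", "top_points": 24,
}))
# -> Madison handily beat Jefferson 78-61 on Friday, led by J. Alvarez with 24 points.

The facts come from a database; the “writing” is the choreography of slotting them into sentence patterns a human reporter might have used.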

Algorithmically created content is wholesale aggregation of large units of pre-existing content, selected for thematic unity, into a publishable commodity: an information roll-up for sale, as it were. It is the type of information you wish you could get from a Google search, pulled together into a somewhat more comprehensive reference. If the transitions are smooth, you might accept this content as you would any encyclopedic collection of items you were looking for.
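
For comparison, a toy sketch of the roll-up approach, again purely illustrative: the tiny corpus, the keyword tags, and the overlap scoring are all assumptions standing in for whatever proprietary selection a real aggregator would use.

CORPUS = [
    ("Care and feeding of vintage rangefinders", "camera restoration lenses"),
    ("A field guide to indie record labels", "music vinyl indie"),
    ("Cleaning fungus from old lenses", "camera lenses fungus restoration"),
]

TRANSITIONS = ["To begin with,", "Relatedly,", "Finally,"]

def score(theme, tags):
    """Thematic unity measured as simple keyword overlap."""
    return len(theme & set(tags.split()))

def roll_up(theme_words, top_n=2):
    """Select the most on-theme passages and stitch them together
    with stock transitions into one publishable roll-up."""
    theme = set(theme_words.split())
    ranked = sorted(CORPUS, key=lambda doc: score(theme, doc[1]), reverse=True)
    chosen = [title for title, _ in ranked[:top_n]]
    return " ".join(f"{t} {title}." for t, title in zip(TRANSITIONS, chosen))

print(roll_up("camera restoration"))

Nothing here is composed; existing units are ranked, selected, and joined, with the transitions doing the work of making the seams look intentional.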

I believe that much of what we read (and listen to, but that’s another matter) is hardly worthy of attention in the first place. I don’t often read the classified ads or sports section in a newspaper, for example. I do read obituaries from time to time because you just can’t make up some of those stories—those are not about what people died of but what they lived of. But for me, minor updates about Little League stats are about as interesting as the list of Movies In The Park in my neighborhood association newsletter: dry facts lacking engagement.

But this kind of content is just right for that exuberant cub reporter among androids, the algorithmic sports-story writer. Fact-driven and ephemeral, it lacks the mature, engaging personality of writers like Frank Deford, whose writing is anything but robotic. But these advances in algorithmic writing remind us that human authenticity in other types of writing may soon be in jeopardy, particularly for reference material, where trust in the originator of the advice is on the line.

And I worry about that loss of authenticity. The resulting distrust of knowledge could affect our lives and relationships deeply.

Consider digital imagery. Film-based photography was long regarded as the last word in a court of law. But after it was replaced by digital photography, trust in pictorial authenticity quickly turned to skepticism: the “must have been Photoshopped” reaction. I’m concerned that algorithmic writing may likewise displace our trust in authorship, normally the very definition of applied personal effort and synthesis.

You see, algorithmic writing is poised to take over one common writing job: the ghost-writing of college themes.

Mild hyperbole aside, once these tools add just a bit of algorithmic revision of sentence structure, and perhaps of the rhetoric of argumentation, research papers and themes can be wholly “researched” and assimilated from archives of prior content, at which point verifying human authorship becomes much harder than simply checking for plagiarism. In the resulting wasteland of writing that can’t be authenticated, the venerable research paper could end up retired in favor of the oral defense as the preferred way to test a student’s apparent grasp of knowledge.

And without some other form of Turing test to verify the authenticity of what we hoped would be academic insights to drive civilization forward, what is there? Across this new intellectual wasteland, I seem to hear an unconvincing reply, “There’s an app for that!”


Can a Cylon ever be more than just an algorithmic writer? Am I a Cylon? Don Day ponders these and other crucial matters of publishing as he consults for Contelligence Group, LLC, and co-chairs the OASIS DITA Technical Committee, in between collecting and restoring old cameras and collecting and listening to great indie music.


2 Responses to Text, Trust, and the Turing Test

  1. Joe Gollner says:

    A great piece, Don.

    It’s nice to start the day with a little authentic content (:-]

    On the question of whether an algorithm could write news stories, the answer must be a definite yes. And this is largely because most news stories are so painfully formulaic, and their posturing to resemble inflammatory editorial opinion so obviously feigned and self-serving, that it makes perfect sense to finish the transition and just hand the job to a program.

    When my wife’s book club was wading through the latest best seller, which predictably touched on every conceivable social dilemma, my comment was that the book (and all of its ilk) could easily have been written by a machine. When my wife raised this at the book club meeting, there was an uproar of objections, and I believe I was burned in effigy. There are too many emotions in the book for this to be true, it was objected. I sent along a rejoinder to the following meeting that this was precisely why I had made my claim, sardonically observing that nothing is more predictable, or more easily manipulated, than emotion. I was officially classified as the spouse who is not to be quoted.

    Now you raise an important point, the key one to my mind, about the importance of authenticity and how the emergence of algorithmic compositions might introduce serious turbulence. On a number of recent projects, I have been re-introduced to the central importance of the network of authority that lies within content: the myriad of references that connect one utterance to a heritage that really does establish how the newest utterance can be taken. In these projects, I have come to see these networks of authority as one of the integral features of content, one of the things that makes it different from just another data type. I have had projects where we ourselves tried to extend automation across some of those lines, only to run into legal or insurance roadblocks that, while they seemed like old-fashioned obstacles at the time, were in fact very significant encounters with what counts as authority in our society.

    Fortunately, the forms of content probably within reach of algorithmic authors are ones we are already suspicious of: term papers, news stories, best-selling fiction… But the day is not far off when we will be forced to think much more carefully about where content comes from and to what extent we can act on it with confidence.

    Thanks again.

    Joe

  2. Mark Baker says:

    One aspect of the Turing test is that it supposes that the human being is trying to tell if they are talking to a man or a machine. For most of the text that may be written by men or machines today, the reader probably does not care.

    We have all got used to communicating with machines. Using the ATM once seemed eerily impersonal; now it seems merely convenient. In fact, we probably trust the ATM more than we trust the teller in the bank — the ATM makes fewer mistakes. The fact is, we communicate with machines all the time these days, and it doesn’t seem to bother us much.

    The issue of trust, which Joe rightly raises, is therefore changing. Increasingly, we may prefer to communicate and transact business with machines rather than with fallible and venal human beings. A machine does not have to pass the Turing test for us to trust it; we might trust it more if it does not pass the Turing test, if it seems like a reliable machine rather than a fallible human being.

    Why, then, should we care whether the text we are reading was written by a man or a machine? And if we care, why should we prefer what was written by man to what was written by machine? Why not prefer that which comes from the machine, at least for subjects where the machine’s virtues are more applicable than the man’s?
