The use of artificial intelligence in mainstream news writing is advancing rapidly, according to Caleb Garling at Wired:
Two years ago, the Los Angeles Times became the first major outlet to report on an earthquake—almost instantaneously—with a bot. Today, companies like Automated Insights and Narrative Science are powering the production of millions of auto-generated “articles,” such as personalized recaps for fantasy sports fans. A similar metrics-based formula can be used to recap a customer’s stock portfolio performance. Here’s a snippet of auto-prose from one of Narrative Science’s investment reports:
The energy sector was the main contributor to relative performance, led by stock selection in energy equipment and services companies. In terms of individual contributors, a position in energy equipment and services company Oceaneering International was the largest contributor to returns.
That said, the reason many believe (and AI companies promise) that a writer’s job is still safe is that these stories are purely factual, basically converting raw data into language. Human writers could eventually focus on more complex writing for analysis, opinion, or humor—the layers of news that attract readers. A robot probably can’t offer a good explanation for why Tom Brady seemed distracted in the third quarter. But this is where AI researchers hold up a finger and say, “Yet!” More.
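At its simplest, the “raw data into language” step behind such auto-generated recaps is template filling: pick out the notable numbers, then slot them into canned phrasing. A minimal sketch of the idea (the field names and wording here are hypothetical illustrations, not Narrative Science’s actual system):

```python
# Minimal template-based "robo-writer": turns portfolio metrics into prose.
# The data shape and phrasing are hypothetical, for illustration only.

def recap(holdings):
    """Generate a one-sentence recap from (sector, return_pct) pairs."""
    best = max(holdings, key=lambda h: h[1])
    worst = min(holdings, key=lambda h: h[1])
    return (f"The {best[0]} sector was the main contributor to performance, "
            f"returning {best[1]:+.1f}%, while {worst[0]} lagged at {worst[1]:+.1f}%.")

print(recap([("energy", 4.2), ("utilities", -1.3), ("tech", 2.8)]))
# → The energy sector was the main contributor to performance,
#   returning +4.2%, while utilities lagged at -1.3%.
```

Real systems layer on more varied phrasing and grammar handling, but the principle is the same: no understanding of markets is involved, only the mapping of numbers to sentence templates.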
Garling raises obvious questions, such as: who is responsible if the newsbot misleads or defames?
A deeper, underlying issue is that the bot is an algorithm for guessing future news wishes based on past news wishes. But in human affairs, it’s often what we don’t know, don’t think to ask, and discover by accident that proves critical. Artificially created news feeds cannot, by their nature, provide that—not yet, and not ever.
That is a characteristic problem with any artificial intelligence application. Jeanne Carstensen asked robotics engineer Ken Goldberg at Nautilus:
What has working with robots taught you about being human?
It has taught me to have a huge appreciation for the nuances of human behavior and the inconsistencies of humans. There are so many aspects of human unpredictability that we don’t have a model for. When you watch a ballet or a dance or see a great athlete and realize the amazing abilities, you start to appreciate those things that are uniquely human. The ability to have an emotional response, to be compelling, to be able to pick up on subtle emotional signals from others, those are all things that we haven’t made any progress on with robots.
Or, as neurosurgeon Michael Egnor put it,
Your computer doesn’t know a binary string from a ham sandwich. Your math book doesn’t know algebra. Your Rolodex doesn’t know your cousin’s address. Your watch doesn’t know what time it is. Your car doesn’t know where you’re driving. Your television doesn’t know who won the football game last night. Your cell phone doesn’t know what you said to your girlfriend this morning.
Indeed, some programmers imagine a world of “peak code,” where programming becomes obsolete because the machines can code themselves. They point to highly sophisticated new-generation chess programs. But as Motherboard’s editor Michael Byrne observes,
Chess stays the same. The rules are the same, always. Programming chess bots is a highly idealized situation, in which the resulting software exists in a static environment. A chess game will be a chess game, tomorrow or a hundred years from now.
Real-world code seldom occurs in static environments, however. And this is why software rot exists. The environment in which code operates is dynamic and will continue to be so. The result of this is a sort of decay. Code becomes less and less well-suited for emerging conditions and it develops bugs and deep inefficiencies.
So the limitations of AI news are part of a larger pattern. Artificial knowledge is not experienced knowledge; it is simply a database of acquired information that can be manipulated. A powerful tool, but no more than a tool. That is a permanent limitation.
It’s worth recalling that AI news is a phenomenon of legacy mainstream media. It will likely make little difference to new media, the point of which is, in part, to tell the audience what traditional media does not, and to ask what one would never think to ask from reading it. Now, that would be tough to automate.
Here’s a visit with someone who believes that computers will become conscious. It would help if we knew what consciousness was before we tried to determine whether computers could have it:
Denyse O’Leary is a Canadian journalist, author, and blogger.