A “quakebot” is now writing earthquake reports…and other bots are reading them…almost without human intervention, according to an article on Poynter by Andrew Beaujon. Beaujon reports that Los Angeles Times (LAT) digital editor Ken Schwencke has written a computer program that automatically generates earthquake stories and injects them into the news.
Schwencke’s program picks up news feeds from the United States Geological Survey (USGS) Earthquake Notification Service whenever a seismic event takes place, writes the story, selects an image from Bing Maps illustrating the area affected by the quake, inserts the finished story into the LAT content management system, and sends an email to the copy desk telling the editors the story is ready for review.
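The workflow described above can be sketched in a few lines of Python. This is a hypothetical illustration, not Schwencke’s actual code: the magnitude threshold, story template, and event fields are assumptions, and a real version would poll the USGS feed over the network rather than read a static list.

```python
# Hypothetical sketch of a quakebot-style pipeline (not the LAT's actual code).
# The USGS Earthquake Notification Service feed is simulated here with a
# static list of event records; the threshold and template are illustrative.

MAGNITUDE_THRESHOLD = 3.0  # assumed cutoff for a "newsworthy" quake

TEMPLATE = (
    "A magnitude {mag:.1f} earthquake struck {place} at {time}. "
    "Details are preliminary and may be revised by the USGS."
)

def filter_events(events, threshold=MAGNITUDE_THRESHOLD):
    """Keep only events strong enough to merit a story."""
    return [e for e in events if e["mag"] >= threshold]

def write_story(event):
    """Fill the boilerplate template from one USGS-style event record."""
    return TEMPLATE.format(**event)

if __name__ == "__main__":
    feed = [
        {"mag": 2.1, "place": "10km NW of Barstow, CA", "time": "04:12 PST"},
        {"mag": 4.4, "place": "6km N of Westwood, CA", "time": "06:25 PST"},
    ]
    # The 2.1 shudder is dropped; the 4.4 quake becomes a story draft
    # that would then be filed to the CMS and flagged for the copy desk.
    for event in filter_events(feed):
        print(write_story(event))
```

The essential point the sketch makes is how little of this pipeline requires human judgment: a threshold, a template, and a feed are enough to produce a publishable draft.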
Earthquakes happen 24 hours a day. In fact, one statistic from 2012 indicates that there are 1.43 million earthquakes per year with a magnitude of 2.0 or more, for an average of one earthquake roughly every 22 seconds. An automated program that can track such events in real time gives the LAT a big leg up on earthquake coverage.
Schwencke’s quakebot can distinguish between frequent shudders that barely rattle the china and earthquakes that merit coverage. It also tracks events over time and folds in the corrections that the USGS regularly issues, updating the original story just as a good human reporter would.
Computerized news stories do not get into the Los Angeles Times without being reviewed by a human editor, but if robotic readers are scanning the internet for news and robotic reporters are collating data and posting stories to the newspaper’s content feed, it will not be long before robotic copy editors are reviewing the stories the robotic reporters post.
Robotic copy editors already exist. The Associated Press, the worldwide news distribution syndicate, publishes an interactive version of the AP Stylebook, which can proofread and copy edit articles while they are being written, making it easier for non-reporters to write like reporters.
There are also computer programs that help writers produce “search engine friendly” articles by ensuring the articles contain the key elements search engines look for. That completes the circle, enabling a news organization to capture data from the internet, generate a story based on that data, review the article for grammatical and stylistic correctness, ensure that the article will be picked up by the web crawlers, and post a story that other news organizations will then pick up and rewrite, as this story was.
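A minimal version of such a “search engine friendly” check is easy to imagine. The function below is an illustrative sketch, not any specific commercial tool: it simply reports which required keywords are missing from a headline and body, the kind of gate an automated publishing pipeline could apply before posting.

```python
# Illustrative SEO-style keyword check (an assumption, not a real product's
# algorithm): flag any required keyword that appears in neither the
# headline nor the body of a draft story.

def missing_keywords(headline, body, keywords):
    """Return the required keywords absent from the combined text."""
    text = (headline + " " + body).lower()
    return [kw for kw in keywords if kw.lower() not in text]
```

For example, a quake story whose draft never mentions the word “earthquake” would be flagged and padded with the missing term before it ever reached the crawlers.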
The search engines are complicit in this process because they publish continuously updated lists of the keywords that are trending at any given moment, ranked by the number of searches in which those keywords appear. This gives web writers a heads-up on what readers are looking for, and explains why the same stories are covered by virtually every news outlet.
Speed is essential in the new internet-driven news cycle. The early bird really does get the worm in the online world. Schwencke’s quakebot gives the Los Angeles Times a tactical advantage by enabling the LAT to be the first website to cover a seismic event, giving its reporters and editors a head start on the story. Schwencke has also written a program that captures police reports about homicides in the same fashion. One such program is an event. Two or more are a trend.
Schwencke’s programs are a symptom of a disconcerting trend in news coverage. The increasing automation of the news gathering and publishing process means that journalists, an already endangered species, will be under even more pressure as the “bots” take over more and more of the reporter’s job functions. The automation process will also reduce the number of assistant positions that would-be reporters and editors once took to learn the ropes and compete for the dwindling number of editorial jobs.
For the news-consuming public, however, there are more serious issues at stake. For some time now, the internet has been inundated by a tsunami of information, often duplicative, coming from a plethora of news-based websites. The reason behind this deluge of often identical data from multiple sources is that, on the internet, content is king. Content is what the web crawlers are looking for, and when they find a keyword on a given website, they direct visitors to that site, which generates advertising revenue for it.
Problems begin, however, when websites start scraping each other and rewriting the stories they capture without sufficient fact-checking or independent verification of the original reporter’s data. One recent event illustrated this when Newsweek may have erroneously identified Dorian Nakamoto as the mysterious Bitcoin mastermind “Satoshi Nakamoto” without any hard evidence to back up the claim, generating thousands of equally erroneous “look-alike” internet posts.
Recursive errors soon begin to appear as reporters quote, and sometimes misquote, each other in the rush to get their stories into the news feed. Sometimes, the stories are neither revised nor edited. To combat the widespread plagiarism epidemic, news organizations use programs designed to track and identify plagiarized versions of their stories. Other organizations use similar programs to ensure that their articles are sufficiently different from the source article to avoid triggering a plagiarism report. Some “reporters” are using “spinners,” computer programs that are designed to take original news stories from other websites and automatically restructure the stories so they will not violate anti-plagiarism rules.
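The plagiarism trackers and the “spinners” described above are, in effect, two sides of the same measurement. Below is a minimal sketch of the kind of n-gram overlap score such tools rely on; real systems are far more sophisticated, and the trigram size and tokenization here are illustrative assumptions.

```python
# Minimal sketch of an n-gram overlap check, the basic measurement behind
# both plagiarism trackers and "spinner" tools. Real systems use much more
# sophisticated fingerprinting; n=3 and whitespace tokenization are
# illustrative choices, not any vendor's actual algorithm.

def ngrams(text, n=3):
    """All word n-grams in the text, as a set of tuples."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(original, candidate, n=3):
    """Fraction of the candidate's word trigrams found verbatim in the
    original; a score near 1.0 suggests a lightly rewritten copy."""
    orig, cand = ngrams(original, n), ngrams(candidate, n)
    if not cand:
        return 0.0
    return len(orig & cand) / len(cand)
```

A plagiarism tracker would flag any candidate scoring above some threshold for review; a spinner, conversely, keeps restructuring the copied story until its score drops below that same threshold.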
It gets even worse. There are now computer programs that can generate completely bogus scholarly papers, and online academic journals routinely publish them for a fee, putting unsubstantiated information into circulation on the internet even though other programs exist to detect the fakes.
If programs like Schwencke’s quakebot are now writing…and reading…the news this way, more of the world’s conversations may be based on misinformation as websites rewrite stories without further research or data verification.
By Alan M. Milner