It’s quiet. Too quiet.
That’s what I was thinking as I walked through the residential streets of Las Vegas a few weeks ago, canvassing before the November 8 election for a political candidate. The candidate did well in Nevada but lost nationally. The campaign I was working on had by all measures an excellent "ground game," well-organized and staffed with bright, enthusiastic supporters who were willing to put in long hours to get out the vote.
It was folks on the other side who were uncannily quiet and virtually invisible. They seemed to have had no ground game whatsoever. Even in a city where many people work at night and sleep during the day, the near silence was weird.
In the past, walking through working class neighborhoods in the days before an election, I would see the opposition out in force. In Las Vegas in 2016, save for a tall, tasteless, gold-tinted building looming surreally into the sky with the opposing candidate’s name blazoned across the top, the opposition had no physical presence at all.
I began to wonder if the campaign I was working with was missing something important. Something I couldn’t see on the streets.
As has been widely noted in many election post-mortems, that something was the social media and internet presence of the opposition, and in particular the vast amount of what is being called "fake news" driving opinions and behavior. Fake news can be surprisingly potent, magnified on the web, and accelerated by faked traffic pushed by bots and hackers from sites in the United States and all over the world.
We’ve seen accounts of hundreds of websites bolstering the Republican candidate in this election, emanating from the former Soviet state of Georgia and from Macedonia (Macedonia!), where teenagers discovered that they could make money by promulgating unfounded nonsense in favor of one candidate and vicious slanders about the other that their audiences were eager to read on their sites.
The fake news phenomenon is not limited to the United States. The problem has a broad international dimension, with sites in various countries influencing events within their borders and elsewhere. According to an Italian fact-checking site, half of the most popular "news" stories concerning the recent referendum in Italy were actually faked, and may have influenced the outcome.
The Washington Post writes of the 2016 U.S. presidential election: "The flood of ‘fake news’ this election season got support from a sophisticated Russian propaganda campaign that created and spread misleading articles online with the goal of punishing Democrat Hillary Clinton, helping Republican Donald Trump, and undermining faith in American democracy, say independent researchers who tracked the operation."
Less discussed, but no less important, is the ability of hackers to skew the apparent interest in and support for certain kinds of faked stories, as reported in Bloomberg. This second multiplier effect takes internet lies to virulent levels of penetration. The implications of this worldwide manipulation of the information we absorb for the conduct of foreign policy by the United States and other countries are serious and disturbing.
In the last few weeks – although painfully late – Facebook, Google, and Twitter have taken steps to ameliorate the toxic effects of the fake news on their platforms, but these efforts alone won’t fix the problem. Some believe that good news can drive out fake news, so that the counter-strategy lies primarily in supporting legitimate news sources. We’ve been urged to subscribe to the New York Times and the Washington Post, and to tweet and distribute stories from responsible publications.
But I believe we need something both more powerful and easier to put into play. We need a tool – which I would like to challenge software engineers to develop – that can evaluate the content of written material for its truthfulness. Not judge it. Certainly not censor or redact any of it.
We need a filter that functions merely to advise readers who choose to employ the tool as to the degree of confidence the program has in the authenticity or correctness of written statements found in a digital environment.
As a writer, I would value such a tool. I would be grateful for the ability to run a draft of this article through a filter that could flag statements that might not be factual. I could still decide to include them and stand by them, but a tool of this sort could, at a minimum, help avoid some embarrassing gaffes.
As a reader, I would value such a tool even more, especially if, in addition to using it on specific pieces, I could subscribe to a service that would automatically, algorithmically vet any article of purported news I open on my screen. It could work by flagging dubious statements, or it could simply rate an article with a percentage signifying the degree of confidence in its truthfulness.
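To make the proposal concrete, here is a minimal sketch in Python of the two behaviors described above: flagging individual dubious statements and rating a whole article with a confidence percentage. The red-flag phrase patterns are purely hypothetical placeholders for illustration; a real system would need to verify claims against fact databases and source reputations rather than match word patterns.

```python
import re

# Hypothetical placeholder heuristics, for illustration only.
# A real tool would check claims against fact-checking databases
# and source-reputation data, not surface phrases like these.
DUBIOUS_PATTERNS = [
    r"\beveryone knows\b",
    r"\bthey don't want you to know\b",
    r"\b100% proven\b",
]

def flag_dubious(sentence):
    """Return True if the sentence matches any placeholder red-flag pattern."""
    lowered = sentence.lower()
    return any(re.search(p, lowered) for p in DUBIOUS_PATTERNS)

def rate_article(text):
    """Return (confidence percentage, flagged sentences) for an article.

    Confidence here is simply the share of sentences not flagged;
    the percentage-rating idea comes from the article, the formula
    is an assumption made for this sketch.
    """
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 100.0, []
    flagged = [s for s in sentences if flag_dubious(s)]
    confidence = 100.0 * (len(sentences) - len(flagged)) / len(sentences)
    return round(confidence, 1), flagged

article = ("The city council voted on Tuesday. "
           "Everyone knows the vote was rigged! "
           "Turnout figures will be published next week.")
confidence, flagged = rate_article(article)
print(confidence)  # 66.7 — two of three sentences pass the placeholder check
print(flagged)
```

A subscription service of the kind described could run such a scorer automatically on each article a reader opens, displaying the percentage or highlighting the flagged sentences inline.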
Freedom of expression is our most fundamental right, the one upon which all other rights depend. Brilliant software engineers have created clever programs to deduce from our online lives what products we might be willing to spend money on.
I challenge them to create new mechanisms for helping all readers of digital content separate fact from fiction.
Seth Freeman is a multiple Emmy-winning writer and producer of television, a journalist, and a playwright.
The views and opinions expressed here are those of the author and do not necessarily reflect the official policy or position of the Pacific Council.