WEST LAFAYETTE, Ind. — The idea of “fake news” isn’t a new phenomenon – even the Romans spread lies and rumors to gain the upper hand – but in today’s world, where technology pumps out news and spreads it at breakneck speed, its potency has reached dangerous levels.
Quentin Hardy, editorial head of cloud computing at Google and formerly the deputy technology editor at the New York Times, says the quandary of fake news can’t be solved solely by further technological advances. But technology could be part of a suite of solutions, he says.
For example, Hardy says, if a story is spread by thousands of Twitter bots and fake Facebook profiles, readers and the social media networks should be able to identify that it came from automated accounts and is from a questionable source.
Hardy and Dan Goldwasser, professor of computer science at Purdue University, will discuss dealing with fake news and information during Dawn or Doom ’17, a conference at Purdue University on the risks and rewards of emerging technologies. Dawn or Doom will be held Sept. 26-27, on Purdue’s West Lafayette campus and is free and open to the public.
Dawn or Doom, which features a track called Designing Information, also will include a featured talk by Nicholas Thompson, editor-in-chief of WIRED magazine, focusing on the “dawn” aspect of science and technology’s influence over journalism. Other tracks at the conference include Designing Humans, Designing Cities, Designing Food and Designing the Workforce.
“At its root, you have to encourage people that it’s OK to be proven wrong,” Hardy says. “You do that through the education system. But technically speaking, I would like to see people build attribution bots and scour the web to expose the roots of these stories.”
Fake news also presents a psychological problem. Humans don’t like to be wrong, nor do they enjoy experiencing cognitive dissonance. Even when a news item is found to be fake, the lie lives on in many people’s minds, regardless of how ridiculous it may seem.
Goldwasser hopes algorithms that teach computers to understand natural language will help humans understand other humans and break biases.
His most recent project with computer science graduate student Kristen Johnson was analyzing U.S. politicians’ tweets on the topic of health care. Republicans framed the issue around cost, while Democrats framed it around care and empathy.
There were, however, some Republicans who broke from the party line. Goldwasser’s algorithm successfully identified which politicians would vote with the Democrats based solely on how they framed the issue in the public sphere – in this case, on Twitter.
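The general idea behind framing-based prediction can be sketched as a toy classifier. Note that the keyword lists, function names and threshold below are invented purely for illustration – this is a minimal sketch of the concept, not Goldwasser’s actual model:

```python
# Toy sketch of framing-based vote prediction (hypothetical, for illustration only).
# It scores tweets by which "frame" their vocabulary leans toward: cost vs. care.

# Invented keyword lists standing in for learned framing features.
COST_FRAME = {"cost", "taxes", "spending", "premiums", "burden"}
CARE_FRAME = {"care", "coverage", "families", "health", "protect"}

def frame_score(tweet: str) -> float:
    """Return a score in [-1, 1]: negative = cost framing, positive = care framing."""
    words = {w.strip(".,!?#").lower() for w in tweet.split()}
    cost = len(words & COST_FRAME)
    care = len(words & CARE_FRAME)
    total = cost + care
    return 0.0 if total == 0 else (care - cost) / total

def predicts_vote_with_democrats(tweets: list[str], threshold: float = 0.0) -> bool:
    """Predict a Democratic-aligned vote if the average framing leans toward care."""
    if not tweets:
        return False
    avg = sum(frame_score(t) for t in tweets) / len(tweets)
    return avg > threshold
```

A tweet like “We must protect coverage for working families” leans toward the care frame, while “This bill raises premiums and taxes” leans toward cost. A real system would learn such framing signals from data rather than from hand-written word lists.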
If computers could, in essence, read between the lines, it would be much harder for politicians to talk around an issue. Likewise, it would be harder for political activists to send coded messages in the guise of fake news.
Goldwasser remains cautiously optimistic about the solutions new technologies will offer and their impact on how people approach information.
The age-old problem of disinformation is not going to go away, nor is the modern technology that enables its rapid spread. Other alternatives for dealing with the issue – laws limiting who can use the technology and what they can say, for example – are not particularly attractive.
“At the end of the day, it has to do with if there is regulation and who has access to the information and technology,” Goldwasser says.
Writer: Kirsten Gibson, 765-494-8190, firstname.lastname@example.org
Sources: Quentin Hardy, email@example.com
Dan Goldwasser, firstname.lastname@example.org