Al Gore never claimed to have invented the internet, but he was an early and enthusiastic proponent. As Senator he introduced a bill that funded development of the internet’s underlying technology. As Vice President he offered soaring rhetoric about an “information superhighway … linking all human knowledge.”
When I first heard those words I was thrilled. Universal access to almost unlimited information would revolutionize education. Also, as an unreformed idealist, I had always felt it was a major flaw of modern democracy that no individual could easily access the full range of hard data about our society and government, from welfare programs to tax policy to defense spending.
Having those facts one click away would change everything. People would make much more informed voting decisions. It would be a better world.
A different reality
Go ahead and laugh. But keep in mind that in 1994 TV news was only halfway through morphing into an entertainment medium. Fox News didn’t exist. Spam email was just beginning to be a problem. The idea that the internet could devolve into a swamp of disinformation never really occurred to me.
Instead I figured the internet would be like any other medium, only better: You would have reputable and disreputable sources, and it would be easy to tell them apart. People would be able to surf and triangulate and easily distinguish fact from fiction. Better yet, I was sure that as the vaults of government and academic data opened, new opportunities to explore and visualize that wealth of information would proliferate.
Yet we now know that people have a terrible time distinguishing internet fact from fake news, as this recent Stanford study of nearly 8,000 students reminds us. And while there have been many laudable efforts to aggregate public data and provide interactive visualizations on the open internet, from President Obama’s Data.gov to Hans Rosling’s Gapminder to MIT Media Lab’s Data USA, their audiences, usability, and sustainability have been limited. (Wikipedia is a fantastic resource, but it's an encyclopedia, not a tool for visualizing the numeric data that describes the world.)
I also had no idea that “reality shopping” would become so seductive. Who knew the internet would become the ultimate big-box store for “facts” as products, where people would choose whatever reinforced their preconceptions? Or that one person’s fake news would become another’s canonical fact?
Nor could I have predicted how economically devastating the transition from print to web and web to mobile would be for real journalism. Relentless declines in advertising revenue have made it hard to sustain reputable journalists who check facts and know their beats, particularly at local news outlets. The pressure to turn content into clickbait has been felt by almost every journalist who has stuck it out.
Advertising and power
Well, you can’t blame tech, right? After all, the business of tech is tech, not truth.
Wrong. Yes, you can hold tech responsible—Google and Facebook in particular. Together they will eat an estimated 57 percent of U.S. digital advertising dollars in 2016, leaving their nine nearest competitors (Microsoft, Yahoo, Twitter, Verizon, etc.) to fight over another 15 percent. Major U.S. journalistic operations such as the New York Times and the Washington Post don't even show up on the radar.
Google and Facebook didn’t set out to decimate advertising-supported content, but they’re doing a pretty good job of it. The U.K. media think tank ResPublica has even suggested imposing a levy on the revenues of major online search and social networking services to fund journalism.
The unintended consequences of internet dominance go further, as Facebook’s role in disseminating fake news demonstrates. Facebook’s walled garden makes it hard to poke through and check sources on the open internet, so sensationalized junk swirls around its ecosystem unimpeded, boosting ad impressions and revenue.
To compensate, Facebook is experimenting with fact checking (following Poynter’s International Fact-Checking Network’s code of principles) and claims to be “disrupting” the financial incentives for people like this guy to publish fake news.
What about Google, the user interface of the internet? As usual, one of the world’s largest companies has been less than transparent about its efforts, although it claims to be withdrawing AdSense advertising from pages containing fake news. A top search result linking to a fake story declaring that Trump won the popular vote and a rash of Holocaust denial stories from neo-Nazi sites have prompted Google to seek a solution, according to this story in Search Engine Land.
But here’s my problem: This all seems very reactive. Facebook and Google in particular have been reluctant to admit the power of their positions, a coyness that conveniently absolves them of accountability.
Their responsibility goes beyond acting as fair brokers of others’ content or (to the degree that they can) quashing fake news. With such enormous technological and financial resources at their disposal, they need to step up and deliver on the internet’s original, enormous potential.
Restore the promise
One crucial area is the ability to visualize and interact with real data about the world – the environment, the economy, health care, immigration, housing – all the key domains that define our global society. Almost all of that information is open data, but widely distributed and often difficult to find, let alone visualize.
Whether or not a large percentage of ordinary citizens or even policymakers use it, I see this capability as fundamental. Lack of agreement on basic factual information has helped lead to our current impasse, with each tribe subscribing to its own “facts.”
At one point, Google seemed headed in the right direction. Google launched its Public Data Explorer in 2010 using the Trendalyzer software it acquired from the Gapminder Foundation in 2007, even hiring the software’s creators in the process. But the project has since been deprecated.
Google also leads the world in artificial intelligence. With concerted effort, how long would it take to, say, automate the application of the International Fact-Checking Network’s code of principles to evaluate any claimed “fact”? Would it be insanely difficult? Sure. But this is a company building self-driving cars. With great power comes great responsibility. At the least, I have no doubt that a consortium of the leading internet giants could pull together and accomplish it.
I also hear Facebook is collaborating with Pulitzer-winning site PolitiFact to fight fake news. Ironically, if you Google “politifact facebook” today, several search results link to articles on fake news sites that viciously attack PolitiFact’s credibility.
On the brink
As I write this, the virtual world of the internet seems hell-bent on distorting the real world, undermining rather than augmenting reality. Made-up nonsense enjoys equal status with verifiable fact, with real-world consequences (see Pizzagate). We are becoming what Hannah Arendt once called “the ideal subjects of totalitarian rule … people for whom the distinction between fact and fiction and the distinction between true and false no longer exist.”
The vulnerability to bad actors who wish to cultivate that credulity now verges on outweighing the internet’s astounding benefits. We’re on the cusp of multiplying that danger as we enter the era of virtual reality, which its leading proponent, Mark Zuckerberg, promises will be irresistibly immersive.
We face an existential information crisis. The solution is to rediscover the original promise of the internet and commit to its fulfillment—not with half measures that are toyed with and abandoned. Usable, consistent data visualization and factual verification features are not options; they’re essentials that were skipped in the rush to monetize. Only the internet giants have the wherewithal to pay down that debt.