Greetings, readers of Colligo. This post will be part of a loose collection of posts or essays asking: “How did we get here?” My intent is to plumb the past for insights into the present, an exercise that until recently would have made eminent sense to pretty much anyone with a pulse. Not so today. We seem to be living in a kind of ahistorical bubble. Social media’s an obvious culprit here, but I think the roots of our ahistoricism lie deeper.
The story of the internet and web reaches back to the 1960s. Let’s start there.
Thank you for reading.
—Erik J. Larson
Beginning in the 1960s, a talented group of engineers, scientists, writers, and thought leaders began shaping a vision for the internet and, later, the world wide web. They spoke in the argot of personal liberation. The coming networked world represented an alternative to, and an escape from, big business and its marriage to government. They aimed to disrupt not only the entertainment industry but the slow dynamics of science, business, education, and communication. They predicted an entirely new model for the planet—a system of bottom-up, decentralized networks. A cooperative knowledge society.
The high point of this thinking arrived when the cover of the July 1997 issue of Wired announced “The Long Boom,” adding, “We’re facing 25 years of prosperity, freedom, and a better environment for the whole world. You got a problem with that?” Few would.
But the twenty-first century thus far tells a different story. The gap between the mega-wealthy and everyone else has widened, imperiling the middle class as well as the poor and curtailing or eliminating many jobs. Web technology and applications of artificial intelligence (AI) have proliferated, but productivity has flatlined. We have also waged two messy wars and lived through two market crashes—the second, in 2008, severe—while watching the web transmogrify into a cauldron of misinformation and balderdash. Do we have a problem with that? Apparently not.
The “liberation” ethic of late twentieth-century cyberculture has also disappeared, though it briefly flowered in the early 2000s. After a terrifying crash in 2001—the “dot-com” crash—a new generation of entrepreneurs retooled the web, initially democratizing it. The next generation of websites was dubbed “Web 2.0” and featured the now commonplace “read/write” design that lets web users write comments, pen blogs, tweet, and so on. The “Like” button gave users a way to signal approval of content on the web, and early Web 2.0 sites like Digg turned that approval into voting—the more “Diggs” an article received, the higher it would rank. Media theorist Clay Shirky wrote about the coming wonders of the new “2.0” web, declaring that the decentralized, democratic web had become the new TV. Except that one couldn’t be a “couch potato” on the web. We now had a “cognitive surplus” to spend on designing and executing creative projects.
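To make that Digg-style mechanism concrete, here is a minimal sketch in Python. It illustrates vote-count ranking only; the `Submission` type and `front_page` function are hypothetical names of my own, not Digg’s actual code or algorithm.

```python
# Toy, Digg-style front page: submissions ranked purely by vote count.
# An illustrative sketch, not Digg's actual ranking algorithm.
from dataclasses import dataclass


@dataclass
class Submission:
    title: str
    diggs: int = 0  # number of approving votes ("Diggs")


def digg(sub: Submission) -> None:
    """Record one user's vote of approval."""
    sub.diggs += 1


def front_page(submissions: list[Submission], top_n: int = 3) -> list[Submission]:
    """More Diggs means a higher rank."""
    return sorted(submissions, key=lambda s: s.diggs, reverse=True)[:top_n]


if __name__ == "__main__":
    subs = [Submission("Web 2.0 explained"), Submission("Cognitive surplus"), Submission("Read/write web")]
    for _ in range(5):
        digg(subs[1])
    digg(subs[0])
    for s in front_page(subs):
        print(s.diggs, s.title)
```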
Sites like Wikipedia seemed to validate Shirky’s and others’ enthusiasm. The site quickly flowered into a proof of concept for not-for-profit production. The web seemed to enhance democracy while also empowering individuals, who could play the role of “citizen blogger” and weigh in on issues once controlled by old-media gatekeepers. New Yorker writer James Surowiecki made tech headlines with his 2004 bestseller, The Wisdom of Crowds. Whether in creative groups or individually, the web represented a new freedom, an escape from the conformist model of central control now in our past. Wired, it seemed, had nailed it.
The revelry was short-lived. By the late 2000s, a shadow had already fallen over the future, in the form of a monetization strategy based on advertising. Though few recognized it at the time, the dream of a new era of democratic freedom on the web was about to die.
“Data is the new oil” originally meant something positive. What it came to mean was that all those free democratic netizens, the “citizen bloggers,” amateur videographers and everyone else, were not just speaking their minds but releasing personal data to increasingly monopolistic companies, like Google, then Facebook, YouTube, Twitter, Instagram, and the rest. As the “Web 2.0” companies grew exponentially due to “network effects,” their user bases exploded from thousands to millions, then billions. Instagram attracted 25,000 users the day it was launched and reached a million within about three months. (It was later sold to Facebook for about $1 billion, with just thirteen employees.) Facebook started at Harvard, spread to other universities, then rolled out to the entire world. It soon had a billion users.
Network effects—exponential growth potential on networks like the web—thrilled entrepreneurs and investors. For a time, they thrilled users too, since large networks mean more opportunities to gain influence. Network effects buoyed futurists’ theories of exponential progress because, up to a point, they really were exponential. Websites, games, and applications all but unknown one month had a million users the next (there are only so many potential customers on earth, so claims of exponential progress necessarily peter out eventually). But it was the darker backstory of data capture, the conversion of users—people—into data, that ruined the liberation dream of the early web visionaries. The transformation of “new economy” ideas about cognitive surplus and the wisdom of crowds into all things data, “dataism,” was its own network effect—it happened quickly.
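To see why the “up to a point” caveat matters, here is a toy comparison with illustrative numbers rather than real platform data: pure week-over-week doubling versus logistic growth, which slows as the user base approaches a finite ceiling of potential customers.

```python
# Toy model: why "exponential" user growth must eventually peter out.
# Pure doubling vs. logistic growth toward a finite ceiling K of potential users.
# Numbers are illustrative, not real platform data.

def exponential(u0: float, r: float, weeks: int) -> list[float]:
    users = [u0]
    for _ in range(weeks):
        users.append(users[-1] * (1 + r))
    return users


def logistic(u0: float, r: float, K: float, weeks: int) -> list[float]:
    users = [u0]
    for _ in range(weeks):
        u = users[-1]
        users.append(u + r * u * (1 - u / K))  # growth slows as u approaches K
    return users


if __name__ == "__main__":
    exp = exponential(25_000, 1.0, 20)               # doubling every week, forever
    log = logistic(25_000, 1.0, 1_000_000_000, 20)   # same rate, but capped near 1B
    for week in (5, 10, 15, 20):
        print(f"week {week:2d}: exponential {exp[week]:>17,.0f}  logistic {log[week]:>15,.0f}")
```

By week 20 the pure doubling curve implies more users than there are people on earth, while the logistic curve has flattened near its ceiling.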
It started as an innovation. In 2002, Google rather ingeniously hit upon the idea of contextual ads with its AdWords and later AdSense services. Contextual ads are keyed to the semantics of the words a user types into a Google search or reads on a page. If I’m complaining about the rain, show me ads for umbrellas. If I’m reading a news article about running, show me ads for running shoes. Contextual advertising paid back Google’s investors and put Google in the black by the early 2000s. If the backstory stopped here, all would be well. The skies would clear. But it doesn’t stop. It turns the page to personalization.
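Here is a minimal sketch of the keyword-matching idea, assuming a small hypothetical ad inventory of my own. It is a simplification for illustration, not Google’s actual AdWords or AdSense logic.

```python
# Minimal sketch of contextual ad matching: pick ads whose keywords overlap
# with the words in a search query or page. A toy simplification; the
# inventory below is hypothetical.

ADS = {
    "umbrellas": {"rain", "storm", "wet", "weather"},
    "running shoes": {"running", "marathon", "jogging", "race"},
    "coffee beans": {"coffee", "espresso", "brew"},
}


def contextual_ads(text: str) -> list[str]:
    words = set(text.lower().split())
    # Rank ads by how many of their keywords appear in the text.
    scored = [(len(keywords & words), ad) for ad, keywords in ADS.items()]
    return [ad for score, ad in sorted(scored, reverse=True) if score > 0]


print(contextual_ads("so much rain this weekend, terrible weather"))    # ['umbrellas']
print(contextual_ads("training tips for your first marathon running"))  # ['running shoes']
```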
Suppose I’m reading about running and see an advertisement for running shoes. Only this time the ad is tailored to me personally: men’s running shoes, and they pop up in my exact size. I click on one, and it offers free delivery options, somehow knowing where I live and which merchants deliver free to my house.
This kind of personalization also powers recommendations for movies, books, and other products, stuffs newsfeeds with articles you’re more likely to agree with or find pleasing, pitches (supposedly) relevant offers, serves customized landing pages, and much else. Critics disagree (Eli Pariser has argued we’re now living in a “filter bubble”), but consumer-facing services aren’t inherently bad. They’re made bad by how we got there—they’re made possible only by collecting personal data. Today the big tech companies likely know what you like to read, who you are (profile information like age, race, sex, and so on), where you live and have visited (and where you stayed), and even when your habits might change.
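And here is a rough sketch of what personalization adds to the earlier contextual match: the selection now leans on personal data the platform has collected. The profile fields and inventory below are hypothetical, not any real company’s system.

```python
# Rough sketch of personalized ad selection: the contextual match from before,
# now narrowed with personal data the platform has collected. Profile fields
# and inventory are hypothetical, not any real company's system.

from dataclasses import dataclass


@dataclass
class UserProfile:
    sex: str           # inferred or self-reported
    shoe_size: float   # gleaned from past purchases
    city: str          # from IP address or device location


INVENTORY = [
    {"product": "men's running shoes", "sex": "male",
     "sizes": [9.0, 10.0, 11.0], "free_delivery": {"Austin", "Denver"}},
    {"product": "women's running shoes", "sex": "female",
     "sizes": [7.0, 8.0, 9.0], "free_delivery": {"Austin"}},
]


def personalized_ad(profile: UserProfile) -> str | None:
    for item in INVENTORY:
        if item["sex"] == profile.sex and profile.shoe_size in item["sizes"]:
            shipping = "free delivery" if profile.city in item["free_delivery"] else "standard shipping"
            return f"{item['product']}, size {profile.shoe_size}, {shipping} to {profile.city}"
    return None


print(personalized_ad(UserProfile(sex="male", shoe_size=10.0, city="Austin")))
# men's running shoes, size 10.0, free delivery to Austin
```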
Oddly, a new generation of academics, business leaders, and self-styled intellectuals now embrace the reduction of person to datapoint. Popular historian Yuval Harari elevates our diminishment to a worldview, a kind of quasi-metaphysics describing not only people but the entire universe as nothing but data. Harari calls it “Dataism.” Instead of calling out companies for outright manipulation and seeking a revival of humanist values and a human-centered culture, Harari and others simply make a virtue of the mistake, as if the lipstick really does make the pig more attractive. Dataism isn’t progress at all, let alone exponential progress. As a statement about a person, it’s hopelessly anemic, bureaucratic, and reductive.
In the postwar 1950s, social critics like William H. Whyte and David Riesman worried that “groupthink” in big business, along with a new and expanding consumer culture, threatened individual initiative and innovation. They worried the culture was becoming passive, explaining their concerns in books like The Organization Man and The Lonely Crowd. The web was supposed to fix these threats to human flourishing. Instead, we put them on steroids. What’s more conformist, passive, and unimportant than a datapoint?
To sum up: we may be living in our conformist past, not progressing exponentially on the heels of advanced technology. In coming posts I’ll introduce the idea that we are, in some sense, slaves to our “machinery.”
Thanks, Keith, for the Microsoft angle. It's easy to forget how dominant they were; they defined the rules of the game for decades.
A person could make a good argument that Microsoft's dominance by the early 1990s was such that it had largely become impossible for most other companies to monetize software through licensing revenue. The browser wars with Netscape led many young entrepreneurs in Silicon Valley to think hard about ways to monetize software that didn't involve directly charging end users. The ad-based alternative to software licensing has been very lucrative for the ad-serving companies themselves, but the human effects are beginning to smart. Legislators, especially at the federal level, are either too stupid or too bought to do anything particularly effective toward restraining tech abuses where user data is concerned.