The Big Chill
We're at the end of a tech revolution, not the beginning. Our religion is "dataism," and data-driven AI is form-fit to anti-democratic objectives, like surveillance and killer drones. What's next?
Hi everyone,
I’m kicking around the idea of an “open mic,” where I ask readers the question that’s been bothering me lately: what’s wrong with the world today, and where are we headed? I’d be happy to promote thoughtful responses to a guest post.
This week, I’m offering my own analysis. This post is the first in a series about the world we’re living in today and why it’s so off-kilter. Technology is a major factor, and in this post I dig into the “tech problem” and why it’s destabilizing the world. In later posts in this series I’ll try to propose solutions and paths forward. I will no doubt need your help.
I’m making this post paid because it represents quite a bit of work and thought. That decision was not easy, because I want many people to read this and I’m keen to solicit feedback. Please do consider a paid subscription, as it’s your contribution that enables me to keep Colligo going. Thank you again for your support of Colligo.
I hope you enjoy “The Big Chill.”
The Discovery of Big Data AI
Here’s a brief recap of the 21st century so far. In the early 2000s, a year after the Y2K nonsense, we tulip-bulbed the newly commercial world wide web, resulting in the DotCom crash of 2001. Investors abandoned “Web 1.0” companies en masse, hopeless Pollyanna projects like Pets.com and other web start-ups, hoping high-tech sectors like biotech and nanotech would make viable businesses in the wake of the web debacle. The diaspora was short-lived, and a new generation of tech-savvy youngsters reinvented the world wide web: Mark Zuckerberg (Facebook), Kevin Rose (Digg), Caterina Fake (Flickr), Reid Hoffman (LinkedIn), and Jack Dorsey, Evan Williams, and friends (Twitter). This became known as “Web 2.0.”
By the mid-2000s, renting the servers, routers, and other equipment required to launch a company (in a garage) had gotten cheaper, too, and the new rockstars of Web 2.0 ignored, at least initially, venture capital and the Sand Hill Road scene, turning instead to modest “friends and family” investment. (This is not entirely true: Zuckerberg took gobs of venture capital for a fledgling Facebook, though other early web innovators like Kevin Rose went friends and family; as I recall, he started Digg with an initial fifty thousand dollars.) Ah, yes. The 2000s. Say good-bye to the web. Say hello to the web.
From the start, though, there were problems. Google realized it first: it had no obvious way to pay back its investors, because it wasn’t charging money for its service. Traffic didn’t monetize itself. Web 1.0 either didn’t have traffic or had to pay for it. Web 2.0 had traffic that no one paid for. The solution? Run ads. Google hit upon the ingenious idea of running contextual ads tied to the keywords in a search query, and first AdWords appeared, then AdSense. As Marc Andreessen would put it later, the banks wouldn’t touch the web with its new breed of founders, so without direct monetary transactions from customers, advertising became king. And it was.
Targeted ads work better when they’re, well, targeted, so web companies like Google, Facebook, and the rest started collecting data about users. A search engine or a social network wasn’t just a service; it was a means of collecting data about human beings, called “customers.” By the mid-2000s, Google had hundreds of millions of such customers, so analyzing all that private data (your name, your birthdate, your search preferences, your location, your purchasing habits) required serious number crunching. Big Data AI was born: the idea that “artificial intelligence” is powered by volumes of data, and that the algorithms making it intelligent are stochastic, not rule-based. Statistical analysis, in other words. The field predated the rebirth of the web in the 2000s, but it now dominated nearly all research and development on AI, or more specifically its subfield “machine learning,” and it powered the ad strategies and much else in the newly retooled world wide web.
Machine learning was once a whole suite of techniques and algorithms, with names like “Naive Bayes,” “Max Entropy,” and “Support Vector Machines” (as well as “Artificial Neural Networks”), and many of these techniques had been well explored for decades. It may sound odd today, but the machine learning approach got dissed by early AI researchers as “shallow statistics,” thought not to be a path to general intelligence. The blow-off was superficial, though: machine learning in the early days before the web suffered from a paucity of training data and from slow computers. By the mid-2000s, with Moore’s Law doubling computing power every eighteen months and exponential growth curves of web users dumping megabytes of text and other data onto the public web, machine learning worked. It worked really well. Data-driven or “Big Data AI” had arrived. It personalized content, recommended products, ordered news feeds, and filtered spam. The web and machine learning go hand in hand. The only problem, not for the commercial web but for the goals of AI as a field, is that Big Data AI is not a solution to the problem of creating genuine intelligence.
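To make the “learning from data” part concrete, here’s a minimal sketch of the kind of Naive Bayes spam filter that era produced. It assumes Python with the scikit-learn library installed, and the messages and labels are invented toy data; the point isn’t the code, it’s that the “intelligence” is nothing but word counts over prior examples:

```python
# A minimal sketch of 2000s-style machine learning: a Naive Bayes spam filter.
# Assumes scikit-learn is installed; the training messages are invented toy data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "win a free prize now",     # spam
    "cheap meds click here",    # spam
    "lunch tomorrow at noon?",  # not spam
    "meeting notes attached",   # not spam
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()          # bag of words: pure counting, no understanding
X = vectorizer.fit_transform(messages)

model = MultinomialNB()                 # word-frequency statistics do the rest
model.fit(X, labels)

# The model generalizes from the examples it has seen, and nothing more.
print(model.predict(vectorizer.transform(["free prize click here"])))  # -> [1]
```

Scale that up to hundreds of millions of users and documents and you have the engine underneath spam filters, personalized feeds, and ad targeting.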
Big Data AI Isn’t Real Intelligence
It might be hard to see problems when you’re making oodles of cash, but there was a problem in full view anyway: machine learning works by provision of prior examples (data) and generalizes to predictions or rules based on those examples. This is known in mathematical and scientific (and philosophical) circles as induction, a weak form of inference, because just one counter-example, one “black swan,” can invalidate an inductive inference. On the plus side it’s ampliative, meaning it adds new information, unlike the older deductive or rule-based approaches pre-dating the web, sometimes called “Good Old Fashioned AI.” If data analysis is all you want, though, machine learning is the only game in town.
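To see how thin that guarantee is, here’s a toy illustration of the black-swan problem (plain Python, invented numbers): an inductive “rule” holds right up until the single observation that breaks it.

```python
# A toy illustration of why induction is weak: a generalization from examples
# survives only until one counter-example shows up. The data here is invented.
observed_swans = ["white"] * 10_000          # every swan observed so far

all_swans_are_white = all(c == "white" for c in observed_swans)
print(all_swans_are_white)                   # True: an inductive generalization

observed_swans.append("black")               # one black swan arrives
print(all(c == "white" for c in observed_swans))  # False: the rule is invalidated
```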
The only problem is that induction is provably inadequate for genuine intelligence, the kind of smarts humans have. Yes, we use induction, and deduction for that matter, but we also use a context-dependent inference known as abduction, or inference to the best explanation. This is a story I’ve told many times before (it’s the point of my book, The Myth of Artificial Intelligence), so I’ll cut to the chase: Big Data AI is hopeless as a model of human intelligence. It won’t get us to so-called Artificial General Intelligence (AGI), and it won’t get us past AGI to superintelligence. It will sell ads, recognize images, and personalize news feeds. It’ll even drive cars, sort of. Lately we’ve learned that it’ll answer questions from text culled from digitized books and the web. What it won’t do is make an artificial mind. The fact that we think our tech is becoming humanly intelligent is the whole problem, kit and caboodle: why invest in stupid human brains, slow computers made of perishable meat, when we’re evolving true intelligence in our computers? Why worry about human culture at all?
Call this, errr, problem number one (quite a doozy). Problem two is that Big Data AI is tailor-made technology for advancing anti-democratic objectives, like surveilling a population (image recognition, or more specifically facial recognition) or autonomous navigation, as in killer drones. Big Data AI personalizes your news feed. It also watches you, tracks your movements, and (if the state doesn’t like you) blows you up. Sweet! But in all seriousness, here we’re confronted with unintended cultural consequences: the commercially successful retooling of the world wide web and the ascendancy of Big Data AI also started destabilizing the world, with rising levels of anxiety, anger, and confusion, social dysfunction, and geopolitical instability. There’s no guarantee we’ll survive the Frankenstein we’ve created. The jury is still out. But the world today shows clear signs of straining under the tech that now encircles it, and anti-democratic, totalistic movements are on the rise. For one, we’re now fighting possibly existential wars in the Middle East and in Ukraine, and the recent drone attack that killed US soldiers in Jordan led today to the killing of an Iran-backed Kataib Hezbollah commander in Baghdad by a US missile. We’re living in a high-tech, data-driven artificial intelligence world that’s also mired in bloody new wars and conflicts, both “on the ground” and in cyberspace. What’s a sure sign you’re at the end of a creative era? Social unrest. Stagnation. Dysfunction. Inequality. And too often: war.