Puzzle World
On the perils of puzzle-solving, and the difference between puzzles and mysteries
Greetings. My last post speaks to a theme I want to keep returning to, so to begin this post I’ll retread some of that ground as an introduction to the problem of puzzles and mysteries. My point in “From Cyberculture to Dataism” is that we’ve drifted rather alarmingly off the “power to the people” course. This conclusion has been quite obvious for a decade or more, but it’s strangely obscured today. Many people talk as if the world around us is the world we envisioned and then deliberately architected. The truth is more that the 20th century vision of the 21st century was hijacked, though it’s frustratingly difficult to assign any specific blame. It was a hijacking that just happened.
The Age of Data
The end of the 2010s and beginning of the 2020s witnessed the popularizing of a new web-centric philosophy. Data played a central role. “Big data” had appeared by 2008, and “data is the new oil” became an oft-cited dictum. In the years that followed, data became a summum bonum, and capturing and manipulating it was seen as key to just about everything, including human happiness (remember the Quantified Self movement?). One of the century’s key characteristics is this data ethos. I’d bet historians will look back at the first two decades (plus) of the 21st century as—among other things but importantly—a paean to the power of centralized data. Given the turn-of-the-century view of the web as decentralizing force majeure, it’s a shocking outcome.
It’s important to note here that this ethos, this zeitgeist, centered as it is on computation and data, never fails to deemphasize humans while it inflates the importance and role of machines. That machines—computers—are important and even essential in modern society is hardly at issue. The issue is the creepy dissing of human capability that now too frequently accompanies computer-centric visions of the future. This is profoundly troubling, as I think it signals a shrunken horizon for the person. It’s not that Big Data AI is bad per se (not at all), it’s that it’s become a cudgel to beat out old-fashioned ideas about human potential. Nobel prize winner Daniel Kahneman’s 2021 book (co-authored with Cass Sunstein and Olivier Sibony) Noise: A Flaw in Human Judgment is typical fare this century. “Noise” is a pejorative statistical term, and it’s what humans tend to produce when relied on for judgments. Kahneman et al.’s antidote is—no surprise—an algorithm.
To be sure, we still give lip service to the old turn-of-the-century tropes about decentralization and personal freedom, but hearing this today reminds me of Notre Dame philosopher Alasdair MacIntyre’s account of modern morality in After Virtue: we use words that had meaning in the past but are now only placeholders for feelings. Our tech-speak is absent of principle, and comfortably devoid of meaningful action. It’s sort of an autopilot jibber-jabber.
Big Tech plays a huge role here. Companies like Meta or Google talk reflexively in the argot of freedom and empowerment, ideas borrowed from the old cyberculture. The rhetoric is handy because it communicates—though it’s clearly illusory—a continuity with the cyberculture vision of the web as ground-up revolution, the dismantling of the old order (many of the techbros in Silicon Valley today, millennials like Zuckerberg or Jack Dorsey, were moved by this rhetoric when they were up-and-comers). The reality, alas, is different. The reality is that the world has radically changed, but our love affair with data, corporatism, and centralized control is a throwback to the past we thought we’d left behind. In other words, the world around us has changed, but not in the way we had hoped or today seem even to be aware of, and—really importantly—we haven’t changed with it. This is another major theme of Colligo.
A quasi-syllogism:
We wanted Change A; we got Change B.
We still talk like we got Change A.
Ergo, we’re not really prepared to deal with Change B (which is the actual world).
This brings us to puzzles and mysteries.
Our drift from creative technology—or what David Graeber once called poetic technology, the big ideas that inspire—to centralized technology and data-fetish encouraged a diminution of our own mental powers, as I keep suggesting. More specifically, when AI got equated with data analysis (I call AI today “Big Data AI” or sometimes “Monster Truck AI”), its problems became puzzles whose solutions required data crunching. Big Data AI has been a boon for centralized data analysis, a sort of ode to the age of Big Iron in the 1950s (more on this in upcoming posts). Big Data AI can find patterns—it’s really quite good at this—but it’s generally crappy at theory formation, being a machine and all. It can discern human faces in pixelated data, find meaningful completions of word sequences (ChatGPT!), classify documents and tweets, recommend and personalize feeds, cluster anything clusterable, play chess and Go and Atari games (it’s super good at games), and even to some extent drive cars. These are puzzles in puzzle spaces. That’s what computational methods do: they solve puzzles. Minds solve mysteries. Attempting to solve a mystery thinking it’s a puzzle isn’t even—to paraphrase Jerry Fodor—wandering into a game of three-dimensional chess thinking it’s tic-tac-toe. It’s more like trying to solve relativity problems with Newtonian mechanics.
Mind is for Suckers
Back to “From Cyberculture to Dataism”: the lesson of the 21st century so far is that we gave up on the project of liberating the human mind with technology. Instead of tricky human-potential stuff, we threw ourselves whole hog into recreating the world as a puzzle, suitable for puzzle-solving with data-crunching computers. People could sort of ride along for free, supplying the data in the form of tweets and comments, taking a backseat in matters of thought and inference, as these had been handed over to puzzle-solving computation. Never mind that the world was—has been—filling with mysteries that will require minds.
Puzzle World
By the mid-2000s, the important puzzles for Big Tech involved influencing and manipulating users for profit. By the late 2000s puzzle-solving and pattern matching had become cultural currency, an entire ethos, made possible by the retooled read/write web. Big Tech wasn’t the only player. Wall Street got in on the fun. Hedge funds used AI to ferret out profitable stocks. The 2008 housing fiasco pulled into its orbit plenty of human folly and greed, but also a misplaced puzzle-solving idea known as “value at risk,” or VaR. The bubble went unrecognized because the numbers fit the VaR models: the computational approach saw the real estate derivatives market (mortgage-backed securities and credit default swaps) as a puzzle, not a mystery requiring humans on deck. Treated like a puzzle, there wasn’t a bubble. As a mystery, there clearly was. As with many catastrophic failures, the Gestalt picture of the problem was there prior to the failure point. It’s just that no one saw it in time. There weren’t any detectives, just analysts.
The US Department of Defense, once into wiretapping, secret codes, and the derring-do of spies, soon morphed into a kind of shady (or shadier) version of Silicon Valley. Predictably, the DoD came to see catching terrorists as puzzle-solving, a “game” involving a glut of Verizon phone data and other electronic breadcrumbs (more on this in a later post). True, terrorists weren’t really caught this way, but the puzzle approach using Big Data AI avoided pesky human involvement—humans with nothing to offer but flimsy brains, full of confirmation bias and trips to the bathroom. Spy games were simplified: “your mission, should you choose to accept it, is to click the START button with your mouse…”. Surveillance ramped up. Getting surveilled and treated like a data point wasn’t the idea, though. No wonder we’re in a funk.
Return to the Past
I said “recreating the world as a puzzle” above because puzzle-thinking marked most of the Cold War years of the previous century. The Cold War was scary, but it wasn’t particularly complicated. America and its allies needed more information to respond to threats and to chart out sensible and effective strategies for containing the Soviet Union. The Cold War really was, in other words, a puzzle. The game board was assumed; the challenge was to find the missing pieces. Our obsession with AI and computing today marks a return to puzzle-thinking, the problem-solving approach where computational thinking excels, but which misrepresents our own times. Today’s problems confront us with mysteries, not puzzles. AI is pulling us in the wrong direction.
The Difference Between Puzzles and Mysteries
The common theme in a puzzle is that more information helps solve it. The common theme in a mystery is that more information doesn’t help, and often makes things murkier. Most of us have too much information already. The information we do have often conflicts, and we now must contend with the spread of false information and fakes online. It’s no wonder we turn to computation to manage the “data deluge,” not recognizing that our own minds can make exhaustive data crunching unnecessary (and even unwise) and elevate us above the puzzle-world of high-tech “smart” devices and artificial intelligence. The core mystery we want to solve may not even be in the “data deluge.” We’re stuck crunching through it with AI algorithms and computers anyway.
Solving mysteries requires a distinctively human capacity to sleuth, to recognize clues. Clues aren’t more data, and AI systems today don’t recognize them. We do. Our data-crunching and analysis fetish would have been, again, perfect for the twentieth-century Cold War, where the world really was a puzzle. The broad context of our relationship with the Soviet bloc was stable and predictable. But today, most of the world is open, not closed, and it’s not a dichotomy—it’s a tangle of alliances, convoluted supply chains, and ephemeral deals with shadowy friends and adversaries. We engage in asymmetric conflicts with terrorist groups. The information is largely available and even known, but we don’t know how to make sense of it. We’ve wandered into a three-dimensional world full of mystery with two-dimensional data-crunching tools suitable for puzzles and games (to paraphrase Jerry Fodor again).
We should be looking for mysteries, not puzzles, and plumbing them with our minds. Viewing today’s world as a puzzle, solvable by “more data” and Big Iron computing, is a signpost that we’re stuck in the past, not progressing as we had hoped. The rhetoric of the 21st century so far is all about high-tech progress. The truth is rather more pedestrian: it’s a pretty un-innovative, boring, and benumbing century so far. I’m hoping we change that.
The big data AI puzzle mindset also keeps us stuck in a Digital McCarthyism. That’s the subject of my next post.
Erik J. Larson