18 Comments

Hi Eric,

Thanks! Good to hear from you. Okaaaay:

I'm not sure I can help much beyond what I wrote, but here's a stab: by the 16th century, mathematicians and astronomers had something like 800 years of meticulously collected observational data, all used to support the Ptolemaic model--the Earth at the center, geocentric--until Copernicus came along and proposed a heliocentric model (we're still using it today :)).

If Copernicus, instead of thinking about an entirely new idea--that the Earth might revolve around the Sun rather than vice versa--had taken all the Ptolemaic "big data" and used it to better predict the motions of the planets, solve retrograde motion problems with more accurate modeling of epicycles and equants, and so on, would all that modeling and predicting have helped? Nope. We'd likely still think the Earth was at the center of the cosmos, and the correct model would have to await some future Copernicus. Thankfully, Copernicus didn't have a supercomputer, so we got to the truth. Copernicus essentially ignored much of the data and reconceptualized the problem first.

Something like this is at least in the vicinity of what might help think through the issues. AI folks (like me) are really bad at this, too. When AI enthusiasts talk about deep neural networks, say, they talk as if they're extending human capabilities point blank, and nothing is lost. But there often is something lost--the absence of human insight, as the focus has shifted to downstream concerns. In general, I think we're--I include myself--sort of bad at "thinking about thinking." If we did that better, I suspect the distinction would be much more obvious. I hope this helped!


Yes, that is called "intuition", of which computers have none. But I think AI can in some instances, when used wisely, spark impulses and ideas for human intuition, very much like an oracle. I heard that the Chinese leadership uses it in some similar way. (It's just that spending tens of billions of dollars on an oracle may be a bit of a waste of resources – and I guess that was your initial point!)


Got it. Thank you! It does help. I don't think I was too far off.

"Thankfully, Copernicus didn't have a supercomputer, so we got to the truth. Copernicus essentially ignored much of the data and reconceptualized the problem first."

That exemplifies the basic idea, and I think it's an incredibly important and nowadays-underdiscussed idea. I'm ashamed to say I haven't read all of your work, so forgive me if this is old news, but I think you've trained your eye here on a core issue. (Though I do see you focusing on this issue, at least tangentially, when you emphasize the importance of abduction vis-a-vis deduction and induction.)


Very important and very underdiscussed! Thanks Eric. Your input is quite valuable to me and I'm sure all Colligo readers.


If I may, what truth did we get?

Except for putting the sun in the center (kind of - and you have a fifty-fifty shot of getting that right with a coin toss), everything about Copernicus's model was wrong. It was new math on old, increasingly corrupted data that tried to turn the solar system inside out (for, I suspect, reasons having little to do with science), and it was still hidebound by Greek physics (esp. uniformity of motion and movement in perfect circles). That his model contained epicycles was (ultimately) wrong enough, and that his model contained nearly twice the number of epicycles of the Ptolemaic system made it something of a train wreck. (In at least one instance, his epicycles did goofier things: Mercury librated across its epicycle.) It was not easier to use, nor was it more accurate than Ptolemy - and as the saying goes, when it is not necessary to change, it is necessary *not* to change. Ptolemy was still good enough for doing the few things expected of the works of a mathematicus in that time.

There were good, though ultimately wrong, reasons for believing the earth was at the center and did not move. One of the earliest and most perspicacious insights into this problem was a kind of proto-relativity remark made by Nicole Oresme, who pointed out that whether the stars or the earth moved, it would look the same from the viewer's frame of reference (Oh, those unscientific, medieval dark agers! ;) )

Whenever this comes up (very rarely), it is important to remember who got it right and how: Kepler, with an indispensable assist from Brahe's Uraniborg data dump. And it's important to remember that it took two hundred years for geomobility to go from "officially" proposed to empirically proven, with whole new physics concepts (like inertia) needed to even begin trying to explain why the earlier evidence against was wrong, and with many players - before and after Copernicus - contributing. Even Newton's Principia - mighty as it was - only laid out a theoretical basis for Kepler's correct elliptical orbits. The Earth's rotation was proven when Coriolis deflection was detected in (IIRC) the early 1800s, and heliocentricity was proven when stellar parallax was demonstrated shortly thereafter (again, IIRC).

Also, there was nothing new about the idea. Aristarchus's proposal is the first of the idea of which we have a record. He did have an interesting proof that the sun was bigger than the earth, but his main rationale for it being in the center was based on some Pythagorean esoterica about the four elements and positional hierarchy, with the center being the noblest position and fire being the most important of the four elements; hence the sun (fire) must be in the center. Quaintly right, but good luck getting to a theory of gravity - and later relativity - along those lines. ;)

This is a favorite area of study for me, as it is the first real example - of which we have excellent, in-depth records - of how scientific progress has occurred. There is no one indispensable thinker, politics plays a here-malign, there-beneficial role, and the path is fraught with lurches to-and-fro between Platonist mysticism and Aristotelean empiricism, and with the dross of failed ideas strewn among the gold of what turns out to be right. Many times scientists believed well in advance of proof because of the elegance of a theory or just plain pigheadedness, and, except where they turned out to be right, crashed and burned accordingly.

Seems like nothing much has changed.


Hi Longway,

I get it, and thank you for taking the time to comment. Copernicus's original model was indeed less accurate. I agree with much of what you've said here. But the "flip a coin" analogy for deciding on a geo or helio model is in my view far too glib. As you know, the empirical data strongly argued against a heliocentric model. Given the state of knowledge and "common sense" assumptions (Why wouldn't we feel the motion of the earth? Why wouldn't objects thrown into the sky go "backwards"? Why isn't there a perpetual wind blowing from the motion of the earth itself?), it's a bit of a miracle we hit on the heliocentric model at all, even by the 16th century. (And "we" didn't. Copernicus did, ancient Greek speculation notwithstanding.)

It's odd how you present the heliocentric case in the history of science, as if it's a knock against Copernicus that his peers generally thought he was full of it. I see it exactly the opposite way: it's MORE impressive that he had the right view in spite of the community's (and religious leaders') vigorous objections. What a more boring story if he had written De Revolutionibus and everyone read it and said "yes, that's it. Eureka!" Instead people generally dismissed it or outright attacked it. Is this Copernicus's "fault"? The tail is wagging the dog here.

The Copernican heliocentric model has a major virtue: it's true. I suppose that, more than, say, "fluoride prevents cavities," the helio model is a big deal, and rather central to our scientific worldview, so I'm glad he ignored all that data and had wacky non-empirical ideas about the sun. (There's a story of a scientist having a dream about a snake eating its tail leading to the correct structure of benzene. What's true is true.)

At any rate, I take Copernicus as an object lesson that ideas can have priority over data, even "big data," which is why I raised it in the context of AI and modernity. You actually help make my original point here, as I'm saying that it was IN SPITE OF a wealth of data and a more accurate geocentric model--many more centuries for "tweaks"--that the heliocentric model was proposed... the wind was not at his back... and he was right. Why would we ever reject the beautiful idea that the earth is at the center of the cosmos, if data fit nicely (or more nicely) to that theory? That's why it's an interesting case, and it's why data capture and analysis are downstream from theory and discovery.

Anyway, it's great that you've studied this case. And I have to add: I would also agree with your comment in a more philosophical vein that "nothing much has changed," at least in the sense that that sentiment can be an interesting starting point.

Many thanks for your expert comment here. It's clear you know your stuff!

Best,

Erik


This is a really nice "stab"! Ignoring - or burning 🔥 - the data (burning is more dramatic 😉😇 - but then who would want to live the life of Caesar, the destroyer of the republic?)... we could even coin it the "Copernicus vs Caesar approach". At one point, if we want to move forward, we must let go of the lie. A sophisticated lie is still a lie. But like all lies, they have their expiry date (so far unknown). We call it "big data" these days, or so it seems. 😉 I vote for the Copernicus approach. The truth wasn't "born" easily; it wasn't like a "Eureka" moment - the Ptolemaic "big data" model still had its reach over the lives of academics who stood up against it after Copernicus, e.g. Giordano Bruno. If there is anything to learn from this history lesson, it's that "big data" isn't going to give up easily.


I agree Jana! Thank you for your comment.


Two quotes from two different Georges come to mind:

“All models are wrong, but some are useful.” George E.P. Box

Why did you climb Mt. Everest? “Because it was there.” George Mallory


I have been very positive about AlphaFold until now. There was that nagging doubt, but the reporting I read (e.g. New Scientist) was so positive that I took the success for granted and never really dived into the actual papers. Well, this teaches me a lesson. Thank you, Erik, for putting this realism in my inbox.

Note that modelling based on actual science can differ from predictive systems like these GenAI ones. So, while AlphaFold deserves the scrutiny, that doesn't mean modelling is never 'science'. But I should be careful making these statements when I haven't dived as deep as I should.


Nice piece, Erik.

I think you've articulated a new theorem, call it Larson's lemma: A field of fundamental research that has taken on a dominant AI focus is liable to make less progress than otherwise.


Hi Courtney,

Of course I couldn't agree more! If we take an historical example, say computing the blast ratio of a fission device given some quantity of Pu-239 or what have you, we'd have (I don't know) hundreds of human computers doing the calculations. No go. John von Neumann realized this early on with the work of the Manhattan Project, and accelerated the development of electronic "computers" for doing the manual labor. More progress--but not a field of fundamental research (at the point of blast ratios the scientists were a long way from E = mc²). I think we still don't know how to think about thinking. It's as if the modern ethos is that computers can somehow do it for us. They can help us! And modeling has been proven effective in scores of situations, from building houses to shaping the wing of an aircraft. But protein folding is fundamental enough that it simply can't be *solved* by throwing a neural network at it, using prior examples of proteins. There's a line between human and machine "intelligence." Does it move? I suppose. But it's quite a bright line when looked at correctly. Thanks for your note. I hope this also somewhat addresses Gerben's excellent distinction (see above).


I've been a computer modeler in climate research for 30 years. What you say about the lure of modeling and the diversion of talent is true there, too. The field has gone from fluid dynamics models, which make mathematical sense ab initio, first to coupled ocean-atmosphere models (which only worked with some tricks, but they were still physically based), to so-called "Earth system" models. I remember standing in a lecture hall at the Moscone Center in San Francisco in 2000, when the hype started to kick off, thinking "what am I doing here?" (so years before the AI hype, but more limited to the geoscience bunch). It's now got worse with so-called "integrated assessment models", which are also a form of "artificial intelligence", at least from how much the policy making community believes in them. They are basically coded forms of prejudice and preconceptions. Since 2000 I've been winding down my career doing more useful things, like critical writing, supporting activists, or producing olive oil.


Very few companies actually invest in the intelligent automation stage, which includes the time-consuming and expensive process re-engineering (to translate human-performed processes/workflows into processes/workflows performed by AI/machines). And here is the explainability, or at least interpretability, part: if we don't understand or can't explain the AI results, how can we even translate right?


Most scientists have ambitions (not to end up ostracised and persecuted), dependants, egos, etc. Who wants to be the next Copernicus/Bruno? Not many people would raise their hands; most would choose to obediently follow the "800-year-old" crowd, and most people did.


My question is: of how much meaningful use can a compromised tool be? I know you argue for its usefulness in automating labour and drudgery... but are we using it for that? And are we using it right?


Whooh. This is a good one, Erik. Strikes me as really substantive.

I'd like to think more about the distinction I see you drawing between (full-blown?) science and (mere?) modeling. Could you help me?

Here's the rough idea I gleaned from your piece: the difference is that modeling is only for prediction, while "science-ing" also includes explaining and understanding (see: "[. . .] the really hard part of the protein folding problem is divining their holistic activity in the body [. . .]").

I suspect my rough idea isn't quite what you were getting at, so it'd help me if you'd offer the necessary corrections and clarifications.

Thanks!


Fantastic that we have these tools that augment, seed, and improve human intellect; they will lead to new discoveries in science and improve our lot in life.
