19 Comments
Nov 10 · Liked by Erik J Larson

So far, AI doesn't fulfill its marketed promises at all, and that's what critics usually point out. LLMs have no understanding of the world, and they *do not* pass the Turing test (just use a common-sense test that humans pass at 95% or better; the best models get around 50%, and there you go, fraud spotted). That being said, there is no doubt that there is pattern-recognition intelligence in AI. For example, I use AI music models, and they have an impressive mathematical grasp of, and proficiency in, the most important transformations in music. That's truly valuable. BUT... if someone tries to sell the model as an unbelievable songwriter that will replace musicians, I'll say no: all alone, it *sucks*.
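
To make the kind of test I mean concrete, here is a toy scoring harness (the items and the always-"yes" stand-in model are invented for illustration, not from any real benchmark):

```python
from typing import Callable, Iterable, Tuple

def accuracy(ask: Callable[[str], str],
             items: Iterable[Tuple[str, str]]) -> float:
    """Fraction of items where the model's answer contains the expected string."""
    items = list(items)
    correct = sum(1 for q, expected in items
                  if expected.lower() in ask(q).lower())
    return correct / len(items)

# Tiny made-up item set; a real common-sense benchmark needs hundreds of items.
items = [
    ("Can a fish climb a ladder? Answer yes or no.", "no"),
    ("If I drop a glass on concrete, will it likely break? Answer yes or no.", "yes"),
]

# A dummy "model" that always answers yes scores 50% here; that is the kind
# of human-vs-model gap I'm pointing at.
print(accuracy(lambda q: "yes", items))  # 0.5
```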

author

Hi Frederico, I could not have said it better myself.

Nov 9 · Liked by Erik J Larson

The biggest obstacle to achieving technologically excellent AI (mature, optimally configured LLMs) as fast as possible is the idea that we are approaching AGI. I understand it was created and hyped up because it sells better (attracts more investment), but it creates unrealistic expectations (missed ROI, more badly implemented tech, etc.) and confusion (some vulnerable people fall for the illusion that a chatbot is capable of compassion, kindness, intimacy, etc.). I think we urgently need to educate the general public about LLMs via government-led initiatives, to give the effort the authority it needs. Public imagination is hooked on AGI (humans always search for something greater than ourselves, which has its pros and cons; this is one of the cons). We need an anti-anthropomorphic AI dictionary that translates hype terms into their actual meaning, to be used when reading mainstream media articles: e.g., ChatGPT "hallucination" = error, "autonomous" = high degree of automation, etc.
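
A toy sketch of what such a dictionary could look like (the first two entries are mine from above; the others are my own illustrative guesses):

```python
# Hypothetical "anti-anthropomorphic dictionary": map AI hype terms
# to plainer meanings when reading media coverage.
HYPE_TO_PLAIN = {
    "hallucination": "error / fabricated output",
    "autonomous": "high degree of automation",
    "understands": "statistically models",        # illustrative guess
    "reasons": "generates plausible token text",  # illustrative guess
}

def translate(text: str) -> str:
    """Naive substring annotation; a real tool would need proper NLP."""
    for hype, plain in HYPE_TO_PLAIN.items():
        text = text.replace(hype, f"{hype} [{plain}]")
    return text

print(translate("The autonomous agent reasons about your request."))
# The autonomous [high degree of automation] agent reasons
# [generates plausible token text] about your request.
```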

author

You prompted this thought! When I wrote the Myth, I was thinking "how do we get to AGI?" It certainly cannot be just statistical induction. Now I think, why would we want to get to AGI? What we want are systems that serve our purposes by doing things that we either don't want to do or can't do. When I use an LLM, I effectively want it to be an alien intelligence, not another copy of a human. So this just goes to show you! We all continue to develop in our views. But my primary concern is of course with human beings. That has never changed. Thank you for your comment.

author

This is a great point, Jana.


So we circle thousands of years back. Yes, some people were super insightful back then too, like Heraclitus with panta rhei, and yes, it's wonderful that we continue to develop in our views. And Protagoras: man is the measure of all things. 😊 Yes, we humans are capable of self-knowledge, self-reflection, self-reference, and most importantly, we are capable of acting with integrity 😊 we can be "complete". I think that could be the greatest humanist achievement. You prompted this thought in me while I was reading your Myth book and your reference to Kurt Gödel's incompleteness theorems (I loved that passage).

😊 Bear in mind that some people, in order to achieve goals they only express implicitly (a dig at Stuart Russell's OECD AI definition), hide their true intentions ("ugly" goals they can't achieve by being truthful) and create illusions of kindness, compassion, intimacy, etc. on purpose, though they feel none of it, and it is very temporary (the basic mechanism of the human construct of a lie). This is just to highlight that not all humans act with integrity; in fact, most do not. Most humans don't consistently use the greatest humanist gift/ability we possess: acting with integrity.

And some people make "true" mistakes: they acted with integrity, but were wrong.

Anyway, back to creating technologically excellent tools 😊

So seeking the truth, the age-old goal of true philosophical thinking, is the only way we can make sense of what's going on. I love it when things add up (and I've learnt by now that when they don't, there is a lie somewhere). That is why explainability, or interpretability, of our new tool, AI, is so important. If we are to leverage its great capabilities in our everyday lives (in effect, outsource a lot of our decision-making), we need to be able to examine the validity of its outputs. At the moment, we can't. That's my objection.


I couldn’t agree more.


Serious question re "Custom applications are thriving in sectors like law, medicine, and finance, with companies paying top dollar for models tailored to their needs."

Why, as a subscriber to several AI bloggers including yourself, and to MIT Tech Journals and TLDR, have I never seen a single article in support of the quoted sentence?

I suggest "paying top dollar" in the land of hype mainly means not wanting to fall behind Meta and Microsoft and Alphabet and OpenAI when it comes to separating the rubes from their money.

I pay for your insights so I'm not here to troll.

Anyone, please feel free to educate me.

author

Hi Simple John, my simplistic answer is that the media just isn't covering it. But there's a wide diffusion of foundation models into uses that go well beyond out-of-the-box. It's a great question; I don't have a fantastic answer, but I will see if I can come back with something more.


Ed Zitron has a very interesting piece on this very topic: https://www.wheresyoured.at/oai-business/ If his analysis is right, the economics of the big players are not looking good. He might not have all the data, though, of course.


I've been following Ed for a while. I missed the link you shared. Thanks Ondrej.


Erik--

My biggest questions about this technology concern whether it makes sense economically. We've been told that LLMs will transform the economy just like the internet did from the 1990s to the 2010s. In order to make good on that promise, they will need to significantly increase productivity across multiple sectors and create an ecosystem of new jobs (the second point is important not because employment is fundamental to economic growth, but because *demand* is). These gains will need to be greater than the nominal cost of investing in new computational infrastructure, the opportunity cost of investing in new computational infrastructure (rather than, say, the electric grid or defense production), and the operational cost of this infrastructure (basically electricity costs). What's more, this infrastructure shouldn't have major negative externalities on other critical industries (by, for example, driving up electricity costs for manufacturing).
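
To make that concrete, here is a crude back-of-the-envelope version of the comparison (every number below is a made-up placeholder, not a forecast):

```python
# Crude viability check: do assumed productivity gains exceed the nominal,
# opportunity, operational, and externality costs? All figures are invented.
capex = 300e9             # nominal cost of new compute infrastructure ($)
opex = 40e9               # annual operating cost, mostly electricity ($/yr)
externalities = 5e9       # e.g. higher power costs pushed onto others ($/yr)
productivity_gain = 60e9  # assumed annual economy-wide gain ($/yr)
years = 10
opportunity_rate = 0.05   # what the same capital might earn elsewhere (/yr)

total_cost = capex * (1 + opportunity_rate) ** years  # capex + foregone return
total_cost += (opex + externalities) * years
total_benefit = productivity_gain * years

print(f"benefit ${total_benefit/1e9:.0f}B vs cost ${total_cost/1e9:.0f}B")
# benefit $600B vs cost $939B -> doesn't clear the bar with these inputs;
# change the assumptions and it might.
```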

In my view, the long-term viability of LLMs as a major technology will hinge on these questions. And I just don't see anyone delving too deeply into the questions -- partly because we probably don't know how it's going to play out.

However, there is one application which definitely isn't going to justify all of the investment happening in this technology: LLM products like ChatGPT helping journalists crank out more content. And, like, 90% of the coverage of LLMs fixates on this.

author

Hi Jeffrey, I appreciate this. I have a somewhat different view, I suppose. I see an LLM as effectively enhancing human writing if people know how to use it correctly, and I want to move the discussion in that direction. I have brought this up before, but it bears saying again: our situation circa 2024 is not unlike the discussion about Google circa 2004. That was supposed to ruin human ingenuity, and I think by and large it enhanced it. You would certainly be a lonely warrior trying to drum up opposition to Google at this point. Something similar, I believe, will happen with LLMs.

Anyway, it’s a broader discussion, clearly, and it’s not immediately obvious that we are getting a benefit. It has to be explained, and more importantly, people have to see how to use the technology correctly. That’s something I endeavor to do.


I don't know. I thought that the value of Google in 2004 was pretty obvious. If you need to find something on the web, you go to Google, type in some words related to what you want to find, and there's a decent chance it will come up in the first few search result pages. "Finding something on the web" has never had a fixed value or scheme, but really runs the gamut of anything that anyone wants to dream up, connect to the internet, and serve to a possible peer-to-peer audience. The use concept is intuitive and almost any idiot can do it. Monetizing the service through advertising was also obvious (a lot of people are looking for things on the web for commercial reasons!). All of the other stuff that Google does serves as a loss-leader for this basic idea, meant to keep you in close proximity to their advertising.

The argument for the value of AI is not this obvious or intuitive -- you really have to think about how these systems are going to help you do really specific things, and then their outputs are confined to a narrow band of predicates. I also don't see how they improve the underlying monetization of advertising for either startups or incumbents, relative to earlier approaches. Honestly, I think Google's core value proposition has suffered from the amount of slop from AI-generated content.

The problem I have with the discourse around LLMs is that there's a lot of hand-waving about how their value is unquestionably superlative by boosters (and a smaller group that sort-of says the exact opposite). But these economic questions I've posed are largely empirical and things that can be examined in exactly the terms that I've posed them. Not all of them are easy questions -- for example, how certain changes in load from data centers will impact rates depends on your time horizon and how that load will interact with a variety of other factors you have to make assumptions about.

If the answers are not favorable for LLMs, instead of hyping doomsday scenarios, more down-to-earth problems should be considered more closely:

1. If the money we pour into building out this computational infrastructure does not produce significant productivity gains, there will eventually be some kind of negative stock market impact, as the value of the companies riding this wave of investment tanks. If the bubble gets too big, a recession could result.

2. There is an opportunity cost for building infrastructure. I would argue that, as a nation, we should prioritize investing in housing, electricity transmission and distribution, EV charging infrastructure, defense production of cheap, mass-oriented munitions and drones, regenerative agriculture, and public health stuff like new vaccines and new antibiotics. If much of the free, private investment capital that could go towards these other efforts is concentrated on computational infrastructure, we will all be poorer as a result, and possibly not ready for major challenges in the coming decade (war in Asia and Europe, a new pandemic, electricity price shocks and brownouts, etc.). So there needs to be a better signal as to whether these systems are worth investing in vis-à-vis other sectors.

3. If data centers start driving up electricity costs, it's true that they would be paying higher prices for electricity. But other customers would also be paying higher costs, which means that part of the marginal economic drag of this infrastructure would be socialized to other ratepayers and other industrial sectors. This would impact individual consumers, but it could also encourage inflation, and dampen industrial output and further investment in other industries. Unfortunately, this plays out as an externality for AI companies, where their profits are partly underwritten by other parties. If productivity gains are anemic, these other parties may not benefit much from the AI companies' products, while AI companies remain reasonably profitable, and so LLMs' orientation towards the economy at large will be more extractive than contributive.

Maybe these concerns will turn out to be not well-founded under further empirical scrutiny. I just don't see much public discourse addressing these possibilities and investigating them systematically. (I'm happy to be pointed at enlightening reports or articles!).

One thing I don't want to hear about anymore is how ChatGPT or Claude or whatever can help journalists spew out more content. That has negative value to the larger economy.


Other than the fact that AGI is pretty clearly not on the cards on our current trajectory, what I see above all is a lot of uncertainty.

The current boom is still an investment/expectation (including FOMO) boom, and where the economic *value* lies, and how large it is, is still largely up in the air, especially the 'when'.

It seems pretty reasonable to expect a shakeout at some point, but it may still take a while. The dotcom bust took roughly five years (give or take) to arrive. If the release of ChatGPT marks the start of the current hype, we are only two years in. It may easily be another two years before some of the uncertainty clears up, and possibly longer still, because this technology is so convincing that we may be slow to detect its downsides clearly. So both 'imminent demise is around the corner' and 'so, where is that imminent demise, then?' are probably premature.

I expect productivity gains from GenAI, but not much gain where error rates must be very low (like coding: I estimate IT is far too brittle, and human-in-the-loop is far too demanding for human powers of concentration).


This post is a great example of the pot calling the kettle black: the author highlights unbalanced criticisms of AI and then responds with a cheerleading pose for all the money flowing into AI development. While this surely helps his own new business ventures marketing AI, he doesn’t really provide useful insights or information. (Maybe I need to buy the paid version?)

author

And it’s twice as much for your ass! Lol.

Nov 10 · Liked by Erik J Larson

Lol. Anyway, good luck with your work on AI.

author

Yes, you need to buy the paid version! You specifically.
