How (Some) AI Will Crash (Maybe.)

The AI Apocalypse is Coming, but Not in the Way You Think; Also, I Discovered Fire.

· AI, business strategy, antifragility

I’m writing this mostly as a note to myself, though if it does get published and read by anyone, maybe some feedback will help me work out the thinking a little more clearly. If so, email me at ben AT Ben Levin dot com.

A pencil sketch of a robot using a wooden rod to push a flat-bottomed boulder up a steep hill. Generated using OpenAI's DALL-E with the prompt, "a pencil sketch of a robot Sisyphus pushing a boulder up a hill"

Part 1

AI has moved Crypto into the background, and the only news stories you hear these days are of Crypto company leaders being arrested and smacked with securities and fraud allegations from a jumble of (mostly) three-letter US government agencies. The SEC wants you for taking people’s money without filing disclosures; the CFTC wants you for market manipulation; the DOJ wants you for just stealing money.

These are not exhaustive descriptions of what’s currently rendering the crypto ecosystem into mulch, but they are illustrative, and I think fairly common. And you can argue whether they are the righteous justice of a public finally moved to uncover and prosecute fraud perpetrated on the weak by the wealthy or the swift; or you can grumble that it’s more of the same establishment trying to knee-cap the up-and-comers in an attempt to defend an equally fraudulent, but far more politically connected and societally embedded, system. Both views, even if not quite right, are probably not wrong.

And I should say, I think what follows applies to LLMs being developed by for-profit companies that aim to make some kind of financial return on their development. I don’t think the mechanics I’m describing here apply to, for example: free, open-source LLMs; generative AI models being used for scientific research and drug discovery (although that’s a “maybe”); or AI deployed in military applications (where the consequences are just far too terrifying to contemplate).

So in a way, this isn’t about AI so much as it is about the financialization of AI; as such, I think the same weak points are going to show themselves, and maybe the dominant contribution of AI is that the fractures will grow larger, more quickly, and break more spectacularly.

The Oasis of the Real

But back to Crypto for a second. Of the many frauds and crashes out there, I’m going to talk about the few that I think fit a certain model:

  • They were big
  • They were supported by lots of “savvy” investors
  • They (probably) did not start out as intentional frauds

Matt Levine has talked a lot about two in particular: FTX and Celsius (the indictment of the latter’s CEO got me writing this, actually). And the basic outlines of these two are similar.

  • They start with a bunch of cash, gathered from venture or other investors
  • They create a “thing” which purports to have value or utility: an exchange, or a token/system
  • They spend a lot of money/time/energy promoting the thing, getting people to use it, telling them good stuff about the thing, ignoring or refuting bad stuff or stuff that seems to detract from their greatness.
  • The thing does stuff, and people like that, and put some more money into the thing.
  • This continues for a while.

At this point, really nothing terrible is happening (probably). This is, in general, the stock story of all new ideas/technologies/innovations that are not purely tangible.

Tangible inventions - things you can see, feel, taste, touch, etc. - tend to work a bit differently. Things like aspirin, eyeglasses and winter jackets exist in a way that doesn’t require a lot of work to convince people of their purported benefits. You might need to harangue them a bit to get them to try the product, but when their headache goes away, or they can read your blog, or they’re not frostbitten from December to March, it’s pretty obvious why.

Discoveries work a little differently, though: that is, if I were to have discovered, say, fire, I could say:

  • “Hey, I found this thing I’m calling ‘fire.’ It’s hot, check it out!”
  • And later: “Look, you can do cool stuff with it, like cook that mammoth we just killed, and maybe it’ll taste better, and probably Ogg will not throw up like he does when he eats raw mammoth, maybe?”
  • “Oh, also, it’s warm, and like, we like being warm. That seems like it might be nice.”

And people will pretty immediately see and experience the benefits of fire, and want it, and you’ll get some benefit from that (praise? a guaranteed place next to the fire? the best cut of mammoth?).

The Network Effect is The Thing

With something like crypto, it all works a bit differently. Again, the first few steps are similar to those of tangible inventions, and broadly speaking it works like this:

  • There’s an idea that takes some money to develop and get off the ground.
  • There’s a lot of haranguing of potential users to get them to try the thing.
  • There’s maybe some work you have to do to keep the critics away, or confused.
  • Some people start using the thing, and maybe kind of like it.

And then there’s this critical little bit that is completely different from most tangible inventions, and it comes down to belief. If people don’t continue to believe that the thing you made is useful or valuable, and more people don’t follow along, and you never reach a critical mass of belief, then it doesn’t matter how actually useful your thing is; its utility, viability and value as a system rest on obtaining a critical mass of adherents.

We often call this the network effect, and though it applies to tangible things too (like fax machines, computers and the Internet in general), with intangible things it's essentially the thing itself.
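As a toy illustration (the functions and all the numbers here are my own invention, not anyone's actual model), a Metcalfe-style sketch captures why critical mass is everything for an intangible network:

```python
# Toy Metcalfe-style model of the network effect (illustrative only).
# Value grows with the number of possible connections between users,
# while the cost of running the thing is fixed; the value-per-link and
# fixed-cost figures below are made up for the sketch.

def network_value(users: int, value_per_link: float = 0.01) -> float:
    """Metcalfe-style value: proportional to the number of possible links."""
    return value_per_link * users * (users - 1) / 2

def is_viable(users: int, fixed_cost: float = 1000.0) -> bool:
    """The system 'works' only once its network value covers its running cost."""
    return network_value(users) >= fixed_cost
```

Under these invented parameters, `is_viable(100)` is False while `is_viable(1000)` is True: below some critical mass the thing is worthless no matter how clever it is, and past it, every new user makes it more valuable for everyone else.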

By contrast, aspirin works whether or not you believe it does. Its natural precursor has been used for at least 2400 years; its first synthesis in a lab was in 1853 by Charles Frédéric Gerhardt, and it was not until 118 years later that John Robert Vane discovered how it actually worked (for which he received the Nobel Prize in 1982).

So: FTX and Celsius (and Binance and Three Arrows Capital and and and) all worked from a core premise of “we need some funds, we’ll build a thing, we’ll get people to use it, and because they use it and like it, the thing will have value.”

(People will quibble about the "realness" and "inventiveness" of different flavors of crypto, in particular whether Bitcoin, for example, has utility as a store of value. But I'll argue that even if it does, that's only so because people have generally agreed that it does. And while the same can be said of the US Dollar, and fractional reserve banking in general, that doesn't make Bitcoin any more useful or unique. There are societal and political structures supporting the US Dollar and fractional reserve banking that Bitcoin doesn't have, too. And for all of these, people's whims change; if tomorrow someone gets it in their mind that beeswax is actually a better store of value than Bitcoin and any other currency, and enough people agree, we'll be using beeswax instead of dollars or bitcoins.)

And, like: that’s totally cool. Apple spent a bunch of money, built a thing called the iPhone, told people they should use it, and people did. A lot of them. And they were able to build all kinds of stuff on top of that, which people pay money to use.

Other companies tried the same thing, and failed. Or didn’t succeed as much. And regardless of why you might think that was (marketing, panache, style, gullible fanboys, whatever), you probably aren’t arguing that the mechanism itself was flawed.

Like Sands Through the Hourglass

Companies like FTX and Celsius have, like all innovative companies, a cost structure. It costs money to build the thing, to get people to use it, to run it, maintain it, keep people using it, argue with people who say it shouldn't be used, etc. And in both of those cases, there was something about the cost structure that was fundamentally unsustainable.

I’m way over-simplifying here, but the common thread, I think, between FTX and Celsius and all the failed crypto firms and projects was that they needed to spend more than they could ever possibly hope to make.

  • In FTX’s case, funding Alameda’s proprietary trading was vital to keeping the wheels on the bus; eventually this meant extending credit that FTX didn’t have, using customer funds to make up some of the difference. The hope might have been that they would eventually make up for their losses and pay everyone back before the money ran out, but Hope is Not a Strategy.
  • In Celsius’ case, they had to pay exorbitant interest to customers to convince them to keep their crypto within Celsius. This was sustained first by paying out all the money earned from lending, then some from early investors, and then eventually from other (newer) customers’ deposits. Like all Ponzi schemes, this one fell under the weight of its own false promises, sped along by mean-reversion in capital markets (and a pathologically dishonest CEO).
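The second dynamic can be sketched as a toy simulation (all the rates and amounts are invented; this is the generic Ponzi arithmetic, not anyone's actual books): when promised yields exceed what is actually earned, the gap gets covered from reserves and new deposits until the money runs out.

```python
# Toy simulation of the "pay exorbitant interest from new money" dynamic.
# The promised and earned rates and the deposit flow are made-up numbers;
# the point is only that the shortfall compounds while inflows don't.

def months_until_collapse(reserves: float,
                          deposits: float,
                          promised_rate: float = 0.015,  # 1.5%/month paid out
                          earned_rate: float = 0.005,    # 0.5%/month actually earned
                          new_deposits: float = 50.0) -> int:
    """Count months until reserves can no longer cover the yield shortfall.

    Returns -1 if the scheme is sustainable under the given inputs."""
    months = 0
    while reserves >= 0:
        shortfall = deposits * (promised_rate - earned_rate)
        reserves += new_deposits - shortfall  # new money plugs the gap...
        deposits += new_deposits              # ...but also grows the obligation
        months += 1
        if months > 10_000:  # safety valve: no collapse in sight
            return -1
    return months
```

Because the deposit base (and so the shortfall) grows every month while the inflow of new money doesn't, collapse is a matter of arithmetic, not bad luck; widening the gap between promised and earned yield only moves the date closer.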

Take those two examples: in its best possible telling, the story of FTX is one of hubris and ignorance, but maybe not fraud. And Celsius seems to have been an obvious fraud from the start, but could plausibly have been the same sort of fraud that permeated the first Internet wave of the late 1990s and early '00s: "we'll lose money on every trade, but eventually we'll make it up on volume."

There's another fatal flaw: all of these companies exist(ed) in enough of a legal gray area to arbitrage the benefit of regulatory evasion - their cost structures would be much higher if they operated like their fully regulated financial competitors.

Inevitably, these companies had to fall apart. Or get purchased by someone who then had to deal with them falling apart. In many ways, that is the purpose (and dream) of many venture-backed firms: build to a scale in an unsustainable way, and then try to make sustainability someone else's problem.

If that sounds like it's tinged with bitterness and judgement, you're not wrong.

AI Firms Operate At the Intersection of Belief and Unsustainable Costs

I’ll argue that the parallels with AI at least include the following:

  • An enormous amount of money is being poured into development and enhancement of mathematical models and algorithms. This is, at the moment, limited to “smart” money (i.e., institutional and accredited investors). There is less of an outright attempt to recruit investment from retail investors.
  • Developing, maintaining and just running these models, as tools which people use, is immensely expensive.
  • There is an expectation that processing power and scale will continue to increase in line with Moore's Law.
  • Alongside this, there's a necessity of belief that AI models themselves will become more accurate and efficient.

Astute readers of Nassim Taleb's Antifragile will observe that these are all weaknesses. For-profit sustainability of AI requires a continuation of three things:

  • Funding for development and maintenance
  • A steadily-increasing processing capacity
  • A steadily-increasing model efficiency, with no practical limit

Only one of these things needs to be untrue for the AI "revolution" to fall apart. And they are all linked. If either the second or third item appears to be untrue, funding will dry up quickly.
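As a toy sketch of that fragility (my own framing, with invented growth rates, not anyone's actual financials), the three conditions can be expressed as a single viability check in which one broken assumption sinks the rest:

```python
# Toy sketch of the three linked conditions for for-profit AI viability.
# All growth rates are invented multipliers per period; the structure,
# not the numbers, is the point: everything must hold at once.

def viable(funding_ok: bool,
           compute_growth: float,     # e.g. 1.2 = +20% capacity per period
           efficiency_gain: float,    # e.g. 1.15 = +15% model efficiency
           cost_growth: float = 1.30  # assumed compounding of costs
           ) -> bool:
    """Viable only if funding continues AND combined compute and
    efficiency gains keep pace with compounding costs."""
    return funding_ok and (compute_growth * efficiency_gain >= cost_growth)
```

Under these made-up numbers, `viable(True, 1.2, 1.15)` holds because 1.2 × 1.15 = 1.38 outpaces the 1.30 cost multiplier; shave the efficiency gain to 1.05 and it fails, and no amount of compute or efficiency rescues it once `funding_ok` flips to False - which is exactly the linkage in the list above.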

In Part 2 I will dive into more detail about the limits to processing capacity. I think.

 
