Hoping for Benign Invisibility

Back to Basics, on AI and the Future we Can't Really Predict

· AI, UX, The Future

 

Much has already been written about last week's late-breaking news of Sam Altman's firing from OpenAI; I think only two points matter for the moment:

  1. It is quite remarkable to watch the speed with which a Board of Directors can act in a non-profit context vs. what happens at most publicly and privately owned companies (at least from an outsider's view). Rarely are CEOs held to account at all, even for a company's poor financial performance. 

    Boards are often populated with people friendly to the CEO, and there is, structurally, a lot of incentive for everyone to get along on the big-picture items. But in this case, four people with no financial ownership stake took swift and decisive action over a span of maybe 2-3 months (the "firing" seems to have coalesced between Thursday and Friday afternoon, but that was just the visible portion; the maneuvering may have begun some weeks ago).
  2. Apparently, at least if you're Sam Altman, AI is coming for your job.

The weekly torrent of news about accelerating changes in the capabilities of ChatGPT and other LLMs obscures the bigger picture, which is largely that changes in technology often trail changes in human consciousness. I think it's safe to say that we (in the industrialized West, anyway) think as deeply about the impact of technology on our lives as we do about the impact of electricity - which is to say, hardly at all. I am reminded of a point I made to a colleague some 18 years ago:

 

Saying, “we do business on the web” will sound as antiquated and quaint as saying, “we power our business with e-LEC-tricity!”

 

My point being: AI today is where the Web was in 2005... exciting for some (in the industry), its nitty-gritty details unknown to most, and destined to become so enmeshed in our daily lives that we won't think much about it.

I've seen as much happen to myself in the past 6 months. When ChatGPT (then running on GPT-3.5) came on the scene early this year, I was unimpressed but also deeply concerned. (Not about the potential for AI to overwhelm our ability to control it - LLMs are still really bad at coming up with novel ways to misbehave - but more about the ways in which the companies building LLMs had seemingly side-stepped issues of who owned the material they were using to train their models; first lying about, then trying to hand-wave away, the implications.)

But then I got a bit more curious, rather than judgmental, and dug in. And a little more than a week ago I started playing with GPTs, and the future came a little more into view.

If you don't know what a GPT is, here's a brief explainer:

  1. ChatGPT is a Large Language Model that takes something you type in (or upload), interprets that input as a question, and essentially searches a hugely interconnected network of "answers". 

    From a user's perspective, it appears to concoct an original statement as a response to a query; what's essentially happening is a summarization of the many connections between related bits of information in its network.

    (To practitioners with a background in ML and/or AI: yes, I know it's more complicated than that. Bear with me.)
  2. A Custom GPT essentially lets a developer (that's me! Amazing! I can't really code!) cordon off a small part of that network, so when you type something in, my Custom GPT does its normal thing, but only within that cordoned-off space.

    It's essentially like searching only a slice of what's in Google's index. (A rough sketch of the idea, in code, follows this list.)
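To make the "cordoned-off space" idea concrete, here's a minimal sketch of the mechanism a Custom GPT approximates: a general-purpose model, constrained by custom instructions plus a small body of uploaded "knowledge." This is not OpenAI's actual implementation - just an illustration, assuming the OpenAI Python SDK; the file name, prompt, and model name are placeholders of my own invention.

```python
# A rough, hypothetical sketch of what a Custom GPT effectively does:
# the general model answers as usual, but it's instructed to stay inside
# a "cordoned-off" slice of knowledge that we hand it up front.
# Assumes the OpenAI Python SDK (openai >= 1.0) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

# The "cordoned-off space": custom instructions plus uploaded knowledge.
INSTRUCTIONS = (
    "You are a narrowly scoped assistant. Answer ONLY from the "
    "reference material below. If the answer isn't in it, say so."
)

with open("my_uploaded_docs.txt") as f:  # illustrative file name
    KNOWLEDGE = f.read()

def ask_custom_gpt(user_question: str) -> str:
    """Query the general model, scoped to our slice of knowledge."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; any chat-capable model
        messages=[
            {"role": "system", "content": f"{INSTRUCTIONS}\n\n{KNOWLEDGE}"},
            {"role": "user", "content": user_question},
        ],
    )
    return response.choices[0].message.content

print(ask_custom_gpt("What does the uploaded material say about returns?"))
```

The real GPT Builder adds file handling, retrieval, and a conversational setup flow on top, but the shape is the same: a general model told, firmly, to stay inside one slice of its world.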

Why is this cool? Well, it allows pretty much anyone to help other people use ChatGPT in a very targeted way.

Also: the GPT Builder uses ChatGPT as an interface, in a bots-building-bots sort of way, though (at the moment) still very much human-initiated.

The downside is, you're going to quickly see a proliferation of a million small GPTs, loosely joined.

(There's an opportunity here for someone to create an additional layer that lets people find the Custom GPT they want to use.)

But more importantly, Custom GPTs are not so much a way to democratize the development of applications on top of ChatGPT as they are a means to squash a burgeoning community of AI startups. The last 6-9 months have seen the launch of hundreds, if not thousands, of startups claiming to offer "AI for [noun]," but which were barely-disguised thin UI layers sitting on top of ChatGPT, mediating users' interactions with OpenAI.
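To appreciate just how thin that layer often was, here's a hedged sketch of what the entire "product" of such a startup can amount to; the branding, prompt, and model name are invented for illustration, and again it assumes the OpenAI Python SDK:

```python
# A hypothetical "AI for dentists" startup, reduced to its essence:
# a branded system prompt wrapped around someone else's model.
# Assumes the OpenAI Python SDK (openai >= 1.0) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

def dental_ai(question: str) -> str:
    """Roughly the whole company, minus the landing page."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative
        messages=[
            {
                "role": "system",
                "content": "You are DentalPal, a friendly AI assistant "
                           "for dental practices.",  # invented branding
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```

When a product reduces to a system prompt and an API call, anyone (including the platform vendor itself) can reproduce it in an afternoon.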

There's no reason for OpenAI to be content to let third-party developers create that infrastructure and profit from it without getting its piece of the action. And in a move quite reminiscent of how Apple handled App Store developers in the late '00s, OpenAI is quickly rolling bread-and-butter features into its core offering.

All of which reminds me of the phrase "picking up nickels in front of a steamroller." Developers who are building an experience on top of any LLM need to think long and hard - and slowly - about the competitive advantage they might gain by doing so. If your app can be duplicated, it will be. Even worse, the ability to duplicate your app can simply be built into an LLM as a (free) feature, for any of your users to summon at will, in the moment they need it.

But really, where is all of this going? For the moment, I'll conclude with an excerpt from this essay on the confluence of Humanness, "artificial" intelligence, and biology:

One of the concerns about developing “organic” AI is its unpredictability and the uncertainty it creates. Human control of natural, social and cultural processes is, however, an illusion created by the seemingly insatiable will to mastery that has turned destructive.

 

We worry about controlling AI because we believe we have control over so many other things, and want that to extend to new technology.

But we don't have that control, and we never did, and we never will. If that's all a bit unsettling, consider what we do have control over: our willingness to engage productively with technology as we evolve it, and it evolves us. Our openness to maximizing the benefits while working diligently to reduce the harms (which also means acknowledging that harms can, and do, exist).

All of the technological shifts that seem monumental and important very quickly become embedded and forgotten, taken for granted. Thinking deeply, carefully, and slowly about the implications of that affords somewhat better odds of avoiding catastrophe, but no assurance.