GPT-3 and existential angst

easyDNS is pleased to sponsor Jesse Hirsh’s “Future Fibre / Future Tools” segments of his new email list, Metaviews

Understanding a tool via demystification

In yesterday’s issue we looked at GPT-3 from a technical perspective; in today’s issue let’s delve into the philosophy surrounding it. After all, technology as we know it today is as much philosophical, or even ideological, as it is technological.

How the tool is used is heavily influenced by why the tool is used, who the tool is used by (and on), and what we expect from its usage. With AI all of this can become rather muddy, as in many cases the philosophy supersedes the technology. Our belief in these tools tends to be more profound than our actual usage of them.


There’s been an ongoing debate surrounding the rise of AI that tends to revolve around the distinction between narrow AI and general AI. The article above provides a decent explanation of what narrow AI is and why it is limited:

To date, all the capabilities attributed to machine learning and AI have been in the category of narrow AI. No matter how sophisticated – from insurance rating to fraud detection to manufacturing quality control and aerial dogfights or even aiding with nuclear fission research – each algorithm has only been able to meet a single purpose. This means a couple of things: 1) an algorithm designed to do one thing (say, identify objects) cannot be used for anything else (play a video game, for example), and 2) anything one algorithm “learns” cannot be effectively transferred to another algorithm designed to fulfill a different specific purpose. For example, AlphaGo, the algorithm that outperformed the human world champion at the game of Go, cannot play other games, despite those games being much simpler.

General AI is obviously much different, evoking a parallel with human intelligence and the expectation that machines should emulate how we think. This newsletter tends to be on the side of the debate that says there is no such thing as general AI, at least not as currently imagined.


Yet as AI starts to move beyond the notion of what is “narrow,” as GPT-3 is arguably doing, speculation turns to whether we’re on the road to general AI. More from the article linked above:

Nevertheless, there are experts who believe the industry is at a turning point, shifting from narrow AI to AGI. Certainly, too, there are those who claim we are already seeing an early example of an AGI system in the recently announced GPT-3 natural language processing (NLP) neural network. While NLP systems are normally trained on a large corpus of text (this is the supervised learning approach that requires each piece of data to be labeled), advances toward AGI will require improved unsupervised learning, where AI gets exposed to lots of unlabeled data and must figure out everything else itself. This is what GPT-3 does; it can learn from any text.
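To make the supervised/unsupervised distinction concrete: in the self-supervised setup described above, the “labels” come for free, because the training target is simply the next word in the raw text itself. Here’s a minimal Python sketch, illustrative only and not OpenAI’s actual pipeline, of how unlabeled text becomes training pairs:

# Minimal sketch: turning raw, unlabeled text into training pairs
# for next-word prediction. Real systems tokenize into subwords
# and train a neural network on billions of such pairs.
def make_training_pairs(text, context_size=4):
    tokens = text.split()  # real models use subword tokenizers
    pairs = []
    for i in range(context_size, len(tokens)):
        context = tokens[i - context_size:i]
        target = tokens[i]  # the "label" is just the next word
        pairs.append((context, target))
    return pairs

raw = "the quick brown fox jumps over the lazy dog"
for context, target in make_training_pairs(raw):
    print(context, "->", target)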

GPT-3 “learns” based on patterns it discovers in data gleaned from the internet, from Reddit posts to Wikipedia to fan fiction and other sources. Based on that learning, GPT-3 is capable of many different tasks with no additional training, able to produce compelling narratives, generate computer code, autocomplete images, translate between languages, and perform math calculations, among other feats, including some its creators did not plan. This apparent multifunctional capability does not sound much like the definition of narrow AI. Indeed, it is much more general in function.
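What “no additional training” means in practice is prompting: you type a few examples into the input text and the model continues the pattern. As a hedged sketch, here is roughly what translating with GPT-3 looks like through OpenAI’s completion API; the engine name and parameters are assumptions to be checked against the beta documentation, and you need your own API key:

import openai  # OpenAI's Python client for the GPT-3 beta

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

# Few-shot prompt: a worked example, then a new input. The same
# untouched model handles translation, arithmetic, code and so on,
# depending only on what the prompt text asks of it.
prompt = (
    "English: Hello, how are you?\n"
    "French: Bonjour, comment allez-vous?\n"
    "English: The book is on the table.\n"
    "French:"
)

response = openai.Completion.create(
    engine="davinci",  # GPT-3 base engine (assumption)
    prompt=prompt,
    max_tokens=20,
    temperature=0,     # keep the output conservative
    stop="\n",         # stop at the end of the translated line
)
print(response.choices[0].text.strip())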

With 175 billion parameters, the model goes well beyond the 10 billion in the most advanced neural networks, and far beyond the 1.5 billion in its predecessor, GPT-2. This is more than a 10x increase in model complexity in just over a year. Arguably, this is the largest neural network yet created and considerably closer to the one-trillion level suggested by Hinton for AGI. GPT-3 demonstrates that what passes for intelligence may be a function of computational complexity, that it arises based on the number of synapses. As Hinton suggests, when AI systems become comparable in size to human brains, they may very well become as intelligent as people. That level may be reached sooner than expected if reports of coming neural networks with one trillion parameters are true.
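Before weighing that claim, it’s worth doing the arithmetic on the figures quoted above, since the ratios are easy to lose track of:

gpt2 = 1.5e9     # GPT-2 parameters
biggest = 10e9   # the "most advanced neural networks" in the quote
gpt3 = 175e9     # GPT-3 parameters
hinton = 1e12    # the one-trillion figure attributed to Hinton

print(gpt3 / biggest)  # 17.5x the 10-billion class of models
print(gpt3 / gpt2)     # ~117x its predecessor, GPT-2
print(hinton / gpt3)   # still ~5.7x short of one trillion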

The problem with the article’s analysis is that it performs another sleight of hand, distracting us from regarding GPT-3 as a tool rather than a fledgling form of life. It draws a false comparison between what the tool can do and what humans can do, keeping in mind that we don’t actually know or fully understand what humans can do.

Humans are also incredibly dynamic and resilient. As our tools increase in capability and power, we will change with them. The medium is the message, which means the unconscious or psychological impact of the tool is always more profound and transformative than what we do with the tool itself.

Make no mistake, GPT-3 is an incredibly powerful technology that will enable a wide range of tools and applications, and it may substantially impact how we live and work. However, none of that means it is anything more than a tool. It is certainly not the birth of a god or a magical oracle.

Here’s a different but equally profound reaction to using this tool, from Matt Webb:

I’ve since been shown the beta version.

Here’s what I didn’t expect: GPT-3 is capable of original, creative ideas.

Using GPT-3 doesn’t feel like smart autocomplete. It feels like having a creative sparring partner.

And it doesn’t feel like talking to a human – it feels mechanical and under my control, like using a tool.

“Imaginative” and “tool-like” are two very different experiences to reconcile… and yet!

Matt offers a bunch of different examples from his own experiments that support his argument that GPT-3 is an idea machine: it generates concepts and connections that a human may not derive on their own. I’m not going to share his examples, as you can click on the link above and read them for yourself. However, I will share his two main conclusions or insights:

  • Using GPT-3 is work, it’s not a one-shot automation like spellcheck or autocomplete. It’s an interactive, investigative process, and it’s down to the human user to interview GPT-3. There will be people who become expert at dowsing the A.I., just as there are people who are great at searching using Google or finding information in research libraries. I think the skill involved will be similar to being a good improv partner, that’s what it reminds me of.
  • GPT-3 is capable of novel ideas but it takes a human to identify the good ones. It’s not a replacement for creative imagination. In a 15 minute session with the A.I., I can usually generate one or two concepts, suitable for being worked up into a short story, or turned into a design brief for a product feature, or providing new perspectives in some analysis – it feels very much like a brainstorming workshop, or talking something through with a colleague or an editor.

Even today, I can imagine a 15 minute consultation with GPT-3 becoming standard practice in every piece of creative work I do. And in the future?

This is a fantastic summary of what we argued in yesterday’s issue: that this is not about human vs machine, but human and machine. The potential here is to use tools like this in service of human (creative) work, augmenting it rather than replacing it.
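For the tinkerers among you, here’s a rough sketch of what such a 15 minute “interview” session could look like in code. It reuses the same assumed API access as the translation example above; the framing prompt, engine name, and parameters are guesses to be tuned, not a documented recipe:

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Human-in-the-loop brainstorming: the writer asks, GPT-3 riffs,
# and the writer judges which ideas are worth keeping.
transcript = ("The following is a brainstorming session between "
              "a writer and a creative AI assistant.\n")

print("Brainstorming with GPT-3 -- type 'quit' to stop.")
while True:
    question = input("You: ")
    if question.strip().lower() == "quit":
        break
    transcript += f"Writer: {question}\nAI:"
    response = openai.Completion.create(
        engine="davinci",
        prompt=transcript,
        max_tokens=80,
        temperature=0.9,  # run hot, for more surprising ideas
        stop="Writer:",   # hand the turn back to the human
    )
    idea = response.choices[0].text.strip()
    print("GPT-3:", idea)
    transcript += f" {idea}\n"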


Although it is the figuring out that offers opportunities for all sorts of fun and hijinks.


While the above description is meant to be derogatory, it strikes me that it also reflects where we’re at as a society right now.

Here’s another list of some of the GPT-3 applications and takes out there. I’d be curious whether any of you take the time this weekend to peruse it; let us know of any other examples worthy of our attention, either by posting a comment here or by tweeting with #metaviews.

Finally, some more reading and analysis on the subject.
