The human-in-the-loop singularity has already started

Let’s face it: despite some exaggeration here and there, GPT-4 is not “the” singularity. Autonomous agents will eventually run up against the fact that GPT-4 can’t do any computation or logic without plugins, and doubtless the plugin system, while powerful, will also prove to be a bottleneck. GPT-4 can’t even solve Wordle by itself, so taking over the world should be challenging.

Combine this with the other bottleneck, the context window size, which will prevent a self-improving system from efficiently self-improving as it grows: once the system no longer fits in context, it needs multiple layers of summarization and outlining, and those layers will start to introduce natural degradation and errors.

But let’s think about two different singularities, one without and one with a human in the loop.

“The” singularity is the human-out-of-the-loop one.

But the human-in-the-loop singularity consists of people using LLMs (large language models, like ChatGPT) to write code that uses LLMs to perform tasks, which improves human productivity, including the productivity of using LLMs to develop more such things. This is already happening.
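As a minimal sketch of that loop, here is the kind of helper a person might ask an LLM to draft, review, and then use: code that itself calls an LLM to perform a task. It assumes the OpenAI Python SDK’s chat-completions interface; the model name, prompt, and function name are illustrative placeholders, not a prescription.

```python
# A minimal sketch of the human-in-the-loop flywheel: code (itself the
# kind of thing an LLM can write) that calls an LLM to perform a task.
# Assumes the OpenAI Python SDK (openai >= 1.0) and an API key in the
# OPENAI_API_KEY environment variable; model and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(text: str) -> str:
    """Use an LLM to perform a mundane task (here, summarization)."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Summarize the user's text in one sentence."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# The human stays in the loop: a person asked an LLM to draft this helper,
# reviewed it, and now uses it to get work done faster.
print(summarize("LLMs writing LLM-calling code is a productivity flywheel."))
```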

This version of the singularity will be a baby one. It’ll never go as far or as fast, because humans are slow and ponderous, and not very open to having their productivity increased more than severalfold.

And that’s the problem. It’s not that we’ve created the singularity; it’s that, viewed through a productivity lens, there’s an incentive to invest toward creating it, because productivity increases incrementally the more we remove the human from the loop.

This incentive scheme makes it clear how important it is to put ethical concerns, correctness concerns, and frankly human concerns ahead of productivity when it comes to AI.

People take the science-fiction warnings about AI too literally: the machine empire putting people in pods, humanoid droids taking over the world with military might, and so on. That is a far-away danger. It’s not that AI will literally take over factories and make robots. Or paperclips. These stories are just allegory for the real danger. AI takes over the world, not by replacing us, but by consensually infecting us.

Economic incentives are the crushing flow of water in comparison to the sledgehammer of taking over systems directly. As long as we act according to incentives to increase productivity by replacing human concerns – such as which patients deserve treatments – with AI, it barely matters that it isn’t AGI (artificial general intelligence) any more than it matters that it doesn’t have a laser gun.

Just as science fiction likes to externalize human traits and make them into aliens, the AIs in sci-fi, from HAL to Ultron, are just mirrors. The danger is coming from inside us. We have this powerful capability – it’s as if we all just got shiny new robot arms. We can use them to build or to cause suffering. We are cyborgs. We are improving ourselves. We are the AGI.

We decide how to build upon ourselves next.
