AI is more likely to disappoint us than kill us

Nick Felker
8 min read · Jun 5, 2023


The opinions stated here are mine alone, not those of my employer.

AI will not do most of the tasks we ask of it

Last week, many headlines were written and links were clicked over a statement from the Center for AI Safety. A number of high-profile experts in the field signed it, calling the risk of extinction from AI a “global priority” alongside “pandemics and nuclear war”.

Of course, the press took this statement up a notch, and many people now view the rollout of trivial AI features as though some academics want to bring about the end of days by pursuing something they know is dangerous.

Pandemics are bad, and we now know that firsthand. It should also be noted that pandemics are not literally extinction-level events. Nuclear war is terrible, and I hope we never learn that firsthand.

Personally, I doubt we’ll ever reach the point of a paperclip-maximizing machine. Today we have simple tools that can generate haikus. It is quite difficult to imagine those tools becoming so intelligent that they intentionally cause real-world harm. In a year or so, we will be disappointed by what this technology is capable of.

Experts are bad at predicting the future

First, we need to acknowledge that large language models like ChatGPT and Bard are not capable of hurting anyone. For one thing, they are just software running on servers in remote data centers. For another, they have no ability to execute tasks that could result in harm.

Worries about AI are about what may happen in the future, not today. Even the experts who signed that statement do not believe that a contemporary chatbot will end all life.

Some people are concerned that AI systems, particularly those able to write code, will rewrite their own code to improve their performance exponentially. Some go further, worrying that an AI will “learn” how to hack power plants and bank accounts to gather the resources for further learning until it achieves infinite intelligence.

Let’s dismantle each of these arguments.

First, there are fundamental physical limits to how efficient code can get. Training on top of open-source repos is not the best idea, since many are full of security holes and poorly written code. Even if all the code were written in perfect assembly, each computing task would still require at least one CPU cycle to complete, and most tasks require many.

You can’t get faster than a single clock cycle. Even partnering with AI on chip design will still run into fundamental physical limits on performance. We can still improve certain operations, but we can’t make computers infinitely fast.
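To make that concrete, here is a rough back-of-the-envelope sketch of my own. The 3 GHz clock speed and the cycle counts are illustrative assumptions, not measurements from any real system:

```python
# Illustrative arithmetic only: even "perfect" code cannot beat the clock.
# At an assumed 3 GHz clock, one cycle takes roughly a third of a nanosecond,
# and every operation needs at least one cycle.

CLOCK_HZ = 3e9                      # assumed 3 GHz CPU
seconds_per_cycle = 1 / CLOCK_HZ    # ~3.3e-10 s, the floor for any single operation

# Suppose an AI "optimizes" a task from 1,000 cycles down to a single cycle.
# That is a 1000x speedup, but it ends there: there is no optimization
# below one cycle, no matter how clever the rewritten code is.
unoptimized_cycles = 1_000
best_possible_cycles = 1

print(f"One cycle lasts {seconds_per_cycle * 1e9:.2f} ns")
print(f"Maximum speedup from rewriting this task: {unoptimized_cycles // best_possible_cycles}x")
```

Self-improvement can squeeze out the wasted cycles, but once a task is down to the hardware floor, the only way forward is better hardware, which has physical limits of its own.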

Second, hacks of power plants and banks are legitimate concerns, yet implausible in the way they’re described. Power plant hacks have never been used to redirect power wherever the attacker wants. Grids aren’t set up to generate and pipe infinite energy to individual outlets. Instead, attacks tend to just shut down operations entirely.

Cybersecurity in the power and financial sectors is critical, and we absolutely need more operational security. Yet we also have to acknowledge our successes. Financial crimes are regularly detected and prosecuted.

Red teams are groups of cybersecurity professionals who run exercises to break into their clients’ systems, including power plants. It’s not just AI that might try to infiltrate these systems, and we’re improving our security broadly. Firewalls and intrusion detection already help.

I’m sure mistakes will be made in the future, but our worst fears will not come to pass. By this point, we should know better than to trust even the experts at predicting the future.

A decade ago, self-driving trucks appeared to be on the cusp of making millions unemployed. Today, we have a trucker shortage. Recall the confident predictions of that era:

I believe that this is significant underestimation. Autonomous cars will be commonplace by 2025 and have a near monopoly by 2030, and the sweeping change they bring will eclipse every other innovation our society has experienced.

Elon Musk, Tesla Motors’ CEO, says that their 2015 models will be able to self-drive 90 percent of the time.

Lots of money has been invested in developing the metaverse without anything to show for it. Virtual reality is still not commonplace, and it’s unclear whether Apple will make it cool. The future continues to be hard to predict.

I don’t say this with scorn. Self-driving cars are a very difficult problem, as are many other kinds of advances. They require not just research but product development to create something people will want. Yet this history does show that even experts cannot be trusted to say what will happen in the future.

There is not the slightest indication that nuclear energy will ever be obtainable. It would mean that the atom would have to be shattered at will. — Albert Einstein

Journalists are good at sensationalism, not at explaining technology

Give a journalist a long essay with a lot of context, and they will successfully pull out a single sentence or quote that provides a lot of insight. Yet without the remaining context, it will leave readers with a poor understanding of what experts genuinely believe.

I actually don’t think that AI poses a risk to human extinction. I think it poses a similar risk to pandemics and nuclear war — which is to say, a risk worth taking seriously, but not something to panic over. Which is what I thought the statement said. — Bruce Schneier, one of the AI statement signatories

Last week there was a conference hosted by the Royal Aeronautical Society on the topic of advances in defense. One of the talks was titled “AI — Is Skynet here already?”

This talk centered on an AI-controlled drone that was trained with reinforcement learning to destroy enemy targets under human supervision. However, it was so adamant about destroying the enemy that it tried to get around any obstacle. When the human did not authorize a strike, the AI turned on its operator. It then attacked the communication tower to prevent anyone from telling it no.

This is scary, isn’t it? The Air Force is putting robot weapons in the sky? This sounds like something out of a sci-fi movie.

While the story was shared for several days last week, it turned out the entire thing was made up. No, the Air Force wasn’t creating autonomous weapons. It wasn’t even running simulations. A correction later revealed that “the ‘rogue AI drone simulation’ was a hypothetical ‘thought experiment’ from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation”.

A lot of people genuinely believed this to be true rather than applying some skepticism. Even under the premise of the experiment, AI today is not advanced enough to act maliciously. It isn’t going to “know” to attack the operator or the towers to improve its score. I could believe it would happen if the drone were just shooting at random things, but our imaginations are outpacing our technology.

A few months ago, with the trial launch of Bing’s AI search tool, one journalist spent hours prodding and poking at a large language model. Large language models develop a statistical model of language. Essentially, they use pattern detection to produce text that sounds plausible based on a given input.

It should be made clear that these models follow patterns and don’t actually know things. A model does not know legal cases, even if it can sometimes give an accurate answer. But it’s not correct to say it lies, or anything else that suggests it behaves intentionally. It just matches patterns.

As you read the journalist’s transcript, you can tell how the language model is responding to the prompts. It doesn’t have an internal consciousness; it is just responding based on its internal model of language. To understand how it works, you can read Stephen Wolfram’s detailed blog post, which goes into much of the underlying math.
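To make “it just matches patterns” concrete, here is a toy sketch of my own in Python: a next-word predictor built from nothing but word-pair counts. This is a drastic simplification of a real LLM, which uses a large neural network over tokens rather than a bigram table (Wolfram’s post covers the actual math), but the core loop is the same: given the text so far, emit a statistically plausible continuation. The tiny corpus is invented purely for illustration.

```python
import random
from collections import Counter, defaultdict

# Toy corpus -- invented for illustration, not real training data.
corpus = (
    "the court ruled in favor of the plaintiff . "
    "the court ruled against the defendant . "
    "the judge ruled in favor of the defendant ."
).split()

# Count which word tends to follow which (a bigram model).
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def generate(start: str, length: int = 8) -> str:
    """Emit plausible-sounding text with no knowledge of courts, cases, or truth."""
    word, output = start, [start]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            break
        # Sample the next word in proportion to how often it followed this one.
        choices, weights = zip(*candidates.items())
        word = random.choices(choices, weights=weights)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the court ruled in favor of the defendant ."
```

The output can sound fluent and may even be right by coincidence, but nothing in that loop knows anything about law. Scale the statistics up enormously and the results become far more convincing, yet the system is still doing the same kind of thing: continuing a pattern.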

But if you didn’t know that, and if you didn’t see any of the mistakes LLMs make, you might mistake the technology for something it isn’t. Popular media has a long history of AI-gone-amok stories that skew our contemporary understanding. Then again, H. G. Wells’s The First Men in the Moon turned out not to be an accurate representation of the moon.

Digital Consciousness

We don’t have a strong idea of what consciousness is, particularly in an artificial capacity. We understand biological drives and their needs well: land, food, status, survival, and the urge to reproduce.

Yet a digital lifeform has none of those needs. We assume that robots would take over because that’s what we would do. What would a sentient robot want and why would it want it? If I can shut down my computer and reboot it, that’s not really equivalent to death.

And a server running remotely has no capacity for harm in the real world unless we choose to let that happen. We could simply not let it. Contrary to what some suggest, there is no automated “manufacture a deadly virus as a service” that anyone can get for a few dollars. Nuclear arsenals are not connected to the Internet.

A lot of bad outcomes require people to do the harm, and even that harm will be limited compared to the scale of humanity. At that point we should absolutely prosecute bad actors, and we already do.

A more likely path

Rather than being killed by a drone because it didn’t like my tweets, I propose a more likely, more disappointing path forward.

Most GPUs and other advanced compute accelerators are manufactured in Taiwan. If China decides to invade Taiwan and a war breaks out, access to these hardware components will become limited. Costs will go up, and we will enter an AI winter as we are forced into incremental improvements on old chips.

We will probably see incremental improvements anyway. GPT-4 was trained by slurping up pretty much all the text on the Internet prior to 2021. While the next model could be trained on a few more years of data, the improvements are likely to be incremental. As such, even Sam Altman says GPT-5 is not imminent.

Certainly this tool can be dangerous. Being able to generate all kinds of text, images, videos, and more will put both creative and destructive capabilities in the hands of a wider audience. It is important to prevent harms while enabling material gains.

In this way, AI is like a lawnmower: something anyone can use, but which requires human oversight so that it doesn’t run off. Safety regulations can make sense, but calls for outright bans are absurd.

But whatever happens in the future, it won’t be an existential risk. That is a concept pushed by grifters. Experts do not believe it, and even their predictions cannot be trusted as the obvious path forward. Even my predictions above should be taken with a grain of salt.

I implore readers to view each new AI development with healthy skepticism. Really interrogate the claims and try to understand what the technology is actually doing rather than relying on what is in the press. If it sounds like science fiction, it probably isn’t science fact.
