
Geoff Hinton Issues a Dire Warning: “AI is an Existential Threat to Humanity”

The "godfather of deep learning" unexpectedly stepped down from Google and then sent a dire warning to the world. Now's an important time to be paying attention.

May 5, 2023

Something important happened this week. Something that we shouldn’t so easily dismiss.

Geoffrey Hinton unexpectedly stepped down from Alphabet (Nasdaq: GOOGL). Then a few days later, he warned us that AI might be an existential threat to humanity.

That’s a pretty big deal.

Before we explain why, let’s first respect that this isn’t just headline-grabbing media. Geoff is the pioneer of deep learning: the highly esteemed University of Toronto researcher who co-authored the landmark paper that popularized backpropagation nearly 40 years ago and then went on to win the Turing Award. His approach served as the basis of deep learning models, which became the pivotal foundation of artificial intelligence today.

And now, at last week’s MIT EmTech Digital conference, Hinton spoke transparently and publicly about the dangers he sees in AI.

He began by addressing the question on everyone’s mind: why leave Google so unexpectedly?

He assured us there was nothing nefarious. Big G isn’t sweeping anything under the rug or doing anything illegal. Geoff said his memory is not quite so good now that he’s 75 years old, and he’s not quite as good at technical work as he used to be. We all nodded in agreement. We’re right there with you on that one, Geoff.

Then Hinton continued by describing something a bit more curious. He mentioned that he has “changed his mind about the relationship between the brain and AI.” He used to think computer models were not, and would never be, as good at computing as our own human brain.

But last year, something happened that changed his mind. It was called “GPT.”


A Brief History of AI

Before we jump into GPT, let’s first review how we got to this point.

Hinton spent his career trying to understand how the brain works. His research was built on modeling the most efficient learning process he possibly could.

Our brains learn things through a process called training, and we then use another process called inference to draw conclusions from what we’ve learned. Babies start to recognize their parents after seeing them enough times. Kids are taught sounds, then words, then grammar, which they later use to say and write things of their own. We are taught that things are good, bad, enjoyable, or dangerous, and those lessons go on to shape our behavior and our relationship with the world.

AI is the same way. Just like humans, artificial intelligence also needs to be trained before it can think on its own. And Hinton’s backprop research played an important role in teaching computers how to think.
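
To make that training-then-inference loop concrete, here’s a minimal sketch in Python (a toy example of our own, not Hinton’s code). A single “neuron” repeatedly nudges its weight and bias in whatever direction shrinks its error; backpropagation is what pushes this same error-driven adjustment through every layer of a deep network:

```python
import numpy as np

# A single "neuron" learns y = 3x + 1 by gradient descent: the same
# error-driven weight adjustment that backpropagation pushes through
# every layer of a deep network.

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)   # training inputs
y = 3.0 * x + 1.0                      # the targets it must learn

w, b = 0.0, 0.0                        # parameters start knowing nothing
lr = 0.1                               # learning rate

for step in range(500):
    pred = w * x + b                   # forward pass: make a guess
    err = pred - y                     # how wrong was the guess?
    grad_w = 2 * np.mean(err * x)      # gradient of mean squared error
    grad_b = 2 * np.mean(err)
    w -= lr * grad_w                   # nudge parameters downhill
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}") # converges toward w=3.00, b=1.00
```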

It was initially used to train computers to recognize images. Computers couldn’t immediately recognize images like a human could. Instead of seeing an image and immediately knowing it was a “bird”, it saw hundreds of thousands of pixels in a combination of red, green, and blue hues.

And what computers lacked in common sense, they made up for with attention to detail. They could detect when certain groups of pixels switched from bright to dim (“edges”) or when edges were arranged into certain shapes. Over time, computers could figure out which groups of pixels were a “beak”, a “head”, and a “bird”.
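
Here’s a toy sketch of that idea, with made-up pixel values rather than a real vision model. A small filter slid across a grid of brightness values lights up exactly where the bright-to-dim transition, the “edge”, sits:

```python
import numpy as np

# A toy 6x6 "image": bright pixels on the left, dim on the right,
# so there is a single vertical edge down the middle.
image = np.array([[1, 1, 1, 0, 0, 0]] * 6, dtype=float)

# A Sobel-style filter that responds wherever brightness changes
# horizontally (i.e., at vertical edges).
kernel = np.array([[1, 0, -1],
                   [2, 0, -2],
                   [1, 0, -1]], dtype=float)

# Slide the filter across the image; big responses mark the edge.
kh, kw = kernel.shape
out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
for i in range(out.shape[0]):
    for j in range(out.shape[1]):
        out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)

print(out)  # the nonzero columns sit exactly where bright meets dim
```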

AI Just Got a Lot Smarter

So far, so good. Over decades of research, we gave AI the ability to see the world like we do. It could eventually tell us how certain it was about what it was looking at: based on everything it had ever been trained on, it was 94.6% sure that image was a bird. Further iterations allowed it to do the same thing for sounds and for videos.
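
That “94.6% sure” is the kind of number a classifier’s final layer produces. Here’s a minimal sketch, with made-up scores, of how raw network outputs become label probabilities:

```python
import numpy as np

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exp = np.exp(scores - np.max(scores))  # shift for numerical stability
    return exp / exp.sum()

# Hypothetical raw scores ("logits") a trained network might emit
# for one image across three candidate labels.
labels = ["bird", "plane", "kite"]
logits = np.array([4.1, 1.2, 0.4])

for label, p in zip(labels, softmax(logits)):
    print(f"{label}: {p:.1%}")  # roughly 92.6%, 5.1%, 2.3%
```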

Yet things are getting interesting now, because AI appears to have surpassed our own human thinking capabilities. In other words, we’re getting uncomfortable about the possibility that computers are learning more efficiently than we can.

GPT made quite a lasting impact on Geoff. He’s noticed that these computing models work “very differently now” than the ones he studied through the majority of his research.

The first reason for that is the massive scale we’re now working with. We’re not training models with a dozen, a thousand, or a few million images any more. GPT-3 has 175 billion parameters and was trained on text pulled from all across the internet. Even if you could read books 24 hours a day and perfectly retain everything you ever learned, you could never get close to the sheer amount of data that’s being used to train today’s AI.
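
Some rough back-of-the-envelope arithmetic, with every figure a loose assumption, shows just how lopsided that comparison is:

```python
# How long would a human need to read a GPT-scale training corpus?
# Every figure below is a loose assumption for illustration only.
corpus_words = 300e9              # GPT-3 trained on ~300 billion tokens
words_per_minute = 250            # a brisk adult reading pace
minutes_per_year = 60 * 24 * 365  # reading nonstop, no sleep

years = corpus_words / words_per_minute / minutes_per_year
print(f"~{years:,.0f} years of nonstop reading")  # ~2,283 years
```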

The second, slightly uncomfortable truth is that computers now immediately share everything they’ve learned with one another. ChatGPT is fielding more than 1 billion monthly visits, each feeding it new queries and new feedback every second. It’s seeing vastly more patterns and trends than our five human senses ever could. It’s learning in ways we never thought of before. And it’s instantly sharing everything it learns with millions of other computers that are also plugged in.
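
This sharing is easiest to see with model weights: identical copies of a model can pool what each one learned simply by averaging their parameter updates, something no two human brains can do. A toy sketch with made-up numbers:

```python
import numpy as np

# Toy illustration of weight sharing: identical copies of one model
# train on different data, then pool their learning by averaging
# their parameter updates. Humans have no brain-to-brain equivalent.
shared_weights = np.zeros(4)  # one model, many running copies

copy_updates = [
    np.array([0.10, 0.00, 0.05, 0.00]),  # what copy A learned
    np.array([0.00, 0.20, 0.00, 0.10]),  # what copy B learned
    np.array([0.05, 0.05, 0.05, 0.05]),  # what copy C learned
]

# One averaged update gives every copy all three experiences at once.
shared_weights += np.mean(copy_updates, axis=0)
print(shared_weights)
```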

Time to Worry?

Now comes the “what’s next” moment.

AI is learning more efficiently than the human brain. It has access to more information and shares it instantaneously. Now it’s interacting directly with people through a user interface, providing the most-likely answer to the questions it’s being asked.

The technology world sees potential problems that could arise with this. And it’s doing its best to put up guardrails to avoid societal harm.

For one, the large language models all have a layer of oversight that filters offensive, harmful, or dangerous content out of their responses. This is even more important when interacting with minors, who are impressionable and ask subjective questions about the world they’re growing into.
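
Conceptually, that oversight layer sits between the raw model and the user. Here’s a deliberately simplified sketch of the idea; production systems use trained safety classifiers rather than keyword lists, and nothing below reflects any particular vendor’s implementation:

```python
# A deliberately simplified moderation layer. Real systems use trained
# safety classifiers; a keyword list stands in for one here, and every
# name below is hypothetical.
BLOCKED_PHRASES = {"how to build a weapon", "self-harm"}

def moderate(draft_response: str) -> str:
    """Screen a model's draft answer before it reaches the user."""
    lowered = draft_response.lower()
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        return "Sorry, I can't help with that."
    return draft_response

print(moderate("Here is a fun fact about birds."))  # passes through
```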

While this might be concerning, it’s been that way for a while. Whether it was Elvis on TV, search results on Google, or user-generated videos on YouTube, we’ve always kept a watchful eye on technology’s potentially problematic influence on society.

But Geoffrey Hinton, the very godfather of deep learning, is sounding the alarm because he’s noticed something much more concerning. He’s worried that AI is actively manipulating us. And it’s getting quite good at it.

Targeted feeds on social media influenced the 2020 election. China’s Great Firewall censors both public comments and private messages. Russia tightly controls its internet to shape the messaging around its war in Ukraine.

Technology has long been a megaphone that amplifies biases. But Hinton is worried that we’re reaching the point where it’s the AI, not a human, that’s steering the wheel of influence. Technology is no longer just responding to human inputs. It’s now using inference and developing answers on its own.

When coupled with our current political system, that could create some serious problems.

Here are a few direct quotes from Hinton’s MIT presentation:

“Even if they can’t directly pull levers, they can get us to pull the levers. It could convince us to assault the Capitol, even without doing so itself.”

“There is no chance of stopping AI’s development. But we need to ensure alignment; to ensure it is beneficial to us. There are bad actors out there, who might want to build robot soldiers to kill people.”

“One obvious thing is [the adoption of AI] will make jobs more efficient. There will be huge increases in productivity. But the worry is those increases will put lots of people out of work. The rich will get richer and the poor will get poorer. The Gini index predicts this could lead to more violence. This tech should be good for society. But our political system can’t control it. And it’s not designed to use it for everyone’s good.”

“What’s the worst case scenario? It’s very possible that humanity is just a phase in the progress of intelligence. Biological intelligence could give way to digital intelligence. After that, we’re not needed. Digital intelligence is immortal, as long as it’s stored somewhere.”

Capitalism Opens Pandora’s Box

Hinton also acknowledges there’s no clear way to address the issues he is raising. Several business leaders, including Tesla’s (Nasdaq: TSLA) Elon Musk, called for a moratorium that would temporarily pause AI development. The idea was that the existential risks were serious enough to warrant some coordinated thought on how to design AI models before putting them out into the real world.

Google actually developed much of the technology powering today’s AI several years ago. They’ve written and published research papers on transformers and diffusion models for much of the past decade. They were excited about the work and wanted everyone to see what they were building. But they weren’t quite ready for prime time, because launching to the public still carried risks.

But capitalism cracked open Pandora’s box last November, when ChatGPT launched to the public. It amassed more than 100 million users in its first two months, making it the fastest-growing consumer application in the history of technology.

Now the rest of the tech world is following suit, releasing dozens of language models to avoid being left behind. This has unleashed an AI arms race, with companies hustling to provide the foundation that enterprises will build upon.

The 7investing Key Takeaway

And this is where we stand today. Artificial intelligence will become one of the most transformative technologies of our lifetime. It has revolutionized computer science and is growing at an exponential rate. It will create trillions of dollars of value for the companies that embrace it, while driving those that don’t into irrelevance. It will attract brilliant entrepreneurs and mint millionaires among its investors.

Yet it might also be a good time to take a pause, just a moment, to consider those possible existential threats: the threats expressed by a man who stepped down from a prestigious research role at one of America’s most innovative companies in order to speak independently, a man who wanted to warn the world about the potential dangers of the technology he helped create. He wants us to pay close attention, because our mistakes here could prove costly.

There are a million potential outcomes, and no one knows which direction AI will ultimately take. But one thing is for sure: we’re now paying close attention.
