Six Areas of Concern from the ‘Godfather of AI’

By Meghan Hall | June 29, 2023

Geoffrey Hinton, the computer scientist and cognitive psychologist sometimes referred to as the “Godfather of AI,” took the stage at Collision 2023 in Toronto this week to elaborate on his concerns about the potential risks of AI’s rapid development.

Hinton, well known for his contributions to the field of AI and neural networks, worked for Google for a decade before announcing his departure in May. Hinton tweeted that he left the company so he could express his worries about the downsides of AI without having to consider the impact on Google, which develops AI systems like Bard through its Google DeepMind division. Hinton is also a professor at the University of Toronto.

AI Could Be on Pace to Outperform Humans

Hinton said he believes that AI models do have the ability to think, contradicting arguments made by other technologists.

Geoffrey Hinton. Photo by Ramsey Cardy/Collision via Sportsfile.

“A lot of people say [AI] is just autocomplete. If you think [about] what it takes to predict the next word, you have to understand what’s being said to be really good at predicting the next word,” Hinton said. 

Because of those kinds of capabilities, Hinton said that if a model is trained with sufficient parameters, at sufficient scale, and with capabilities such as vision, machines could compete with — or outperform — human intelligence.

“[Humans] are a wonderful, incredibly complicated machine, but we’re just a big neural net. But there’s no reason why an artificial neural net shouldn’t be able to do everything we can do,” he said. 

If that’s the case, Hinton asserted, humans need to be concerned about the consequences of increasingly sophisticated AI systems.

“I think we have to take seriously the possibility that if they get to be smarter than us, which seems quite likely, and they have thoughts of their own, which seems quite likely, they may well develop the goal of taking control. And if they do that, we’re in trouble,” Hinton said.

Hinton’s Six Areas of Concern

During the discussion, Hinton outlined six major areas of concern related to AI models. He acknowledged that while he's gained prominence for his contention that AI could pose an existential threat to humans, the technology carries other major risks people should be aware of as they develop and use it.

It’s important that people understand it’s not just science fiction. It’s not just fear-mongering.

— Geoffrey Hinton

  1. Bias and discrimination

Hinton said that bias and discrimination in AI has already appeared, based on how the models are trained, and by whom. 

“It is something that I think is relatively easy to fix compared with all the other things,” Hinton said. How? “If you make [the] target not to have a completely unbiased system, but just to have a system that’s significantly less biased than what [it] replaced.”

  2. Geopolitical implications

Hinton explained that one of his major worries is militaries and defense departments deploying AI-driven robots in battle.

“Even if AI isn’t superintelligent yet, if defense departments use it for making battle bots, it’s going to be very nasty, scary stuff… even if it doesn’t have its own intentions. … It’s going to make it much easier for rich countries to invade poor countries.”

  3. Employment impact

Hinton said he remains skeptical of economists’ predictions that AI will create more jobs than it eliminates, even given the historical pattern of new technologies producing unexpected jobs and job categories.

“Superintelligence will be a new situation that has never happened before,” Hinton said. “I’m not sure how [economists] can confidently predict that more jobs will be created than the number of jobs lost.” 

  4. Warring echo chambers

If companies develop chatbots and public-facing tools that reflect extreme opinions or conspiracies, whether intentionally or not, that information could seep into the general public’s perception. And if competing companies feed opposing perspectives into their AI models, the models’ ability to crank out massive volumes of content could produce what Hinton calls “warring echo chambers.”

He said he could envision “big companies… giving you things that are more and more extreme, so you end up in this echo chamber.” 

  5. Fake news

Hinton said he is concerned about how fake news and misinformation could spread through societies without proper oversight in place.

“It’s very hard to try, for example, to mark everything that’s fake as fake. Whether we can do that type of thing, I don’t know, but it’d be better if we could,” he explained. “Governments do it with counterfeit money. … They should try [to] do it with AI-generated stuff.”

  6. Existential risk

Hinton has said repeatedly, across multiple outlets, that AI could take over human life as we know it if it’s not carefully regulated and developed within guardrails.

“I think it’s important that people understand it’s not just science fiction. It’s not just fear-mongering. It is a real risk that we need to think about, and we need to figure out in advance how to deal with it,” Hinton said.

Averting Doomsday

If humans want to prevent AI models from taking over, Hinton said, they need to put work into understanding how that could happen in the first place. 

“One thing you can do is — before AI gets superintelligent — you can do empirical work into how it goes wrong, how it tries to get control,” he said. “Right now there are 99 very smart people trying to make it better, but one very smart person trying to figure out how to stop [it from] taking over.”
