Dr Geoffrey Hinton deserves credit for helping to build the foundation of virtually all neural-network-based generative AI we use today. You can also credit him in recent years with consistency: he still thinks the rapid expansion of AI development and use will lead to some fairly dire outcomes.
Two years ago, in an interview with The New York Times, Dr Hinton warned, "It is hard to see how you can prevent the bad actors from using it for bad things."
Now, in a fresh sit-down, this time with CBS News, the Nobel Prize winner is ratcheting up the concern, admitting that when he figured out how to make a computer brain work more like a human brain, he "didn't think we'd get here in only 40 years," adding that "10 years ago I didn't believe we'd get here."
Yet here we are, hurtling towards an unknowable future, with the pace of AI model development easily outstripping Moore's Law (which observes that the number of transistors on a chip doubles roughly every two years). Some argue that artificial intelligence is doubling in capability every 12 months or so, and it's certainly making significant leaps on a quarterly basis.
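For a sense of scale, here's a back-of-the-envelope sketch comparing those two growth rates over a decade. Both doubling periods are the assumptions quoted above rather than measured figures, and "AI capability" is far harder to quantify than transistor counts:
[CODE=python]
# Back-of-the-envelope sketch: compound doubling at two assumed rates.
# Neither doubling period is a measured figure; both are the rough
# assumptions from the paragraph above (Moore's Law ~2 years,
# hypothesized AI capability doubling ~1 year).

def growth_factor(years: float, doubling_period: float) -> float:
    """Multiplicative growth after `years`, given a doubling period in years."""
    return 2 ** (years / doubling_period)

DECADE = 10
chip_growth = growth_factor(DECADE, 2.0)  # 2^5  = 32x
ai_growth = growth_factor(DECADE, 1.0)    # 2^10 = 1,024x

print(f"Moore's Law pace over a decade:  ~{chip_growth:.0f}x")
print(f"12-month doubling over a decade: ~{ai_growth:,.0f}x")
[/CODE]
Even with those caveats, the difference between roughly 32x and roughly 1,000x growth over ten years shows why the comparison gets made at all.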
Naturally, Dr Hinton's reasons for concern are now manifold. Here's some of what he told CBS News.
[HEADING=1]1. There's a 10%-to-20% risk that AIs will take over[/HEADING]
That, according to CBS News, is Dr Hinton's current estimate of the risk that AI eventually takes over from humans. It's not that he doubts AI advances will pay dividends in medicine, education, and climate science; the question here is, at what point does AI become so intelligent that we no longer know what it's thinking about or, perhaps, plotting?
Dr Hinton didn't directly address artificial general intelligence (AGI) in the interview, but it must be on his mind. AGI, which remains a somewhat amorphous concept, could mean AI machines that surpass human intelligence; and if they do that, at what point does AI start to act, as humans do, in its own self-interest?
[HEADING=1]2. Is AI a "cute cub" that could someday kill you?[/HEADING]
In trying to explain his concerns, Dr Hinton likened current AI to someone owning a tiger cub. "It's just such a cute tiger cub, unless you can be very sure that it's not going to want to kill you when it's grown up."
The analogy makes sense when you consider how most people engage with AIs like ChatGPT, Copilot, and Gemini, using them to generate funny pictures and videos, and declaring, "Isn't that adorable?" But behind all that amusement and shareable imagery is an emotionless system that's only interested in delivering the best result as its neural network and models understand it.
[HEADING=1]3. Hackers will be more effective: banks and more could be at risk[/HEADING]
Dr Hinton is clearly taking current AI threats seriously. He believes that AI will make hackers more effective at attacking targets like banks, hospitals, and infrastructure.
AI, which can write code and help solve difficult problems, could supercharge hackers' efforts. Dr Hinton's response? Risk mitigation: he spreads his money across three banks. Seems like good advice.
[HEADING=1]4. Authoritarians can misuse AI[/HEADING]
Dr Hinton is so concerned about the looming AI threat that he told CBS News he's glad he's 77 years old, which I assume means he hopes to be long gone before the worst-case AI scenario comes to pass.
I'm not sure he'll get out in time, though. We have a growing legion of authoritarians around the world, some of whom are already using AI-generated imagery to propel their propaganda.
[HEADING=1]5. Tech companies aren't focusing enough on AI safety[/HEADING]
Dr Hinton argues that the big tech companies working on AI, namely OpenAI, Microsoft, Meta, and Google (where he formerly worked), are putting too much focus on short-term profits and not enough on AI safety. That's hard to verify, and, in their defense, most governments have done a poor job of enforcing any real AI regulation.
Dr Hinton has taken notice when others try to sound the alarm. He told CBS News that he was proud of his former protégé and OpenAI's former Chief Scientist, Ilya Sutskever, who helped briefly oust OpenAI CEO Sam Altman over AI safety concerns. Altman soon returned, and Sutskever ultimately walked away.
As for what comes next, and what we should do about it, Dr Hinton doesn't offer any answers. In fact, he seems almost as overwhelmed by it all as the rest of us, telling CBS News that while he doesn't despair, "we're at this very, very special point in history where in a relatively short time everything might totally change at a change of a scale we've never seen before. It's hard to absorb that emotionally."
You can say that again, Dr Hinton.