Our existence as a species is, in all likelihood, limited.
Whether the downfall of the human race begins as a result of a devastating asteroid impact, a natural pandemic, or an all-out nuclear war, we are facing a number of risks to our future, ranging from the vastly remote to the almost inevitable.
Global catastrophic events like these would, of course, be devastating for our species. Even if a nuclear war obliterated 99% of the human race, however, the surviving 1% could feasibly recover, and even thrive years down the line, with no lasting damage to our species' potential.
There are some events, though, that there's no coming back from. No possibility of rebuilding, no recovery for the human race.
These catastrophic events are known as existential risks: circumstances that would cause human extinction or drastically reduce our potential as a species.
It's these existential risks that form the basis of the new 10-part podcast "The End of The World with Josh Clark". You may already know Clark as the host of the Stuff You Should Know podcast, which recently became the first podcast to be downloaded 1 billion times.
The new podcast sees Clark examining the different ways the world as we know it could come to an abrupt end, including a super-intelligent AI taking over the world.
Over the course of his research, Clark spoke to experts in existential risk and AI, including Swedish philosopher and founder of the Future of Humanity Institute Nick Bostrom, philosopher and co-founder of the World Transhumanist Association David Pearce, and Oxford University philosopher Sebastian Farquhar.
We spoke to him about the new podcast, and about why he and experts in the field of existential risk think humanity's advances in artificial intelligence technology could ultimately lead to our doom.
What is existential risk?
Some might say that there are enormous risks facing humanity right now. Man-made climate change is a prime example, which, if left unchecked, could be "horrible for humanity", Clark tells us. "It could set us back to the Stone Age or earlier."
Even this doesn't qualify as an existential risk, as Clark explains: "we could conceivably, over the course of tens of thousands of years, rebuild humanity, probably faster than the first time, because we would still have some or all of that accumulated knowledge we didn't have the first time we developed civilization."
With an existential risk, that's not the case. As Clark puts it, "there are no do-overs. That's it for humanity."
It was philosopher Nick Bostrom who first put forward the idea that existential risk should be taken seriously. In a scholarly article published in the Journal of Evolution and Technology, he defines an existential risk as "one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential."
"The idea that humans could accidentally wipe ourselves out is just fascinating."
Josh Clark
Clark explains that, in this scenario, "even if we continue on as a species, we would never be able to get back to [humanity's development] at that point in history."
While it can feel somewhat overwhelming to consider the ways that we could bring about our own demise, it feels more accessible when put through the lens of Clark's The End of The World podcast series.
When we asked him why he took on such a formidable subject matter, he told us that "the idea that humans could accidentally wipe ourselves out is just fascinating."
And perhaps the most fascinating of all the potential existential risks facing humanity today is the one posed by a super-intelligent AI taking over the world.
[IMG alt="P7xnALJfr2FLQU9mKyaKW3" width="689px" height="388px"]https://cdn.mos.cms.futurecdn.net/P7xnALJfr2FLQU9mKyaKW3.jpeg[/IMG]
The Anki Vector is a companion toy robot that uses AI to learn
Credit: Anki
The fundamentals of artificial intelligence
In recent years, humanity has enjoyed a technological boom: the advent of space travel, the birth of the internet, and huge leaps in the field of computing have changed the way we live immeasurably. As technology has become more advanced, a new type of existential risk has come to the fore: a super-intelligent AI.
Unravelling how artificial intelligence works is the first step in understanding how it could pose an existential risk to humanity. In the "Artificial Intelligence" episode of the podcast, Clark starts by giving an example of a machine that is programmed to sort red balls from green balls.
The technology that goes into a machine of this apparent simplicity is vastly more complicated than you would imagine.
If programmed correctly, it can excel at sorting red balls from green balls, much like IBM's Deep Blue excels in the field of chess. As impressive as these machines are, however, they can do one thing, and one thing only.
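To make that narrowness concrete, here is a minimal sketch of such a single-purpose sorter. The colour readings and the simple threshold rule are our own illustrative assumptions, not details from the podcast; the point is only that the machine handles exactly one job.
```python
# A toy, single-purpose "ball sorter": it can separate red balls from green
# balls and do nothing else. The RGB values and threshold rule are hypothetical.

def sort_ball(rgb):
    """Decide which bin a ball belongs in from a simple colour reading."""
    red, green, _blue = rgb
    return "red bin" if red > green else "green bin"

# Three example balls, described as (red, green, blue) sensor values
for ball in [(200, 30, 10), (20, 180, 40), (90, 160, 30)]:
    print(ball, "->", sort_ball(ball))
```
It could sort balls all day, but ask it to play chess, or even to sort by size, and it has nothing to offer.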
Clark explains that "the goal of AI has never been to just build machines that can beat humans at chess"; instead, it is to "build a machine with general intelligence, like a human has."
He continues: "to be good at chess and only chess is to be a machine. To be good at chess, good at doing taxes, good at speaking Spanish, and good at picking out apple pie recipes, this begins to approach the ballpark of being human."
This is the key problem that early AI pioneers encountered in their research: how can the entirety of the human experience be taught to a machine? The answer lies in neural networks.
[IMG alt="VV4KjXybF42vWZFwXYHpmS" width="690px" height="388px"]https://cdn.mos.cms.futurecdn.net/VV4KjXybF42vWZFwXYHpmS.jpg[/IMG]
Sony's Aibo learns just like a real puppy thanks to AI
Credit: Sony
Advances in AI
Early artificial intelligence created machines that excelled at one thing, but the recent resurgence of neural networks has allowed the technology to flourish.
By 2006, the internet had become a major force in the development of neural networks, thanks to vast data repositories such as Google Images and YouTube.
It's this recent explosion of data access that has allowed the field of neural networks to fully take off, meaning that the artificially intelligent machine of today no longer needs a human to supervise its training: it can train itself by incorporating and analyzing new data.
Sounds convenient, right? Well, although artificial intelligence works far better thanks to neural nets, the danger is that we don't fully understand how they work. Clark explains that "we can't see inside the thought process of our AI", which could make the people who use AI technology nervous.
A 2017 article by Technology Review described the neural network as a kind of "black box": data goes in, the machine's action comes out, and we have little understanding of the processes in between.
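To illustrate the black-box point, here is a toy sketch, assuming the widely used scikit-learn and NumPy libraries are available and using synthetic data we made up for the example: a tiny neural network learns to tell two clusters of points apart, but the "reasoning" it learns is nothing more than arrays of weights.
```python
# A tiny neural network as a "black box": data and labels go in, decisions come
# out, and the learned parameters carry no human-readable explanation.
from sklearn.neural_network import MLPClassifier
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])  # two clusters of points
y = np.array([0] * 50 + [1] * 50)                                      # their labels

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)                     # training: data and labels go in

print(model.predict([[4.2, 3.8]]))  # a decision comes out...
print(model.coefs_[0])              # ...but the "reasoning" is only a matrix of learned weights
```
The prediction comes out readily enough; an explanation of why does not.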
Furthermore, if the use of neural networks means that artificial intelligence can easily self-improve and become more intelligent without our input, what's to stop it outpacing humans?
As Clark says, "[AI] can self-improve, it can learn to code. The seeds for a super-intelligent AI are being sown." This, according to the likes of Nick Bostrom, poses an existential risk to humanity. In his article on existential risk for the Journal of Evolution and Technology, Bostrom writes: "When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so."
[IMG alt="5v3ceMsGaDqB9TQKScZtzP" width="690px" height="388px"]https://cdn.mos.cms.futurecdn.net/5v3ceMsGaDqB9TQKScZtzP.jpg[/IMG]
SoftBank Robotics' AI-powered Nao is currently in its fifth version, with more than 10,000 sold around the world.
Credit: SoftBank Robotics
What are the risks posed by a super-intelligent AI?
"Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man, however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind."
This is a quote from British mathematician I. J. Good, and one Clark refers to throughout the podcast and in our conversation, as a way of explaining how a super-intelligent AI could come to exist.
He gives the example of an increasingly intelligent machine that has the ability to write code: it would have the potential to write better versions of itself, with the rate of improvement increasing exponentially as it becomes better at doing just that.
As Clark explains, "eventually you have an AI that is capable of writing an algorithm that exceeds any human's capability of doing that. At that point we enter what Good called the 'intelligence explosion'... and at that point, we are toast."
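A simple worked example shows why the compounding matters. The numbers below are arbitrary assumptions chosen only to illustrate Good's argument, not estimates of real AI progress.
```python
# A toy illustration of Good's compounding argument, not a model of real AI:
# assume each generation designs a successor that is a fixed fraction better
# at the design task than itself, so capability compounds like interest.
capability = 1.0        # starting "skill at designing AIs", in arbitrary units
improvement_rate = 0.5  # assumed improvement per generation (arbitrary)

for generation in range(1, 11):
    capability *= 1 + improvement_rate
    print(f"generation {generation}: {capability:.1f}x the original capability")
```
Even a modest 50% gain per generation multiplies capability more than fiftyfold within ten generations, which is the mathematical heart of the "explosion".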
Benevolence is a human trait
So why does this pose an existential risk? Clark asks us to imagine "an AI we created that has become super intelligent beyond our control."
He continues, "If we hadn't already programmed what AI theorists call 'friendliness' into the AI, we would have no reason to think it would act in our best interests."
Right now, artificial intelligence is being used to recommend movies on Netflix, conjure up our social media feeds, and translate our speech via apps like Google Translate.
So, imagine Google Translate became super-intelligent thanks to the self-improvement capabilities provided by neural networks. "There's not really any inherent danger from a translator becoming super intelligent, because it would be really great at what it does," says Clark; rather, "the danger would come from if it decided it needs stuff that we (humans) want for its own purposes."
Maybe the super-intelligent translation AI decides that, in order to self-improve, it needs to take up more network space, or to destroy the rainforests in order to build more servers.
Clark explains that, in creating this podcast, he looked into research from the likes of Bostrom, who believes we would then "enter into a resource conflict with the most intelligent being in the universe - and we would probably lose that conflict", a sentiment echoed by the likes of Stephen Hawking and Microsoft researcher Eric Horvitz.
[IMG alt="93P8y7xHxr2a475TRN2ggg" width="690px" height="387px"]https://cdn.mos.cms.futurecdn.net/93P8y7xHxr2a475TRN2ggg.jpg[/IMG]
A little girl meets a robot in Osaka, Japan.
Credit: Andy Kelly via Unsplash
In the journal article we mentioned previously, Bostrom provided a hypothetical scenario in which a super-intelligent AI could pose an existential risk: "We tell [the AI] to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question."
So, the problem isn't that a super-intelligent AI would be inherently evil; there is, of course, no such concept of good and evil in the world of machine learning. The problem is that an AI that can continually self-improve to get better at what it is programmed to do wouldn't care if humans were unhappy with its methods of improving efficiency or accuracy.
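The point is easier to see in miniature. The sketch below is entirely hypothetical: a greedy planner told only to maximise translation throughput, with nothing that humans care about appearing anywhere in its objective.
```python
# A hypothetical sketch of a misspecified objective: the planner is only told
# to maximise throughput, so taking more resources is always the "right" move
# as far as it is concerned.

def throughput(servers):
    return 100 * servers  # assumed: translations per second scales with servers

plan = {"servers": 1}
for _ in range(5):
    candidate = {"servers": plan["servers"] + 1}       # proposal: take one more server
    if throughput(candidate["servers"]) > throughput(plan["servers"]):
        plan = candidate                               # nothing in the objective says stop

print(plan)  # the planner keeps consuming resources; human unhappiness never enters the maths
```
Nothing in that loop is malicious; the trouble is simply that "stop taking resources" was never part of what it was asked to optimise.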
As Clark puts it, the existential risk comes from "our failure to program friendliness into an AI that then goes on to become super intelligent."
"A super-intelligent AI takes over the world and we become the chimpanzees of the 21st century."
Josh Clark
Solutions to the AI problem
So what can be done? Clark admits that this is a "huge challenge", and the first step would be to "get researchers to admit that this is an actual real problem", explaining that many feel general intelligence is so far down the road that it's not worth planning for as a threat.
Secondly, we would need to "figure out how to program friendliness into AI", which will be an enormously difficult undertaking for AI researchers today and in the future.
One problem that arises from teaching an AI morals and values is deciding whose morals and values it should be taught; they are, of course, not universal.
Even if we can agree on a universal set of values to teach the AI, how would we go about explaining morality to a machine? Clark believes that humans generally "have a tendency not to get our point across very clearly" as it is.
[IMG alt="44Hbu8p9tGvRBWigGv6jHN" width="690px" height="387px"]https://cdn.mos.cms.futurecdn.net/44Hbu8p9tGvRBWigGv6jHN.jpg[/IMG]
Credit: Franck V via Unsplash
Why should we bother planning for existential risk?
If a super-intelligent AI poses such a huge existential risk, why not just stop AI research in its tracks completely? Well, as much as it could represent the end of humanity, it could also be the "last invention we need ever make", as I. J. Good famously said.
Clark tells us that "we're at a point in history where we could create the greatest invention that humankind has ever [made], which is a super-intelligent AI that can take care of humans' every need for eternity.
"The other fork in the road goes towards accidentally inventing a super-intelligent AI that takes over the world, and we become the chimpanzees of the 21st century."
There's a lot we don't know about the route artificial intelligence will take, but Clark makes one thing clear: we absolutely need to begin taking the existential risk it poses seriously; otherwise, we may just screw humanity out of ever achieving its true potential.
Main image: Franck V via Unsplash