By Cecilia Tilli — Slate
In January, I joined Stephen Hawking, Elon Musk, Lord Martin Rees, and other artificial intelligence researchers, policymakers, and entrepreneurs in signing an open letter asking for a change in A.I. research priorities. The letter, the product of a four-day conference held in Puerto Rico that month, makes three claims:
- Current A.I. research seeks to develop intelligent agents. The field's foremost goal is to build systems that perceive and act in a particular environment at (or above) human level.
- A.I. research is advancing very quickly and has great potential to benefit humanity. Fast, steady progress in A.I. points to a growing impact on society. The potential benefits are unprecedented, so the emphasis should be on developing “useful A.I.,” rather than simply on improving capacity.
- With great power comes great responsibility. A.I. has enormous potential to help humanity, but it can also be extremely damaging. Hence, great care is needed to reap its benefits while avoiding its pitfalls.
In response to the letter's release (anyone can now sign it, and it has been endorsed by more than 6,000 people), news organizations published articles with headlines like:
- “Elon Musk, Stephen Hawking Warn of Artificial Intelligence Dangers”
- “Don’t Let Artificial Intelligence Take Over, Top Scientists Warn”
- “Big Science Names Sign Open Letter Detailing AI Danger”
- “AI Has Arrived, and That Really Worries the World’s Brightest Minds”