Professor Stuart Russell, a leading artificial intelligence (AI) expert at the University of California, said allowing machines to kill humans would endanger freedom and security.
Today, as part of my #libertarian California Governor campaign, I toured some of the areas in Northern California destroyed by the recent wildfires.
I saw hundreds of homes destroyed in a single subdivision (8,900 homes were destroyed in the fires in total). We must seek out better technological solutions to stop wildfires in California. Lives are at risk and hundreds of billions of dollars are at stake. The state is getting drier, and innovative technologies, especially drone surveillance, can help spot fires before they grow too large to contain easily. AI can also tell us, based on weather conditions, where fire protection resources and first responders should be stationed. Quickly putting out the fires that do occur is the key to protecting the state.
The UN attempt to regulate AI is doomed to failure. If the USA doesn't veto it, and I'm sure it would, China and Russia will.
UN efforts to limit or regulate military AI may be failing before they even begin.
Arms control advocates had reason for hope when scores of countries met at the United Nations in Geneva last week to discuss the future of lethal autonomous weapons systems, or LAWS. Unlike previous meetings, this one involved a Group of Governmental Experts, a big bump in diplomatic formality and consequence, and those experts had a mandate to better define lethal autonomy in weapons. But hopes for even a small first step toward restricting “killer robots” were dashed as the meeting unfolded. Russia announced that it would adhere to no international ban, moratorium or regulation on such weapons. Complicating the issue, the meeting was run in a way that made any meaningful progress toward defining (and thus eventually regulating) LAWS nearly impossible. Multiple attendees pointed out that this played directly into Russia’s interests.
Russia’s Nov. 10 statement amounts to a lawyerly attempt to undermine any progress toward a ban. It argues that defining “lethal autonomous robots” is too hard, not yet necessary, and a threat to legitimate technology development.
Most people probably aren’t aware of this, but the 2016 U.S. Presidential election included a candidate who had a radio-frequency identification chip implanted in his hand. No, it wasn’t Donald J. Trump. It was Zoltan Istvan, a nominee representing the Silicon Valley-based Transhumanist Party, and his implanted chip unlocked his front door, provided computer password access and sent an auto-text that said: “Win in 2016!”
The transhumanist movement – employing technology and radical science to modify humans – offers a glimpse into the marriage of machines and people, the focus of a recent paper released by the Institute for Critical Infrastructure Technology (ICIT). With cybernetic implants already available to consumers, the prospect for techno-human transmutation – cyborgs – is not as far away as many may think.
“We are moving towards automation, we are moving towards machine learning,” said Parham Eftekhari (pictured), co-founder and senior fellow at ICIT. “We’re seeing it impact a lot of our society.”
Eftekhari stopped by the set of theCUBE, SiliconANGLE’s mobile livestreaming studio, and spoke with co-hosts John Furrier (@furrier) and Dave Vellante (@dvellante) at CyberConnect 2017 in New York City. They discussed ICIT’s recent cybersecurity research and the potential for increased government regulation.
Walmart is planning to try to automate the entire store.
Walmart (WMT) has been quietly testing out autonomous floor scrubbers during the overnight shifts in five store locations near the company’s headquarters in Bentonville, Arkansas.
A spokesperson for Walmart told FOX Business that the move, which was first reported by LinkedIn, is a “very small proof of concept pilot that we are running” and that the company still has a lot more to learn about how this technology “might work best in our different retail locations.”
In a recent article, I began to unpack Rodney Brooks’ October 2017 essay “The Seven Deadly Sins of AI Predictions.” Now I continue my analysis by looking into the faulty atheistic thinking that motivates the AI salvation preached by futurists such as Google’s Ray Kurzweil. Although Brooks does not address this worldview dimension, his critique of AI predictive sins provides a great opportunity for just that.
Brooks is a pioneer of robotic artificial intelligence (AI) and is MIT Panasonic Professor of Robotics Emeritus. He is also the founder and chief technology officer of Rethink Robotics, which makes cobots—robots designed to collaborate with humans in a shared industrial workspace.
Previously I discussed Brooks’ remark that “all the evidence that I see says we have no real idea yet how to build” the superintelligent devices that Kurzweil and like-minded singularity advocates imagine.
Millimeter waves, massive MIMO, full duplex, beamforming, and small cells are just a few of the technologies that could enable ultrafast 5G networks.
Today’s mobile users want faster data speeds and more reliable service. The next generation of wireless networks—5G—promises to deliver that, and much more. With 5G, users should be able to download a high-definition film in under a second (a task that could take 10 minutes on 4G LTE). And wireless engineers say these networks will boost the development of other new technologies, too, such as autonomous vehicles, virtual reality, and the Internet of Things.
If all goes well, telecommunications companies hope to debut the first commercial 5G networks in the early 2020s. Right now, though, 5G is still in the planning stages, and companies and industry groups are working together to figure out exactly what it will be. But they all agree on one matter: as the number of mobile users and their demand for data rise, 5G must handle far more traffic at much higher speeds than the base stations that make up today’s cellular networks.
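To make the scale of that speed gap concrete, here is a minimal back-of-the-envelope sketch in Python. The 5 GB film size and the roughly 100 Mbit/s and 10 Gbit/s rates are assumptions chosen purely for illustration; they are not figures taken from the article or from any standard.

```python
# Back-of-the-envelope comparison of 4G LTE vs. 5G download times.
# The film size and link rates below are illustrative assumptions only.

FILM_SIZE_GB = 5.0  # assumed size of a high-definition film, in gigabytes

# Assumed sustained data rates, in megabits per second.
ASSUMED_RATES_MBPS = {
    "4G LTE (~100 Mbit/s assumed)": 100,
    "5G (~10 Gbit/s assumed)": 10_000,
}


def download_time_seconds(size_gb: float, rate_mbps: float) -> float:
    """Time in seconds to transfer size_gb gigabytes at rate_mbps megabits per second."""
    size_megabits = size_gb * 8_000  # 1 GB = 8,000 Mbit in decimal units
    return size_megabits / rate_mbps


if __name__ == "__main__":
    for label, rate in ASSUMED_RATES_MBPS.items():
        seconds = download_time_seconds(FILM_SIZE_GB, rate)
        print(f"{label}: {seconds:,.1f} s (~{seconds / 60:.1f} min)")
```

With these particular assumptions the 4G transfer comes out to a few hundred seconds while the 5G transfer takes a few seconds; the sub-second figure quoted above would require peak rates well beyond the 10 Gbit/s assumed here, so treat the numbers as an order-of-magnitude illustration only.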
Furthermore, with advancements in quantum computing and machine learning, many notable public figures, including Stephen Hawking and Elon Musk, have voiced growing concern about the imminent threat of AI surpassing human intelligence (Gosset, 2017). For instance, Darrell M. West, a political scientist, has proposed a protectionist framework that appeals to transhumanism, in which he restructures socioeconomic policy to account for technology-induced unemployment. In particular, he posits that “Separating the dispersion of health care, disability, and pension benefits outside of employment offers workers with limited skills social benefits on a universal basis” (West, 2015). Building on this proposal, a more viable response to potential unemployment is a multi-faceted policy that improves STEM-related education across a broad economic base and provides retraining programs for the unskilled workforce. That is, with appropriate reform policies governing the future development of AI technologies, this sector can create an economic incentive for new job creation that is compatible with industrial development.
Prompt: What are the political implications of artificial intelligence technology and how should policy makers ensure this technology will benefit diverse sectors of society?
In recent years, the rapid development and mass proliferation of artificial intelligence have had various sociopolitical implications. It is a commonly held belief that the emergence of this technology will have an unprecedented impact on policies and political agendas. However, such discourse often lacks a geopolitical and social dimension, which limits the breadth of analysis. Further, little consideration has been given to potential employment and public policy reform. Growing concerns have been raised regarding the potential risk inherent in the evolution of strong AI, which provides the basis for transhumanism, whereby it is conjectured that AI will eventually be able to surpass human intelligence. As such, it is incumbent upon the upcoming generation of policymakers to implement and adopt necessary measures, which will provide a careful, multilateral framework, ultimately achieving market-oriented technological advancement with respect to employment and public policy.
Machine learning, at the interplay of computer science and neuroscience, is a rapidly developing field that has been a source of much political controversy in recent years. While emerging technologies have significantly improved production quality and efficiency across industries, they have also raised concerns such as job displacement and other unfavourable socioeconomic implications (Karsten & West, 2015). In particular, the growing shortage of job opportunities has contributed to increasing levels of unemployment and has, in various instances, led to unwanted economic stagnation. On the subject of potential future unemployment, many policymakers have proposed an increase in the Earned Income Tax Credit, which provides a collateral basic income and encourages profit-sharing (West, 2015).