Artificial intelligence (AI) interfaces will take over from smartphones within five years, according to a survey of more than 5,000 smartphone customers in nine countries conducted by Ericsson ConsumerLab for the fifth edition of its annual trend report, 10 Hot Consumer Trends 2016 (and beyond).
Smartphone users believe AI will take over many common activities, such as searching the net, providing travel guidance, and acting as a personal assistant. The survey found that 44 percent think an AI system would be as good as a teacher, and one third would like an AI interface to keep them company. A third would rather trust the fidelity of an AI interface than a human for sensitive matters, and 29 percent agree they would feel more comfortable discussing their medical condition with an AI system.
If human-less self-driving cars of the future creep you out, then this latest experimental automotive technology from China might offer you some respite. Or creep you out even more. Researchers from the port city of Tianjin have revealed what they claim is the country's first-ever car to be driven without the use of human hands or feet but with a driver still in control. All it takes is some brain power. And some highly specialized equipment, of course.
Mind-reading devices aren’t actually new. In fact, many companies and technologies make that claim year after year, but few have actually been able to deliver a consumer product, with most successful prototypes designed for therapeutic or medical uses. The theory, however, is the same throughout. Sensors read electroencephalogram (EEG) signals from the wearer’s brain. These signals are then processed and interpreted as commands for a computer. In this case, the commands are mapped to car controls.
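The signal-to-command pipeline described above can be sketched in a few lines. This is a hypothetical toy, not the Tianjin team's actual system: the command names, frequency bands, and thresholds are all illustrative assumptions. A common approach classifies the relative power of EEG frequency bands (e.g. beta activity for concentration, alpha for relaxation) and maps the result to a discrete control.

```python
# Hypothetical EEG-to-command sketch; bands, thresholds, and command
# names are illustrative assumptions, not the actual Chinese system.
import numpy as np

COMMANDS = {"idle": 0, "accelerate": 1, "brake": 2}

def band_power(signal, fs, lo, hi):
    """Average spectral power of `signal` in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return power[mask].mean()

def classify(signal, fs=256):
    """Map band activity to a discrete car command (toy rule)."""
    beta = band_power(signal, fs, 13, 30)   # associated with focus
    alpha = band_power(signal, fs, 8, 12)   # associated with relaxation
    if beta > 2 * alpha:
        return "accelerate"                 # strong focus
    if alpha > 2 * beta:
        return "brake"                      # relaxed state
    return "idle"

# A pure 20 Hz sine has all its power in the beta band:
t = np.arange(0, 1, 1 / 256)
print(classify(np.sin(2 * np.pi * 20 * t)))  # "accelerate"
```

Real systems add artifact rejection, per-user calibration, and safety interlocks; the point here is only the shape of the mapping from raw EEG to a control signal.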
The application of direct brain control to driving is a double-edged sword. On the one hand, removing the delay between brain signal and muscle movement, which can itself be error-prone, could actually improve driver safety. On the other hand, given how easily drivers can be distracted even while their hands are on the wheel, the idea is understandably frightening to some.
Google appears to be more confident about the technical capabilities of its D-Wave 2X quantum computer, which it operates alongside NASA at the U.S. space agency’s Ames Research Center in Mountain View, California.
D-Wave’s machines are the closest thing we have today to quantum computers, which work with quantum bits, or qubits, each of which can be zero, one, or both at once, instead of more conventional bits. The superposition of these qubits can allow great numbers of computations to be performed simultaneously, making a quantum computer highly desirable for certain types of problems.
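The superposition idea above can be illustrated with a toy statevector simulation (an assumption-free classical model, not D-Wave's hardware): n qubits are described by 2**n complex amplitudes, so putting each qubit into an equal superposition yields a state that spans every basis state at once.

```python
# Toy statevector illustration: n qubits carry 2**n amplitudes at once.
import numpy as np

zero = np.array([1.0, 0.0])                    # the |0> state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

# Put 3 qubits each into equal superposition via the Hadamard gate.
one_qubit = H @ zero
state = np.kron(np.kron(one_qubit, one_qubit), one_qubit)

probs = np.abs(state) ** 2                     # Born-rule probabilities
print(len(state), probs)  # 8 amplitudes, each basis state with prob 1/8
```

Measurement collapses this to a single outcome, which is why exploiting the parallelism requires carefully designed algorithms rather than simply "running everything at once."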
The Google Quantum Artificial Intelligence (AI) Lab today announced that, in two tests, it has found the D-Wave machine to be considerably faster than simulated annealing, a conventional optimization algorithm running on a classical computer chip.
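For context on the classical baseline Google benchmarked against, here is a minimal simulated annealing sketch (illustrative only, not Google's benchmark code). The algorithm proposes random local moves, always accepts improvements, and accepts worsening moves with a probability that shrinks as a "temperature" parameter cools, which lets it escape local minima.

```python
# Minimal simulated annealing on a toy problem: minimize f(x) = x^2
# over the integers. Illustrative sketch, not a benchmark implementation.
import math
import random

def simulated_annealing(cost, start, steps=10000, t0=10.0):
    random.seed(0)                                # reproducible run
    x, best = start, start
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9        # linear cooling schedule
        candidate = x + random.choice([-1, 1])    # random local move
        delta = cost(candidate) - cost(x)
        # Always accept downhill moves; accept uphill moves with
        # Boltzmann probability exp(-delta / t), which shrinks as t cools.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x = candidate
        if cost(x) < cost(best):
            best = x
    return best

print(simulated_annealing(lambda x: x * x, start=40))  # finds 0
```

The quantum-annealing claim is that hardware exploiting tunneling can traverse certain cost landscapes faster than this kind of thermal hill-climbing can.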
Can we end violence? Can we create greater emotional well-being and intellectual equality for the greater good of humanity? Will we be able to keep up with machines? How can we augment our intelligence? Could we cure mental illness? After advances in aging, the next major area of research, from the standpoint of eliminating personal and global suffering, would be upgrades in intelligence. Transhumanist values, at their core, aim to eliminate suffering and existential risk to people’s lives. With well-founded logic, these goals are not completely out of reach. As usual, though, we will have to approach this complex issue from many angles and from the standpoint of a systems engineer. But let’s look at some fun stuff before we get into the heavy stuff.
The Benefits of Intelligence Upgrades
So, what is the benefit of intelligence upgrades for everyday people? We live in a time of exponential technology and vast amounts of fast-paced information, breakthroughs, and invention. The most obvious answer, then, is coping with the massive amount of information one needs to keep up with daily to stay on top of the game in work, research, or business. Sometimes it is our mere storage capacity that limits our ability to interact with this information; at other times it is our processing speed, and most fundamentally the rate at which we can take in new information.

In 2012, a prosthetic chip was invented that uses electrodes to expand one’s memory storage. Now, with biotechnology predicted to move more quickly in 2016 and Google ready to back more companies in biotechnology, it may become possible to augment or program selective photographic memory. This is just one example of what could be imagined, and worked toward, by combining electronics and gene editing. Many big breakthroughs in enhanced intelligence could be achieved in the future, and the implications for business professionals, scientists, and the progress of technology would be astounding if upgrades like these were available.

Personally, I can’t wait for the day when my personal A.I., through Google Glass or some sort of eyewear or earpiece, could read my brainwaves so I can type and do all my work through what would be a virtual form of telepathy. I could instantly store everything I will need later in the cloud, exactly where I want it on my computer, with almost no delay, because, well, how could there be? Time is everything.
Governments and leading computing companies such as Microsoft, IBM, and Google are trying to develop what are called quantum computers because using the weirdness of quantum mechanics to represent data should unlock immense data-crunching powers. Computing giants believe quantum computers could make their artificial-intelligence software much more powerful and unlock scientific leaps in areas like materials science. NASA hopes quantum computers could help schedule rocket launches and simulate future missions and spacecraft. “It is a truly disruptive technology that could change how we do everything,” said Deepak Biswas, director of exploration technology at NASA’s Ames Research Center in Mountain View, California.
Biswas spoke at a media briefing at the research center about the agency’s work with Google on a machine they bought in 2013 from Canadian startup D-Wave Systems, which is marketed as “the world’s first commercial quantum computer.” The computer is installed at NASA’s Ames Research Center in Mountain View, California, and operates on data using a superconducting chip called a quantum annealer. A quantum annealer is hard-coded with an algorithm suited to what are called “optimization problems,” which are common in machine-learning and artificial-intelligence software.
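The "optimization problems" an annealer is hard-coded for are typically expressed in QUBO form: minimize a quadratic function of binary variables. The tiny example below shows the problem shape by brute force (a hypothetical three-variable instance of my own construction, solved classically; an annealer would search the same landscape in hardware).

```python
# A quantum annealer targets QUBO problems: minimize sum of
# Q[i][j] * x[i] * x[j] over binary vectors x. Brute-force a toy instance.
from itertools import product

# Hypothetical 3-variable QUBO (upper-triangular convention):
# negative diagonal biases favor x_i = 1, positive couplers
# penalize turning on adjacent pairs together.
Q = {(0, 0): -1, (1, 1): -1, (2, 2): -1,
     (0, 1): 2, (1, 2): 2}

def energy(x):
    """Evaluate the QUBO objective for a binary assignment x."""
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

# Exhaustive search over all 2**3 assignments (feasible only for toys).
best = min(product([0, 1], repeat=3), key=energy)
print(best, energy(best))  # (1, 0, 1) with energy -2
```

Brute force scales as 2**n, which is exactly why hardware that can relax toward low-energy states directly is attractive for larger instances.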
However, D-Wave’s chips are controversial among quantum physicists. Researchers inside and outside the company have been unable to conclusively prove that the devices can tap into quantum physics to beat out conventional computers.
It seems like every day we’re warned about a new, AI-related threat that could ultimately bring about the end of humanity. According to author and Oxford professor Nick Bostrom, those existential risks aren’t so black and white, and an individual’s ability to influence those risks might surprise you.
Bostrom defines an existential risk as one that threatens the extinction of Earth-originating life or the permanent and drastic destruction of our potential for future development, but he also notes that there is no single methodology applicable to all the different existential risks (as more technically elaborated upon in this Future of Humanity Institute study). Rather, he considers it an interdisciplinary endeavor.
“If you’re wondering about asteroids, we have telescopes we can study them with, we can look at past crater impacts and derive hard statistical data on that,” he said. “We find that the risk of asteroids is extremely small, and likewise for a few of the other risks that arise from nature. But other really big existential risks are not in any direct way susceptible to this kind of rigorous quantification.”
In Bostrom’s eyes, the most significant risks we face arise from human activity, and particularly the potentially dangerous technological discoveries that await us in the future. Though he believes there’s no way to quantify the possibility of humanity being destroyed by a super-intelligent machine, a more important variable is human judgment. To improve assessment of existential risk, Bostrom said we should think carefully about how these judgments are produced and whether the biases that affect those judgments can be avoided.
“If your task is to hammer a nail into a board, reality will tell you if you’re doing it right or not. It doesn’t really matter if you’re a Communist or a Nazi or whatever crazy ideologies you have, you’ll learn quite quickly if you’re hammering the nail in wrong,” Bostrom said. “If you’re wrong about what the major threats are to humanity over the next century, there is no reality check to tell you if you’re right or wrong. Any weak bias you might have might distort your belief.”
Noting that humanity doesn’t really have any policy designed to steer a particular course into the future, Bostrom said many existential risks arise from global coordination failures. While he believes society might one day evolve into a unified global government, the question of when this uniting occurs will hinge on individual contributions.
“Working toward global peace may not be the best project, just because it’s very difficult to make a big difference there if you’re a single individual or a small organization. Perhaps your resources would be better put to use if they were focused on some problem that is much more neglected, such as the control problem for artificial intelligence,” Bostrom said. “(For example) do the technical research to ensure that, if we gained the ability to create superintelligence, the outcome would be safe and beneficial. That’s where an extra million dollars in funding or one extra very talented person could make a noticeable difference… far more than doing general research on existential risks.”
Looking to the future, Bostrom feels there is an opportunity to show that we can do serious research to change global awareness of existential risks and bring them into a wider conversation. While that research doesn’t assume the human condition is fixed, there is a growing ecosystem of people who are genuinely trying to figure out how to save the future, he said. As an example of how much influence one can have in reducing existential risk, Bostrom noted that a lot more people in history have believed they were Napoleon, yet there was actually only one Napoleon.
“You don’t have to try to do it yourself… it’s usually more efficient to each do whatever we specialize in. For most people, the most efficient way to contribute to eliminating existential risk would be to identify the most efficient organizations working on this and then support those,” Bostrom said. “Given the values on the line, in terms of how many happy lives could exist in humanity’s future, even a very small probability of impact would probably be worthwhile to pursue.”