
The point of the experiment was to show how easy it is to bias any artificial intelligence if you train it on biased data. The team wisely didn’t speculate about whether exposure to graphic content changes the way a human thinks. They’ve done other experiments in the same vein, too, using AI to write horror stories, create terrifying images, judge moral decisions, and even induce empathy. This kind of research is important. We should be asking the same questions of artificial intelligence as we do of any other technology, because it is far too easy for unintended consequences to hurt the people the system wasn’t designed to see. Naturally, this is the basis of sci-fi: imagining possible futures and showing what could lead us there. Isaac Asimov wrote the “Three Laws of Robotics” because he wanted to imagine what might happen if they were contravened.

Even though artificial intelligence isn’t a new field, we’re a long, long way from producing something that, as Gideon Lewis-Kraus wrote in The New York Times Magazine, can “demonstrate a facility with the implicit, the interpretive.” But the field still hasn’t undergone the kind of reckoning that causes a discipline to grow up. Physics, you recall, gave us the atom bomb, and every person who becomes a physicist knows they might be called on to help create something that could fundamentally alter the world. Computer scientists are beginning to realize this, too. At Google this year, 5,000 employees protested, and a number resigned, over the company’s involvement with Project Maven, a Pentagon initiative that uses machine learning to improve the accuracy of drone strikes.

Norman is just a thought experiment, but the questions it raises about machine learning algorithms making judgments and decisions based on biased data are urgent and necessary. Those systems, for example, are already used in credit underwriting, deciding whether or not loans are worth guaranteeing. What if an algorithm decides you shouldn’t buy a house or a car? To whom do you appeal? What if you’re not white and a piece of software predicts you’ll commit a crime because of that? There are many, many open questions. Norman’s role is to help us figure out their answers.

Read more

Recently we saw a new “master algorithm” that could be used to create the first generation of superintelligent machines. Now a team of researchers from Maryland, USA, announced this week that they’ve invented a general Artificial Intelligence (AI) method that lets machines identify and process 3D images without requiring humans to go through the tedium of inputting specific information for each and every instance, scenario, difference, change and category that could crop up. They claim it’s a world first, even though it follows on from a not-dissimilar breakthrough from Google DeepMind, whose AlphaZero platform recently taught itself a mix of board games, including chess, to grandmaster level in just four hours.

Read more

For the first time, Volkswagen experts have succeeded in simulating industrially relevant molecules using a quantum computer. This is especially important for the development of high-performance electric vehicle batteries. The experts have successfully simulated molecules such as lithium-hydrogen and carbon chains. Now they are working on more complex chemical compounds. In the long term, they want to simulate the chemical structure of a complete electric vehicle battery on a quantum computer. Their objective is to develop a “tailor-made battery”, a configurable chemical blueprint that is ready for production. Volkswagen is presenting its research work connected with quantum computing at the CEBIT technology show (Hanover, June 12–15).

Martin Hofmann, CIO of the Volkswagen Group, says: “We are focusing on the modernization of IT systems throughout the Group. The objective is to intensify the digitalization of work processes – to make them simpler, more secure and more efficient and to support new business models. This is why we are combining our core task with the introduction of specific key technologies for Volkswagen. These include the Internet of Things and artificial intelligence, as well as quantum computing.”

The objective is a “tailor-made battery”, a configurable blueprint

Using newly developed algorithms, the Volkswagen experts have laid the foundation for simulating and optimizing the chemical structure of high-performance electric vehicle batteries on a quantum computer. In the long term, such a quantum algorithm could simulate the chemical composition of a battery on the basis of different criteria such as weight reduction, maximum power density or cell assembly and provide a design which could be used directly for production. This would significantly accelerate the battery development process, which has been time-consuming and resource-intensive to date.

Read more

Together, this full quantum stack pairs with familiar tools to create an integrated, streamlined environment for quantum processing.

Scalability, from top to bottom

Quantum computers can help address some of the world’s toughest problems, provided the quantum computer has enough high-quality qubits to find the solution. While the quantum systems of today may be able to add a high number of qubits, the quality of the qubits is the key factor in creating useful scale. From the cooling system to qubits to algorithms, scalability is a fundamental part of the Microsoft vision for quantum computing.

Read more

A diagrammatic explanation of how machine consciousness might be feasible.


About 20 years ago I gave my first talk on how to achieve consciousness in machines, at a World Future Society conference, and went on to discuss how we would co-evolve with machines. I’ve lectured on machine consciousness hundreds of times but never produced any clear slides that explain my ideas properly. I thought it was about time I did. My belief is that today’s deep neural networks, using feed-forward processing with back-propagation training, cannot become conscious. No digital algorithmic neural network can, even though such networks can certainly produce extremely good levels of artificial intelligence. By contrast, nature also uses neurons, yet it produces conscious machines such as humans easily. I think the key difference is not just that nature uses analog adaptive neural nets rather than digital processing (a view I believe Hans Moravec first proposed, and one I readily accepted), but also that nature uses large groups of these analog neurons incorporating feedback loops. Those loops act both as a sort of short-term memory and as a way to sense the sensing process as it happens, a mechanism that could explain consciousness. That feedback is critically important in the emergence of consciousness, in my opinion. I believe that if the neural network AI people stop barking up the barren back-prop tree and start climbing the feedback tree, we could have conscious machines in no time, though Moravec is still probably right that these need to be analog to enable true real-time processing as opposed to a simulation of it.
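The architectural contrast here can be shown with a toy sketch (illustrative only, and of course nothing to do with consciousness itself): a memoryless feed-forward unit responds only to its current input, while a unit with a self-feedback loop retains a decaying trace of past inputs.

```python
# Toy contrast: a memoryless feed-forward unit vs a unit whose
# self-feedback loop acts as a short-term memory of recent inputs.

def feedforward(x, w=1.0):
    # Output depends only on the current input; no history survives.
    return w * x

def make_feedback_unit(decay=0.5):
    state = 0.0
    def step(x):
        nonlocal state
        # The feedback term mixes the previous state back in, so the
        # unit's output carries a decaying trace of earlier inputs.
        state = decay * state + (1 - decay) * x
        return state
    return step

unit = make_feedback_unit()
inputs = [1.0, 0.0, 0.0]
ff_out = [feedforward(x) for x in inputs]   # [1.0, 0.0, 0.0] - forgets at once
fb_out = [unit(x) for x in inputs]          # [0.5, 0.25, 0.125] - echoes persist
```

After the input goes silent, the feed-forward unit outputs zero immediately, while the feedback unit keeps echoing the earlier stimulus at decaying strength.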

I may be talking nonsense of course, but here are my thoughts, finally explained as simply and clearly as I can. These slides illustrate only the simplest forms of consciousness. Obviously our brains are highly complex and have evolved many higher-level architectures, control systems, complex senses and communication, but I think the basic foundations of biomimetic machine consciousness can be achieved as follows:

Read more

The hysteria about the future of artificial intelligence (AI) is everywhere. There seems to be no shortage of sensationalist news about how AI could cure diseases, accelerate human innovation and improve human creativity.

Just looking at the media headlines, you might think that we are already living in a future where AI has infiltrated every aspect of society.

While it is undeniable that AI has opened up a wealth of promising opportunities, it has also led to the emergence of a mindset that can be best described as “AI solutionism”. This is the philosophy that, given enough data, machine learning algorithms can solve all of humanity’s problems.

Read more

The possibility of time travel through the geodesics of vacuum solutions in first order gravity is explored. We present explicit examples of such geometries, which contain degenerate as well as nondegenerate tetrad fields that are sewn together continuously over different regions of the spacetime.

These classical solutions to the field equations satisfy the energy conditions.

Read more

Over the last few years, Google and Coursera have regularly teamed up to launch a number of online courses for developers and IT pros. Among those was the Machine Learning Crash Course, which provides developers with an introduction to machine learning. Now, building on that, the two companies are launching a machine learning specialization on Coursera. This new specialization, which consists of five courses, has an even more practical focus.

The new specialization, called “Machine Learning with TensorFlow on Google Cloud Platform,” has students build real-world machine learning models. It takes them from setting up their environment to learning how to create and sanitize datasets to writing distributed models in TensorFlow, improving the accuracy of those models and tuning them to find the right parameters.

As Google’s Big Data and Machine Learning Tech Lead Lak Lakshmanan told me, his team heard that students and companies really liked the original machine learning course but wanted an option to dig deeper into the material. Students wanted to know not just how to build a basic model but also how to then use it in production in the cloud, for example, or how to build the data pipeline for it and figure out how to tune the parameters to get better results.
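The parameter-tuning idea is easy to illustrate even outside TensorFlow (a hypothetical toy example, not course material): sweep candidate learning rates for gradient descent on a one-dimensional quadratic and keep whichever yields the lowest final loss.

```python
# Toy hyperparameter sweep: find the learning rate that minimizes the
# final loss of gradient descent on f(x) = x**2 (minimum at x = 0).

def final_loss(lr, steps=20):
    x = 5.0                  # start far from the minimum
    for _ in range(steps):
        x -= lr * 2 * x      # gradient of x**2 is 2x
    return x * x

candidates = [0.001, 0.01, 0.1, 0.5]
best_lr = min(candidates, key=final_loss)
```

The same loop structure scales up to real models: replace `final_loss` with a training-and-validation run and sweep over whatever parameters matter.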

Read more

Professor Daniel Kahneman was awarded a Nobel Prize for his work on the psychology of judgment and decision-making, as well as behavioral economics. In this age of human/machine collaboration and shared learning, IDE Director, Erik Brynjolfsson, asked Kahneman about the perils, as well as the potential, of machine-based decision-making. The conversation took place at a recent conference, The Future of Work: Capital Markets, Digital Assets, and the Disruption of Labor, in New York City. Some key highlights follow.



Erik Brynjolfsson: We heard today about algorithmic bias and about human biases. You are one of the world’s experts on human biases, and you’re writing a new book on the topic. What are the bigger risks — human or the algorithmic biases?

Daniel Kahneman: It’s pretty obvious that it would be human biases, because you can trace and analyze algorithms.

Read more