
Deep Learning in Action | A talk by Juergen Schmidhuber, PhD, given in the Deep Learning in Action talk series in October 2015. He is a professor of computer science at the Dalle Molle Institute for Artificial Intelligence Research, part of the University of Applied Sciences and Arts of Southern Switzerland.

Juergen Schmidhuber, PhD | I review three decades of our research on both gradient-based and more general problem solvers that search the space of algorithms running on general-purpose computers with internal memory.

Architectures include traditional computers, Turing machines, recurrent neural networks, fast weight networks, stack machines, and others. Some of our algorithm searchers are based on algorithmic information theory and are optimal in asymptotic or other senses.
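To make the idea of a recurrent neural network with internal memory a little more concrete, here is a minimal sketch of a vanilla RNN cell whose hidden state is carried from one time step to the next. This is an illustration only, not Schmidhuber's own code; the weights, dimensions, and names are made up.

```python
import numpy as np

# Minimal vanilla RNN cell: the hidden state h acts as internal memory,
# carried from one time step to the next. Weights are random and the
# dimensions are arbitrary; this is purely illustrative.
rng = np.random.default_rng(0)
input_size, hidden_size = 4, 8

W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden (the memory loop)
b_h = np.zeros(hidden_size)

def rnn_forward(inputs):
    """Run a sequence of input vectors through the cell; return all hidden states."""
    h = np.zeros(hidden_size)
    states = []
    for x in inputs:
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)  # new state depends on the input and the old state
        states.append(h)
    return states

sequence = [rng.normal(size=input_size) for _ in range(5)]
print(rnn_forward(sequence)[-1].shape)  # (8,)
```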

Read more

“You like your Tesla, but does your Tesla like you?” My new story for TechCrunch on robots understanding beauty and even whether they like your appearance or not:


Robots are starting to appear everywhere: driving cars, cooking dinners and even serving as robotic pets.

But people don’t usually give machine intelligence much credit when it comes to judging beauty. That may change with the launch of the world’s first international beauty contest judged exclusively by a robot jury.

The contest, which requires participants to take selfies via a special app and submit them to the contest website, is touting sophisticated new facial-recognition algorithms that allow machines to judge beauty in new and improved ways.

The contest intends to have robots analyze the many age-related changes on the human face and evaluate how those changes are perceived by people of various ages, races, ethnicities and nationalities.
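The article does not describe the contest's actual algorithms, but the general recipe for an automated judge is to reduce each photo to a few face-analysis features and combine them into a single score. The sketch below is a hypothetical illustration of that idea; the features, weights, and numbers are placeholders, not the contest's real criteria.

```python
from dataclasses import dataclass

@dataclass
class FaceFeatures:
    symmetry: float         # 0..1, higher means a more symmetric face
    skin_uniformity: float  # 0..1, fewer wrinkles/blemishes gives a higher value
    apparent_age: float     # age in years as estimated from the photo
    stated_age: float       # age in years as reported by the participant

def score_face(f: FaceFeatures) -> float:
    """Combine hypothetical features into a single 0..1 'jury' score."""
    looks_younger = max(0.0, f.stated_age - f.apparent_age)
    return 0.5 * f.symmetry + 0.4 * f.skin_uniformity + 0.1 * min(looks_younger / 10.0, 1.0)

entries = {
    "entry_001": FaceFeatures(symmetry=0.82, skin_uniformity=0.74, apparent_age=29, stated_age=34),
    "entry_002": FaceFeatures(symmetry=0.77, skin_uniformity=0.81, apparent_age=41, stated_age=40),
}
ranking = sorted(entries, key=lambda name: score_face(entries[name]), reverse=True)
print(ranking)  # entries ordered from highest to lowest score
```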

Read more

A researcher at Singapore’s Nanyang Technological University (NTU) has developed a new technology that provides real-time detection, analysis, and optimization data that could potentially save a company 10 percent on its energy bill and lessen its carbon footprint. The technology is an algorithm that primarily relies on data from ubiquitous devices to better analyze energy use. The software uses data from computers, servers, air conditioners, and industrial machinery to monitor temperature, data traffic and the computer processing workload. Data from these already-present appliances are then combined with the information from externally placed sensors that primarily monitor ambient temperature to analyze energy consumption and then provide a more efficient way to save energy and cost.
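As a rough illustration of the kind of data fusion described above, the sketch below combines readings that servers already report (CPU load, network traffic) with an external ambient-temperature sensor to estimate power draw and flag lightly loaded racks. The simple linear model and its coefficients are assumptions for illustration, not Wen's actual algorithm.

```python
# Toy estimate of per-rack power from readings the equipment already provides,
# plus an external ambient-temperature sensor. Coefficients are illustrative.
def estimate_power_kw(cpu_load, traffic_gbps, ambient_temp_c):
    it_load = 2.0 + 3.0 * cpu_load + 0.2 * traffic_gbps                 # idle draw + compute + network
    cooling = it_load * (0.3 + 0.02 * max(0.0, ambient_temp_c - 20.0))  # overhead grows with warmer air
    return it_load + cooling

readings = [
    {"rack": "A1", "cpu_load": 0.85, "traffic_gbps": 4.2, "ambient_temp_c": 24.0},
    {"rack": "B3", "cpu_load": 0.20, "traffic_gbps": 0.6, "ambient_temp_c": 24.0},
]

for r in readings:
    kw = estimate_power_kw(r["cpu_load"], r["traffic_gbps"], r["ambient_temp_c"])
    # Continuously flagging under-used racks for consolidation is one of the
    # decisions that could add up to the roughly 10 percent saving mentioned above.
    note = "candidate for consolidation" if r["cpu_load"] < 0.3 else "ok"
    print(f"{r['rack']}: ~{kw:.1f} kW ({note})")
```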

The energy-saving computer algorithm was developed by NTU’s Wen Yonggang, an assistant professor at the School of Computer Engineering’s Division of Networks & Distributed Systems. Wen specializes in machine-to-machine communication and computer networking, including looking at social media networks, cloud-computing platforms, and big data systems.

Most data centers consume huge amounts of electrical power, leading to high levels of energy waste, according to Wen’s website. Part of his research involves finding ways to reduce energy waste and stabilize power systems by scaling energy levels temporally and spatially.

Read more

In 2010, a Canadian company called D-Wave announced that it had begun production of what it called the world’s first commercial quantum computer, which was based on theoretical work done at MIT. Quantum computers promise to solve some problems significantly faster than classical computers—and in at least one case, exponentially faster. In 2013, a consortium including Google and NASA bought one of D-Wave’s machines.

Over the years, critics have argued that it’s unclear whether the D-Wave machine is actually harnessing quantum phenomena to perform its calculations, and if it is, whether it offers any advantages over classical computers. But this week, a group of Google researchers released a paper claiming that in their experiments, a quantum algorithm running on their D-Wave machine was 100 million times faster than a comparable classical algorithm.

Scott Aaronson, an associate professor of electrical engineering and computer science at MIT, has been following the D-Wave story for years. MIT News asked him to help make sense of the Google researchers’ new paper.

Read more

Machine learning is a bit of a buzz term that describes the way artificial intelligence (AI) can begin to make sense of the world around it by being exposed to massive amounts of data.

But a new algorithm developed by researchers in the US has dramatically cut down the amount of learning time required for AI to teach itself new things, with a machine capable of recognising and drawing visual symbols that are largely indistinguishable from those drawn by people.

The research highlights how, for all our imperfections, people are actually pretty good at learning things. Whether we’re learning a written character, how to operate a tool, or how to perform a dance move, humans only need a few examples before we can replicate what we’ve been shown.
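A deliberately simple way to see what "learning from a few examples" can look like in code is a nearest-neighbour classifier that stores a single example per symbol class and assigns a new drawing to the closest one. This baseline sketch is not the probabilistic program-learning method the researchers actually used; the tiny binary grids below are invented for brevity.

```python
import numpy as np

def to_vector(grid):
    """Flatten a small binary image into a feature vector."""
    return np.asarray(grid, dtype=float).ravel()

# One stored example per symbol class (5x5 binary 'drawings', invented here).
one_shot_examples = {
    "plus":  to_vector([[0,0,1,0,0],[0,0,1,0,0],[1,1,1,1,1],[0,0,1,0,0],[0,0,1,0,0]]),
    "slash": to_vector([[0,0,0,0,1],[0,0,0,1,0],[0,0,1,0,0],[0,1,0,0,0],[1,0,0,0,0]]),
}

def classify(grid):
    """Return the class whose single stored example is closest to the query."""
    x = to_vector(grid)
    return min(one_shot_examples, key=lambda c: np.linalg.norm(x - one_shot_examples[c]))

query = [[0,0,1,0,0],[0,0,1,0,0],[1,1,1,1,0],[0,0,1,0,0],[0,0,1,0,0]]  # a slightly noisy plus
print(classify(query))  # -> "plus"
```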

Read more

Governments and leading computing companies such as Microsoft, IBM, and Google are trying to develop what are called quantum computers because using the weirdness of quantum mechanics to represent data should unlock immense data-crunching powers. Computing giants believe quantum computers could make their artificial-intelligence software much more powerful and unlock scientific leaps in areas like materials science. NASA hopes quantum computers could help schedule rocket launches and simulate future missions and spacecraft. “It is a truly disruptive technology that could change how we do everything,” said Deepak Biswas, director of exploration technology at NASA’s Ames Research Center in Mountain View, California.

Biswas spoke at a media briefing at the research center about the agency’s work with Google on a machine they bought in 2013 from Canadian startup D-Wave Systems, which is marketed as “the world’s first commercial quantum computer.” The computer is installed at Ames and operates on data using a superconducting chip called a quantum annealer. A quantum annealer is hard-coded with an algorithm suited to what are called “optimization problems,” which are common in machine-learning and artificial-intelligence software.
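For readers wondering what such an "optimization problem" looks like in the form an annealer accepts, these problems are commonly written as a QUBO: minimize an energy over binary variables with linear and pairwise terms. The tiny instance below is solved by exhaustive search purely to show the form of the problem; it says nothing about how D-Wave's hardware actually works, and the coefficients are made up.

```python
import itertools

# QUBO: minimize sum of Q[(i, i)] * x_i + Q[(i, j)] * x_i * x_j over x_i in {0, 1}.
Q = {
    (0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,   # linear terms (biases)
    (0, 1):  2.0, (1, 2):  2.0,                 # quadratic terms (couplings)
}

def energy(x):
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

best = min(itertools.product([0, 1], repeat=3), key=energy)
print(best, energy(best))  # (1, 0, 1) -2.0 for this toy instance
```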

However, D-Wave’s chips are controversial among quantum physicists. Researchers inside and outside the company have been unable to conclusively prove that the devices can tap into quantum physics to beat out conventional computers.

Read more

What does this mean for us?


Recently, quantum gates and quantum circuits have been found when portfolios of stocks were simulated in quantum computation processes, pointing to the existence of a bizarre quantum code beneath stock market transactions. The quantum code of the stock market might prove to have a more profound significance if it is related to the recent finding of quantum codes at the deepest levels of our reality, such as the quantum mechanics of black holes and the space-time of the universe. Could this mysterious stock market quantum code be a tiny fragment of a quantum code that our universe uses to create physical reality?

John Preskill’s talk “Is spacetime a quantum error-correcting code?”, held at the Center for Quantum Information and Control, University of New Mexico, and previously at the Kavli Institute for Theoretical Physics, may represent a turning point in physical research into the existence and evolution of our Universe. The essence of this talk may forever change our understanding of the Universe, shifting the perspective of physical research from masses and energies to codes of information theory.

John Preskill, professor at the California Institute of Technology, is best known for his remarkable developments in quantum computational models, more specifically topological quantum computing. Preskill’s lectures have inspired a whole generation of brilliant physicists working on quantum computation. This experience in quantum computing may well equip Dr. Preskill to knock at the Universe’s gates with the unique perspective of a quantum-code reality.
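To make the phrase "quantum error-correcting code" a little more concrete, the sketch below classically simulates the simplest textbook example, the three-qubit bit-flip (repetition) code: a logical qubit a|0> + b|1> is encoded as a|000> + b|111>, and a single bit-flip error on any physical qubit can be detected by parity checks and undone. This is only an illustration of the concept the talk builds on, not anything specific to spacetime or Preskill's argument.

```python
import numpy as np

def encode(a, b):
    """Encode the logical qubit a|0> + b|1> as a|000> + b|111>."""
    state = np.zeros(8, dtype=complex)  # amplitudes over the 8 basis states |000>..|111>
    state[0b000] = a
    state[0b111] = b
    return state

def bit_flip(state, qubit):
    """Apply an X (bit-flip) error on the given physical qubit (0, 1, or 2)."""
    flipped = np.zeros_like(state)
    for basis in range(8):
        flipped[basis ^ (1 << qubit)] = state[basis]
    return flipped

def correct(state):
    """Measure the two parity checks and flip back the qubit they point to."""
    basis = next(b for b in range(8) if abs(state[b]) > 1e-12)  # any basis state in the support
    z01 = ((basis >> 0) ^ (basis >> 1)) & 1   # parity of qubits 0 and 1
    z12 = ((basis >> 1) ^ (basis >> 2)) & 1   # parity of qubits 1 and 2
    which = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(z01, z12)]
    return state if which is None else bit_flip(state, which)

a, b = 0.6, 0.8                    # an arbitrary logical qubit (|a|^2 + |b|^2 = 1)
noisy = bit_flip(encode(a, b), 1)  # an error hits physical qubit 1
print(np.allclose(correct(noisy), encode(a, b)))  # True: the logical qubit is recovered
```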

Read more

Sir Winston Churchill often spoke of World War 2 as the “Wizard War”. Both the Allies and Axis powers were in a race to gain the electronic advantage over each other on the battlefield. Many technologies were born during this time – one of them being the ability to decipher coded messages. The devices that were able to achieve this feat were the precursors to the modern computer. In 1946, the US Military developed the ENIAC, or Electronic Numerical Integrator And Computer. Using over 17,000 vacuum tubes, the ENIAC was a few orders of magnitude faster than all previous electro-mechanical computers. The part that excited many scientists, however, was that it was programmable. It was the notion of a programmable computer that would give rise to the idea of artificial intelligence (AI).

As time marched forward, computers became smaller and faster. The invention of the transistor semiconductor gave rise to the microprocessor, which accelerated the development of computer programming. AI began to pick up steam, and pundits began to make grand claims of how computer intelligence would soon surpass our own. Programs like ELIZA and Blocks World fascinated the public and certainly gave the perception that when computers became faster, as they surely would in the future, they would be able to think like humans do.

But it soon became clear that this would not be the case. While these and many other AI programs were good at what they did, neither they nor their algorithms were adaptable. They were ‘smart’ at their particular task, and could even be considered intelligent judging from their behavior, but they had no understanding of the task, and didn’t hold a candle to the intellectual capabilities of even a typical lab rat, let alone a human.

Read more

London, UK (PRWEB UK), 19 November 2015.

What matters in beauty is perception. Perception is how you and other people see you, and this perception is almost always biased. Still, healthy people look more attractive regardless of their age and nationality.

This has enabled a team of biogerontologists and data scientists to develop a set of algorithms that can accurately evaluate the criteria linked to the perception of human beauty and health where it matters most: the human face. The team believes that in the near future machines will be able to extract a great deal of vital medical information about people’s health just by processing their photos. But evaluating beauty and health is not enough. The team’s challenge is to find effective ways to slow down ageing and help people look healthy and beautiful.

Read more