
The snake bites its tail

Google AI can independently discover AI methods, then optimize them.

It evolves algorithms from scratch, using only basic mathematical operations, rediscovering fundamental ML techniques and showing the potential to discover novel algorithms.

AutoML-Zero: new research that can rediscover fundamental ML techniques by searching a space of different ways of combining basic mathematical operations. arXiv: https://arxiv.org/abs/2003.


Machine learning (ML) has seen tremendous successes recently, which were made possible by ML algorithms like deep neural networks that were discovered through years of expert research. The difficulty involved in this research fueled AutoML, a field that aims to automate the design of ML algorithms. So far, AutoML has focused on constructing solutions by combining sophisticated hand-designed components. A typical example is that of neural architecture search, a subfield in which one builds neural networks automatically out of complex layers (e.g., convolutions, batch-norm, and dropout), and the topic of much research.
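For a flavor of what such a search looks like, here is a toy sketch of the NAS idea (hypothetical code, vastly simpler than real systems): sample architectures as sequences of hand-designed layers and keep the best-scoring one.

```python
# Toy illustration of architecture search over hand-designed components
# (hypothetical code, far simpler than real NAS systems): randomly sample
# layer sequences and keep the best-scoring one.
import random

LAYERS = ["conv3x3", "conv5x5", "batchnorm", "dropout", "maxpool"]

def sample_architecture(depth=4):
    # An "architecture" here is just an ordered list of layer types.
    return [random.choice(LAYERS) for _ in range(depth)]

def evaluate(arch):
    # Stand-in for training the network and measuring validation accuracy.
    return random.random()

best = max((sample_architecture() for _ in range(100)), key=evaluate)
print(best)
```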

An alternative approach to using these hand-designed components in AutoML is to search for entire algorithms from scratch. This is challenging because it requires the exploration of vast and sparse search spaces, yet it has great potential benefits: it is not biased toward what we already know and potentially allows for the discovery of new and better ML architectures. By analogy, building a house from scratch allows more flexibility and potential for improvement than assembling one from prefabricated rooms, but the design is harder to discover because there are many more ways to combine bricks and mortar than there are to combine pre-made rooms. As such, early research into learning algorithms from scratch focused on just one aspect of the algorithm, such as the learning rule, to reduce the search space and compute required, and the approach has not been revisited much since the early 90s. Until now.

Extending our research into evolutionary AutoML, our recent paper, to be published at ICML 2020, demonstrates that it is possible to successfully evolve ML algorithms from scratch. The approach we propose, called AutoML-Zero, starts from empty programs and, using only basic mathematical operations as building blocks, applies evolutionary methods to automatically find the code for complete ML algorithms. Given small image classification problems, our method rediscovered fundamental ML techniques, such as 2-layer neural networks trained with backpropagation and linear regression, techniques invented by researchers over the years. This result demonstrates the plausibility of automatically discovering more novel ML algorithms to address harder problems in the future.
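To make the idea concrete, here is a minimal sketch in the spirit of AutoML-Zero (illustrative only, not the paper's actual system): evolve tiny programs built from basic math operations, scoring each on a toy task and mutating tournament winners.

```python
# Minimal sketch of AutoML-Zero-style search (illustrative only, not the
# paper's actual system): evolve tiny programs built from basic math ops
# and keep the ones that best fit a toy regression task.
import random

OPS = {  # building blocks: basic mathematical operations
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}

def random_instr(n_regs):
    # An instruction writes op(reg_a, reg_b) into reg_out.
    op = random.choice(list(OPS))
    return (op, random.randrange(n_regs), random.randrange(n_regs),
            random.randrange(n_regs))

def run(program, x, n_regs=4):
    regs = [0.0] * n_regs
    regs[0] = x  # register 0 holds the input
    for op, a, b, out in program:
        regs[out] = OPS[op](regs[a], regs[b])
    return regs[1]  # register 1 holds the prediction

def loss(program, data):
    return sum((run(program, x) - y) ** 2 for x, y in data)

# Toy task: recover y = x*x + x from a few examples.
data = [(x, x * x + x) for x in [-2.0, -1.0, 0.5, 1.0, 2.0]]
pop = [[random_instr(4) for _ in range(5)] for _ in range(100)]
for _ in range(2000):
    # Regularized-evolution flavor: mutate a tournament winner,
    # discard the oldest individual in the population.
    parent = min(random.sample(pop, 10), key=lambda p: loss(p, data))
    child = list(parent)
    child[random.randrange(len(child))] = random_instr(4)
    pop.pop(0)
    pop.append(child)
best = min(pop, key=lambda p: loss(p, data))
print("best loss:", loss(best, data))
```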

The high energy consumption of artificial neural networks’ learning activities is one of the biggest hurdles for the broad use of Artificial Intelligence (AI), especially in mobile applications. One approach to solving this problem can be gleaned from knowledge about the human brain.

Although the brain has the computing power of a supercomputer, it needs only 20 watts, a millionth of the energy a supercomputer consumes.

One of the reasons for this is the efficient transfer of information between neurons in the brain. Neurons send short electrical impulses (spikes) to other neurons, but, to save energy, only as often as absolutely necessary.
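A leaky integrate-and-fire neuron, a standard textbook model (not the specific model from this research), illustrates why spiking is cheap: the membrane potential integrates input and the neuron fires only when a threshold is crossed.

```python
# Leaky integrate-and-fire neuron: a standard textbook spiking model,
# shown here only to illustrate why spikes are sparse and energy-cheap.
def simulate_lif(inputs, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    v, spikes = 0.0, []
    for t, current in enumerate(inputs):
        # Membrane potential leaks toward rest and integrates input current.
        v += dt * (-v / tau + current)
        if v >= v_thresh:          # fire only when the threshold is crossed
            spikes.append(t)
            v = v_reset            # reset after the spike
    return spikes

# A weak constant input produces only occasional spikes, not a
# continuous, energy-hungry activation.
print(simulate_lif([0.06] * 100))
```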

In February of last year, the San Francisco–based research lab OpenAI announced that its AI system could now write convincing passages of English. Feed the beginning of a sentence or paragraph into GPT-2, as it was called, and it could continue the thought for as long as an essay with almost human-like coherence.

Now, the lab is exploring what would happen if the same algorithm were instead fed part of an image. The results, which were given an honorable mention for best paper at this week’s International Conference on Machine Learning, open up a new avenue for image generation, ripe with opportunity and consequences.
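The underlying recipe is easy to sketch (with a hypothetical stand-in for the trained transformer): flatten the image into a sequence of pixels and repeatedly sample the next pixel, just as GPT-2 samples the next word.

```python
# Illustrative sketch of the autoregressive idea behind feeding a language
# model pixels (hypothetical stand-in model, not OpenAI's code): an image is
# flattened into a pixel sequence, and generation means repeatedly sampling
# the next pixel given everything produced so far.
import random

def next_pixel_distribution(prefix):
    # Stand-in for a trained transformer: returns P(next pixel | prefix).
    # Here: a uniform distribution over a tiny 4-value palette.
    return [0.25, 0.25, 0.25, 0.25]

def complete_image(top_half, total_pixels):
    pixels = list(top_half)
    while len(pixels) < total_pixels:
        probs = next_pixel_distribution(pixels)
        # Sample the next pixel value from the model's distribution.
        pixels.append(random.choices(range(len(probs)), weights=probs)[0])
    return pixels

# Feed the top half of an 8x8 "image" and let the model fill in the rest.
top = [0] * 32
print(complete_image(top, 64))
```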

How do you beat Tesla, Google, Uber, and the rest of the multi-trillion-dollar automotive industry, with its massive brands like Toyota, General Motors, and Volkswagen, to a full self-driving car? Just maybe, by finding a way to train your AI systems that is 100,000 times cheaper.

It’s called Deep Teaching.

Perhaps not surprisingly, it works by taking human effort out of the equation.

No industry will be spared.


The pharmaceutical business is perhaps the only industry on the planet where getting a product from idea to market takes about a decade, costs several billion dollars, and carries about a 90% chance of failure. It is very different from the IT business, where only the paranoid survive; it is a business where executives need to plan decades ahead and execute. So when the revolution in artificial intelligence, fueled by credible advances in deep learning, hit in 2013–2014, pharmaceutical industry executives got interested but did not immediately jump on the bandwagon. Many pharmaceutical companies started investing heavily in internal data science R&D, but without a coordinated strategy it looked more like a re-branding exercise, with many heads of data science, digital, and AI in one organization and often in one department. And while some pharmaceutical companies invested in AI startups, no sizable acquisitions have been made to date. Most discussions with AI startups started with “show me a clinical asset in Phase III where you identified a target and generated a molecule using AI” or “how are you different from the myriad of other AI startups?”, often coming from newly-minted heads of data science strategy who, in theory, need to know the market.

However, some pharmaceutical companies managed to demonstrate very impressive results in individual segments of drug discovery and development. For example, around 2018 AstraZeneca started publishing in generative chemistry, and by 2019 it had published several papers that were noticed by the community. Several other pharmaceutical companies demonstrated strong internal capabilities, and Eli Lilly built an AI-powered robotics lab in cooperation with a startup.

However, until now it was not possible to get a comprehensive overview and comparison of the major pharmaceutical companies that claim to be doing AI research and utilizing big data in preclinical and clinical development. On June 15th, an article titled “The upside of being a digital pharma player” was accepted and quietly went online in Drug Discovery Today, a reputable peer-reviewed industry journal. I was notified about the article by Google Scholar because it referenced several of our papers. I was about to discard it as just another industry perspective, but then I looked at the author list and saw a group of heavy-hitting academics, industry executives, and consultants: Alexander Schuhmacher from Reutlingen University, Alexander Gatto from Sony, Markus Hinder from Novartis, Michael Kuss from PricewaterhouseCoopers, and Oliver Gassmann from the University of St. Gallen.

The 400-kilogram wheeled system moves about the lab guided by LIDAR laser scanners and has an industrial robotic arm, made by the German firm Kuka, that it uses to carry out tasks like weighing out solids, dispensing liquids, removing air from vessels, and interacting with other pieces of equipment.

In a paper in Nature, the team describes how they put the device to work trying to find catalysts that speed up reactions that use light to split water into hydrogen and oxygen. To do this, the robot used a search algorithm to decide how to combine a variety of different chemicals and updated its plans based on the results of previous experiments.

The robot carried out 688 experiments over 8 days, working for 172 out of 192 hours, and at the end it had found a catalyst that produced hydrogen 6 times faster than the one it started out with.
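The closed loop can be sketched as follows (simple hill-climbing with a simulated measurement; the real system's optimizer is more sophisticated, but the propose-measure-update cycle is the same idea):

```python
# A generic sketch of the closed-loop search the robot runs (illustrative
# hill-climbing, not the paper's actual optimizer): propose a mixture,
# "measure" its yield, and bias future proposals toward what worked.
import random

CHEMICALS = ["scavenger", "dye", "surfactant", "buffer", "catalyst"]

def measure_yield(recipe):
    # Stand-in for a real experiment: a hidden scoring function the
    # search can only query, not inspect.
    target = {"scavenger": 2.0, "dye": 1.0, "surfactant": 0.5,
              "buffer": 1.5, "catalyst": 0.8}
    return -sum((recipe[c] - target[c]) ** 2 for c in CHEMICALS)

best = {c: random.uniform(0.0, 3.0) for c in CHEMICALS}
best_score = measure_yield(best)
for _ in range(688):  # one iteration per (simulated) experiment
    # Perturb the best recipe so far to propose the next experiment.
    candidate = {c: max(0.0, best[c] + random.gauss(0.0, 0.3))
                 for c in CHEMICALS}
    score = measure_yield(candidate)
    if score > best_score:          # update the plan from the result
        best, best_score = candidate, score
print(best_score)
```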

A new study offers a better understanding of the hidden network of underground electrical signals transmitted from plant to plant, a network previously shown to use mycorrhizal fungi in the soil as a sort of electrical circuit.

Through a combination of physical experiments and mathematical models based on differential equations, researchers explored how this electrical signalling works, though it’s not clear yet exactly what messages plants might want to transmit to each other.

The work builds on previous experiments by the same team looking at how this subterranean messaging service functions, using electrical stimulation as a way of testing how signals are carried even when plants aren’t in the same soil.
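For a flavor of what such differential-equation models look like, here is the classic FitzHugh-Nagumo system of excitable-membrane dynamics (a textbook model, not necessarily the equations this team used), integrated with simple Euler steps:

```python
# FitzHugh-Nagumo equations: a classic model of electrical excitability,
# shown as a plausible flavor of ODE model (not necessarily the exact
# system the researchers used).
def fitzhugh_nagumo(steps=5000, dt=0.01, I=0.5, a=0.7, b=0.8, tau=12.5):
    v, w = -1.0, -0.5   # membrane potential and recovery variable
    trace = []
    for _ in range(steps):
        dv = v - v**3 / 3 - w + I    # fast excitation
        dw = (v + a - b * w) / tau   # slow recovery
        v, w = v + dt * dv, w + dt * dw
        trace.append(v)
    return trace

trace = fitzhugh_nagumo()
print(max(trace), min(trace))  # the potential spikes and then recovers
```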

That in turn enables a massive software upgrade known as the “autonomy module,” a playbook of algorithms that tell the weapon how to respond to specific changes on the battlefield, whether that means the sighting of a new threat or the destruction of some of the collaborative weapons.

A new approach to designing motion plans for multiple robots grows “trees” in the search space to solve complex problems in a fraction of the time.

In one of the more memorable scenes from the 2002 blockbuster film Minority Report, Tom Cruise is forced to hide from a swarm of spider-like robots scouring a towering apartment complex. While most viewers are likely transfixed by the small, agile bloodhound replacements, a computer engineer might marvel instead at their elegant control system.

In a building several stories tall with numerous rooms, hundreds of obstacles and thousands of places to inspect, the several dozen robots move as one cohesive unit. They spread out in a search pattern to thoroughly check the entire building while simultaneously splitting tasks so as to not waste time doubling back on their own paths or re-checking places other robots have already visited.
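The “trees” here are motion-planning search trees. A minimal single-robot rapidly-exploring random tree (RRT), the classic building block that multi-robot planners compose, can be sketched as follows (illustrative, not the paper's algorithm):

```python
# Minimal RRT sketch: grow a tree through free space toward a goal.
# Illustrative building block only, not the paper's multi-robot planner.
import math, random

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def rrt(start, goal, is_free, step=0.5, iters=5000):
    parents = {start: None}          # tree stored as child -> parent
    for _ in range(iters):
        # Occasionally sample the goal itself to bias growth toward it.
        sample = goal if random.random() < 0.1 else \
                 (random.uniform(0, 10), random.uniform(0, 10))
        nearest = min(parents, key=lambda n: dist(n, sample))
        d = dist(nearest, sample)
        if d == 0.0:
            continue
        # Extend the tree one step from the nearest node toward the sample.
        new = (nearest[0] + step * (sample[0] - nearest[0]) / d,
               nearest[1] + step * (sample[1] - nearest[1]) / d)
        if not is_free(new):
            continue
        parents[new] = nearest
        if dist(new, goal) < step:   # close enough: reconstruct the path
            path, node = [goal], new
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
    return None

# Obstacle: a disc of radius 2 in the middle of a 10x10 room.
free = lambda p: dist(p, (5.0, 5.0)) > 2.0
print(rrt((1.0, 1.0), (9.0, 9.0), free))
```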