
The human brain operates on roughly 20 watts of power (a third of a 60-watt light bulb) in a space the size of, well, a human head. The biggest machine learning algorithms, by contrast, need racks of chips and something closer to a nuclear power plant's worth of electricity to learn.

That’s not to slander machine learning, but nature may have a tip or two to improve the situation. Luckily, there’s a branch of computer chip design heeding that call. By mimicking the brain, super-efficient neuromorphic chips aim to take AI off the cloud and put it in your pocket.

The latest such chip is smaller than a piece of confetti and has tens of thousands of artificial synapses made out of memristors—chip components that can mimic their natural counterparts in the brain.
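For a rough feel of how a memristive synapse can store a weight, here is a minimal, hypothetical model: a bounded conductance that is nudged up or down by programming pulses and read out through Ohm's law. The bounds and step size are arbitrary illustrative values, not parameters of the chip described above.

```python
# Minimal, illustrative model of a memristive synapse (not the actual chip design).
# Conductance G acts as the synaptic weight; voltage pulses nudge it up or down
# between physical bounds, and the read current follows Ohm's law: I = G * V_read.

G_MIN, G_MAX = 1e-6, 1e-3      # assumed conductance bounds (siemens)
DELTA_G = 5e-5                 # assumed conductance change per programming pulse

def program(g, pulse_polarity):
    """Apply one programming pulse: +1 potentiates, -1 depresses."""
    g = g + pulse_polarity * DELTA_G
    return min(max(g, G_MIN), G_MAX)   # clip to the device's physical range

def read_current(g, v_read=0.1):
    """Read the stored weight as a current (amps) at a small read voltage."""
    return g * v_read

g = 5e-4                        # initial conductance
for _ in range(3):              # three potentiating pulses strengthen the synapse
    g = program(g, +1)
print(f"weight (conductance): {g:.2e} S, read current: {read_current(g):.2e} A")
```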

Data governs our lives more than ever. But when it comes to disease and death, every data point is a person, someone who became sick and needed treatment.

Recent studies have revealed that people suffering from the same disease category may have different manifestations. As doctors and scientists better understand the reasons underlying this variability, they can develop novel preventive, diagnostic and therapeutic approaches and provide optimal, personalized care for every patient.

Accomplishing this goal often requires broad-scale collaboration between physicians, basic researchers, theoreticians, experimentalists, computational biologists, computer scientists and data scientists, engineers, statisticians, epidemiologists and others. They must work together to integrate scientific and medical knowledge, theory, analysis of medical big data and extensive experimental work.

This year, the Israel Precision Medicine Partnership (IPMP) selected 16 research projects to receive NIS 60 million in grants with the goal of advancing the implementation of personalized healthcare approaches – providing the right treatment to the right patient at the right time. All the research projects pull data from Israel’s unique and vast medical databases.



Rice University researchers have demonstrated methods for both designing innovative data-centric computing hardware and co-designing hardware with machine-learning algorithms that together can improve energy efficiency by as much as two orders of magnitude.

Advances in machine learning, the form of artificial intelligence behind self-driving cars and many other high-tech applications, have ushered in a new era of computing—the data-centric era—and are forcing engineers to rethink aspects of computing architecture that have gone mostly unchallenged for 75 years.

“The problem is that for large-scale deep neural networks, which are state-of-the-art for machine learning today, more than 90% of the electricity needed to run the entire system is consumed in moving data between the memory and the processor,” said Yingyan Lin, an assistant professor of electrical and computer engineering at Rice.
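A back-of-envelope sketch makes the data-movement argument concrete. The per-operation energies below are rough, often-cited estimates for older process nodes, and the operation counts are hypothetical; none of the numbers come from the Rice study itself.

```python
# Back-of-envelope sketch of why off-chip data movement dominates energy in
# deep-neural-network inference. The per-operation energies below are rough,
# often-cited estimates, not figures from the Rice study.

E_MAC_PJ  = 4.6      # ~energy of one 32-bit floating-point multiply-accumulate (pJ)
E_DRAM_PJ = 640.0    # ~energy of fetching one 32-bit word from off-chip DRAM (pJ)

macs          = 1e9   # hypothetical layer: 1 billion MACs
dram_accesses = 1e8   # hypothetical: 100 million operand fetches miss on-chip buffers

compute_energy = macs * E_MAC_PJ
memory_energy  = dram_accesses * E_DRAM_PJ
total          = compute_energy + memory_energy

print(f"compute: {compute_energy/1e6:.1f} uJ, DRAM: {memory_energy/1e6:.1f} uJ")
print(f"share spent moving data: {100 * memory_energy / total:.0f}%")
```

With these rough numbers the DRAM traffic accounts for over 90% of the energy, which is the regime the quote describes.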

Circa 2017


We have sequenced the genome of the endangered European eel using the MinION by Oxford Nanopore, and assembled these data using a novel algorithm specifically designed for large eukaryotic genomes. For this 860 Mbp genome, the entire computational process takes two days on a single CPU. The resulting genome assembly significantly improves on a previous draft based on short reads only, both in terms of contiguity (N50 1.2 Mbp) and structural quality. This combination of affordable nanopore sequencing and lightweight assembly promises to make high-quality genomic resources accessible for many non-model plants and animals.
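For readers unfamiliar with the contiguity metric quoted above, N50 is the contig length at which the cumulative sum of contigs, sorted from longest to shortest, first covers half the total assembly size. A small sketch with made-up contig lengths (not the eel assembly):

```python
# How an assembly contiguity metric like the N50 quoted above is computed:
# sort contig lengths in descending order and return the length at which the
# running sum first reaches half the total assembly size.

def n50(contig_lengths):
    lengths = sorted(contig_lengths, reverse=True)
    half_total = sum(lengths) / 2
    running = 0
    for length in lengths:
        running += length
        if running >= half_total:
            return length

toy_contigs = [1_200_000, 950_000, 700_000, 400_000, 150_000, 90_000]  # toy values
print(f"N50 = {n50(toy_contigs):,} bp")
```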

Researchers from the Moscow Institute of Physics and Technology, joined by a colleague from Argonne National Laboratory, U.S., have implemented an advanced quantum algorithm for measuring physical quantities using simple optical tools. Published in Scientific Reports, their study takes us a step closer to affordable linear optics-based sensors with high performance characteristics. Such tools are sought after in diverse research fields, from astronomy to biology.

Maximizing the sensitivity of measurement tools is crucial for any field of science and technology. Astronomers seek to detect remote cosmic phenomena, biologists need to discern exceedingly tiny organic structures, and engineers have to measure the positions and velocities of objects, to name a few examples.

Until recently, no measurement could achieve precision beyond the so-called shot-noise limit, which stems from the statistical fluctuations inherent in classical observations. Quantum technology has provided a way around this, boosting precision to the fundamental Heisenberg limit, which follows from the basic principles of quantum mechanics. The LIGO experiment, which detected gravitational waves for the first time in 2016, shows it is possible to achieve Heisenberg-limited sensitivity by combining complex optical interference schemes and quantum techniques.
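The gap between the two limits can be seen from their textbook scaling with the number of independent probes N (for example, photons): phase uncertainty falls as 1/sqrt(N) at the shot-noise limit and as 1/N at the Heisenberg limit. A quick numerical comparison, purely illustrative:

```python
# Textbook scaling behind the two limits mentioned above: for N independent
# probes (e.g., photons), phase uncertainty at the shot-noise limit falls as
# 1/sqrt(N), while the Heisenberg limit falls as 1/N.

import math

for n in (10, 100, 10_000, 1_000_000):
    shot_noise = 1 / math.sqrt(n)
    heisenberg = 1 / n
    print(f"N = {n:>9,}   shot-noise ~ {shot_noise:.1e}   "
          f"Heisenberg ~ {heisenberg:.1e}   gain ~ {shot_noise / heisenberg:.0f}x")
```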

In the consumer electronics industry, quantum dots are used to dramatically improve color reproduction in TV displays. That’s because LCD TV displays, the kind in most of our living rooms, require a backlight. This light is typically made up of white, or white-ish LEDs. The LCD filters the white light into red, green, and blue pixels; their combinations create the colors that appear on the screen.

Before quantum dots, filtering meant that much of the light didn’t make it to the screen. Putting a layer of quantum dots between the LEDs and the LCD, however, changes that equation. QD TVs use blue LEDs as the light source, then take advantage of the quantum effect to shift some of that light to tightly constrained red and green wavelengths. Because only this purified light reaches the filters—instead of the full spectrum that makes up white light—far less is blocked and wasted.
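The "quantum effect" at work is quantum confinement: the smaller the dot, the higher its emission energy and the shorter the emitted wavelength. A crude particle-in-a-sphere sketch with rough CdSe-like parameters (purely illustrative, and ignoring the Coulomb correction) shows how size alone shifts the color:

```python
# Crude sketch of the quantum-confinement effect behind QD color tuning: in a
# particle-in-a-sphere (Brus-type) approximation, shrinking the dot radius R
# raises the emission energy above the bulk band gap, so size selects color.
# The material parameters are rough CdSe-like values used only for illustration,
# not a display manufacturer's recipe.

H  = 6.626e-34        # Planck constant (J*s)
M0 = 9.109e-31        # electron rest mass (kg)
EV = 1.602e-19        # joules per electron-volt

E_GAP_EV = 1.74       # approximate bulk band gap of CdSe (eV)
M_E, M_H = 0.13 * M0, 0.45 * M0   # approximate electron/hole effective masses

def emission_wavelength_nm(radius_nm):
    r = radius_nm * 1e-9
    confinement_j = (H**2 / (8 * r**2)) * (1 / M_E + 1 / M_H)
    energy_ev = E_GAP_EV + confinement_j / EV
    return 1240.0 / energy_ev      # convert E (eV) to wavelength (nm)

for radius in (2.5, 4.0):          # smaller dot -> shorter (greener) wavelength
    print(f"R = {radius} nm  ->  ~{emission_wavelength_nm(radius):.0f} nm emission")
```

In this toy model the smaller dot emits green and the larger one emits red, which is the size-to-color tuning the display application relies on.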

It turns out that this same approach to making your TV picture better can make plants grow faster, because plants, like LCD filters, are tuned to certain colors of light.

The COVID-19 pandemic will have a profound impact on robotics, as more companies look to automation as a way forward. While wide-scale automation had long seemed like an inevitability, the pandemic is set to accelerate the push as corporations look for processes that remove the human element from the equation.

Of course, Locus Robotics hasn’t had too much of an issue raising money previously. The Massachusetts-based startup, which raised $26 million back in April of last year, is adding a $40 million Series D to its funds. That brings the full amount to north of $105 million. This latest round, led by Zebra Technologies, comes as the company looks to expand operations with the launch of a European HQ.

“The new funding allows Locus to accelerate expansion into global markets,” CEO Rick Faulk said in a release, “enabling us to strengthen our support of retail, industrial, healthcare, and 3PL businesses around the world as they navigate through the COVID-19 pandemic, ensuring that they come out stronger on the other side.”

Is it possible some instances of artificial intelligence are not as intelligent as we thought?

Call it artificial artificial intelligence.

A team of computer science graduate students reports that a closer examination of several dozen information-retrieval algorithms hailed as milestones in artificial intelligence research found they were nowhere near as revolutionary as claimed. In fact, the AI in those algorithms often amounted to minor tweaks of previously established routines.

Researchers in Italy have combined convolutional neural networks (CNNs) with deep learning, a discipline within artificial intelligence, to achieve a system of market forecasting with the potential for greater gains and fewer losses than previous attempts to use AI methods to manage stock portfolios. The team, led by Prof. Silvio Barra at the University of Cagliari, published their findings in the IEEE/CAA Journal of Automatica Sinica.

The University of Cagliari-based team set out to create an AI-managed "buy and hold" (B&H) strategy: a system that decides each trading day among three possible actions, a long action (buying a stock and selling it before the market closes), a short action (selling a stock, then buying it back before the market closes), or a hold (deciding not to invest in a stock that day). At the heart of the proposed system is an automated cycle of analyzing layered images generated from current and past market data. Older B&H systems based their decisions on machine learning approaches that lean heavily on predictions from past performance.
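As a rough illustration of the kind of pipeline described, not the authors' actual architecture, one could stack current and past market data as image channels and let a small CNN score the three actions. The layer sizes, image shape, and use of PyTorch below are assumptions for the sketch:

```python
# Minimal sketch of the described idea: stack current and past market data as
# image "layers" (channels), then let a small CNN score three actions:
# long, short, or hold. Shapes and layer sizes are arbitrary illustrative choices.

import torch
import torch.nn as nn

ACTIONS = ["long", "short", "hold"]

class MarketImageCNN(nn.Module):
    def __init__(self, n_layers=4, image_size=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_layers, 16, kernel_size=3, padding=1),  # n_layers = stacked time windows
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * (image_size // 4) ** 2, len(ACTIONS))

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# One fake batch: 8 "images", each with 4 layers of 32x32 past/present market data.
model = MarketImageCNN()
scores = model(torch.randn(8, 4, 32, 32))
print([ACTIONS[i] for i in scores.argmax(dim=1).tolist()])
```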

By letting their proposed network analyze current data layered over past data, they are taking market forecasting a step further, allowing for a type of learning that more closely mirrors the intuition of a seasoned investor rather than a robot. Their proposed network can adjust its buy/sell thresholds based on what is happening both in the present moment and the past. Taking into account present-day factors increases the yield over both random guessing and trading algorithms not capable of real-time learning.