
Advanced uses of time in image rendering and reconstruction have been the focus of much scientific research in recent years. The motivation comes from the equivalence between space and time given by the finite speed of light c. This equivalence leads to correlations between the time evolution of electromagnetic fields at different points in space. Applications exploiting such correlations, known as time-of-flight (ToF) [1] and light-in-flight (LiF) [2] cameras, operate in regimes ranging from radio [3,4] to optical [5] frequencies. Time-of-flight imaging reconstructs a scene by measuring delayed stimulus responses via continuous waves, impulses, or pseudo-random binary sequence (PRBS) codes [1]. Light-in-flight imaging, also known as transient imaging [6], explores light transport and detection [2,7]. The combination of ToF and LiF has recently brought higher accuracy and detail to the reconstruction process, especially for non-line-of-sight images, by including higher-order scattering and physical processes such as Rayleigh–Sommerfeld diffraction [8] in the modeling. However, these methods require experimental characterization of the scene followed by large computational overheads, which limits optical-regime imaging to low frame rates. In the radio-frequency (RF) regime, 3D images at frame rates of 30 Hz have been produced with an array of 256 wide-band transceivers [3]. Microwave imaging has the additional capability of sensing through optically opaque media such as walls. Nonetheless, synthetic aperture radar reconstruction algorithms such as the one proposed in ref. [3] required each transceiver in the array to operate individually, leaving room to improve image frame rates through continuous transmit-receive captures. Beamforming-based constructions face similar challenges [9], where a narrow, focused beam scans a scene using an array of antennas and frequency-modulated continuous-wave (FMCW) techniques.
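To make the PRBS idea above concrete, the sketch below simulates single-channel ranging: the received echo is cross-correlated with the transmitted code and the round-trip delay is read off the correlation peak. The chip rate, delay, noise level, and the random stand-in code are illustrative assumptions, not parameters of any of the referenced systems.

```python
import numpy as np

# Minimal illustration of PRBS-based time-of-flight ranging: correlate the
# received echo with the transmitted code and read the round-trip delay off
# the correlation peak. All parameters here are invented for the example.

rng = np.random.default_rng(0)

chip_rate = 1e9                                 # 1 Gchip/s -> 1 ns resolution
n_chips = 1023
code = rng.integers(0, 2, n_chips) * 2 - 1      # random +/-1 code (stand-in for a true PRBS)

true_delay_chips = 37                           # simulated round-trip delay, in chips
echo = np.roll(code, true_delay_chips) * 0.3    # attenuated, delayed copy
echo += 0.5 * rng.standard_normal(n_chips)      # receiver noise

# Circular cross-correlation via FFT; the peak index estimates the delay.
corr = np.fft.ifft(np.fft.fft(echo) * np.conj(np.fft.fft(code))).real
est_delay_chips = int(np.argmax(corr))

c = 3e8                                         # speed of light, m/s
distance = 0.5 * c * est_delay_chips / chip_rate
print(f"estimated delay: {est_delay_chips} chips, range ≈ {distance:.2f} m")
```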

In this article, we develop an inverse light transport model [10] for microwave signals. The model uses a spatiotemporal mask generated by multiple sources, each emitting different PRBS codes, and a single detector, all operating in continuous synchronous transmit-receive mode. This model allows image reconstructions with capture times of the order of microseconds and no prior scene knowledge. For first-order reflections, the algorithm reduces to a single dot product between the reconstruction matrix and captured signal, and can be executed in a few milliseconds. We demonstrate this algorithm through simulations and measurements performed using realistic scenes in a laboratory setting. We then use the second-order terms of the light transport model to reconstruct scene details not captured by the first-order terms.

We start by estimating the information capacity of the scene and develop the light transport equation for the transient imaging model with arguments borrowed from basic information and electromagnetic field theory. Next, we describe the image reconstruction algorithm as a series of approximations corresponding to multiple scatterings of the spatiotemporal illumination matrix. Specifically, we show that in the first-order approximation, the value of each pixel is the dot product between the captured time series and a unique time signature generated by the spatiotemporal electromagnetic field mask. Next, we show how the second-order approximation generates hidden features not accessible in the first-order image. Finally, we apply the reconstruction algorithm to simulated and experimental data and discuss the performance, strengths, and limitations of this technique.
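As a rough illustration of the first-order step described above (each pixel value as the dot product between the captured time series and that pixel's time signature), the following sketch builds simplified signatures from delayed sums of several PRBS source codes and reconstructs a sparse scene with a single matrix-vector product. The geometry, delays, and noise model are invented for the example and are not the authors' exact forward model.

```python
import numpy as np

# Sketch of a first-order reconstruction: pixel value = dot product of the
# captured time series with that pixel's unique time signature. Signatures
# are simulated as delayed superpositions of the source PRBS codes; a real
# model would derive them from the source-pixel-detector geometry.

rng = np.random.default_rng(1)

n_sources, n_pixels, n_samples = 4, 64, 4096
codes = rng.integers(0, 2, (n_sources, n_samples)) * 2 - 1   # one code per source

# Hypothetical per-(source, pixel) propagation delays, in samples.
delays = rng.integers(0, 200, (n_sources, n_pixels))

signatures = np.zeros((n_pixels, n_samples))
for s in range(n_sources):
    for p in range(n_pixels):
        signatures[p] += np.roll(codes[s], delays[s, p])

# Simulate a capture from a sparse scene (a few reflective pixels).
true_reflectivity = np.zeros(n_pixels)
true_reflectivity[[5, 20, 41]] = [1.0, 0.6, 0.8]
capture = true_reflectivity @ signatures
capture += 0.1 * rng.standard_normal(n_samples)              # detector noise

# First-order image: one dot product per pixel (a single matrix-vector product).
image = signatures @ capture / n_samples
print("brightest pixels:", np.argsort(image)[-3:][::-1])
```

With nearly orthogonal codes, the three reflective pixels dominate the reconstructed image; cross-talk between pixels shows up only as a small background.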

And they say computers can’t create art.


In 1642, famous Dutch painter Rembrandt van Rijn completed a large painting called Militia Company of District II under the Command of Captain Frans Banninck Cocq — today, the painting is commonly referred to as The Night Watch. It was the height of the Dutch Golden Age, and The Night Watch brilliantly showcased that.

The painting measured 363 cm × 437 cm (11.91 ft × 14.34 ft) — so big that the characters in it were almost life-sized, but that’s only the start of what makes it so special. Rembrandt made dramatic use of light and shadow and also created the perception of motion in what would normally be a stationary military group portrait. Unfortunately, though, the painting was trimmed in 1715 to fit between two doors at Amsterdam City Hall.

For over 300 years, the painting has been missing 60 cm (2 ft) from the left, 22 cm from the top, 12 cm from the bottom and 7 cm from the right. Now, computer software has restored the missing parts.

A team of researchers working at Johannes Kepler University has developed an autonomous drone with a new type of technology to improve search-and-rescue efforts. In their paper published in the journal Science Robotics, the group describes their drone modifications. Andreas Birk with Jacobs University Bremen has published a Focus piece in the same journal issue outlining the work by the team in Austria.

Finding people lost (or hiding) in the forest is difficult because of the tree cover. People in planes and helicopters have difficulty seeing through the canopy to the ground below, where people might be walking or even lying down. The same problem exists for thermal applications—heat sensors cannot pick up readings adequately through the canopy. Efforts have been made to add drones to search-and-rescue operations, but they suffer from the same problems because they are remotely controlled by pilots using them to search the ground below. In this new effort, the researchers have added new technology that helps both to see through the tree canopy and to highlight people who might be under it.

The new technology is based on what the researchers describe as an airborne optical sectioning (AOS) algorithm—it uses the power of a computer to defocus occluding objects such as the tops of trees. The second part of the new device uses thermal imaging to highlight the heat emitted from a warm body. A machine-learning application then determines whether the heat signals are those of humans, animals or other sources. The new hardware was then affixed to a standard autonomous drone. The computer in the drone uses both locational positioning to determine where to search and cues from the AOS and thermal sensors. If a possible match is made, the drone automatically moves closer to a target to get a better look. If its sensors indicate a match, it signals the research team, giving them the coordinates. In testing their newly outfitted drone over 17 field experiments, the researchers found it was able to locate 38 of 42 people hidden below tree canopies.
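A toy numerical sketch of the airborne optical sectioning principle may help: thermal frames captured from slightly different viewpoints are registered to the ground plane and averaged, so occluders above the ground shift between frames and blur away, while a warm body on the ground stays aligned and stands out. The scene, temperatures, and parallax model below are synthetic assumptions, not the team's actual pipeline.

```python
import numpy as np

# Toy airborne-optical-sectioning (AOS) illustration: average ground-registered
# thermal frames from many viewpoints; canopy pixels move between frames and
# wash out, while a person on the ground stays aligned. Values are synthetic.

rng = np.random.default_rng(2)
h, w, n_frames = 64, 64, 31

ground = np.full((h, w), 15.0)            # cool ground, ~15 "degrees"
ground[30:34, 30:34] = 36.0               # warm body on the ground

frames = []
for i in range(n_frames):
    parallax = i - n_frames // 2          # occluder shift for this viewpoint
    occluders = rng.random((h, w)) < 0.4  # random canopy mask
    frame = ground.copy()
    frame[np.roll(occluders, parallax, axis=1)] = 20.0   # canopy temperature
    frames.append(frame)

integral = np.mean(frames, axis=0)        # synthetic-aperture integral image

hot = np.unravel_index(np.argmax(integral), integral.shape)
print("hottest spot in integral image:", hot)   # lands inside the warm patch
```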

Without GPS, autonomous systems get lost easily. Now a new algorithm developed at Caltech allows autonomous systems to recognize where they are simply by looking at the terrain around them—and for the first time, the technology works regardless of seasonal changes to that terrain.

Details about the process were published on June 23 in the journal Science Robotics.

The general process, known as visual terrain-relative navigation (VTRN), was first developed in the 1960s. By comparing nearby terrain to high-resolution satellite images, autonomous systems can locate themselves.
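A minimal sketch of the matching step behind VTRN, under the simplifying assumption that localization reduces to sliding the vehicle's downward-looking view over a georeferenced satellite tile and picking the offset with the highest normalized correlation; the seasonal-invariance part of the new Caltech work is not modeled here.

```python
import numpy as np

# Brute-force template matching of a local terrain view against a satellite
# tile using zero-normalized cross-correlation (ZNCC). Map and view are
# random stand-ins; real imagery would also vary with season and lighting.

rng = np.random.default_rng(3)

sat_map = rng.standard_normal((200, 200))       # satellite reference tile
true_row, true_col = 87, 142
view = sat_map[true_row:true_row + 32, true_col:true_col + 32].copy()
view += 0.3 * rng.standard_normal((32, 32))     # sensor noise / appearance change

def zncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

best, best_pos = -1.0, None
for r in range(sat_map.shape[0] - 32):
    for c in range(sat_map.shape[1] - 32):
        score = zncc(view, sat_map[r:r + 32, c:c + 32])
        if score > best:
            best, best_pos = score, (r, c)

print("estimated position:", best_pos, "true:", (true_row, true_col))
```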

If you walk down the street shouting out the names of every object you see — garbage truck! bicyclist! sycamore tree! — most people would not conclude you are smart. But if you make your way through an obstacle course, showing them how to navigate a series of challenges to get to the end unscathed, they would.

Most machine learning algorithms are shouting names in the street. They perform perceptive tasks that a person can do in under a second. But another kind of AI — deep reinforcement learning — is strategic. It learns how to take a series of actions in order to reach a goal. That’s powerful and smart — and it’s going to change a lot of industries.
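To make the contrast concrete, here is a minimal tabular Q-learning sketch: instead of a one-shot prediction, the agent learns a sequence of moves along a short corridor that reaches a goal state. The environment, reward values, and hyperparameters are illustrative only.

```python
import numpy as np

# Tabular Q-learning on a 1-D corridor of 10 cells. The agent learns a
# sequence of left/right moves that reaches the goal at the far end,
# rather than making a single perceptive prediction.

rng = np.random.default_rng(4)
n_states, n_actions = 10, 2          # actions: 0 = left, 1 = right
goal = n_states - 1
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1

for episode in range(500):
    s = 0
    while s != goal:
        # epsilon-greedy action selection
        a = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[s]))
        s_next = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
        r = 1.0 if s_next == goal else -0.01        # small step cost, reward at goal
        # Q-learning update
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

print("learned policy for non-goal states (1 = move right):", np.argmax(Q[:goal], axis=1))
```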

Two industries on the cusp of AI transformations are manufacturing and supply chain. The ways we make and ship stuff are heavily dependent on groups of machines working together, and the efficiency and resiliency of those machines are the foundation of our economy and society. Without them, we can’t buy the basics we need to live and work.

Summary: Combining deep learning algorithms with robotic engineering, researchers have developed a new robot able to combine vision and touch.

Source: EBRAINS / Human Brain Project.

On the new EBRAINS research infrastructure, scientists of the Human Brain Project have connected brain-inspired deep learning to biomimetic robots.

Chipmaker patches nine high-severity bugs in its Jetson SoC framework tied to the way it handles low-level cryptographic algorithms.

Flaws impacting millions of internet of things (IoT) devices running NVIDIA’s Jetson chips open the door for a variety of hacks, including denial-of-service (DoS) attacks or the siphoning of data.

NVIDIA released patches addressing nine high-severity vulnerabilities, along with eight additional bugs of lower severity. The patches cover a wide swath of NVIDIA's chipsets typically used for embedded computing systems, machine-learning applications and autonomous devices such as robots and drones. Impacted products include the Jetson chipset series: AGX Xavier, Xavier NX/TX1, Jetson TX2 (including Jetson TX2 NX), and Jetson Nano devices (including Jetson Nano 2GB) found in the NVIDIA JetPack software development kit. The patches were delivered as part of NVIDIA's June security bulletin, released Friday.

Last week, I wrote an analysis of Reward Is Enough, a paper by scientists at DeepMind. As the title suggests, the researchers hypothesize that the right reward is all you need to create the abilities associated with intelligence, such as perception, motor functions, and language.

This is in contrast with AI systems that try to replicate specific functions of natural intelligence such as classifying images, navigating physical environments, or completing sentences.

The researchers go so far as to suggest that with a well-defined reward, a complex environment, and the right reinforcement learning algorithm, we will be able to reach artificial general intelligence, the kind of problem-solving and cognitive abilities found in humans and, to a lesser degree, in animals.

As the number of qubits in early quantum computers increases, their creators are opening up access via the cloud. IBM has its IBM Q network, for instance, while Microsoft has integrated quantum devices into its Azure cloud-computing platform. By combining these platforms with quantum-inspired optimisation algorithms and variational quantum algorithms, researchers could start to see some early benefits of quantum computing in the fields of chemistry and biology within the next few years. In time, Google’s Sergio Boixo hopes that quantum computers will be able to tackle some of the existential crises facing our planet. “Climate change is an energy problem – energy is a physical, chemical process,” he says.

“Maybe if we build the tools that allow the simulations to be done, we can construct a new industrial revolution that will hopefully be a more efficient use of energy.” But eventually, the area where quantum computers might have the biggest impact is in quantum physics itself.
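As a schematic of the variational quantum algorithms mentioned above, the toy example below tunes a single parameter of a simulated one-qubit circuit with a classical optimizer, using the parameter-shift rule to estimate the gradient of a measured expectation value. It is a plain numpy simulation under simplifying assumptions, not code for IBM Q, Azure Quantum, or any real device.

```python
import numpy as np

# Variational loop in miniature: the "circuit" is Ry(theta)|0>, the measured
# "energy" is <Z>, and a classical gradient-descent optimizer tunes theta
# using the parameter-shift rule. The minimum <Z> = -1 occurs at theta = pi.

Z = np.array([[1, 0], [0, -1]], dtype=float)

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def energy(theta):
    psi = ry(theta) @ np.array([1.0, 0.0])     # prepare Ry(theta)|0>
    return float(psi @ Z @ psi)                # expectation value <psi|Z|psi>

theta, lr = 0.1, 0.4
for step in range(100):
    # Parameter-shift rule: d<Z>/dtheta = (E(theta + pi/2) - E(theta - pi/2)) / 2
    grad = 0.5 * (energy(theta + np.pi / 2) - energy(theta - np.pi / 2))
    theta -= lr * grad

print(f"theta ≈ {theta:.3f} (pi ≈ {np.pi:.3f}), energy ≈ {energy(theta):.4f}")
```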

The Large Hadron Collider, the world’s largest particle accelerator, collects about 300 gigabytes of data a second as it smashes protons together to try and unlock the fundamental secrets of the universe. To analyse it requires huge amounts of computing power – right now it’s split across 170 data centres in 42 countries. Some scientists at CERN – the European Organisation for Nuclear Research – hope quantum computers could help speed up the analysis of data by enabling them to run more accurate simulations before conducting real-world tests. They’re starting to develop algorithms and models that will help them harness the power of quantum computers when the devices get good enough to help.

A recent string of problems suggests facial recognition’s reliability issues are hurting people in a moment of need. Motherboard reports that there are ongoing complaints about the ID.me facial recognition system at least 21 states use to verify people seeking unemployment benefits. People have gone weeks or months without benefits when the Face Match system doesn’t verify their identities, and have sometimes had no luck getting help through a video chat system meant to solve these problems.

ID.me chief Blake Hall blamed the problems on users rather than the technology. Face Match algorithms have “99.9% efficacy,” he said, and there was “no relationship” between skin tone and recognition failures. Hall instead suggested that people weren’t sharing selfies properly or otherwise weren’t following instructions.

Motherboard noted, though, that at least some people get just three attempts to pass the facial recognition check. The outlet also pointed out that the company’s claims of national unemployment fraud costs have ballooned rapidly in just the past few months, from a reported $100 billion to $400 billion. While Hall attributed that to expanding “data points,” he didn’t say just how his firm calculated the damage. It’s not clear just what the real fraud threat is, in other words.