
In principle, any pitch-shifting technique may be employed, provided that the frequency-dependent parameters analysed from the ultrasonic sound-field are mapped correctly to the frequency scale of the pitch-shifted signal. Since the spatial parameters are averaged over frequency in the currently employed configuration of the device, this frequency mapping is not required; instead, each time frame of the pitch-shifted signal is spatialised according to a frequency-averaged direction. The pitch-shifting technique used for the application targeted in this article should be capable of large pitch-shifting ratios while also operating within an acceptable latency. Based on these requirements, the phase-vocoder approach15,16 was selected for the real-time rendering in this study, due to its low processing latency and acceptable signal quality at large pitch-shifting ratios. However, the application of other pitch-shifting methods is also demonstrated with recordings processed off-line, as described in the Results section.
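To illustrate the selected approach, below is a minimal off-line sketch of phase-vocoder pitch-shifting using the open-source librosa library. It is not the device's real-time implementation, and the shift ratio and STFT settings are illustrative only (for example, a 192 kHz capture shifted by 1/8 would place a 40 kHz source at 5 kHz).

```python
# Minimal off-line phase-vocoder pitch shift (sketch, not the device's
# real-time code). Requires: pip install librosa
import librosa

def pitch_shift_down(y, sr, ratio=1 / 8, n_fft=1024, hop=256):
    """Scale every frequency in `y` by `ratio` (< 1) without changing duration."""
    # Step 1: phase-vocoder time stretch. rate = 1/ratio > 1 compresses the
    # signal to `ratio` of its length while leaving the pitch untouched.
    D = librosa.stft(y, n_fft=n_fft, hop_length=hop)
    D_fast = librosa.phase_vocoder(D, rate=1.0 / ratio, hop_length=hop)
    y_fast = librosa.istft(D_fast, hop_length=hop)
    # Step 2: resample back to the original duration; this trades the
    # duration change for a frequency scaling of `ratio`.
    return librosa.resample(y_fast, orig_sr=sr * ratio, target_sr=sr)
```

The phase vocoder changes only duration; the subsequent resampling converts that duration change into the desired frequency scaling while restoring the original length.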

In summary, the proposed processing approach permits frequency-modified signals to be synthesised with plausible binaural and monaural cues, which may subsequently be delivered to the listener to enable the localisation of ultrasonic sound sources. Furthermore, since the super-hearing device turns with the head of the listener, and the processing latency of the device was constrained to 44 ms, the dynamic cues should also be preserved. Note that the effect of processing latency has been studied previously in the context of head-tracked binaural reproduction systems, where it has been found that a system latency above 50–100 ms can impair spatial perception17,18. A trade-off must therefore be made between attaining high spatial-image and timbral quality (which improve with longer temporal windows and greater overlap) and maintaining low processing latency (which relies on shorter windows and reduced overlap). The current processing latency was chosen so that both the spatial image and the audio quality after pitch-shifting, as judged by informal listening, remain reasonably high.
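As a rough illustration of this trade-off, the back-of-envelope calculation below assumes the algorithmic delay of an STFT-based processor is approximately one analysis window plus one hop of buffering; the 48 kHz sample rate and the window sizes are illustrative, not the device's actual settings.

```python
# Approximate algorithmic delay of an STFT processor: one analysis
# window plus one hop of input buffering (a simplifying assumption).
sr = 48_000  # Hz (illustrative)
for n_fft in (512, 1024, 2048):
    hop = n_fft // 4                        # 75% overlap
    latency_ms = (n_fft + hop) / sr * 1e3
    print(f"window = {n_fft:4d} samples -> ~{latency_ms:4.1f} ms")
```

Doubling the window length roughly doubles the delay, so a latency budget such as the 44 ms quoted above effectively caps the usable window length and degree of overlap.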

One additional advantage of the proposed approach is that only a single signal is pitch-shifted, which is inherently more computationally efficient than pitch-shifting multiple signals, as would be required by the three alternative suggestions described in the Introduction section. Furthermore, imprinting the spatial information onto the signal only after pitch-shifting ensures that the directional cues reproduced for the listener are not distorted by the pitch-shifting operation. The requirements for the size of the microphone array are also less stringent than those of an Ambisonics-based system. In this work, an array with a diameter of 11 mm was employed, which has a spatial aliasing frequency of approximately 17 kHz; this prohibits the use of Ambisonics for the ultrasonic frequencies with the present array. By contrast, the employed spatial parameter analysis can be conducted above the spatial aliasing frequency, provided that the geometry of the array is known and that the sensors are arranged uniformly on the sphere.
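For intuition, a common rule of thumb places the spatial aliasing frequency near f_alias ≈ c / (2d), where d is the spacing between adjacent sensors. The ~1 cm spacing assumed below for an 11 mm-diameter array is an illustrative guess rather than the actual sensor layout, but it roughly reproduces the 17 kHz figure quoted above.

```python
# Rule-of-thumb spatial aliasing estimate: f_alias ~ c / (2 d).
c = 343.0   # speed of sound in air, m/s
d = 0.010   # assumed adjacent-sensor spacing, m (illustrative)
print(f"f_alias ~ {c / (2 * d) / 1e3:.1f} kHz")  # ~17.2 kHz
```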

Chipmaker Nvidia is acquiring high-definition mapping startup DeepMap, the companies announced. DeepMap's mapping IP, Nvidia said, will bolster its autonomous vehicle technology platform, Nvidia Drive.

“The acquisition is an endorsement of DeepMap’s unique vision, technology and people,” said Ali Kani, vice president and general manager of Automotive at Nvidia, in a statement. “DeepMap is expected to extend our mapping products, help us scale worldwide map operations and expand our full self-driving expertise.”

One of the biggest challenges to full autonomy in a passenger vehicle is maintaining precise localization and up-to-date mapping information that reflects current road conditions. By integrating DeepMap's tech, Nvidia's autonomous stack should gain greater precision, enhancing the vehicle's ability to locate itself on the road.

National Geographic announced Tuesday that it is officially recognizing the body of water surrounding the Antarctic as the Earth’s fifth ocean: the Southern Ocean.

The change marks the first time in over a century that the organization has redrawn the world’s oceanic maps, which have historically only included four: the Atlantic, Pacific, Indian and Arctic Oceans.

“The Southern Ocean has long been recognized by scientists, but because there was never agreement internationally, we never officially recognized it,” National Geographic Society geographer Alex Tait told the magazine.

The researchers started with a sample taken from the temporal lobe of a human cerebral cortex, measuring just 1 mm³. This was stained for visual clarity, coated in resin to preserve it, and then cut into about 5,300 slices, each about 30 nanometers (nm) thick. These were then imaged using a scanning electron microscope with a resolution down to 4 nm. That created 225 million two-dimensional images, which were then stitched back together into a single 3D volume.

Machine learning algorithms scanned the sample to identify the different cells and structures within. After a few passes by different automated systems, human eyes “proofread” some of the cells to ensure the algorithms were correctly identifying them.

The end result, which Google calls the H01 dataset, is one of the most comprehensive maps of the human brain ever compiled. It contains 50,000 cells and 130 million synapses, as well as smaller structures within the cells such as axons, dendrites, myelin and cilia. But perhaps the most stunning statistic is that the whole thing takes up 1.4 petabytes of data – that's more than a million gigabytes.
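Those figures hang together. A rough back-of-envelope check, assuming about one byte per voxel at the 4 nm in-plane resolution and 30 nm section thickness quoted above, recovers both the tissue volume and the "million gigabytes" claim:

```python
# Consistency check on the quoted figures (assumes ~1 byte per voxel).
bytes_total = 1.4e15                # 1.4 petabytes
voxel_m3 = 4e-9 * 4e-9 * 30e-9      # one 4 nm x 4 nm x 30 nm voxel, m^3
print(f"~{bytes_total * voxel_m3 * 1e9:.2f} mm^3 of tissue")  # ~0.67 mm^3
print(f"~{bytes_total / 1e9:,.0f} GB")                        # ~1,400,000 GB
```

That is on the order of the "roughly one cubic millimeter" of tissue described below, and comfortably over a million gigabytes.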

In January 2020 we released the fly “hemibrain” connectome — an online database providing the morphological structure and synaptic connectivity of roughly half of the brain of a fruit fly (Drosophila melanogaster). This database and its supporting visualization have reframed the way that neural circuits are studied and understood in the fly brain. While the fruit fly brain is small enough to attain a relatively complete map using modern mapping techniques, the insights gained are, at best, only partially informative for understanding the most interesting object in neuroscience — the human brain.

Today, in collaboration with the Lichtman Laboratory at Harvard University, we are releasing the “H01” dataset, a 1.4 petabyte rendering of a small sample of human brain tissue, along with a companion paper, “A connectomic study of a petascale fragment of human cerebral cortex.” The H01 sample was imaged at 4 nm resolution by serial section electron microscopy, reconstructed and annotated by automated computational techniques, and analyzed for preliminary insights into the structure of the human cortex. The dataset comprises imaging data that covers roughly one cubic millimeter of brain tissue, and includes tens of thousands of reconstructed neurons, millions of neuron fragments, 130 million annotated synapses, 104 proofread cells, and many additional subcellular annotations and structures — all easily accessible with the Neuroglancer browser interface.
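As a sketch of what "easily accessible" can look like in practice, the snippet below pulls a small cutout of the imagery with the open-source cloud-volume Python library. The bucket path and coordinates are placeholders, not confirmed release details; consult the H01 release page for the actual "precomputed" URL of the layer of interest.

```python
# Sketch: fetch a small EM cutout with cloud-volume.
# Requires: pip install cloud-volume
from cloudvolume import CloudVolume

vol = CloudVolume(
    "precomputed://gs://h01-release/data/20210601/4nm_raw",  # assumed path
    mip=0,            # full 4 nm resolution
    use_https=True,   # anonymous read access over HTTPS
    progress=False,
)
cutout = vol[10000:10256, 10000:10256, 2000:2016]  # arbitrary 256x256x16 block
print(cutout.shape)
```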

Mapping how humans move will help in future pandemics.


How people move around cities follows a predictable and universal pattern, scientists say, one that will be crucial not only for urban planning but also for controlling pandemics.

By analysing mobile-phone tracking data from across four continents, the team confirmed that people visit places more often when they don’t have to travel far to get there.

“We might shop every day at a bakery a few hundred metres away, but we’ll only go once a month to the fancy boutique miles away from our neighborhood,” says project leader Carlo Ratti, from the Massachusetts Institute of Technology (MIT).

Satellite images showing the expansion of large detention camps in Xinjiang, China, between 2016 and 2018 provided some of the strongest evidence of a government crackdown on more than a million Muslims, triggering international condemnation and sanctions.

Other aerial images—of nuclear installations in Iran and missile sites in North Korea, for example—have had a similar impact on world events. Now, image-manipulation tools made possible by artificial intelligence may make it harder to accept such images at face value.

In a paper published online last month, University of Washington professor Bo Zhao employed AI techniques similar to those used to create so-called deepfakes to alter satellite images of several cities. Zhao and colleagues swapped features between images of Seattle and Beijing to show buildings where there are none in Seattle and to remove structures and replace them with greenery in Beijing.

Circa 2016 o.o!


The theory used to be that hydrocarbons were created in “shocks,” or violent stellar events that cause a lot of turbulence and, with the shock waves, make atoms into ions, which are more likely to combine.

Data from the European Space Agency’s Herschel Space Observatory has since proved that theory wrong. Scientists used Herschel to study the components of the Orion Nebula, mapping the amount, temperature and motions of the carbon-hydrogen molecule (CH), the carbon-hydrogen positive ion (CH+) and their parent species, the carbon ion (C+).

They found that in Orion, CH+ is emitting light rather than absorbing it, which means it is warmer than the background gas. This surprised scientists because the CH+ molecule is highly reactive and needs a large amount of energy to form, and it is destroyed when it interacts with the background hydrogen in the cloud.

Fully autonomous exploration and mapping of the unknown is a cutting-edge capability for commercial drones.


Drone autonomy is getting more and more impressive, but we’re starting to get to the point where it’s significantly more difficult to improve on existing capabilities. Companies like Skydio are selling (for cheap!) commercial drones that have no problem dynamically path-planning around obstacles at high speeds while tracking you, which is pretty amazing, and they can also autonomously create 3D maps of structures. In both of these cases, there’s a human indirectly in the loop, either saying “follow me” or “map this specific thing.” In other words, the level of autonomous flight is very high, but there’s still some reliance on a human for high-level planning. Which, for what Skydio is doing, is totally fine and the right way to do it.