
Despite years of hype, virtual reality headsets have yet to topple TV or computer screens as the go-to devices for video viewing.

One reason: VR can make users feel sick. Nausea and eye strain can result because VR creates an illusion of 3D viewing even though the user is in fact staring at a fixed-distance 2D display. The solution for better 3D visualization could lie in a 60-year-old technology remade for the digital world: holograms.

Holograms deliver an exceptional representation of the 3D world around us. Plus, they’re beautiful. (Go ahead — check out the holographic dove on your Visa card.) Holograms offer a shifting perspective based on the viewer’s position, and they allow the eye to adjust focal depth to alternately focus on foreground and background.

Cosmologists love universe simulations. Even models covering hundreds of millions of light years can be useful for understanding fundamental aspects of cosmology and the early universe. There’s just one problem: they’re extremely computationally intensive. A 500-million-light-year swath of the universe could take more than three weeks to simulate. Now, scientists led by Yin Li at the Flatiron Institute have developed a way to run these cosmically huge models 1,000 times faster. That 500-million-light-year swath could then be simulated in 36 minutes.

Older algorithms took such a long time in part because of a tradeoff. Existing models could either simulate a very detailed, very small slice of the cosmos or a vaguely detailed larger slice of it. They could provide either high resolution or a large area to study, not both.

To overcome this dichotomy, Li turned to an AI technique called a generative adversarial network (GAN). A GAN pits two neural networks against each other: a generator that produces candidate outputs, and a discriminator that tries to distinguish those outputs from real examples. As training iterates, each network improves in response to the other, until the generator’s outputs become hard to tell apart from the real thing.
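The adversarial idea can be sketched with a deliberately tiny, illustrative example (the 1-D setup, parameters, and learning rates below are assumptions for demonstration, not the authors' actual method):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Toy 1-D GAN. The generator g(z) = a*z + b maps noise to samples and
# tries to mimic "real" data drawn from N(4, 1); the discriminator
# D(x) = sigmoid(w*x + c) tries to tell real samples from generated ones.
a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.01

for step in range(5000):
    real = rng.normal(4.0, 1.0, size=64)
    z = rng.normal(0.0, 1.0, size=64)
    fake = a * z + b

    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: gradient ascent on log D(fake) (non-saturating loss).
    d_fake = sigmoid(w * fake + c)
    upstream = (1 - d_fake) * w      # d log D(fake) / d fake
    a += lr * np.mean(upstream * z)
    b += lr * np.mean(upstream)

# With a discriminator this simple, the generator mainly learns to match
# the mean of the target distribution, so b should end up near 4.
print(f"learned offset b = {b:.2f}")
```

Real cosmological GANs replace the affine generator with a deep convolutional network and the scalar samples with simulated density fields, but the two-player training loop has the same shape.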

Last August, several dozen military drones and tanklike robots took to the skies and roads 40 miles south of Seattle. Their mission: Find terrorists suspected of hiding among several buildings.

So many robots were involved in the operation that no human operator could keep a close eye on all of them. Instead, they were given instructions to find—and eliminate—enemy combatants when necessary.

The mission was just an exercise, organized by the Defense Advanced Research Projects Agency, a blue-sky research division of the Pentagon; the robots were armed with nothing more lethal than radio transmitters designed to simulate interactions with both friendly and enemy robots.

Using neural networks, Flatiron Institute research fellow Yin Li and his colleagues simulated vast, complex universes in a fraction of the time it takes with conventional methods.

Using a bit of machine learning magic, astrophysicists can now simulate vast, complex universes in a thousandth of the time it takes with conventional methods. The new approach will help usher in a new era in high-resolution cosmological simulations, its creators report in a study published online on May 4, 2021, in Proceedings of the National Academy of Sciences.

“At the moment, constraints on computation time usually mean we cannot simulate the universe at both high resolution and large volume,” says study lead author Yin Li, an astrophysicist at the Flatiron Institute in New York City. “With our new technique, it’s possible to have both efficiently. In the future, these AI-based methods will become the norm for certain applications.”

Blue Robotics, a leading developer of marine robotics systems and components, has partnered with Unmanned Systems Technology (“UST”) to demonstrate their expertise in this field. The ‘Silver’ profile highlights how their underwater ROVs (remotely operated vehicles), thrusters and accessories enable a wide range of missions for commercial, research and exploration applications.

The BlueROV2 is a high-performance, highly configurable ROV designed for underwater inspections, research and ocean exploration. With open-source hardware and software, the platform features an unprecedented level of flexibility and expandability, allowing users to easily make improvements and upgrades to take on a huge variety of missions down to depths of 100 m (330 ft).

The ROV incorporates six Blue Robotics T200 thrusters in a vectored configuration, delivering an excellent thrust-to-weight ratio and providing the ability to move precisely in any direction. The system can be expanded to eight thrusters via a Heavy Configuration Retrofit Kit, and features adjustable gain levels for precision control at extremely low speeds as well as high power to overcome currents and carry heavy loads. The BlueROV2 is provided with a Fathom ROV tether, with available length options from 25 m (82 ft) up to 300 m (984 ft).
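The reason a vectored layout can move in any direction comes down to thrust allocation: a desired body-frame force and moment is mapped onto individual thruster commands through a mixing matrix. A minimal sketch, assuming a hypothetical geometry of four horizontal thrusters angled at 45 degrees with a 0.15 m moment arm (the BlueROV2's actual angles and positions may differ):

```python
import numpy as np

# Each column maps one thruster's unit thrust to body-frame
# surge force (Fx), sway force (Fy), and yaw moment (Mz).
c = np.cos(np.pi / 4)   # 45-degree thruster angle (assumed)
r = 0.15                # moment arm in metres (assumed)
A = np.array([
    [ c,  c,  c,  c],   # surge contributions
    [ c, -c, -c,  c],   # sway contributions
    [ r, -r,  r, -r],   # yaw contributions
])

def allocate(wrench):
    """Least-squares thruster commands for a desired [Fx, Fy, Mz]."""
    return np.linalg.pinv(A) @ np.asarray(wrench, dtype=float)

# Pure forward thrust: all four thrusters share the load equally.
u = allocate([10.0, 0.0, 0.0])
print(u)
```

Because the allocation matrix has full row rank, the pseudo-inverse returns the minimum-effort command that exactly realizes any requested surge, sway, and yaw combination.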

Local media reports quoted Wang as saying that artificial intelligence, 6G, quantum technology, driverless vehicles, intelligent networks and other “frontier areas” would be the focus of Shenzhen’s investment plans, while the value of its digital economy would account for more than 31 per cent of GDP by 2025.


Money will be used to support innovation in core technologies, the city’s Communist Party chief Wang Weizhong says.

The aircraft can lift up to 2722 kg with unmatched performance in hot and high conditions.


Kaman Air Vehicles performed the maiden flight with the world’s first heavy-lift unmanned helicopter for the commercial market, the K-MAX TITAN, last month.

Kaman’s K-MAX helicopter has been flying unmanned cargo missions for US forces in Afghanistan for roughly a decade. Now, the company is introducing a commercial version to the market.

With a focus on enabling safety and operational efficiency, the unmanned helicopter will redefine the helicopter external lift market by increasing future mission capabilities in any location and any type of weather. This is made possible by Near Earth Autonomy’s sensor-based autonomy suite.

AI is fundamental to many products and services today, but its hunger for data and computing cycles is bottomless. Lightmatter plans to leapfrog Moore’s law with its ultra-fast photonic chips specialized for AI work, and with a new $80 million round, the company is poised to take its light-powered computing to market.

We first covered Lightmatter in 2018, when the founders were fresh out of MIT and had raised $11 million to prove that their idea of photonic computing was as valuable as they claimed. They spent the next three years and change building and refining the tech — and running into all the hurdles that hardware startups and technical founders tend to find.

For a full breakdown of what the company’s tech does, read that feature — the essentials haven’t changed.

The Dead Sea Scrolls, discovered some seventy years ago, are famous for containing the oldest manuscripts of the Hebrew Bible (Old Testament) and many hitherto unknown ancient Jewish texts. But the individual people behind the scrolls have eluded scientists, because the scribes are anonymous. Now, by combining the sciences and the humanities, University of Groningen researchers have cracked the code, enabling them to identify the scribes behind the scrolls. They presented their results in the journal PLOS ONE on April 21, 2021.

The scribes who created the scrolls did not sign their work. Scholars suggested some manuscripts should be attributed to a single scribe based on handwriting. “They would try to find a ‘smoking gun’ in the handwriting, for example, a very specific trait in a letter which would identify a scribe,” explains Mladen Popović, professor of Hebrew Bible and Ancient Judaism at the Faculty of Theology and Religious Studies at the University of Groningen. He is also director of the university’s Qumran Institute, dedicated to studying the Dead Sea Scrolls. However, these identifications are somewhat subjective and often hotly debated.
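The computational side of such an attribution study can be sketched as clustering: summarize each manuscript column as a numeric feature vector describing how its letters are drawn, then group similar columns. Everything below is a hypothetical illustration with synthetic data, not the Groningen team's actual pipeline or features:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: each row is a feature vector summarising how one
# manuscript column draws a letter (e.g. stroke width, curvature,
# slant). Two imaginary "scribes" with subtly different habits.
scribe_a = rng.normal([2.0, 1.0, 0.5], 0.1, size=(20, 3))
scribe_b = rng.normal([2.3, 0.8, 0.7], 0.1, size=(20, 3))
X = np.vstack([scribe_a, scribe_b])

def kmeans(X, k=2, iters=50):
    """Plain k-means: alternate nearest-centroid assignment and
    centroid updates; keep a centroid unchanged if its cluster empties."""
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centroids) ** 2).sum(-1), axis=1)
        centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
    return labels

# Columns written by the same hand should tend to land in one cluster.
labels = kmeans(X)
print(labels)
```

The published study works with far richer inputs (digitized letter shapes from the scrolls themselves) and statistical tests on top of the grouping, but the underlying question is the same: do the writing habits fall into one cluster or two?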