
Despite virtual reality (VR) technology being more affordable than ever, developers have yet to achieve a sense of full immersion in a digital world. Among the greatest challenges is making the user feel as if they are walking.

Now, researchers from the Toyohashi University of Technology and The University of Tokyo in Japan have published a paper in the journal Frontiers in Virtual Reality describing a custom-built platform that aims to replicate the sensation of walking in VR, all while the user sits motionless in a chair.

“Walking is a fundamental and fun activity for humans in everyday life. Therefore, it is very worthwhile to provide a high-quality walking experience in a VR space,” says Yusuke Matsuda.

Summary: Computer-generated, or virtual, humans prove to be just as good as real humans at helping people practice leadership skills.

Source: Frontiers.

A virtual human can be as good as a flesh-and-blood one when it comes to helping people practice new leadership skills. That’s the conclusion from new research published in the journal Frontiers in Virtual Reality that evaluated the effectiveness of computer-generated characters in a training scenario compared to real human role-players in a conventional setting.

Take my micro-transaction.


We may be on track to our own version of the Oasis after an announcement yesterday from Epic Games that it has raised $1 billion to put towards building “the metaverse.”

Epic Games has created multiple hugely popular video games, including Fortnite, Gears of War, and Unreal Tournament. An eye-popping demo released last May shows off Epic’s Unreal Engine 5, its next-generation engine for making video games, interactive experiences, and augmented and virtual reality apps, set to be released later this year. The graphics are so advanced that the demo doesn’t look terribly different from a really high-quality video camera following someone around in real life, except it’s even cooler. In February Epic unveiled its MetaHuman Creator, an app that creates highly realistic “digital humans” in a fraction of the time it used to take.

So what’s “the metaverse,” anyway? The term was coined in 1992 when Neal Stephenson published his hit sci-fi novel Snow Crash, in which the protagonist moves between a virtual world and the real world fighting a computer virus. In the context of Epic Games’ announcement, the metaverse will be not just a virtual world, but the virtual world: a digitized version of life where anyone can exist as an avatar or digital human and interact with others. It will be active even when people aren’t logged into it, and will link all previously existing virtual worlds, like an internet for virtual reality.

What’s New: Intel today announced that it has signed an agreement with the Defense Advanced Research Projects Agency (DARPA) to perform research in its Data Protection in Virtual Environments (DPRIVE) program, which aims to develop an accelerator for fully homomorphic encryption (FHE). Microsoft is the key cloud-ecosystem and homomorphic-encryption partner, and will lead commercial adoption of the technology, once developed, by testing it in its cloud offerings, including Microsoft Azure and the Microsoft JEDI cloud with the U.S. government. The multiyear program represents a cross-team effort across multiple Intel groups, including Intel Labs, the Design Engineering Group, and the Data Platforms Group, to tackle “the final frontier” in data privacy: computing on fully encrypted data without access to decryption keys.

“Fully homomorphic encryption remains the holy grail in the quest to keep data secure while in use. Despite strong advances in trusted execution environments and other confidential computing technologies to protect data while at rest and in transit, data is unencrypted during computation, opening the possibility of potential attacks at this stage. This frequently inhibits our ability to fully share and extract the maximum value out of data. We are pleased to be chosen as a technology partner by DARPA and look forward to working with them as well as Microsoft to advance this next chapter in confidential computing and unlock the promise of fully homomorphic encryption for all.” – Rosario Cammarota, principal engineer, Intel Labs, and principal investigator, DARPA DPRIVE program
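The core idea is easier to see in miniature. Below is a minimal, illustrative Python sketch of an *additively* homomorphic scheme (a toy Paillier cryptosystem): anyone can add two encrypted numbers without ever holding the decryption key. Fully homomorphic encryption, the target of DPRIVE, extends this to arbitrary computation on ciphertexts; this is not Intel's or Microsoft's implementation, and the toy key sizes here are far too small for real security.

```python
# Toy Paillier cryptosystem: additively homomorphic, for illustration only.
# Real FHE supports arbitrary computation; this sketch shows only the core
# idea of computing on ciphertexts without the decryption key.
import math
import random

p, q = 293, 433                       # toy primes; real keys are ~2048-bit
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # Carmichael lambda(n)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(n + 1, lam, n2)), -1, n)   # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 17, 25
ca, cb = encrypt(a), encrypt(b)
c_sum = (ca * cb) % n2                # multiplying ciphertexts adds plaintexts
assert decrypt(c_sum) == a + b        # 42, computed without decrypting a or b
print(decrypt(c_sum))
```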

Dr. Shawna Pandya, MD, is a scientist-astronaut candidate with Project PoSSUM, as well as a physician, aquanaut, speaker, martial artist, advanced diver, skydiver, and pilot-in-training.

Dr. Pandya is also the VP of Immersive Medicine with the virtual reality healthcare company Luxsonic Technologies, Director of the International Institute of Astronautical Sciences (IIAS)/PoSSUM Space Medicine Group, Chief Instructor of the IIAS/PoSSUM Operational Space Medicine course, Director of Medical Research at Orbital Assembly Construction (a company building the world’s first rotating space station, providing the first artificial-gravity habitat), clinical lecturer at the University of Alberta, podcast host with the World Extreme Medicine’s WEMCast series, Principal Investigator (PI) for the Shad Canada-Blue Origin student microgravity competition, member of the ASCEND 2021 Guiding Coalition, Life Sciences Team Lead for the Association of Spaceflight Professionals, sessional lecturer for the “Technology and the Future of Medicine” course at the University of Alberta, and Fellow of The Explorers Club.

Dr. Pandya also serves as medical advisor to several space, medical and technology companies, including Mission: Space Food, Gennesys and Aquanauta, as well as the Jasper Dark Sky Festival Advisory Committee.

Dr. Pandya holds a BSc in neuroscience from the University of Alberta, an MSc in Space Studies from the International Space University, an MD from the University of Alberta, and a certification in entrepreneurship from the Graduate Studies Program at Singularity University.

Dr. Pandya is currently completing a fellowship in Wilderness Medicine (Academy of Wilderness Medicine), was granted an Honorary Fellowship in Extreme and Wilderness Medicine by the World Extreme Medicine organization in 2021, and was one of 50 physicians selected to attend the 2021 European Space Agency Space Medicine Physician Training Course. Dr. Pandya was named one of the Women’s Executive Network’s Top 100 Most Powerful Women in Canada in 2021, and a Canadian Space Agency Space Ambassador in 2021.

Dr. Pandya was part of the first crew to test a commercial spacesuit in zero-gravity in 2015. Dr. Pandya earned her aquanaut designation during the 2019 NEPTUNE (Nautical Experiments in Physiology, Technology and Underwater Exploration) mission. She previously served as Commander during a 2020 tour at the Mars Desert Research Station. Her expeditions were captured in the Land Rover short, released with the Apollo 11: First Steps film. She previously interned at ESA’s European Astronaut Center and NASA’s Johnson Space Center.



Virtual reality – future trends.

The quality of virtual reality (VR) headsets has improved exponentially since the 1990s. These graphs illustrate how the rapid improvement is likely to continue in the coming decades, with graphical resolutions practically indistinguishable from real life by 2040.

Early concepts of alternative realities presented to a viewer had emerged as far back as the 19th century. However, it was not until the late 20th century that head-mounted display systems began to see practical and widespread use. Philosopher and computer scientist Jaron Lanier popularised the term “virtual reality” in the 1980s, and the first consumer headsets emerged in the 1990s.

All of which would be nice and handy, but clearly, privacy and ethics are going to be a big issue for people — particularly when a company like Facebook is behind it. Few people in the past would ever have lived a life so thoroughly examined, catalogued and analyzed by a third party. The opportunities for tailored advertising will be total, and so will the opportunities for bad-faith actors to abuse this treasure trove of minute detail about your life.

But this tech is coming down the barrel. It’s still a few years off, according to the FRL team. But as far as it is concerned, the technology and the experience are proven. They work, they’ll be awesome, and now it’s a matter of working out how to build them into a foolproof product for the mass market. So, why is FRL telling us about it now? Well, this could be the greatest leap in human-machine interaction since the touchscreen, and frankly Facebook doesn’t want to be seen to be making decisions about this kind of thing behind closed doors.

“I want to address why we’re sharing this research,” said Sean Keller, FRL Director of Research. “Today, we want to open up an important discussion with the public about how to build these technologies responsibly. The reality is that we can’t anticipate or solve all the ethical issues associated with this technology on our own. What we can do is recognize when the technology has advanced beyond what people know is possible and make sure that the information is shared openly. We want to be transparent about what we’re working on, so people can tell us their concerns about this technology.”


When augmented reality hits the market at full strength, putting digital overlays over the physical world through transparent glasses, it will intertwine itself deeper into the fabric of your life than any technology that’s come before it. AR devices will see the world through your eyes, constantly connected, always trying to figure out what you’re up to and looking for ways to make themselves useful.

Facebook is already leaps and bounds ahead of the VR game with its groundbreaking Oculus Quest 2 wireless headsets, and it’s got serious ambitions in the augmented reality space too. In an online “Road to AR glasses” roundtable for global media, the Facebook Reality Labs (FRL) team laid out some of the eye-popping next-gen AR technology it’s got up and running on the test bench. It also called on the public to get involved in the discussion around privacy and ethics, with these devices just a few scant years away from changing our world as completely as the smartphone did.

Wrist-mounted neuro-motor interfaces

Presently, our physical interactions with digital devices are crude, and they frequently bottleneck our progress. The computer mouse, the graphical user interface, the desktop metaphor and the touchscreen have all been great leaps forward, but world-changing breakthroughs in human-machine interface (HMI) technology come along once in a blue moon.

A new method called tensor holography could enable the creation of holograms for virtual reality, 3D printing, medical imaging, and more — and it can run on a smartphone.

Despite years of hype, virtual reality headsets have yet to topple TV or computer screens as the go-to devices for video viewing. One reason: VR can make users feel sick. Nausea and eye strain can result because VR creates an illusion of 3D viewing although the user is in fact staring at a fixed-distance 2D display. The solution for better 3D visualization could lie in a 60-year-old technology remade for the digital world: holograms.

Holograms deliver an exceptional representation of the 3D world around us. Plus, they’re beautiful. (Go ahead: check out the holographic dove on your Visa card.) Holograms offer a shifting perspective based on the viewer’s position, and they allow the eye to adjust focal depth to alternately focus on foreground and background.
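For a sense of why computing holograms is hard, and why learned approaches like tensor holography matter, here is a minimal brute-force sketch in Python. The parameters are illustrative round numbers, not those of the MIT work: it interferes spherical waves from a few 3D scene points with a plane reference wave, at a cost proportional to points times pixels, which is exactly the expense that tensor holography aims to avoid.

```python
# Brute-force hologram sketch: interfere spherical waves from 3D points
# with a plane reference wave. O(points x pixels); illustrative parameters.
import numpy as np

wavelength = 532e-9                 # green laser, metres
k = 2 * np.pi / wavelength
pitch = 8e-6                        # hologram pixel pitch, metres
N = 512                             # hologram is N x N pixels

coords = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(coords, coords)

# A few scene points at different depths (x, y, z in metres).
points = [(0.0, 0.0, 0.05), (2e-4, -1e-4, 0.07), (-3e-4, 1e-4, 0.06)]

field = np.zeros((N, N), dtype=complex)
for x0, y0, z0 in points:
    r = np.sqrt((X - x0) ** 2 + (Y - y0) ** 2 + z0 ** 2)
    field += np.exp(1j * k * r) / r      # spherical wave from each point

reference = 1.0                          # on-axis plane reference wave
hologram = np.abs(field + reference) ** 2   # recorded intensity pattern
print(hologram.shape)
```

Because every scene point contributes to every hologram pixel, realistic scenes with millions of points quickly become intractable this way; that is the bottleneck a trained network can shortcut.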

Towards a Metamaterially-Based Analogue Sensor for Telescope Eyepieces – Jeremy Batterson


(NB: Those familiar with photography or telescopy can skip over “Elements of a system,” since they will already know this.)

In many telescopic applications, what is desired is not a more magnified image but a brighter one. Some astronomical objects, such as the Andromeda galaxy and famous nebulae like M42, are very large in apparent size but very faint. If the human eye could see the full extent of the Andromeda galaxy, it would appear about six times wider than the Moon. The Great Orion Nebula, M42, is twice the apparent diameter of the Moon.

Astrophotographers have an advantage over visual astronomers in that their digital sensors can be wider than the human pupil, and thus can accommodate larger exit pupils for brighter images.

The common three-factor determination of a photograph’s brightness (aperture, ISO, and shutter speed) should really be a five-factor determination, including two factors often left out because they are inherently designed into a system: magnification and exit pupil. The factors are:

Elements of a system: 1) Aperture. As aperture increases, the light gain of a system increases with the square of the aperture, so a 2-inch-diameter entrance pupil gathers four times as much light as a 1-inch-diameter entrance pupil, and so on.

2) ISO: the de facto sensitivity of the film or digital sensor. A higher ISO increases the light gain of a photograph, so a higher-ISO sensor will record the same camera shot brighter. In modern digital systems, the ISO factor is applied PRIOR to post-shot processing (such as Photoshop curves), since boosting the gain after the image is created also amplifies the noise. Applying the brightness factor before the shot is finalized incorporates less noise and makes for clearer images.

3) Shutter speed. A slower shutter allows more time for light to accumulate, which is fine for still shots, where there is no motion. In astronomy, long-exposure photographs keep the shutter open while the telescope slowly tracks the stars, accumulating light to produce images that would otherwise be obtainable only with a huge-aperture telescope. In daytime, when much light is available, a high shutter speed keeps moving subjects from blurring.

4) Magnification, often left out of such accounts, lowers the light gain by the square of the increase in magnification, the inverse of increased aperture. If we magnify an image two times, the light is spread over an area four times greater, and the image is thus four times dimmer. For stars, which are points of light at any magnification, this does not apply; they will appear just as bright, but against a darker sky. Anything with an apparent area (planets, nebulae, etc.) will be dimmed.

5) Exit pupil, a key limiting factor in astronomy. The four factors above make ultra-fast, bright telescopes and binoculars for visual use a physical impossibility. This is why astronomical binoculars are typically 7×50 or similar. The objective lenses at the front create a 50 mm entrance pupil, but if such binoculars dropped to a magnification much lower than 7x, the exit pupil would become wider than the dark-adapted human pupil, and much of the light would be wasted. If this problem did not exist, astronomers would prefer “2×50” binoculars for many applications, with visual images far brighter than the unaided eye can see.
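To make these factors concrete, here is a minimal numeric sketch, assuming a 7 mm dark-adapted eye pupil (a conventional round figure; the function names are ours, purely illustrative), of how the exit pupil caps the visual surface brightness of extended objects:

```python
# Exit pupil = aperture / magnification. Surface brightness of an extended
# object scales with the square of the exit pupil, but light falling outside
# the eye's pupil (~7 mm dark-adapted, assumed here) is wasted visually.
def exit_pupil_mm(aperture_mm, magnification):
    return aperture_mm / magnification

def relative_surface_brightness(aperture_mm, magnification, eye_pupil_mm=7.0):
    """Extended-object brightness relative to the naked eye (max 1.0)."""
    ep = exit_pupil_mm(aperture_mm, magnification)
    usable = min(ep, eye_pupil_mm)       # an oversized exit pupil is wasted
    return (usable / eye_pupil_mm) ** 2

# 7x50 binoculars: ~7.1 mm exit pupil, essentially the visual maximum.
print(exit_pupil_mm(50, 7))                  # ~7.14 mm
print(relative_surface_brightness(50, 7))    # ~1.0

# Hypothetical "2x50": a 25 mm exit pupil, but the eye only admits ~7 mm.
print(exit_pupil_mm(50, 2))                  # 25 mm
print(relative_surface_brightness(50, 2))    # still 1.0, capped by the eye

# 20x50: a 2.5 mm exit pupil dims extended objects noticeably.
print(relative_surface_brightness(50, 20))   # ~0.13
```

The “2×50” case shows exactly why such instruments are not built: the oversized exit pupil cannot push visual brightness past the naked-eye limit.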

Analogue vs Digital: Passive night vision devices remain one of the ONLY cases where analogue optical systems are still better than digital ones. Virtually all modern cameras are digital, but the best night-vision devices, such as “Generation-3” devices, are still analogue. The systemic luminance gain (what the viewer actually sees) of the most advanced analogue night vision systems is in the range of 2000x, meaning such a system can make an image 2000 times brighter than it would otherwise appear. In terms of light gain, this is equivalent to turning a 6-inch telescope into a 268-inch mega-telescope. For comparison, until fairly recently the largest telescope in the world was the 200-inch Mount Palomar telescope. In fact, the gain is so high that filters are often used to remove light pollution, and little effort is made to capture the full exit pupil. Even though some light gain is lost by having too large an exit pupil, there is so much to spare that the loss is acceptable. Thus, some astronomers use night vision devices to see views of astronomical objects that would otherwise be beyond any existing amateur telescope. People also use these devices as stand-alone viewers to walk around on moonless nights, with only starlight for luminance.
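The aperture-equivalence arithmetic above is easy to verify: collected light scales with the square of aperture, so a luminance gain G corresponds to multiplying the aperture by the square root of G. A one-line check (the function name is ours, illustrative only):

```python
import math

def equivalent_aperture_inches(aperture_in, luminance_gain):
    # Light gathered scales with aperture squared, so a gain G is
    # equivalent to multiplying the aperture by sqrt(G).
    return aperture_in * math.sqrt(luminance_gain)

print(round(equivalent_aperture_inches(6, 2000)))  # ~268, matching the text
```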

Current Loss of Resolution from Digital Gain: Digital sensors increase light gain by increasing the size of the imaging pixels, so that each one has a larger aperture and thus collects more light. But an image sensor with larger pixels also has fewer pixels per square inch and thus lower resolution, leading to less clear images. One popular digital night vision device, the Sionyx camera, is about the size of a high-end telescope eyepiece but has a low resolution of 1280×720. This camera boosts light gain by registering infrared and ultraviolet in addition to visible light, by adding a high ISO factor, and by having a very large pixel size on a reasonably large sensor. This pixel-resolution problem will be inherent to digital night vision devices unless the ISO value of each individual pixel can be boosted enough to allow pixels smaller than the human eye can perceive. On the plus side, if a camera like the Sionyx were used as a telescope eyepiece, its slowest shutter speed could be lengthened still further, giving the visual equivalent of a long-exposure photograph: as the viewer watched, the image would continually grow brighter. For visual astronomy it would also need some sort of projection system to make the image field appear round and close to the eye, rather than like a rectangular movie screen seen from the back of a theater. Visual astronomers will only want such a digital eyepiece if its image appears similar to the one they are accustomed to seeing through an eyepiece. The alternative to such a digital eyepiece with large pixels is what we describe below.
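The pitch-versus-resolution trade-off can be made concrete with a small sketch. The sensor dimensions and pixel pitches below are hypothetical round numbers chosen so one case lands on a 1280×720 grid; they are not actual Sionyx specifications:

```python
# For a fixed sensor area, light per pixel grows with pitch^2 while the
# pixel count shrinks by the same factor: gain traded against resolution.
def sensor_tradeoff(sensor_w_mm, sensor_h_mm, pitch_um):
    pitch_mm = pitch_um / 1000.0
    cols = int(sensor_w_mm / pitch_mm)
    rows = int(sensor_h_mm / pitch_mm)
    light_per_pixel = pitch_um ** 2          # relative units
    return cols, rows, light_per_pixel

# The same hypothetical 8 x 4.5 mm sensor at two pixel pitches:
print(sensor_tradeoff(8.0, 4.5, 2.0))   # 4000 x 2250 px, light 4 per pixel
print(sensor_tradeoff(8.0, 4.5, 6.25))  # 1280 x 720 px, light ~39 per pixel
```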

A metamaterial-based analogue sensor: If a metamaterial-based analogue sensor with a high ISO value can be devised, this problem can be surmounted. An analogue system, such as that produced by the image-intensification tubes of Generation-1 and Generation-2 night vision devices, does not use pixels and does not lose resolution in the same way.

Analogue sensors also have the added benefit that, like digital sensors, they can be larger than the human pupil and thus able to accommodate larger exit pupils. They could be used in combination with digital processing for the best of both worlds. In addition, in the same way that the human retina widely distributes cone and rod cells, a metamaterial sensor could widely distribute “cells” sensitive to UV, IR, and visible light, combining them all into one system that enabled humans to see everything from far infrared up to ultraviolet and everything in between.

There are multiple tracks toward possible metamaterially-based night vision capabilities. Many are being developed by the military and will take some time to work their way into the civilian economy.

One system, the X27 Camera, gives full-color and good resolution: https://www.youtube.com/watch?v=rVQWmWkbzkw.

One metamaterial developed for the US military allows full-IR-spectrum imaging in one compact system. It would be possible to have near-, mid-, and far-infrared register as a basic RGB color system.

One other possibility is a retinal projection system using special contact metalenses over the eyes, which would allow the projection of an image onto the full curved human retina. Retinal projection is a technique on the verge of a breakthrough, given the effort and funding going into augmented and virtual reality. A telescopic system could project its image through such an arrangement, giving the impression of a full 180-degree field of view, something no telescope in history has ever done. With such a system, the user would wear a special fisheye-type contact metalens, and a special projection eyepiece would project a narrow beam directly onto the retina. This would be a “real” image, i.e., not digitally altered. Alternatively, and more easily, the same virtual reality system that projects onto human vision could simply project a digital rendering of the view from a telescope eyepiece.

As a final aside, one additional area where metamaterials will help existing night vision devices is transmission. Current analogue devices lose around 90% of their initial gain in the system itself. Thus, a Generation-3 intensification tube may have a gain of 20,000 but a system gain of only 2,000. A huge amount of loss is occurring, and it could be greatly reduced simply by using simpler, flat, single metamaterial lenses. A flat metamaterial lens can incorporate all the elements of a complete achromatic multi-lens system into one flat lens, so there is less light loss. If metalenses could cut transmission loss to 70%, a typical Generation-2 system, which is much cheaper, could perform on par with a typical current Generation-3 system. At present metalenses have not reached the stage of development where they can be produced at large apertures, but that will change.
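A quick back-of-envelope check of this argument; the Generation-3 figures are from the text above, while the Generation-2 tube gain is an assumed illustrative number:

```python
# System gain = tube gain x optical transmission (1 - loss fraction).
def system_gain(tube_gain, transmission_loss):
    return tube_gain * (1.0 - transmission_loss)

print(system_gain(20_000, 0.90))  # Gen-3 example from the text: 2000.0
# A hypothetical Gen-2 tube with ~6700x gain, if losses fell to 70%:
print(system_gain(6_700, 0.70))   # ~2010, on par with today's Gen-3 systems
```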


CRANE, Ind. – A Navy engineer has invented a groundbreaking method to improve night vision devices without adding weight or more batteries. Dr. Ben Conley, an electro-optics engineer at the Naval Surface Warfare Center-Crane Division, has developed a special metamaterial to bring full-spectrum infrared to warfighters. On August 28, the Navy was granted a U.S. patent on Conley’s technology.