
Varjo’s XR-3 headset has perhaps the best passthrough view of any MR headset on the market, thanks to color cameras that offer fairly high resolution and a wide field of view. But rather than just using the passthrough view for AR (bringing virtual objects into the real world), Varjo has developed a new tool to do the reverse (bringing real objects into the virtual world).

At AWE 2021 this week I got my first glimpse at ‘Varjo Lab Tools’, a soon-to-be-released software suite that will work with the company’s XR-3 mixed reality headset. The tool allows users to trace arbitrary shapes that then become windows into the real world, while the rest of the view remains virtual.

Nvidia’s Omniverse, billed as a “metaverse for engineers,” has grown to more than 700 companies and 70,000 individual creators who are working on digital twins that replicate real-world environments in virtual space.

Omniverse is Nvidia’s simulation and collaboration platform, intended to deliver the foundation of the metaverse: the universe of interconnected virtual worlds, like those in novels such as Snow Crash and Ready Player One. Omniverse is now moving from beta to general availability, and it has been extended to software ecosystems that put it within reach of 40 million 3D designers.

And today, during CEO Jensen Huang’s keynote at the Nvidia GTC online conference, Nvidia said it has added features such as Omniverse Replicator, which makes it easier to train deep learning neural networks, and Omniverse Avatar, which makes it simple to create virtual characters that can be used in the Omniverse or other worlds.

French startup Lynx launched a Kickstarter campaign in October for Lynx R-1, a standalone MR headset capable of both VR and passthrough AR. Starting at €530 (or $500 if you’re not subject to European sales tax), the headset drew a strong response from backers, passing its initial funding goal in under 15 hours and going on to garner over $800,000 throughout the month-long campaign.

Update (November 10th, 2021): The Lynx R-1 Kickstarter is now over, having attracted €725,281 (~$835,000) from 1,216 backers. In the final hours the campaign managed to pass its first stretch goal at $700,000: a free facial interface pad.

If you missed out, the company is now offering direct preorders for both its Standard Edition for $600 and Enterprise Edition for $1,100. It’s also selling a few accessories including compatible 6DOF controllers, facial interfaces, and a travel case.

By Jeremy Batterson 11-09-2021

The equivalent of cheap 100-inch binoculars will soon be possible. This memo is a quick update on seven rapidly converging technologies that augur well for astronomy enthusiasts of the near future. All of these technologies already exist in either fully developed or nascent form, and all are being rapidly improved thanks to the gigantic global cell phone market and the retinal projection market that will soon replace it. The seven technologies are listed first, after which they are brought together into a single system.

1) Tracking.
2) Single-photon image sensing.
3) Large effective exit pupils via large sensors.
4) Long exposure non-photographic function.
5) Flat optics (metamaterials).
6) Off-axis function of flat optics.
7) Retinal projection.

1) TRACKING: This is already widely used in so-called “go-to” telescopes, where the instrument finds any object and tracks it, so that Earth’s rotation does not carry the viewed object out of the field of view. The viewer doesn’t have to find the object and doesn’t have to set up a clock drive to track it. Tracking is also used, in part, in image-stabilization software for cameras and smartphones, to prevent motion blurring of images.

2) SINGLE-PHOTON IMAGE SENSORS, whether of the single-photon avalanche diode type or the type developed by Dr. Fossum, will allow passive imaging in nearly totally dark environments, without the use of IR or other illumination. This new type of image sensor will replace monochromatic analogue “night-vision” devices, allowing color imaging at higher resolution than they can produce. Unlike those current devices, such sensors will not be destroyed by exposure to normal or bright lighting. These sensors effectively increase the light-gathering power of a telescope by at least an order of magnitude, allowing small telescopes to see what observatory telescopes see now.
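To see why photon counting matters so much in near darkness, consider the signal-to-noise arithmetic: a photon-counting pixel is limited only by shot noise, while a conventional pixel adds read noise on top. The Python sketch below is only an illustration; the photon and noise figures are assumed for the example, not taken from any particular sensor.

```python
import math

# Toy comparison: a conventional pixel vs. a photon-counting pixel in near
# darkness. Photon and noise figures are assumed for illustration only.
photons = 25            # photons reaching one pixel during an exposure
read_noise = 5.0        # electrons RMS, plausible for a conventional pixel

# Ideal photon-counting pixel: shot noise only.
snr_photon_counting = photons / math.sqrt(photons)

# Conventional pixel: shot noise and read noise add in quadrature.
snr_conventional = photons / math.sqrt(photons + read_noise ** 2)

print(f"photon-counting SNR: {snr_photon_counting:.1f}")  # 5.0
print(f"conventional SNR:    {snr_conventional:.1f}")     # ~3.5
```

At these faint light levels the photon-counting pixel sees a usefully cleaner image from the same optics, which is the sense in which such sensors multiply a telescope’s effective light-gathering power.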

3) EXIT PUPIL: The pupil of the dark-adapted human eye is around 7mm, which means the light exiting a telescope must not have a wider cross-section than this, or part of the light captured by the objective lens or mirror will be lost. Lowering a system’s magnification gives brighter images, but only up to this limit. This is a well-known problem for visual astronomers. Astrophotographers get around it with two tricks. The first is to use a photographic sensor wider than 7mm, allowing a larger exit pupil and thus brighter images. A 1-inch sensor or photographic plate, for example, already allows an image thirteen times brighter than what a 7mm human pupil can see.
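That “thirteen times” figure follows directly from the ratio of collecting areas, as this quick sketch confirms:

```python
# Brightness gain from a sensor wider than the 7 mm dark-adapted pupil
# scales with collecting area, i.e. the square of the diameter ratio.
pupil_mm = 7.0
sensor_mm = 25.4   # a 1-inch sensor, as in the example above

gain = (sensor_mm / pupil_mm) ** 2
print(f"{gain:.1f}x brighter")   # ~13.2x, matching the 'thirteen times' figure
```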

4) LONG EXPOSURE: The other trick astrophotographers use is to keep the shutter of the camera open for longer periods, thus capturing more light and allowing a bright image of a faint object to build up over time. As a telescope tracks the stars–so that they appear motionless in the telescopic view–this can be done for hours. The Hubble Space Telescope took a roughly 100-hour long-exposure photograph to produce the famous “deep field” image of ultra-faint distant galaxies. An example of a visual use of the same principle is the Sionyx Pro camera, which keeps the shutter open for a fraction of a second. If the exposures are short enough, a video can be produced that appears brighter than what the unaided eye sees. Sionyx adds to this with its black-silicon sensors, which are better at retaining the light that hits them. For astronomy, where stellar objects do not move and do not blur if they are tracked, longer exposures can be used, with the image rapidly brightening as the viewer watches. Unistellar’s eVscope and Vaonis’s Stellina telescope already use this function, but without an eyepiece; instead, their images are projected onto people’s cell phones or other viewing devices. However, most astronomers want to see something directly with their eyes, which is a limitation of such telescopes.
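The stacking idea is easy to demonstrate numerically: summing tracked short exposures grows the signal linearly with the number of frames while photon (shot) noise grows only as its square root, so the signal-to-noise ratio improves roughly as the square root of the frame count. A minimal simulation follows, with photon rates assumed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stack N short tracked exposures of a faint object: signal grows linearly
# with N while shot noise grows as sqrt(N), so SNR improves ~sqrt(N).
# The photon rates below are assumed for illustration only.
signal_per_frame = 2.0   # mean photons per frame from the object
sky_per_frame = 1.0      # mean background photons per frame

for n_frames in (1, 10, 100):
    frames = rng.poisson(signal_per_frame + sky_per_frame, size=n_frames)
    stacked = frames.sum()
    noise = np.sqrt(max(stacked, 1))   # Poisson noise of the whole stack
    snr = n_frames * signal_per_frame / noise
    print(f"{n_frames:4d} frames -> SNR ~ {snr:.1f}")
```

This is the same reason an eVscope-style “live stacking” view visibly brightens and cleans up the longer it stares at a tracked target.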

5) FLAT OPTICS are already entering the cell phone market and will increase in aperture over the coming years. A flat metamaterial lens, replacing a thick and heavy series of lenses, can provide a system short enough to fit easily into a cell phone, with no protruding camera bump. It is possible to produce such lenses with extremely short focal ratios. Eventually, very large 20-inch or 30-inch objectives will be producible this way, but, in the interim, multiple small objectives could be combined at a DISTANCE from each other. There are two reasons for larger-aperture objectives: increased light-gathering power and higher resolution. The higher resolution, however, can also be obtained by keeping two or more smaller objectives far apart from each other, and if the light-gathering power is already high enough from single-photon detectors, a larger aperture is not necessary.
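The resolution half of this claim follows from diffraction: achievable angular resolution is set by the widest dimension of the collecting optics, so two small lenses spread across a long baseline can in principle resolve like one large objective (coherently combining their light is the hard part, which the memo assigns to the metamaterial optics). A rough calculation using the Rayleigh criterion, with a 30-inch separation assumed to match the scenario later in the memo:

```python
import math

# Diffraction-limited angular resolution (Rayleigh criterion):
# theta = 1.22 * wavelength / aperture, in radians.
WAVELENGTH_M = 550e-9  # green light

def resolution_arcsec(aperture_m: float) -> float:
    theta = 1.22 * WAVELENGTH_M / aperture_m
    return math.degrees(theta) * 3600

print(f"single 60 mm lens:            {resolution_arcsec(0.060):.2f} arcsec")
print(f"two lenses on a 30 in base:   {resolution_arcsec(0.762):.2f} arcsec")
```

The baseline buys roughly a tenfold gain in resolving power without any gain in light grasp, which is exactly why the memo pairs it with single-photon sensors.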

6) THE OFF-AXIS ability of flat metamaterial optics is another game-changer. In normal optics, the plane of focus lies along the perpendicular of the main light-collecting lens or mirror, the so-called “objective” lens or mirror. In an off-axis mirror or lens, the focal axis is not down the middle, but off to the side, or even entirely outside the perpendicular plane of the objective. These types of optics are very difficult to produce traditionally but are easy to produce with the flat, metamaterial method. An example of the value of off-axis optics is the reflector telescope. When light bounces off a reflector’s objective mirror, it hits a secondary mirror and bounces back to an eyepiece. But the secondary mirror sits in the light path, obstructing part of the main mirror. This creates diffraction patterns that reduce the resolution of the image. For this reason refractors, which use lenses and thus have no secondary obstruction, are superior in quality, although far harder and more expensive to build. The off-axis function would also be valuable for large binoculars, which would not need secondary prisms or mirrors to bring the images down to the width between two human eyes. Typical human pupils are two or three inches apart. With off-axis binoculars, the focus of the two objective lenses would fall to the side of their diameters, and no secondary redirection of the light cones would be needed.

7) RETINAL PROJECTION is being developed in several ways by numerous companies, and within a decade it should be fairly common. This is known generically as “augmented reality”: a pair of glasses or contact lenses projects an image over normal vision. If a person closed their eyes in a darkened room, they could watch movies or dictate papers with such a system, or view images of the stars from a telescope. For visual astronomers, who like the idea of seeing something with their own eye instead of viewing it on a TV screen, retinal projection will allow a nearly identical experience, but with all the superior functions listed in this memo.

THE FAMILY TELESCOPE AROUND THE YEAR 2035:
These functions can be brought together in numerous ways. Noted here is but one such possible way.

A) Two flat off-axis metamaterial lenses, each around 2.5 inches (~60mm) in diameter and corrected for chromatic, spherical, and other optical aberrations, are positioned 30 inches apart, giving the resolution of a telescope with a 30-inch objective mirror. Since they feed single-photon detectors, they will conservatively have the effective light-gathering power of something like 10-inch binoculars.

B) Whoever has control of the scope looks up at a celestial object, such as the famous M-42 nebula, and says, “zoom and track.” The scope then moves to where the controller is looking, or wherever else it is told to go.

C) A large image sensor increases the brightness of the view another order of magnitude over what the dark-adapted eye would see with 10-inch binoculars. Now, the image is as bright as what would be seen in 30-inch binoculars.

D) The time-exposure function adds yet another order or two of light-gathering power, depending on the length of exposure. Even exposures of a fraction of a second add brightness, and longer exposures add more, in almost real time. Now the image is equivalent to looking through something like 100-inch binoculars. As the producible aperture of metamaterial optics increases, the effective aperture will climb to 200-inch, 500-inch, and beyond.

Finally, the image is projected into the back of the viewer’s eye, as a wide-field image, wider than that of the best TeleVue eyepiece, and is simultaneously projected onto the retinas of many other people. This multiple-viewer function already exists with Stellina and eVscope, but the view is seen on multiple cell phone screens, instead of in multiple viewers’ own eyes.
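As a rough sanity check of this ladder, treat each stage above (single-photon sensing, the large sensor, and the time exposure) as approximately a tenfold gain in light grasp; equivalent aperture then scales with the square root of each gain. The sketch below uses those assumed order-of-magnitude factors and lands near the memo’s 10-, 30-, and 100-inch figures:

```python
import math

# Equivalent-aperture ladder: each stage multiplies effective light grasp
# by ~10x (an assumed, order-of-magnitude figure), and equivalent aperture
# scales with the square root of the gain.
aperture_in = 2.5   # one flat metamaterial lens, per eye

for stage in ("single-photon sensor", "large sensor / exit pupil", "time exposure"):
    aperture_in *= math.sqrt(10)
    print(f"after {stage:25s} -> ~{aperture_in:.0f}-inch equivalent per eye")
```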

As pointed out in an earlier memo, the rise of the single-photon detector will also render much of our bright city lighting obsolete, since simple night-glasses will allow driving at night without headlights or streetlamps. This means the night sky could be much darker to begin with, another boon to astronomers. Finally, with currently existing filters it is already possible to filter out much of the light pollution coming from urban areas. This function will no doubt also be incorporated, along with built-in dew heaters and other useful features.

Today at AWE 2021, Qualcomm announced the Snapdragon Spaces XR Developer Platform, a head-worn AR software suite the company is using to kickstart a broader move toward smartphone-tethered AR glasses.

Qualcomm says its Snapdragon Spaces XR Developer Platform offers a host of machine perception functions that are ideal for smartphone-tethered AR glasses. The software toolkit focuses on performance and low power, and provides the sort of environmental-understanding and human-interaction capabilities the company hopes will give AR developers a good starting point.

A US$500 billion accelerator of human progress — Mansoor Hanif, Executive Director, Emerging Technologies, NEOM.


Mansoor Hanif is the Executive Director of Emerging Technologies at NEOM (https://www.neom.com/en-us), a fascinating $500 billion planned “cognitive city” and tourist destination located in northwest Saudi Arabia, where he is responsible for all R&D activities for the Technology & Digital sector, including space technologies, advanced robotics, human-machine interfaces, sustainable infrastructure, digital master plans, digital experience platforms and mixed reality. He also leads NEOM’s collaborative research activities with local and global universities and research institutions, and manages the team developing world-leading Regulations for Communications and Connectivity.

Prior to this role, Mr Hanif served as Executive Director, Technology & Digital Infrastructure, where he oversaw the design and implementation of NEOM’s fixed, mobile, satellite and sub-sea networks.

An industry leader, Mr Hanif has over 25 years of experience in planning, building, optimizing and operating mobile networks around the world. He is patron of the Institute of Telecommunications Professionals (ITP), a member of the Steering Board of the UK5G Innovation Network, and on the Advisory Boards of the Satellite Applications Catapult and University College London (UCL) Electrical and Electronic Engineering Dept.

Prior to joining NEOM, Mr Hanif was Chief Technology Officer of Ofcom, the UK telecoms and media regulator, where he oversaw the security and resilience of the nation’s networks.

As Director of the Converged Networks Research Lab at BT, he led research into fixed and mobile networks to drive convergence across research initiatives.

Mr Hanif has held several other roles at EE (formerly Everything Everywhere), a UK-based telecommunications company, and was responsible for the technical launch of 4G and integration of the Orange and T-Mobile networks as Director of Radio Networks and board member of Mobile Broadband Network Limited. In addition, he held positions at both Orange Moldova and Vodafone Italy, overseeing network optimization, capacity expansion and the planning and implementation of new technologies.

Mr Hanif holds a Bachelor of Engineering in Electronic and Electrical Engineering from University College London (UCL) and a Diplôme d’Ingénieur from the École Nationale Supérieure de Télécom de Bretagne.

Today, we’re bringing you 10 EMERGING Technologies That Will Change The World. Better stick around for #1 to find out how Elon Musk may have plans to turn us all into human robots someday.
What’s up, tech-heads, and welcome to another episode of TechJoint! It really feels like we’re already living in the future every day. From 5G connectivity to self-driving cars becoming ever more accessible, innovation is everywhere we look! It can sometimes be hard to imagine a world with even more innovation, a future world. What would it look like? What everyday problems would be solved? It’s a pretty good bet some technologies, like artificial intelligence, will be in our lives, playing important roles in the future of humankind. Other technologies may seem far-fetched, unnecessary and, frankly, unattainable. So buckle up, and let’s take a look at these 10 futuristic technologies that are going to change the world as we know it.


10 EMERGING technologies that will change the world.

10. Voice Assistants.
9. Gene Splicing.
8. Mixed Reality.
7. Regenerative Medicine.
6. Fully Autonomous Vehicles.
5. Digital Wallets.
4. Artificial Intelligence.
3. Automation.
2. ‘Alive’ Building Materials.
1. Internet For Everyone.


Microsoft is entering the race to build a metaverse inside Teams, just days after Facebook rebranded to Meta in a push to build virtual spaces for both consumers and businesses. Microsoft is bringing Mesh, a collaborative platform for virtual experiences, directly into Microsoft Teams next year. It’s part of a big effort to combine the company’s mixed reality and HoloLens work with meetings and video calls that anyone can participate in thanks to animated avatars.

With today’s announcement, Microsoft and Meta seem to be on a collision course to compete heavily in the metaverse, particularly for the future of work.

Microsoft Mesh always felt like the future of Microsoft Teams meetings, and now it’s set to come to life in the first half of 2022. Microsoft is building on efforts like Together Mode and other experiments in making meetings more interactive, after months of people working from home and adjusting to hybrid work.

Apple is looking into how it may change how you view AR (augmented reality) altogether… literally. Instead of projecting an image onto a lens that is viewed by someone wearing an AR headset or glasses, Apple envisions beaming the image directly onto the user’s eyeball.

Apple recently unveiled its upcoming lineup of new products. What it did not showcase, however, was revealed in a recent patent: Apple is researching how it can change how we see AR, and the future of its “Apple Glass” product, if one ever comes to exist. The patent describes moving away from the traditional approach of projecting an image onto a lens, toward projecting the image directly onto the retina of the wearer. This would be achieved through the use of micro projectors.

The issue Apple is trying to avoid is the nausea and headaches some people experience while viewing AR and VR (virtual reality). The patent identifies the problem as “accommodation-convergence mismatch,” which causes eyestrain for some users. Apple hopes that by using its “Direct Retinal Projector” it can alleviate those symptoms and make AR and VR accessible to more users.

DigiLens has raised funding from Samsung Electronics in a round that values the augmented reality smart glasses maker at more than $500 million.

Sunnyvale, California-based DigiLens did not disclose the exact amount it raised for the development of its extended reality (XR) glasses, which will offer AR features such as overlaying digital images on what you see.

DigiLens CEO Chris Pickett said in a previous interview with VentureBeat that the latest smart glasses are more advanced than models the company showed in 2019.