
Gesture interface company Leap Motion is announcing an ambitious, but still very early, plan for an augmented reality platform based on its hand tracking system. The system is called Project North Star, and it includes a design for a headset that Leap Motion claims costs less than $100 at large-scale production. The headset would be equipped with a Leap Motion sensor, so users could precisely manipulate objects with their hands — something the company has previously offered for desktop and VR displays.

Project North Star isn’t a new consumer headset, nor will Leap Motion be selling a version to developers at this point. Instead, the company is releasing the necessary hardware specifications and software under an open source license next week. “We hope that these designs will inspire a new generation of experimental AR systems that will shift the conversation from what an AR system should look like, to what an AR experience should feel like,” the company writes.

The headset design uses two fast-refreshing 3.5-inch LCD displays with a resolution of 1600×1440 per eye. The displays reflect their light onto a visor that the user perceives as a transparent overlay. Leap Motion says this offers a field of view that’s 95 degrees high and 70 degrees wide, larger than most AR systems that exist today. The Leap Motion sensor fits above the eyes and tracks hand motion across a far wider field of view, around 180 degrees horizontal and vertical.
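
For a rough sense of what those figures imply for sharpness, here is a small back-of-envelope sketch converting them to pixels per degree. Mapping the 1600-pixel axis to the 70-degree horizontal field and the 1440-pixel axis to the 95-degree vertical field is an assumption on our part; the article does not state which axis is which.

```python
# Approximate angular resolution implied by the published North Star figures.
# Axis mapping (1600 px over ~70 deg, 1440 px over ~95 deg) is an assumption.
h_px, v_px = 1600, 1440
h_fov_deg, v_fov_deg = 70.0, 95.0

print(f"~{h_px / h_fov_deg:.0f} px/deg horizontal")  # ~23 px/deg
print(f"~{v_px / v_fov_deg:.0f} px/deg vertical")    # ~15 px/deg
```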

iFLY, a leading provider of indoor skydiving facilities, today launched its iFLY VR initiative, which combines the company’s indoor skydiving experience with immersive visuals powered by a Gear VR headset. I got to try the experience for myself at the company’s SF Bay location.

Now available at 28 locations in the US, the iFLY VR experience is an optional $20 add-on to the usual indoor flight experience offered by the company (which starts around $70). After training and getting a feel for stable non-VR flying, customers don a purpose-built helmet that incorporates a Gear VR headset. They can choose between several skydiving locations—like Dubai, Hawaii, or Switzerland—where iFLY has recorded real skydives specifically for use in the iFLY VR experience.

Trying the iFLY VR experience firsthand, I came away feeling like I had experienced the ultimate haptic simulation.

Over the past several decades, researchers have moved from using electric currents to manipulating light waves in the near-infrared range for telecommunications applications such as high-speed 5G networks, biosensors on a chip, and driverless cars. This research area, known as integrated photonics, is fast evolving and investigators are now exploring the shorter—visible—wavelength range to develop a broad variety of emerging applications. These include chip-scale LIDAR (light detection and ranging), AR/VR/MR (augmented/virtual/mixed reality) goggles, holographic displays, quantum information processing chips, and implantable optogenetic probes in the brain.

The one device critical to all these applications in the visible range is an optical phase modulator, which controls the phase of a light wave, similar to how the phase of radio waves is modulated in wireless computer networks. With a phase modulator, researchers can build an on-chip optical switch that channels light into different waveguide ports. With a large network of these optical switches, researchers could create sophisticated integrated optical systems that control light propagating on a tiny chip or light emission from the chip.
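
To make the switch mechanism concrete, here is a minimal sketch of how the modulator phase routes light, assuming an idealized 2x2 Mach-Zehnder interferometer built from two lossless 50/50 couplers with the phase modulator in one arm. The Mach-Zehnder topology and the lossless-coupler idealization are assumptions for illustration; the article only refers to an on-chip optical switch.

```python
import numpy as np

def mzi_output_powers(phase_rad: float) -> tuple[float, float]:
    """Output powers of an ideal 2x2 Mach-Zehnder switch for unit input power:
    (bar port, cross port). "Bar" means light stays in the input waveguide."""
    bar = np.sin(phase_rad / 2) ** 2
    cross = np.cos(phase_rad / 2) ** 2
    return bar, cross

for phi in (0.0, np.pi / 2, np.pi):
    bar, cross = mzi_output_powers(phi)
    print(f"phase = {phi:4.2f} rad  ->  bar: {bar:.2f}, cross: {cross:.2f}")
```

Tuning the phase from 0 to pi swings the light fully from the cross port to the bar port, which is the behavior a large network of such switches would exploit to steer light around a chip.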

But phase modulators in the visible range are very hard to make: there are no materials that are transparent enough in the visible spectrum while also providing large tunability, either through thermo-optical or electro-optical effects. Currently, the two most suitable materials are silicon nitride and lithium niobate. While both are highly transparent in the visible range, neither one provides very much tunability. Visible-spectrum phase modulators based on these materials are thus not only large but also power-hungry: the length of individual waveguide-based modulators ranges from hundreds of microns to several mm and a single modulator consumes tens of mW for phase tuning. Researchers trying to achieve large-scale integration—embedding thousands of devices on a single microchip—have, up to now, been stymied by these bulky, energy-consuming devices.
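
As a rough illustration of why those numbers block large-scale integration, the sketch below simply multiplies them out. The specific device count and per-device figures are placeholder assumptions picked from the ranges quoted above.

```python
# Back-of-envelope estimate using placeholder values from the quoted ranges:
# "thousands of devices", "tens of mW for phase tuning", "hundreds of microns
# to several mm" of length per modulator.
num_modulators = 1000        # devices on a single chip (assumed)
power_per_mod_mw = 20        # mW per modulator (assumed, "tens of mW")
length_per_mod_mm = 1.0      # mm per modulator (assumed)

total_power_w = num_modulators * power_per_mod_mw / 1000
total_length_m = num_modulators * length_per_mod_mm / 1000

print(f"Total tuning power:     ~{total_power_w:.0f} W")   # ~20 W on one chip
print(f"Total modulator length: ~{total_length_m:.1f} m")  # ~1 m of waveguide
```

Tens of watts of static tuning power and roughly a meter of cumulative waveguide length are impractical for a single microchip, which is the bottleneck the paragraph describes.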

As part of its recently announced rebranding, Facebook is doubling down on its vision of the metaverse, an immersive virtual-reality environment for gaming, work meetings, and socializing. In promotional materials, Mark Zuckerberg and his friends enter the metaverse via the company’s own Oculus headsets, and are transformed into cartoon-y animated torsos, often while arranged around a virtual boardroom.

According to Zuckerberg, the metaverse promises an at-work reality better than our own, with lush backdrops and infinite personal customization (as long as that customization stops at the waist for humanoid characters). It borrows elements from world-building games and environments like Second Life and Fortnite, and inspiration from science-fiction referents like Ready Player One and The Matrix; the insinuation is that working within the metaverse will be fun. (This despite the irony that all of these virtual worlds are positioned as dystopias by their creators.)


Working at the intersection of hardware and software engineering, researchers are developing new techniques for improving 3D displays for virtual and augmented reality technologies.

WIRED sat down with Timoni West to sift fantasy from reality and pin down what XR is actually good at. It may come as a surprise that a lot of it relies on collecting a lot of data. The following interview is a transcript of our conversation, lightly edited for clarity and length.

WIRED: So let’s start with sort of an ontological question. There’s been this idea that we’ll be in or go to the metaverse, or several metaverses, which tech companies posit will exist in VR or AR. Do you see VR and AR as being more of a tool or a destination?

Timoni West: That’s a great question. I would actually say neither. I see XR as one of the many different mediums you could choose to work in. For example, we actually have an AR mobile companion app [in beta] that allows you to scan a space and gray box it out, put down objects, automatically tag things. So I’m using AR to do the things that AR is best for. I’ll use VR to do the things that VR is best for, like presence, being able to meet together, sculpt, or do anything that’s, you know, sort of intrinsically 3D.

https://www.youtube.com/watch?v=X1TYqoR-qVA

We explore human enhancement and personal performance hacking with Matt Ward (@mattwardio), host of The Disruptors podcast, startup investor, adviser, and business innovation consultant. Matt and I thought it would be fun to do two episodes, one here on MIND & MACHINE and the other on The Disruptors, where we explore what we’ve learned, the ideas we’ve formed and our takeaways across all these different fields that we cover.

So with this episode here on MIND & MACHINE, we focus on human enhancement — technologies that extend lifespan and enhance human capability. Then we get into what Matt and I are currently doing to maximize our own performance — our ability to think more clearly and to live more energetic, vibrant lives… all of which is heavily informed by the amazing guests across the different fields we explore.

In the other part of this discussion, on The Disruptors, we look at another set of subjects, from space to AI to augmented and virtual reality. So I encourage you to check that out as well.

For the other part of the conversation on The Disruptors:
https://is.gd/mv1Vez
https://youtu.be/PtpwgTr4GSU

__________

MIND & MACHINE features interviews by August Bradley with bold thinkers and leaders in transformational technologies.

An innovator in early AR systems has a dire prediction: the metaverse could change the fabric of reality as we know it.

Louis Rosenberg, a computer scientist and developer of the first functional AR system at the Air Force Research Laboratory, penned an op-ed in Big Think this weekend warning that the metaverse — an immersive VR and AR world currently being developed by The Company Formerly Known as Facebook — could create what sounds like a real-life cyberpunk dystopia.

“I am concerned about the legitimate uses of AR by the powerful platform providers that will control the infrastructure,” Rosenberg wrote in the essay.

When comparing Meta — formerly Facebook — and Microsoft’s approaches to the metaverse, it’s clear Microsoft has a much more grounded and realistic vision. Although Meta currently leads in providing virtual reality (VR) devices through its ownership of what was previously called Oculus, Microsoft is adapting technologies that are already more widely used. The small, steady steps Microsoft is making today put it in a better position to be one of the metaverse’s future leaders. However, such a position comes with responsibilities, and Microsoft needs to be prepared to face them.

The metaverse is a virtual world where users can share experiences and interact in real time within simulated scenarios. To be clear, no one knows yet what it will end up looking like, what hardware it will use, or which companies will be the main players — these are still early days. However, what is certain is that VR will play a key enabling role; VR-related technologies such as simultaneous localization and mapping (SLAM), facial recognition, and motion tracking will be vital for developing metaverse-based use cases.


Virtual and augmented reality headsets are designed to place wearers directly into other environments, worlds, and experiences. While the technology is already popular among consumers for its immersive quality, there could be a future where the holographic displays look even more like real life. In their own pursuit of these better displays, the Stanford Computational Imaging Lab has combined their expertise in optics and artificial intelligence. Their most recent advances in this area are detailed in a paper published today (November 12, 2021) in Science Advances and work that will be presented at SIGGRAPH ASIA 2021 in December.

At its core, this research confronts the fact that current augmented and virtual reality displays only show 2D images to each of the viewer’s eyes, instead of 3D – or holographic – images like we see in the real world.

“They are not perceptually realistic,” explained Gordon Wetzstein, associate professor of electrical engineering and leader of the Stanford Computational Imaging Lab. Wetzstein and his colleagues are working to come up with solutions to bridge this gap between simulation and reality while creating displays that are more visually appealing and easier on the eyes.
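
For readers wondering what a holographic display computes that a conventional 2D display does not, here is a minimal, generic phase-retrieval sketch in the Gerchberg-Saxton style: it searches for a phase pattern whose propagated intensity approximates a target image. This is a textbook illustration only, not the Stanford group's method, and the simple Fourier-transform propagation model and parameters are assumptions.

```python
import numpy as np

def phase_only_hologram(target_amplitude: np.ndarray, iterations: int = 50) -> np.ndarray:
    """Generic Gerchberg-Saxton loop: estimate a phase-only pattern whose
    far-field intensity roughly matches the target image (illustrative only)."""
    phase = np.random.uniform(0.0, 2.0 * np.pi, target_amplitude.shape)
    for _ in range(iterations):
        slm_field = np.exp(1j * phase)                  # unit-amplitude field at the display
        image_field = np.fft.fft2(slm_field)            # crude far-field propagation model
        image_field = target_amplitude * np.exp(1j * np.angle(image_field))  # impose target
        phase = np.angle(np.fft.ifft2(image_field))     # back-propagate, keep phase only
    return phase

# Example: a 64x64 target with a bright square in the center.
target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0
hologram_phase = phase_only_hologram(target)
print(hologram_phase.shape)  # (64, 64) phase pattern to drive a phase-only display
```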