
Varjo’s XR-3 headset has perhaps the best passthrough view of any MR headset on the market, thanks to color cameras that offer fairly high resolution and a wide field of view. But rather than just using the passthrough view for AR (bringing virtual objects into the real world), Varjo has developed a new tool to do the reverse (bringing real objects into the virtual world).

At AWE 2021 this week I got my first glimpse of ‘Varjo Lab Tools’, a soon-to-be-released software suite that will work with the company’s XR-3 mixed reality headset. The tool allows users to trace arbitrary shapes that then become windows into the real world, while the rest of the view remains virtual.

Elon Musk’s revolutionary company Neuralink plans to insert computer chips into people’s brains, but what if there’s a safer and even more performant way of merging humans and machines in the future?
Enter DARPA’s plan to foster the emergence of non-invasive brain-computer interfaces, which led the organization Battelle to create a kind of neural dust to interface with our brains, possibly the first step toward having nanobots inside the human body in the future.

How will Neuralink deal with this potential rival and its cutting-edge technology? Its possibilities in full-dive virtual reality games and medical applications, in merging humans with artificial intelligence, and in scaling all around the world are enormous.

If you enjoyed this video, please consider rating it and subscribing to our channel for more frequent uploads. Thank you!

#neuralink #ai #elonmusk

Credits:

https://www.youtube.com/watch?v=PhzDIABahyc
https://www.bensound.com/

Nvidia’s Omniverse, billed as a “metaverse for engineers,” has grown to more than 700 companies and 70,000 individual creators working on digital-twin projects that replicate real-world environments in virtual space.

Omniverse is Nvidia’s simulation and collaboration platform delivering the foundation of the metaverse: the universe of interconnected virtual worlds, as imagined in novels such as Snow Crash and Ready Player One. Omniverse is now moving from beta to general availability, and it has been extended to software ecosystems that put it within reach of 40 million 3D designers.

And today, during Nvidia CEO Jensen Huang’s keynote at the Nvidia GTC online conference, Nvidia said it has added features such as Omniverse Replicator, which generates synthetic data to make it easier to train deep learning neural networks, and Omniverse Avatar, which makes it simple to create virtual characters that can be used in the Omniverse or other worlds.
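Replicator’s core idea, synthetic training data, is easy to sketch. The toy Python below is purely illustrative (it is not Replicator’s actual API, and all names in it are hypothetical); it shows why synthetic data is attractive: because the program renders the scene itself, every sample comes with an exact ground-truth label for free.

    import numpy as np

    rng = np.random.default_rng(0)

    def synth_sample(size=64):
        # Domain-randomized stand-in for a rendered scene: a bright
        # rectangle ("object") at a random position and size on a
        # noisy background.
        img = rng.normal(0.2, 0.05, (size, size))
        w, h = rng.integers(8, 24, size=2)
        x = rng.integers(0, size - w)
        y = rng.integers(0, size - h)
        img[y:y + h, x:x + w] += rng.uniform(0.5, 0.8)
        # The exact bounding box is known by construction.
        return img, (int(x), int(y), int(w), int(h))

    # A labeled dataset with no human annotation required.
    dataset = [synth_sample() for _ in range(1000)]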

In October, French startup Lynx launched a Kickstarter campaign for Lynx R-1, a standalone MR headset capable of both VR and passthrough AR. Starting at €530 (or $500 if you’re not subject to European sales tax), the headset attracted a strong response from backers, passing its initial funding goal in under 15 hours and going on to garner over $800,000 throughout the month-long campaign.

Update (November 10th, 2021): The Lynx R-1 Kickstarter is now over, having attracted €725,281 (~$835,000) from 1,216 backers. In the final hours the campaign managed to pass its first stretch goal at $700,000: a free facial interface pad.

If you missed out, the company is now offering direct preorders for both its Standard Edition for $600 and Enterprise Edition for $1,100. It’s also selling a few accessories including compatible 6DOF controllers, facial interfaces, and a travel case.

NVIDIA has launched a follow-up to the Jetson AGX Xavier, its $1,100 AI brain for robots that it released back in 2018. The new module, called the Jetson AGX Orin, has six times the processing power of Xavier even though it has the same form factor and can still fit in the palm of one’s hand. NVIDIA designed Orin to be an “energy-efficient AI supercomputer” meant for use in robotics, autonomous and medical devices, as well as edge AI applications that may seem impossible at the moment.

The chipmaker says Orin is capable of 200 trillion operations per second. It’s built on an NVIDIA Ampere architecture GPU, features Arm Cortex-A78AE CPUs, and comes with next-gen deep learning and vision accelerators, giving it the ability to run multiple AI applications. Orin will give users access to the company’s software and tools, including the NVIDIA Isaac Sim scalable robotics simulation application, which enables photorealistic, physically accurate virtual environments where developers can test and manage their AI-powered robots. For users in the healthcare industry, there’s NVIDIA Clara for AI-powered imaging and genomics. And for autonomous vehicle developers, there’s NVIDIA Drive.
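For a rough sense of what an edge perception workload on a module like this looks like, here is a minimal PyTorch sketch (assuming a Jetson-style Linux device with PyTorch and CUDA support installed; the model choice is arbitrary, not anything NVIDIA prescribes):

    import torch
    from torchvision import models

    # Run on the module's GPU when available, else fall back to CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # A small vision model standing in for a real robotics perception net.
    model = models.mobilenet_v3_small(weights=None).to(device).eval()

    # Dummy 224x224 RGB frame in place of a camera capture.
    frame = torch.rand(1, 3, 224, 224, device=device)

    with torch.no_grad():
        logits = model(frame)

    print("predicted class:", int(logits.argmax(dim=1)))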

The company has yet to reveal what the Orin will cost, but it intends to make the Jetson AGX Orin module and developer kit available in the first quarter of 2022. Those interested can register to be notified about its availability on NVIDIA’s website. The company will also talk about Orin at NVIDIA GTC, which will take place from November 8th through 11th.

Have you ever seen the popular movie The Matrix? In it, the main character Neo realizes that he and everyone else he has ever known have been living in a computer-simulated reality. But even after taking the red pill and waking up from his virtual world, how can he be sure that this new reality is the real one? Could this new reality also be a simulation? In fact, how can anyone tell the difference between a simulated reality and a non-simulated one? The short answer is: we cannot. Today we are looking at the simulation hypothesis, which suggests that we all might be living in a simulation designed by an advanced civilization with computing power far superior to ours.

The simulation hypothesis was popularized by Nick Bostrom, a philosopher at the University of Oxford, in 2003. He proposed that members of an advanced civilization with enormous computing power may run simulations of their ancestors, perhaps to learn about their culture and history. If this is the case, he reasoned, then they may have run many simulations, making the vast majority of minds simulated rather than original. So there is a high chance that you, and everyone you know, might be a simulation. Don’t buy it? There is more!

According to Elon Musk, games just a few decades ago, like Pong, consisted of only two rectangles and a dot, whereas today’s games are highly realistic thanks to 3D modeling and are only improving further. With virtual reality and other advancements, it seems likely that within a few thousand years, if we don’t go extinct by then, we will be able to simulate every detail of our minds and bodies very accurately. At that point games will become indistinguishable from reality, and there will be an enormous number of them. And if this is the case, he argues, “then the odds that we are in base reality are 1 in billions.”
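One way to make this counting argument explicit (a sketch of the arithmetic, not a formulation from Musk or Bostrom themselves): if for every base reality there are N indistinguishable simulated realities, each hosting roughly as many minds, then a randomly chosen mind should assign

\[
P(\text{base reality}) = \frac{1}{N + 1},
\]

which for N on the order of billions works out to odds of about one in billions.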

There are other reasons to think we might be in a simulation. For example, the more we learn about the universe, the more it appears to be based on mathematical laws. Max Tegmark, a cosmologist at MIT, argues that our universe is exactly like a computer game, defined by mathematical laws. For him, we may be just characters in a computer game discovering the rules of our own universe.

With our current understanding of the universe, it seems impossible to simulate the entire universe, given the potentially infinite number of things within it. But would we even need to? All we would need to simulate are the minds occupying the simulated reality and their immediate surroundings. For example, when playing a game, new environments render as the player approaches them. Those environments don’t need to exist before the character approaches them, and skipping them saves a great deal of computing power, especially in a simulation as big as our universe. So it could be argued that distant galaxies, atoms, and anything we are not actively observing simply do not exist, rendering into existence only once someone starts to observe them.
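This is essentially how open-world game engines already work. A minimal Python sketch of the idea (purely illustrative; real engines stream and cache chunks asynchronously):

    # World chunks are generated lazily: a chunk comes into existence
    # only once an observer gets close enough to "observe" it.
    CHUNK_SIZE = 16   # world units per chunk
    LOAD_RADIUS = 2   # chunks materialized around the player

    world = {}  # (cx, cy) -> chunk contents, created on demand

    def generate_chunk(cx, cy):
        # Stand-in for expensive procedural generation.
        return f"terrain@({cx},{cy})"

    def update_visible(player_x, player_y):
        # Materialize only the chunks near the player.
        pcx = int(player_x // CHUNK_SIZE)
        pcy = int(player_y // CHUNK_SIZE)
        for cx in range(pcx - LOAD_RADIUS, pcx + LOAD_RADIUS + 1):
            for cy in range(pcy - LOAD_RADIUS, pcy + LOAD_RADIUS + 1):
                if (cx, cy) not in world:
                    world[(cx, cy)] = generate_chunk(cx, cy)

    update_visible(100.0, 40.0)
    print(len(world))  # only a 5x5 neighborhood exists so far: 25 chunks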

On the StarTalk podcast, astrophysicist Neil deGrasse Tyson and comedian Chuck Nice discussed the simulation hypothesis. Nice suggested that maybe the speed of light has a finite limit because, if it didn’t, we would be able to reach other galaxies very quickly. Tyson was struck by this suggestion and added that perhaps the programmer put in this limit to make sure we cannot get to faraway places before the programmer has had time to program them.

Apple is looking into changing how you view AR (augmented reality) altogether…literally. Instead of projecting an image onto a lens viewed by someone wearing an AR headset or glasses, Apple envisions beaming the image directly onto the user’s eyeball itself.

Apple recently unveiled its upcoming lineup of new products. What it did not showcase, however, was revealed in a recent patent: Apple is researching how it can change how we see AR, and the future of its “Apple Glass” product, if one ever comes to exist. The patent describes how Apple intends to move away from the traditional approach of projecting an image onto a lens and instead project the image directly onto the wearer’s retina, using micro projectors.

The issue Apple is trying to avoid is the nausea and headaches some people experience while viewing AR and VR (virtual reality). The patent identifies the issue as “accommodation-convergence mismatch,” which causes eyestrain for some. Apple hopes that by using its “Direct Retinal Projector” it can alleviate those symptoms and make AR and VR accessible to more users.
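The mismatch is commonly quantified in diopters (inverse meters): the eyes converge on a virtual object’s apparent distance while the lenses must focus on the display’s fixed focal plane, and the conflict is the difference between the two (a standard formulation from the vision-science literature, not a formula from Apple’s patent):

\[
\text{conflict (D)} = \left| \frac{1}{d_{\text{vergence}}} - \frac{1}{d_{\text{focal}}} \right|
\]

For example, a headset with a 2 m focal plane showing an object that appears 0.5 m away produces |1/0.5 - 1/2| = 1.5 D of conflict. A projector that draws the image directly on the retina would, in principle, keep imagery in focus regardless of where the eyes converge, driving this value toward zero.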

With VR, they’ve got data about 100 per cent of your experience: how you saw it, where you looked. The next generation of Facebook’s VR headset is going to have eye tracking.

This is probably the most invasive surveillance technology we’re going to bring into our homes in the next decade.

Facebook’s pivot was met with plenty of scepticism, with critics saying the timing points to a cynical rebrand designed to distance the company from its rolling scandals. Others have argued the metaverse already exists as a graveyard strewn with ideas like Google Glass smart glasses, which never caught on. But with Zuckerberg pledging to invest at least $US10 billion this year on metaverse development and proposing to hire 10,000 workers across the European Union over the next five years, there is a looming question for policymakers about how this ambition can or should be regulated.

The rise of real, useful nanobots is being enabled by the rapidly advancing miniaturization of robotics and microchips at companies such as TSMC, Intel, and Samsung. These nanobots are soon going to enable things such as full-dive virtual reality, healing diseases such as cancer, and potentially even increasing longevity up to 200 years. These tiny computers/robots will enter our bloodstream and cross the blood-brain barrier to read from and write to the brain, similar to how brain-computer interfaces such as Neuralink work today. The future of technology is looking really exciting.

If you enjoyed this video, please consider rating it and subscribing to our channel for more frequent uploads. Thank you!

TIMESTAMPS:
00:00 Have we reached the Nanobot Era?
02:51 The Applications of Nanobots.
04:26 All the types of BCIs.
06:44 So, when will there be Nanobots?
09:13 Last Words.

#nanobots #ai #nanotechnology