
https://youtube.com/watch?v=97FhauH1J58

Ready for some mind-blowing information…

“The past and the future exist together simultaneously in one geometric object.”

All time exists, all the time.

“Everything everywhere in one frozen moment of time and the past influences the future and the future influences the past in an endless feedback loop. Time is affecting all time, all the time. Every moment is co-creating every other moment both forward and backward in time.”


This is a well-done video that offers a theory of everything and a model that explains how our simulated reality is constructed and how it works. In this article, I’ve summarized the amazing ideas in this video with my own comments. Let’s get into some of the things discussed in “We Are Living In A Simulation – New Evidence!” from Real Spirit Dynamics.

The Future Creates the Past, then the Past Creates the Future

A higher-dimensional quasicrystal creates a 4D quasicrystal that then projects a 3D quasicrystal, which is the fundamental substructure of all reality. Quasicrystals, angles and light form these dimensional projections.
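As a rough illustration of the projection idea (this is the standard cut-and-project construction used to generate quasicrystals from higher-dimensional lattices, not anything specific from the video), here is a short Python sketch that builds a 1D Fibonacci quasicrystal by projecting points of the ordinary 2D integer lattice onto a line at the golden-ratio angle:

```python
import numpy as np

# Cut-and-project sketch: a 1D Fibonacci quasicrystal obtained by projecting
# 2D integer-lattice points onto a line of irrational (golden-ratio) slope.
# Illustrative only; the names and parameters here are not from the video.

phi = (1 + np.sqrt(5)) / 2                 # golden ratio fixes the cut angle
angle = np.arctan(1 / phi)
e_par = np.array([np.cos(angle), np.sin(angle)])    # "physical" direction
e_perp = np.array([-np.sin(angle), np.cos(angle)])  # "internal" direction

# Acceptance window: keep only lattice points whose perpendicular component
# falls inside the projection of a unit cell (the "cut" through the lattice).
window = abs(e_perp[0]) + abs(e_perp[1])

points = []
for i in range(-30, 31):
    for j in range(-30, 31):
        p = np.array([i, j], dtype=float)
        if 0 <= p @ e_perp < window:
            points.append(p @ e_par)       # project onto the physical line

points.sort()
gaps = np.round(np.diff(points), 6)
# Exactly two distinct spacings appear, arranged non-periodically in the
# Fibonacci pattern: the signature of a quasicrystal.
print(sorted(set(gaps)))
```

The two spacings printed at the end stand in the golden ratio and never repeat periodically, which is what distinguishes a quasicrystal from an ordinary crystal lattice.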

Read more

Almost two years after the acquisition by Intel, the deep learning chip architecture from startup Nervana Systems will finally move from its “Lake Crest” codename status to an actual product.

In that time, Nvidia, which owns the deep learning training market by a long shot, has had time to firm up its commitment to this expanding (if not overhyped in terms of overall industry dollar figures) market with new deep learning-tuned GPUs and appliances on the horizon, as well as software tweaks to make training at scale more robust. In other words, even with solid technology at a reasonable price point, it will take a herculean effort for Intel to bring Nervana to the fore of the training market and to push its other products for inference at scale along with that current, but it is an effort Intel seems willing to invest in given its aggressive roadmap for the Nervana-based lineup.

The difference now is that at least we have some insight into how (and by how much) this architecture differs from GPUs, where it might carve out a performance advantage and, more certainly, a power-efficiency one.

Read more

Lecture by Professor Oussama Khatib for Introduction to Robotics (CS223A) in the Stanford Computer Science Department.

Lecture 1 | Introduction to Robotics

In the first lecture of the quarter, Professor Khatib provides an overview of the course. CS223A is an introduction to robotics which covers topics such as Spatial Descriptions, Forward Kinematics, Inverse Kinematics, Jacobians, Dynamics, Motion Planning and Trajectory Generation, Position and Force Control, and Manipulator Design.
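As a small, self-contained taste of the Forward Kinematics and Jacobians topics in that list, here is a Python sketch for a planar two-link arm; the link lengths, angles, and function names are illustrative choices rather than material from the course itself:

```python
import numpy as np

# Forward kinematics and Jacobian of a planar 2-link arm (illustrative values).

def forward_kinematics(theta1, theta2, l1=1.0, l2=0.7):
    """End-effector (x, y) position for joint angles theta1, theta2 (radians)
    and link lengths l1, l2."""
    x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    return x, y

def jacobian(theta1, theta2, l1=1.0, l2=0.7):
    """2x2 Jacobian mapping joint velocities (dtheta1, dtheta2) to
    end-effector velocities (dx, dy)."""
    return np.array([
        [-l1 * np.sin(theta1) - l2 * np.sin(theta1 + theta2), -l2 * np.sin(theta1 + theta2)],
        [ l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2),  l2 * np.cos(theta1 + theta2)],
    ])

# Example: end-effector pose and instantaneous velocity map for one configuration.
print(forward_kinematics(np.pi / 4, np.pi / 6))
print(jacobian(np.pi / 4, np.pi / 6))
```

For an arm this simple the Jacobian can be written in closed form by differentiating the position equations; the course develops the systematic versions of these ideas for general manipulators.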

Read more

Hundreds of millions of years ago, at a time when back-boned animals were just starting to crawl onto land, one such creature became infected by a virus. It was a retrovirus, capable of smuggling its genes into the DNA of its host. And as sometimes happens, those genes stayed put. They were passed on to the animal’s children and grandchildren. And as these viral genes cascaded through the generations, they changed, transforming from mere stowaways into important parts of their host’s biology.

One such gene is called Arc. It’s active in neurons, and plays a vital role in the brain. A mouse that’s born without Arc can’t learn or form new long-term memories. If it finds some cheese in a maze, it will have completely forgotten the right route the next day. “They can’t seem to respond or adapt to changes in their environment,” says Jason Shepherd from the University of Utah, who has been studying Arc for years. “Arc is really key to transducing the information from those experiences into changes in the brain.”

Read more