
Neuralink has received a wide range of reviews, with some scientists hailing it as the next big breakthrough and others claiming that the company is making promises it cannot deliver.


Here at Futurity, we scour the globe for all the latest tech releases, news, and info so you don’t have to! We cover everything from cryptocurrency to robotics, small startups to multinational corporations like Tesla, and figures from Jeff Bezos to Elon Musk, and everything in between.

We live in a fast-changing and unpredictable world. In part, this is because we have gained enormous technological powers while our brains remain much the same as in pre-technological times. What do we teach children in this world? How can we help them reflect on their thinking, get wiser in using these new technological powers, develop a growth mindset and resilience, and see the big picture and the interconnections within complex systems (be that our body, an ecological system, or the whole Universe)? We are trying to address these questions by teaching space science, AI and cognitive science, and existential risks and opportunities to pre-teens. Over three years, the kids get the opportunity to talk to some of the most prominent thinkers in the field, reflect on deep questions, build connections with specialists from many fields, from space law to ecology to virology, and present their work at conferences. Check out our classes:


Art of Inquiry is an online science school for young explorers. We teach inquiry, thinking skills, and cutting-edge science. Our speakers and consultants are distinguished experts from academia and the AI and space industries.

TuSimple has stated that its “Driver Out” program is the first vital step in scaling its autonomous trucking operations on the TuSimple Autonomous Freight Network (AFN).

A robust AFN is now one step closer, following a successful 80-mile, driverless run in Arizona last week.

TuSimple announced its successful driverless ride via a recent press release, along with YouTube footage of the entire one-hour twenty-minute drive.

Updating the goal of the Chang’e 8 mission.

Chinese space authorities told the South China Morning Post (SCMP) that the unmanned lunar station, jointly built with Russia, will be completed around 2027.

The new timeline, eight years earlier than previously scheduled, is intended to help China get ahead of the U.S. in the space race.

China’s Chang’e 8 moon landing mission originally aimed to carry out scientific studies such as 3D printing with lunar dust. However, Wu Yanhua, Deputy Director of the China National Space Administration (CNSA), announced that the administration’s new target is to put an unmanned research station on the lunar surface, a goal previously scheduled for 2035.

Wu, while not disclosing the details behind the decision, underlined that the mission was to “build a solid foundation for the peaceful use of lunar resources”.

China’s lunar program has progressed steadily and at its own pace for years, with Chinese space authorities repeatedly claiming that the country was not interested in a space race like the one during the Cold War.


More headroom, more legroom, more room in general.

A new design for an autonomous taxi without a steering wheel or pedals has been unveiled by Waymo. The company, which has partnered with Zeekr, a brand of the Chinese automaker Geely, announced this week its intention to build a minivan filled with passenger seats and little else.

The minivan will be all-electric and self-driving, and is being designed and developed in Gothenburg, Sweden. According to the US-based Waymo, the robot minivan will be added to its existing fleet “in the years to come.”

The announcement came via a blog post where Waymo gave a hint as to some of Zeekr’s planned features. According to Waymo, the Zeekr will have “a flat floor for more accessible entry, easy ingress, and egress thanks to a B-pillarless design, low step-in height, generous head and legroom, and fully adjustable seats.”


In the 2002 science fiction blockbuster film “Minority Report,” Tom Cruise’s character John Anderton uses his hands, sheathed in special gloves, to interface with his wall-sized transparent computer screen. The computer recognizes his gestures to enlarge, zoom in, and swipe away. Although this futuristic vision for human-computer interaction is now 20 years old, today’s humans still interface with computers using a mouse, keyboard, remote control, or small touch screen. However, researchers have devoted much effort to unlocking more natural forms of communication that don’t require contact between the user and the device. Voice commands are a prominent example; they have found their way into modern smartphones and virtual assistants, letting us interact with and control devices through speech.

Hand gestures constitute another important mode of human communication that could be adopted for human-computer interactions. Recent progress in camera systems, image analysis and machine learning have made optical-based gesture recognition a more attractive option in most contexts than approaches relying on wearable sensors or data gloves, as used by Anderton in “Minority Report.” However, current methods are hindered by a variety of limitations, including high computational complexity, low speed, poor accuracy, or a low number of recognizable gestures. To tackle these issues, a team led by Zhiyi Yu of Sun Yat-sen University, China, recently developed a new hand gesture recognition algorithm that strikes a good balance between complexity, accuracy, and applicability.
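To make the trade-off between complexity and accuracy concrete, here is a minimal illustrative sketch (not the Sun Yat-sen team’s actual algorithm) of the simplest end of the spectrum: a nearest-centroid classifier over hand-landmark feature vectors. In a real optical pipeline the landmarks would come from a camera and a hand-tracking model; the gesture labels and coordinates below are hypothetical.

```python
# Illustrative sketch of low-complexity gesture recognition:
# classify a hand pose by finding the nearest gesture "template"
# (the centroid of that gesture's training feature vectors).
import math


def centroid(samples):
    """Element-wise mean of equal-length feature vectors."""
    n = len(samples)
    return [sum(v[i] for v in samples) / n for i in range(len(samples[0]))]


def classify(features, templates):
    """Return the gesture label whose centroid is closest (Euclidean distance)."""
    best_label, best_dist = None, math.inf
    for label, cent in templates.items():
        d = math.dist(features, cent)  # Euclidean distance (Python 3.8+)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label


# Hypothetical training samples: flattened (x, y) fingertip positions,
# normalized to [0, 1]. A real system would use many more landmarks.
training = {
    "open_palm": [[0.1, 0.9, 0.3, 1.0], [0.2, 0.8, 0.4, 0.9]],
    "fist":      [[0.1, 0.2, 0.2, 0.1], [0.0, 0.1, 0.1, 0.2]],
}
templates = {gesture: centroid(samples) for gesture, samples in training.items()}

print(classify([0.15, 0.85, 0.35, 0.95], templates))  # prints "open_palm"
```

This kind of classifier is fast and easy to implement but supports only a small set of well-separated gestures; richer methods (e.g., neural networks over image frames) recognize more gestures at higher computational cost, which is exactly the balance the researchers aim to strike.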

Engineers from the National University of Singapore (NUS) have built a robotics system they say can grip a variety of objects, from soft and delicate to bulky and heavy. Designed to be configurable, the robotic hand is touted to address the needs of sectors such as vertical farming, food assembly, and fast-moving consumer goods packaging, while delivering a 23% improvement in efficiency.

These industries are increasingly automating their operations but still require manual handling for some processes, according to NUS; the human hand’s natural dexterity remains necessary for those tasks.

Raye Yeow, associate professor at the NUS Advanced Robotics Centre and Department of Biomedical Engineering, said: “An object’s shape, texture, weight, and size affect how we choose to grip them. This is one of the main reasons why many industries still heavily rely on human labour to package and handle delicate items.”

Toyota’s cleaning robot has demonstrated new skills, revealing an ability to detect clear objects and snap perfect selfies.


The challenge: While seeing a reflection in a toaster isn’t going to stop us from knowing that it’s a toaster, robots can be easily confused by reflections, as well as transparent objects, such as glasses and windows.

Our houses are full of those tricky objects, so training robots to see them for what they are is key to bringing domestic bots into our homes.

Toyota’s cleaning robot: To ensure Toyota’s cleaning robot wouldn’t be fooled by its own reflection, the company’s researchers developed a training method that helps it “perceive the 3D geometry of the scene while also detecting objects and surfaces,” according to a blog post.
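One reason glass and mirrors are so troublesome is that depth cameras often return missing or zero readings on transparent surfaces, which a naive robot interprets as free space. The sketch below (an illustrative heuristic, not Toyota Research Institute’s method) flags such invalid-depth pixels so they can be treated as “possibly glass” rather than empty air; the depth values are hypothetical.

```python
# Illustrative heuristic: find pixels where a depth camera returned no
# reading (encoded here as 0.0), which often happens on glass or mirrors.
# A robot planner could treat these regions as obstacles until confirmed.
def flag_suspect_pixels(depth, invalid=0.0):
    """Return (row, col) positions whose depth reading is invalid."""
    return [
        (r, c)
        for r, row in enumerate(depth)
        for c, d in enumerate(row)
        if d == invalid
    ]


# Hypothetical 2x3 depth map in meters; 0.0 means "no return".
depth_map = [
    [1.2, 1.2, 0.0],
    [1.1, 0.0, 0.0],
]
print(flag_suspect_pixels(depth_map))  # prints [(0, 2), (1, 1), (1, 2)]
```

Toyota’s actual approach is learned rather than rule-based, but the same underlying problem — depth sensors failing on transparent and reflective surfaces — is what the training method is designed to overcome.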