
I think Musk's purpose in doing this is to prioritize full self-driving over partial self-driving features.

“Humans drive with eyes and biological neural nets,” Musk said in October. “So [it] makes sense that cameras and silicon neural nets are [the] only way to achieve generalized solution to self-driving.”

Moreover, he’s reportedly implementing that philosophy at Tesla.

Musk has repeatedly instructed the company’s Autopilot team, which works on self-driving car tech, to ditch radar and use only cameras instead, the New York Times reported on Monday.

No multi-billion-dollar acquisitions occurred in the world of AI chips in 2021.

Instead, the leading AI chip startups all raised rounds at multi-billion-dollar valuations, making clear that they aspire not to get acquired but to become large standalone public companies.

In our predictions last December, we identified three startups in particular as likely acquisition targets. Of these: SambaNova raised a $670 million Series D at a $5 billion valuation in April; Cerebras raised a $250 million Series F at a $4 billion valuation last month; and Graphcore raised $220 million at a valuation close to $3 billion amid rumors of an upcoming IPO.

Other top AI chip startups like Groq and Untether AI also raised big funding rounds in 2021.

As of the beginning of this year, no autonomous vehicle company had ever gone public. 2021 was the year that all changed.

TuSimple, Embark and Au.

To handle this, researchers have trained neural networks on regions where we have more complete weather data. Once trained, the system can be fed partial data and infer what the rest is likely to be. For example, the trained system can create a likely weather radar map using things like satellite cloud images and data on lightning strikes.

This is exactly the sort of thing that neural networks do well with: recognizing patterns and inferring correlations.
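The general idea of inferring missing data from correlated observations can be sketched with a toy example. The snippet below is purely illustrative — the variable names, the linear model, and all numbers are invented stand-ins, not the actual weather system — but it shows the same recipe: fit on complete records, then infer the missing field where only partial data exists.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Complete" training records: two observed proxies plus the target field.
satellite = rng.uniform(0, 1, 500)   # e.g. cloud brightness (made up)
lightning = rng.uniform(0, 1, 500)   # e.g. strike rate (made up)
radar = 2.0 * satellite + 3.0 * lightning + rng.normal(0, 0.01, 500)

# Fit a simple model (least squares) on the complete data.
X = np.column_stack([satellite, lightning, np.ones_like(satellite)])
coef, *_ = np.linalg.lstsq(X, radar, rcond=None)

# "Partial" record: proxies observed, radar value missing -> infer it.
inferred = coef @ np.array([0.5, 0.5, 1.0])
print(round(inferred, 2))  # close to 2.0*0.5 + 3.0*0.5 = 2.5
```

A real system would use a deep network and gridded fields rather than a linear fit, but the train-on-complete, infer-on-partial pattern is the same.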

What drew the Rigetti team’s attention is the fact that neural networks also map well onto quantum processors. In a typical neural network, a layer of “neurons” performs operations before forwarding its results to the next layer. The network “learns” by altering the strength of the connections among units in different layers. On a quantum processor, each qubit can perform the equivalent of an operation. The qubits also share connections among themselves, and the strength of the connection can be adjusted. So, it’s possible to implement and train a neural network on a quantum processor.
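The analogy between adjustable qubit connections and trainable network weights can be simulated classically in a few lines. The sketch below is not Rigetti's implementation — it is a one-qubit toy in which a rotation angle plays the role of a trainable connection strength, adjusted by gradient descent until the measured expectation matches a target.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation gate as a 2x2 matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def expectation_z(theta):
    """Prepare |0>, apply RY(theta), return the <Z> expectation."""
    state = ry(theta) @ np.array([1.0, 0.0])
    return state[0] ** 2 - state[1] ** 2   # equals cos(theta)

# Train the angle so the measured <Z> matches a target, the way a
# neural network adjusts connection weights to reduce its error.
target, theta, lr = -1.0, 0.1, 0.1
for _ in range(500):
    err = expectation_z(theta) - target
    grad = -2.0 * err * np.sin(theta)      # d/dtheta of err**2
    theta -= lr * grad

print(expectation_z(theta))  # approaches the target of -1
```

On real hardware the expectation comes from repeated measurements and the gradient from techniques like parameter shifts, but the train-the-gate-angles loop is the core of a variational quantum circuit.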

Robots are already in space. From landers on the moon to rovers on Mars and more, robots are the perfect candidates for space exploration: they can bear extreme environments while consistently repeating the same tasks in exactly the same way without tiring. Like robots on Earth, they can accomplish both dangerous and mundane jobs, from space walks to polishing a spacecraft’s surface. With space missions increasing in number and expanding in scientific scope, requiring more equipment, there’s a need for a lightweight robotic arm that can manipulate in environments difficult for humans.

However, the control schemes that can move such arms on Earth, where the planes of operation are flat, do not translate to space, where the environment is unpredictable and changeable. To address this issue, researchers in Harbin Institute of Technology’s School of Mechanical Engineering and Automation have developed a robotic arm weighing 9.23 kilograms—about the weight of a one-year-old child—capable of carrying almost a quarter of its own weight, with the ability to adjust its position and speed in real time based on its environment.

They published their results on Sept. 28 in Space: Science & Technology.

In a new paper published in Space: Science & Technology, a team of researchers has created a new lightweight robotic arm with precision controls.

As missions in space increase in scope and variety, so too will the tools necessary to accomplish them. Robots are already used throughout space, but robotic arms used on Earth do not translate well to space. A flat plane relative to the ground enables Earth-bound robotic arms to articulate freely in a three-dimensional coordinate grid with relatively simple programming. However, with constantly changing environments in space, a robotic arm would struggle to orient itself correctly.
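The contrast between replaying a fixed trajectory and adjusting in real time can be illustrated with a minimal feedback loop. The sketch below is not the Harbin team's controller — the gain, speed limit, and one-dimensional setup are invented for illustration — but it shows the basic idea: re-read the target every step and derive the commanded speed from the current error, so the motion adapts when the environment shifts.

```python
def track(targets, gain=0.5, max_speed=0.2):
    """Proportional controller: follow a drifting 1-D target."""
    position, path = 0.0, []
    for target in targets:            # target may move between steps
        error = target - position
        # Commanded speed adapts in real time, clipped to a safe limit.
        speed = max(-max_speed, min(max_speed, gain * error))
        position += speed
        path.append(position)
    return path

# A target that shifts mid-motion, as an unpredictable environment might.
targets = [1.0] * 20 + [0.5] * 20
path = track(targets)
print(round(path[-1], 3))  # settles near the moved target, 0.5
```

A preplanned trajectory toward the original target would end up in the wrong place once the target moved; the feedback version converges to wherever the target currently is.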

Inspired by the mastery of artificial intelligence (AI) over games like Go and Super Mario, scientists at the National Synchrotron Light Source II (NSLS-II) used the same approach to train an AI agent — an autonomous computational program that observes and acts — to conduct research experiments at superhuman levels. The Brookhaven team published their findings in the journal Machine Learning: Science and Technology and implemented the AI agent as part of the research capabilities at NSLS-II.

As a U.S. Department of Energy (DOE) Office of Science User Facility located at DOE’s Brookhaven National Laboratory, NSLS-II enables scientific studies by more than 2000 researchers each year, offering access to the facility’s ultrabright x-rays. Scientists from all over the world come to the facility to advance their research in areas such as batteries, microelectronics, and drug development. However, time at NSLS-II’s experimental stations — called beamlines — is hard to get because nearly three times as many researchers would like to use them as any one station can handle in a day — despite the facility’s 24/7 operations.

“Since time at our facility is a precious resource, it is our responsibility to be good stewards of that; this means we need to find ways to use this resource more efficiently so that we can enable more science,” said Daniel Olds, beamline scientist at NSLS-II and corresponding author of the study. “One bottleneck is us, the humans who are measuring the samples. We come up with an initial strategy, but adjust it on the fly during the measurement to ensure everything is running smoothly. But we can’t watch the measurement all the time because we also need to eat, sleep and do more than just run the experiment.”
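The game-playing recipe behind such agents is trial-and-error reinforcement learning: try actions, observe rewards, and shift toward whatever pays off. The toy sketch below is not the NSLS-II team's actual agent — the actions and payoffs are invented — but it shows the core loop on a made-up choice between continuing a measurement and moving to the next sample.

```python
import random

random.seed(1)

# Two candidate actions with hidden average payoffs, here imagined as
# "information gained per unit of precious beamtime" (made-up numbers).
true_payoff = {"keep_measuring": 0.2, "next_sample": 0.8}

q = {a: 0.0 for a in true_payoff}   # the agent's learned value estimates
alpha, epsilon = 0.1, 0.1           # learning rate, exploration rate

for step in range(2000):
    if random.random() < epsilon:   # occasionally explore at random
        action = random.choice(list(q))
    else:                           # otherwise exploit the best estimate
        action = max(q, key=q.get)
    reward = random.gauss(true_payoff[action], 0.05)  # noisy feedback
    q[action] += alpha * (reward - q[action])  # incremental value update

print(max(q, key=q.get))  # the agent learns to prefer "next_sample"
```

The published agent faces a far richer state space than this two-action toy, but the principle is the same: the strategy is learned from the experiment's own feedback rather than fixed in advance, freeing the humans from watching every measurement.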