
Nio’s soon-to-arrive ET7 is practically tailor-made to challenge Tesla’s Model S, and now the company appears to have a (partial) answer to the Model 3. Electrek says Nio has introduced the ET5, a more affordable “mid-size” electric sedan. It starts at RMB 328,000 (about $51,450), or well under the roughly $70,000 of the ET7, but offers similarly grandiose range figures. Nio claims the base 75kWh battery offers over 341 miles of range using China’s test cycle, while the highest-end 150kWh “Ultralong Range” pack is supposedly good for more than 620 miles. You’ll likely pay significantly more for the privilege and may not see that range in real life, but the numbers could still tempt you away from higher-end Model 3s if long-distance driving is crucial.
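As a quick back-of-the-envelope check on those claims, the quoted pack sizes and CLTC ranges imply an efficiency of roughly 4.1 to 4.5 miles per kWh. A small sketch (using only the figures above, which are Nio's own numbers, not measured real-world range):

```python
# Implied efficiency from Nio's claimed CLTC figures.
# These are the company's numbers, not measured real-world range.
packs = {"75 kWh base": (75, 341), "150 kWh Ultralong Range": (150, 620)}
for name, (kwh, miles) in packs.items():
    print(f"{name}: {miles / kwh:.2f} mi/kWh")
```

The optimistic CLTC cycle inflates both figures relative to EPA-style testing, which is one reason real-world range is likely to land lower.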

You can expect the usual heapings of technology. The ET5 will have built-in support for autonomous driving features as they’re approved, and drivers get a “digital cockpit” thanks to Nreal-developed augmented reality glasses that can project a virtual screen equivalent to 201 inches at a 20-foot viewing distance. Nio has teamed with Nolo to make VR glasses, too, although it’s safe to say you won’t wear those while you’re driving.
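Taking the "201 inches at a 20-foot viewing distance" claim at face value, a quick geometry check shows what that means in terms of apparent size, the quantity that actually matters for a head-worn display:

```python
import math

diagonal_in = 201.0      # claimed virtual-screen diagonal, in inches
distance_in = 20 * 12    # 20-foot viewing distance, converted to inches

# Apparent angular size of the screen diagonal from the wearer's viewpoint.
angle = 2 * math.degrees(math.atan((diagonal_in / 2) / distance_in))
print(f"Apparent diagonal: {angle:.1f} degrees")  # about 45 degrees
```

In other words, the virtual screen would fill an angular diagonal of roughly 45 degrees, comparable to sitting fairly close to a large TV.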

Deliveries are expected to start September 2022. That’s a long way off, but Nio appears to be on track with its EV plans as it expects to deliver the ET7 on time (if only just) starting March 28th.

In this short video, filmed at ASTRO 2021, Siemens Healthineers’ Gabriel Haras introduces the company’s portfolio of artificial intelligence (AI)-based products. Such technologies support the entire care pathway for cancer patients, from screening and diagnostics to treatment and follow-up, including innovations such as AI-based autocontouring and generation of synthetic CT from an MRI scan for radiotherapy planning.

Next, Varian’s Kevin O’Reilly comments on the combination of Varian and Siemens Healthineers into a single company. He notes that the integration of AI capabilities has increased Varian’s ability to innovate, and will help accelerate its intelligent cancer care strategy: accelerating the path to treatment, increasing global access to care, exploiting data-driven insights and improving personalization.

2021 saw massive growth in the demand for edge computing — driven by the pandemic, the need for more efficient business processes, and key advances in the Internet of Things, 5G and AI.

In a study published by IBM in May, for example, 94 percent of surveyed executives said their organizations will implement edge computing in the next five years.

From smart hospitals and cities to cashierless shops to self-driving cars, edge AI — the combination of edge computing and AI — is needed more than ever.

Chuck Brooks shares his latest Forbes article on technology predictions for the next decade:


We are approaching 2022 and rather than ponder the immediate future, I want to explore what may beckon in the ecosystem of disruptive technologies a decade from now. We are in the initial stages of an era of rapid technological change that will witness the regeneration of body parts, new cures for diseases, augmented reality, artificial intelligence, human-computer interfaces, autonomous vehicles, advanced robotics, flying cars, quantum computing, and connected smart cities. Exciting times may be ahead.

By 2032, it will be logical to assume that the world will be amid a digital and physical transformation beyond our expectations. It is no exaggeration to say we are on the cusp of scientific and technological advancements that will change how we live and interact.

What should we expect in the coming decade as we begin 2022? While there are many potential paradigm-shifting technologies that will shape the future, let us explore three specific categories of future transformation: cognitive computing, health and medicine, and autonomous everything.

A new study claims machine learning is starting to look a lot like human cognition.

In 2019, The MIT Press Reader published a pair of interviews with Noam Chomsky and Steven Pinker, two of the world’s foremost linguistic and cognitive scientists. The conversations, like the men themselves, vary in their framing and treatment of key issues surrounding their areas of expertise. When asked about machine learning and its contributions to cognitive science, however, their opinions gather under the banner of skepticism and something approaching disappointment.

“In just about every relevant respect it is hard to see how [machine learning] makes any kind of contribution to science,” Chomsky laments, “specifically to cognitive science, whatever value it may have for constructing useful devices or for exploring the properties of the computational processes being employed.”

While Pinker adopts a slightly softer tone, he echoes Chomsky’s lack of enthusiasm for how AI has advanced our understanding of the brain:

“Cognitive science itself became overshadowed by neuroscience in the 1990s and artificial intelligence in this decade, but I think those fields will need to overcome their theoretical barrenness and be reintegrated with the study of cognition — mindless neurophysiology and machine learning have each hit walls when it comes to illuminating intelligence.”


Rather than engineering robotic solutions from scratch, some of our most impressive advances have come from copying what nature has already come up with.

New research shows how we can extend that approach to robot ‘minds’, in this case by getting a robot to learn the best route out of a maze all by itself – even down to keeping a sort-of memory of particular turns.

A team of engineers coded a Lego robot to find its way through a hexagonal labyrinth: by default it turned right at every junction, until it hit a point it had previously visited or came to a dead end, at which point it had to start again.
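The restart-and-remember strategy described above can be sketched in a few lines of code. This is a minimal illustration on a square grid (the robot's actual labyrinth was hexagonal, and its "memory" was implemented in hardware, not a Python set), with a fixed direction-preference order standing in for "turn right by default":

```python
# A minimal sketch of the trial-and-error maze strategy described above.
# '#' is a wall, 'S' the start, 'E' the exit.
MAZE = [
    "#####",
    "#S.##",
    "#.###",
    "#..E#",
    "#####",
]

# Fixed preference order stands in for "turn right by default".
DIRS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # right, down, left, up

def find(ch):
    for r, row in enumerate(MAZE):
        if ch in row:
            return (r, row.index(ch))

def solve():
    start, goal = find("S"), find("E")
    forbidden = set()            # (cell, move) pairs learned to avoid
    for _ in range(100):         # each iteration is one fresh run from the start
        pos, visited, path = start, {start}, []
        while pos != goal:
            for d in DIRS:
                nxt = (pos[0] + d[0], pos[1] + d[1])
                if ((pos, d) not in forbidden
                        and MAZE[nxt[0]][nxt[1]] != "#"
                        and nxt not in visited):
                    path.append((pos, d))
                    visited.add(nxt)
                    pos = nxt
                    break
            else:                # stuck: remember the bad turn, start again
                if not path:
                    return None  # no escape from the start at all
                forbidden.add(path[-1])
                break
        else:
            return [d for _, d in path]  # the moves that reach the exit
    return None

print(solve())  # e.g. [(1, 0), (1, 0), (0, 1), (0, 1)] for the maze above
```

Here the first run goes right into a dead end, the failed turn is recorded, and the second run avoids it and reaches the exit. The naive "penalize only the last turn" rule can over-prune in mazes with loops; it is meant only to mirror the behavior the article describes, not the team's actual implementation.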

“How do we use these new innovative technologies to really mitigate disparities in diabetes outcomes?” asked Risa Wolf, a pediatric endocrinologist at Johns Hopkins Hospital.

A new machine-learning system can generate a 3D scene from an image about 15,000 times faster than other methods. Humans are pretty good at looking at a single two-dimensional image and understanding the full three-dimensional scene that it captures. Artificial intelligence agents are not.


The hunt is on for leptoquarks, particles beyond the limits of the standard model of particle physics — the best description we have so far of the physics that governs the forces of the Universe and its particles. These hypothetical particles could prove useful in explaining experimental and theoretical anomalies observed at particle accelerators such as the Large Hadron Collider (LHC) and could help to unify theories of physics beyond the standard model, if researchers could just spot them.