
Circa 2018


After 12 years of work, researchers at the University of Manchester in England have completed construction of a “SpiNNaker” (Spiking Neural Network Architecture) supercomputer. It can simulate the internal workings of up to a billion neurons through a whopping one million processing units.

The human brain contains approximately 100 billion neurons, exchanging signals through hundreds of trillions of synapses. While these numbers are imposing, a digital brain simulation needs far more than raw processing power: what it really demands is a radical rethinking of the architecture on which most computers are built.

“Neurons in the brain typically have several thousand inputs; some up to a quarter of a million,” Prof. Stephen Furber, who conceived and led the SpiNNaker project, told us. “So the issue is communication, not computation. High-performance computers are good at sending large chunks of data from one place to another very fast, but what neural modeling requires is sending very small chunks of data (representing a single spike) from one place to many others, which is quite a different communication model.”
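To make Furber’s point concrete, here is a minimal sketch in Python — nothing like SpiNNaker’s actual hardware router — of that one-to-many model: a spike is just a tiny packet naming its source neuron, and a routing table, not the sender, resolves the enormous fan-out.

```python
# A minimal sketch (not SpiNNaker's real router) of one-to-many spike
# communication: the "packet" is just a source-neuron ID, and a routing
# table fans it out to thousands of targets.
from collections import defaultdict

class SpikeFabric:
    def __init__(self):
        # routing table: source neuron ID -> list of (target neuron, weight)
        self.routes = defaultdict(list)

    def connect(self, src, dst, weight):
        self.routes[src].append((dst, weight))

    def fire(self, src, neurons):
        # A spike carries no payload beyond its source ID; the fabric,
        # not the sender, resolves the (possibly huge) fan-out.
        for dst, weight in self.routes[src]:
            neurons[dst] += weight  # accumulate input at each target

# Hypothetical usage: one neuron fans out to 5,000 targets.
neurons = defaultdict(float)
fabric = SpikeFabric()
for dst in range(1, 5001):
    fabric.connect(0, dst, weight=0.01)
fabric.fire(0, neurons)
```

Note how the payload per spike is essentially nothing; the cost is entirely in delivering many tiny messages, which is exactly the workload conventional high-performance interconnects are not optimized for.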

A ‘robot revolution’ is underway and could lead to half of all jobs being done by machines by 2025, according to forecasters.

The World Economic Forum has said that 97 million new jobs are set to be created by increased automation of manual and routine labour in several major industries.

But they’ve warned that just as many jobs will be lost, and that the trend could worsen inequality in poorer communities as humans lose out to machines in the workplace.

Tesla owners who are part of the limited Full Self-Driving rollout have started sharing some images and videos of the advanced driver-assist features in action. Based on videos and the Release Notes of the limited beta, it appears that Tesla is heavily emphasizing safety.

Among the lucky Tesla owners who received the update were @brandonee916 and the @teslaownerssv group, both of whom shared images and short clips of the limited Full Self-Driving beta in action.

Overall, the UI of the limited beta seems to be quite rough in its current state. With this in mind, there seems to be a good chance that the visuals of FSD will be more refined by the time it gets a wider release.

I think some people would be excited. 😃


Tesla’s Full Self-Driving suite is poised for a wide release to all drivers who purchased the capability by the end of 2020, Elon Musk said during the company’s Q3 earnings call.

“We’re starting very slow and very cautiously because the world is a very complex and messy place,” Musk said when talking about the Beta rollout of the FSD suite to a minimal group of people, which began late Tuesday night. “We put it out there last night, and then we’ll see how it goes, and then probably release it to more people this weekend or early next week. Then gradually step it up until we hopefully have a wide-release by the end of this year.”
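Tesla hasn’t said how it gates the beta, but the gradual step-up Musk describes is the classic staged-rollout pattern. As a loose illustration only — every name here is hypothetical — a percentage-based gate can be implemented by hashing each user into a stable bucket and raising the eligible percentage over time:

```python
# A loose illustration (not Tesla's implementation) of a staged rollout:
# deterministically bucket each user by a hash, then raise the eligible
# percentage step by step until it reaches 100 (wide release).
import hashlib

def in_rollout(user_id: str, percent: float) -> bool:
    # Hash the user ID into a stable bucket in [0, 100).
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 10000 / 100.0
    return bucket < percent

# Step the cohort up gradually: 0.1% -> 1% -> 10% -> 100%.
for pct in (0.1, 1.0, 10.0, 100.0):
    cohort = sum(in_rollout(f"vin-{i}", pct) for i in range(100000))
    print(f"{pct:>5}% gate -> {cohort} of 100000 vehicles")
```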

On October 8th, Musk stated that the latest build of the FSD software would be capable of “zero-intervention drives,” adding that Tesla would “release limited beta in a few weeks.”

When NASA’s Perseverance Mars rover starts its quest for Martian rocks, it will have quite the to-do list:

🕵️‍ Locate
⛏ Drill
🧰 Collect
📦 Stash

The robotic caching system that’ll get the job done is solid as a rock thanks to NASA Jet Propulsion Laboratory engineer Eric Aguilar: mars.nasa.gov/mars2020
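As a rough illustration — plain Python, not JPL flight software — that to-do list behaves like a sequential state machine: each sample either clears every step in order or reports where it stalled so the step can be retried:

```python
# A minimal sketch (not JPL flight software) of the four-step caching
# pipeline above, modeled as a sequential state machine.
from enum import Enum, auto

class Step(Enum):
    LOCATE = auto()
    DRILL = auto()
    COLLECT = auto()
    STASH = auto()
    DONE = auto()

def cache_sample(actions):
    # `actions` maps each step to a callable returning True on success.
    for step in (Step.LOCATE, Step.DRILL, Step.COLLECT, Step.STASH):
        if not actions[step]():
            return step  # report the failed step so it can be retried
    return Step.DONE

# Hypothetical run where every step succeeds.
result = cache_sample({s: (lambda: True) for s in Step if s is not Step.DONE})
print(result)  # Step.DONE
```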

So now, there are AI doctors.


Machine learning is taking medical diagnosis by storm. From eye disease, breast and other cancers, to more amorphous neurological disorders, AI is routinely matching physicians’ performance, if not beating them outright.
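What does “matching physician performance” mean in practice? Typically, the model and clinicians read the same labeled cases, and their sensitivity and specificity are compared. A minimal sketch with made-up numbers:

```python
# A minimal sketch, on hypothetical data, of how model-vs-physician
# comparisons are typically quantified: sensitivity and specificity
# computed on the same set of ground-truth-labeled cases.
def sensitivity_specificity(preds, labels):
    tp = sum(p and l for p, l in zip(preds, labels))
    tn = sum(not p and not l for p, l in zip(preds, labels))
    fp = sum(p and not l for p, l in zip(preds, labels))
    fn = sum(not p and l for p, l in zip(preds, labels))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical ground truth plus model and physician calls on 10 cases.
truth     = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
model     = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]
physician = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
for name, preds in (("model", model), ("physician", physician)):
    sens, spec = sensitivity_specificity(preds, truth)
    print(f"{name}: sensitivity={sens:.2f} specificity={spec:.2f}")
```

In this toy example the two come out identical, which is precisely the kind of headline result the debate below is about: the numbers alone say nothing about whether the comparison was fair or reproducible.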

Yet how much can we take those results at face value? When it comes to life-and-death decisions, when can we put our full trust in enigmatic algorithms—“black boxes” that even their creators cannot fully explain or understand? The problem gets more complex as medical AI crosses multiple disciplines and developers, including academic and industry powerhouses such as Google, Amazon, and Apple, with disparate incentives.

This week, the two sides battled it out in a heated duel in one of the most prestigious science journals, Nature. On one side are prominent AI researchers at the Princess Margaret Cancer Centre, University of Toronto, Stanford University, Johns Hopkins, Harvard, MIT, and others. On the other side is the titan Google Health.

Boston Dynamics has reportedly already sold more than 250 of its $75,000 Spot robots since starting commercial sales back in June. Interested and deep-pocketed parties can purchase one directly from the company’s website, along with a host of accessories, from $1,650 charging bricks to $34,570 lidar and camera kits. One add-on we’ve seen on Spot since some of its earliest demo videos, though, is the prehensile arm sprouting from between its shoulder blades. Come next January, Spots around the world are going to get a whole lot more handsy.

“The next thing on the future Spot is that we’re going to make it available with a robot arm in a few months,” Boston Dynamics founder Marc Raibert told the virtual crowd at the Collision from Home conference in June. “We have prototypes working, but we don’t have them available as a product yet. Once you have an arm on a robot, it becomes a mobile manipulation system. It really opens up just vast horizons on things robots can do. I believe that the mobility of the robot will contribute to the dexterity of the robot in ways that we just don’t get with current fixed factory automation.”
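Raibert’s mobility-plus-dexterity point can be made with back-of-the-envelope geometry: a fixed arm only reaches targets inside its own radius, while the same arm on a mobile base reaches anything near a pose the robot can walk to. A toy sketch — not the Spot SDK, and the reach figure is an assumption:

```python
# A back-of-the-envelope sketch (not the Spot SDK) of mobile manipulation:
# a fixed-base arm only covers its own radius, while a mobile base lets
# the same arm service any target near a walkable pose.
import math

ARM_REACH_M = 0.9  # assumed arm reach; roughly arm-scale, not a Spot spec

def reachable_fixed(base, target):
    # Fixed-base manipulator: target must lie within the arm's radius.
    return math.dist(base, target) <= ARM_REACH_M

def reachable_mobile(target, walkable):
    # Mobile manipulator: succeed if *any* walkable base pose puts the
    # target inside arm reach.
    return any(math.dist(base, target) <= ARM_REACH_M for base in walkable)

target = (5.0, 2.0)
print(reachable_fixed((0.0, 0.0), target))             # False
print(reachable_mobile(target, [(0, 0), (4.5, 2.0)]))  # True
```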