
The news we like: “In five to 10 years from now, we’ll have a new, special kind of drugs: longevity drugs. And unlike today’s medication, which always focused on one disease, this kind of drug will give us an opportunity to influence aging as a whole in a very holistic way, working on healthspan, not only on lifespan… it’s very likely that this new drug will be developed with the help of artificial intelligence, which will compress drug development cycles by two or three times from what they are today.”


Ahead of the launch of his new book Growing Young, Sergey Young joins us for a video interview to discuss longevity horizons, personal health strategies and disruptive tech – and how we are moving towards radically extending our lifespan and healthspan.

Sergey Young, the longevity investor and founder of the Longevity Vision Fund, is on a mission to extend the healthy lifespans of at least one billion people. His new book, Growing Young, is released on 24th August and is already rising up the Amazon charts.

“It’s been an amazing three-year journey,” Young told Longevity.Technology. “I spent hours and days in different labs, in the best clinics in the world and the best academic institutions. I even talked to Peter Jackson! I’m very excited to share this with everyone, so every reader can start their longevity journey today.”

Over the years, deep learning has required an ever-growing number of these multiply-and-accumulate operations. Consider LeNet, a pioneering deep neural network designed to do image classification. In 1998 it was shown to outperform other machine-learning techniques for recognizing handwritten letters and numerals. But by 2012, AlexNet, a neural network that crunched through about 1,600 times as many multiply-and-accumulate operations as LeNet, was able to recognize thousands of different types of objects in images.

Advancing from LeNet’s initial success to AlexNet required almost 11 doublings of computing performance. Over the 14 years that advance took, Moore’s law provided much of the increase. The challenge has been to keep this trend going now that Moore’s law is running out of steam. The usual solution is simply to throw more computing resources, along with more time, money, and energy, at the problem.
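As a rough sanity check of those figures (the roughly 1,600-fold increase in multiply-and-accumulate operations and the 1998–2012 gap quoted above), the number of doublings is just the base-2 logarithm of the compute ratio; the short sketch below works it out.

```python
import math

# Figures quoted above: AlexNet used ~1,600x as many multiply-and-accumulate
# operations as LeNet, and the two networks are separated by 14 years.
compute_ratio = 1600
years = 2012 - 1998

doublings = math.log2(compute_ratio)      # ~10.6, i.e. "almost 11 doublings"
doubling_period = years / doublings       # ~1.3 years per doubling on average

print(f"doublings: {doublings:.1f}")
print(f"average doubling period: {doubling_period:.1f} years")
```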

As a result, training today’s large neural networks often has a significant environmental footprint. One 2019 study found, for example, that training a certain deep neural network for natural-language processing produced five times the CO2 emissions typically associated with driving an automobile over its lifetime.

Google has used artificial intelligence to find millions of buildings in satellite imagery that were previously difficult to locate, data that can now be used for humanitarian aid and other purposes. Google applied its building detection model (described in “Continental-Scale Building Detection from High Resolution Satellite Imagery”) to create the Open Buildings dataset, containing the locations and footprints of 516 million buildings and covering most countries on the African continent.

In this dataset there are millions of buildings that had not been mapped before. These newly identified buildings will help outside organisations understand African populations and where they live, facilitating services such as education, health care and vaccination for their communities.

Google’s team of developers built a training set for the building detection model by manually labelling 1.75 million buildings in 100,000 images, aiming for accurate identification even across rural and urban environments with vastly different properties and features. Identification was especially difficult in remote areas, where natural features can be mistaken for buildings, while dense urban surroundings made it hard to separate multiple adjacent structures in a single aerial image.
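For anyone who wants to explore the released data, the sketch below shows one plausible way to load and filter a shard of the Open Buildings dataset. The file name and column names (latitude, longitude, area_in_meters, confidence) are assumptions based on the dataset’s public description, so check the actual download before relying on them.

```python
import pandas as pd

# Hypothetical example of filtering one CSV shard of the Open Buildings
# dataset. Column names are assumed from the public dataset description.
df = pd.read_csv("open_buildings_shard.csv.gz")

# Keep only footprints the detection model is reasonably confident about.
confident = df[df["confidence"] >= 0.75]

print(f"buildings kept: {len(confident):,}")
print(f"total footprint area: {confident['area_in_meters'].sum() / 1e6:.1f} km²")
```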

Robots today can be programmed to vacuum the floor or perform a preset dance, but there is still much work to be done before they reach their full potential. This is largely because robots cannot recognise what is in their environment at a deep level, and so cannot function properly without being told all of those details by humans. A robot can, for instance, be programmed to back up when it bumps into a chair, which prevents further collisions, but that behaviour is not based on any understanding of chairs, because the robot does not know what one is!

Facebook’s AI team has just released Droidlet, a new platform that makes it easier for anyone to build their own smart robot. It is an open-source project explicitly designed with hobbyists and researchers in mind, so you can quickly prototype your AI algorithms without having to spend countless hours coding everything from scratch.

Droidlet is a platform for building embodied agents capable of recognizing, reacting to, and navigating the world. It simplifies integrating all kinds of state-of-the-art machine learning algorithms in these systems so that users can prototype new ideas faster than ever before!
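To make the idea of an embodied agent concrete, here is a minimal, hypothetical sketch of the perceive / remember / act loop that platforms like Droidlet are organised around. The class and method names are illustrative assumptions, not the real Droidlet API; see the facebookresearch/droidlet repository for the actual interfaces.

```python
# Illustrative only: a generic perceive -> remember -> act loop, not the
# actual Droidlet API.

class Memory:
    """Accumulates what the agent has observed so far."""
    def __init__(self):
        self.objects = []

    def update(self, detections):
        self.objects.extend(detections)


class Agent:
    def __init__(self, perception, memory, controller):
        self.perception = perception   # e.g. an object-detection model
        self.memory = memory           # world state the agent builds up
        self.controller = controller   # decides the next low-level action

    def step(self, observation):
        detections = self.perception(observation)   # what is around me?
        self.memory.update(detections)              # remember it
        return self.controller(self.memory)         # choose the next action
```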

https://youtube.com/watch?v=vCQm_2JgLbk

“We believe this work represents the most significant contribution AI has made to advancing the state of scientific knowledge to date,” said Demis Hassabis, DeepMind’s CEO and co-founder. “And I think it’s a great illustration and example of the kind of benefits AI can bring to society. We’re just so excited to see what the community is going to do with this.”


AlphaFold is an artificial intelligence (AI) program that uses deep learning to predict the 3D structure of proteins. Developed by DeepMind, a London-based subsidiary of Google, it made headlines in November 2020 when it competed in the Critical Assessment of Structure Prediction (CASP). This worldwide challenge is held every two years by the scientific community and is the most well-known protein modelling benchmark. Participants must “blindly” predict the 3D structures of different proteins, and their computational methods are subsequently compared with real-world laboratory results.

The CASP challenge has been held since 1994 and uses a metric known as the Global Distance Test (GDT), ranging from 0 to 100. Winners in previous years had tended to hover around the 30 to 40 mark, with a score of 90 considered to be equivalent to an experimentally determined result. In 2018, however, the team at DeepMind achieved a median of 58.9 for the GDT and an overall score of 68.5 across all targets, by far the highest of any algorithm.
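For readers unfamiliar with the metric, GDT_TS is usually described as the average percentage of a model’s C-alpha atoms that lie within 1, 2, 4 and 8 ångströms of their positions in the experimental structure, taken over the best superposition. The simplified sketch below computes that average for coordinates that are assumed to be already aligned, which skips the superposition search a real CASP assessment performs.

```python
import numpy as np

def gdt_ts(predicted, experimental, cutoffs=(1.0, 2.0, 4.0, 8.0)):
    """Simplified GDT_TS: mean percentage of C-alpha atoms within each
    distance cutoff (in angstroms), assuming the two structures are
    already superposed."""
    distances = np.linalg.norm(predicted - experimental, axis=1)
    return np.mean([100.0 * np.mean(distances <= c) for c in cutoffs])

# Toy example with five residues (coordinates in angstroms).
pred = np.array([[0.0, 0, 0], [1.5, 0, 0], [3.0, 0, 0], [6.0, 0, 0], [12.0, 0, 0]])
true = np.array([[0.5, 0, 0], [1.0, 0, 0], [4.5, 0, 0], [6.5, 0, 0], [2.0, 0, 0]])
print(f"GDT_TS: {gdt_ts(pred, true):.1f}")   # 100 would be a perfect model
```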

Then in 2020, version 2.0 of the AlphaFold program competed in CASP, winning once again – this time with even greater accuracy. AlphaFold 2.0 achieved a median GDT of 92.4 across all targets, with its average margin of error comparable to the width of an atom (0.16 nanometres). Andrei Lupas, a biologist at the Max Planck Institute in Germany who assessed the performances of the teams in CASP, said of AlphaFold: “This will change medicine. It will change research. It will change bioengineering. It will change everything.”

The applications claimed that Dabus, a system made up of artificial neural networks, invented an emergency warning light and a type of food container, among other inventions.

Several countries, including Australia, had rejected the applications, stating that a human must be named as the inventor. The decision by the Australian deputy commissioner of patents in February this year found that although “inventor” was not defined in the Patents Act when it was written in 1991, it would have been understood to mean a natural person, with machines being tools that could be used by inventors.

But in a federal court judgment on Friday, Justice Jonathan Beach overturned that decision and sent the matter back to the commissioner for reconsideration.

Every dad should do this. 😃


French dad and robotics engineer Jean-Louis Constanza has built a robotic suit for his 16-year-old son Oscar that allows him to walk.

Oscar, a wheelchair user, activates the suit by saying “Robot, stand up” and it then walks for him.

Jean-Louis co-founded the company that builds the suit, which can allow users to move upright for a few hours a day.

It is used in several hospitals, but it isn’t yet available for everyday use by individuals and has a price tag of around €150,000 (about £127,700).

A personal exoskeleton would need to be much lighter, the company’s engineers said.
