
According to Klaus Schwab, the founder and executive chair of the World Economic Forum (WEF), the 4-IR follows the first, second, and third Industrial Revolutions—the mechanical, electrical, and digital, respectively. The 4-IR builds on the digital revolution, but Schwab sees the 4-IR as an exponential takeoff and convergence of existing and emerging fields, including Big Data; artificial intelligence; machine learning; quantum computing; and genetics, nanotechnology, and robotics. The consequence is the merging of the physical, digital, and biological worlds. The blurring of these categories ultimately challenges the very ontologies by which we understand ourselves and the world, including “what it means to be human.”

The specific applications that make up the 4-IR are too numerous and sundry to treat in full, but they include a ubiquitous internet, the internet of things, the internet of bodies, autonomous vehicles, smart cities, 3D printing, nanotechnology, biotechnology, materials science, energy storage, and more.

While Schwab and the WEF promote a particular vision for the 4-IR, the developments he announces are not his brainchildren, and there is nothing original about his formulations. Transhumanists and Singularitarians (or prophets of the technological singularity), such as Ray Kurzweil and many others, forecasted these and even more revolutionary developments long before Schwab heralded them. The significance of Schwab and the WEF’s take on the new technological revolution is the attempt to harness it to a particular end, presumably “a fairer, greener future.”

World-renowned science author Yuval Noah Harari has warned that human brains could one day be hacked if emerging AI systems are not properly regulated.

For people with motor impairments or physical disabilities, completing daily tasks and house chores can be incredibly challenging. Recent advancements in robotics, such as brain-controlled robotic limbs, have the potential to significantly improve their quality of life.

Researchers at Hebei University of Technology and other institutes in China have developed an innovative system for controlling robotic arms that is based on augmented reality (AR) and a brain-computer interface (BCI). This system, presented in a paper published in the Journal of Neural Engineering, could enable the development of bionic or prosthetic arms that are easier for users to control.

“In recent years, with the development of robotic arms, brain science and information decoding technology, brain-controlled robotic arms have attained increasing achievements,” Zhiguo Luo, one of the researchers who carried out the study, told TechXplore. “However, disadvantages like poor flexibility restrict their widespread application. We aim to promote the lightweight and practicality of brain-controlled robotic arms.”
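At a very high level, a brain-controlled arm closes a loop of decode-then-act: a classifier turns brain signals into a discrete command, and a controller maps that command to a motion of the arm. The paper's actual decoding pipeline is not described above, so the sketch below is purely illustrative; `step_arm`, `ARM_ACTIONS`, and the command names are hypothetical stand-ins, not the researchers' API.

```python
# Hypothetical sketch: mapping already-decoded BCI commands to discrete
# end-effector motions. The EEG decoding stage itself is omitted; every
# name here is illustrative, not taken from the paper.

ARM_ACTIONS = {
    "left":  (-0.05, 0.0, 0.0),   # shift end effector 5 cm along -x
    "right": (0.05, 0.0, 0.0),
    "up":    (0.0, 0.0, 0.05),
    "down":  (0.0, 0.0, -0.05),
    "grasp": None,                # close the gripper; position unchanged
}

def step_arm(position, command):
    """Apply one decoded command to the arm's (x, y, z) end-effector position."""
    delta = ARM_ACTIONS.get(command)
    if delta is None:
        return position  # grasp (or unrecognized command): stay in place
    x, y, z = position
    dx, dy, dz = delta
    return (x + dx, y + dy, z + dz)

# One decoded command per loop iteration moves the arm incrementally.
pos = (0.30, 0.00, 0.20)
pos = step_arm(pos, "up")
```

In a real system the command stream would arrive continuously from the decoder, and safety limits on workspace and velocity would bound each step.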

Neuroscientists find the internal workings of next-word prediction models resemble those of language-processing centers in the brain.

In the past few years, artificial intelligence models of language have become very good at certain tasks. Most notably, they excel at predicting the next word in a string of text; this technology helps search engines and texting apps predict the next word you are going to type.

The most recent generation of predictive language models also appears to learn something about the underlying meaning of language. These models can not only predict the word that comes next, but also perform tasks that seem to require some degree of genuine understanding, such as question answering, document summarization, and story completion.
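The core interface these models share is simple: given a prefix of text, rank candidate next words. The sketch below illustrates that interface with a toy bigram frequency model; real predictive models such as Transformers learn vastly richer context, and the corpus and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: a bigram frequency model. A real language
# model conditions on much longer context, but the interface is the
# same -- given what came before, rank likely next words.
corpus = "the cat sat on the mat the cat ate the fish".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word, k=2):
    """Return the k words most often observed after `word` in the corpus."""
    return [w for w, _ in bigrams[word].most_common(k)]

print(predict_next("the"))  # "the" is followed by cat (2x), mat, fish
```

Question answering or summarization cannot be reduced to such counts, which is why the apparent understanding in recent models is noteworthy.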

Last year DeepMind’s breakthrough AI system AlphaFold2 was recognised as a solution to the 50-year-old grand challenge of protein folding, capable of predicting the 3D structure of a protein directly from its amino acid sequence to atomic-level accuracy. This has been a watershed moment for computational and AI methods for biology.

Building on this advance, today, I’m thrilled to announce the creation of a new Alphabet company – Isomorphic Labs – a commercial venture with the mission to reimagine the entire drug discovery process from the ground up with an AI-first approach and, ultimately, to model and understand some of the fundamental mechanisms of life.

For over a decade DeepMind has been in the vanguard of advancing the state-of-the-art in AI, often using games as a proving ground for developing general purpose learning systems, like AlphaGo, our program that beat the world champion at the complex game of Go. We are at an exciting moment in history now where these techniques and methods are becoming powerful and sophisticated enough to be applied to real-world problems including scientific discovery itself. One of the most important applications of AI that I can think of is in the field of biological and medical research, and it is an area I have been passionate about addressing for many years. Now the time is right to push this forward at pace, and with the dedicated focus and resources that Isomorphic Labs will bring.