
Don’t you wish you had your own robotic exoskeleton?

This would really take away the strain of manual labor.


“In the past, the lifting workers could hardly stay after 2 years as the heavy work would burden them with injuries.”

This company in China is developing robotic exoskeletons to keep workers safe. More Bloomberg: https://trib.al/jllD1cT.

Unless you’re a physicist or an engineer, there really isn’t much reason for you to know about partial differential equations. I know. After years of poring over them in undergrad while studying mechanical engineering, I’ve never used them since in the real world.

But partial differential equations, or PDEs, are also kind of magical. They’re a category of math equations that are really good at describing change over space and time, and thus very handy for describing the physical phenomena in our universe. They can be used to model everything from planetary orbits to plate tectonics to the air turbulence that disturbs a flight, which in turn allows us to do practical things like predict seismic activity and design safe planes.

The catch is that PDEs are notoriously hard to solve. And here, the meaning of “solve” is perhaps best illustrated by an example. Say you are trying to simulate air turbulence to test a new plane design. There is a well-known PDE called Navier-Stokes that describes the motion of any fluid. “Solving” Navier-Stokes allows you to take a snapshot of the air’s motion (a.k.a. wind conditions) at any point in time and model how it will continue to move, or how it was moving before.
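Navier-Stokes itself is far beyond a toy example, but the flavor of numerically “solving” a PDE can be sketched with the much simpler 1D heat equation, stepped forward in time with an explicit finite-difference scheme. This is a from-scratch illustration, not production simulation code:

```python
# "Solving" a PDE numerically, in miniature: the 1D heat equation
#   du/dt = alpha * d^2u/dx^2
# Given a snapshot of the temperature profile, a finite-difference
# scheme steps it forward in time, just as a turbulence solver would
# step a snapshot of wind conditions forward.

def heat_step(u, alpha, dx, dt):
    """Advance the profile u one time step (boundary values held fixed)."""
    new = u[:]
    for i in range(1, len(u) - 1):
        new[i] = u[i] + alpha * dt / dx**2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    return new

# Start with a hot spike in the middle of a cold rod.
u = [0.0] * 11
u[5] = 100.0
for _ in range(50):
    u = heat_step(u, alpha=1.0, dx=1.0, dt=0.2)

# The spike diffuses outward into a smooth, symmetric profile.
print(u)
```

The explicit scheme is only stable when alpha * dt / dx² ≤ 0.5, which is one small taste of why real PDE solvers are hard to get right.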

The future of disaster management, using artificial intelligence, machine learning, and a bit of Waffle House and Starbucks 🙂


Ira Pastor, ideaXme life sciences ambassador, interviews Craig Fugate, Chief Emergency Management Officer of One Concern and former administrator of the Federal Emergency Management Agency (FEMA).

The international context of this interview: in choosing our leaders, it is becoming increasingly important to select people who can anticipate, address, and where possible avoid large-scale disasters. Here, Craig Fugate discusses evaluating past disasters, planning for future events and reacting to the “unexpected”: “think big and move fast”.

Ira Pastor comments:

The U.S. has sustained 279 weather and climate disasters since 1980 where overall damages/costs reached or exceeded $1 billion (including CPI adjustment to 2020). The total cost of these 279 events exceeds $1.825 trillion.

Craig Fugate is the Chief Emergency Management Officer of One Concern, a “Resilience-as-a-Service” solutions company that brings disaster science together with machine learning for better disaster recovery decision making.

Craig is the former Director of the Florida Division of Emergency Management and former administrator of the Federal Emergency Management Agency (FEMA), an agency of the United States Department of Homeland Security whose primary purpose is to coordinate the response to disasters that occur in the United States and that overwhelm the resources of local and state authorities.

Mr. Fugate has decades of experience at the local, state, and federal levels in disaster preparedness and management. He has also overseen preparation and response efforts for disasters such as wildfires and hurricanes, health crises, and national security threats.

As progress in traditional computing slows, new forms of computing are coming to the forefront. At Penn State, a team of engineers is attempting to pioneer a type of computing that mimics the efficiency of the brain’s neural networks while exploiting the brain’s analog nature.

Modern computing is digital, built on two states: on or off, one or zero. An analog computer, like the brain, has many possible states. It is the difference between flipping a light switch on or off and turning a dimmer switch to any level of lighting.

Neuromorphic or brain-inspired computing has been studied for more than 40 years, according to Saptarshi Das, the team leader and Penn State assistant professor of engineering science and mechanics. What’s new is that as the limits of digital computing have been reached, the need for high-speed image processing, for instance for self-driving cars, has grown. The rise of big data, which requires types of pattern recognition for which the brain architecture is particularly well suited, is another driver in the pursuit of neuromorphic computing.

Microsoft has announced the launch of the public preview of a free app that allows users to train machine learning (ML) models without writing any code.

The app, Lobe, has been designed for Windows and Mac and currently supports only image classification; however, the tech giant plans to expand the app to include other models and data types in the future.

According to the Lobe website, the app needs to be shown examples of what the user wants it to learn, and it automatically trains a custom machine learning model that can then be shipped in the user’s app.
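Lobe’s internals aren’t public in detail, but the train-from-examples workflow it automates can be illustrated with a deliberately tiny stand-in: a nearest-centroid classifier over flattened “images” (lists of pixel brightness values). All names and data below are made up for illustration; Lobe uses real neural networks, not this:

```python
# Toy version of "show it examples, get a classifier back".
# train() averages each label's example images into a centroid;
# classify() picks the label whose centroid is closest to the input.

def train(examples):
    """examples: {label: [image, ...]} -> per-label mean pixel vector."""
    centroids = {}
    for label, images in examples.items():
        n = len(images)
        centroids[label] = [sum(px) / n for px in zip(*images)]
    return centroids

def classify(centroids, image):
    """Predict the label whose centroid is nearest (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], image))

# Two made-up classes of 4-pixel "images": bright and dark.
model = train({
    "bright": [[0.9, 0.8, 1.0, 0.9], [1.0, 0.9, 0.8, 1.0]],
    "dark":   [[0.1, 0.0, 0.2, 0.1], [0.0, 0.1, 0.1, 0.2]],
})
print(classify(model, [0.85, 0.9, 0.95, 0.8]))  # -> bright
```

The point of a tool like Lobe is that the user never writes even this much: they supply the labeled examples and receive the trained model.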

Elon Musk is on the record stating that artificial superintelligence or ASI could bring the end of the human race. Elon has publicly expressed concern about AI many times now. He thinks the advent of a digital superintelligence is the most pressing issue for humanity to get right.

What happens when machines surpass humans in general intelligence? If machine brains surpassed human brains in general intelligence, this new superintelligence would be the result of an event called the intelligence explosion, thought likely to occur in the 21st century. It is unknown what, or who, this machine-network would become; the issue of superintelligence remains peripheral to mainstream AI research and is mostly discussed by a small group of academics.

Besides Elon Musk, the Swedish philosopher Nick Bostrom is among the well-known public thinkers worried about AI. He lays a foundation for understanding the future of humanity and intelligent life: now imagine a machine, structurally similar to a brain but with immense hardiness and flexibility, designed from scratch to function as an intelligent agent. Given sufficient time, a machine like this could acquire enormous knowledge and skills, surpassing human intellectual capacity in virtually every field. At that point the machine would have become superintelligent. In other words, the machine’s intellectual capacities would exceed those of all of humanity put together by a very large margin. This would represent the most radical change in the history of life on Earth.

To develop a superintelligence that benefits humanity, the process has to proceed in a series of steps, with each step validated before we move to the next. In fact, it might be possible to program the AI to help us achieve things we humans cannot do on our own. The challenge is not simply creating such machines and learning how to command them, but interacting with them and evolving ourselves at the same time: learning how to be human after the first ASI.


Are you a cutting-edge AI researcher looking for models with clean semantics that can represent the context-specific causal dependencies necessary for causal induction? If so, maybe you should take a look at good old-fashioned probability trees.

Probability trees may have been around for decades, but they have received little attention from the AI and ML community. Until now. “Probability trees are one of the simplest models of causal generative processes,” explains the new DeepMind paper Algorithms for Causal Reasoning in Probability Trees, which the authors say is the first to propose concrete algorithms for causal reasoning in discrete probability trees.
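To make the idea concrete, here is a from-scratch sketch (not DeepMind’s actual algorithms) of a discrete probability tree: a marginal is computed by summing path probabilities, and a causal intervention do(X = x) is implemented by rerouting all probability mass at nodes branching on X onto the chosen branch:

```python
# A probability tree node is (branch probability, (variable, value) or None,
# list of children). Leaves have no children; each root-to-leaf path is one
# possible history of the generative process.

def paths(tree, prob=1.0, assignment=None):
    """Yield (path probability, variable assignment) for each leaf."""
    assignment = dict(assignment or {})
    p, var_val, children = tree
    prob *= p
    if var_val:
        assignment[var_val[0]] = var_val[1]
    if not children:
        yield prob, assignment
    else:
        for child in children:
            yield from paths(child, prob, assignment)

def marginal(tree, var, val):
    """P(var = val): sum the probabilities of all consistent paths."""
    return sum(p for p, a in paths(tree) if a.get(var) == val)

def intervene(tree, var, val):
    """do(var = val): where children branch on var, keep only the val
    branch and give it probability 1."""
    p, var_val, children = tree
    if children and all(c[1] and c[1][0] == var for c in children):
        children = [(1.0, c[1], c[2]) for c in children if c[1][1] == val]
    return (p, var_val, [intervene(c, var, val) for c in children])

# Toy causal story: rain makes the sprinkler less likely to be on.
tree = (1.0, None, [
    (0.3, ("rain", True),  [(0.1, ("sprinkler", True), []),
                            (0.9, ("sprinkler", False), [])]),
    (0.7, ("rain", False), [(0.5, ("sprinkler", True), []),
                            (0.5, ("sprinkler", False), [])]),
])

print(round(marginal(tree, "sprinkler", True), 2))  # 0.3*0.1 + 0.7*0.5 = 0.38
print(round(marginal(intervene(tree, "sprinkler", True), "rain", True), 2))
```

The second print illustrates the causal asymmetry the paper cares about: forcing the sprinkler on leaves the probability of rain at 0.3, because intervening on an effect carries no information back to its cause.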

Humans naturally learn to reason in large part through inducing causal relationships from our observations, and we do this remarkably well, cognitive scientists say. Even when the data we perceive is sparse and limited, humans can quickly learn causal structures such as interactions between physical objects, observations of the co-occurrence frequencies between causes and effects, etc.