
A new kind of brain-computer interface (BCI) that uses neural implants the size of a grain of sand to record brain activity has been proven effective in rats — and one day, thousands of the “neurograins” could help you control machines with your mind.

Mind readers: BCIs are devices (usually electrodes implanted in the skull) that translate electrical signals from brain cells into commands for machines. They can allow paralyzed people to “speak” again, control robots, type with their minds, and even regain control of their own limbs.

Most of today’s interfaces can listen to just a few hundred neurons — but there are approximately 86 billion neurons in the brain. If we could monitor more neurons, in more places in the brain, it could radically upgrade what’s possible with mind-controlled tech.
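
To give a rough feel for the decoding step such interfaces perform, here is a minimal sketch that maps binned firing rates to a 2-D cursor-velocity command with a plain linear (least-squares) decoder. The channel count, bin size, and the linear model are illustrative assumptions, not the neurograin system's actual method.

```python
import numpy as np

# Illustrative sketch: decode 2-D cursor velocity from per-channel firing rates.
rng = np.random.default_rng(0)

n_channels = 96   # recording channels (illustrative)
n_bins = 2000     # time bins of calibration data

# Synthetic calibration data: firing rates (spikes/bin) paired with the
# hand velocity observed at the same time.
rates = rng.poisson(lam=5.0, size=(n_bins, n_channels)).astype(float)
true_weights = rng.normal(size=(n_channels, 2))
velocity = rates @ true_weights + rng.normal(scale=0.5, size=(n_bins, 2))

# Fit decoder weights W so that rates @ W approximates velocity (least squares).
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# At run time, each new bin of firing rates becomes a cursor-velocity command.
new_bin = rng.poisson(lam=5.0, size=(1, n_channels)).astype(float)
vx, vy = (new_bin @ W)[0]
print(f"cursor command: vx={vx:.2f}, vy={vy:.2f}")
```

Real systems typically use more elaborate decoders, but the core idea is the same: more recorded neurons means richer population activity for the decoder to work with.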

Showing how Atlas falls and gets back up again.


Boston Dynamics has shared two new videos of its bipedal Atlas robot. The first shows a gymnastics routine performed by two of the robots, and the second offers insight into how such videos are programmed and practiced.

The field of neuroprosthetics has been around since the 1950s, but it’s only just starting to show its true potential, with devices that allow amputees to feel and manipulate their surroundings.

A group of researchers from MIT and Shanghai Jiao Tong University recently collaborated with the goal of making neuroprosthetic hands that allow users to feel, in a more accessible package. The result is an inflatable robotic hand that costs only $500 to build, making it much cheaper than comparable devices, a post from MIT reveals.

The researchers behind the new prosthetic say their device bears an uncanny resemblance to the inflatable robot in the animated film Big Hero 6. The prosthetic uses a pneumatic system to inflate and bend the fingers of the device, allowing its user to grasp objects, pour a drink, shake hands, and even pet a cat if they so wish. It allows all of this via a software program — detailed in the team’s paper in the journal Nature Biomedical Engineering — that “decodes” EMG signals the brain is sending to an injured or missing limb.
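
To illustrate what such EMG "decoding" can involve, here is a minimal sketch, assuming per-channel RMS amplitude features and a nearest-centroid classifier that maps a short EMG window to a grasp command. Both choices, and all names here, are illustrative assumptions rather than the decoder described in the paper.

```python
import numpy as np

GRASPS = ["rest", "pinch", "power_grip", "open"]

def rms_features(window: np.ndarray) -> np.ndarray:
    """Root-mean-square amplitude per EMG channel for one time window."""
    return np.sqrt(np.mean(window ** 2, axis=0))

def fit_centroids(windows, labels):
    """Average the feature vectors of each labelled grasp (calibration phase)."""
    feats = np.array([rms_features(w) for w in windows])
    labels = np.asarray(labels)
    return {g: feats[labels == g].mean(axis=0) for g in GRASPS}

def decode(window, centroids):
    """Pick the grasp whose calibration centroid is closest to this window."""
    f = rms_features(window)
    return min(centroids, key=lambda g: np.linalg.norm(f - centroids[g]))

# Toy calibration data: 4 EMG channels, 200-sample windows, one window per grasp.
rng = np.random.default_rng(1)
cal_windows = [rng.normal(scale=0.1 * (i + 1), size=(200, 4)) for i in range(4)]
centroids = fit_centroids(cal_windows, GRASPS)

# A new window is decoded into the command sent to the pneumatic actuators.
print(decode(rng.normal(scale=0.3, size=(200, 4)), centroids))
```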

Circa 2020.


While some states have partially reopened and loosened restrictions on barber shops and hair salons, not everyone is ready to head out in public for a haircut just yet. That means many people around the world are still sporting shaggy quarantine cuts.

To tame his wild mane, then, Shane Wighton, an engineer and YouTuber known for his channel, Stuff Made Here, has built the ultimate hairstylist: a robotic barber.

But one idea that hasn’t gotten enough attention from the AI community is how the brain creates itself, argues Peter Robin Hiesinger, Professor of Neurobiology at the Free University of Berlin (Freie Universität Berlin).

In his book The Self-Assembling Brain, Hiesinger suggests that instead of looking at the brain from an endpoint perspective, we should study how information encoded in the genome is transformed to become the brain as we grow. This line of study might help discover new ideas and directions of research for the AI community.

The Self-Assembling Brain is organized as a series of seminar presentations interspersed with discussions between a robotics engineer, a neuroscientist, a geneticist, and an AI researcher. The thought-provoking conversations help the reader understand the views, and the blind spots, of each field on topics related to the mind, the brain, intelligence, and AI.
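
A toy way to see the contrast between specifying an endpoint and growing one: the sketch below (an illustrative example, not taken from the book) unfolds a small "genome" of wiring rules into a full connectivity matrix, so the finished network contains far more structure than the handful of parameters that generated it. All parameter names are assumptions made up for the illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 50  # neurons

# "Genome": a handful of growth rules, not N*N connection strengths.
genome = {
    "layers": 5,          # neurons are born in successive waves
    "p_connect": 0.3,     # chance a new neuron wires to an earlier-born one
    "weight_scale": 0.8,  # synaptic strength of new connections
}

def grow(genome, n_neurons):
    """Unfold the genome into an n_neurons x n_neurons weight matrix."""
    W = np.zeros((n_neurons, n_neurons))
    birth_layer = rng.integers(0, genome["layers"], size=n_neurons)
    for i in range(n_neurons):
        for j in range(n_neurons):
            # A neuron only connects to neurons born in an earlier wave.
            if birth_layer[j] < birth_layer[i] and rng.random() < genome["p_connect"]:
                W[i, j] = genome["weight_scale"] * rng.random()
    return W

W = grow(genome, N)
print("genome parameters:", len(genome))
print("connections grown:", int(np.count_nonzero(W)))
```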

Intelligent systems engineer, STEM advocate, hip-hop artist: Ashley Llorens, VP, Distinguished Scientist, and Managing Director, Microsoft Research, Microsoft.


Ashley Llorens (https://www.microsoft.com/en-us/research/people/allorens/) is Vice President, Distinguished Scientist & Managing Director, at Microsoft Research Outreach, where he leads a global team to amplify the impact of research at Microsoft and to advance the cause of science and technology research around the world. His team is responsible for driving strategy and execution for Microsoft Research engagement with the rest of Microsoft and with the broader science and technology community, and they invest in high-impact collaborative research projects on behalf of the company, create pipelines for diverse, world-class talent, and generate awareness of the current and envisioned future impact of science and technology research.

Prior to joining Microsoft, Mr. Llorens served as the founding chief of the Intelligent Systems Center at the Johns Hopkins Applied Physics Laboratory (APL), where he directed research and development in artificial intelligence (AI), robotics and neuroscience and created APL’s first enterprise-wide AI strategy and technology roadmap. During his two decades at APL, Mr. Llorens led interdisciplinary teams in developing novel AI technologies from concept to real-world application with a focus on autonomous systems. His background is in machine learning and signal processing and current research interests include reinforcement learning for real-world systems, machine decision-making under uncertainty, human-machine teaming, and practical AI safety.

As a subject matter expert in AI and autonomous systems, Mr. Llorens has served on advisory boards and strategic studies for the U.S. Department of Defense, the U.S. Department of Energy, and the National Academy of Sciences. He was recently nominated by the White House Office of Science and Technology Policy to serve as an AI expert on the Global Partnership on AI and was elected to serve as the Science Representative on its inaugural steering committee.

Alongside his engineering career, Mr. Llorens pursued a parallel career as a hip-hop artist, known as SoulStice, while earning his B.S. and M.S. at the University of Illinois Urbana-Champaign; he founded Wandering Soul Records and serves as a voting member of the Recording Academy, the institution that organizes the Grammys.

Space Medicine, Health and MedTech Innovations: a lecture by Susan Ip-Jewell

In the frame of the new Space Renaissance Academy Webinar Series programme, chaired by the excellent Sabine Heinz, a very interesting and rich lecture was given yesterday by Dr. Susan Ip-Jewell.

Susan is CEO and founder of Mars Moon Astronautic Academy Research Science (MMAARS), one of the SRI Vice Presidents, and a passionate space activist. She is also Commander of analog training missions on simulated Moon and Mars surfaces.

In her lecture, she gives us a wide overview of many aspects of human health in space, the cutting edge of space medicine, and innovative techniques that use incremental technologies, developing systems that integrate robotics, artificial intelligence, remote telemedicine, avatars, and drones.

Btw, Sabine, in addition to being an efficient organizer and coordinator, has revealed unexpected talents as a great media presenter!

Sabine was fantastic at moderating the intense discussion that followed the lecture, about the many challenges humanity is facing while kicking off civilian space development.

Several questions and considerations were raised by the audience and by the panelists — Bernard Foing, SRI President; A. V. Autino, former SRI President and SR Academy Strategy Director; Sabine Heinz, Chair of the Webinar Series and of the SR Art Chapter; and Thomas Matula, SR Academy Educational Director — on topics including (in no particular order):
* in a few months we will have four civilians flying to the ISS: though the media only talk about this exciting event in a superficial way, several challenges stand in the background.
* what is the main danger for health in space: low gravity, radiation, or something else?
* how will embryos develop in space?
* will reproduction be possible in space?
* is there a doctor onboard the ISS?
* will the civilian visitors to the ISS have any kind of medical insurance or guarantees? Will anybody be responsible for their life and health?
* is there any idea of what surgery in microgravity might look like?
* during civilian space development, will the main danger be the human aspects and behaviours?
* will social issues arise and play a meaningful role during the settlement of the Solar System? (see R. Heinlein, James Corey)


A lecture by Dr. Susan Ip-Jewell, MMAARS CEO and Founder, Space Renaissance International Vice-President.

Sylvia Todd, star of Sylvia’s Super-Awesome Maker Show, came up with the idea for the WaterColorBot because she wanted to create an art robot and enter it in the RoboGames competition. She approached us at Evil Mad Scientist Laboratories about collaborating on the project, and we loved it.

Together we designed and built our first prototype in February, and had a nicely-working robot about a month later. When we realized that the project had appeal well beyond a one-off build, we started developing it into a kit. Sylvia exhibited her prototype at RoboGames (and won a Silver medal), and we also brought the WaterColorBot to Maker Faire, where thousands of people got to play with it.

Sylvia was also invited to the White House Science Fair in April, where she got to demonstrate the WaterColorBot for President Obama (pictures and media coverage here).