A tetraplegic man has been able to move all four of his paralyzed limbs by using a brain-controlled robotic suit, researchers have said.

The 28-year-old man from Lyon, France, known as Thibault, was paralyzed from the shoulders down after falling 40 feet from a balcony, severing his spinal cord, the AFP news agency reported.

He had some movement in his biceps and left wrist, and was able to operate a wheelchair using a joystick with his left arm.

It’s easy to imagine that advances in AI will have an impact on strategy games and digital versions of board games like Chess and Go, but one of the most interesting implementations of AI technology I’ve seen so far is a text adventure.

AI Dungeon 2 by Nick Walton uses OpenAI to simulate an old-school text adventure of the Zork variety, only instead of having to read the designer’s mind to figure out what to type to use this thing on that thing, you write plain English and get results. It helps to start sentences with verbs, but you’ll get a response to basically anything, and that response is likely to be surprising. I played a wizard exploring a ruin, and within a handful of turns I’d found out I was responsible for the state of these ruins and confronted a younger version of myself.

Dmitry Kaminskiy speaks as though he were trying to unload everything he knows about the science and economics of longevity—from senolytics research that seeks to stop aging cells from spewing inflammatory proteins and other molecules to the trillion-dollar life extension industry that he and his colleagues are trying to foster—in one sitting.

At the heart of the discussion with Singularity Hub is the idea that artificial intelligence will be the engine that drives breakthroughs in how we approach healthcare and healthy aging—a concept with little traction even just five years ago.

“At that time, it was considered too futuristic that artificial intelligence and data science … might be more accurate compared to any hypothesis of human doctors,” said Kaminskiy, co-founder and managing partner at Deep Knowledge Ventures, an investment firm that is betting big on AI and longevity.

AI is Pandora’s box, s’ true…

On the one hand we can’t close it and on the other hand our current direction is not good. And this is gonna get worse as AI starts taking its own ‘creative’ decisions… the human overlords will claim it has nothing to do with them if and when things go wrong.

The solution for commercialization is actually quite simple.

If my dog attacks a child on the street when off the leash, who is responsible?

The owner, of course. Although I dislike that word when it comes to anything living; maybe the dog’s human representative is a better term.

Business must be held accountable for its AI dogs off the leash; if necessary, keep the leash on for longer. So no need to stop commercialization, just reinstate clear accountability, something which seems to be lacking today.

And in my view, this would actually be more profitable… Everyone happy then.


Getting its world premiere at documentary festival IDFA in Amsterdam, Tonje Hessen Schei’s gripping AI doc “iHuman” drew an audience of more than 700 to a 10 a.m. Sunday screening at the incongruously old-school Pathé Tuschinski cinema. Many had their curiosity piqued by the film’s timely subject matter—the erosion of privacy in the age of new media, and the terrifying leaps being made in the field of machine intelligence—but it’s fair to say that quite a few were drawn by the promise of a Skype Q&A with National Security Agency whistleblower Edward Snowden, who made headlines in 2013 by leaking confidential U.S. intelligence to the U.K.’s Guardian newspaper.

Snowden doesn’t feature in the film, but it couldn’t exist without him: “iHuman” is an almost exhausting journey through all the issues that Snowden was trying to warn us about, starting with our civil liberties. Speaking after the film—which he “very much enjoyed”—Snowden admitted that the subject was still raw for him, and that the writing of his autobiography (this year’s “Permanent Record”), had not been easy. “It was actually quite a struggle,” he revealed. “I had tried to avoid writing that book for a very long time, but when I looked at what was happening in the world and [saw] the direction of developments since I came forward [in 2013], I was haunted by these developments—so much so that I began to consider: what were the costs of silence? Which is [something] I understand very well, given my history.”

Some argue that only a “Sputnik” moment will wake the American people and government to act with purpose, just as the 1957 Soviet launch of a satellite catalyzed new educational and technological investments. We disagree. We have been struck by the broad, bipartisan consensus in America to “get AI right” now. We are in a rare moment when challenge, urgency, and consensus may just align to generate the energy we need to extend our AI leadership and build a better future.


Congress asked us to serve on a bipartisan commission of tech leaders, scientists, and national security professionals to explore the relationship between artificial intelligence (AI) and national security. Our work is not complete, but our initial assessment is worth sharing now: in the next decade, the United States is in danger of losing its global leadership in AI and its innovation edge. That edge is a foundation of our economic prosperity, military power and ultimately the freedoms we enjoy.

As we consider the leadership stakes, we are struck by AI’s potential to propel us towards many imaginable futures. Some hold great promise; others are concerning. If past technological revolutions are a guide, the future will include elements of both.

Some of us have dedicated our professional lives to advancing AI for the benefit of humanity. AI technologies have been harnessed for good in sectors ranging from health care to education to transportation. Today’s progress only scratches the surface of AI’s potential. Computing power, large data sets, and new methods have led us to an inflection point where AI and its sub-disciplines (including machine vision, machine learning, natural language understanding, and robotics) will transform the world.

Computer scientists from Duke University and Harvard University have joined with physicians from Massachusetts General Hospital and the University of Wisconsin to develop a machine learning model that can predict which patients are most at risk of having destructive seizures after suffering a stroke or other brain injury.

A point system they’ve developed helps determine which patients should receive expensive continuous electroencephalography (cEEG) monitoring. If implemented nationwide, the authors say, their model could help hospitals monitor nearly three times as many patients, saving many lives as well as $54 million each year.
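As a rough illustration of how a clinical point system like this works, here is a minimal sketch in Python. The risk factors, point weights, and monitoring cutoff below are illustrative assumptions for demonstration only, not the published model from the paper:

```python
# Illustrative sketch of an interpretable point-based risk score.
# Feature names, point values, and the threshold are assumptions,
# NOT the actual model described in the article.

def seizure_risk_points(patient):
    """Sum integer points for each (hypothetical) risk factor present."""
    score = 0
    if patient.get("prior_seizure"):
        score += 1  # assumed weight
    if patient.get("brief_rhythmic_discharges"):
        score += 2  # assumed weight
    if patient.get("epileptiform_discharges"):
        score += 1  # assumed weight
    return score

def needs_ceeg(patient, threshold=2):
    """Recommend continuous EEG monitoring above an assumed cutoff."""
    return seizure_risk_points(patient) >= threshold

patient = {"prior_seizure": True, "brief_rhythmic_discharges": True}
print(seizure_risk_points(patient))  # 3 under these assumed weights
print(needs_ceeg(patient))           # True
```

The appeal of such scores in practice is that clinicians can compute them at the bedside by counting observable factors, which is what makes the approach "interpretable" compared with a black-box classifier.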

A paper detailing the methods behind the interpretable machine learning approach appeared online June 19 in the Journal of Machine Learning Research.

Biological weapons could be built which target individuals in a specific ethnic group based on their DNA, a report by the University of Cambridge has warned.

Researchers from Cambridge’s Centre for the Study of Existential Risk (CSER) said the government was failing to prepare for ‘human-driven catastrophic risks’ that could lead to mass harm and societal collapse.

In recent years, advances in science such as genetic engineering, artificial intelligence (AI), and autonomous vehicles have opened the door to a host of new threats.