
There is a spacecraft so far from Earth that it has become the first human-made object to reach interstellar space. It is traveling out there among the stars, far from Earth, far from home. Voyager 1 will never return to our star system, let alone Earth. Its mission: to explore the most distant reaches of space. Continue reading “15 Things You Should Know About Voyager 1, Mankind’s First Interstellar Spaceship”

Artificial intelligence doesn’t hold a candle to the human capacity for harm.

Over the last few years, there has been a lot of talk about the threat of artificial general intelligence (AGI). An AGI is essentially an artificial superintelligence. It is a system that is able to understand — or learn — any intellectual task that a human can. Experts from seemingly every sector of society have spoken out about these kinds of AI systems, depicting them as Terminator-style robots that will run amok and cause massive death and destruction.

Elon Musk, the SpaceX and Tesla CEO, has frequently railed against the creation of AGI and cast such superintelligent systems in apocalyptic terms. At SXSW in 2018, he called digital superintelligences “the single biggest existential crisis that we face and the most pressing one,” and he said these systems will ultimately be more deadly than a nuclear holocaust. The late Stephen Hawking shared these fears, telling the BBC in 2014 that “the development of full artificial intelligence could spell the end of the human race.”


Fears of artificial intelligence running amok make for compelling apocalypse narratives, but the real dangers of artificial intelligence come more from humans than machines.

“Having robots working without human direction, for several days or weeks or years, is something we are worried about,” said robotics researcher Guilherme Pereira. “The problem is that for a robot working long-term, say days at a time, the environment will change. Over years, the environment will change even more. In the forest, you will have plants and trees growing, seasonal changes, sometimes snow, sometimes sunshine, sometimes rain. And indoors, furniture gets moved around, people will be moving around, even other robots will present obstacles.”

“If a robot recognizes a chair and a table, it knows it’s in the dining room, for example,” he said. “If that changes, the robot will have a rough time localizing itself and figuring that out.”
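The chair-and-table example above can be sketched as a toy semantic-localization routine: map the objects a robot’s perception system detects onto room “signatures” and pick the best match. The object lists and rooms below are illustrative assumptions, not drawn from any real localization system.

```python
# Toy sketch of semantic localization: inferring a room label from the
# set of objects a robot's perception system reports. The signatures
# below are made up for illustration.

ROOM_SIGNATURES = {
    "dining room": {"chair", "table"},
    "kitchen": {"stove", "sink", "fridge"},
    "office": {"desk", "chair", "monitor"},
}

def infer_room(detected_objects):
    """Return the room whose signature best overlaps the detections."""
    detected = set(detected_objects)
    best_room, best_score = None, 0.0
    for room, signature in ROOM_SIGNATURES.items():
        score = len(detected & signature) / len(signature)
        if score > best_score:
            best_room, best_score = room, score
    return best_room, best_score

room, confidence = infer_room(["chair", "table"])
# Full overlap with the dining-room signature gives confidence 1.0; if
# the furniture moves and only "chair" is seen, the score drops and the
# robot's location estimate becomes ambiguous.
```

This also shows why environmental change is hard: the scoring degrades gracefully as objects disappear, but below some confidence the robot simply cannot tell which room it is in.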


We’re in the midst of an NFT boom, but that won’t always be the case. Today, NFTs are being flipped quickly — much like house flipping in the lead up to the 2007-08 financial crisis.

That doesn’t mean all NFT value is mere speculation, just that we need to be cautious and prudent when evaluating it. Artificial intelligence (AI) is one tool for helping identify and produce valuable art NFTs. Let’s dive into that here (but see Christian Jensen’s recent article for a broader background on the investment lingo in NFT land).

There are widely cited forecasts projecting that information and communications technology (ICT) energy consumption will keep accelerating through the 2020s; a 2018 Nature article estimated that if current trends continue, ICT could consume more than 20% of global electricity demand by 2030. At several industry events I have heard talks arguing that energy consumption will be one of the key limits on data center performance. NVIDIA’s latest GPU solutions use 400+ W processors, and that figure could more than double in future AI processor chips. Solutions that accelerate important compute functions while consuming less energy will be important for more sustainable and economical data centers.
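To make the 400+ W figure concrete, here is a back-of-envelope estimate of annual energy use for a fleet of such accelerators. Only the per-device wattage comes from the article; the device count and always-on duty cycle are illustrative assumptions.

```python
# Back-of-envelope estimate of annual energy consumed by a fleet of AI
# accelerators. Only the 400 W per-device draw is from the article; the
# fleet size and 24/7 utilization are hypothetical.

ACCELERATOR_WATTS = 400        # per-device draw cited in the article
DEVICES = 10_000               # hypothetical data-center fleet
HOURS_PER_YEAR = 24 * 365

energy_kwh = ACCELERATOR_WATTS * DEVICES * HOURS_PER_YEAR / 1000
print(f"{energy_kwh:,.0f} kWh/year")  # ~35 million kWh for this fleet
```

Doubling the per-chip power in a future generation doubles this figure directly, which is why lower-energy compute matters at data-center scale.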

Lightmatter’s Envise chip (shown below) is a general-purpose machine learning accelerator that combines photonics (PIC) and CMOS transistor-based devices (ASIC) in a single compact module. The device uses silicon photonics for high-performance AI inference tasks and consumes much less energy than CMOS-only solutions, helping to reduce the projected power load from data centers.


Lightmatter has a roadmap for even faster processing using more colors for parallel processing channels with each color acting as a separate virtual computer.
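The “each color as a separate virtual computer” idea can be modeled in software: one shared weight matrix is applied simultaneously to several input vectors, one per optical wavelength. This is a pure-Python stand-in for what the photonic hardware would do in analog, with made-up weights and inputs.

```python
# Toy model of wavelength-division parallelism: the same weight matrix
# is applied to several inputs at once, one per optical "color". The
# matrix and vectors are illustrative, not Lightmatter's design.

WEIGHTS = [[0.5, -1.0],
           [2.0,  0.25]]

def matvec(matrix, vector):
    """Plain matrix-vector product (the core AI inference primitive)."""
    return [sum(w * x for w, x in zip(row, vector)) for row in matrix]

def wdm_apply(matrix, vectors_by_color):
    """Each color acts as an independent channel sharing one set of weights."""
    return {color: matvec(matrix, v) for color, v in vectors_by_color.items()}

results = wdm_apply(WEIGHTS, {"red": [1.0, 2.0], "blue": [0.0, 4.0]})
# red  -> [-1.5, 2.5]; blue -> [-4.0, 1.0]
```

In software these channels run one after another; the point of the photonic approach is that the colors pass through the same physical weights at the same time.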

Nick said that, in addition to data center applications for Envise, he could see the technology enabling autonomous electric vehicles, which require high-performance AI but are constrained by battery power; lower compute power draw makes it easier to provide compelling range per charge. In addition to the Envise module, Lightmatter also offers optical interconnect technology that it calls Passage.

Lightmatter is making optical AI processors that can provide fast results with less power consumption than conventional CMOS products. Their compute module combines CMOS logic and memory with optical analog processing units useful for AI inference, natural language processing, financial modelling, and ray tracing.

Using a brain computer interface, the man cut and ate food with thought-controlled robotic hands. A man paralyzed from the neck down has used two robot arms to cut food and serve himself — a big step in the field of mind-controlled prosthetics.

Robert “Buz” Chmielewski, age 49, has barely been able to move his arms since a surfing accident paralyzed him as a teenager. But in January of 2019, he got renewed hope, when doctors implanted two sets of electrodes in his brain, one in each hemisphere.

The goal was that this brain computer interface would help Chmielewski regain some sensation in his hands, enable him to mentally control two prosthetic arms, and even feel what he is touching.

Scientists have been trying for decades to find ways to predict epileptic seizures, with little success; they are almost always unpredictable. The best techniques we have now — machine learning and patient self-awareness — give us only minutes’ notice ahead of a seizure.

Now, for the first time, a study has shown that brain activity could be used to forecast the onset of epileptic seizures several days in advance.

A New Hope

A team of researchers looked into data from brain implants designed to monitor and prevent seizures. Buried in the data, they found patterns of brain activity that predicted seizure risk a day or more in advance. The researchers say this could be used to create an epileptic seizure forecasting tool — giving new hope to patients with epilepsy.
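A forecasting tool of the kind described could, in spirit, work like the sketch below: track a daily measure of epileptiform brain activity and flag days where a trailing average crosses a risk threshold. The data, window, and threshold are entirely made up; the study’s actual features and models are not described in this excerpt.

```python
# Illustrative sketch of multi-day seizure forecasting: flag a day as
# high-risk when the trailing mean of a daily "epileptiform activity"
# count exceeds a threshold. All numbers here are invented toy data.

def risk_flags(daily_activity, window=3, threshold=5.0):
    """Flag each day whose trailing-window mean exceeds the threshold."""
    flags = []
    for i in range(len(daily_activity)):
        recent = daily_activity[max(0, i - window + 1): i + 1]
        flags.append(sum(recent) / len(recent) > threshold)
    return flags

activity = [2, 3, 4, 6, 8, 9]   # interictal spikes per day (toy data)
print(risk_flags(activity))     # the rising trend trips the flag late in the week
```

The appeal of day-scale forecasting is visible even in this toy: a slow upward trend becomes detectable well before any single-day spike would be.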

With the help of robotics specialists, we can separate the truth from the hype.

Elon Musk has announced his plans for a new Tesla humanoid robot that will excel at “mundane tasks,” but he’s making some common robotics mistakes with his grand plans.

What does the future of a Tesla robot look like? With the help of a couple of robotics specialists, we can separate the truth from the hype in Musk’s claims.

What can we expect from a true humanoid robot?
Making a robot look and even behave like a human figure brings many programming challenges. Will the Tesla bot manage bipedal locomotion, the complex two-legged walking that humans have perfected over millions of years? (Or dance like the very obviously human performer in a Tesla robot suit did at Tesla AI Day, when Musk made the announcement?) Northwestern University robotics professor Michael Peshkin said tasks like these are often underestimated by the public.

“A baby spends several years learning how to move their body around,” Peshkin said. “We never appreciate how sophisticated people are to begin with. It’s the things that babies can do that are so hard for robots.”

In social deduction games, groups of players attempt to decipher each other’s hidden roles. They need to observe the other players’ actions to deduce their roles while still hiding their own. Essentially, to succeed, a player must learn about the other agents from various sources while remaining anonymous, and must work cooperatively with teammates against the other team.

Hidden Agenda

DeepMind and Harvard’s Hidden Agenda is a social deduction game for training multiple agents in two fundamental groups: ‘Crewmates’ and ‘Imposters’. Crewmates have a numerical advantage, with the goal of refueling their ship using energy cells scattered around the map; Imposters have an informational advantage, with the goal of halting the Crewmates. That is, the Crewmates are unaware of the roles of the other players, but the Imposters have this knowledge. At the start of each episode, each player is randomly assigned a role and a colour for their avatar and initialised to a location on the game map.
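The episode setup just described can be sketched as a small environment-reset routine: randomly assign roles (more Crewmates than Imposters), colours, and spawn locations. The counts, colours, and map coordinates below are illustrative assumptions, not DeepMind’s actual Hidden Agenda implementation.

```python
# Minimal sketch of the episode initialisation described above: each
# player gets a random hidden role, a distinct colour, and a spawn
# point. All specifics here are hypothetical.
import random

COLOURS = ["red", "blue", "green", "yellow", "purple"]
SPAWN_POINTS = [(0, 0), (1, 3), (4, 2), (5, 5), (2, 4)]

def init_episode(n_players=5, n_imposters=1, seed=None):
    rng = random.Random(seed)
    roles = ["Imposter"] * n_imposters + ["Crewmate"] * (n_players - n_imposters)
    rng.shuffle(roles)                       # hide who got which role
    colours = rng.sample(COLOURS, n_players)     # distinct avatar colours
    spawns = rng.sample(SPAWN_POINTS, n_players) # distinct start locations
    return [{"role": r, "colour": c, "spawn": s}
            for r, c, s in zip(roles, colours, spawns)]

players = init_episode(seed=42)
# Roles are hidden state: Crewmate agents cannot observe them, while
# Imposter agents can — the informational asymmetry the game is built on.
```

Randomising roles each episode is what forces agents to deduce teams from behaviour rather than memorising who is who.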