
When we think of the interaction between mankind and any type of artificial intelligence in mythology, literature, and pop culture, the outcomes are always negative for humanity, if not apocalyptic. In Greek mythology, the blacksmith god Hephaestus created automatons who served as his attendants, and one of his creations, Pandora, unleashed all the evils into the world. Mary Shelley’s 1818 novel Frankenstein presents the Monster as the product of the delusions of grandeur of a scientist, Victor Frankenstein. In pop culture, the most notable cases of a once-benign piece of technology running amok are the supercomputer HAL in 2001: A Space Odyssey and the intelligent machines that overthrow mankind in The Matrix. Traditionally, our stories about the god-like creative impulse of man bring about something that will overthrow the creators themselves.

The artificial intelligence-powered art exhibition Forging the Gods, curated by Julia Kaganskiy and currently on view at Transfer Gallery, attempts to portray the interaction between humans and machines in a more nuanced manner, showcasing how this relationship already permeates our everyday lives. The exhibition also shows how this relationship is, indeed, fully reflective of the human experience: machines are no more or less evil than we actually are.

Lauren McCarthy, with her works “LAUREN” (2017) and its follow-up “SOMEONE” (2019), riffs on the smart-home trend. In the former, she installs networked devices in the homes of volunteers, controls them remotely, and plays a human version of Alexa, reasoning that she will be better than Amazon’s virtual assistant because, being human, she can anticipate people’s needs. The follow-up, “SOMEONE,” was originally a live media performance consisting of a four-channel video installation (made to look like a booth one might find at The Wing) in which gallery-goers themselves played human versions of Alexa in the homes of volunteers, who would call for “SOMEONE” whenever they needed something from their smart-controlled devices. Unfortunately, what we see at Forging the Gods is recorded footage of the original run of the performance, so we have to forgo playing God by, say, making someone’s lighting system flicker annoyingly on and off.

The Department of Engineering at Aarhus University is coordinating a FET-Open-backed project to build an entirely new AI hardware technology using nano-scale spintronics that could radically change the way computers work. The project will develop a neuromorphic computing system using synaptic neurons implemented in spintronics: a novel AI hardware that can provide a framework for AI software in a physical system built like a human brain, boosting computer performance by up to 100,000 times.
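
The project’s spintronic devices are not described in detail here, but the “synaptic neuron” primitive that neuromorphic hardware implements physically can be illustrated in software. Below is a minimal leaky integrate-and-fire neuron sketch in Python; every parameter value and name is an illustrative assumption, not part of the Aarhus project.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron: a common software stand-in
# for the "synaptic neuron" primitive that neuromorphic hardware realizes
# physically. All parameters below are illustrative, not project values.

def simulate_lif(input_current, dt=1e-3, tau=20e-3, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Integrate input current over time and record spike times."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest while integrating the input.
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_threshold:          # threshold crossing -> emit a spike
            spike_times.append(step * dt)
            v = v_reset               # reset after spiking
    return spike_times

# Constant drive above threshold produces a regular spike train.
current = np.full(1000, 1.5)          # 1 second of input at dt = 1 ms
print(simulate_lif(current)[:5])
```

In hardware like the one the project proposes, this integrate-and-fire dynamic is carried out by the physics of the device itself rather than simulated step by step, which is where the claimed efficiency gain comes from.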

While Wuhan, China, was under quarantine, news surfaced of robots delivering food and, later, medical supplies. Meanwhile, in the United States, the French company NAVYA configured its autonomous passenger shuttles in Florida to transport COVID-19 tests from off-site test locations to the Mayo Clinic. As the weeks of stay-at-home orders and recommendations slip into months, the delivery robots once seen as a joke, a fad, or a nuisance have in some instances entered the public consciousness as important tools to combat the spread of coronavirus. The question is, will their usefulness extend post-lockdown?

👽 Facial recognition and COVID-19 in Moscow, Russia.

Fyodor R.


MOSCOW – The Russian capital is home to a network of 178,000 surveillance cameras. Thousands of these cameras are already connected to facial recognition software under a program called “Safe City.” Police claim the technology has helped arrest more than 300 people.

Now, as part of the response to COVID-19, authorities are trying to bring every surveillance camera into the facial recognition network. This Orwellian step is supposedly meant to catch people breaking quarantine.

At the end of January, before Moscow had any confirmed cases of coronavirus, the city purchased the latest version of NTechLab’s facial recognition software.

NTechLab claims its software can identify a face even when 40% or more of it is covered. We tried it, and even in a balaclava it still recognized a face.

When asked, co-founder Artyom Kukharenko failed to make the connection between his powerful software and mass surveillance. “Why should it be used for mass surveillance I don’t understand?”

“When the system becomes more transparent to the majority of city residents, this fear will go away,” he continued.

It is an engineer’s dream to build a robot as competent as an insect at locomotion, directed action, navigation, and survival in complex conditions. But as well as studying insects to improve robotics, in parallel, robot implementations have played a useful role in evaluating mechanistic explanations of insect behavior, testing hypotheses by embedding them in real-world machines. The wealth and depth of data coming from insect neuroscience hold the tantalizing possibility of building complete insect brain models. Robotics has a role to play in maintaining a focus on functional understanding—what do the neural circuits need to compute to support successful behavior?

Insect brains have been described as “minute structures controlling complex behaviors” (1): Compare the number of neurons in the fruit fly brain (∼135,000) to that in the mouse (70 million) or human (86 billion). Insect brain structures and circuits evolved independently to solve many of the same problems faced by vertebrate brains (or a robot’s control program). Despite the vast range of insect body types, behaviors, habitats, and lifestyles, there are many surprising consistencies across species in brain organization, suggesting that these might be effective, efficient, and general-purpose solutions.

Unraveling these circuits combines many disciplines, including painstaking neuroanatomical and neurophysiological analysis of the components and their connectivity. An important recent advance is the development of neurogenetic methods that provide precise control over the activity of individual neurons in freely behaving animals. However, the ultimate test of mechanistic understanding is the ability to build a machine that replicates the function. Computer models let researchers copy the brain’s processes, and robots allow these models to be tested in real bodies interacting with real environments (2). The following examples illustrate how this approach is being used to explore increasingly sophisticated control problems, including predictive tracking, body coordination, navigation, and learning.
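
To make the approach concrete, here is a minimal Python sketch of one such hypothesis: path integration, the running home-vector estimate that desert ants are thought to maintain, which has repeatedly been tested by embedding it in robot controllers. The step sizes, noise levels, and function names below are illustrative assumptions, not a specific published circuit model.

```python
import math
import random

# Schematic path integration: accumulate each outbound step as a vector so
# the agent always holds an estimate of the straight-line vector back to its
# nest. Noise levels and step sizes are assumed for illustration only.

def forage(n_steps=200, step_len=1.0, heading_noise=0.1):
    x = y = 0.0                  # true position of the agent
    px = py = 0.0                # path-integrator state (estimated position)
    heading = 0.0
    for _ in range(n_steps):
        heading += random.gauss(0.0, 0.5)          # meandering outbound walk
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
        # The integrator senses its own motion imperfectly (noisy compass).
        sensed = heading + random.gauss(0.0, heading_noise)
        px += step_len * math.cos(sensed)
        py += step_len * math.sin(sensed)
    # Home vector: the reverse of the integrated outbound vector.
    home_bearing = math.atan2(-py, -px)
    home_distance = math.hypot(px, py)
    return (x, y), home_bearing, home_distance

position, bearing, distance = forage()
print(f"true position {position}, estimated home bearing {bearing:.2f} rad, "
      f"distance {distance:.1f}")
```

Running a model like this on a physical robot, rather than only in simulation, exposes it to exactly the sensor noise and body dynamics that make the biological problem hard, which is the point of the robotics test bed described above.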

In new partnerships with Xenex and Brain Corp., AT&T is connecting IoT robots that aim to help hospitals and retail establishments like grocery stores keep facilities clean, kill germs, and keep shelves stocked more efficiently.

Chris Penrose, SVP of Advanced Solutions at AT&T, told FierceWireless that the robots are riding on the carrier’s 4G LTE network, rather than narrowband IoT (NB-IoT) or LTE-M networks. That’s because of the large amounts of data they need to push, along with latency and speed requirements for these particular use cases.

In the robotics space, AT&T is typically leaning more toward using LTE and potentially 5G in the future, Penrose noted.

On April 16, 2020, Intel and Udacity jointly announced their new Intel® Edge AI for IoT Developers Nanodegree program to train the developer community in deep learning and computer vision. If you are wondering where AI is headed, now you know: it’s headed to the edge. Edge computing is the concept of storing and processing data directly at the location where it is needed. The global edge computing market is forecast to reach $1.12 trillion by 2023.
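
As a toy illustration of that idea (not drawn from the Nanodegree curriculum), the Python sketch below contrasts streaming raw camera frames to a remote service with analyzing them on the device and transmitting only the small results. The frame size, frame rate, and the detect() stand-in are hypothetical.

```python
# Toy comparison of cloud vs. edge handling of a camera feed.
# Frame size, frame rate, and the detect() stand-in are all hypothetical.

FRAME_BYTES = 640 * 480 * 3        # one uncompressed VGA frame
FPS = 30
RESULT_BYTES = 64                  # rough size of one small result message

def detect(frame: bytes) -> dict:
    """Stand-in for an on-device inference step (e.g., person counting)."""
    return {"people": 2}           # dummy result

def cloud_upload(seconds: int) -> int:
    """Bytes sent if every raw frame is shipped to a remote service."""
    return seconds * FPS * FRAME_BYTES

def edge_upload(seconds: int) -> int:
    """Bytes sent if frames are analyzed locally and only results leave."""
    detect(b"\x00" * FRAME_BYTES)  # the heavy compute stays on the device
    return seconds * FPS * RESULT_BYTES

hour = 3600
print(f"cloud: {cloud_upload(hour) / 1e9:.1f} GB/hour uploaded")
print(f"edge:  {edge_upload(hour) / 1e9:.3f} GB/hour uploaded")
```

The comparison captures the trade-off behind the trend: moving computation next to the data source exchanges network bandwidth and round-trip latency for on-device compute.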

There’s a real need for developers worldwide in this new market. Intel and Udacity aim to train 1 million developers.

AI Needs To Be On the Edge.

Facebook has developed a new method to play out the consequences of its code.

The context: Like any software company, the tech giant needs to test its product any time it pushes updates. But the sorts of debugging methods that normal-size companies use aren’t really enough when you’ve got 2.5 billion users. Such methods usually focus on checking how a single user might experience the platform and whether the software responds to those individual users’ actions as expected. In contrast, as many as 25% of Facebook’s major issues emerge only when users begin interacting with one another. It can be difficult to see how the introduction of a feature or updates to a privacy setting might play out across billions of user interactions.

SimCity: In response, Facebook built a scaled-down version of its platform to simulate user behavior. Called WW, it helps engineers identify and fix the undesired consequences of new updates before they’re deployed. It also automatically recommends changes that can be made to the platform to improve the community experience.
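
WW itself is not public, but the general idea of catching interaction-level bugs with simulated users can be sketched with a tiny agent-based model. Everything below, including the agent behaviors, the “scammer” role, and the setting under test, is invented for illustration and is not Facebook’s implementation.

```python
import random

# Tiny agent-based simulation in the spirit of testing a platform change
# with bots instead of real users. The agents, the "scammer" behavior, and
# the feature flag are all invented for illustration.

class Agent:
    def __init__(self, name, scammer=False):
        self.name = name
        self.scammer = scammer
        self.inbox = []

def run_simulation(n_agents=100, n_rounds=50,
                   allow_stranger_messages=True, seed=0):
    """Count scam messages delivered under a hypothetical privacy setting."""
    rng = random.Random(seed)
    agents = [Agent(f"user{i}", scammer=(i % 20 == 0)) for i in range(n_agents)]
    friends = {a.name: {rng.choice(agents).name for _ in range(5)} for a in agents}
    scam_delivered = 0
    for _ in range(n_rounds):
        for a in agents:
            if not a.scammer:
                continue
            target = rng.choice(agents)
            # The update under test: may strangers message anyone?
            if allow_stranger_messages or a.name in friends[target.name]:
                target.inbox.append("scam")
                scam_delivered += 1
    return scam_delivered

# Compare the platform before and after the hypothetical update.
print("strangers allowed:", run_simulation(allow_stranger_messages=True))
print("friends only:     ", run_simulation(allow_stranger_messages=False))
```

Even at this toy scale, the harmful behavior only shows up once agents interact under the new setting, which is exactly the kind of emergent effect a single-user test would miss.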