Even after decades of unprecedented growth in computational power, the human brain still holds many advantages over modern computing technologies. Our brains are extremely efficient at many cognitive tasks and, unlike standard computer chips, do not separate memory from computing.
Over the last decade, a new paradigm has emerged: neuromorphic computing, inspired by the brain's neural networks and built on energy-efficient hardware for information processing.
To create devices that mimic what occurs in the brain's neurons and synapses, researchers must overcome a fundamental molecular engineering challenge: designing devices that switch between different resistive states in a controllable, energy-efficient way when triggered by incoming stimuli.
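To make that challenge concrete, here is a toy sketch of the behavior being engineered: a two-terminal element whose conductance (its "synaptic weight") moves between resistive states only when an incoming pulse crosses a threshold. All constants and names here are illustrative assumptions, not a model of any specific device.

```python
# Toy resistive-switching element: conductance steps toward a low- or
# high-resistance state when stimuli exceed a threshold. Illustrative only.
G_MIN, G_MAX = 1e-6, 1e-3   # bounding conductance states (siemens), assumed values
THRESHOLD = 0.5             # pulse amplitude needed to trigger a transition (V), assumed

def apply_pulse(g: float, v: float, rate: float = 0.2) -> float:
    """Nudge conductance toward G_MAX on strong positive pulses, G_MIN on negative ones."""
    if v > THRESHOLD:
        g += rate * (G_MAX - g)   # potentiation: move toward the low-resistance state
    elif v < -THRESHOLD:
        g += rate * (G_MIN - g)   # depression: move toward the high-resistance state
    return g                      # sub-threshold stimuli leave the state unchanged

g = G_MIN
for v in [0.8, 0.8, 0.8, -0.9]:   # a train of incoming stimuli
    g = apply_pulse(g, v)
    print(f"pulse {v:+.1f} V -> conductance {g:.2e} S")
```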
The COVID-19 crisis has driven a significant increase in the use of cyberspace, enabling people to work together across distances and to interact with remote environments and individuals by embodying avatars, whether virtual characters or physical surrogates such as robots. However, the limits of avatar embodiment remain unclear, as does how such embodiment affects human behavior.
Therefore, a research team comprising Takayoshi Hagiwara (graduate student) and Professor Michiteru Kitazaki from Toyohashi University of Technology; Dr. Ganesh Gowrishankar (senior researcher) from UM-CNRS LIRMM; Professor Maki Sugimoto from Keio University; and Professor Masahiko Inami from The University of Tokyo set out to develop a novel collaboration method based on a shared avatar that two individuals control concurrently in VR, and to investigate human motor behavior while they control it.
Full-body movements of two participants were monitored via a motion-capture system, and the shared avatar's movements were computed as the average of the two participants' movements. Twenty participants (10 dyads) were asked to perform reaching movements with their right hand toward target cubes presented at various locations. Reaction times with the shared avatar were shorter than the participants' individual reaction times, and the avatar's hand movements were straighter and less jerky than those of the participants. Participants reported a sense of agency and body ownership toward the shared avatar even though each contributed only part of its movement.
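As a minimal sketch of that averaging step (the function name and joint-array layout are assumptions for illustration, not taken from the study):

```python
import numpy as np

def shared_avatar_pose(pose_a: np.ndarray, pose_b: np.ndarray) -> np.ndarray:
    """Blend two participants' tracked joint positions into one shared-avatar pose.

    pose_a, pose_b: arrays of shape (n_joints, 3) holding each participant's
    joint positions in a common world frame (hypothetical layout).
    """
    return 0.5 * (pose_a + pose_b)  # the study averaged the two participants' movements

# Example: two slightly different right-hand reach positions blended into one
pose_a = np.array([[0.30, 1.10, 0.42]])
pose_b = np.array([[0.34, 1.06, 0.40]])
print(shared_avatar_pose(pose_a, pose_b))  # [[0.32 1.08 0.41]]
```

Averaging in this way also hints at why the avatar's trajectories were smoother: motion noise that is uncorrelated between the two participants partially cancels in the mean.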
An artificial intelligence technique—machine learning—is helping accelerate the development of highly tunable materials known as metal-organic frameworks (MOFs) that have important applications in chemical separations, adsorption, catalysis, and sensing.
Trained on data about the properties of more than 200 existing MOFs, the machine learning platform helps guide the development of new materials by predicting an often-essential property: water stability. With guidance from the model, researchers can avoid the time-consuming task of synthesizing and then experimentally testing candidate MOFs for aqueous stability. Researchers are already expanding the model to predict other important MOF properties.
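In spirit, the workflow resembles a standard supervised-learning loop: fit a classifier on descriptors of labeled MOFs, validate it, then screen candidates before synthesis. The sketch below uses stand-in descriptors, stand-in labels, and a generic classifier, none of which come from the paper:

```python
# Minimal sketch of the modeling idea, not the authors' actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                   # stand-in descriptors for ~200 known MOFs
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # stand-in water-stability labels

model = RandomForestClassifier(n_estimators=300, random_state=0)
print(cross_val_score(model, X, y, cv=5).mean())  # validate before trusting predictions
model.fit(X, y)

# Screen a hypothetical candidate MOF before committing to synthesis and testing
candidate = rng.normal(size=(1, 3))
print(model.predict_proba(candidate))  # predicted probability of each stability class
```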
Supported by the Office of Science’s Basic Energy Sciences program within the U.S. Department of Energy (DOE), the research was reported Nov. 9 in the journal Nature Machine Intelligence. The research was conducted in the Center for Understanding and Control of Acid Gas-Induced Evolution of Materials for Energy (UNCAGE-ME), a DOE Energy Frontier Research Center located at the Georgia Institute of Technology.
A unique type of modular self-reconfiguring robotic system has been unveiled. The term is a mouthful, but it basically refers to a robot that can construct itself out of modules that connect to one another to achieve a certain task.
There has been great interest in such machines, also referred to as MSRRs, in recent years. One recent project, called simply Space Engine, can construct its own physical environment to meet living, working, and recreational needs. It does so by generating its own kinetic forces to move and shape those spaces, adding and removing electromagnets to shift and assemble modules into optimal room shapes.
MSRRs nevertheless face some constraints. They require gender-opposite connector components, which is limiting in some circumstances, and the modules must coordinate their trajectories to connect efficiently during self-assembly. Those tasks are time-consuming, and success rates for connections between modules have not been consistently high.
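The trajectory-coordination part can be illustrated as a generic assignment problem: pair each free module with a docking site so total travel is minimized. This is a toy sketch using the Hungarian algorithm with made-up positions, not any particular MSRR's planner:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

modules = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # current module positions (m), assumed
sites = np.array([[2.0, 2.0], [2.0, 0.0], [0.0, 2.0]])    # target docking positions (m), assumed

# cost[i, j] = distance module i must travel to reach docking site j
cost = np.linalg.norm(modules[:, None, :] - sites[None, :, :], axis=-1)
rows, cols = linear_sum_assignment(cost)  # minimize total travel across all modules
for m, s in zip(rows, cols):
    print(f"module {m} -> site {s}, distance {cost[m, s]:.2f} m")
```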
Gonzalez thinks that Tesla taxis could help reinvigorate the city’s yellow-cab industry, which has taken a major hit from ride-hailing services like Uber, Via, and Lyft. He also predicts that the city could, for sustainability reasons, start mandating electric cabs, so he’s looking to get ahead of the curve, even if the commercial charging infrastructure isn’t quite there yet.
Drive Sally plans to bring hundreds of Teslas to New York's streets in the near future, but for now, the company is still working out the kinks. Gonzalez suspects that the EVs may be better suited to for-hire "black cars" than to yellow cabs, and he also said that the more spacious Model Y would likely work better as a cab than the Model 3, but Model Ys are still too expensive.
The year is coming to a close and it’s safe to say Elon Musk’s prediction that his company would field one million “robotaxis” by the end of 2020 isn’t going to come true. In fact, so far, Tesla’s managed to produce exactly zero self-driving vehicles. And we can probably call off the singularity too. GPT-3 has been impressive, but the closer machines get to aping human language the easier it is to see just how far away from us they really are.
So where does that leave us, ultimately, when it comes to the future of AI? That depends on your outlook. Media hype and big tech's advertising machine have set us up for heartbreak when we compare the reality of 2020 to our 2016-era dreams of fully autonomous flying cars and hyper-personalized digital assistants capable of managing the workload of our lives.
But, if you’re gauging the future of AI from a strictly financial, marketplace point of view, there’s an entirely different outlook to consider. American rock band Timbuk 3 put it best when they sang “the future’s so bright, I gotta wear shades.”
SAN FRANCISCO – L3Harris Technologies will help the U.S. Defense Department extract information and insight from satellite and airborne imagery under a three-year U.S. Army Research Laboratory contract.
L3Harris will develop and demonstrate an artificial intelligence-machine learning interface for Defense Department applications under the multimillion-dollar contract announced Oct. 26.
“L3Harris will assist the Department of Defense with the integration of artificial intelligence and machine learning capabilities and technologies,” Stacey Casella, general manager for L3Harris’ Geospatial Processing and Analytics business, told SpaceNews. L3Harris will help the Defense Department embed artificial intelligence and machine learning in its workflows “to ultimately accelerate our ability to extract usable intelligence from the pretty expansive set of remotely sensed data that we have available today from spaceborne and airborne assets,” she added.
What rights does a robot have? If our machines become intelligent in the science-fiction way, that’s likely to become a complicated question — and the humans who nurture those robots just might take their side.
Ted Chiang, a science-fiction author of growing renown with long-lasting connections to Seattle’s tech community, doesn’t back away from such questions. They spark the thought experiments that generate award-winning novellas like “The Lifecycle of Software Objects,” and inspire Hollywood movies like “Arrival.”
📝 The paper “Emergent Tool Use from Multi-Agent Interaction” is available here: https://openai.com/blog/emergent-tool-use/