
The Science of Mind Reading

Researchers are pursuing age-old questions about the nature of thoughts—and learning how to read them.

The New Yorker:


James Somers writes about researchers in the fields of neuroscience and A.I. pursuing age-old questions about the nature of thoughts—and learning how to read them.

This week, Amazon Web Services (AWS) kicked off its tenth re:Invent conference, the event where it typically announces the biggest changes to the cloud computing industry’s dominant platform. This year’s news includes faster chips, more aggressive artificial intelligence, more developer-friendly tools, and even a bit of quantum computing for those who want to explore its ever-growing potential.

Amazon is working to lower costs by boosting the performance of its hardware. Its new generation of machines powered by the third generation of AMD’s EPYC processors, the M6a, is touted as offering a 35% boost in price/performance over the previous generation of M5a machines built with the second-generation EPYC chips. They’ll be available in sizes ranging from two virtual CPUs with 8GB of RAM (m6a.large) up to 192 virtual CPUs and 768GB of RAM (m6a.48xlarge).
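For readers who want to try the new family, requesting an M6a instance looks the same as for any other type; only the instance-type string changes. Here is a minimal boto3 sketch (the AMI ID and region are placeholders, not details from the announcement):

```python
# Minimal sketch: launching one of the new M6a instances with boto3.
# The AMI ID and region below are illustrative placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID; substitute your own
    InstanceType="m6a.large",         # 2 vCPUs / 8GB, the smallest size listed
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```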

AWS also notes that the chips will boast “always-on memory encryption” and rely on custom circuitry for faster encryption and decryption. The feature is a nod to users who worry about sharing hardware in the cloud and, perhaps, exposing their data.

Researchers have developed a new approach to machine learning that ‘learns how to learn’ and out-performs current machine learning methods for drug design, which in turn could accelerate the search for new disease treatments.

The method, called transformational machine learning (TML), was developed by a team from the UK, Sweden, India and the Netherlands. It learns from multiple problems and improves its performance as it learns.

TML could accelerate the identification and production of new drugs by improving the machine learning systems used to identify them. The results are reported in the Proceedings of the National Academy of Sciences.
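In rough outline, the idea is that models trained on related problems become feature generators for a new one: each example is re-described by the predictions those models make for it. Below is a minimal sketch of that core loop; the synthetic data, model choice, and shapes are illustrative assumptions, not details from the paper.

```python
# Sketch of the TML idea: base models trained on related tasks
# become feature generators for a new task.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Pretend we have k related drug-design tasks sharing a descriptor space.
k, n, d = 5, 200, 30
related_tasks = [(rng.normal(size=(n, d)), rng.normal(size=n)) for _ in range(k)]

# Step 1: train one base model per related task (ordinary supervised learning).
base_models = [
    RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    for X, y in related_tasks
]

# Step 2: for the new task, re-describe each example by the vector of
# predictions the base models make for it, then learn on that representation.
def tml_features(X):
    return np.column_stack([m.predict(X) for m in base_models])

X_new, y_new = rng.normal(size=(n, d)), rng.normal(size=n)
final_model = RandomForestRegressor(n_estimators=100, random_state=0)
final_model.fit(tml_features(X_new), y_new)

# Unseen molecules go through the same transformation before prediction.
X_test = rng.normal(size=(10, d))
print(final_model.predict(tml_features(X_test)))
```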

Researchers have long strived to develop computers that work as energy-efficiently as our brains. A study led by researchers at the University of Gothenburg has succeeded for the first time in combining a memory function with a calculation function in the same component. The discovery opens the way for more efficient technologies in everything from mobile phones to self-driving cars.

In recent years, computers have been able to tackle advanced cognitive tasks, like language and image recognition or displaying superhuman chess skills, thanks in large part to artificial intelligence (AI). At the same time, the human brain is still unmatched in its ability to perform tasks effectively and energy-efficiently.

“Finding new ways of performing calculations that resemble the brain’s energy-efficient processes has been a major goal of research for decades. Cognitive tasks, like image and voice recognition, require significant computer power, and mobile applications, in particular, like mobile phones, drones and satellites, require energy efficient solutions,” says Johan Åkerman, professor of applied spintronics at the University of Gothenburg.

TRU Community Care in Lafayette hosted the unveiling of a brand-new technology in the medical field — a humanoid robot that can perform basic medical tasks.

Beyond Imagination, an AI company based in Colorado Springs, visited the Lafayette hospice center to test out the robot, named BEOMNI.

“We are excited that TRU sees the almost limitless potential of our humanoid robots in health care and has agreed to run this first pilot study with us. We look forward to partnering with them to bring a highly effective solution to market,” said inventor and CEO Dr. Harry Kloor.

Useful for scanning underground structures and caves, and maybe for scanning buildings and security work, though doors would be a problem. It is also too loud, but it would be a nice starting point for an ion-drive flight system.


By Jim Magill

Looking like a micro-sized version of the Death Star, the Dronut X1, which Boston-based start-up Cleo Robotics released for commercial use earlier this month, is the first professional-grade bi-rotor ducted-fan drone – a drone without exposed rotor blades – built to conduct inspections in close-quartered and hazardous environments.

Its unique design, featuring hidden propellers and rounded form, means the Dronut is collision-tolerant and can be operated near sensitive equipment, Cleo Robotics’ CEO and co-founder Omar Eleryan said in an interview.

There was some speculation in the comment section that we must have a large air compressor or some other oversized power system for our robotic arm that we supposedly don’t show you.

So we packed our Clone in a suitcase and filmed a little presentation for you. The whole thing weighs 8 kg (18 lbs). We could fit everything inside, but we separated the electrical system from the water. And this is still just the beginning of the miniaturization process; we must and we will make it portable enough that humanoid robots can help people in everyday life.

After a year of development, we have finished the 11th prototype of the robotic arm. We are starting a new one from scratch, more biomimetic and powerful than ever!

Sorry for the strange colours in the video! We are testing a new film camera!

Clone Incorporated.
Lucas Kozlik, Dhanush Rad, Amdeusz Swierk, Juliusz Tarnowski.

If you want to keep updated on the project, please share, like, comment and hit that subscribe button.

Western intelligence agencies fear Beijing could within decades dominate all of the key emerging technologies, particularly artificial intelligence, synthetic biology and genetics.

China’s economic and military rise over the past 40 years is considered to be one of the most significant geopolitical events of recent times, alongside the 1991 fall of the Soviet Union which ended the Cold War.

MI6, depicted by novelists as the employer of some of the most memorable fictional spies from John le Carré’s George Smiley to Ian Fleming’s James Bond, operates overseas and is tasked with defending Britain and its interests.

This talk addresses a crucial problem: how compositionality can develop naturally in cognitive agents through iterative sensory-motor interactions with the environment.

The talk highlights a dynamic neural network model, the so-called multiple-timescales recurrent neural network (MTRNN), which has been applied to a set of experiments on developmental learning of compositional actions performed by a humanoid robot made by Sony. The experimental results showed that a set of reusable behavior primitives developed in the lower-level network, characterized by fast timescale dynamics, while sequential combinations of these primitives were learned in the higher level, characterized by slow timescale dynamics.

This result suggests that the functional hierarchy necessary for generating compositional actions can be developed by utilizing timescale differences imposed at different levels of the network. The talk will also introduce our recent results on applying an extended MTRNN model to the problem of learning to recognize dynamic visual patterns at the pixel level. The experimental results indicated that dynamic visual images of compositional human actions can be recognized through a self-organizing functional hierarchy when both spatial and temporal constraints are adequately imposed on the network activity. The dynamical-systems mechanisms for the development of higher-order cognition will be discussed in light of the aforementioned research results.
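The central mechanism, timescale differences imposed at different levels, reduces to a small change in a leaky-integrator update rule: each unit’s time constant controls how quickly its state can change. Here is a minimal NumPy sketch of a two-level network in that spirit; the sizes and time constants are illustrative assumptions, and training is omitted.

```python
# Sketch of a multiple-timescales leaky-integrator RNN: two levels that
# differ only in their time constants (fast vs. slow dynamics).
import numpy as np

rng = np.random.default_rng(0)

n_fast, n_slow, n_in = 20, 10, 5
tau = np.concatenate([np.full(n_fast, 2.0),    # fast units: small time constant
                      np.full(n_slow, 50.0)])  # slow units: large time constant
n = n_fast + n_slow

W = rng.normal(scale=0.1, size=(n, n))        # recurrent weights across both levels
W_in = rng.normal(scale=0.1, size=(n, n_in))  # input reaches the fast level only
W_in[n_fast:, :] = 0.0

u = np.zeros(n)  # membrane potentials
for t in range(100):
    x = rng.normal(size=n_in)  # stand-in for sensory-motor input
    h = np.tanh(u)             # unit activations
    # Leaky-integrator update: a large tau makes a unit's state change slowly,
    # so the slow level naturally tracks longer-range (higher-level) structure.
    u = (1.0 - 1.0 / tau) * u + (1.0 / tau) * (W @ h + W_in @ x)

print("fast activations:", np.tanh(u[:n_fast])[:3])
print("slow activations:", np.tanh(u[n_fast:])[:3])
```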

Jun Tani — Professor, Department of Electrical Engineering, KAIST

Prof. Jun Tani received his doctorate in electrical engineering from Sophia University in 1995. He worked at Sony Computer Science Lab in Tokyo as a researcher for 8 years and then started his own lab as a PI at the RIKEN Brain Science Institute 12 years ago. He has been a visiting associate professor at the Univ. of Tokyo and a visiting researcher at Sony Intelligent Dynamic Lab. He moved to KAIST as a full professor in May 2012.

He is interested in neuro-robotics, theoretical problems in cognitive neuroscience, and complex systems. He has authored around 70 journal papers and 90 conference papers. He has given invited plenary talks at various international conferences, including IEEE ICRA in 2005 and ICANN in 2014. He has served on the editorial boards of IEEE Transactions on Autonomous Mental Development, Adaptive Behavior, and Frontiers in Neurorobotics.

Most times when we think of deepfakes, we think of the myriad negative applications. From pornography to blackmail to politics, deepfakes are a product of machine learning that creates lies so realistic they are hard to distinguish from the real thing. In a society plagued by fake news, deepfakes have the potential to do substantial harm.

But a recent team of researchers found another use for deepfakes — to deepfake the mind. And using machine learning to simulate artificial neural data in this way may make a world of difference for those with disabilities.

For people with full body paralysis, the body can seemingly become a prison. Communicating and the simplest of tasks may appear to be an insurmountable challenge. But even if the body is frozen, the mind may be very active. Brain-computer interfaces (BCIs) offer a way for these patients to interact with the world.

BCIs do not rely on muscle or eye movements. Instead, the user is trained to manipulate an object using the power of thought alone. BCIs can allow a fully paralyzed person to operate a wheelchair just by thinking, to move a cursor on a computer screen, or even to play pinball by moving the paddles with their mind. BCIs can be freeing for people with this type of paralysis. They can also be used to treat depression or to rehabilitate the brain.
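To make the “deepfaking the mind” idea concrete: the same adversarial training that produces fake faces can produce fake neural-activity windows, which can then pad out the small calibration datasets BCI decoders are usually trained on. Below is a hedged PyTorch sketch of that general recipe; the architectures, shapes, and the Gaussian stand-in for real recordings are all illustrative assumptions, not the researchers’ actual setup.

```python
# Sketch: a tiny GAN whose generator produces synthetic "neural" windows
# that could augment scarce BCI training data.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_channels, z_dim = 32, 16  # recorded channels per time window; latent size

G = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, n_channels))
D = nn.Sequential(nn.Linear(n_channels, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real_data = torch.randn(512, n_channels)  # stand-in for real neural recordings

for step in range(200):
    real = real_data[torch.randint(0, 512, (64,))]
    fake = G(torch.randn(64, z_dim))

    # Discriminator: learn to tell real windows from generated ones.
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: learn to fool the discriminator.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Synthetic windows to supplement a decoder's training set.
synthetic = G(torch.randn(1000, z_dim)).detach()
print(synthetic.shape)  # torch.Size([1000, 32])
```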
