
In June 2019, Facebook’s AI lab, FAIR, released AI Habitat, a new simulation platform for training AI agents. It allowed agents to explore various realistic virtual environments, like a furnished apartment or cubicle-filled office. The AI could then be ported into a robot, which would gain the smarts to navigate through the real world without crashing.

In the year since, FAIR has rapidly pushed the boundaries of its work on “embodied AI.” In a blog post today, the lab announced three new milestones: two algorithms that allow an agent to quickly create and remember a map of the spaces it navigates, and the addition of sound to the platform so that agents can be trained to hear.

“We have for the first time used deep learning to find disease-related genes. This is a very powerful method in the analysis of huge amounts of biological information, or ‘big data’,” said Sanjiv Dwivedi, first author of the newly published research.

AI in gene expression
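As an illustration only (the study's actual network architecture is not described here), the general idea of using what a trained model learned to score disease-related genes can be sketched on synthetic data. Everything below, including the data and the simple one-layer model, is a made-up stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for an expression matrix: 200 samples x 50 genes,
# where genes 0-4 are (artificially) associated with disease status.
n_samples, n_genes = 200, 50
X = rng.normal(size=(n_samples, n_genes))
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=n_samples) > 0).astype(float)

# One-layer logistic model trained by gradient descent: a minimal
# proxy for the deep networks used in studies like this one.
w = np.zeros(n_genes)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= lr * (X.T @ (p - y) / n_samples)
    b -= lr * (p - y).mean()

# Rank genes by absolute learned weight: the informative genes surface.
top_genes = np.argsort(-np.abs(w))[:5]
print(sorted(top_genes.tolist()))
```

The point of the sketch is the last step: once a model predicts disease status from expression data, the genes the model leans on most become candidates for follow-up.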

The plan in the next big war will probably be to let waves of AI fighters wipe out all the enemy's targets: anti-aircraft systems, enemy fighters, enemy airfields, and so on, in however many waves that takes. Then human pilots come in behind them.


An artificial intelligence algorithm defeated a human F-16 fighter pilot in a virtual dogfight sponsored by the Defense Advanced Research Projects Agency Thursday.

Researchers at Oxford University, in collaboration with DeepMind, the University of Basel, and Lancaster University, have created a machine learning algorithm that interfaces with a quantum device and ‘tunes’ it faster than human experts, without any human input. They dub it a “Minecraft explorer for quantum devices.”

Classical computers are composed of billions of transistors, which together can perform complex calculations. Small imperfections in these transistors arise during manufacturing, but do not usually affect the operation of the computer. However, in a quantum computer similar imperfections can strongly affect its behavior.

In prototype semiconductor quantum computers, the standard way to correct these imperfections is to adjust input voltages to cancel them out, a process known as tuning. However, identifying the right combination of voltage adjustments takes a lot of time even for a single quantum device, which makes manual tuning virtually impossible for the billions of devices required to build a useful general-purpose quantum computer.
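The shape of the tuning problem can be illustrated as black-box optimization: adjust voltages until a measured response says the imperfections are cancelled. The `device_response` function and random-search loop below are hypothetical stand-ins, not the Oxford group's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical device model: each gate voltage has an unknown offset
# (a manufacturing imperfection) that tuning must cancel out.
true_offsets = rng.uniform(-0.2, 0.2, size=4)

def device_response(voltages):
    """Pretend measurement: 0 when the offsets are perfectly cancelled."""
    return float(np.sum((voltages + true_offsets) ** 2))

# Simple random-search tuner: propose small voltage perturbations and
# keep any that improve the measured response.
voltages = np.zeros(4)
best = device_response(voltages)
for _ in range(2000):
    candidate = voltages + rng.normal(scale=0.02, size=4)
    score = device_response(candidate)
    if score < best:
        voltages, best = candidate, score

print(best)  # approaches zero as the offsets are cancelled
```

Even this naive search needs thousands of measurements for four voltages, which is why learning-based tuners that need far fewer device queries matter at scale.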

Quantifications are produced by many disciplinary houses in a myriad of different styles. Concerns about the unethical use of algorithms, the unintended consequences of metrics, and warnings about statistical and mathematical malpractice are all part of a general malaise, symptoms of our deep addiction to quantification. What problems are shared by all these instances of quantification? After reviewing existing concerns across different domains, this perspective article illustrates the need and urgency for an encompassing ethics of quantification. The difficulties of disciplining the existing regime of numerification are addressed, and obstacles and lock-ins are identified. Finally, policy indications for different actors are suggested.

Newswise — Most of modern medicine has physical tests or objective techniques to define much of what ails us. Yet, there is currently no blood or genetic test, or impartial procedure that can definitively diagnose a mental illness, and certainly none to distinguish between different psychiatric disorders with similar symptoms. Experts at the University of Tokyo are combining machine learning with brain imaging tools to redefine the standard for diagnosing mental illnesses.

“Psychiatrists, including me, often talk about symptoms and behaviors with patients and their teachers, friends and parents. We only meet patients in the hospital or clinic, not out in their daily lives. We have to make medical conclusions using subjective, secondhand information,” explained Dr. Shinsuke Koike, M.D., Ph.D., an associate professor at the University of Tokyo and a senior author of the study recently published in Translational Psychiatry.

“Frankly, we need objective measures,” said Koike.
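As a hedged illustration of the general approach (not the Tokyo team's actual pipeline), a diagnostic classifier over imaging-derived features might look like the following; the synthetic features and the nearest-centroid model are both illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for imaging-derived features (e.g. regional
# measurements) for two diagnostic groups with overlapping but
# slightly shifted distributions.
n_per_group, n_features = 80, 10
group_a = rng.normal(loc=0.0, size=(n_per_group, n_features))
group_b = rng.normal(loc=0.6, size=(n_per_group, n_features))

# Nearest-centroid classifier: a minimal, interpretable proxy for the
# machine-learning models used in studies like this one.
half = n_per_group // 2
centroid_a = group_a[:half].mean(axis=0)   # fit on first half
centroid_b = group_b[:half].mean(axis=0)

# Evaluate on the held-out halves of each group.
test_X = np.vstack([group_a[half:], group_b[half:]])
test_y = np.array([0] * half + [1] * half)
d_a = np.linalg.norm(test_X - centroid_a, axis=1)
d_b = np.linalg.norm(test_X - centroid_b, axis=1)
pred = (d_b < d_a).astype(int)
accuracy = (pred == test_y).mean()
print(accuracy)
```

The held-out split matters: an objective diagnostic measure is only credible if it separates groups on patients the model has never seen.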

“A neuron in the human brain can never equate the human mind, but this analogy doesn’t hold true for a digital mind, by virtue of its mathematical structure, it may – through evolutionary progression and provided there are no insurmountable evolvability constraints – transcend to the higher-order Syntellect. A mind is a web of patterns fully integrated as a coherent intelligent system; it is a self-generating, self-reflective, self-governing network of sentient components… that evolves, as a rule, by propagating through dimensionality and ascension to ever-higher hierarchical levels of emergent complexity. In this book, the Syntellect emergence is hypothesized to be the next meta-system transition, developmental stage for the human mind – becoming one global mind – that would constitute the quintessence of the looming Cybernetic Singularity.” –Alex M. Vikoulov, The Syntellect Hypothesis https://www.ecstadelic.net/e_news/gearing-for-the-2020-vision-of-our-cybernetic-future-the-syntellect-hypothesis-expanded-edition-press-release

#SyntellectHypothesis


Ecstadelic Media Group releases the new 2020 expanded edition of The Syntellect Hypothesis: Five Paradigms of the Mind’s Evolution by Alex M. Vikoulov as eBook and Paperback (Press Release, San Francisco, CA, USA, January 15, 2020 10.20 AM PST)


Named “The Book of the Year” by futurists and academics alike in 2019 and maintaining high rankings in Amazon charts in Cybernetics, Physics of Time, Phenomenology, and Phenomenological Philosophy, it has now been released as The 2020 Expanded New Deluxe Edition (2020e) in eBook and paperback versions. In one volume, the author covers it all: from quantum physics to your experiential reality, from the Big Bang to the Omega Point, from the ‘flow state’ to psychedelics, from ‘Lucy’ to the looming Cybernetic Singularity, from natural algorithms to the operating system of your mind, from geo-engineering to nanotechnology, from anti-aging to immortality technologies, from oligopoly capitalism to Star-Trekonomics, from the Matrix to Universal Mind, from Homo sapiens to Holo syntellectus.

To avoid this problem, the researchers came up with several shortcuts and simplifications that help focus on the most important interactions, making the calculations tractable while still providing a precise enough result to be practically useful.

To test their approach, they put it to work on a 14-qubit IBM quantum computer accessed via the company’s IBM Quantum Experience service. They were able to visualize correlations between all pairs of qubits and even uncovered long-range interactions between qubits that had not been previously detected and will be crucial for creating error-corrected devices.

They also used simulations to show that they could apply the algorithm to a quantum computer as large as 100 qubits without calculations getting intractable. As well as helping to devise error-correction protocols to cancel out the effects of noise, the researchers say their approach could also be used as a diagnostic tool to uncover the microscopic origins of noise.
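The core idea of mapping pairwise noise correlations from measurement records can be sketched as follows; the synthetic shot record and the injected correlated-error channel are stand-ins for real hardware data, not the researchers' method:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic measurement record: 5000 shots on 6 qubits, with an
# artificial correlated-noise channel linking qubits 1 and 4.
n_shots, n_qubits = 5000, 6
bits = rng.integers(0, 2, size=(n_shots, n_qubits))
flip = rng.random(n_shots) < 0.3   # correlated error hits both qubits
bits[flip, 1] ^= 1
bits[flip, 4] = bits[flip, 1]

# Pairwise correlation matrix over measurement outcomes; off-diagonal
# structure reveals which qubits share noise.
corr = np.corrcoef(bits.T)
i, j = np.unravel_index(
    np.abs(corr - np.eye(n_qubits)).argmax(), corr.shape
)
print(sorted((int(i), int(j))))  # the most correlated pair
```

This is only the counting step; the hard part the researchers tackled is keeping such an analysis tractable as the number of qubit pairs grows quadratically and the interactions become quantum rather than classical correlations.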

The researchers fused machine learning from demonstration algorithms and more classical autonomous navigation systems. Rather than replacing a classical system altogether, APPLD learns how to tune the existing system to behave more like the human demonstration. This paradigm allows for the deployed system to retain all the benefits of classical navigation systems—such as optimality, explainability and safety—while also allowing the system to be flexible and adaptable to new environments, Warnell said.


In the future, a soldier and a game controller may be all that’s needed to teach robots how to outdrive humans.

At the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory and the University of Texas at Austin, researchers designed an algorithm that allows an autonomous ground vehicle to improve its existing systems by watching a human drive. The team tested its approach—called adaptive planner parameter learning from demonstration, or APPLD—on one of the Army’s experimental autonomous ground vehicles.

“Using approaches like APPLD, current soldiers in existing training facilities will be able to contribute to improvements simply by operating their vehicles as normal,” said Army researcher Dr. Garrett Warnell. “Techniques like these will be an important contribution to the Army’s plans to design and field next-generation combat vehicles that are equipped to navigate autonomously in off-road deployment environments.”
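A minimal sketch of the APPLD idea: keep the classical planner intact, and search for the parameter setting that best reproduces a human demonstration. The toy planner, its single speed parameter, and the grid search below are all illustrative stand-ins, not the paper's method:

```python
import numpy as np

# A stand-in "classical planner": drives toward a goal at a speed set
# by a tunable parameter (the kind of knob APPLD-style methods adjust).
def planner_trajectory(max_speed, n_steps=50, goal=10.0):
    pos, traj = 0.0, []
    for _ in range(n_steps):
        pos += min(max_speed, goal - pos)
        traj.append(pos)
    return np.array(traj)

# Hypothetical human demonstration: the demonstrator drove cautiously,
# implying a lower max speed than the planner's default.
demo = planner_trajectory(max_speed=0.35)

# Tune the parameter to reproduce the demonstration (grid search here;
# real systems use more sophisticated black-box optimization).
candidates = np.linspace(0.1, 1.0, 91)
errors = [np.abs(planner_trajectory(s) - demo).mean() for s in candidates]
best_speed = candidates[int(np.argmin(errors))]
print(round(float(best_speed), 2))
```

Because only the parameters change, the tuned system keeps the underlying planner's guarantees while matching the demonstrated driving style.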

On the higher end, they work to ensure that development is open so that it works on multiple cloud infrastructures, giving companies assurance that portability exists.

That openness is also why deep learning is not yet part of the solution: the DL layers still lack the transparency needed to establish the trust that privacy demands. Instead, these systems aim to help manage information privacy for machine learning applications.

Artificial intelligence applications are not open, and can put privacy at risk. Good tools for addressing the privacy of data used by AI systems are an important early step toward building trust into the AI equation.