Over the weekend, experts on military artificial intelligence from more than 80 world governments converged on the U.N. offices in Geneva for the start of a week’s talks on autonomous weapons systems. Many of them fear that after gunpowder and nuclear weapons, we are now on the brink of a “third revolution in warfare,” heralded by killer robots — the fully autonomous weapons that could decide who to target and kill without human input. With autonomous technology already in development in several countries, the talks mark a crucial point for governments and activists who believe the U.N. should play a key role in regulating the technology.

The meeting comes at a critical juncture. In July, Kalashnikov, the main defense contractor of the Russian government, announced it was developing a weapon that uses neural networks to make “shoot/no-shoot” decisions. In January 2017, the U.S. Department of Defense released a video showing an autonomous drone swarm of 103 individual robots successfully flying over California. Nobody was in control of the drones; their flight paths were choreographed in real time by an advanced algorithm. The drones “are a collective organism, sharing one distributed brain for decision-making and adapting to each other like swarms in nature,” a spokesman said. The drones in the video were not weaponized — but the technology to do so is rapidly evolving.

This April also marks five years since the launch of the International Campaign to Stop Killer Robots, which called for “urgent action to preemptively ban the lethal robot weapons that would be able to select and attack targets without any human intervention.” The 2013 launch letter — signed by a Nobel Peace Laureate and the directors of several NGOs — noted that they could be deployed within the next 20 years and would “give machines the power to decide who lives or dies on the battlefield.”

Read more

US regulators Wednesday approved the first device that uses artificial intelligence to detect eye damage from diabetes, allowing regular doctors to diagnose the condition without interpreting any data or images.

The device, called IDx-DR, can diagnose a condition called diabetic retinopathy, the most common cause of vision loss among the more than 30 million Americans living with diabetes.

Its software uses an artificial intelligence algorithm to analyze images of the eye, taken with a retinal camera called the Topcon NW400, the FDA said.

Read more

Marking a new era of “diagnosis by software,” the US Food and Drug Administration on Wednesday gave permission to a company called IDx to market an AI-powered diagnostic device for ophthalmology.

What it does: The software is designed to detect greater than a mild level of diabetic retinopathy, which causes vision loss and affects 30 million people in the US. It occurs when high blood sugar damages blood vessels in the retina.

How it works: The program uses an AI algorithm to analyze images of the adult eye taken with a special retinal camera. A doctor uploads the images to a cloud server, and the software then delivers a positive or negative result.
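
To make that workflow concrete, here is a deliberately hypothetical sketch of a clinic-side script that uploads retinal photos and reads back a binary screening result. The endpoint URL, field names, and response format below are invented for illustration and are not IDx-DR’s actual interface; only the overall upload-then-classify flow mirrors the description above.

```python
# Hypothetical client-side sketch of the screening workflow described above.
# The URL, form fields, and JSON keys are invented for illustration only.
import requests

UPLOAD_URL = "https://example-screening-service.invalid/api/v1/screen"  # hypothetical endpoint

def screen_patient(image_paths):
    """Upload one patient's retinal images and return a plain-language outcome."""
    files = [("images", open(path, "rb")) for path in image_paths]
    try:
        response = requests.post(UPLOAD_URL, files=files, timeout=60)
        response.raise_for_status()
    finally:
        for _, handle in files:
            handle.close()
    result = response.json()  # assume a response like {"more_than_mild_dr": true}
    if result["more_than_mild_dr"]:
        return "Positive: refer the patient to an eye-care professional."
    return "Negative: rescreen in 12 months."

# Example usage with two retinal photos of one patient.
print(screen_patient(["left_eye.jpg", "right_eye.jpg"]))
```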

Read more

Machines don’t actually have bias. AI doesn’t ‘want’ something to be true or false for reasons that can’t be explained through logic. Unfortunately, human bias exists in machine learning, from the creation of an algorithm to the interpretation of data – and until now hardly anyone has tried to solve this huge problem.

A team of scientists from the Czech Republic and Germany recently conducted research to determine the effect human cognitive bias has on interpreting the output used to create machine learning rules.

The team’s white paper explains how 20 different cognitive biases could potentially alter the development of machine learning rules and proposes methods for “debiasing” them.

Read more

“If you went to bed last night as an industrial company, you’re going to wake up this morning as a software and analytics company.” – Jeff Immelt, former CEO of General Electric

The second wave of digitization is set to disrupt all spheres of economic life. As venture capital investor Marc Andreessen pointed out, “software is eating the world.” Yet, despite the unprecedented scope and momentum of digitization, many decision makers remain unsure how to cope, and turn to scholars for guidance on how to approach disruption.

The first thing they should know is that not all technological change is “disruptive.” It’s important to distinguish between different types of innovation and the responses they require from firms. In a recent publication in the Journal of Product Innovation Management, we undertook a systematic review of 40 years (1975 to 2016) of innovation research. Using a natural language processing approach, we analyzed and organized 1,078 articles published on the topics of disruptive, architectural, breakthrough, competence-destroying, discontinuous, and radical innovation. We used a topic-modeling algorithm, which infers the topics present in a set of text documents, and quantitatively compared different models to select the one that best described the underlying text. The selected model clustered the articles into 84 distinct topics; it best explained the variability of the data in how it assigned words to topics and topics to documents, while minimizing noise.
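
As an illustration of the kind of pipeline described above, here is a minimal sketch using latent Dirichlet allocation (LDA) from scikit-learn. The study does not name its exact algorithm, preprocessing, or model-selection criterion, so the choices below (LDA, a simple bag-of-words representation, perplexity as the comparison metric, a tiny toy corpus) are assumptions made purely for the example.

```python
# Minimal topic-modeling sketch: vectorize texts, fit candidate LDA models,
# keep the one with the lowest perplexity, and inspect its top words.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "Disruptive innovation reshapes incumbent business models",
    "Architectural innovation reconfigures existing component knowledge",
    # ... in the study, the texts of 1,078 articles would go here
]

# Turn raw text into a document-term (bag-of-words) matrix.
vectorizer = CountVectorizer(stop_words="english", max_df=0.95, min_df=1)
doc_term = vectorizer.fit_transform(abstracts)

# Fit candidate models with different topic counts and compare them by
# perplexity on the corpus (lower is better); the paper reports 84 topics.
best_model, best_score = None, float("inf")
for k in (10, 40, 84):
    lda = LatentDirichletAllocation(n_components=k, random_state=0)
    lda.fit(doc_term)
    score = lda.perplexity(doc_term)
    if score < best_score:
        best_model, best_score = lda, score

# Inspect the top words of the first few topics in the selected model.
terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(best_model.components_[:5]):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {topic_idx}: {', '.join(top)}")
```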

Read more

Machine learning and artificial intelligence developments are happening at breakneck speed! At such a pace, you need to understand the developments at multiple levels – you obviously need to understand the underlying tools and techniques, but you also need to develop an intuitive understanding of what is happening.

By the end of this article, you will develop an intuitive understanding of RNNs, especially LSTMs and GRUs.
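
To anchor that intuition, here is a minimal NumPy sketch of a single GRU step. The weight names and toy dimensions are illustrative, and some references swap the roles of the update gate z and 1 - z; real frameworks add batching, careful initialization, and backpropagation through time.

```python
# One GRU time step in plain NumPy, to make the gating intuition concrete.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, params):
    """Compute the next hidden state from input x and previous state h_prev."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(Wz @ x + Uz @ h_prev + bz)               # update gate: how much to refresh
    r = sigmoid(Wr @ x + Ur @ h_prev + br)               # reset gate: how much past to use
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev) + bh)   # candidate hidden state
    return (1.0 - z) * h_prev + z * h_tilde              # blend old state with candidate

# Toy usage: 3-dimensional inputs, 4-dimensional hidden state, random weights.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
params = (rng.normal(size=(n_hid, n_in)), rng.normal(size=(n_hid, n_hid)), np.zeros(n_hid),
          rng.normal(size=(n_hid, n_in)), rng.normal(size=(n_hid, n_hid)), np.zeros(n_hid),
          rng.normal(size=(n_hid, n_in)), rng.normal(size=(n_hid, n_hid)), np.zeros(n_hid))
h = np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):   # run over a short sequence of 5 inputs
    h = gru_step(x, h, params)
print(h)
```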

Ready?

Read more

Protein synthesis is a critical part of how our cells operate and keep us alive, and when it goes wrong, it drives the aging process. We take a look at how it works and what happens when things break down.


Suppose that your full-time job is to proofread machine-translated texts. The translation algorithm commits mistakes at a constant rate all day long; from this point of view, the quality of the translation stays the same. However, as a poor human proofreader, your ability to focus on this task will likely decline throughout the day; as a result, the number of missed errors, and with it the number of translations that go out with mistakes, will likely go up over time, even though the machine doesn’t make any more errors at dusk than it did at dawn.
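
A quick back-of-the-envelope simulation of this analogy, using made-up numbers, shows how a constant machine error rate still yields more and more shipped mistakes as the proofreader’s catch rate drops:

```python
# Toy simulation: the machine's error rate stays flat all day, but the human
# catch rate falls with fatigue, so more flawed translations slip through.
machine_error_rate = 0.10              # 10% of translations contain an error, all day long

for hour in range(8):                  # an 8-hour shift
    catch_rate = 0.95 - 0.05 * hour    # the proofreader catches fewer errors as the day goes on
    shipped_with_errors = machine_error_rate * (1.0 - catch_rate)
    print(f"hour {hour}: {shipped_with_errors:.1%} of translations go out with mistakes")
```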

To an extent, this is pretty much what is going on with protein synthesis in your body.

Protein synthesis in a nutshell

The so-called coding regions of your DNA consist of genes that encode the information needed to assemble the proteins your cells use. As your DNA is, for all intents and purposes, the blueprint to build you, it is pretty important information, and as such, you want to keep it safe. That’s why DNA is contained within the double-layered membrane of the cell nucleus, where it is relatively safe from oxidative stress and other factors that might damage it.

The cell’s protein-assembling machines, the ribosomes, are located outside the nucleus, so when a cell needs to build new proteins, what is sent out to the assembly lines is not the blueprint itself but a disposable mRNA (messenger RNA) copy of it, which the ribosomes read in order to build the corresponding protein. Making that mRNA copy of DNA is called “transcription”, while the ribosomes’ reading of the mRNA to assemble the protein is called “translation”; as the initial analogy suggests, neither process is error-free.
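
For intuition only, here is a heavily simplified Python sketch of those two steps; the codon table is a tiny excerpt of the real 64-entry table, the “gene” is invented for the example, and real transcription works from the template strand with far more cellular machinery.

```python
# Toy sketch: "transcription" copies a DNA coding sequence into mRNA, and
# "translation" reads the mRNA one codon (three letters) at a time.

CODON_TABLE = {            # a few real codon assignments, not the full 64-entry table
    "AUG": "Met",          # start codon / methionine
    "UUU": "Phe", "UUC": "Phe",
    "GGU": "Gly", "GGC": "Gly",
    "GCU": "Ala",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def transcribe(dna_coding_strand: str) -> str:
    """Copy the coding strand into mRNA (simplified: just swap T for U)."""
    return dna_coding_strand.replace("T", "U")

def translate(mrna: str) -> list[str]:
    """Read codons in order and collect amino acids until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

gene = "ATGGGCTTTGCTTAA"                 # a made-up five-codon "gene"
mrna = transcribe(gene)                  # -> "AUGGGCUUUGCUUAA"
print(translate(mrna))                   # -> ['Met', 'Gly', 'Phe', 'Ala']
```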

Read more

Researchers just overturned a 70-year-old fundamental understanding of how our brains learn – paving the way for faster, more advanced AI applications and a different approach to medical treatments for brain disorders. [This article first appeared on LongevityFacts. Author: Brady Hartman. ]

Researchers just overturned the way scientists thought our brains learn – a view that up until now has been widely accepted for almost 70 years.

This discovery, based on new experimental evidence, paves the way for more modern artificial intelligence (AI) applications, such as machine learning and deep learning algorithms that imitate our brain functions at much faster speeds and with more advanced features. Moreover, the research may change how doctors view disorders of the brain, such as Alzheimer’s, and may alter treatments for other forms of dementia.

Read more

Engineering and construction is behind the curve in implementing artificial intelligence solutions. Based on extensive research, we survey applications and algorithms to help bridge the technology gap.

The engineering and construction (E&C) sector is worth more than $10 trillion a year. And while its customers are increasingly sophisticated, it remains severely underdigitized. To lay out the landscape of technology, we conducted a comprehensive study of current and potential use cases in every stage of E&C, from design to preconstruction to construction to operations and asset management. Our research revealed a growing focus on technological solutions that incorporate artificial intelligence (AI)-powered algorithms. These emerging technologies focus on helping players overcome some of the E&C industry’s greatest challenges, including cost and schedule overruns and safety concerns.

Read more