
China isn’t the only country with a draconian “social credit score” system — there’s one quite a bit like it operating in the U.S. Except that it’s being run by American businesses, not the government.

There’s plenty of evidence that retailers have been using a technique called “surveillance scoring” for decades, in which an algorithm assigns each consumer a secret score that determines the price they’re shown — for the same goods and services.

But the practice might be illegal after all: a California nonprofit called the Consumer Education Foundation (CEF) filed a petition yesterday asking the Federal Trade Commission (FTC) to investigate.

Artificial Intelligence (AI) is an emerging field of computer programming that is already changing the way we interact online and in real life, but the term ‘intelligence’ has been poorly defined. Rather than focusing on smarts, researchers should be looking at the implications and viability of artificial consciousness, as that’s the real driver behind intelligent decisions.

Consciousness rather than intelligence should be the true measure of AI. At the moment, despite all our efforts, there’s none.

Significant advances have been made in the field of AI over the past decade, in particular with machine learning, but artificial intelligence itself remains elusive. Instead, what we have is artificial serfs—computers with the ability to trawl through billions of interactions and arrive at conclusions, exposing trends and providing recommendations, but they’re blind to any real intelligence. What’s needed is artificial awareness.

Elon Musk has called AI the “biggest existential threat” facing humanity and likened it to “summoning a demon,”[1] while Stephen Hawking thought it would be the “worst event” in the history of civilization and could “end with humans being replaced.”[2] Although this sounds alarmist, like something from a science fiction movie, both concerns are founded on a well-established scientific premise found in biology—the principle of competitive exclusion.[3]

Competitive exclusion describes a natural phenomenon first outlined by Charles Darwin in On the Origin of Species. In short, when two species compete for the same resources, one will invariably win over the other, driving it to extinction. Forget about meteorites killing the dinosaurs or supervolcanoes wiping out life; this principle describes how the vast majority of species have gone extinct over the past 3.8 billion years![4] Put simply, someone better came along—and that’s what Elon Musk and Stephen Hawking are concerned about.

When it comes to artificial intelligence, there’s no doubt computers have the potential to outpace humanity. Already, their ability to remember vast amounts of information with absolute fidelity eclipses our own. Computers regularly beat grandmasters at competitive strategy games such as chess, but can they really think? The answer is no, and this is a significant problem for AI researchers. The inability to think and reason properly leaves AI susceptible to manipulation. What we have today is dumb AI.

Rather than fearing some all-knowing malignant AI overlord, the threat we face comes from dumb AI as it’s already been used to manipulate elections, swaying public opinion by targeting individuals to distort their decisions. Instead of ‘the rise of the machines,’ we’re seeing the rise of artificial serfs willing to do their master’s bidding without question.

Russian President Vladimir Putin understands this better than most, and said, “Whoever becomes the leader in this sphere will become the ruler of the world,”[5] while Elon Musk commented that competition between nations to create artificial intelligence could lead to World War III.[6]

The problem is we’ve developed artificial stupidity. Our best AI lacks actual intelligence. The most complex machine learning algorithm we’ve developed has no conscious awareness of what it’s doing.

For all of the wonderful advances made by Tesla, its in-car Autopilot drove into the back of a bright red fire truck because it wasn’t programmed to recognize that specific object. This highlights the problem with AI and machine learning—there’s no actual awareness of what’s being done or why.[7] What we need is artificial consciousness, not intelligence. A computer CPU with 18 cores, capable of processing 36 independent threads at 4 gigahertz and handling hundreds of millions of commands per second, doesn’t need more speed; it needs to understand the ramifications of what it’s doing.[8]

In the US, courts regularly use COMPAS, a complex computer algorithm using artificial intelligence to determine sentencing guidelines. Although it’s designed to reduce the judicial workload, COMPAS has been shown to be ineffective, being no more accurate than random, untrained people at predicting the likelihood of someone reoffending.[9] At one point, its predictions of violent recidivism were only 20% accurate.[10] And this highlights a perception bias with AI—complex technology is inherently trusted, and yet in this circumstance, tossing a coin would have been an improvement!

Dumb AI is a serious problem with serious consequences for humanity.

What’s the solution? Artificial consciousness.

It’s not enough for a computer system to be intelligent or even self-aware. Psychopaths are self-aware. Computers need to be aware of others; they need to understand cause and effect as it relates not just to humanity but to life in general, if they are to make truly intelligent decisions.

All of human progress can be traced back to one simple trait—curiosity. The ability to ask, “Why?” This one simple concept has led us not only to an understanding of physics and chemistry, but to the development of ethics and morals. We’ve not only asked, “Why is the sky blue?” but also, “Why am I treated this way?” And the answers to those questions have shaped civilization.

COMPAS needs to ask why it arrives at a certain conclusion about an individual. Rather than simply crunching probabilities that may or may not be accurate, it needs to understand the implications of freeing an individual weighed against the adversity of incarceration. Spitting out a number is not good enough.

In the same way, Tesla’s Autopilot needs to understand the implications of driving into a stationary fire truck at 65 mph—for the occupants of the vehicle, the fire crew, and the emergency they’re attending. These are concepts we intuitively grasp when we encounter such a situation. Having a computer manage the physics of the situation is not enough without understanding the moral component as well.

The advent of true artificial intelligence, one that has artificial consciousness, need not be the endgame for humanity. Just as humanity developed civilization and enlightenment, so too can AIs become our partners in life, if they are built to be aware of morals and ethics.

Artificial intelligence needs culture as much as logic, ethics as much as equations, morals and not just machine learning. How ironic that the real danger of AI comes down to how much conscious awareness we’re prepared to give it. As long as AI remains our slave, we’re in danger.

tl;dr — Computers should value more than ones and zeroes.

About the author

Peter Cawdron is a senior web application developer for JDS Australia working with machine learning algorithms. He is the author of several science fiction novels, including RETROGRADE and REENTRY, which examine the emergence of artificial intelligence.

[1] Elon Musk at MIT Aeronautics and Astronautics department’s Centennial Symposium

[2] Stephen Hawking on Artificial Intelligence

[3] The principle of competitive exclusion is also called Gause’s Law, although it was first described by Charles Darwin.

[4] Peer-reviewed research paper on the natural causes of extinction

[5] Vladimir Putin in a televised address to the Russian people

[6] Elon Musk tweeting that competition to develop AI could lead to war

[7] Tesla car crashes into a stationary fire engine

[8] Fastest CPUs

[9] Recidivism predictions no better than random strangers

[10] Violent recidivism predictions only 20% accurate


It is now possible to take a talking-head style video, and add, delete or edit the speaker’s words as simply as you’d edit text in a word processor. A new deepfake algorithm can process the audio and video into a new file in which the speaker says more or less whatever you want them to.

As smart as artificial intelligence systems seem to get, they’re still easily confused by hackers who launch so-called adversarial attacks — cyberattacks that trick algorithms into misinterpreting their input data, sometimes to disastrous ends.

In order to bolster AI’s defenses from these dangerous hacks, scientists at the Australian research agency CSIRO say in a press release they’ve created a sort of AI “vaccine” that trains algorithms on weak adversaries so they’re better prepared for the real thing — not entirely unlike how vaccines expose our immune systems to inert viruses so they can fight off infections in the future.

Researchers at the University of Chicago published a novel technique for improving the reliability of quantum computers by accessing higher energy levels than traditionally considered. Most prior work in quantum computation deals with “qubits,” the quantum analogue of binary bits that encode either zero or one. The new work instead leverages “qutrits,” quantum analogues of three-level trits capable of representing zero, one or two.

The UChicago group worked alongside researchers based at Duke University. Both groups are part of the EPiQC (Enabling Practical-scale Quantum Computation) collaboration, an NSF Expedition in Computing. EPiQC’s interdisciplinary research spans from algorithm and software development to architecture and design, with the ultimate goal of more quickly realizing the enormous potential of computing for scientific discovery and computing innovation.
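The paper itself isn’t reproduced here, but the core state-space argument for qutrits is easy to sketch: a qubit lives in a 2-dimensional complex vector space, a qutrit in a 3-dimensional one, so n qutrits span 3^n basis states versus 2^n for qubits. The snippet below is a purely illustrative numpy sketch of that counting argument, not the EPiQC implementation:

```python
import numpy as np

# A qubit state is a normalized 2-component complex vector;
# a qutrit state is a normalized 3-component one.
qubit = np.array([1, 0], dtype=complex)       # |0> for a qubit
qutrit = np.array([0, 0, 1], dtype=complex)   # |2>, the extra qutrit level

# n carriers with `levels` energy levels span levels**n basis states.
def basis_states(levels, n):
    return levels ** n

print(basis_states(2, 5))   # 5 qubits span 32 basis states
print(basis_states(3, 5))   # 5 qutrits span 243 basis states

# A uniform superposition over a qutrit's three levels stays normalized:
superpos = np.ones(3, dtype=complex) / np.sqrt(3)
print(np.isclose(np.vdot(superpos, superpos).real, 1.0))
```

The extra level per carrier is what the researchers exploit: the same number of physical devices addresses an exponentially larger state space.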

Researchers from CSIRO’s Data61, the data and digital specialist arm of Australia’s national science agency, have developed a world-first set of techniques to effectively ‘vaccinate’ algorithms against adversarial attacks, a significant advancement in machine learning research.
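Data61’s exact technique isn’t detailed here, but the general idea behind this kind of “vaccination” — adversarial training — can be sketched with a toy example: perturb the training inputs a small amount in the direction that increases the loss (a weak FGSM-style adversary), then train on those perturbed copies so the model learns a more robust decision boundary. Everything below (the synthetic data, logistic-regression model, and hyperparameters) is illustrative, not CSIRO’s method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable data: label is 1 when the features sum above zero.
X = rng.normal(size=(200, 2))
y = (X.sum(axis=1) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, adversarial=False, eps=0.3, lr=0.1, epochs=200):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        Xt = X
        if adversarial:
            # Weak adversary: for logistic loss, dL/dx = (p - y) * w,
            # so nudge each input eps in the sign of that gradient
            # and train on the perturbed copies instead.
            grad_x = np.outer(sigmoid(X @ w + b) - y, w)
            Xt = X + eps * np.sign(grad_x)
        p = sigmoid(Xt @ w + b)
        w -= lr * (Xt.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean((sigmoid(X @ w + b) > 0.5) == y)

w, b = train(X, y, adversarial=True)
print(accuracy(w, b, X, y))
```

Training against these weak perturbations is the software analogue of exposing an immune system to an inert virus: the model sees mild attacks during training so that stronger ones at inference time move its decisions less.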

In the age of big data, we are quickly producing far more digital information than we can possibly store.

Last year, $20 billion was spent on new data centers in the US alone, doubling the capital expenditure on data center infrastructure from 2016.

And even with skyrocketing investment in data storage, corporations and the public sector are falling behind.


Cops would love to have a system that uses DNA from a crime scene to generate a picture of a suspect’s face, but that tech is still restricted to science fiction.

That technology may never exist, but a team of Belgian and American engineers just developed something similar. Using what they know about how DNA shapes the human face, the researchers built an algorithm that scans through a database of images and selects the faces that could be linked to the DNA found at a crime scene, according to research published Wednesday in the journal Nature Communications — a powerful crime-fighting tool, but also a terrifying new way to subvert privacy.
