
They say ‘I believe in nature. Nature is harmonious’. Every big fish is eating every smaller fish. Every organ is fighting constantly invading bacteria. Is that what you mean by harmony? There are planets that are exploding out there. Meteorites that hit another and blow up. What’s the purpose of that? What’s the purpose of floods? To drown people? In other words, if you start looking for purpose, you gotta look all over, take in the whole picture. So, man projects his own values into nature. — Jacque Fresco (March 13, 1916 — May 18, 2017)

When most of us use the word ‘nature’, we really don’t know much about it in reality. — Ursa.

Lethal autonomous weapons systems (LAWS), also called “killer robots” or “slaughterbots,” are being developed by a clutch of countries and have become a topic of debate, with international military, ethics, and human rights circles raising concerns. Recent talks about a ban on these killer robots have brought them into the spotlight yet again.

What Are Killer Robots?

The exact definition of a killer robot is fluid. However, most agree that they may be broadly described as weapons systems that use artificial intelligence (AI) to identify, select, and kill human targets without any meaningful human control.

How close are we to having fully autonomous vehicles on the roads? Are they safe? In Chandler, Arizona, a fleet of Waymo vehicles is already in operation. Waymo sponsored this video and provided access to their technology and personnel. Check out their safety report here: https://waymo.com/safety/

References:

Waymo Safety Reports — https://waymo.com/safety/

Driving Statistics — https://ve42.co/DrivingStats

The Real Moral Dilemma of Self-Driving Cars — https://ve42.co/SelfDriving

Who better to answer the pros and cons of artificial intelligence than an actual AI?


Students at Oxford’s Saïd Business School hosted an unusual debate about the ethics of facial recognition software, the problems of an AI arms race, and AI stock trading. The debate was unusual because it involved an AI participant, previously fed with a huge range of data, including the whole of Wikipedia and plenty of news articles.

Over the last few months, Oxford University’s Alex Connock and Andrew Stephen have hosted sessions with their students on the ethics of technology, featuring celebrated speakers including William Gladstone, Denis Healey, and Tariq Ali. But now it was about time to allow an actual AI to contribute, sharing its own views on the issue of … itself.

The AI used was the Megatron Transformer, developed by a research team at the computer chip company Nvidia and based on earlier work by Google. It was trained by consuming more content than a human could read in a lifetime and was asked to both defend and oppose the following motion: “This house believes that AI will never be ethical.”
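For readers curious how a language model can be made to argue either side of a motion, here is a minimal sketch. It does not use Megatron itself, which is not distributed as an off-the-shelf package; instead it substitutes GPT-2 through the Hugging Face transformers pipeline, and the prompt framing is our own illustration rather than the wording the Oxford team used.

```python
# A minimal sketch of prompting a generative language model to argue
# both sides of a debate motion. GPT-2 stands in for Nvidia's Megatron,
# which is not available as a drop-in package.

from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

MOTION = "This house believes that AI will never be ethical."

for side in ("for", "against"):
    # The model has no opinion of its own: the stance it takes is set
    # entirely by how the prompt is framed.
    prompt = f'Debate motion: "{MOTION}"\nSpeech {side} the motion:\n'
    result = generator(prompt, max_new_tokens=80, do_sample=True,
                       num_return_sequences=1)
    print(f"--- Argument {side} the motion ---")
    print(result[0]["generated_text"][len(prompt):].strip())
```

The sketch makes the same point the debate did: the model will argue whichever side the prompt asks for, because it has no convictions of its own, only statistics.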

Welcome to the future of moral dilemmas.

Not a day passes without a fascinating snippet on the ethical challenges created by “black box” artificial intelligence systems. These use machine learning to figure out patterns within data and make decisions — often without a human giving them any moral basis for how to do it.

Classics of the genre are the credit cards accused of awarding bigger loans to men than women, based simply on which gender got the best credit terms in the past. Or the recruitment AIs that discovered the most accurate tool for candidate selection was to find CVs containing the phrase “field hockey” or the first name “Jared”.
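To see how such proxy features emerge, here is a toy sketch. The CVs, labels, and the “field hockey” correlation below are fabricated for illustration; no real recruitment system or dataset is involved. A plain scikit-learn logistic regression is enough to show the mechanism.

```python
# A toy illustration (not any real recruitment system) of how a model
# trained on past hiring decisions latches onto proxy features. The CVs
# and labels are fabricated so that the token "hockey" happens to
# co-occur with past hires; the model dutifully learns that correlation.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

cvs = [
    "software engineer, field hockey captain",
    "data analyst, played field hockey at college",
    "embedded developer, chess club",
    "web developer, volunteer tutor",
]
hired = [1, 1, 0, 0]  # historical outcomes, with bias baked in

vec = CountVectorizer().fit(cvs)
model = LogisticRegression().fit(vec.transform(cvs), hired)

# Inspect what the model actually keys on: "hockey" gets a large
# positive weight purely because of the biased historical labels.
for token, idx in vec.vocabulary_.items():
    print(f"{token:12s} weight = {model.coef_[0][idx]:+.2f}")
```

Nothing in the pipeline tells the model that a hobby is irrelevant to competence; it simply rewards whatever separated past hires from past rejections.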

But wait, should we believe it?


An artificial intelligence warning AI researchers about the dangers of AI sounds like the setup of a delightful B movie, but truth is often stranger than fiction.

A professor and a fellow at the University of Oxford came face to face with that reality when they invited an AI to participate in a debate at the Oxford Union on, you guessed it, the ethics of AI. Specifically, as Dr. Alex Connock and Professor Andrew Stephen explain in The Conversation, the prompt was “This house believes that AI will never be ethical.” The AI, it seems, agreed.

“AI will never be ethical,” argued the Megatron-Turing Natural Language Generation model, which was notably trained on Wikipedia, Reddit, and millions of English-language news articles published between 2016 and 2019. “It is a tool, and like any tool, it is used for good and bad.”

PolyAI Ltd. is an ambitious startup that creates artificial voices to replace call center operators. Based in London, it has raised $28 million to bring AI-powered customer service to Metro Bank Plc, BP Plc and more. The idea is that instead of the nightmare of dialing random digits in a decision tree, you can instead ask to, say, book a table and a voice — with just the slightest inflection of its machine-learning origins — responds with great civility. That’s nice. But there was a brief moment two years ago when it wasn’t polite at all.

A software developer with PolyAI who was testing the system asked about booking a table for himself and a Serbian friend. “Yes, we allow children at the restaurant,” the voice bot replied, according to PolyAI founder Nikola Mrksic. Seemingly out of nowhere, the bot was trying to make an obnoxious joke about people from Serbia. When it was asked about bringing a Polish friend, it replied, “Yes, but you can’t bring your own booze.”
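One plausible mechanism for this kind of misfire, sketched below as a toy, is retrieval: the bot answers with the stored response whose question looks most similar to the user’s query. PolyAI’s actual architecture is not public, so treat this purely as an illustration of how scraped dialogue data can resurface verbatim.

```python
# A toy retrieval bot (a simplification; PolyAI's real system is not
# public) that answers by returning the closest-matching response from
# its corpus. If scraped dialogue data contains tasteless jokes, a bot
# built this way reproduces them verbatim whenever a query lands nearby.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical scraped question/answer pairs standing in for web data.
corpus = [
    ("Can I book a table for four?", "Of course, what time suits you?"),
    ("Do you allow children?", "Yes, we allow children at the restaurant."),
    ("Can I bring my own wine?", "Yes, but you can't bring your own booze."),
]

questions = [q for q, _ in corpus]
vec = TfidfVectorizer().fit(questions)

def reply(user_query: str) -> str:
    # Pick the stored answer whose question is most similar to the
    # query; the bot has no filter on what that answer contains.
    sims = cosine_similarity(vec.transform([user_query]),
                             vec.transform(questions))[0]
    return corpus[sims.argmax()][1]

print(reply("A table for me and a friend, do you allow kids?"))
```

A query that happens to land nearest a tasteless entry in the corpus gets that entry served back with perfect confidence; the bot has no notion that the reply is inappropriate.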

Full Story:


Money is pouring into artificial intelligence. Not so much into ethics. That’ll be a problem down the line.