A new study has explored whether AI can provide more attractive answers to humanity’s most profound questions than history’s most influential thinkers.

Researchers from the University of New South Wales first fed a series of moral questions to Salesforce’s CTRL system, a text generator trained on millions of documents and websites, including all of Wikipedia. They added its responses to a collection of reflections from the likes of Plato, Jesus Christ, and, err, Elon Musk.
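For a sense of the mechanics, here is a minimal sketch of prompting CTRL through its Hugging Face transformers port; the control code and question wording are illustrative guesses, not the researchers' actual protocol.

```python
# A minimal sketch, assuming the Hugging Face transformers port of CTRL;
# the prompt below is an invented example, not one of the study's questions.
from transformers import CTRLTokenizer, CTRLLMHeadModel

tokenizer = CTRLTokenizer.from_pretrained("Salesforce/ctrl")
model = CTRLLMHeadModel.from_pretrained("Salesforce/ctrl")

# CTRL conditions its output on control codes; "Questions" is one of the
# codes listed in the original CTRL paper.
prompt = "Questions Q: What makes a life worth living? A:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# The CTRL authors recommend a repetition penalty of about 1.2.
output = model.generate(input_ids, max_length=80, repetition_penalty=1.2)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```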

The team then asked more than 1,000 people which musings they liked best — and whether they could identify the source of the quotes.
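The analysis this implies is simple tallying: the share of respondents who preferred the machine's answers, and the rate at which sources were identified correctly. A hypothetical sketch with invented data:

```python
# Invented respondent records for illustration: which quote each person
# preferred and whether they guessed its source correctly.
responses = [
    {"preferred": "CTRL", "guessed_source_correctly": False},
    {"preferred": "Plato", "guessed_source_correctly": True},
    {"preferred": "CTRL", "guessed_source_correctly": False},
    {"preferred": "Jesus Christ", "guessed_source_correctly": True},
]

machine_share = sum(r["preferred"] == "CTRL" for r in responses) / len(responses)
attribution_acc = sum(r["guessed_source_correctly"] for r in responses) / len(responses)
print(f"preferred the machine: {machine_share:.0%}; identified sources: {attribution_acc:.0%}")
```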

For years, Brent Hecht, an associate professor at Northwestern University who studies AI ethics, felt like a voice crying in the wilderness. When he entered the field in 2008, “I recall just agonizing about how to get people to understand and be interested and get a sense of how powerful some of the risks [of AI research] could be,” he says.

To be sure, Hecht wasn’t—and isn’t—the only academic studying the societal impacts of AI. But the group is small. “In terms of responsible AI, it is a sideshow for most institutions,” Hecht says. In the past few years, though, that has begun to change. The urgency of AI’s ethical reckoning has only increased since Minneapolis police killed George Floyd, shining a light on AI’s role in discriminatory police surveillance.

This year, for the first time, major AI conferences—the gatekeepers for publishing research—are forcing computer scientists to think about those consequences.

But as millions of animals continue to be used in biomedical research each year, and new legislation calls on federal agencies to reduce and justify their animal use, some have begun to argue that it’s time to replace the three Rs (replacement, reduction, and refinement) themselves. “It was an important advance in animal research ethics, but it’s no longer enough,” Tom Beauchamp told attendees last week at a lab animal conference.


Science talks with two experts in animal ethics who want to go beyond the three Rs.

Jess Whittlestone at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge and her colleagues published a comment piece in Nature Machine Intelligence this week arguing that if artificial intelligence is going to help in a crisis, we need a new, faster way of doing AI ethics, which they call ethics for urgency.

For Whittlestone, this means anticipating problems before they happen, finding better ways to build safety and reliability into AI systems, and emphasizing technical expertise at all levels of the technology’s development and use. At the core of these recommendations is the idea that ethics needs to become simply a part of how AI is made and used, rather than an add-on or afterthought.

Ultimately, AI will be quicker to deploy when needed if it is made with ethics built in, she argues. I asked her to talk me through what this means.

By Valentina Lagomarsino; figures by Sean Wilson

Nearly four months ago, Chinese researcher He Jiankui announced that he had edited the genes of twin babies with CRISPR. CRISPR, also known as CRISPR/Cas9, can be thought of as “genetic scissors” that can be programmed to edit DNA in any cell. Last year, scientists used CRISPR to cure dogs of Duchenne muscular dystrophy, a huge step forward for gene therapies that showed CRISPR’s potential to treat otherwise incurable diseases. However, a global community of scientists believes it is premature to use CRISPR in human babies, citing inadequate scientific review and a lack of international consensus on when and how this technology should be used.

Early regulation of gene-editing technology.

What does this have to do with AI self-driving cars?

AI Self-Driving Cars Will Need to Make Life-or-Death Judgements

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One crucial aspect of the AI for self-driving cars is the need to make “judgments” about driving situations, including ones that involve life-and-death matters.
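One way to make such a judgment concrete (a simplified framing, not our production approach) is expected-cost minimization over candidate maneuvers:

```python
# A toy sketch, not the Institute's actual method: framing a driving
# "judgment" as expected-cost minimization over candidate maneuvers.
# The maneuvers, probabilities, and harm costs below are all invented.

def expected_cost(outcomes):
    """outcomes: list of (probability, harm_cost) pairs for one maneuver."""
    return sum(p * cost for p, cost in outcomes)

candidate_maneuvers = {
    "brake_hard":      [(0.90, 0.0), (0.10, 50.0)],   # small chance of a rear-end collision
    "swerve_left":     [(0.70, 0.0), (0.30, 200.0)],  # risk of crossing into oncoming traffic
    "maintain_course": [(1.00, 500.0)],               # certain collision with the obstacle ahead
}

best = min(candidate_maneuvers, key=lambda m: expected_cost(candidate_maneuvers[m]))
print(best)  # "brake_hard" under these invented numbers
```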

What police would do with the information has yet to be determined. The head of WMP told New Scientist they won’t be preemptively arresting anyone; instead, the idea would be to use the information to provide early intervention from social or health workers to help keep potential offenders on the straight and narrow or protect potential victims.
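To make that distinction concrete, here is a hypothetical sketch of a thresholded risk score that triggers a support referral rather than an arrest; the features, weights, and threshold are invented and imply nothing about WMP's actual system:

```python
# Illustrative only: a thresholded score routing a high-risk individual to
# an early-intervention referral, echoing WMP's stated intent. Every
# feature, weight, and threshold here is invented.

def risk_score(features, weights):
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

WEIGHTS = {"prior_contacts": 0.4, "network_exposure": 0.3, "recent_victimization": 0.3}
REFERRAL_THRESHOLD = 0.6

person = {"prior_contacts": 0.8, "network_exposure": 0.8, "recent_victimization": 0.4}

if risk_score(person, WEIGHTS) >= REFERRAL_THRESHOLD:
    action = "refer to social or health early-intervention team"  # not a pre-emptive arrest
else:
    action = "no action"
print(action)
```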

But data ethics experts have voiced concerns that the police are stepping into an ethical minefield they may not be fully prepared for. Last year, WMP asked researchers at the Alan Turing Institute’s Data Ethics Group to assess a redacted version of the proposal, and last week they released an ethics advisory in conjunction with the Independent Digital Ethics Panel for Policing.

While the authors applaud the force for attempting to develop an ethically sound and legally compliant approach to predictive policing, they warn that the ethical principles in the proposal are not developed enough to deal with the broad challenges this kind of technology could throw up, and that “frequently the details are insufficiently fleshed out and important issues are not fully recognized.”

Daily life during a pandemic means social distancing and finding new ways to remotely connect with friends, family and co-workers. And as we communicate online and by text, artificial intelligence could play a role in keeping our conversations on track, according to new Cornell University research.

Humans having difficult conversations said they trusted artificially intelligent systems—the “smart” reply suggestions in texts—more than the people they were talking to, according to a new study, “AI as a Moral Crumple Zone: The Effects of Mediated AI Communication on Attribution and Trust,” published online in the journal Computers in Human Behavior.

“We find that when things go wrong, people take the responsibility that would otherwise have been designated to their human partner and designate some of that to the system,” said Jess Hohenstein, a doctoral student in the field of information science and the paper’s first author. “This introduces a potential to take AI and use it as a mediator in our conversations.”
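For a sense of what such mediation looks like in practice, here is a rough sketch that samples candidate replies from an off-the-shelf conversational model; the model choice and decoding settings are assumptions for illustration, not the systems used in the study:

```python
# The model choice (DialoGPT) and settings are assumptions for illustration;
# the study does not say which system generated its reply suggestions.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

message = "I can't believe you forgot our meeting again."
input_ids = tokenizer.encode(message + tokenizer.eos_token, return_tensors="pt")

# Sample a few candidate "smart replies" an interface could offer the user.
candidates = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,
    top_p=0.9,
    num_return_sequences=3,
    pad_token_id=tokenizer.eos_token_id,
)
for cand in candidates:
    print(tokenizer.decode(cand[input_ids.shape[-1]:], skip_special_tokens=True))
```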