
Where Should AI Ethics Come From? Not Medicine, New Study Says

Posted in biotech/medical, ethics, robotics/AI

Most of the AI ethics reports published to date have focused on outlining high-level principles that should guide those building these systems. Whether by chance or by design, the principles they have coalesced around closely resemble those at the heart of medical ethics. But writing in Nature Machine Intelligence, Brent Mittelstadt from the University of Oxford points out that AI development is a very different beast to medicine, and a simple copy-and-paste won't work.

The four core principles of medical ethics are respect for autonomy (patients should have control over how they are treated), beneficence (doctors should act in the best interest of patients), non-maleficence (doctors should avoid causing harm), and justice (healthcare resources should be distributed fairly).

The more than 80 AI ethics reports published so far are far from homogeneous, but similar themes of respect, autonomy, fairness, and prevention of harm run through most. And these seem like reasonable principles to apply to the development of AI. The problem, says Mittelstadt, is that principles work in medicine because of conditions AI development lacks: a common aim of serving the patient, a shared professional culture with a long history, and legal and professional mechanisms that hold practitioners accountable. Without those foundations, high-level principles alone cannot guarantee ethical AI.
