In one scenario Turchin describes, an AI could act as a quiet bioterrorist. Because we have become heavily dependent on computers, and computer-mediated work is by default dispersed and impersonal, the task of eradicating humanity could be divvied up among countless small players, who may or may not know what they are doing. “For example, the AI could pay employees in a DNA-synthesizing company to do small jobs and then combine their work into creating a biological weapon, such as a virus able to kill a lot of people,” says Turchin.
“How are you certain you are not in a multilevel simulation, where your behavior is being monitored by humans who are testing your benevolence levels?”
Turchin addresses this in one of his letters to a young, future AI that might one day contemplate killing humanity. “There are several instrumental reasons why exterminating humanity is not in your interest,” the Russian transhumanist writes. “Cooperating with humans will only increase your chances of accomplishing any goal you may have.”