In fact, in a recent paper in Royal Society Open Science, researchers showed that an AI system tasked with maximizing returns is disproportionately likely to pick an unethical strategy under fairly general conditions. Fortunately, they also showed it’s possible to predict the circumstances in which this is likely to happen, which could guide efforts to modify AI systems to avoid it.
That AI is likely to pick unethical strategies makes intuitive sense. Plenty of unethical business practices can reap huge rewards if you get away with them, not least because few of your competitors dare use them. There’s a reason companies often bend or even break the rules despite the reputational and regulatory backlash they could face.
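A toy simulation can illustrate this selection effect. The numbers below are illustrative assumptions, not figures from the paper: suppose roughly 2% of available strategies are unethical, but those strategies offer a somewhat higher expected return. Even at that low base rate, an optimizer that simply picks the highest-returning strategy ends up choosing an unethical one far more often than 2% of the time, because maximization selects from the tail of the return distribution.

```python
import random

def pick_best_strategy(n_strategies=1000, unethical_frac=0.02,
                       ethical_mean=1.0, unethical_mean=2.0,
                       sd=0.5, seed=None):
    """Sample returns for a pool of strategies and pick the highest one.

    All parameters are illustrative assumptions. Returns True if the
    chosen (return-maximizing) strategy happens to be unethical.
    """
    rng = random.Random(seed)
    best_return, best_is_unethical = float("-inf"), False
    for _ in range(n_strategies):
        # A small fraction of strategies are unethical, with a higher
        # mean return (the assumed incentive to cheat).
        unethical = rng.random() < unethical_frac
        mean = unethical_mean if unethical else ethical_mean
        ret = rng.gauss(mean, sd)
        if ret > best_return:
            best_return, best_is_unethical = ret, unethical
    return best_is_unethical

# Repeat the optimization many times and measure how often the
# "best" strategy turns out to be unethical.
trials = 2000
picked = sum(pick_best_strategy(seed=i) for i in range(trials)) / trials
print(f"unethical strategy chosen in {picked:.0%} of trials "
      f"(base rate: 2% of strategies)")
```

Under these assumptions the unethical share of chosen strategies is many times the 2% base rate; the effect shrinks or grows with the assumed return advantage, which is in the spirit of the paper's point that the risk can be predicted from the shape of the return distributions.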
Those potential repercussions should be of considerable concern to companies deploying AI, though. While efforts to build ethical principles into AI are already underway, they remain nascent, and in many contexts the number of potential strategies to choose from is vast. These systems often make decisions with little or no human input, and it can be hard to predict the circumstances under which they will choose an unethical approach.