But in many ways, the field of AI ethics remains limited. Researchers say they are blocked from investigating many systems by trade secrecy protections and laws like the Computer Fraud and Abuse Act (CFAA). As interpreted by the courts, that law criminalizes violating a website or platform’s terms of service, an often necessary step for researchers trying to audit online AI systems for unfair biases.
Whittaker acknowledges the potential for the AI ethics movement to be co-opted. But as someone who has fought for accountability from within Silicon Valley and outside it, Whittaker says she has seen the tech world begin to undergo a deep transformation in recent years. “You have thousands and thousands of workers across the industry who are recognizing the stakes of their work,” Whittaker explains. “We don’t want to be complicit in building things that do harm. We don’t want to be complicit in building things that benefit only a few and extract more and more from the many.”
It may be too soon to tell whether that new consciousness will precipitate real systemic change. But with the industry facing academic, regulatory and internal scrutiny, it is at least safe to say that it won’t be going back to the adolescent, devil-may-care days of “move fast and break things” anytime soon.
“There has been a significant shift and it can’t be understated,” says Whittaker. “The cat is out of the box, and it’s not going back in.”