Battling bias. If I've been a little MIA this week, it's because I spent Monday and Tuesday in Boston for Fortune's inaugural Brainstorm A.I. gathering. It was a fun and wonky couple of days diving into artificial intelligence and machine learning, technologies that—for good or ill—seem increasingly likely to shape not just the future of business, but the world at large.
There are a lot of good and hopeful things to be said about A.I. and M.L., but there's also a very real risk that the technologies will perpetuate biases that already exist, and even introduce new ones. That was the subject of one of the most engrossing discussions of the event, by a panel that was—as moderator Rana el Kaliouby, the event's guest co-chair and deputy CEO of Smart Eye, pointed out—composed entirely of women.
One of the scariest parts of bias in A.I. is how wide and varied the potential effects can be. Alice Xiang, head of Sony Group's A.I. ethics office, gave the example of a self-driving car that has been trained too narrowly in what it recognizes as a human, and thus as a reason to jam on the brakes. “You need to think about being able to detect pedestrians—and ensure that you can detect all sorts of pedestrians and not just people that are represented dominantly in your training or test set,” said Xiang.