
AI tradeoffs: Balancing powerful models and potential biases


As developers unlock new AI tools, the risk of perpetuating harmful biases grows, especially on the heels of a year like 2020, which reshaped many of the social and cultural norms on which AI algorithms have long been trained.

A handful of foundation models are emerging that rely on training data of such magnitude that they are inherently powerful. That power, however, comes with the risk of harmful biases, and we need to collectively acknowledge that fact.

Recognition in itself is easy. Understanding is much harder, as is mitigating future risks. In other words, we must first work to understand the roots of these biases if we are to grasp the risks involved in developing AI models.
