In 2017, Savannah Thais attended the NeurIPS machine-learning conference in Long Beach, California, hoping to learn about techniques she could use in her doctoral work on electron identification. Instead, she returned home to Yale with a transformed worldview.
At NeurIPS, she had listened to a talk by artificial intelligence researcher Kate Crawford, who discussed bias in machine-learning algorithms. Crawford mentioned a new study showing that facial-recognition technology, which uses machine learning, had picked up gender and racial biases from its dataset: Women of color were 32% more likely to be misclassified by the technology than were White men.
The study, published as a master’s thesis by Joy Adowaa Buolamwini, became a landmark in the machine-learning world, exposing the ways that seemingly objective algorithms can make errors based on incomplete datasets. And for Thais, who’d been introduced to machine learning through physics, it was a watershed moment.
“I didn’t even know about it before,” says Thais, now an associate research scientist at the Columbia University Data Science Institute. “I didn’t know these were issues with the technology, that these things were happening.”