Microsoft is building a tool to automatically identify bias in a range of AI algorithms. It is the boldest effort yet to automate the detection of unfairness that may creep into machine learning—and it could help businesses make use of AI without inadvertently discriminating against certain people.
Big tech companies are racing to sell off-the-shelf machine-learning technology that can be accessed via the cloud. As more customers make use of these algorithms to automate important judgements and decisions, the issue of bias will become crucial. And since bias can easily creep into machine-learning models, ways to automate the detection of unfairness could become a valuable part of the AI toolkit.
“Things like transparency, intelligibility, and explanation are new enough to the field that few of us have sufficient experience to know everything we should look for and all the ways that bias might lurk in our models,” says Rich Caruana, a senior researcher at Microsoft who is working on the bias-detection dashboard.
Algorithmic bias is a growing concern for many researchers and technology experts (see “Inspecting algorithms for bias”). As algorithms are used to automate important decisions, there is a risk that bias could become automated, deployed at scale, and more difficult for the victims to spot.
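The article does not describe how Microsoft's dashboard works internally, but one common building block of automated bias detection is a group-fairness metric. As a minimal illustrative sketch (not Microsoft's actual method), the snippet below computes the demographic parity difference: the gap in positive-prediction rates between two groups. The data and function names are hypothetical.

```python
# Hypothetical sketch of one fairness check a bias-detection tool
# might automate: the demographic parity difference, i.e. the gap
# in positive-prediction rates between two demographic groups.
# All data below is made up for illustration.

def positive_rate(predictions):
    """Fraction of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_group_a, preds_group_b):
    """Absolute gap in positive-prediction rates between two groups."""
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

# Toy model outputs for two groups (1 = approved, 0 = denied).
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 6/8 approved -> 0.75
group_b = [1, 0, 0, 1, 0, 0, 0, 1]  # 3/8 approved -> 0.375

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.3f}")  # prints 0.375
```

A real tool would flag a gap above some threshold and surface it to the model's developers; in practice, richer metrics (equalized odds, calibration across groups) are checked alongside this one.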