The development of new Artificial Intelligence (AI) technology is often subject to bias, and the resulting systems can be discriminatory, meaning policymakers should do more to ensure its development is democratic and socially responsible.
This is according to Dr Barbara Ribeiro of the Manchester Institute of Innovation Research at The University of Manchester, writing in On AI and Robotics: Developing policy for the Fourth Industrial Revolution, a new policy report on the role of AI and robotics in society published today.
Dr Ribeiro adds that because investment in AI will essentially be paid for by taxpayers in the long term, policymakers need to make sure the benefits of such technologies are fairly distributed throughout society.
She says: "Ensuring social justice in AI development is essential. AI technologies rely on big data and the use of algorithms, which influence decision-making in public life and on matters such as social welfare, public safety and urban planning."
"In these 'data-driven' decision-making processes some social groups may be excluded, either because they lack access to devices necessary to participate or because the selected datasets do not consider the needs, preferences and interests of marginalised and disadvantaged people."
Sounds innocuous, but why am I seeing a big red flag here?