Google’s declaration of principles for AI is a short but carefully worded text covering the main issues related to the uses of its technology. I would recommend reading the document, given that it raises many questions about the future and about the rules we will need to guide us as the technology evolves.
The company had been working on the statement of principles for some time, while continuing its work in other areas of AI. Critics say the document is a response to the recent resignation of more than ten employees, and to the petition signed by several thousand more, in protest at the company’s involvement in the US Defense Department’s Project Maven, designed to recognize images taken by military drones: either they do not know the company, or they are confusing the circumstantial with the foundational, form with substance. The regulation of artificial intelligence is far from a new subject and is being debated widely: I have taken part in some of these discussions, and Google, as a leading player in the field, is simply laying out its position after a long process of reflection.
In the wake of revelations about Google’s involvement in Project Maven, most media outlets have interpreted the company’s statement of principles somewhat simplistically, along the lines of a promise that its AI won’t be used to develop weapons or to breach human rights, although it’s clear that the document has much more far-reaching intentions. Weapons are mentioned only briefly, in a section entitled “AI applications we will not pursue”, which limits itself to saying that the company will not help develop “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”. That said, it will continue its work “with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue.”