Over the last weekend in February, one of Google’s computer science departments, Research at Google, co-hosted Deep Dream: the art of neural networks, with the Gray Area Foundation, a San Francisco not-for-profit organisation that fosters collaborations between the arts and technology. The idea behind the show is that a company that has pushed so many technological boundaries can surely offer fine artists an app or two. But can it?
The show, held in a refurbished cinema in the city’s Mission district, displayed a series of manipulated, photographic works created using one of the tech firm’s artificial intelligence programs.
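The program that lends the show its name, DeepDream, works by amplifying whatever patterns a trained image-recognition network already detects in a photograph, using gradient ascent on the pixels themselves. The sketch below illustrates only that general technique; it is not Google's code, and the pretrained network (VGG16 via torchvision), the layer cut-off, the step sizes and the filenames are all illustrative assumptions, requiring a recent PyTorch and torchvision.

import torch
import torchvision.transforms as T
from torchvision import models
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Truncate a pretrained VGG16 at a mid-level convolutional layer;
# the layer index is an arbitrary illustrative choice.
net = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:20].to(device).eval()
for p in net.parameters():
    p.requires_grad_(False)

preprocess = T.Compose([T.Resize(512), T.ToTensor()])

def deep_dream(path, steps=30, step_size=0.02):
    # Load the photograph and treat its pixels as the values to optimise.
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
    img.requires_grad_(True)
    for _ in range(steps):
        # Gradient ascent: nudge the pixels so the chosen layer responds more strongly.
        loss = net(img).norm()
        loss.backward()
        with torch.no_grad():
            img += step_size * img.grad / (img.grad.abs().mean() + 1e-8)
            img.clamp_(0, 1)
            img.grad.zero_()
    return T.ToPILImage()(img.detach().squeeze(0).cpu())

# Usage (hypothetical filenames): deep_dream("photo.jpg").save("dreamed.jpg")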
In an opening address and an accompanying online essay, Blaise Agüera y Arcas, a Google machine-intelligence developer, likened the artistic use of such programs to photography, or the employment of optical instruments by Renaissance artists – tools which may have had their detractors, yet are now an accepted part of art history.
“Faced with a new technical development in art, it’s easier for us to quietly move the goalposts after a suitable period of outrage,” Arcas argued, “re-inscribing what it means for something to be called fine art, what counts as skill or creativity, what is natural and what is artifice, and what it means for us to be privileged as uniquely human.”
To reposition those goalposts would be a mistake, in Arcas’ view: “We believe machine intelligence is an innovation that will profoundly affect art.”