Michael Day

Site Platform 2017 Application and Presentation

Friday 13th January 2017



Application

Brief summary of your proposal
Please include when you first started developing this idea, and why you think it is appropriate for Platform (max. 500 words) *

The idea for this proposal has been developed alongside my PhD studies at SHU, where I am a practice-based researcher in the arts and media subject area. My research is concerned with experiences of distractibility that are said to have emerged alongside the recent widespread adoption of digital communications technologies. I’m interested in the way digital systems can be understood as data streams or cloud processes, and how this might impact on the way we attend to them, or how they algorithmically attend to us. My artistic practice often appropriates pre-existing images, software or datasets as ready-mades, and uses computer programming techniques to modify them or highlight their particular characteristics.

Attention is often considered a scarce resource in an age where constant emails, notifications and status updates compete for our cognitive focus. Alongside this scarcity of human attention, recent years have seen an exponential rise in the ‘machine reading’ of images (Hayles), where algorithms are routinely used to analyse the visual characteristics of images. It’s now commonplace for the camera on an iPhone to use face recognition algorithms to identify the optimal area to focus on, with some mobile devices going as far as automatically capturing the image when a smiling subject is detected. While many of these recognition algorithms are hard-coded into consumer software and devices, some are more accessible to users.

For this project, I aim to use a range of these computer vision APIs (application programming interfaces) to explore Site Gallery’s database of exhibition and event documentation images, which is made available through the gallery website and marketing channels. This will generate a series of images that present this material through the lens of machine vision. I intend to mainly use the Google Vision API, which offers a powerful set of image analysis algorithms that can perform a wide range of tasks: it can identify faces and estimate the emotion they express; recognise landmarks; isolate and transcribe text; and return image-level analysis data such as dominant colours and brightness.
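As a rough sketch of the mechanics (the file path and API key below are placeholders of my own; the endpoint and feature names follow Google's published documentation), a single archive image could be posted to the Vision API's images:annotate REST endpoint like this:

import base64
import requests  # third-party HTTP library, assumed to be installed

API_KEY = "YOUR_API_KEY"   # placeholder: a real Google Cloud API key is needed
ENDPOINT = "https://vision.googleapis.com/v1/images:annotate?key=" + API_KEY

def annotate(image_path):
    """Send one archive image to the Vision API and return the parsed response."""
    with open(image_path, "rb") as f:
        content = base64.b64encode(f.read()).decode("utf-8")

    body = {
        "requests": [{
            "image": {"content": content},
            "features": [
                {"type": "FACE_DETECTION"},      # faces plus joy/sorrow likelihoods
                {"type": "LANDMARK_DETECTION"},  # named places (e.g. Eiffel Tower)
                {"type": "TEXT_DETECTION"},      # OCR text strings
                {"type": "LABEL_DETECTION"},     # descriptive tags
                {"type": "IMAGE_PROPERTIES"},    # dominant colours
            ],
        }]
    }
    response = requests.post(ENDPOINT, json=body)
    response.raise_for_status()
    return response.json()["responses"][0]

# e.g. result = annotate("archive/exhibition_0001.jpg")  # hypothetical file name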

I’m curious about what might be revealed through the process of passing the gallery archive images through this system. Could the sentiment analysis be instrumentalised as ACE evaluation feedback? Is there a preference for particular colour schemes, subject matter, or other identifiable features? What are the implications for the gallery in terms of data stewardship? More broadly, what might machine vision see in an artwork that a human reader might overlook?

I anticipate that the results will invite readings that question the stability of archives while also presenting a perhaps sobering sense of the capability of commonly used image analysis algorithms to identify and reinterpret images as data artefacts.

Describe your idea and how you will approach this as a Platform project.
What resources will you need (max. 500 words) *

Development of the system:
Since coding can be quite a solitary activity and isn’t particularly accessible to an audience, I anticipate having the bones of a working system ready in advance of my arrival onsite. In the weeks prior I would hope to liaise with the gallery web team and begin to make sense of the structure of the image archive, and start to process the images in preparation for analysis. In the very early stages of the onsite phase of the project, the code could be projected to make it available for scrutiny in a similar way to ‘algorave’ events.

Processing of images:
I will aim to produce a system that makes the image-analysis process itself visible. This is likely to be a screen-based interface that shows each image as it enters the analysis process and then combines it with the images produced by the machine vision system.

Display of images:
I anticipate that the residency will generate a large quantity of images, perhaps three or more per source image, each with a very specific aesthetic. With the Google Vision API, faces can be identified by ‘landmarks’ such as ‘left_eye’, ‘mouth_left’ and so on, and this information can be compiled into a very visually reductive image. Colour distributions and tags can also generate images. (See the API demo GIF in my supporting material.) I anticipate the gallery gradually filling with a set of images produced by the algorithm in response to the archive: images representative of the data the archive contains, while having their own specific aesthetic quality, very different to the source images.
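As a loose sketch of how one of these reductive landmark images could be made (the response field names follow the Vision API documentation; the canvas size and drawing choices are purely illustrative), the landmark positions for each detected face could be plotted as dots on an otherwise blank canvas:

from PIL import Image, ImageDraw  # Pillow imaging library, assumed to be installed

def landmarks_to_image(annotation, size=(1280, 720), radius=4):
    """Plot Vision API face landmark positions as dots on a blank canvas.

    `annotation` is the parsed JSON for one image, e.g. as returned by the
    annotate() sketch above; only the landmark points are kept, and the
    source image itself is discarded.
    """
    canvas = Image.new("RGB", size, "black")
    draw = ImageDraw.Draw(canvas)
    for face in annotation.get("faceAnnotations", []):
        for landmark in face.get("landmarks", []):
            pos = landmark.get("position", {})
            x, y = pos.get("x", 0), pos.get("y", 0)
            # pos also carries an inferred depth value ("z"), which could
            # later drive a 3D rendering of the same points.
            draw.ellipse([x - radius, y - radius, x + radius, y + radius], fill="white")
    return canvas

# e.g. landmarks_to_image(annotate("archive/exhibition_0001.jpg")).save("out.png")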

In previous work, I have presented visual data as an infrastructure that competes for invisibility with physical characteristics of the gallery space, such as cable trunking and air vents, that are supposed to be ignored by the viewer. In Site, the long gallery back wall presents itself as a likely location for a sequential projection of images, and by the end of the residency I would aim to fill as much wall space as possible with these projected images. I would also like to explore the possibility of outputting the data-images produced onto the Site website, temporarily allowing them to replace their sources.

Gallery resources / Production budget:
The main gallery resources I’ll need while onsite will be access to projectors, computers, and installation assistance. Prior to beginning the residency, I’ll need access to the archive of images and communication with the web team in order to get the sources together. There will be some costs involved in the use of the API, but I’m hopeful these can be kept to a minimum.

Presentation

As Seen By Machine
I'm an artist working with digital media, typically installation and interactive projects
  Currently doing a PhD on art, attention, distraction, and digital media.
  * led on to human vs machine attention, and the control of attention
  * ubiquity of algorithms and image analysis, phones, number plate recognition, etc
algorithms link to databases; much of this operates below the threshold of attention, invisible yet actionable. Often seen as OBJECTIVE, but they are totally not
* infrastructural: i.e., not usually visible, and this invisibility obscures ideological or political bias
* intend to use Site's archive of documentation images
* send it through Google Vision API
* it detects (see the response-parsing sketch after these notes):
+++ face, plus basic sentiment (joy, sorrow etc)
+++ landmarks (such as Eiffel Tower etc)
+++ text (OCR - will return as text strings)
+++ colour (basic quantity analysis)
+++ labels (text tags referring to image content)
  * Curious to see what it turns up
  Interested in how utilitarian images have unintended aesthetic qualities
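A rough sketch of how those detected fields could be pulled out of a single parsed Vision API response (the field names follow the published v1 response format; the summary structure itself is just illustrative):

def summarise(annotation):
    """Collect the headline fields from one parsed Vision API response."""
    faces = annotation.get("faceAnnotations", [])
    sentiments = [(f.get("joyLikelihood"), f.get("sorrowLikelihood")) for f in faces]

    places = [p["description"] for p in annotation.get("landmarkAnnotations", [])]
    labels = [l["description"] for l in annotation.get("labelAnnotations", [])]

    # When text is found, the first textAnnotation holds the full OCR string
    texts = annotation.get("textAnnotations", [])
    ocr_text = texts[0]["description"] if texts else ""

    colours = [
        c["color"]
        for c in annotation.get("imagePropertiesAnnotation", {})
                           .get("dominantColors", {})
                           .get("colors", [])
    ]

    return {"faces": len(faces), "sentiments": sentiments, "places": places,
            "labels": labels, "text": ocr_text, "colours": colours}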

Face recognition points have something of the punchcard about them, or evoke the physical
Vision API returns inferred 3D points for face landmarks
* Talk about outcomes in each gallery:
+++ Data as infrastructure = invisible.
+++ Gallery as an attention infrastructure in itself
+++ Large gallery: hide the work, making it quite difficult to see; make it compete for invisibility with gallery infrastructure such as plug sockets, trunking, etc
+++ Through projection, hiding the work
 
Aggregation – most frequent words, colours, etc
Multiplicity of faces

Smaller gallery: use for cinematic projection of 3D face animations