‘Instrumental’, 2016-2017

It had all started with @SAPoliceService, the South African Police Service (SAPS) Twitter feed.

South Africa’s rampant crime and SAPS’s notoriously poor operational record have translated into a lack of faith, mistrust and real anger amongst the communities it serves, fuelled by negative media reports.

We became interested in how SAPS seemed to be using social media to ‘evidence’ its activities and successes – seizures of illegal products, arrests, the honouring of officers, the promotion of community policing initiatives – and to seek information about criminal activities, with peculiar effects. As the screenshots below show, the SAPS tweets that caught our attention are often accompanied by a supporting visual image of the individual, location and/or object in question. They seem to want to offer, or at least point to, a reliable (if not properly forensic) piece of communication.

However, the images used to represent these events or appeals often lack saliency, which Twitter’s inherently truncated communication style only aggravates. Instead of offering incontrovertible evidence of an event, they invite assumption and inference. But how could they not, regardless of what we might demand of such images? For us, @SAPoliceService on Twitter became a touchstone for addressing the truth-effects of images, language and context.

The interface

Instrumental user interface, first iteration

Conceptually, the original iteration of Instrumental presents a user-driven interpretation of data drawn from an individual body within, and framed as, a site of effective and affective power.

It exploits two IBM Watson Application Programming Interfaces (APIs): AlchemyLanguage, which analyses semantic tone and structure in language; and Visual Recognition, which recognises faces and classifies objects.
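The exact routes the first iteration called are not documented here; as a rough sketch of the shape of these requests, the endpoint URLs, parameter names and response types below are placeholders we have assumed, not the real (and since retired) AlchemyLanguage and Visual Recognition interfaces.

```typescript
// Sketch only: the endpoint URLs, query parameters and response shapes are
// placeholders standing in for the retired Watson AlchemyLanguage and
// Visual Recognition services, not their actual routes.
const ALCHEMY_URL = "https://example-watson-endpoint/alchemy/emotion"; // hypothetical
const VISUAL_URL = "https://example-watson-endpoint/visual-recognition/classify"; // hypothetical

interface ToneResult { emotion: string; score: number }
interface FaceResult { faces: number; classes: string[] }

// Analyse the semantic tone of a tweet's text.
async function analyseText(text: string, apiKey: string): Promise<ToneResult[]> {
  const res = await fetch(`${ALCHEMY_URL}?apikey=${apiKey}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
  return res.json();
}

// Detect faces and classify objects in an image attached to a tweet.
async function analyseImage(imageUrl: string, apiKey: string): Promise<FaceResult> {
  const res = await fetch(
    `${VISUAL_URL}?apikey=${apiKey}&url=${encodeURIComponent(imageUrl)}`,
  );
  return res.json();
}
```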

Using the data-mining capacity of these APIs, Instrumental downloads and stores the text and images associated with the @SAPoliceService handle, as well as geolocation data derived from wherever the work is installed.
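As a sketch of that harvesting step, the following assumes the Twitter v1.1 statuses/user_timeline endpoint available at the time; the authentication and media fields are simplified approximations rather than the project’s actual code.

```typescript
// Sketch, assuming the Twitter v1.1 REST API of the period; auth handling and
// response fields are simplified and approximate.
const TIMELINE_URL =
  "https://api.twitter.com/1.1/statuses/user_timeline.json?screen_name=SAPoliceService&count=200";

interface StoredItem { text: string; imageUrls: string[]; storedAt: string }

async function harvestTweets(bearerToken: string): Promise<StoredItem[]> {
  const res = await fetch(TIMELINE_URL, {
    headers: { Authorization: `Bearer ${bearerToken}` },
  });
  const tweets: any[] = await res.json();
  // Keep only the pieces the work uses: the text and any attached images.
  return tweets.map((t) => ({
    text: t.text,
    imageUrls: (t.entities?.media ?? []).map((m: any) => m.media_url_https),
    storedAt: new Date().toISOString(),
  }));
}
```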

It uses this data to synthesize a series of tones.
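One plausible way to do this in a browser is with the Web Audio API; the mapping from a stored 0–1 analysis score to a frequency below is our illustrative assumption, not the project’s actual mapping.

```typescript
// Sketch: stored analysis scores driving a small bank of tones via the
// Web Audio API. The score-to-frequency mapping is illustrative only.
const ctx = new AudioContext();

function startTone(score: number): OscillatorNode {
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  // Map a 0..1 score onto an audible range, here 110 Hz to 880 Hz.
  osc.frequency.value = 110 + score * 770;
  gain.gain.value = 0.1; // keep the mix quiet
  osc.connect(gain).connect(ctx.destination);
  osc.start();
  return osc;
}

// e.g. one tone per stored score
const oscillators = [0.2, 0.55, 0.8].map(startTone);
```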

When a user steps in front of the webcam, their face is mapped with a series of dots marking detected facial tracking points, and the distances between these points are measured.

As the user produces different facial expressions, the distances between the points shift, modulating the tones.
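A minimal sketch of that modulation step, assuming the tracked points arrive as simple x/y coordinates from a face-tracking library (which library the first iteration used is not specified here):

```typescript
// Sketch: shifting distances between tracked facial points modulating the
// running oscillators. Point data is assumed to come from a face-tracking
// library as webcam-pixel coordinates.
type Point = { x: number; y: number };

function distance(a: Point, b: Point): number {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Detune each oscillator by a normalised point-to-point distance.
function modulate(points: Point[], oscillators: OscillatorNode[]): void {
  if (points.length < 2) return;
  for (let i = 0; i < oscillators.length; i++) {
    const a = points[i % points.length];
    const b = points[(i + 1) % points.length];
    const d = distance(a, b) / 100; // rough normalisation to ~0..1
    oscillators[i].detune.value = (d - 0.5) * 1200; // up to ±600 cents
  }
}
```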

This ‘face space’ data in turn generates a series of locations on the map image, drawn out from the original geolocation.
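A sketch of how such a projection might work, with the spread factor chosen arbitrarily for illustration:

```typescript
// Sketch: projecting normalised 'face space' values outward from the
// installation's base geolocation into a spread of nearby map points.
interface LatLng { lat: number; lng: number }

function faceSpaceToLocations(base: LatLng, faceValues: number[]): LatLng[] {
  const spreadDegrees = 0.05; // roughly a few kilometres; arbitrary choice
  return faceValues.map((v, i) => ({
    lat: base.lat + Math.sin(i) * v * spreadDegrees,
    lng: base.lng + Math.cos(i) * v * spreadDegrees,
  }));
}
```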

The user can then adjust a set of variables for their ‘fitness’ – in other words, how well they match their own experience or desire.

These variables – Anger, Disgust, Fear, Joy and Sadness – correspond to the hypothetical ‘universal’ facial expressions advanced in cognitive psychology. These form the basis of an analytical framework known as FACS, or the Facial Action Coding System.

In application, these are often used to quantify emotional expression, including in non-verbal subjects.

Based on the user’s evaluation of these variables, the tonal feedback is customized to their specific interaction, reflecting their involvement in shaping this data.
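A sketch of how those user-rated ‘fitness’ values might act as weights on the analysed emotion scores before the scores shape the tones; the names and 0–1 ranges are assumptions for illustration.

```typescript
// Sketch: user 'fitness' ratings weighting the five emotion scores before
// they are fed back into the tonal output. Names and ranges are assumptions.
type Emotion = "anger" | "disgust" | "fear" | "joy" | "sadness";

function weightScores(
  scores: Record<Emotion, number>,  // from the analysis, 0..1
  fitness: Record<Emotion, number>, // user ratings, 0..1
): Record<Emotion, number> {
  const weighted = {} as Record<Emotion, number>;
  (Object.keys(scores) as Emotion[]).forEach((e) => {
    weighted[e] = scores[e] * fitness[e];
  });
  return weighted;
}
```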

But on reflection…

Instrumental was intended to be a meditation on the face as interface, taking in a complex scene of both visible and invisible data and delivering back a visually précis’d and arcane interface of dots, lines and aural tones that is also specific to its user, and the duration of a real-time experience.

For us, it functioned as a multimedia Claude Glass, or original ‘black mirror’ (the series is great too): a dark, reflective surface that presents the world as tonal gradations rather than high-definition, technicolour detail.

Embodying processes of memory and recall – those processes fundamental to being a reliable witness – a user’s ‘composition’ is not recorded; it cannot be retrieved and re-experienced.

The initial project (2016-2017) attempted to distil the many ideas, images, observations and frustrations we had encountered, and continue to encounter, into an actionable interactive work: a machine that could translate the problems of algorithmic culture for human sensibilities.

Face Theremin (2019): an interactive website that converts algorithmically detectable facial emotions into audible tones.

It has become apparent to us that we needed to reframe the project as an ongoing endeavour. A space that would allow us to keep a record of more recent iterations of the project – attempts to tease out the threads that led to the initial work through simpler experiments, like the Face Theremin illustrated here. A space that could become a kind of archive of everything that has informed our thinking about it, as well as of new images and ideas that resonate with our research interests. It is our hope that this will not only allow us to make new connections between these various things over time, but also function as a resource for anyone who might be interested in this area.

In a very real sense, we want the online project to function as an instrument (idea and/or object) through which one accomplishes an action: a sort of holding space for agency that also becomes enabling for others in the process.
