‘Instrumental’, 2016-2017

It had all started with @SAPoliceService, the South African Police Service (SAPS) Twitter feed.

South Africa’s rampant crime and SAPS’s notoriously poor operational record have translated into a lack of faith, mistrust and real anger amongst the communities it serves, fuelled by negative media reports.

We became interested in how SAPS seemed to be using social media to ‘evidence’ its activities and successes – including seizures of illegal products, arrests, the honouring of officers and the promotion of community policing initiatives – and also to seek information about criminal activities, with peculiar effects. As the screenshots below show, the SAPS tweets that caught our attention are often accompanied by a supporting visual image of the individual, location and/or object in question. They seem to want to offer, or at least point to, a reliable (if not properly forensic) piece of communication.

However, the images used to represent these events or appeals often lack saliency, which Twitter’s inherently truncated communication style only aggravates. Instead of offering incontrovertible evidence of an event, they invite assumption and inference. But how could they not, regardless of what we might demand of such images? For us, @SAPoliceService on Twitter became a touchstone for addressing the truth-effects of images, language and context.

The interface

Instrumental user interface, first iteration

Conceptually, the original iteration of Instrumental presents a user-driven interpretation of data drawn from an individual body within, and framed as, a site of effective and affective power.

It exploits two IBM Watson Application Programming Interfaces (APIs): Watson AlchemyLanguage, which analyses semantic tone and structure in language, and Watson Visual Recognition, which recognises faces and classifies objects.

Using the data mining capacity of these APIs, Instrumental downloads and stores text and images associated with the @SAPoliceService handle, as well as geolocation data derived from wherever the work is installed.

It uses this data to synthesize a series of tones.

When a user steps in front of the webcam, their face is mapped with a series of dots. These represent distances between detected facial tracking points.

As the user produces different facial expressions, the distances between the points shift, modulating the tones.

This ‘face space’ data in turn generates a series of locations on the map image, drawn out from the original geolocation.
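The path from tracked facial points to tones and map locations can be sketched roughly as below. This is an illustrative reconstruction, not the actual implementation: the landmark pairs, base frequency, scaling constants and coordinate offsets are all invented for the example.

```python
import math

def pairwise_distances(points, pairs):
    """Euclidean distances between selected facial tracking points."""
    return [math.dist(points[i], points[j]) for i, j in pairs]

def modulate_tones(distances, base_freq=220.0, spread=2.0):
    """Map each inter-point distance (normalised to 0..1) to a tone
    frequency: a larger distance pushes the tone further above the base,
    so shifting expressions shift the tones."""
    return [base_freq * (1.0 + spread * d) for d in distances]

def offset_locations(origin, distances, scale=0.01):
    """Derive map coordinates from 'face space' data: each distance
    nudges the original geolocation by a small lat/lon offset."""
    lat, lon = origin
    return [(lat + scale * d, lon - scale * d) for d in distances]

# Example: four tracked points on a face normalised to the unit square,
# and three hypothetical point pairs.
points = [(0.30, 0.40), (0.70, 0.40), (0.50, 0.65), (0.50, 0.85)]
pairs = [(0, 1), (2, 3), (0, 3)]  # eye-to-eye, nose-to-mouth, eye-to-mouth

d = pairwise_distances(points, pairs)
tones = modulate_tones(d)
sites = offset_locations((-33.9249, 18.4241), d)  # Cape Town as origin
```

In the installed work the tones would be synthesized and the locations drawn in real time; here they are simply computed once per frame of tracking data.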

The user can then moderate a set of variables for their ‘fitness’: in other words, how well they match the user’s experience or desire.

These variables – Anger, Disgust, Fear, Joy and Sadness – correspond to the hypothetical ‘universal’ facial expressions advanced in cognitive psychology. These form the basis of an analytical framework known as FACS, or the Facial Action Coding System.

In application, these are often used to quantify emotional expression, including in non-verbal subjects.

Based on the user’s evaluation of these variables, the tonal feedback is customized to their specific interaction, reflecting their involvement in shaping this data.
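One way to read this customisation step is that the user’s fitness ratings act as weights that reshape the tonal output. A minimal sketch, with an invented rating range (0.0–1.0) and mixing rule:

```python
# The five FACS-derived variables exposed to the user.
EMOTIONS = ("Anger", "Disgust", "Fear", "Joy", "Sadness")

def customise(tones, ratings):
    """Rescale each tone by the user's 'fitness' rating (0.0..1.0) for
    the corresponding emotion variable, so the feedback reflects their
    evaluation. One tone per variable is assumed."""
    if set(ratings) != set(EMOTIONS):
        raise ValueError("one rating per emotion variable required")
    weights = [ratings[e] for e in EMOTIONS]
    # A rating of 0.5 leaves a tone unchanged; lower ratings flatten it,
    # higher ratings raise it.
    return [f * (0.5 + w) for f, w in zip(tones, weights)]

tones = [220.0, 247.5, 277.2, 293.7, 330.0]  # one tone per variable
ratings = {"Anger": 0.1, "Disgust": 0.0, "Fear": 0.4,
           "Joy": 0.9, "Sadness": 0.2}
shaped = customise(tones, ratings)
```

The actual mapping between ratings and synthesis parameters in Instrumental may differ; the point is only that the user’s evaluative input feeds directly back into what they hear.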

But on reflection…

Instrumental was intended to be a meditation on the face as interface, taking in a complex scene of both visible and invisible data and delivering back a visually précis’d and arcane interface of dots, lines and aural tones that is also specific to its user, and the duration of a real-time experience.

For us, it functioned as a multimedia Claude Glass, or original ‘black mirror’ (the series is great too): a dark, reflective surface that presents the world as tonal gradations rather than high-definition, technicolour detail.

Embodying processes of memory and recall – those processes fundamental to being a reliable witness – a user’s ‘composition’ is not recorded; it cannot be retrieved and re-experienced.

The initial project (2016-2017) attempted to distil the many ideas, images, observations and frustrations we had encountered, and continue to encounter, into an actionable interactive work: a machine that could translate the problems of algorithmic culture for human sensibilities.

Face Theremin (2019): an interactive website that converts algorithmically detectable facial emotions into audible tones.

It has become apparent to us that we needed to reframe the project as an ongoing endeavour: a space that would allow us to keep a record of more recent iterations of the project – attempts to tease out the threads that led to the initial work through simpler experiments, like the Face Theremin illustrated here – and that could become a kind of archive of everything that informed our thinking about it, as well as of new images and ideas that resonate with our research interests. It is our hope that this will not only allow us to make new connections between these various things over time, but also function as a resource for anyone who might be interested in this area.

In a very real sense we want the online project to function as an instrument (idea and/or object) through which one accomplishes an action; a sort of holding space for agency that also becomes enabling for others in the process.


Re-imagining ‘Instrumental’

Instrumental began life as an experimental digital interface combining data drawn from social media, geolocation, automated facial recognition and face tracking to turn a user’s facial expressions and evaluative data input into a tonal instrument. In many ways, it grew out of two individual research-based creative practices (Smith and Blignaut) and was the first attempt to enact a crossover of some shared interests. You can read about it here.

Between the completion of an MA in Fine Arts (Blignaut) and the end-stages of a PhD analysing the cultures and practices of Forensic Art (Smith), we decided that its first iteration as an interactive artwork had failed; there was too much going on both within it and outside it. But a recent residency at A4 Arts Foundation (Cape Town, South Africa) provided the opportunity to reflect on Instrumental in the context of other projects, both past and current, and on what it should become instead.

So we moved it here, to a blog that will act as a repository for thoughts, ideas and resources that circle around the face as a technology of both identity and identification, and the ethics of how the face is deployed in contemporary technoculture. Some ideas from artist Trevor Paglen get to the heart of it:

…visual culture has changed form. It has become detached from human eyes and has largely become invisible. The overwhelming majority of images are now made by machines for other machines, with humans rarely in the loop. Human visual culture is now an exception to the rule. … Images have begun to intervene in everyday life, their functions changing from representation and mediation, to activations, operations, and enforcement. Invisible images are actively watching us, poking and prodding, guiding our movements, inflicting pain and inducing pleasure.

If we want to understand the invisible world of machine-machine visual culture, he says, we need to unlearn how to see like humans [our emphasis]. We need to learn how to see a parallel universe composed of activations, keypoints, eigenfaces, feature transforms, classifiers, training sets, and the like. But it’s not just as simple as learning a different vocabulary. Formal concepts contain epistemological assumptions, which in turn have ethical consequences. The theoretical concepts we use to analyze visual culture are profoundly misleading when applied to the machinic landscape, producing distortions, vast blind spots, and wild misinterpretations.

Trevor Paglen (2016) Invisible Images (Your Pictures Are Looking at You). The New Inquiry, 8 December, 2016.

Thinking about experiments as holding spaces…

Across different computer vision and machine learning techniques, the face is a central motif or space where ‘personhood’ is replaced by pattern-matching. As a biometric, the face becomes a paradox, embodying both absolute individuality and a negation of the Self. Contemporary advances in machine learning are increasingly blurring distinctions between the visual and the algorithmic, in applications that range from investigation and security to social media, with paradigm-shifting effects. The face thus presents itself as a space of new kinds of negotiation.

The face as technology seems to us to be a particularly potent focus in the contemporary moment, because as humans we not only project so many of our assumptions about identity onto the face, we also invest it with many of our hopes and expectations about what it does or should mean to be human. Its presence, to think along with Emmanuel Levinas, is felt as a kind of receptacle for the formlessness that permeates the tragic and the everyday of our lives.

Welcome, and thanks for being here. We see you.
