the body in\verse


An online, interactive performance that combines biophysical sensing, emotive state sonification, visualization and generative poetry to create the scene

By Alan Macy, Mark-David Hosale, Alysia Michelle James


– What happens

– What the audience sees and hears

– How they participate

The performance provides a deep dive from the world outside of ourselves, dissociated by mediated technology, into the deep interoceptive abyss of our emotive sea. The insults of technological culture have left us wanting, consciously or not, identification with and awareness of an “essential rhythm” that we continue to lose track of now that we live mostly in cities and the aboriginal environment recedes from view. Essential autonomic stimuli have gone missing, so we begin with a conversation.

The performer (Alysia Michelle James) is close to the screen, visible from the shoulders up, like a typical teleconference call. As the audience joins the call, she begins with some ice-breaker questions such as:

“How are you today?”

“Where are you located?” etc.

Audience members can answer these questions through the chat. The questions slowly become more poignant as the performer builds upon their responses.

“Why are you here?”

If an audience member responds with, “I’m here for a meaningful experience,” the performer may respond with a question like, “What is meaningful?”

Other examples of questions in this category:

“Are you pursuing happiness?”

“Why be ‘new normal’?”

“Who were you before?”

The perspective then moves outward with questions such as:

“Are you worried about the fragility of society?”

“Are you worried about the fragility of life?”

“What makes a culture?”

“What does society avoid feeling?”

Then we shift to inward thinking with the intent of inspiring contemplation and introspective thought:

“Why do we feel?”

“What do you avoid feeling?”

“Can you feel other humans?”

“Can you feel your breathing?”

“Can you feel your heart?”

This more focused conversation (happening in audio and in the chat) becomes the foundation for the tone and direction of the rest of the performance.

The remaining performance comprises three elements:

  • An AI algorithm that generates poetry
  • A biophysical sensing system that measures the performer’s physiological state and uses that data to drive the sound, abstract imagery, and generative poetry algorithm
  • A performance that combines movement, sound, abstract imagery, and text

Biophysical Sensing
The science behind this project is based on work first described by James A. Russell in his 1980 paper, “A Circumplex Model of Affect”.

Standard emotional affect valence measures are associated with the facial micro-expressions of the corrugator and zygomaticus muscles. Standard emotional affect arousal measures include both heart and eccrine activity.

Valence measures are indexed by electromyogram and establish the model axis ranging from displeasure to pleasure. Arousal measures are indexed by electrocardiogram and electrodermal activity and establish the model axis ranging from boredom to alarm.

The core valence and arousal measurements constitute a baseline affective-state assessment in accordance with James A. Russell’s 1980 description of the Circumplex Model of Affect, a work that has been cited roughly 14,000 times.
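
To make the two model axes concrete, here is a minimal Python sketch of how normalized facial-EMG, heart-rate, and electrodermal features could be combined into a single (valence, arousal) point on the circumplex. The feature names, normalization ranges, and weightings are illustrative assumptions, not the project’s actual signal-processing pipeline.

# Hedged sketch (illustrative assumptions throughout): combining normalized
# facial-EMG, heart-rate, and electrodermal features into a (valence, arousal)
# point on the circumplex. None of these names or weights come from the
# project's actual pipeline.

from dataclasses import dataclass


@dataclass
class AffectSample:
    corrugator: float    # normalized corrugator EMG amplitude, 0..1 (displeasure cue)
    zygomaticus: float   # normalized zygomaticus EMG amplitude, 0..1 (pleasure cue)
    heart_rate: float    # beats per minute
    eda: float           # normalized electrodermal activity, 0..1


def clamp(x: float, lo: float = -1.0, hi: float = 1.0) -> float:
    return max(lo, min(hi, x))


def circumplex_point(sample: AffectSample,
                     resting_hr: float = 65.0,
                     hr_span: float = 40.0) -> tuple[float, float]:
    """Return (valence, arousal), each in [-1, 1].

    Valence axis (displeasure -> pleasure): zygomaticus minus corrugator EMG.
    Arousal axis (boredom -> alarm): mean of cardiac and electrodermal activation.
    """
    valence = clamp(sample.zygomaticus - sample.corrugator)
    cardiac = clamp((sample.heart_rate - resting_hr) / hr_span)
    electrodermal = clamp(2.0 * sample.eda - 1.0)
    arousal = 0.5 * (cardiac + electrodermal)
    return valence, arousal


if __name__ == "__main__":
    calm_pleasant = AffectSample(corrugator=0.1, zygomaticus=0.6,
                                 heart_rate=62.0, eda=0.2)
    print(circumplex_point(calm_pleasant))  # positive valence, low (negative) arousal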

Since 1980, psychophysiologists have continued to evolve theory regarding the assessment of felt “affective” state, and additional physiological variables have been incorporated into subsequent studies. These measures include electroencephalogram, pulse plethysmogram, blood pressure, blood flow, vascular resistance, and ventilation.

Recent research in bioinformatics suggests that it is possible to assess the real-time emotional state of an individual using a special class of sensors that track human characteristics such as heart rate, muscular movement, eye movement, skin temperature and breathing (Picard 2002), which contribute to an individual’s emotional valence (range of affect from unpleasant to pleasant) and arousal (range of excitement from deactivation to activation) (Cacioppo 2000; Scherer 2005; Chanel et al. 2006; Stickel et al. 2009; Nicolaou et al. 2011; Koelstra et al. 2012). Valence and arousal data from a performer can be used to develop co-collaborative applications that help increase our somatic awareness and mediate the bi-directional emotive connection of a performer with an audience, other performers, and interactive computational systems.
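
As a rough illustration of that bi-directional connection, the sketch below shows one way a performer’s (valence, arousal) coordinates could be translated into control parameters for the sound, abstract imagery, and generative poetry layers. The parameter names, ranges, and mood vocabulary are hypothetical stand-ins for the performance’s actual mappings.

# Hedged sketch (hypothetical mappings): turning a circumplex point into
# control parameters for the sound, imagery, and generative-poetry layers.


def map_affect_to_outputs(valence: float, arousal: float) -> dict:
    """Translate (valence, arousal), each in [-1, 1], into output controls."""
    # Sound: arousal drives tempo, valence drives harmonic brightness.
    tempo_bpm = 60.0 + 30.0 * (arousal + 1.0)        # 60..120 BPM
    brightness = (valence + 1.0) / 2.0               # 0..1 filter opening

    # Imagery: arousal drives motion speed, valence drives color warmth.
    motion_speed = 0.2 + 0.4 * (arousal + 1.0)       # 0.2..1.0 relative speed
    color_warmth = (valence + 1.0) / 2.0             # 0 = cool, 1 = warm

    # Poetry: a coarse mood label conditions the generative text algorithm.
    if valence >= 0.0:
        mood = "elated" if arousal >= 0.0 else "serene"
    else:
        mood = "tense" if arousal >= 0.0 else "melancholy"

    return {
        "sound": {"tempo_bpm": tempo_bpm, "brightness": brightness},
        "imagery": {"motion_speed": motion_speed, "color_warmth": color_warmth},
        "poetry": {"mood": mood},
    }


if __name__ == "__main__":
    # e.g. a mildly pleasant, low-arousal state like the one in the sketch above
    print(map_affect_to_outputs(valence=0.5, arousal=-0.34))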

 

Video

Currents 2021 Performance

Exhibitions

  • 2021: the body in\verse
    On-line, broadcast from SBCAST and York University