Bokowiec, Mark and Wilson-Bokowiec, Julie (2007) Bodycoder - Voice. In: Bodycoder - Voice, 22 November 2007, The Watermans, Brentford, London.
Abstract

HAND-TO-MOUTH (for performer/vocalist, the Bodycoder System & live MSP)

Sometimes only the energy, breath, and raw fractured tonalities of the voice are employed to populate and animate the soundscape of Hand-to-Mouth. Vocal articulations and phrasing imbue the hybridized digital voice with operatic nostalgia while at other times it is reduced to a crackle – radio interference – background radiation – the electrical ‘liveness’ of the subatomic. Mouth/Larynx and the hands of the performer engage in intimate dialogue in an act of sonic puppetry and ventriloquism.

AMERA (Tape: Electro-acoustic music)

Amera is a composition using soundfiles derived from field recordings made in various localities, including Devon and Cornwall. This material was processed using a variety of techniques in MSP; some of it was then further controlled and modulated in the Absynth environment. Recordings made in the boat construction yard at Polruan, and of the mechanical running gear of the funicular railway at Lynmouth, were enriched, extending their timbres and sonic textures.

THE SUICIDED VOICE (for performer/vocalist, the Bodycoder System, live MSP, video streaming & computer graphics)

In this piece the acoustic voice of the performer is “suicided” and given up to digital processing and physical re-embodiment. Dialogues are created between acoustic and digital voices. Gender-specific registers are willfully subverted and fractured. Extended vocal techniques make available unusual acoustic resonances that generate rich processing textures and spiral into new acoustic and physical trajectories, traversing culturally specific boundaries: crossing from the human into the virtual, from the real into the mythical. In The Suicided Voice the voice, transformed and re-embodied within the interactive medium, becomes a fluid originality that is defined only by its own transmutations.

CHIMERA (Tape: Electro-acoustic music)

Chimera was composed using the raw material accumulated during the extensive rehearsal and improvisation sessions for Etch and The Suicided Voice. This material was catalogued and edited into short soundfiles, which were then reprocessed in MSP using granular synthesis and vocoding techniques. The more recognizably human vocalizations were used as the modulator signals for a vocoder patch constructed in MSP, while the carrier material came from the Etch-processed sessions. Chimera, as the name suggests, is therefore an amalgam of the two pieces, an alchemical mix that gives rise to another creature entirely.
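The modulator/carrier relationship described above can be pictured with a minimal channel-vocoder sketch. The example below is written in Python with NumPy/SciPy rather than MSP, and the file names, band count and filter settings are hypothetical placeholders rather than the actual Chimera patch: the vocal material supplies per-band amplitude envelopes (the modulator), which are imposed on the band-filtered Etch material (the carrier).

# Minimal channel-vocoder sketch illustrating the modulator/carrier idea
# described above. Python/NumPy stands in for the MSP patch; file names,
# band count and filter settings are hypothetical.
import numpy as np
from scipy.signal import butter, sosfilt
from scipy.io import wavfile

def envelope(x, sr, cutoff=30.0):
    """Amplitude follower: rectify the signal, then low-pass it."""
    sos = butter(2, cutoff, btype="lowpass", fs=sr, output="sos")
    return sosfilt(sos, np.abs(x))

def vocode(modulator, carrier, sr, bands=16):
    """Impose the modulator's band-by-band envelopes onto the carrier."""
    n = min(len(modulator), len(carrier))
    modulator, carrier = modulator[:n], carrier[:n]
    edges = np.geomspace(80.0, sr * 0.45, bands + 1)   # log-spaced band edges
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
        out += sosfilt(sos, carrier) * envelope(sosfilt(sos, modulator), sr)
    return out / (np.max(np.abs(out)) + 1e-9)           # normalise

# Hypothetical usage: vocal takes as modulator, Etch material as carrier
# (assumes both files share one sample rate).
sr, voice = wavfile.read("vocal_take.wav")
_, etch = wavfile.read("etch_texture.wav")
voice = voice.mean(axis=1) if voice.ndim > 1 else voice  # fold to mono
etch = etch.mean(axis=1) if etch.ndim > 1 else etch
mix = vocode(voice.astype(float), etch.astype(float), sr)
wavfile.write("chimera_sketch.wav", sr, (mix * 32767).astype(np.int16))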

ETCH (for performer/vocalist, the Bodycoder System, live MSP, & computer graphics)

In ETCH extended vocal techniques, including Yakut, open-throat, overtone and bel canto singing, are coupled with live interactive sound processing and manipulation. ETCH calls forth fauna – building soundscapes of glitch infestations, howler tones, clustering sonic-amphibians, and swirling flocks of synthetic granular flyers. The visual content for this piece is created in a variety of 2D and 3D packages. In ETCH video content is manipulated on screen by the performer using the same interactive protocols that govern sound manipulation. Visual content is mapped to the physical gestures of the performer, and its live manipulation forms a significant part of the piece. As the performer conjures extraordinary voices out of the digital realm, so she weaves a multi-layered visual environment. In ETCH sound, image, and gesture combine to form a powerful ‘linguistic intent’. Etch was created in residency at the Confederation Centre of the Arts on Prince Edward Island, Canada.

TECHNICAL & AESTHETICS

BODYCODER - VOICE features the Bodycoder System©, the first generation of which was developed by the artists in 1995/6. The Bodycoder interface is a flexible sensor array worn on the body of a performer that sends movement-generated data to an MSP environment via radio. Movement data can be mapped in a variety of ways to the live processing and manipulation of sound. All processed sound is derived from the live, acoustic voice of the performer. The Bodycoder also provides the performer with real-time access to processing parameters and patches within the MSP environment, as well as control over the sensitivity of individual sensors. In this way all vocalisations, decision-making, navigation of the MSP environment and qualities of expressivity are selected, initiated and manipulated by the performer.

The primary expressive functionality of the Bodycoder System is Kinaesonic. The term Kinaesonic is a compound of two words: kinaesthetic, referring to the movement principles of the body, and sonic, meaning sound. In terms of interactive technology, Kinaesonic refers to the one-to-one mapping of sonic effects to bodily movements; in our practice this is usually executed in real time. There are no pre-recorded soundfiles used in the live pieces of this program and no sound manipulations external to the performer’s control.
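The one-to-one kinaesonic mapping can be illustrated with a short sketch. The Python code below is purely illustrative and is not the authors' MSP environment; the sensor names, value ranges and target parameters are hypothetical. It shows a single frame of normalised sensor readings being translated, one sensor per parameter, into processing-parameter updates, with a performer-adjustable sensitivity factor per sensor.

# Illustrative sketch of one-to-one kinaesonic mapping: each sensor value
# drives exactly one processing parameter. Sensor names, ranges and target
# parameters are hypothetical, not those of the Bodycoder System.
from dataclasses import dataclass

@dataclass
class Mapping:
    parameter: str             # target parameter in a hypothetical processing patch
    out_lo: float              # parameter value when the sensor reads 0.0
    out_hi: float              # parameter value when the sensor reads 1.0
    sensitivity: float = 1.0   # performer-adjustable scaling factor

    def apply(self, sensor_value: float) -> float:
        v = max(0.0, min(1.0, sensor_value * self.sensitivity))
        return self.out_lo + v * (self.out_hi - self.out_lo)

# One-to-one table: one sensor, one sonic parameter.
MAPPINGS = {
    "right_elbow_bend": Mapping("granular_grain_size_ms", 5.0, 250.0),
    "left_wrist_flex":  Mapping("filter_cutoff_hz", 200.0, 8000.0),
}

def on_sensor_frame(frame: dict) -> dict:
    """Translate one frame of normalised sensor readings into parameter updates."""
    return {m.parameter: m.apply(frame[name])
            for name, m in MAPPINGS.items() if name in frame}

if __name__ == "__main__":
    # Simulated frame standing in for data arriving from the body-worn array.
    print(on_sensor_frame({"right_elbow_bend": 0.4, "left_wrist_flex": 0.9}))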
