This chapter examines the technical and compositional methodologies used in the
realisation of V’Oct (Ritual) (2011), with particular reference to the choices made in
mapping sensor elements to various spatialisation functions. Kinaesonics [1] will be
discussed in relation to the coding of real-time one-to-one mapping of sound to
gesture, and to its expression in hardware and software design.
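The one-to-one mapping described above can be illustrated with a minimal sketch. This is not code from the Bodycoder System, which is realised in Max/MSP; the sensor range and the equal-power panning law here are assumptions chosen for illustration. The idea is simply that each raw sensor value is scaled linearly to a parameter range and applied directly, without intermediate interpretation, to a spatialisation parameter.

```python
import math

# Assumed sensor range: a 10-bit reading (0-1023), as from a typical
# microcontroller ADC. The actual Bodycoder sensor ranges may differ.
SENSOR_MIN, SENSOR_MAX = 0, 1023

def scale(value, out_min, out_max):
    """One-to-one linear mapping of a raw sensor reading to a parameter range."""
    t = (value - SENSOR_MIN) / (SENSOR_MAX - SENSOR_MIN)
    return out_min + t * (out_max - out_min)

def equal_power_pan(position):
    """Equal-power gains for a stereo pair; position runs from 0.0 (left)
    to 1.0 (right). Constant perceived loudness across the pan."""
    angle = position * math.pi / 2
    return math.cos(angle), math.sin(angle)

# A sensor at mid-range places the sound mid-field:
pos = scale(512, 0.0, 1.0)
left_gain, right_gain = equal_power_pan(pos)
```

Because the mapping is one-to-one, the performer's gesture and the resulting spatial movement remain directly legible to an audience; no gestural data is reinterpreted between sensor and sound.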
Composing for kinaesonic interaction is an interdisciplinary activity that is not
confined to music alone. In terms of my own work with the Bodycoder System,
composition extends to the framing of the physicality of the performer: their
kinaesonic gestural control of live sound processing, spatialisation and navigation of a
Max/MSP environment in performance. Other compositional layers include the live
automation of sound diffusion (the physical movement of sound within a
multichannel speaker system), the programming of a range of evolving real-time
instances initiated by the performer, and the design of a large palette of
sound-processing objects.