About audio-visual composition
Generally speaking, when multi–media composers face the issue of connecting aural and visual elements, they work along a spectrum that spans from a “high level” to a “low level” perceptual connection between the two. The former is an abstract connection: a scene from a film is accompanied by a soundtrack that musically suggests what kind of emotion the viewer “should” feel. At this end of the spectrum, the multi–media creator can work in a procedurally loose fashion, freely associating sounds and images on the basis of a general aesthetic idea. At the “low level” end of the spectrum, by contrast, the multi–media creator aims at a tight-knit perceptual relation between sounds and visuals, so that virtually all data represented in the aural realm find a correspondent in the visual realm (and vice versa).
This approach comes with a fair amount of conceptual and technical issues of its own. Conceptually, one group of issues to grapple with concerns the mapping of data into sounds and visuals. If, for example, the goal is to represent a musical pattern (rhythm, pitches, timbre), how does one map the musical data into visuals? Exploring different solutions to such problems and matching the results to their perceptual outcomes is one of the more compelling aspects of this kind of research. But, as my ongoing efforts have shown me, it is equally crucial for the multi–media composer to carefully choose programming tools and practices best suited to that aesthetic goal. Not unlike a traditional composer who adjusts her writing according to the instrument or combination of instruments she is writing for, creating audio–visual compositions of the kind I have just described requires the multi–media composer to program the pieces in a way that is idiomatic to the chosen platform, so that she can achieve the quality of outcome needed for a given idea to come through perceptually.
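As a minimal illustration of what such a mapping can look like, one might translate each note event into a shape on screen. The function below is a simplified sketch of my own devising, not the actual scheme used in the pieces: it maps pitch to vertical position, onset to horizontal position, duration to width, and velocity to opacity.

```python
# Illustrative sketch: mapping one discrete note event to a rectangle.
# The parameter names and the particular mapping are assumptions made
# for the sake of example, not the system used in the works discussed.

def note_to_shape(pitch, onset, duration, velocity,
                  pitch_range=(36, 96), canvas=(1280, 720), total_time=60.0):
    """Map a note event to visual parameters.

    pitch    -> vertical position (low notes near the bottom)
    onset    -> horizontal position (time runs left to right)
    duration -> width
    velocity -> opacity (0.0 - 1.0, from MIDI-style 0-127)
    """
    lo, hi = pitch_range
    w, h = canvas
    y = h - (pitch - lo) / (hi - lo) * h   # invert: low pitch sits low on screen
    x = onset / total_time * w
    width = duration / total_time * w
    alpha = velocity / 127.0
    return {"x": x, "y": y, "w": width, "alpha": alpha}
```

Each choice in such a mapping (linear versus logarithmic pitch scaling, opacity versus size for dynamics) produces a different perceptual outcome, which is precisely the design space the paragraph above describes.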
My audiovisual works draw on a set of tools and practices I have developed, and they differ from other kinds of audio–visual work in fundamental ways. As I was zeroing in on the issue of a “low level” connection between music and image, I realized that the common approach to audio-visualization, usually described as “audio–reactive,” was proving excessively cumbersome and computationally expensive: audio–reactive algorithms can severely limit the possibilities of representing musical events discretely, especially in the case of polyphonic music. Since my approach to composition is algorithmic at its core, I am able to bypass the audio–reactive step altogether by feeding matrices of raw musical data, gathered from my algorithms, directly into the rendering stage.
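The core of this bypass can be sketched as follows: a single event matrix (one row per note, holding onset, duration, frequency, and amplitude) is rendered directly as audio samples and as draw commands, with no FFT or onset detection in between. Everything here, from the function names to the naive sine rendering, is an illustrative assumption rather than the actual implementation.

```python
import math

# Sketch of the "bypass" idea: one matrix of raw musical data drives
# both the audio and the visual rendering. No audio analysis is needed
# because the symbolic data never stops being available.

SR = 8000  # deliberately low sample rate to keep the sketch fast

def render_audio(events, seconds):
    """Naive additive rendering of the event matrix into a sample buffer."""
    buf = [0.0] * int(seconds * SR)
    for onset, dur, freq, amp in events:
        start, end = int(onset * SR), int((onset + dur) * SR)
        for i in range(start, min(end, len(buf))):
            buf[i] += amp * math.sin(2 * math.pi * freq * (i - start) / SR)
    return buf

def render_visuals(events, seconds, width=800, height=600):
    """The same matrix rendered as rectangles: time -> x, frequency -> y."""
    shapes = []
    for onset, dur, freq, amp in events:
        shapes.append({
            "x": onset / seconds * width,
            "w": dur / seconds * width,
            "y": height - min(freq, 2000.0) / 2000.0 * height,
            "alpha": amp,
        })
    return shapes

# Two overlapping-free note events: (onset, duration, frequency, amplitude)
events = [(0.0, 0.5, 440.0, 0.8), (0.5, 0.5, 660.0, 0.5)]
audio = render_audio(events, 1.0)
shapes = render_visuals(events, 1.0)
```

Because both renderers read the same matrix, every discrete event is guaranteed a one-to-one visual correspondent, which an analysis of the mixed audio signal (especially polyphonic audio) cannot guarantee.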
Variazioni su Space Invader (for two cellos, electronics and visuals, mov.1)
This video constitutes the audio–visual component of my most recent piece for two cellos, visuals, and electronics. It was commissioned by the Nebula Ensemble and is currently in rehearsal for the upcoming performance in Denver, Colorado, on April 27. In this work I focus on creating a tight-knit, “low level” perceptual connection between aural and visual elements. While the sounds are timbrally manipulated re-syntheses of prerecorded cello sounds, the visuals are careful representations of those sounds in their innermost detail. To achieve this kind of nuance, I store and process the “composition” as a set of matrices that can then be rendered as sound and visuals simultaneously; this approach lets me avoid the cumbersome and expensive routine of analyzing sound, usually a required step in most audio–visual pieces.
These are the visuals and electronics for the first and second movements of my recent commission by the Nebula Ensemble in Denver. The piece calls for two cellos, electronics, and visuals, and will be performed in April 2018.