Introduction: my approach to programming audiovisuals
My audiovisual work draws on a set of tools and practices I have developed, and it differs from other kinds of audiovisual work in fundamental ways. As I zeroed in on the issue of a "low-level" connection between music and image, I realized that the common approach to audio-visualization, usually called "audio-reactive," was excessively cumbersome and computationally expensive: audio-reactive algorithms can severely limit the possibilities of representing musical events discretely, especially in the case of polyphonic music. Since my approach to composition is algorithmic at its core, I can bypass the audio-reactive step entirely by feeding matrices of raw musical data, gathered directly from my compositional algorithms, into the rendering stage.
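As an illustrative sketch only (not my actual code), the idea of rendering directly from a matrix of musical data can be pictured like this: each row of a hypothetical "score" matrix holds an event's onset, MIDI pitch, duration, and amplitude, and the same row is mapped to both audio and visual parameters, so no audio analysis (FFT, onset detection) is ever required.

```python
# Sketch: a score matrix of raw musical data drives audio and visuals at once.
# One row per event: [onset_s, pitch_midi, duration_s, amplitude].
# The specific mappings below are hypothetical examples, not a fixed scheme.

def midi_to_hz(pitch):
    """Convert a MIDI pitch number to frequency in Hz (A4 = 69 = 440 Hz)."""
    return 440.0 * 2 ** ((pitch - 69) / 12)

def render_event(onset, pitch, dur, amp):
    """Map one matrix row to paired audio and visual parameters."""
    audio = {"freq_hz": midi_to_hz(pitch), "start_s": onset,
             "dur_s": dur, "gain": amp}
    # Example visual mapping: onset -> horizontal position,
    # pitch -> vertical position, amplitude -> size.
    visual = {"x": onset * 100.0, "y": (pitch - 36) * 5.0,
              "size": amp * 40.0}
    return audio, visual

score = [
    [0.0, 60, 0.5, 0.8],   # middle C
    [0.5, 67, 0.5, 0.6],   # G above
    [1.0, 72, 1.0, 0.9],   # C an octave higher
]

events = [render_event(*row) for row in score]
```

Because sound and image are generated from the same discrete data, every note remains individually addressable, even in dense polyphony, which is exactly what audio-reactive analysis struggles to recover.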
Audiovisual Sample 1
Ibuprofen is an audiovisual piece for 360° projection and 16-channel audio, submitted to CobeFest at Virginia Tech. The sample posted below is an excerpt rendered for flat screen and stereo audio. The audiovisual objects in the video are designed to rotate around the audience, both as visuals and as sound. This work is a good representation of the current state of my research: its interest in the audiovisual binding of objects, and in how that binding affects the perception of the musical texture.
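A minimal sketch of one such rotating object, under assumptions of my own (a generic equal-power panning law over a ring of 16 speakers, not necessarily the technique used in the piece): a single angle drives both the on-screen position and the per-speaker gains, so image and sound rotate around the audience together.

```python
import math

# Hypothetical rotating audiovisual object: one angle controls both the
# visual position and the gains of a 16-speaker ring (equal-power panning).

N_SPEAKERS = 16

def speaker_gains(angle, n=N_SPEAKERS):
    """Equal-power gains for a source at `angle` (radians) on a speaker ring."""
    gains = []
    width = 2 * math.pi / n          # angular gap between adjacent speakers
    for k in range(n):
        spk_angle = 2 * math.pi * k / n
        # Angular distance from source to speaker, wrapped to [-pi, pi]
        d = math.atan2(math.sin(angle - spk_angle), math.cos(angle - spk_angle))
        # Cosine lobe that reaches zero at the neighbouring speakers
        g = math.cos(d / width * (math.pi / 2)) if abs(d) < width else 0.0
        gains.append(max(g, 0.0))
    # Normalise so total power stays constant as the object rotates
    power = math.sqrt(sum(g * g for g in gains)) or 1.0
    return [g / power for g in gains]

def visual_position(angle, radius=1.0):
    """Screen-space position of the same object at the same angle."""
    return (radius * math.cos(angle), radius * math.sin(angle))
```

Driving both functions from one shared angle per frame is one simple way to keep the aural and visual trajectories of an object locked together.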
Audiovisual Sample 2
Variazioni su Space Invader (for two cellos, electronics and visuals, mov.1)
This video constitutes the audiovisual component of my most recent piece for two cellos, visuals, and electronics. It was commissioned by the Nebula Ensemble and is currently in rehearsal for the upcoming performance in Denver, Colorado, on April 27. In this work I focus on creating a tight-knit, "low-level" perceptual connection between aural and visual elements. While the sounds are timbrally manipulated re-syntheses of prerecorded cello sounds, the visuals are careful representations of those sounds in their innermost detail. To achieve this kind of nuance, I store and process the "composition" as a set of matrices that can then be rendered as sound and visuals simultaneously; this approach lets me avoid the cumbersome and expensive routine of analyzing sound, usually a required step in most audiovisual pieces.
In my acoustic writing I develop generative algorithms that I tweak and transcribe into acoustic scores. These processes are described in detail in the IRCAM publication The OM Composer's Book Vol. 3 (click below to read).
Acoustic sample 1
Variazioni su AlDoClEmenti is a piece for chamber orchestra written using generative algorithms designed in OpenMusic.
First Movement: Invenzione
Second Movement: Sinfonia
Acoustic sample 2
He-li-Be-B is a saxophone quartet performed by the Radnofski Quartet in 2011.
Group Flow (by Mikey Seigel)
Composing Visual Music
Perspectives of New Music 54, no. 1
GitHub repository for mic_externals