Application UofVirginia


Acoustic Music

Variazioni su Aldo Clementi is a piece for chamber orchestra written using generative algorithms designed in OpenMusic. In the IRCAM publication "The OM Composer's Book Vol. 3" I illustrate how I composed the first two movements of this piece.


The score can be viewed here:


First Movement: Invenzione

Second Movement: Sinfonia

Visual Music & Interactive

WAVEFORMS: audio-visual installation (excerpt)

Generally speaking, when multimedia composers face the issue of connecting aural and visual elements, they can work along a spectrum that spans between a "high level" and a "low level" perceptual connection between the two. The former is an abstract connection: a scene from a film is accompanied by a soundtrack that musically suggests what kind of emotion the viewer "should" feel. At this end of the spectrum, the multimedia creator can work in a procedurally loose fashion, freely associating sounds and images based on a general aesthetic idea. At the "low level" end of the spectrum, by contrast, the multimedia creator aims at a tight-knit perceptual relation between sounds and visuals, so that virtually every datum represented in the aural realm finds a counterpart in the visual realm (and vice versa).

This approach comes with a fair amount of conceptual and technical issues of its own. Conceptually, one group of issues to grapple with is the idea of mapping data into sounds and visuals. If, for example, the goal is to represent a musical pattern (rhythm, pitches, timbre), how does one map the musical data into visuals? Exploring different solutions to such problems and matching the results to their perceptual outcomes is one of the more compelling aspects of this kind of research. But, as my ongoing efforts have shown me, it is equally crucial for the multimedia composer to carefully choose programming tools and practices best suited to that aesthetic goal. Not unlike a traditional composer who adjusts her writing according to the instrument or combination of instruments she is writing for, creating audio-visual compositions of the kind I have just described requires the multimedia composer to program the pieces in a way that is idiomatic to the chosen platform, so that she can achieve the quality of outcome crucial for a given idea to come through perceptually.
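To make the mapping problem concrete, here is a minimal sketch of one possible mapping from a musical event to visual parameters. The function name, parameter ranges, and the particular pitch-to-hue and amplitude-to-size choices are my illustrative assumptions, not the mappings used in any of the pieces described here.

```python
# Hypothetical sketch: one way to map a musical event (pitch, amplitude,
# onset) onto visual drawing parameters (hue, size, horizontal position).
# Ranges and names are illustrative assumptions.

def note_to_visual(pitch, amplitude, onset, total_duration):
    """Map one musical event to a dict of drawing parameters."""
    # MIDI pitch 21-108 (piano range) -> hue in [0.0, 1.0]
    hue = (pitch - 21) / (108 - 21)
    # amplitude in [0.0, 1.0] -> circle radius in pixels
    radius = 4 + amplitude * 40
    # onset time -> normalized x position across the canvas
    x = onset / total_duration
    return {"hue": hue, "radius": radius, "x": x}

event = note_to_visual(pitch=60, amplitude=0.5, onset=2.0, total_duration=8.0)
```

The interest, of course, lies less in any single mapping than in comparing how different mappings read perceptually when rendered side by side.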

This installation draws on a series of tools and practices I have developed and differs from other kinds of audio-visual work in fundamental ways. As I was zeroing in on the issue of a "low level" connection between music and image, I realized that the common approach to audio-visualization, usually described as "audio-reactive," was proving excessively cumbersome and computationally expensive: audio-reactive algorithms can severely limit the possibilities of representing musical events in a discrete manner, especially in the case of polyphonic music. Since my approach to composition is algorithmic at its core, I am able to bypass the audio-reactive step entirely by feeding matrices of raw musical data, gathered directly from my algorithms, into the rendering process.
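The idea of driving both renderers from one matrix, rather than analyzing an audio signal, can be sketched as follows. The column layout and function names are assumptions for illustration only; the point is that the same rows of raw event data become audio parameters and visual parameters with no analysis (FFT, onset detection) in between.

```python
# Hypothetical sketch of the matrix-based approach: one matrix of raw
# musical data drives both the audio and the visual renderer, so no
# audio-analysis step is needed. Column layout is an assumption:
# each row is [onset_sec, midi_pitch, amplitude, duration_sec].

score = [
    [0.0, 60, 0.8, 1.0],
    [0.5, 64, 0.6, 0.5],
    [1.0, 67, 0.9, 2.0],
]

def to_audio_events(matrix):
    # oscillator frequency from MIDI pitch: f = 440 * 2^((p - 69) / 12)
    return [
        {"start": row[0],
         "freq": 440.0 * 2 ** ((row[1] - 69) / 12),
         "gain": row[2],
         "dur": row[3]}
        for row in matrix
    ]

def to_visual_events(matrix):
    # the same rows become discrete shapes: every note stays
    # individually addressable, even in polyphony
    return [
        {"x": row[0], "y": row[1], "size": row[2] * 50, "life": row[3]}
        for row in matrix
    ]

audio = to_audio_events(score)
visuals = to_visual_events(score)
```

Because each note remains a discrete row, polyphonic textures pose no special problem, which is exactly where signal-analysis approaches tend to break down.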


Variazioni su Space Invader (for two cellos, electronics and visuals, mov.1)

This video constitutes the audio-visual component of my most recent piece for two cellos, electronics and visuals. It was commissioned by the Nebula Ensemble and is currently in the rehearsal stage for the upcoming performance in Denver, Colorado, on April 27. In this work I focus on creating a tight-knit, "low level" perceptual connection between aural and visual elements. While the sounds are timbrally manipulated re-syntheses of prerecorded cello sounds, the visuals are careful representations of those sounds in their innermost detail. To achieve this kind of nuance, I store and process the "composition" as a set of matrices that can then be rendered as sound and visuals simultaneously; this approach allows me to avoid the cumbersome and expensive routine of analyzing sound, usually a required step for most audio-visual pieces.



In this little audio-visual divertissement I created the "insects" from scratch (no 3D model is employed) and animated them entirely using sound parameters. The wing-flapping frequency is used as the frequency of an oscillator, which is then granulated to increase its spectral density. I employed the HOA library to place the mosquitoes in a sonic space that corresponds to the virtual space, and the XRAY package to generate the water surface. Although this is a work in progress, I believe it illustrates some future developments of my work.
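The oscillator-plus-granulation step can be sketched in a few lines. This is a minimal stdlib approximation, not the Max/HOA patch itself: the flap rate, grain length, and grain count are invented values, and the granulator simply scatters Hann-windowed grains of the source tone across an output buffer to thicken the spectrum.

```python
import math
import random

# Hypothetical sketch: use an insect's wing-flap rate as the frequency
# of a sine oscillator, then granulate the result to increase spectral
# density. All parameter values are illustrative assumptions.

SR = 44100  # sample rate in Hz

def sine(freq, dur):
    """Render a sine tone of the given frequency and duration."""
    n = int(SR * dur)
    return [math.sin(2 * math.pi * freq * i / SR) for i in range(n)]

def granulate(source, grain_ms=30, n_grains=200, seed=1):
    """Scatter short windowed grains of `source` across an output buffer."""
    rng = random.Random(seed)
    grain_len = int(SR * grain_ms / 1000)
    out = [0.0] * len(source)
    for _ in range(n_grains):
        src = rng.randrange(len(source) - grain_len)  # read position
        dst = rng.randrange(len(source) - grain_len)  # write position
        for i in range(grain_len):
            # Hann window avoids clicks at grain boundaries
            w = 0.5 - 0.5 * math.cos(2 * math.pi * i / grain_len)
            out[dst + i] += source[src + i] * w
    return out

flap_hz = 600.0           # a plausible mosquito wing-beat rate (assumed)
tone = sine(flap_hz, 1.0)
cloud = granulate(tone)
```

Overlapping grains read from random positions in the tone smear its phase, so the granulated "cloud" is spectrally denser than the bare oscillator while keeping the flap rate audible as its center.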



Group Flow (by Mikey Siegel)

Recently I joined forces with Mikey Siegel's Consciousness Hacking team to work on Group Flow, for which I have developed sonification patches based mostly on granular techniques.

Video Tutorials

Composing Visual Music


Selected Publications

Perspectives of New Music 54, no. 1



GitHub repository for mic_externals


Presentation: CINC