Splice Festival

Michele Zaccagnini
Assistant Professor, University of Virginia
Email: mz3vq@virginia.edu
Phone: 857-210-9992

Bio: http://music.virginia.edu/content/zaccagnini

Submission type:
New piece for Splice Ensemble

Submission title:
“For My Sins”

Equipment I will provide:
Laptop
Pedal interface

List of equipment provided by Splice Ensemble:
Microphones (1 for trumpet, 1 for piano, 3-4 for drum set)
Speaker system (quad or 5.1)

Program notes:
“For My Sins” (August 2019) is a piece for trumpet, piano, drum set and electronics.
The full title of the piece is actually “For My Sins You Now Think Hellish Excruciation Seems Indeed Suited,” whose initial letters spell out the technique I used to generate the harmonies and pitch material. The rhythmic organization draws on my research and obsession with sound densities, repetitiveness and hypnotism.
My approach to composition is deeply visual: I think of the movements of the piece as canvases with no beginning or end, simply sections of texture containing no telos, no goal.

Score First Movement

https://michelezaccagnini.blog/wp-content/uploads/2019/09/ForMySins_mov1.pdf

Audio mock-up First Movement (with electronics)

Score Second Movement

https://michelezaccagnini.blog/wp-content/uploads/2019/09/ForMySins_mov2.pdf

Audio mock-up Second Movement (with electronics)

Piece description and collaboration aspects:

For My Sins

“For My Sins” is a piece for trumpet, piano, drum set and electronics. It is in two movements and was written specifically for the Splice Ensemble.
The electronics are a combination of fixed media and live processing. Resonant models contained in a Max patch are triggered by envelope followers, and the set of resonant models changes as the piece proceeds.

Collaboration:

I plan to collaborate with the ensemble on three aspects of the piece, presented here in order of importance:

  1. Electronics/acoustic balance
  2. Sound spatialization
  3. Visualization (optional)

I would like to collaborate with the ensemble in finding the correct sound balance to achieve optimal blending between the instruments and the electronic parts.
This piece makes heavy use of resonant models of acoustic sounds to generate the electronic part. The electronics “shadow” the instruments by adding acoustic sound re-synthesis and resonances.
The resonant models are drawn from trumpet and timpani sounds so that the electronics share some of the spectral character of the actual instruments.
A simple way to improve the homogeneity between electronics and acoustic sound would be to redraw the resonant models based on samples of the actual performance instruments.
This has the advantage of allowing the initial collaboration to happen remotely.
More specifically, in the submitted mock-up the resonant models filter digital noise; ideally, this noise should be at least partially replaced by the instruments’ own sounds.
As in any situation that involves live filtering of an instrument’s own sound, feedback is a likely outcome: I plan to balance each instrument’s direct feed with prerecorded sounds to avoid feedback issues.
Ultimately, the instruments’ direct feeds can be avoided altogether, aside from the envelope following.
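To make the routing concrete, here is a minimal sketch in the browser’s WebAudio API. The actual electronics are a Max patch, so the node choices, frequencies and scaling values below are illustrative assumptions only; the point is simply that the live microphone feeds an envelope follower and nothing else, while the resonant models (approximated here as a bank of narrow band-pass filters) process prerecorded material rather than the direct feed, which avoids feedback by construction.

```typescript
// Sketch only: live input drives an envelope follower, prerecorded material
// excites the "resonant models" (band-pass bank). No direct feed reaches the
// speakers. Frequencies, Q and scaling are illustrative assumptions.
const ctx = new AudioContext();

// Hypothetical partials of a trumpet-like resonant model.
const modelFrequencies = [233, 466, 699, 932, 1165];

async function setup(prerecorded: AudioBuffer): Promise<void> {
  // Live input: used only for envelope following, never routed to output.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const mic = ctx.createMediaStreamSource(stream);
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 1024;
  mic.connect(analyser);

  // Prerecorded material excites the resonator bank instead of the direct feed.
  const source = ctx.createBufferSource();
  source.buffer = prerecorded;
  source.loop = true;

  const outGain = ctx.createGain();
  outGain.gain.value = 0;

  for (const freq of modelFrequencies) {
    const resonator = ctx.createBiquadFilter();
    resonator.type = "bandpass";
    resonator.frequency.value = freq;
    resonator.Q.value = 30; // narrow band, standing in for a resonant model
    source.connect(resonator);
    resonator.connect(outGain);
  }
  outGain.connect(ctx.destination);
  source.start();

  // Envelope follower: RMS of the live signal scales the resonator output.
  const buf = new Float32Array(analyser.fftSize);
  const follow = () => {
    analyser.getFloatTimeDomainData(buf);
    let sum = 0;
    for (const s of buf) sum += s * s;
    const rms = Math.sqrt(sum / buf.length);
    outGain.gain.setTargetAtTime(rms * 4, ctx.currentTime, 0.05);
    requestAnimationFrame(follow);
  };
  follow();
}
```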

The second aspect of collaboration would be to spatialize both electronic and acoustic sounds. While I would like the instruments to be mostly acoustic, I would also like to have some of their sound come out of speakers.
I plan to spatialize all sounds in either a quad setup or 5.1. The submitted mock-up has no spatialization whatsoever, and I believe adding it would greatly improve the outcome of the piece.
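As a rough illustration of the quad case (5.1 would be analogous), the WebAudio sketch below pans a source across four channels with an equal-power law. The actual spatialization will live in the Max patch and depend on the hall’s rig; the speaker ordering and panning law here are my assumptions, not a specification.

```typescript
// Sketch only: equal-power panning of one source over a quad rig
// (FL, FR, RL, RR ordering is assumed).
const ctx = new AudioContext();

function makeQuadPanner(input: AudioNode): (x: number, y: number) => void {
  const merger = ctx.createChannelMerger(4); // assumed order: FL, FR, RL, RR
  const gains = [0, 1, 2, 3].map((ch) => {
    const g = ctx.createGain();
    input.connect(g);
    g.connect(merger, 0, ch);
    return g;
  });

  // x in [-1, 1]: left to right. y in [-1, 1]: rear to front.
  const setPosition = (x: number, y: number) => {
    const lr = (x + 1) * 0.25 * Math.PI; // 0..pi/2
    const fb = (y + 1) * 0.25 * Math.PI;
    const [l, r] = [Math.cos(lr), Math.sin(lr)];
    const [front, rear] = [Math.sin(fb), Math.cos(fb)];
    const levels = [l * front, r * front, l * rear, r * rear];
    levels.forEach((v, ch) =>
      gains[ch].gain.setTargetAtTime(v, ctx.currentTime, 0.02)
    );
  };

  // Route to a 4-channel output if the hardware allows it.
  if (ctx.destination.maxChannelCount >= 4) {
    ctx.destination.channelCount = 4;
  }
  merger.connect(ctx.destination);
  return setPosition;
}
```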

I would like to propose a third element of collaboration: a visual component for the piece. This would happen only if the concert setup allows for a projector.
Please note that this is not a crucial aspect of the composition: the piece is fully autonomous from its visuals. If it is technically feasible within the concert setup,
I would create a second Max patch that generates a real-time visualization to be projected during the performance.

Composing Visual Music Workshop

The “Composing Visual Music” workshop focuses on the relation between music creation and its visualization from the composer’s perspective. As composers, can we create audio-visual pieces that truly exist at the boundary of the aural and visual senses?

Let us first stipulate that in multimedia works there can be two general kinds of connection between sound and visuals. I call a connection “high level” when it is fairly abstract, as when a composer tells the audience that the inspiration for the piece they are about to hear is a particular painting, or when a film soundtrack musically suggests the emotion a scene is supposed to convey. A “low level” connection, on the other hand, aims at directly translating musical events that take place in time into visual events that take place in (virtual) space; in this latter case, the two elements are robustly linked at a perceptual level. “Composing Visual Music” explores this latter kind of connection.

My approach to visualizing musical textures draws from my practice as an algorithmic composer. Algorithmic composition is the practice of establishing a set of rules for a process to take place, given an initial set of inputs. This approach is intuitively well suited to creating audio-visual textures since it allows a composition to be “stored” as a multidimensional set of data. The data can then be rendered to sound and image by simply (or not so simply) mapping it to aural and visual parameters.

The approach to visualizing music that I will illustrate in the workshop differs from the practice most in vogue today. Today’s audio-visual aficionado will more often than not find experiments in music visualization based on so-called “audio-reactive” algorithms, which render visuals from more or less complex spectral analysis. One aim of my workshop is to show how the audio-reactive approach presents major drawbacks when the goal of the visualization is a robust representation of the music. Given the inherent complexity of sound, audio-reactive algorithms can severely limit the possibilities of representing musical events in a discrete manner, especially in the case of polyphonic music. The non-audio-reactive approach that I will describe in the workshop is, by contrast, better suited to a detailed representation of the musical composition itself rather than of its resulting aural output.

Incidentally, this approach is also computationally less expensive than the audio-reactive one, since no processing power has to be dedicated to sound analysis. This is particularly interesting when thinking of current developments in music making: a less computationally expensive process means the results of the practice can be more ubiquitous, since they can be realized with programming environments such as the WebAudio and WebGL APIs, which run in the browser and are not suited to complex DSP algorithms.

I believe that composing audio-visual pieces that create a robust connection between two perceptual realms is a nascent and exciting field of exploration for the modern composer, and this workshop is a way of exploring possible developments in the field.
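As a concrete illustration of this “low level,” non-audio-reactive mapping, the sketch below renders the same stored event data to both sound (WebAudio) and image (a 2D canvas) without any spectral analysis. The event fields and the particular mappings chosen are illustrative assumptions, not the actual workshop materials.

```typescript
// Sketch only: one stored data set drives both the aural and the visual
// rendering, so no audio analysis is needed. Fields and mappings are assumed.
interface NoteEvent {
  time: number;      // onset in seconds
  pitch: number;     // MIDI note number
  amplitude: number; // 0..1
  duration: number;  // seconds
}

const ctx = new AudioContext();
const canvas = document.querySelector("canvas")!;
const ctx2d = canvas.getContext("2d")!;

function render(events: NoteEvent[]): void {
  const t0 = ctx.currentTime + 0.1;
  for (const e of events) {
    // Aural rendering: one oscillator per event.
    const osc = ctx.createOscillator();
    const gain = ctx.createGain();
    osc.frequency.value = 440 * Math.pow(2, (e.pitch - 69) / 12);
    gain.gain.value = e.amplitude;
    osc.connect(gain).connect(ctx.destination);
    osc.start(t0 + e.time);
    osc.stop(t0 + e.time + e.duration);

    // Visual rendering from the same data: x = time, y = pitch, size = amplitude.
    setTimeout(() => {
      ctx2d.fillStyle = `hsl(${(e.pitch * 7) % 360}, 70%, 50%)`;
      ctx2d.fillRect(
        e.time * 60,
        canvas.height - e.pitch * 4,
        e.duration * 60,
        4 + e.amplitude * 12
      );
    }, (e.time + 0.1) * 1000);
  }
}

// Example: a short stored "composition" rendered to sound and image together.
render([
  { time: 0.0, pitch: 60, amplitude: 0.8, duration: 0.5 },
  { time: 0.5, pitch: 67, amplitude: 0.5, duration: 0.5 },
  { time: 1.0, pitch: 72, amplitude: 0.6, duration: 1.0 },
]);
```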

Visual Music Samples/Tutorials