University of Redlands: application

Acoustic Music

Acoustic sample 1

Variazioni Su AlDoClEmenti is a piece for chamber orchestra written using generative algorithms designed in OpenMusic.

variazioni

First Movement: Invenzione

Acoustic sample 2

He-li-Be-B is a saxophone quartet, performed by the Radnofsky Quartet in 2011.

Boron (IV Movement)

Commercial and Library Music

Soundtrack for “WalkExperience” iPhone App

Traffic: library music

Multimedia

Audiovisual Sample 1

Ibuprofen

Ibuprofen is an audiovisual piece for 360° projection and 16-channel audio, submitted to Cube Fest at Virginia Tech. The sample posted below is an excerpt rendered for a flat screen and stereo audio. The audiovisual objects in the video are designed to rotate around the audience, both as visuals and as audio. This work is a good representation of the current state of my research: its interest in the audiovisual binding of objects, and in how that binding affects the perception of the musical texture.

Kalectroscope

ABAB

Writing

The OM Composer Book vol.3 (chapter)

Perspectives of New Music:

Article about Aldo Clementi’s algorithmic technique

https://www.jstor.org/stable/10.7757/persnewmusi.54.1.0137

Projects

Group Flow: interactive meditation

Recently I joined forces with Mikey Siegel's Consciousness Hacking team to work on Group Flow. I developed sonification patches, mostly based on granular techniques.
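The Group Flow patches are written in Max and are not reproduced here; as a rough illustration of the granular approach, below is a minimal browser sketch (Web Audio, TypeScript). The mapping from a scalar input to grain density and pitch, and the sample file name, are hypothetical stand-ins, not the actual patches.

```typescript
// Minimal granular sonification sketch (Web Audio / TypeScript).
// A scalar input in [0, 1] -- standing in for a biosignal -- is
// mapped to grain density and playback rate; every mapping here
// is an illustrative assumption, not the actual Group Flow patch.

const ctx = new AudioContext();

async function loadBuffer(url: string): Promise<AudioBuffer> {
  const data = await (await fetch(url)).arrayBuffer();
  return ctx.decodeAudioData(data);
}

// Play one 100 ms grain from a random position in the buffer.
function playGrain(buffer: AudioBuffer, when: number, rate: number) {
  const src = ctx.createBufferSource();
  src.buffer = buffer;
  src.playbackRate.value = rate;

  const env = ctx.createGain(); // short ramp envelope to avoid clicks
  env.gain.setValueAtTime(0, when);
  env.gain.linearRampToValueAtTime(0.5, when + 0.02);
  env.gain.linearRampToValueAtTime(0, when + 0.1);

  src.connect(env).connect(ctx.destination);
  const offset = Math.random() * Math.max(0, buffer.duration - 0.1);
  src.start(when, offset, 0.1);
}

// Map the input value to a grain cloud: denser and higher as it rises.
function sonify(buffer: AudioBuffer, input: number, seconds = 2) {
  const density = 5 + input * 45;  // grains per second
  const rate = 0.5 + input * 1.5;  // playback rate (pitch)
  for (let i = 0; i < density * seconds; i++) {
    playGrain(buffer, ctx.currentTime + i / density, rate);
  }
}

// "source.wav" is a placeholder sample.
loadBuffer("source.wav").then((buf) => sonify(buf, 0.7));
```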

CINC: Computer Interface for Neuro Composition

Presentation at Massachusetts Institute of Technology

Instructional videos

Application Auburn University

Acoustic Music

In my acoustic writing I develop generative algorithms that I tweak and transcribe into acoustic scores. These processes are described in detail in the IRCAM publication “The OM Composer Book vol. 3” (click below to read).

OMComposer
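The patches behind these pieces are visual OpenMusic programs, so their code does not appear here. As a flavor of the general workflow, the sketch below is a hypothetical TypeScript analogue: a small self-similar transformation iterated over a seed motif, producing a note list meant for tweaking and manual transcription. The motif and the transformation rule are illustrative assumptions, not the algorithm behind Variazioni Su AlDoClEmenti.

```typescript
// Hypothetical sketch of a generative process in the spirit of an
// OpenMusic patch; NOT the actual algorithm behind the piece.
// A seed motif is expanded by projecting it onto its own pitches,
// and the resulting note list is printed for manual transcription.

type Note = { pitch: number; duration: number }; // MIDI pitch, beats

const seed: Note[] = [
  { pitch: 60, duration: 1 },
  { pitch: 63, duration: 0.5 },
  { pitch: 67, duration: 0.5 },
];

// One generation: transpose the whole motif onto each of its own
// pitches, multiplying durations -- a simple self-similar expansion.
function generation(motif: Note[]): Note[] {
  return motif.flatMap((parent) =>
    motif.map((child) => ({
      pitch: parent.pitch + (child.pitch - motif[0].pitch),
      duration: parent.duration * child.duration,
    }))
  );
}

let material = seed;
for (let i = 0; i < 2; i++) material = generation(material);
console.log(material); // 27 notes ready to tweak and transcribe
```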

Acoustic sample 1

Variazioni Su AlDoClEmenti is a piece for chamber orchestra written using generative algorithms designed in OpenMusic.

variazioni

First Movement: Invenzione

Acoustic sample 2

He-li-Be-B is a saxophone quartet, performed by the Radnofsky Quartet in 2011.

Boron (IV Movement)

Commercial and Library Music

Soundtrack for “WalkExperience” iPhone App

Traffic: library music

Multimedia

Audiovisual Sample 1

Ibuprofen

Ibuprofen is an audiovisual piece for 360° projection and 16-channel audio, submitted to Cube Fest at Virginia Tech. The sample posted below is an excerpt rendered for a flat screen and stereo audio. The audiovisual objects in the video are designed to rotate around the audience, both as visuals and as audio. This work is a good representation of the current state of my research: its interest in the audiovisual binding of objects, and in how that binding affects the perception of the musical texture.


Postdoc Zurich University of the Arts

Nonlinear Sequencing

Download the zip of the Max project containing examples of nonlinear sequencers. As this is a proof of concept for the nonlinear sequencer idea, the examples use simple square-wave oscillators and bare envelopes, with no sound treatment or special attention to the quality of the generated sound, so that the rhythmic qualities of the patches can be observed on their own.

download the patches here:

https://michelezaccagnini.blog/wp-content/uploads/2019/10/Nonlinear_Sequencer.zip
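For readers who cannot open the Max patches, here is a minimal sketch (Web Audio, TypeScript) of one way to read "nonlinear sequencing": inter-onset times driven by a nonlinear function rather than a fixed grid, voiced with the same bare square oscillators and envelopes the patches use. The logistic map is my assumption for illustration; the actual patches may use a different mechanism.

```typescript
// Hypothetical "nonlinear sequencer" sketch (Web Audio / TypeScript):
// inter-onset times are produced by iterating the logistic map, so
// the rhythm is deterministic yet never settles onto a grid.

const ctx = new AudioContext();

function beep(when: number, freq: number, dur: number) {
  const osc = ctx.createOscillator();
  osc.type = "square";
  osc.frequency.value = freq;

  const env = ctx.createGain();
  env.gain.setValueAtTime(0.2, when);
  env.gain.exponentialRampToValueAtTime(0.001, when + dur);

  osc.connect(env).connect(ctx.destination);
  osc.start(when);
  osc.stop(when + dur);
}

function run(steps = 32, r = 3.9) {
  let x = 0.31;                        // logistic-map state
  let t = ctx.currentTime + 0.1;
  const pitches = [220, 277, 330, 440];
  for (let i = 0; i < steps; i++) {
    x = r * x * (1 - x);               // chaotic for r = 3.9
    beep(t, pitches[i % pitches.length], 0.1);
    t += 0.08 + x * 0.4;               // nonlinear inter-onset interval
  }
}

run();
```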

Audiovisual Examples

PLEASE NOTE: these examples are not created using nonlinear sequencing. They are posted here to offer samples of my audiovisual work and of its general aesthetic and cross-modal qualities.

Kalectroscope


ABAB

Ibuprofen

Ibuprofen is an audiovisual piece for 360° projection and 16-channel audio, submitted to Cube Fest at Virginia Tech. The sample posted below is an excerpt rendered for a flat screen and stereo audio. The audiovisual objects in the video are designed to rotate around the audience, both as visuals and as audio. This work is a good representation of the current state of my research: its interest in the audiovisual binding of objects, and in how that binding affects the perception of the musical texture.

Writing

The OM Composer Book vol.3 (chapter)

Perspectives of New Music:

Article about Aldo Clementi’s algorithmic technique

https://www.jstor.org/stable/10.7757/persnewmusi.54.1.0137

Projects

Group Flow: interactive meditation

Recently I joined forces with Mikey Siegel's Consciousness Hacking team to work on Group Flow. I developed sonification patches, mostly based on granular techniques.

CINC: Computer Interface for Neuro Composition

Presentation at Massachusetts Institute of Technology

Instructional videos

Splice Festival

Michele Zaccagnini
Assistant Professor, University of Virginia
Email: mz3vq@virginia.edu
Phone: 857-210-9992

Bio: http://music.virginia.edu/content/zaccagnini

Submission type:
New piece for Splice Ensemble

Submission title:
“For My Sins”

Equipment I will provide:
Laptop
Pedal interface

List of equipment provided by Splice Ensemble:
Microphones (1 for trumpet, 1 for piano, 3-4 for drum set)
Speaker system (quad or 5.1)

Program notes:
“For My Sins” (August 2019) is a piece for trumpet, piano, drum set and electronics.
The full title of the piece is actually “For My Sins You Now Think Hellish Excruciation Seems Indeed Suited,” an acrostic that spells out the technique I used to generate the harmonies and the pitch material in general. The rhythmic organization draws on my research and my obsession with sound densities, repetitiveness, and hypnotism.
My approach to composition is deeply visual: I think of the movements of the piece as canvases with no beginning or end, simply sections of texture containing no telos, no goal.
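Since the title's acrostic spells out the technique (FM synthesis), here is a minimal two-operator FM pair as a Web Audio/TypeScript sketch for readers unfamiliar with it; the carrier and modulator values are illustrative and not taken from the piece.

```typescript
// Minimal two-operator FM pair (Web Audio / TypeScript). The
// modulator's output, scaled by an "index" gain in Hz, is added to
// the carrier's frequency; all values are illustrative.

const ctx = new AudioContext();

const carrier = ctx.createOscillator();
carrier.frequency.value = 220; // Hz

const modulator = ctx.createOscillator();
modulator.frequency.value = 330; // 3:2 ratio, quasi-harmonic spectrum

const index = ctx.createGain();
index.gain.value = 150; // modulation depth in Hz

modulator.connect(index);
index.connect(carrier.frequency); // frequency modulation
carrier.connect(ctx.destination);

modulator.start();
carrier.start();
```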

Score First Movement

https://michelezaccagnini.blog/wp-content/uploads/2019/09/ForMySins_mov1.pdf

Audio mock-up First Movement (with electronics)

Score Second Movement

https://michelezaccagnini.blog/wp-content/uploads/2019/09/ForMySins_mov2.pdf

Audio mock-up Second Movement (with electronics)

Piece description and collaboration aspects.

For My Sins

“For My Sins” is a piece for trumpet, piano, drum set, and electronics, in two movements written specifically for the Splice Ensemble.
The electronics are a combination of fixed media and live processing: resonant models contained in a Max patch are triggered by envelope followers, and different resonant models are swapped in as the piece proceeds.
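As a rough illustration of this signal path (the actual Max patch is not reproduced here), the sketch below follows the description in the mock-up notes: an envelope follower on the live input scales a bank of noise-excited resonant band-pass filters. Filter frequencies, Q values, and scalings are my own illustrative assumptions.

```typescript
// Sketch of the resonant-model signal path (Web Audio / TypeScript):
// an envelope follower on the live input scales a bank of resonant
// band-pass filters excited by digital noise, as in the mock-up.

const ctx = new AudioContext();

async function setup() {
  // Live input (trumpet/piano/drums in the actual piece).
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const input = ctx.createMediaStreamSource(stream);

  // Envelope follower: RMS over the analyser's time-domain buffer.
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 1024;
  input.connect(analyser);
  const buf = new Float32Array(analyser.fftSize);

  // Looped noise buffer exciting the "resonant model".
  const noiseBuf = ctx.createBuffer(1, ctx.sampleRate, ctx.sampleRate);
  const ch = noiseBuf.getChannelData(0);
  for (let i = 0; i < ch.length; i++) ch[i] = Math.random() * 2 - 1;
  const noise = ctx.createBufferSource();
  noise.buffer = noiseBuf;
  noise.loop = true;

  // Bank of sharp band-pass filters standing in for the model.
  const out = ctx.createGain();
  out.gain.value = 0;
  for (const freq of [233, 466, 932, 1397]) {
    const bp = ctx.createBiquadFilter();
    bp.type = "bandpass";
    bp.frequency.value = freq;
    bp.Q.value = 60;
    noise.connect(bp).connect(out);
  }
  out.connect(ctx.destination);
  noise.start();

  // Poll the follower and map input RMS onto the resonator gain.
  setInterval(() => {
    analyser.getFloatTimeDomainData(buf);
    let sum = 0;
    for (const s of buf) sum += s * s;
    const rms = Math.sqrt(sum / buf.length);
    out.gain.setTargetAtTime(rms * 4, ctx.currentTime, 0.05);
  }, 30);
}

setup();
```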

Collaboration:

I plan to collaborate with the ensemble on three aspects of the piece, presented here in order of importance:

  1. Electronics/acoustic balance
  2. Sound spatialization
  3. Visualization (optional)

I would like to collaborate with the ensemble on finding the correct sound balance, to achieve an optimal blend between the instruments and the electronic parts.
This piece makes heavy use of resonant models of acoustic sounds to generate the electronic part. The electronics “shadow” the instruments by adding acoustic re-synthesis and resonances.
The resonant models are drawn from trumpet and timpani sounds, so that the electronics share some of the spectral character of the actual instruments.
A simple way to improve the homogeneity between the electronics and the acoustic sound would be to rebuild the resonant models from samples of the performers' actual instruments.
This has the advantage of allowing the initial collaboration to happen remotely.
More specifically, in the submitted mock-up the resonant models filter digital noise; ideally, that noise should be at least partially replaced by the instruments' own sounds.
As in any situation that involves live filtering of an instrument's own sound, feedback is a likely outcome: I plan to balance the instruments' direct feed with prerecorded sounds to avoid feedback issues.
Ultimately, the direct feed can be avoided altogether, aside from the envelope following.

The second aspect of the collaboration would be to spatialize both the electronic and the acoustic sounds. While I would like the instruments to remain mostly acoustic, I would also like some of their sound to come out of the speakers.
I plan to spatialize all sounds in either a quad setup or 5.1. The submitted mock-up has no spatialization whatsoever, and I believe spatialization would greatly improve the outcome of the piece.
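As a generic illustration of the quad routing (not the patch I would use with the ensemble), the sketch below distributes a mono source over four speakers with equal-power gains, in Web Audio/TypeScript.

```typescript
// Equal-power quad panner sketch (Web Audio / TypeScript). A mono
// source is distributed over four speakers placed every 90 degrees;
// a generic illustration, not the actual spatialization patch.

const ctx = new AudioContext();
ctx.destination.channelCount = Math.min(4, ctx.destination.maxChannelCount);

const merger = ctx.createChannelMerger(4);
merger.connect(ctx.destination);

// Returns a setter that pans `source` to an azimuth (radians).
function quadPan(source: AudioNode) {
  const speakerAngle = [0, 0.5, 1, 1.5].map((x) => x * Math.PI);
  const gains = speakerAngle.map((_, chan) => {
    const g = ctx.createGain();
    source.connect(g);
    g.connect(merger, 0, chan);
    return g;
  });
  return (theta: number) => {
    gains.forEach((g, i) => {
      let d = Math.abs(theta - speakerAngle[i]) % (2 * Math.PI);
      if (d > Math.PI) d = 2 * Math.PI - d; // shortest angular distance
      g.gain.value = d < Math.PI / 2 ? Math.cos(d) : 0; // equal power
    });
  };
}

// Example: rotate a test tone slowly around the audience.
const osc = ctx.createOscillator();
const setAngle = quadPan(osc);
osc.start();
let angle = 0;
setInterval(() => setAngle((angle += 0.05) % (2 * Math.PI)), 50);
```

Since cos²(d) + cos²(π/2 − d) = 1, the summed power stays constant as a sound travels between adjacent speakers.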

I would like to propose a third element of collaboration: a visual component for the piece. This would happen only if the concert setup allows for a projector.
Please know that this is not a crucial aspect of my composition: the piece is fully autonomous from its visuals. If it is technically feasible as far as the concert setup is concerned,
I would create a second Max patch that generates a real-time visualization to be projected during the performance.

Composing Visual Music Workshop

The “Composing Visual Music” workshop focuses on the relation between music creation and its visualization from the perspective of the composer. As composers, can we create audiovisual pieces that truly exist at the boundary of the aural and visual senses?

Let us first stipulate that in multimedia works there can be two general kinds of connection between sound and visuals. I call a connection “high level” when it is fairly abstract, as when a composer suggests to the audience that the inspiration for the piece they are about to hear is a particular painting, or, simply, when a film’s soundtrack musically suggests the emotion a scene is supposed to convey. A “low level” connection, on the other hand, aims at directly translating musical events that take place in time into visual ones that take place in (virtual) space. In this latter case, the two elements are robustly linked at a perceptual level. “Composing Visual Music” explores this latter kind of connection.

My approach to visualizing musical textures draws from my practice as an algorithmic composer. Algorithmic composition is the practice of establishing a set of rules for a certain process to take place, given an initial set of inputs. This approach is intuitively well suited to creating audiovisual textures, since it allows a composition to be “stored” as a multidimensional set of data. The data can then be rendered to sound and image by simply (or not so simply) mapping it to aural and visual parameters.

The approach to visualizing music that I will illustrate in the workshop differs from the most popular practice in vogue today. Today’s audiovisual aficionado will more often than not find experiments in music visualization based on so-called “audio-reactive” algorithms, which render visuals from more or less complex spectral analysis. One aim of my workshop is to illustrate the major drawbacks of the audio-reactive approach when the goal of the visualization is a robust representation of the music. Given the inherent complexity of sound, audio-reactive algorithms can severely limit the possibilities of representing musical events in a discrete manner, especially in the case of polyphonic music. The non-audio-reactive approach that I will describe in the workshop is, on the contrary, better suited to a detailed representation of the musical composition itself rather than of its resulting aural output.

Incidentally, this approach is also computationally less expensive than the audio-reactive one, since no processing power has to be dedicated to sound analysis. This is particularly interesting when thinking of current developments in music making: a less computationally expensive process means the results can be more ubiquitous, as they can be deployed through programming environments such as the Web Audio and WebGL APIs, which run in the browser and are not suited to complex DSP algorithms.

I believe that composing audiovisual pieces that create a robust connection between two perceptual realms is a nascent and exciting field of exploration for the modern composer; this workshop is a way of exploring possible developments in the field.
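To make the contrast with audio-reactive rendering concrete, here is a minimal sketch (TypeScript, 2D canvas) of the non-audio-reactive approach the workshop describes: the visuals are drawn directly from the same event matrix that produces the sound, so no FFT or onset detection is involved. The event format and the pitch-to-height and amplitude-to-size mappings are illustrative assumptions.

```typescript
// Non-audio-reactive visualization sketch (TypeScript / 2D canvas).
// The composition lives in a matrix of events; the visuals are drawn
// from that data directly, with no audio analysis step.

type Ev = { onset: number; pitch: number; dur: number; amp: number };

// A tiny "composition matrix": onset (s), MIDI pitch, duration (s),
// amplitude (0..1). The same data can drive the sound synthesis.
const score: Ev[] = [
  { onset: 0.0, pitch: 60, dur: 1.0, amp: 0.9 },
  { onset: 0.5, pitch: 67, dur: 0.5, amp: 0.6 },
  { onset: 1.0, pitch: 63, dur: 1.5, amp: 0.8 },
];

const canvas = document.querySelector("canvas")!;
const g = canvas.getContext("2d")!;
const t0 = performance.now() / 1000;

function draw() {
  const t = performance.now() / 1000 - t0;
  g.clearRect(0, 0, canvas.width, canvas.height);
  for (const e of score) {
    if (t < e.onset || t > e.onset + e.dur) continue; // not sounding
    const phase = (t - e.onset) / e.dur;                 // 0..1 in note
    const y = canvas.height * (1 - (e.pitch - 48) / 36); // pitch -> height
    const radius = 10 + e.amp * 30;                      // amp -> size
    g.fillStyle = `hsl(200, 80%, ${80 - phase * 50}%)`;  // decay -> darker
    g.beginPath();
    g.arc(canvas.width / 2, y, radius, 0, 2 * Math.PI);
    g.fill();
  }
  requestAnimationFrame(draw);
}

draw();
```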

Visual Music Samples/Tutorials

Student work sampler

Audiovisual Environments Spring 2019

Click the first link below to view the latest example of one of my students' final projects, and find the patch in the second link.

https://videos.files.wordpress.com/8zQ4p0pw/clifford_chong_final_project_comp_hd.mp4

https://michelezaccagnini.blog/wp-content/uploads/2019/05/Final-Project.zip

Audiovisual Environments Fall 2019

Max/Jitter patches

Project 3

Project2Finn

Clifford Assignment Beap

Postdoc Ohio State

Audiovisual

ABAB (excerpts)

Performed at The Bridge in Charlottesville, VA, on May 10, 2019.

Ibuprofen

Ibuprofen is an audiovisual piece for 360° projection and 16-channel audio, submitted to Cube Fest at Virginia Tech. The sample posted below is an excerpt rendered for a flat screen and stereo audio. The audiovisual objects in the video are designed to rotate around the audience, both as visuals and as audio. This work is a good representation of the current state of my research: its interest in the audiovisual binding of objects, and in how that binding affects the perception of the musical texture.

Variazioni su Space Invader

for two cellos, electronics and visuals, mov.1

This video constitutes the audiovisual component of my piece for two cellos, visuals, and electronics, commissioned by the Nebula Ensemble and performed in Denver, Colorado, on April 27. In this work I focus on creating a tight-knit, “low level” perceptual connection between the aural and visual elements. While the sounds are timbrally manipulated re-syntheses of prerecorded cello sounds, the visuals are careful representations of the sounds in their innermost detail. To achieve this kind of nuance, I store and process the “composition” as a set of matrices that can then be rendered as sound and visuals simultaneously; this approach allows me to avoid the cumbersome and expensive routine of analyzing sound, usually a required step in most audio-visual pieces.

Publications

“The OM Composer Book vol. III”

https://www.amazon.com/OM-Composers-book-3/dp/2752102836

https://michelezaccagnini.blog/wp-content/uploads/2017/11/OMComposer_Zaccagnini_final.pdf

Tutorials

Interactive

Recently I joined forces with Mikey Siegel's Consciousness Hacking team to work on Group Flow. I developed sonification patches, mostly based on granular techniques.

Application U of Wisconsin Milwaukee

Multimedia

Introduction: my approach to programming audiovisuals

My audiovisual work draws on a series of tools and practices I have developed, and it differs from other kinds of audio-visual work in fundamental ways. As I was zeroing in on the issue of a “low level” connection between music and image, I realized that the common approach to audio visualization, usually called “audio-reactive,” was excessively cumbersome and computationally expensive: audio-reactive algorithms can severely limit the possibilities of representing musical events in a discrete manner, especially in the case of polyphonic music. Since my approach to composition is algorithmic at its core, I can bypass the audio-reactive step entirely and feed matrices of raw musical data, gathered from my algorithms, directly into the rendering stage.
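A minimal sketch of what feeding a matrix of raw musical data into rendering can look like on the audio side (Web Audio, TypeScript): the event matrix is scheduled directly on the audio clock, and the identical data could drive the visuals, so the resulting sound is never analyzed. The event format and the plain oscillator voice are illustrative assumptions.

```typescript
// Rendering an event matrix directly to sound (Web Audio / TypeScript):
// the matrix is scheduled on the audio clock; no analysis of the
// resulting audio is ever performed.

type Ev = { onset: number; pitch: number; dur: number; amp: number };

const ctx = new AudioContext();

function render(score: Ev[]) {
  const t0 = ctx.currentTime + 0.1;
  for (const e of score) {
    const osc = ctx.createOscillator();
    osc.frequency.value = 440 * 2 ** ((e.pitch - 69) / 12); // MIDI -> Hz

    const env = ctx.createGain();
    env.gain.setValueAtTime(e.amp * 0.3, t0 + e.onset);
    env.gain.linearRampToValueAtTime(0, t0 + e.onset + e.dur);

    osc.connect(env).connect(ctx.destination);
    osc.start(t0 + e.onset);
    osc.stop(t0 + e.onset + e.dur);
  }
}

render([
  { onset: 0.0, pitch: 60, dur: 1.0, amp: 0.9 },
  { onset: 0.5, pitch: 67, dur: 0.5, amp: 0.8 },
  { onset: 1.0, pitch: 63, dur: 1.5, amp: 0.7 },
]);
```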

Audiovisual Sample 1

Ibuprofen

Ibuprofen is an audiovisual piece for 360° projection and 16-channel audio, submitted to Cube Fest at Virginia Tech. The sample posted below is an excerpt rendered for a flat screen and stereo audio. The audiovisual objects in the video are designed to rotate around the audience, both as visuals and as audio. This work is a good representation of the current state of my research: its interest in the audiovisual binding of objects, and in how that binding affects the perception of the musical texture.

Audiovisual Sample 2

Variazioni su Space Invader (for two cellos, electronics and visuals, mov.1)

This video constitutes the audiovisual component of my most recent piece for two cellos, visuals, and electronics. It was commissioned by the Nebula Ensemble and is currently in rehearsals for the upcoming performance in Denver, Colorado, on April 27. In this work I focus on creating a tight-knit, “low level” perceptual connection between the aural and visual elements. While the sounds are timbrally manipulated re-syntheses of prerecorded cello sounds, the visuals are careful representations of the sounds in their innermost detail. To achieve this kind of nuance, I store and process the “composition” as a set of matrices that can then be rendered as sound and visuals simultaneously; this approach allows me to avoid the cumbersome and expensive routine of analyzing sound, usually a required step in most audio-visual pieces.


Acoustic Music

In my acoustic writing I develop generative algorithms that I tweak and transcribe into acoustic scores. These processes are described in detail in the IRCAM publication “The OM Composer Book vol. 3” (click below to read).

OMComposer

Acoustic sample 1

Variazioni Su AlDoClEmenti is a piece for chamber orchestra written using generative algorithms designed in OpenMusic.

variazioni

First Movement: Invenzione

Second Movement: Sinfonia

Acoustic sample 2

He-li-Be-B is a saxophone quartet, performed by the Radnofsky Quartet in 2011.

Boron (IV Movement)


Interactive

Presentation: CINC


Group Flow (by Mikey Siegel)

Recently I joined forces with Mikey Siegel's Consciousness Hacking team to work on Group Flow. I developed sonification patches, mostly based on granular techniques.

Video Tutorials

Composing Visual Music

Selected Publications

Perspectives of New Music 54, no. 1

ZaccagniniPerspectiveOfNewMusic

Software

GitHub repository for mic_externals

mic_externals

Application U of Hartford

Multimedia

Introduction: my approach to programming audiovisuals

My audiovisual work draws on a series of tools and practices I have developed, and it differs from other kinds of audio-visual work in fundamental ways. As I was zeroing in on the issue of a “low level” connection between music and image, I realized that the common approach to audio visualization, usually called “audio-reactive,” was excessively cumbersome and computationally expensive: audio-reactive algorithms can severely limit the possibilities of representing musical events in a discrete manner, especially in the case of polyphonic music. Since my approach to composition is algorithmic at its core, I can bypass the audio-reactive step entirely and feed matrices of raw musical data, gathered from my algorithms, directly into the rendering stage.

Audiovisual Sample 1

Ibuprofen

Ibuprofen is an audiovisual piece for 360° projection and 16-channel audio, submitted to Cube Fest at Virginia Tech (currently under review). The sample posted below is an excerpt rendered for a flat screen and stereo audio. The audiovisual objects in the video are designed to rotate around the audience, both as visuals and as audio. This work is a good representation of the current state of my research: its interest in the audiovisual binding of objects, and in how that binding affects the perception of the musical texture.

Audiovisual Sample 2

Variazioni su Space Invader (for two cellos, electronics and visuals, mov.1)

This video constitutes the audiovisual component of my most recent piece for two cellos, visuals, and electronics. It was commissioned by the Nebula Ensemble and is currently in rehearsals for the upcoming performance in Denver, Colorado, on April 27. In this work I focus on creating a tight-knit, “low level” perceptual connection between the aural and visual elements. While the sounds are timbrally manipulated re-syntheses of prerecorded cello sounds, the visuals are careful representations of the sounds in their innermost detail. To achieve this kind of nuance, I store and process the “composition” as a set of matrices that can then be rendered as sound and visuals simultaneously; this approach allows me to avoid the cumbersome and expensive routine of analyzing sound, usually a required step in most audio-visual pieces.


Acoustic Music

In my acoustic writing I develop generative algorithms that I tweak and transcribe into acoustic scores. These processes are described in detail in the IRCAM publication “The OM Composer Book vol. 3” (click below to read).

OMComposer

Acoustic sample 1

Variazioni Su AlDoClEmenti is a piece for chamber orchestra written using generative algorithms designed in OpenMusic.

variazioni

First Movement: Invenzione

Second Movement: Sinfonia

Teaching

Course websites:

Technosonics (UVa Fall 2018)

Audiovisual Environments (UVa Fall 2018)

Composing for Film (UVa Fall 2018)

Teaching evaluations

Electro-acoustic Composition, Music Technology

Music Theory

Music Business



Interactive

Presentation: CINC


Group Flow (by Mikey Siegel)

Recently I joined forces with Mikey Siegel's Consciousness Hacking team to work on Group Flow. I developed sonification patches, mostly based on granular techniques.

Video Tutorials

Composing Visual Music

Selected Publications

Perspectives of New Music 54, no. 1

ZaccagniniPerspectiveOfNewMusic

Software

GitHub repository for mic_externals

mic_externals