10:00 Keynote, Cheever Hall
Sonographic Interpretation — Greg Hunter
Technological and cognitive techniques for mixing and mastering music.
11:30 Morning Session, Cheever Hall
Ideas and techniques behind Carbonfeed, an interactive Internet composition — Jon Bellona
The talk will discuss the carbon impacts of digital content, the evolution of #Carbonfeed as installation and composition, and methods for composing with an organically random input (i.e. Twitter). Some questions that will be addressed: how do we impose finite form and structure upon a continuous, fluctuating medium? What are some approaches and strategies for mapping Twitter data? How was Kyma used? Lastly, the talk will point to resources and code for getting Twitter’s API working with your own Kyma system.
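As a rough illustration of that last point (independent of the presenter’s own code), the sketch below assumes a Kyma system listening for OSC at a made-up host and port, uses the python-osc package, and substitutes a placeholder fetch_recent_tweets() for whatever Twitter client is actually used; the OSC address and mapping are likewise invented.

```python
# Minimal sketch (not the presenter's code): poll a tweet source and send one
# derived control value to a Kyma system over OSC.
import time
from pythonosc.udp_client import SimpleUDPClient

KYMA_HOST, KYMA_PORT = "192.168.1.100", 8000   # hypothetical Paca(rana) address

def fetch_recent_tweets():
    """Placeholder standing in for a real Twitter API client."""
    return ["example tweet about carbon", "another short message"]

client = SimpleUDPClient(KYMA_HOST, KYMA_PORT)
while True:
    tweets = fetch_recent_tweets()
    # One simple mapping: normalized average tweet length drives a 0..1 control.
    density = min(1.0, sum(len(t) for t in tweets) / (280.0 * max(1, len(tweets))))
    # Hypothetical OSC address; how it reaches a Kyma EventValue depends on the
    # Sound and VCS configuration.
    client.send_message("/tweetdensity", density)
    time.sleep(5)                                # poll every few seconds
```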
Controlled Feedback — Michael Wittgraf
Since their invention, microphones and amplified sound have had a tumultuous relationship. Feedback has been both the enemy and the ally of musicians, who simultaneously seek to eliminate it and use it as a musical tool. Controlling feedback in Kyma yields rich musical sounds, and affords composers a delightfully unpredictable but rewarding array of audio choices. This demonstration begins with, and expands on, techniques used in the author’s composition Microphun for microphone and Kyma from last year. Topics include limiters, gates, equalization, microphone placement, input and output levels, pitch shifting, and filters, among others.
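As a hedged aside on why a limiter in the loop matters, here is a toy simulation (not drawn from Microphun; the loop delay and gain are invented values) of a feedback path that would grow without bound if the soft limiter were removed:

```python
# Toy model of a feedback loop tamed by a soft limiter (illustrative only;
# the actual demonstration uses microphones and Kyma, not a simulation).
import numpy as np

sr = 48000                      # sample rate
delay = int(0.02 * sr)          # 20 ms "room" delay between speaker and mic
gain = 1.2                      # loop gain > 1: would blow up without limiting
n = sr                          # simulate one second

buf = np.zeros(n + delay)
buf[0] = 1e-3                   # tiny impulse "leaking" into the mic
for i in range(n):
    fed_back = gain * buf[i]            # signal re-entering the loop
    limited = np.tanh(fed_back)         # soft limiter keeps it bounded near +/-1
    buf[i + delay] += limited           # arrives again after the loop delay

print("peak level with limiter:", np.max(np.abs(buf)))   # stays near 1.0
```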
14:00 Afternoon Session I, Cheever Hall
Kyma Open Lab — Carla Scaletti, Kurt Hebel & others
15:30 Afternoon Session II, Cheever Hall
Still Harmless [BASS]ically, a roving test platform for Kyma development, 1996-2015 — Brian Belet
I have used the composition environment Still Harmless [BASS]ically (formerly [BASS]ically Harmless) since 1996 to test my Kyma algorithms in live performance. The environment is for electric bass and Kyma, with me performing as bassist. Many sound structures we devise in the solitude of our studios (often late at night, after way too much coffee – or whisky) make sense at the time, but do not always translate effectively when given to live performers on stage. I use this constantly evolving composition to test new algorithms I devise (or adapt) within Kyma. The ‘test’ is a live performance where everything needs to work as planned and without the aid of a computer operator. I am specifically interested in those algorithms where the music from the bass is the sole controlling data input to Kyma. This Demo session presents a subset of those algorithms that I find most successful in this context.
Successful (and not so successful) live performance paradigms using Kyma with the ensemble SoundProof — SoundProof
SoundProof has used Kyma exclusively since its formation in 2009, including two successful concert tours in 2011 and 2012. Our repertoire includes music composed specifically for us, and also our adaptations of earlier compositions. The latter category includes both fixed and interactive electronic environments, often originating in non-Kyma applications. This workshop will present what works well (and also what does not work so well) from the ensemble’s perspective. A specific focus will be what approaches and practices composers may want to consider when composing a new work for this (or any other) ensemble.
17:00 Concert, Black Box Theatre
Frontier — Paul Turowski. Paul Turowski, Kyma & game software; SoundProof
In Frontier, any number of musical performers improvise while simultaneously influencing an avatar in a digital world. Sounds made by the performers are picked up via microphone and particular features are extracted using real-time analysis in Pure Data. These features are mapped to game functions like avatar movement and interaction with NPCs (non-player characters), which in turn affect the generation of electronic sound cues in Kyma via Open Sound Control. Primarily, the game/score software—written in C++ using openFrameworks—serves to provide a dynamic framework for improvisers while also incorporating ancillary game goals such as exploration, survival, and high score attainment.
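The actual analysis and game code live in Pure Data and C++/openFrameworks; purely to sketch the mapping idea described above, the fragment below uses Python with invented feature thresholds, avatar logic, and OSC addresses.

```python
# Sketch of the mapping layer only: extracted audio features steer an avatar,
# and the avatar state triggers sound cues over OSC. All names, thresholds,
# and addresses are invented for illustration.
from pythonosc.udp_client import SimpleUDPClient

kyma = SimpleUDPClient("192.168.1.100", 8000)   # hypothetical Kyma OSC target
avatar = {"x": 0, "y": 0}

def on_features(amplitude: float, pitch_hz: float) -> None:
    """Handle one frame of features extracted from a performer's signal."""
    if amplitude > 0.5:                 # loud playing nudges the avatar forward
        avatar["x"] += 1
    if pitch_hz > 440.0:                # high register steers it upward
        avatar["y"] -= 1
    zone = avatar["x"] // 10            # coarse position selects a cue
    kyma.send_message("/frontier/cue", [zone, amplitude])

# Example: feed in a few fake analysis frames.
for amp, pitch in [(0.7, 220.0), (0.2, 660.0), (0.9, 880.0)]:
    on_features(amp, pitch)
```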
Drop a Quark… — Rich O’Donnell. Rich O’Donnell, percussion & Kyma; Anna Lum, poet
Drop a quark
In any thought pond in space
Watch the universe quiver
Our quarks are Art thoughts to stimulate your ears and eyes
Returning to Unknown Worlds — Michael Monhart & Scott Miller & Scott Wiessinger. Scott Miller, Kyma; Michael Monhart, saxophone; Scott Wiessinger, video
Returning to Unknown Worlds incorporates improvisation and interactive processing with a work of live cinema (Wiessinger), involving an unknown visual structure with a “secret” narrative, known only to the film creator. Using Kyma, Scott Miller has created an interactive and dynamic audio processing structure based on the concept of an orrery – a mechanical model of the solar system. Michael Monhart, on saxophone and Kyma-processed sounds, and Scott Miller create a soundtrack to the cosmic drama of the film.
Pairs — Joel Chadabe & Cindy Stillwell. Joel Chadabe, Franz Danksagmüller, Scott Miller, Kyma; SoundProof; Cindy Stillwell, video
This first performance of a new composition is about interactions between a pair of people making sounds that relate to one another. In this performance, one of the people plays an acoustic instrument while the other plays a Kyma-based instrument. In any performance, any number of pairs may perform simultaneously, but each pair should interact with the other pairs. The structure can be thought of as a model of life: we each interact with a friend while at the same time interacting with our surroundings.
AQULAQUTAQU — Madison Heying & Kristin Erickson in collaboration with Matthew Galvin & David Kant. Madison Heying and Kristin Erickson, voice & Kyma; Matthew Galvin, voice & video; David Kant, voice
BACBAABACBA and NABYNAAEEBY are spaceling inventors from the planet AQULAQUTAQU. Their trans-dimensional algorithms decode vibratory earth transmissions, providing the means for super-sonic space travel as well as diverting and subversive musical activity. These inventions enable BACBA and NABY to escape the tyranny of their home-planet’s backwards ways and journey to Earth. AQULAQUTAQU, the operetta is KUKURIKSSUK’s apocryphal retelling of this classic AQU myth. QU IKKAKAAI MEE FU. UNABADAMEEN!
The sonic environment is actualized through the auralization of generative processes including fractals, L-systems, and cellular automata. The accompanying sonic bed includes references to popular and traditional music, as well as live instruments. The text is derived from L-system parallel rewriting algorithms.
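For readers unfamiliar with the technique, here is a minimal example of L-system parallel rewriting, using generic textbook rules rather than the production rules of AQULAQUTAQU:

```python
# Generic L-system parallel rewriting (example rules, not the piece's own):
# every symbol in the string is rewritten simultaneously at each generation.
rules = {"A": "AB", "B": "A"}        # classic Lindenmayer example rules

def rewrite(axiom: str, generations: int) -> str:
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)   # rewrite all symbols at once
    return s

print(rewrite("A", 5))   # -> "ABAABABAABAAB"
```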
Biographies of Presenters and Performers