The Arpeggio-Detuning is rule-based audio software that produces sonic complexity based on pitch analysis of an acoustic zither input. The software was made for a zither with aged strings, a personal tuning system, and personal zither playing techniques. My interfaces in performance are the analysis of the zither input and an iPad touch screen. John Klima used Marmalade and Maximilian to implement my design specifications. I have been calibrating parameters and re-editing the digital sounds. The audio recordings at the end of this text show the resulting musical forms.
“From the Serialists to John Cage to the experimentalists of the post war generation, the project has been to deny the habitual or the hackneyed by developing techniques to restrain or condition the immediate process of choice…. With computer music … the distance comes for free but a distance which can only be viewed as problematical. The emphasis may in fact be shifting back towards a quest for immediacy in music.” [Joel Ryan 1991]
Whilst software operates on mathematical calculations, humans sample and process information according to attention, cognitive principles, and cross-sensorial context. The disparities are tangible in pitch analysis. For example, a sound may vary in pitch during attack, sustain and release, and nevertheless we group and hierarchise those pitch variations as we segregate the sound from the soundscape. In contrast, the software slices the spectrum according to a buffer size, which may lead overtones or resonance frequencies to be extracted as fundamentals. Conversely, an overtone can be intense due to the musical structure without being the fundamental according to the mathematical formulas.
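A minimal sketch of how such a disparity can arise, assuming a naive spectral-peak detector; the synthetic signal, sample rate and buffer size are my illustrative choices, not the software's actual analysis:

```python
import numpy as np

SR = 44100   # sample rate (Hz) -- illustrative assumption
N = 2048     # analysis buffer size -- illustrative assumption

t = np.arange(N) / SR
# A plucked-string-like tone: fundamental at 110 Hz with a louder
# second harmonic, as can happen when a string's overtone dominates.
signal = 0.4 * np.sin(2 * np.pi * 110 * t) + 1.0 * np.sin(2 * np.pi * 220 * t)

# Windowed magnitude spectrum of one buffer.
spectrum = np.abs(np.fft.rfft(signal * np.hanning(N)))
freqs = np.fft.rfftfreq(N, 1 / SR)

# Naive peak picking: the strongest bin lands near 220 Hz, so the
# overtone is reported as the "fundamental" -- the disparity above.
detected = freqs[np.argmax(spectrum)]
```

The buffer size also limits resolution here (44100 / 2048 ≈ 21.5 Hz per bin), so even a correctly picked peak is only an approximation of the sounding pitch.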
One can create complexity and unpredictability with purely rule-based software, particularly with an acoustic, audible input. I have been developing a compositional form that explores disparities between an acoustic and a digital sound output. Whereas the zither enables immediate control over the sonic outcome, software entails thresholds between the performer's control and the instrument's unpredictability, which can be manipulated so as to convey liveness and expression.
If the zither were plugged into a guitar tuner, the tuner would display a succession of different values for a single string or chord. The detected pitch is mapped to the closest tone or half tone. The process provides two streams of data: one corresponds to the extracted fundamental; the other to the nearest tone or half tone. Tones and half tones are mapped to prerecorded sounds. A single audio input detection causes the corresponding prerecorded sound to play back twice. The result is not repetitive because the second playback is detuned: the detuning value equals the difference between the detected frequency and the closest tone or half tone.
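The tone/half-tone mapping and the detuning value can be sketched as follows, assuming twelve-tone equal temperament referenced to A4 = 440 Hz (my assumption for illustration; the function name and reference pitch are not taken from the actual software):

```python
import math

def nearest_semitone(freq_hz):
    """Map a detected frequency to the closest equal-tempered semitone.

    Returns (semitone_freq_hz, detune_hz): the frequency of the nearest
    tone/half tone, and the detuning value -- the signed difference
    between the detected frequency and that semitone.
    Assumes A4 = 440 Hz equal temperament (illustrative assumption).
    """
    midi = 69 + 12 * math.log2(freq_hz / 440.0)   # fractional MIDI number
    nearest = round(midi)                          # closest tone/half tone
    semitone_freq = 440.0 * 2 ** ((nearest - 69) / 12)
    return semitone_freq, freq_hz - semitone_freq

# A string sounding at 452 Hz is closest to A4 (440 Hz),
# so the second playback would carry a +12 Hz detuning.
semitone, detune = nearest_semitone(452.0)
```

The two return values correspond to the two data streams described above: the nearest tone/half tone selects which prerecorded sound plays, and the residue detunes its second playback.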
Currently there are three sets of prerecorded sounds. I have been using specific zither playing techniques for each set, and developing corresponding musical forms. The music highlights surreptitious chromaticisms and timings, avoiding easy developments.
With set 1, the zither was dribbled or played with the bow. As input to the software, it activated sounds of bass guitar, ocean waves, water drops, thunder (synthesizer) and wind (synthesizer).
Rec #2 (2 min)
With set 2, the zither was played with hands, bottlenecks and pick. The digital sounds were from dobro, bass guitar, and zither (played with bottleneck and pick).
Rec #3 (3 min)
With set 3, the zither was played with bow and bottleneck, activating piano notes and digital timbres.
The compositional strategies from the Arpeggio-Detuning software are now being adapted to 3D-rendering, audio-visual software. Having focused on sonic complexity, the iterative prototyping process now focuses on visual continuity and fungible audio-visual relationships. I will keep posting updates.