A History of Electronic Music Pioneers

David Dunn

Note:

This essay was written for the catalog that accompanied the exhibition: Eigenwelt der
Apparatewelt: Pioneers of Electronic Art. The exhibition was presented as part of Ars
Electronica 1992, in Linz, Austria and was curated by Woody and Steina Vasulka. It
consisted of a comprehensive, interactive display of vintage electronic tools for video and
audio generation/processing from the 1960's and 1970's. The exhibition also presented
several interactive laser disk displays of text, music samples, and still or moving images
that were correlated to the exhibition catalog.

"When intellectual formulations are treated simply by relegating them to the past and
permitting the simple passage of time to substitute for development, the suspicion is
justified that such formulations have not really been mastered, but rather they are being
suppressed." --Theodor W. Adorno

"It is the historical necessity, if there is a historical necessity in history, that a new decade
of electronic television should follow to the past decade of electronic music." --Nam June
Paik (1965)

Introduction:

Historical facts reinforce the obvious realization that the major
cultural impetus which spawned video image experimentation was the American Sixties.
As a response to that cultural climate, it was more a perceptual movement than an artistic
one in the sense that its practitioners desired an electronic equivalent to the sensory and
physiological tremendum which came to life during the Vietnam War. Principal among
these was the psychedelic experience with its radical experiential assault on the nature of
perception and visual phenomena. Armed with a new visual ontology, these practitioners
were informed less by any cinematic image-making tradition than by an overt counter-cultural
reaction to television as a mainstream institution and purveyor of images that were
deemed politically false. The violence of technology that television personified, both
metaphorically and literally through the war images it disseminated, represented a source
for renewal in the electronic reconstruction of archaic perception. It is specifically a
concern for the expansion of human perception through a technological stratagem that
links those tumultuous years of aesthetic and technical experimentation with the 20th
century history of modernist exploration of electronic potentials, primarily exemplified by
the lineage of artistic research initiated by electronic sound and music experimentation
beginning as far back as 1906 with the invention of the Telharmonium. This essay traces
some of that early history and its implications for our current historical predicament. The
other essential argument put forth here is that a more recent period of video
experimentation is only one of the later chapters in a history of failed utopianism that
dominates the artistic exploration and use of technology throughout the 20th century. The
following pages present an historical context for the specific focus of this exhibition on
early pioneers of electronic art. Prior to the 1960's, the focus is, of necessity,
predominantly upon electronic sound tool-making and electroacoustic aesthetics as
antecedent to the more relevant discussion of the emergence of electronic image
generation/processing tools and aesthetics. Our intention is to frame this image-making
tradition within the realization that many of its concerns were first articulated within an
audio technology domain and that they repeat, within the higher frequency spectrum of
visual information, similar issues encountered within the electronic music/sound art
traditions. In fact, it can be argued that many of the innovators within this period of
electronic image-making evolved directly from participation in the electronic music
experimentation of that time period. Since the exhibition itself attempts to depict these
individuals and their art through the perspective of the actual means of production, as
exemplified by the generative tools, it must be pointed out that the physical objects on
display are not to be regarded as aesthetic objects per se but rather as instruments which
facilitate the articulation of both aesthetic products and ideological viewpoints. It is
predominantly the process which is on exhibit. In this regard we have attempted to present
the ideas and art work that emerged from these processes as intrinsic parts of ideological
systems which must also be framed within an historical context. We have therefore
provided access to the video/audio art and other cultural artifacts directly from this text as
it unfolds in chronological sequence. Likewise, this essay discusses this history with an
emphasis on issues which reinforce a systemic process view of a complex set of dialectics
(e.g. modernist versus representationist aesthetics, and artistic versus
industrial/technocratic ideologies).

Early Pioneers:

One of the persistent realities of history
is that the facts which we inherit as descriptions of historical events are not neutral. They
are invested with the biases of individual and/or group participants, those who have
survived or, more significantly, those who have acquired sufficient power to control how
that history is written. In attempting to compile this chronology, it has been my intention to
present a story whose major signposts include those who have made substantive
contributions but remain uncelebrated in addition to those figures who have merely
become famous for being famous. The reader should bear in mind that this is a brief
chronology that must of necessity neglect other events and individuals whose work was
just as valid. It is also an important feature of this history that the artistic use of technology
has too often been criticized as an indication of a de-humanizing trend by a culture which
actually embraces such technology in most other facets of its deepest fabric. It appears to
abhor that which mirrors its fundamental workings and yet offers an alternative to its own
violence. In view of this suspicion I have chosen to write this chronology from a position
that regards the artistic acquisition of technology as one of the few arenas where a creative
critique of the so-called technological era has been possible. One of the earliest
documented musical instruments based upon electronic principles was the Clavecin
Électrique designed by the Jesuit priest Jean-Baptiste Delaborde in France, 1759. The
device used a keyboard control based upon simple electrostatic principles. The spirit of
invention which immediately preceded the turn of this century coincided with an
unprecedented cultural enthusiasm for the new technologies. Individuals such
as Bell, Edison, and Tesla became culture heroes who ushered in an ideology of industrial
progress founded upon the power of harnessed electricity. Amongst this assemblage of
inventor industrialists was Dr. Thaddeus Cahill, inventor of the electric typewriter, designer
and builder of the first musical synthesizer and, by default, originator of industrial muzak.

While a few attempts to build electronic musical instruments were made in the late 19th
century by Elisha Gray, Ernst Lorenz, and William Duddell, they were fairly tentative or
simply the curious byproducts of other research into electrical phenomena. One exception
was the musical instrument called the Choralcelo built in the United States by Melvin L.
Severy and George B. Sinclair between 1888 and 1908. Cahill's invention, the
Telharmonium, however, remains the most ambitious attempt to construct a viable
electronic musical instrument ever conceived. Working against incredible technical
difficulties, Cahill succeeded in 1900 in constructing the first prototype of the Telharmonium
and by 1906, a fairly complete realization of his vision. This electro-mechanical device
consisted of 145 rheotome/alternators capable of producing five octaves of variable
harmonic content in imitation of orchestral tone colors. Its principle of operation consisted
of what we now refer to as additive synthesis and was controlled from two touch-sensitive
keyboards capable of timbral, amplitude and other articulatory selections. Since Cahill's
machine was invented before electronic amplification was available he had to build
alternators that produced more than 10,000 watts. As a result the instrument was quite
immense, weighing approximately 200 tons. When it was shipped from Holyoke,
Massachusetts to New York City, over thirty railroad flatcars were enlisted in the effort.
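The additive approach that the Telharmonium mechanized with rheotomes and alternators is easy to sketch in modern terms: a complex tone is built by summing sine-wave partials at integer multiples of a fundamental. The harmonic amplitudes below are illustrative only, not Cahill's actual voicings.

```python
import numpy as np

def additive_tone(f0, harmonic_amps, sr=44100, dur=1.0):
    """Additive synthesis: sum sine partials at integer multiples of f0.

    harmonic_amps[k-1] is the amplitude of the k-th harmonic.
    """
    t = np.arange(int(sr * dur)) / sr
    tone = sum(a * np.sin(2 * np.pi * f0 * k * t)
               for k, a in enumerate(harmonic_amps, start=1))
    # Normalize to the range [-1, 1] so the mixture never clips.
    return tone / max(1e-9, np.max(np.abs(tone)))

# Illustrative spectrum: a fundamental with decaying upper partials.
signal = additive_tone(220.0, [1.0, 0.5, 0.33, 0.25])
```

Varying the relative amplitudes of the partials is what allowed the instrument's "timbral selections" in imitation of orchestral tone colors.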
While Cahill's initial intention was simply to realize a truly sophisticated electronic
instrument that could perform traditional repertoire, he quickly pursued its industrial
application in a plan to provide direct music to homes and offices as the strategy to fund
its construction. He founded the New York Electric Music Company with this intent and
began to supply realtime performances of popular classics to subscribers over telephone
lines. Ultimately the business failed due to insurmountable technical and legal difficulties,
ceasing operations in 1911. The Telharmonium and its inventor represent one of the most
spectacular examples of one side of a recurrent dialectic which we will see demonstrated
repeatedly throughout the 20th century history of the artistic use of electronic technology.

Cahill personifies the industrial ideology of invention which seeks to imitate more
efficiently the status quo. Such an ideology desires to summarize existent knowledge
through a new technology and thereby provide a marketable representation of current
reality. In contrast to this view, the modernist ideology evolved to assert an anti-
representationist use of technology which sought to expand human perception through the
acquisition of new technical means. It desired to seek the unknown as new
phenomenological and experiential understandings which shattered models of the so-
called "real". The modernist agenda is brilliantly summarized by the following quote by
Hugo Ball: "It is true that for us art is not an end in itself, we have lost too many of our
illusions for that. Art is for us an occasion for social criticism, and for real understanding of
the age we live in...Dada was not a school of artists, but an alarm signal against declining
values, routine and speculations, a desperate appeal, on behalf of all forms of art, for a
creative basis on which to build a new and universal consciousness of art." Many
composers at the beginning of this century dreamed of new electronic technologies that
could expand the palette of sound and tunings of which music and musical instruments
then consisted. Their interest was not to use the emerging electronic potential to imitate
existent forms, but rather to go beyond what was already known. In the same year that
Cahill finalized the Telharmonium and moved it to New York City, the composer Ferruccio
Busoni wrote his Entwurf einer neuen Ästhetik der Tonkunst (Sketch of a New Aesthetic of
Music) wherein he proposed the necessity for an expansion of the chromatic scale and
new (possibly electrical) instruments to realize it. Many composers embraced this idea
and began to conceptualize what such a music should consist of. In the following year, the
Australian composer Percy Grainger was already convinced that his concept of Free Music
could only be realized through use of electromechanical devices. By 1908 the Futurist
Manifesto was published and the modernist ideology began its artists' revolt against
existent social and cultural values. In 1913 Luigi Russolo wrote The Art of Noises, declaring
that the "evolution of music is paralleled by the multiplication of the machine". By the end
of that year, Russolo and Ugo Piatti had constructed an orchestra of electro-mechanical
noise instruments (intonarumori) capable of realizing their vision of a sound art which
shattered the musical status quo. Russolo desired to create a sound based art form out of
the noise of modern life. His noise intoning devices presented their array of "howlers,
boomers, cracklers, scrapers, exploders, buzzers, gurglers, and whistles" to bewildered
audiences in Italy, London, and finally Paris in 1921, where he gained the attention of
Varèse and Stravinsky. Soon after this concert the instruments were apparently only used
commercially for generating sound effects and were abandoned by Russolo in 1930.

Throughout the second decade of the 20th century there was an unprecedented amount of
experimental music activity much of which involved discourse about the necessity for new
instrumental resources capable of realizing the emerging theories which rejected
traditional compositional processes. Composers such as Ives, Satie, Cowell, Varèse, and
Schoenberg were advancing the structural and instrumental resources for music. It was
into this intellectual climate, and into the cultural changes brought on by the Russian
Revolution, that Leon Theremin (Lev Sergeyevich Termen) introduced the Aetherophone
(later known as the Theremin), a new electronic instrument based on radio-frequency
oscillations controlled by hands moving in space over two antennae. The extraordinary
flexibility of the instrument not only allowed for the performance of traditional repertoire
but also a wide range of new effects. The theatricality of its playing technique and the
uniqueness of its sound made the Theremin the most radical musical instrument
innovation of the early 20th century. The success of the Theremin brought its inventor a
modest celebrity status. In the following years he introduced the instrument to Vladimir
Lenin, invented one of the earliest television devices, and moved to New York City. There
he gave concerts with Leopold Stokowski, entertained Albert Einstein and married a black
dancer named Lavinia Williams. In 1932 he collaborated with the electronic image pioneer
Mary Ellen Bute to display mathematical formulas on a CRT synchronized to music. He
also continued to invent new instruments such as the Rhythmicon, a complex cross-
rhythm instrument produced in collaboration with Henry Cowell. Upon his return to the
Soviet Union in 1938, Theremin was placed under house arrest and directed to work for the
state on communications and surveillance technologies until his retirement in the late
1960's. In many ways, Leon Theremin represents an archetypal example of the
artist/engineer whose brilliant initial career is coopted by industry or government. In his
case the irony is particularly poignant in that he invented his instruments in the full
flowering of the Bolshevik enthusiasm for progressive culture under Lenin and
subsequently fell prey to Stalin's ideology of fear and repression. Theremin was prevented
until 1991 (at 95 years of age) from setting foot outside the USSR because he possessed
classified information about radar and surveillance technologies that had been obsolete
for years. This suppression of innovation through institutional ambivalence, censorship or
cooptation is also one of the recurrent patterns of the artistic use of technology throughout
the 20th century. What often begins with the desire to expand human perception ends with
commoditization or direct repression. By the end of the 1920's a large assortment of new
electronic musical instruments had been developed. In Germany Jörg Mager had been
experimenting with the design of new electronic instruments. The most successful was the
Sphärophon, a radio frequency oscillator based keyboard instrument capable of producing
quarter-tone divisions of the octave. Mager's instruments used loudspeakers with unique
driver systems and shapes to achieve a variety of sounds. Maurice Martenot introduced his
Ondes Martenot in France where the instrument rapidly gained acceptance with a wide
assortment of established composers. New works were written for the instrument by
Milhaud, Honegger, Jolivet, Varèse and eventually Messiaen who wrote Fête des Belles
Eaux for an ensemble of six Ondes Martenots in 1937 and later wrote for it as a solo
instrument in his Trois petites liturgies of 1944. The Ondes Martenot was based upon
technology similar to that of the Theremin and Sphärophon but introduced a much more
sophisticated and flexible control
strategy. Other new instruments introduced around this time were the Dynaphone of Rene
Bertrand, the Hellertion of Bruno Helberger and Peter Lertes and an organ-like "synthesis"
instrument devised by J. Givelet and E. Coupleaux which used a punched paper roll control
system for audio oscillators constructed with over 700 vacuum tubes. One of the longest
lived of this generation of electronic instruments was the Trautonium of Dr. Friedrich
Trautwein. This keyboard instrument was based upon distinctly different technology than
the principles previously mentioned. It was one of the first instruments to use a neon-tube
oscillator and its unique sound could be selectively filtered during performance. Its
resonance filters could emphasize specific overtone regions. The instrument was
developed in conjunction with the Hochschule für Musik in Berlin where a research
program for compositional manipulation of phonograph recordings had been founded two
years earlier in 1928. The composer Paul Hindemith participated in both of these
endeavors, composing a Concertino for Trautonium and String Orchestra and a sound
montage based upon phonograph record manipulations of voice and instruments. Other
composers who wrote for the Trautonium included Richard Strauss and Werner Egk. The
greatest virtuoso of this instrument was the composer Oskar Sala who performed on it,
and made technical improvements, into the 1960's. Also about this time, the composer
Robert Beyer published a curious paper about "space" or "room music" entitled Das
Problem der kommenden Musik that gained little attention from his colleagues. (Beyer's
subsequent role in the history of electronic music will be discussed later.) The German
experiments in phonograph manipulation constitute one of the first attempts at organizing
sound electronically that was not based upon an instrumental model. While this initial
attempt at the stipulation of sound events through a kind of sculptural molding of recorded
materials was short lived, it set in motion one of the main approaches to electronic
composition to become dominant in decades to come: the electronic music studio. Other
attempts at a non-instrumental approach to sound organization began in 1930 within both
the USSR and Germany. With the invention of optical sound tracks for film a number of
theorists became inspired to experiment with synthetic sound generated through standard
animation film techniques. In the USSR two centers for this research were established:
A.M. Avzaamov, N.Y. Zhelinsky, and N.V. Voinov experimented at the Scientific
Experimental Film Institute in Leningrad while E.A. Scholpo and G.M. Rimski-Korsakov
performed similar research at the Leningrad Conservatory. In the same year, Bauhaus
artists performed experiments with hand-drawn waveforms converted into sound through
photoelectric cells. Two other German artists, Rudolph Pfenninger and Oscar Fischinger
worked separately at about this time exploring synthetic sound generation through
techniques that were similar to Voinov and Avzaamov. A dramatic increase in new
electronic instruments soon appeared in subsequent years. All of them seem to have had
fascinating if not outrightly absurd names: the Sonorous Cross; the Electrochord; the
Ondioline; the Clavioline; the Kaleidophon; the Electronium Pi; the Multimonica; the
Pianophon; the Tuttivox; the Mellertion; the Emicon; the Melodium; the Oscillion; the
Magnetton; the Photophone; the Orgatron; the Photona; and the Partiturophon. While most
of these instruments were intended to produce new sonic resources, some were intended
to replicate familiar instrumental sounds of the pipe organ variety. It is precisely this desire
to replicate the familiar which spawned the other major tradition of electronic instrument
design: the large families of electric organs and pianos that began to appear in the early
1930's. Laurens Hammond built his first electronic organ in 1929 using the same tone-
wheel process as Cahill's Telharmonium. Electronic organs built in the following years by
Hammond included the Novachord and the Solovox. While Hammond's organ's were
rejected by pipe organ enthusiasts because its additive synthesis technique sounded too
"electronic", he was the first to achieve both stable intonation through synchronized
electromechanical sound generators and mass production of an electronic musical
instrument, setting a precedent for popular acceptance. Hammond also patented a spring
reverberation technique that is still widely used. The Warbo Formant Organ (1937) was one
of the first truly polyphonic electronic instruments that could be considered a predecessor
of current electronic organs. Its designer, the German engineer Harald Bode, was one of the
central figures in the history of electronic music in both Europe and the United States. Not
only did he contribute to instrument design from the 1930's on, he was one of the primary
engineers in establishing the classic tape music studios in Europe. His contributions
straddled the two major design traditions of new sounds versus imitation of traditional
ones without much bias since he was primarily an engineer interested in providing tools for
a wide range of musicians. Other instruments which he subsequently built included the
Melodium, the Melochord and the Polychord (Bode's other contributions will be discussed
later in this essay). By the late 1930's there was an increase of experimental activity in both
Europe and the United States. 1938 saw the installation of the ANS Synthesizer at the
Moscow Experimental Music Studio. John Cage began his long fascination with electronic
sound sources in 1939 with the presentation of Imaginary Landscape No. 1, a live
performance work whose score includes a part for disc recordings performed on a variable
speed phonograph. A number of similar works utilizing recorded sound and electronic
sound sources followed. Cage had also been one of the most active proselytizers for
electronic music through his writings, as were Edgard Varèse, Joseph Schillinger, Leopold
Stokowski, Henry Cowell, Carlos Chavez and Percy Grainger. It was during the 1930's that
Grainger seriously began to pursue the building of technological tools capable of realizing
his radical concept of Free Music notated as spatial non-tempered structures on graph
paper. He composed such a work for an ensemble of four Theremins (1937) and began to
collaborate with Burnett Cross to design a series of synchronized oscillator instruments
controlled by a paper tape roll mechanism. These instruments saw a number of
incarnations until Grainger's death in 1961. In 1939 Homer Dudley created the voder and
the vocoder for non-musical applications associated with speech analysis. The voder was
a keyboard-operated encoding instrument consisting of bandpass channels for the
simulation of resonances in the human voice. It also contained tone and noise sources for
imitating vowels and consonants. The vocoder was the corresponding decoder which
consisted of an analyzer and synthesizer for analyzing and then reconstituting the same
speech. Besides being one of the first sound modification devices, the vocoder was to take
on an important role in electronic music as a voice processing device that is still widely in
use today. The important technical achievements of the 1930's included the first
successful television transmission and major innovations in audio recording. Since the
turn of the century, research into improving upon the magnetic wire recorder, invented by
Valdemar Poulsen, had steadily progressed. A variety of improvements had been made,
most notably the use of electrical amplification and the invention of the Alternating Current
bias technique. The next major improvement was the replacement of wire with steel
bands, a fairly successful technology that saw significant use by the secret police of
the Nazi party. The German scientist Fritz Pfleumer had begun to experiment with oxide-
coated paper and plastic tape as early as 1927 and the I.G. Farbenindustrie introduced the
first practical plastic recording tape in 1932. The most successful of the early magnetic
recording devices was undoubtedly the AEG Magnetophone introduced in 1935 at the
Berlin Radio Fair. This device was to become the prototypical magnetic tape recorder and
was vastly superior to the wire recorders then in use. By 1945 the Magnetophone adopted
oxide-coated paper tape. After World War II the patents for this technology were
transferred to the United States as war booty and further improvements in tape technology
progressed there. Widespread commercial manufacturing and distribution of magnetic
tape recorders became a reality by 1950. The influence of World War II upon the arts was
obviously drastic. Most experimental creative activity ceased and technical innovation was
almost exclusively dominated by military needs. European music was the most seriously
affected, with electronic music research remaining dormant until the late 1940's. However,
with magnetic tape recording technology now a reality, a new period of rapid innovation
took place. At the center of this new activity was the ascendancy of the tape music studio
as both compositional tool and research institution. Tape recording revolutionized
electronic music more than any other single event in that it provided a flexible means to
both store and manipulate sound events. The result was the defining of electronic music
as a true genre. While the history of this genre before 1950 has primarily focused upon
instrument designers, after 1950 the emphasis shifts towards the composers who
consolidated the technical gains of the first half of the 20th century. Just prior to the advent
of the tape recorder, Pierre Schaeffer had begun his experiments with manipulation of
phonograph recordings and quickly evolved a theoretical position which he named
Musique Concrète in order to emphasize the sculptural aspect of how the sounds were
manipulated. Schaeffer predominantly used sounds of the environment that had been
recorded through microphones onto disc and later tape. These "sound objects" were then
manipulated as pieces of sound that could be spliced into new time relationships,
processed through a variety of devices, transposed to different frequency registers through
tape speed variations, and ultimately combined into a montage of various mixtures of
sounds back onto tape. In 1948 Schaeffer was joined by the engineer Jacques Poullin who
subsequently played a significant role in the technical evolution of tape music in France.
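The tape-speed transposition described above has a simple digital analogue: resampling a recorded signal changes pitch and duration together, just as varispeed playback of a tape did. A minimal sketch follows; the test signal and speed ratio are illustrative, not drawn from any musique concrète work.

```python
import numpy as np

def varispeed(samples, ratio):
    """Simulate a tape-speed change by linear-interpolation resampling.

    ratio 2.0 plays back twice as fast: pitch rises an octave and the
    duration is halved, exactly coupled, as on a variable-speed deck.
    """
    read_positions = np.arange(0, len(samples), ratio)  # fractional indices
    return np.interp(read_positions, np.arange(len(samples)), samples)

sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)   # one second of a 220 Hz "recording"
octave_up = varispeed(tone, 2.0)     # half the length, pitched up an octave
```

The coupling of pitch and duration is why speed variation produced such characteristic dislocations of recorded material: there was no way to transpose a sound object without also compressing or stretching it in time.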
That same year saw the initial broadcast of Musique Concrète over French Radio,
billed as a `concert de bruits'. The composer Pierre Henry then joined Schaeffer and
Poullin in 1949. Together they constructed the Symphonie pour un homme seul, one of the
true classics of early tape music completed before they had access to tape recorders. By
1950 Schaeffer and Henry were working with magnetic tape and the evolution of musique
concrète proceeded at a fast pace. The first public performance was given in that same
year at the École Normale de Musique. In the following year, French National Radio
installed a sophisticated studio for the Group for Research on Musique Concrète. Over the
next few years significant composers began to be attracted to the studio including Pierre
Boulez, Michel Philippot, Jean Barraqué, Philippe Arthuys, Edgard Varèse, and Olivier
Messiaen. In 1954 Varèse composed the tape part to Déserts for orchestra and tape at the
studio and the work saw its infamous premiere in December of that year. Since Musique
Concrète was both a musical and aesthetic research project a variety of theoretical
writings emerged to articulate the movement's progress. Of principal importance was
Schaeffer's book A la recherche d'une musique concrète. In it he describes the group's
experiments in a pseudo-scientific manner that forms a lexicon of sounds and their
distinctive characteristics which should determine compositional criteria and
organization. In collaboration with A. Moles, Schaeffer specified a classification system for
acoustical material according to orders of magnitude and other criteria. In many ways
these efforts set the direction for the positivist philosophical bias that has dominated the
"research" emphasis of electronic music institutions in France and elsewhere. The sonic
and musical characteristics of early musique concrète were pejoratively described by
Olivier Messiaen as containing a high level of surrealistic agony and literary descriptivism.
The movement's evolution saw most of the participating composers including Schaeffer
move away from the extreme dislocations of sound and distortion associated with its early
compositions and simple techniques. Underlying the early works was a fairly consistent
philosophy best exemplified by a statement by Schaeffer: "I belong to a generation which is
largely torn by dualisms. The catechism taught to men who are now middle-aged was a
traditional one, traditionally absurd: spirit is opposed to matter, poetry to technique,
progress to tradition, individual to the group and how much else. From all this it takes just
one more step to conclude that the world is absurd, full of unbearable contradictions. Thus
a violent desire to deny, to destroy one of the concepts, especially in the realm of form,
where, according to Malraux, the Absolute is coined. Fashion faintheartedly approved this
nihilism. If musique concrète were to contribute to this movement, if, hastily adopted,
stupidly understood, it had only to add its additional bellowing, its new negation, after so
much smearing of the lines, denial of golden rules (such as that of the scale), I should
consider myself rather unwelcome. I have the right to justify my demand, and the duty to
lead possible successors to this intellectually honest work, to the extent to which I have
helped to discover a new way to create sound, and the means--as yet approximate--to give
it form. ... Photography, whether the fact be denied or admitted, has completely upset
painting, just as the recording of sound is about to upset music .... For all that, traditional
music is not denied; any more than the theatre is supplanted by the cinema. Something
new is added: a new art of sound. Am I wrong in still calling it music?" While the tape studio
is still a major technical and creative force in electronic music, its early history marks a
specific period of technical and stylistic activity. As recording technology began to reveal
itself to composers, many of whom had been anxiously awaiting such a breakthrough,
some composers began to work under the auspices of broadcast radio stations and
recording studios with professional tape recorders and test equipment in off hours. Others
began to scrounge and share equipment wherever possible, forming informal cooperatives
based upon available technology. While Schaeffer was defining musique concrète, other
independent composers were experimenting with tape and electronic sound sources. The
end of the 1940's saw the French composer Paul Boisselet compose some of the earliest live
performance works for instruments, tape recorders and electronic oscillators. In the
United States, Bebe and Louis Barron began their pioneering experiments with tape
collage. As early as 1948 the Canadian composer/engineer Hugh Le Caine was hired by the
National Research Council of Canada to begin building electronic musical instruments. In
parallel to all of these events, another major lineage of tape studio activity began to
emerge in Germany. According to the German physicist Werner Meyer-Eppler, the events
comprising German electronic music history during this period unfolded as follows. In 1948
Homer Dudley, the inventor of the Vocoder, demonstrated his device for Meyer-Eppler.
Meyer-Eppler subsequently used a tape recording of the Vocoder to illustrate a lecture he gave in
1949 called Developmental Possibilities of Sound. In the audience was the
aforementioned Robert Beyer, now employed at the Northwest German Radio, Cologne.
Beyer must have been profoundly impressed by the presentation since it was decided that
lectures should be formulated on the topic of "electronic music" for the International
Summer School for New Music in Darmstadt the following year. Much of the subsequent
lecture by Meyer-Eppler contained material from his classic book, Electronic Tone
Generation, Electronic Music, and Synthetic Speech. By 1951 Meyer-Eppler began a series
of experiments with synthetically generated sounds using Harald Bode's Melochord and an
AEG magnetic tape recorder. Together with Robert Beyer and Herbert Eimert, Meyer-Eppler
presented his research as a radio program called "The World of Sound of Electronic Music"
over German Radio, Cologne. This broadcast helped to convince officials and technicians
of the Cologne radio station to sponsor an official studio for Elektronischen Musik. From its
beginning the Cologne studio differentiated itself from the Musique Concrète activities in
Paris by limiting itself to "pure" electronic sound sources that could be manipulated
through precise compositional techniques derived from Serialism. While one of the
earliest compositional outcomes from the influence of Meyer-Eppler was Bruno Maderna's
collaboration with him entitled Musica su due Dimensioni for flute, percussion, and
loudspeaker, most of the other works that followed were strictly concerned with utilizing
only electronic sounds such as pure sine-waves. One of the first composers to attempt this
labor-intensive form of studio-based additive synthesis was Karlheinz Stockhausen, who
created his Étude out of pure sine-waves at the Paris studio in 1952. Similar works were
produced at the Cologne facilities by Beyer and Eimert at about this time and subsequently
followed by the more sophisticated attempts by Stockhausen, Studie I (1953) and Studie II
(1954). In 1954 a public concert was presented by Cologne radio that included works by
Stockhausen, Goeyvaerts, Pousseur, Gredinger, and Eimert. Soon other composers began
working at the Cologne studio including Koenig, Heiss, Klebe, Kagel, Ligeti, Brün and Ernst
Krenek. The latter composer completed his Spiritus Intelligentiae Sanctus at the Cologne
studio in 1956. This work, along with Stockhausen's Gesang der Jünglinge composed at the
same time, signifies the end of the short-lived pure electronic emphasis claimed by the
Cologne school. Both works used electronically generated sounds in combination with
techniques and sound sources associated with musique concrète. While the distinction
usually posited between the early Paris and Cologne schools of tape music composition
emphasizes either the nature of the sound sources or the presence of an organizational
bias such as Serialism, I tend to view this distinction more in terms of a reorganization at
mid-century of the representationist versus modernist dialectic which appeared in prior
decades. Even though Schaeffer and his colleagues were consciously aligned in overt ways
with the Futurists' concern with noise, they tended to rely on dramatic expression that was
dependent upon illusionistic associations to the sounds undergoing deconstruction. The
early Cologne school appears to have been concerned with an authentic and didactic
display of the electronic material and its primary codes as if it were possible to reveal the
metaphysical and intrinsic nature of the material as a new perceptual resource. Obviously
the technical limitations of the studio at that time, in addition to the aesthetic demands
imposed by the current issues of musicality, made their initial pursuit too problematic.
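The Cologne approach of building timbres from pure sine tones — studio-based additive synthesis — can be sketched in modern terms. The following is a hypothetical illustration with arbitrary partial frequencies and amplitudes, not a reconstruction of any historical work:

```python
import math

SAMPLE_RATE = 44100  # samples per second (an arbitrary modern rate)

def sine_partial(freq_hz, amp, n_samples, rate=SAMPLE_RATE):
    """One pure sine tone: the basic material of the early Cologne studio."""
    return [amp * math.sin(2 * math.pi * freq_hz * i / rate) for i in range(n_samples)]

def additive_mix(partials, n_samples):
    """Sum several sine partials sample by sample to build a composite timbre."""
    mix = [0.0] * n_samples
    for freq, amp in partials:
        for i, s in enumerate(sine_partial(freq, amp, n_samples)):
            mix[i] += s
    return mix

# A hypothetical inharmonic spectrum of the kind composers specified
# by table of frequencies and amplitudes rather than by keyboard.
tone = additive_mix([(440.0, 0.5), (620.0, 0.3), (910.0, 0.2)], n_samples=1000)
```

Even summing three partials in a loop hints at why assembling such spectra by hand, one recorded sine tone at a time on tape, was so labor intensive.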
Concurrent with the tape studio developments in France and Germany there were
significant advances occurring in the United States. While there was not yet any significant
institutional support for the experimental work being pursued by independent composers,
some informal projects began to emerge. The Music for Magnetic Tape Project was formed
in 1951 by John Cage, Earle Brown, Christian Wolff, David Tudor, and Morton Feldman and
lasted until 1954. Since the group had no permanent facility, they relied on borrowed time
in commercial sound studios such as that maintained by Bebe and Louis Barron or used
borrowed equipment that they could share. The most important work to have emerged
from this collective was Cage's Williams Mix. The composition used hundreds of
prerecorded sounds from the Barron's library as the source from which to fulfill the
demands of a meticulously notated score that specified not only the categories of sounds
to be used at any particular time but also how the sounds were to be spliced and edited.
The work required over nine months of intensive labor on the part of Cage, Brown and
Tudor to assemble. While to untutored ears the final work may not have sounded very
distinct from the other tape works produced in France or Cologne at the same time, it
nevertheless represented a radical compositional and philosophical challenge to these
other schools of thought. In the same year as Cage's Williams Mix, Vladimir Ussachevsky
gave a public demonstration of his tape music experiments at Columbia University.
Working in almost complete isolation from the other experimenters in Europe and the
United States, Ussachevsky began to explore tape manipulation of electronic and
instrumental sounds with very limited resources. He was soon joined by Otto Luening and
the two began to compose in earnest some of the first tape compositions in the United
States at the home of Henry Cowell in Woodstock, New York: Fantasy in Space, Low
Speed, and Sonic Contours. The works, after completion in Ussachevsky's living room in
New York and in the basement studio of Arturo Toscanini's Riverdale home, were
presented at the Museum of Modern Art in October of 1952. Throughout the 1950's
important work in electronic music experimentation accelerated at a rapid pace. In
1953 an Italian electronic music studio (Studio di Fonologia) was established at the Radio
Audizioni Italiane in Milan. During its early years the studio attracted many important
international figures including Luciano Berio, Niccolò Castiglioni, Aldo Clementi, Bruno
Maderna, Luigi Nono, John Cage, Henri Pousseur, André Boucourechliev, and Bengt
Hambraeus. Studios were also established at the Philips research labs in Eindhoven and at
NHK (Japanese Broadcasting System) in 1955. In that same year the David Sarnoff
Laboratories of RCA in Princeton, New Jersey introduced the Olson-Belar Sound
Synthesizer to the public. As its name suggests, this instrument is generally considered the
first modern "synthesizer" and was built with the specific intention of synthesizing
traditional instrumental timbres for the manufacture of popular music. In an interesting
reversal of the usual industrial absorption of artistic innovation, the machine proved
inappropriate for its original intent and was later used entirely for electronic music
experimentation and composition. Since the device was based upon a combination of
additive and subtractive synthesis strategies, with a control system consisting of a
punched paper roll or tab-card programming scheme, it was an extremely sophisticated
instrument for its time. Not only could a composer generate, combine and filter sounds
from the machine's tuning-fork oscillators and white-noise generators, but sounds could
also be input from a microphone for modification. Ultimately the device's design philosophy
favored fairly classical concepts of musical structure, such as precise control of twelve-
tone pitch material, and it was therefore embraced by composers working within the serial
genre. The first composers to work with the Olson-Belar Sound Synthesizer (later known as
the RCA Music Synthesizer) were Vladimir Ussachevsky, Otto Luening and Milton Babbitt
who managed to initially gain access to it at the RCA Labs. Within a few years this trio of
composers in addition to Roger Sessions managed to acquire the device on a permanent
basis for the newly established Columbia-Princeton Electronic Music Center in New York
City. Because of its advanced facilities and its policy of encouraging contemporary
composers, the center attracted a large number of international figures such as Alice
Shields, Pril Smiley, Michiko Toyama, Bülent Arel, Mario Davidovsky, Halim ElDabh, Mel
Powell, Jacob Druckman, Charles Wuorinen, and Edgard Varèse. In 1958 the University of
Illinois at Champaign/Urbana established the Studio for Experimental Music. Under the
initial direction of Lejaren Hiller the studio became one of the most important centers for
electronic music research in the United States. Two years earlier, Hiller, who was also a
professional chemist, applied his scientific knowledge of digital computers to the
composition of the Illiac Suite for String Quartet, one of the first attempts at serious
computer-aided musical composition. In subsequent years the resident faculty connected
with the Studio for Experimental Music included composers Herbert Brün, Kenneth
Gaburo, and Salvatore Martirano along with the engineer James Beauchamp whose
Harmonic Tone Generator was one of the most interesting special sound generating
instruments of the period. By the end of the decade Pierre Schaeffer had reorganized the
Paris studio into the Groupe de Recherches Musicales and had abandoned the term
musique concrète. His staff was joined at this time by Luc Ferrari and François-Bernard
Mache, and later by François Bayle and Bernard Parmegiani. The Greek composer,
architect and mathematician Yannis Xenakis was also working at the Paris facility as was
Luciano Berio. Xenakis produced his classic composition Diamorphoses in 1957 in which
he formulated a theory of density change which introduced a new category of sounds and
structure into musique concrète. In addition to the major technical developments and
burgeoning studios just outlined there was also a dramatic increase in the actual
composition of substantial works. From 1950 to 1960 the vocabulary of tape music shifted
from the fairly pure experimental works which characterized the classic Paris and Cologne
schools to more complex and expressive works which explored a wide range of
compositional styles. More and more works began to appear by the mid-1950's which
addressed the concept of combining taped sounds with live instruments and voices. There
was also a tentative interest, and a few attempts, at incorporating taped electronic sounds
into theatrical works. While the range of issues being explored was extremely broad, much
of the work in the various tape studios was an extension of the Serialism which dominated
instrumental music. By the end of the decade new structural concepts began to emerge
from working with the new electronic sound sources, concepts which in turn influenced
instrumental music.
This expansion of timbral and organizational resources brought strict serialism into
question. In order to summarize the activity of the classic tape studio period, a brief survey
of some of the major works of the 1950's is called for. This list is not intended to be
exhaustive but only to provide a few points of reference:

1949) Schaeffer and Henry: Symphonie pour un homme seul
1951) Grainger: Free Music
1952) Maderna: Musica su due Dimensioni; Cage: Williams Mix; Luening: Fantasy in Space; Ussachevsky: Sonic Contours; Brau: Concerto de Janvier
1953) Schaeffer and Henry: Orphée; Stockhausen: Studie I
1954) Varèse: Déserts; Stockhausen: Studie II; Luening and Ussachevsky: A Poem in Cycles and Bells
1955) B. & L. Barron: soundtrack to Forbidden Planet
1956) Krenek: Spiritus Intelligentiae Sanctus; Stockhausen: Gesang der Jünglinge; Berio: Mutazioni; Maderna: Notturno; Hiller: Illiac Suite for String Quartet
1957) Xenakis: Diamorphoses; Pousseur: Scambi; Badings: Evolutionen
1958) Varèse: Poème électronique; Ligeti: Artikulation; Kagel: Transición I; Cage: Fontana Mix; Berio: Thema--Omaggio a Joyce; Xenakis: Concret P-H II; Pousseur: Rimes Pour Différentes Sources Sonores
1959) Kagel: Transición II; Cage: Indeterminacy
1960) Berio: Differences; Gerhard: Collages; Maxfield: Night Music; Ashley: The Fourth of July; Takemitsu: Water Music; Xenakis: Orient-Occident III

By 1960 the evolution of the tape studio was progressing dramatically. In Europe the
institutional support only increased and saw a mutual interest arise from both the
broadcast centers and from academia. For instance, it was in 1960 that the electronic
music studio at the Philips research labs was transferred to the Institute of Sonology at the
University of Utrecht. While in the United States it was always the universities that
established serious electronic music facilities, that situation was problematic for certain
composers who resisted the institutional milieu. Composers such as Gordon Mumma and
Robert Ashley had been working independently with tape music since 1956 by gathering
together their own technical resources. Other composers who were interested in using
electronics found that the tape medium was unsuited to their ideas. John Cage, for
instance, came to reject the whole aesthetic that accompanied tape composition as
incompatible with his philosophy of indeterminacy and live performance. Some
composers began to seek out other technical solutions in order to specify more precise
compositional control than the tape studio could provide them. It was into this climate of
shifting needs that a variety of new electronic devices emerged. The coming of the 1960's
saw a gradual cultural revolution which was co-synchronous with a distinct acceleration of
new media technologies. While the invention of the transistor in 1948 at Bell Laboratories
had begun to impact electronic manufacturing, it was during the early 1960's that major
advances in electronic design took shape. The subsequent innovations and their impact
upon electronic music were multifold and any understanding of them must be couched in
separate categories for the sake of convenience. The categories to be delineated are 1) the
emergence of the voltage-controlled analog synthesizer; 2) the evolution of computer
music; 3) live electronic performance practice; and 4) the explosion of multi-media.
However, it is important that the reader appreciate that the technical categories under
discussion were never exclusive but in fact interpenetrated freely in the compositional and
performance styles of musicians. It is also necessary to point out that any characterization
of one form of technical means as superior to another (i.e. computers versus synthesizers)
is not intentional. It is the author's contention that the very nature of the symbiosis
between machine and artist is such that each instrument, studio facility, or computer
program yields its own working method and unique artistic product. Preferences between
technological resources emerge from a match between a certain machine and the
imaginative intent of an artist, and not from qualities that are hierarchically germane to the
history of technological innovation. Claims for technological efficiency may be relevant to
a very limited context but are ultimately absurd when viewed from a broader perspective of
actual creative achievement.

1) The Voltage-Controlled Analog Synthesizer

A definition:
Unfortunately the term "synthesizer" is a gross misnomer. Since there is nothing synthetic
about the sounds generated from this class of analog electronic instruments, and since
they do not "synthesize" other sounds, the term is more the result of a conceptual
confusion emanating from industrial nonsense about how these instruments "imitate"
traditional acoustic ones. However, since the term has stuck, becoming progressively
more ingrained over the years, I will use the term for the sake of convenience. In reality the
analog voltage-controlled synthesizer is a collection of waveform and noise generators,
modifiers (such as filters, ring modulators, amplifiers), mixers and control devices
packaged in modular or integrated form. The generators produce an electronic signal
which can be patched through the modifiers and into a mixer or amplifier where it is made
audible through loudspeakers. This sequence of interconnections constitutes a signal path
which is determined by means of patch cords, switches, or matrix pinboards. Changes in
the behaviors of the devices (such as pitch or loudness) along the signal path are
controlled from other devices which produce control voltages. These control voltage
sources can be a keyboard, a ribbon controller, a random voltage source, an envelope
generator or any other compatible voltage source. The story of the analog "synthesizer" has
no single beginning. In fact, its genesis is an excellent example of how a good idea often
emerges simultaneously in different geographic locations to fulfill a generalized need. In
this case the need was to consolidate the various electronic sound generators, modifiers
and control devices distributed in fairly bulky form throughout the classic tape studio. The
reason for doing this was quite straightforward: to provide a personal electronic system to
individual composers that was specifically designed for music composition and/or live
performance, and which had the approximate technical capability of the classic tape
studio at a lower cost. The geographic locales where this simultaneously occurred were
the east coast of the United States, San Francisco, Rome and Australia. The concept of
modularity usually associated with the analog synthesizer must be credited to Harald
Bode, who in 1960 completed the construction of his modular sound modification system.
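The modular signal path defined above — a generator patched through a modifier whose behavior is shaped by a control voltage — can be modeled as a toy program. The module names and parameters here are illustrative assumptions, not any particular instrument's design:

```python
import math

RATE = 1000  # a deliberately low sample rate for this toy model

def oscillator(freq_hz, n):
    """Waveform generator: the signal source at the head of the patch."""
    return [math.sin(2 * math.pi * freq_hz * i / RATE) for i in range(n)]

def envelope(attack, release, n):
    """Control-voltage source: a linear attack/sustain/release shape in 0..1."""
    env = []
    for i in range(n):
        if i < attack:
            env.append(i / attack)            # rising attack segment
        elif i > n - release:
            env.append(max(0.0, (n - i) / release))  # falling release segment
        else:
            env.append(1.0)                   # sustain
    return env

def vca(signal, control):
    """Voltage-controlled amplifier: the control voltage scales the audio signal."""
    return [s * c for s, c in zip(signal, control)]

# Patch: oscillator -> VCA, with the envelope generator as the control voltage,
# the software equivalent of two patch cords.
note = vca(oscillator(110.0, 500), envelope(50, 100, 500))
```

The point of the sketch is the separation the text describes: one chain carries audio, the other carries control, and the patch is simply which outputs feed which inputs.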
In many ways this device predicted the more concise and powerful modular synthesizers
that began to be designed in the early 1960's and consisted of a ring modulator, envelope
follower, tone-burst-responsive envelope generator, voltage-controlled amplifier, filters,
mixers, pitch extractor, comparator and frequency divider, and a tape loop repeater. This
device may have had some indirect influence on Robert Moog but the idea for his modular
synthesizer appears to have evolved from another set of circumstances. In 1963, Moog
was selling transistorized Theremins in kit form from his home in Ithaca, New York. Early in
1964 the composer Herbert Deutsch was using one of these instruments and the two
began to discuss the application of solid-state technology to the design of new
instruments and systems. These discussions led Moog to complete his first prototype of a
modular electronic music synthesizer later that year. By 1966 the first production model
was available from the new company he had formed to produce this instrument. The first
systems which Moog produced were principally designed for studio applications and were
generally large modular assemblages that contained voltage-controlled oscillators, filters,
voltage-controlled amplifiers, envelope generators, and a traditional style keyboard for
voltage-control of the other modules. Interconnection between the modules was achieved
through patch cords. By 1969 Moog saw the necessity for a smaller portable instrument
and began to manufacture the Mini Moog, a concise version of the studio system that
contained an oscillator bank, filter, mixer, VCA and keyboard. As an instrument designer
Moog was always a practical engineer. His basically commercial but egalitarian philosophy
is best exemplified by some of the advertising copy which accompanied the Mini Moog in
1969 and resulted in its becoming the most widely used synthesizer in the "music
industry": "R.A. Moog, Inc. built its first synthesizer components in 1964. At that time, the
electronic music synthesizer was a cumbersome laboratory curiosity, virtually unknown to
the listening public. Today, the Moog synthesizer has proven its indispensability through its
widespread acceptance. Moog synthesizers are in use in hundreds of studios maintained
by universities, recording companies, and private composers throughout the world.
Dozens of successful recordings, film scores, and concert pieces have been realized on
Moog synthesizers. The basic synthesizer concept as developed by R.A. Moog, Inc., as well
as a large number of technological innovations, have literally revolutionized the
contemporary musical scene, and have been instrumental in bringing electronic music
into the mainstream of popular listening. In designing the Mini Moog, R. A. Moog engineers
talked with hundreds of musicians to find out what they wanted in a performance
synthesizer. Many prototypes were built over the past two years, and tried out by
musicians in actual live-performance situations. Mini Moog circuitry is a combination of our
time-proven and reliable designs with the latest developments in technology and
electronic components. The result is an instrument which is applicable to studio
composition as much as to live performance, to elementary and high school music
education as much as to university instruction, to the demands of commercial music as
much as to the needs of the experimental avant garde. The Mini Moog offers a truly unique
combination of versatility, playability, convenience, and reliability at an eminently
reasonable price." In contrast to Moog's industrial stance, the rather counter-cultural
design philosophy of Donald Buchla and his voltage-controlled synthesizers can partially
be attributed to the geographic locale and cultural circumstances of their genesis. In 1961
San Francisco was beginning to emerge as a major cultural center with several vanguard
composers organizing concerts and other performance events. Morton Subotnick was
starting his career in electronic music experimentation, as were Pauline Oliveros, Ramon
Sender and Terry Riley. A primitive studio had been started at the San Francisco
Conservatory of Music by Sender where he and Oliveros had begun a series of
experimental music concerts. In 1962 this equipment and other resources from electronic
surplus sources were pooled together by Sender and Subotnick to form the San Francisco
Tape Music Center which was later moved to Mills College in 1966. Because of the severe
limitations of the equipment, Subotnick and Sender sought out the help of a competent
engineer in 1962 to realize a design they had concocted for an optically-based sound
generating instrument. After a few failures at hiring an engineer they met Donald Buchla
who realized their design but subsequently convinced them that this was the wrong
approach for solving their equipment needs. Their subsequent discussions resulted in the
concept of a modular system. Subotnick describes their idea in the following terms: "Our
idea was to build the black box that would be a palette for composers in their homes. It
would be their studio. The idea was to design it so that it was like an analog computer. It
was not a musical instrument but it was modular...It was a collection of modules of
voltage-controlled envelope generators and it had sequencers in it right off the bat...It was a
collection of modules that you would put together. There were no two systems the same
until CBS bought it...Our goal was that it should be under $400 for the entire instrument
and we came very close. That's why the original instrument I fundraised for was under
$500." Buchla's design approach differed markedly from Moog's. Right from the start Buchla
rejected the idea of a "synthesizer" and has resisted the word ever since. He never wanted
to "synthesize" familiar sounds but rather emphasized new timbral possibilities. He
stressed the complexity that could arise out of randomness and was intrigued with the
design of new control devices other than the standard keyboard. He summarizes his
philosophy and distinguishes it from Moog's in the following statement: "I would say that
philosophically the prime difference in our approaches was that I separated sound and
structure and he didn't. Control voltages were interchangeable with audio. The advantage
of that is that he required only one kind of connector and that modules could serve more
than one purpose. There were several drawbacks to that kind of general approach, one of
them being that a module designed to work in the structural domain at the same time as
the audio domain has to make compromises. DC offset doesn't make any difference in the
sound domain but it makes a big difference in the structural domain, whereas harmonic
distortion makes very little difference in the control area but it can be very significant in the
audio areas. You also have a matter of just being able to discern what's happening in a
system by looking at it. If you have a very complex patch, it's nice to be able to tell what
aspect of the patch is the structural part of the music versus what is the signal path and so
on. There's a big difference in whether you deal with linear versus exponential functions at
the control level and that was a very inhibiting factor in Moog's more general approach.
Uncertainty is the basis for a lot of my work. One always operates somewhere between the
totally predictable and the totally unpredictable and to me the "source of uncertainty", as
we called it, was a way of aiding the composer. The predictabilities could be highly defined
or you could have a sequence of totally random numbers. We had voltage control of the
randomness and of the rate of change so that you could randomize the rate of change. In
this way you could make patterns that were of more interest than patterns that are totally
random." While the early Buchla instruments contained many of the same modular
functions as the Moog, they also included a number of unique devices such as random
control voltage sources, sequencers and voltage-controlled spatial panners. Buchla has
maintained his unique design philosophy over the intervening years producing a series of
highly advanced instruments often incorporating hybrid digital circuitry and unique control
interfaces. The other major voltage-controlled synthesizers to arise at this time (1964) were
the Synket, a highly portable instrument built by Paul Ketoff, and a unique machine
designed by Tony Furse in Australia. According to composer Joel Chadabe, the Synket
resulted from discussions between himself, Otto Luening and John Eaton while these
composers were in residence in Rome. Chadabe had recently inspected the
developmental work of Robert Moog and conveyed this to Eaton and Luening. The engineer
Paul Ketoff was enlisted to build a performance oriented instrument for Eaton who
subsequently became the virtuoso on this small synthesizer, using it extensively in
subsequent years. The machine built by Furse was the initial foray into electronic
instrument design by this brilliant Australian engineer. He later became the principal figure
in the design of some of the earliest and most sophisticated digital synthesizers of the
1970's. After these initial efforts, a number of other American designers and manufacturers
followed the lead of Buchla and Moog. One of the most successful was the Arp Synthesizer
built by Tonus, Inc. with design innovations by the team of Dennis Colin and David Friend.
The studio version of the Arp was introduced in 1970 and basically imitated modular
features of the Moog and Buchla instruments. A year later they introduced a smaller
portable version which included a preset patching scheme that simplified the instrument's
function for the average pop-oriented performing musician. Other manufacturers included
EML, makers of the ElectroComp, a small synthesizer oriented to the educational market;
Oberheim, maker of one of the earliest polyphonic synthesizers; muSonics' Sonic V Synthesizer;
PAIA, makers of a synthesizer in kit form; Roland; Korg; and the highly sophisticated line of
modular analog synthesizer systems designed and manufactured by Serge Tcherepnin and
referred to as Serge Modular Music Systems. In Europe the major manufacturer was
undoubtedly EMS, a British company founded by its chief designer Peter Zinovieff. EMS
built the Synthi 100, a large integrated system which introduced a matrix-pinboard
patching system, and a small portable synthesizer based on similar design principles
initially called the Putney but later modified into the Synthi A or Portabella. This latter
instrument became very popular with a number of composers who used it in live
performance situations. One of the more interesting footnotes to this history of the analog
synthesizer is the rather problematic relationship that many of the designers have had with
commercialization and the subsequent solution of manufacturing problems. While the
commercial potential for these instruments became evident very early on in the 1960's, the
different aesthetic and design philosophies of the engineers demanded that they deal with
this realization in different ways. Buchla, who early on got burnt by larger corporate
interests, has dealt with the burden of marketing by essentially remaining a cottage
industry, assembling and marketing his instruments from his home in Berkeley, California.
In the case of Moog, who as a fairly competent businessman grew a small business in his
home into a distinctly commercial endeavor, even he ultimately left Moog Music in 1977,
after the company had been acquired by two larger corporations, to pursue his own design
interests. It is important to remember that the advent of the analog voltage-controlled
synthesizer occurred within the context of the continued development of the tape studio
which now included the synthesizer as an essential part of its new identity as the
electronic music studio. It was estimated in 1968 that 556 non-private electronic music
studios had been established in 39 countries. An estimated 5,140 compositions existed in
the medium by that time. Some of the landmark voltage-controlled "synthesizer"
compositions of the 1960's include works created with the "manufactured" machines of
Buchla and Moog but other devices were certainly also used extensively. Most of these
works were tape compositions that used the synthesizer as a resource. The following list
includes a few of the representative tape compositions and works for tape with live
performers made during the 1960's with synthesizers and other sound sources:

1960) Stockhausen: Kontakte; Mache: Volumes
1961) Berio: Visage; Dockstader: Two Fragments From Apocalypse
1962) Xenakis: Bohor I; Philippot: Étude III; Parmegiani: Danse
1963) Bayle: Portraits de l'Oiseau-Qui-N'existe-Pas; Nordheim: Epitaffio
1964) Babbitt: Ensembles for Synthesizer; Brün: Futility; Nono: La Fabbrica Illuminata
1965) Gaburo: Lemon Drops; Mimaroglu: Agony; Davidovsky: Synchronisms No. 3
1966) Oliveros: I of IV; Druckman: Animus I
1967) Subotnick: Silver Apples of the Moon; Eaton: Concert Piece for Syn-Ket and Symphony Orchestra; Koenig: Terminus X; Smiley: Eclipse
1968) Carlos: Switched-On Bach; Gaburo: Dante's Joynte; Nono: Contrappunto dialettico alla mente
1969) Wuorinen: Time's Encomium; Ferrari: Music Promenade
1970) Arel: Stereo Electronic Music No. 2; Lucier: I am sitting in a room

2) Computer Music

A distinction:
Analog refers to systems where a physical quantity is represented by an analogous
physical quantity. The traditional audio recording chain demonstrates this quite well since
each stage of translation throughout constitutes a physical system that is analogous to the
previous one in the chain. The fluctuations of air molecules which constitute sound are
translated into fluctuations of electrons by a microphone diaphragm. These electrons are
then converted via a bias current of a tape recorder into patterns of magnetic particles on a
piece of tape. Upon playback the process can be reversed resulting in these fluctuations of
electrons being amplified into fluctuations of a loudspeaker cone in space. The final
displacement of air molecules results in an analogous representation of the original
sounds that were recorded. Digital refers to systems where a physical quantity is
represented through a counting process. In digital computers this counting process
consists of a two-digit binary coding of electrical on-off switching states. In computer
music the resultant digital code represents the various parameters of sound and its
organization. As early as 1954, the composer Iannis Xenakis had used a computer to aid in
calculating the velocity trajectories of glissandi for his orchestral composition Metastasis.
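The counting process just described can be made concrete with a short illustrative sketch. Every particular here — the sample rate, the 8-bit resolution, the function name — is a toy choice of my own for illustration, not a description of any historical system: an "analog" sine wave is sampled at discrete instants, and each amplitude is counted as a binary integer.

```python
import math

# Illustrative sketch of digital representation: sampling an "analog"
# signal and coding each sample as an 8-bit binary number.
# (All values are toy choices, not from any historical system.)
SAMPLE_RATE = 8000        # samples per second
BITS = 8                  # resolution of each sample
LEVELS = 2 ** BITS        # 256 discrete counting levels

def sample_and_quantize(freq_hz, duration_s):
    """Sample a sine wave and count each amplitude as an integer code."""
    n_samples = int(SAMPLE_RATE * duration_s)
    samples = []
    for n in range(n_samples):
        analog = math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE)  # -1.0..1.0
        digital = round((analog + 1) / 2 * (LEVELS - 1))            # 0..255
        samples.append(digital)
    return samples

codes = sample_and_quantize(440, 0.001)        # a few samples of an A-440 tone
binary = [format(s, "08b") for s in codes]     # the on-off switching states
```

Reversing the mapping (rescaling each code back to an amplitude) recovers an approximation of the original waveform — the digital counterpart of the analog playback chain described above.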
Since his background included a strong mathematical education, this was a natural
development in keeping with his formal interest in combining mathematics and music. The
search that had begun earlier in the century for new sounds and organizing principles that
could be mathematically rationalized had become a dominant issue by the mid-1950's.
Serial composers like Milton Babbitt had been dreaming of an appropriate machine to
assist in complex compositional organization. While the RCA Music Synthesizer fulfilled
much of this need for Babbitt, other composers desired even more machine-assisted
control. Lejaren Hiller, a former student of Babbitt, saw the compositional potential in the
early generation of digital computers and generated the Illiac Suite for string quartet as a
demonstration of this promise in 1956. Xenakis continued to develop, in a much more
sophisticated manner, his unique approach to computer-assisted instrumental
composition. Between 1956 and 1962 he composed a number of works, such as Morsima-
Amorsima, using the computer as a mathematical aid for finalizing calculations that were
applied to instrumental scores. Xenakis stated that his use of probabilistic theories and
the IBM 7090 computer enabled him to advance "...a form of composition which is not the
object in itself, but an idea in itself, that is to say, the beginnings of a family of
compositions." The early vision of why computers should be applied to music was
elegantly expressed by the scientist Heinz von Foerster: "Accepting the possibilities of
extensions in sounds and scales, how do we determine the new rules of synchronism and
succession? It is at this point, where the complexity of the problem appears to get out of
hand, that computers come to our assistance, not merely as ancillary tools but as
essential components in the complex process of generating auditory signals that fulfill a
variety of new principles of a generalized aesthetics and are not confined to conventional
methods of sound generation by a given set of musical instruments or scales nor to a given
set of rules of synchronism and succession based upon these very instruments and
scales. The search for those new principles, algorithms, and values is, of course, in itself
symbolic for our times." The actual use of the computer to generate sound first occurred at
Bell Labs where Max Mathews used a primitive digital to analog converter to demonstrate
this possibility in 1957. Mathews became the central figure at Bell Labs in the technical
evolution of computer generated sound research and compositional programming with
computer over the next decade. In 1961 he was joined by the composer James Tenney who
had recently graduated from the University of Illinois where he had worked with Hiller and
Gaburo to finish a major theoretical thesis entitled Meta/Hodos. For Tenney, the Bell Labs
residency was a significant opportunity to apply his advanced theoretical thinking
(involving the application of theories from Gestalt Psychology to music and sound
perception) into the compositional domain. From 1961 to 1964 he completed a series of
works which include what are probably the first serious compositions using the MUSIC IV
program of Max Mathews and Joan Miller and therefore the first serious compositions using
computer-generated sounds: Noise Study, Four Stochastic Studies, Dialogue, Stochastic
String Quartet, Ergodos I, Ergodos II, and Phases. In the following extraordinarily candid
statement, Tenney describes his pioneering efforts at Bell Labs: "I arrived at the Bell
Telephone Laboratories in September, 1961, with the following musical and intellectual
baggage: 1. numerous instrumental compositions reflecting the influence of Webern and
Varèse; 2. two tape-pieces, produced in the Electronic Music Laboratory at the University
of Illinois - both employing familiar, 'concrete' sounds, modified in various ways; 3. a long
paper ("Meta/Hodos, A Phenomenology of 20th Century Music and an Approach to the
Study of Form", June, 1961), in which a descriptive terminology and certain structural
principles were developed, borrowing heavily from Gestalt psychology. The central point of
the paper involves the clang, or primary aural Gestalt, and basic laws of perceptual
organization of clangs, clang-elements, and sequences (a high-order Gestalt-unit
consisting of several clangs). 4. A dissatisfaction with all the purely synthetic electronic
music that I had heard up to that time, particularly with respect to timbre; 5. ideas
stemming from my studies of acoustics, electronics and - especially - information theory,
begun in Hiller's class at the University of Illinois; and finally 6. a growing interest in the
work and ideas of John Cage. I leave in March, 1964, with: 1. six tape-compositions of
computer-generated sounds - of which all but the first were also composed by means of
the computer, and several instrumental pieces whose composition involved the computer
in one way or another; 2. a far better understanding of the physical basis of timbre, and a
sense of having achieved a significant extension of the range of timbres possible by
synthetic means; 3. a curious history of renunciations of one after another of the traditional
attitudes about music, due primarily to gradually more thorough assimilation of the
insights of John Cage. In my two-and-a-half years here I have begun many more
compositions than I have completed, asked more questions than I could find answers for,
and perhaps failed more often than I have succeeded. But I think it could not have been
much different. The medium is new and requires new ways of thinking and feeling. Two
years are hardly enough to have become thoroughly acclimated to it, but the process has
at least begun." In 1965 the research at Bell Labs resulted in the successful reproduction
of an instrumental timbre: a trumpet waveform was recorded, converted into a
numerical representation, and, when converted back into analog form, was deemed virtually
indistinguishable from its source. This accomplishment by Mathews, Miller and the French
composer Jean-Claude Risset marks the beginning of the recapitulation of the traditional
representationist versus modernist dialectic in the new context of digital computing. When
contrasted against Tenney's use of the computer to obtain entirely novel waveforms and
structural complexities, the use of such immense technological resources to reproduce
the sound of a trumpet appeared to many composers to be a gigantic exercise in
misplaced concreteness. When seen in the subsequent historical light of the recent
breakthroughs of digital recording and sampling technologies that can be traced back to
this initial experiment, the original computing expense certainly appears to have been
vindicated. However, the dialectic of representationism and modernism has only become
more problematic in the intervening years. The development of computer music has from
its inception been so critically linked to advances in hardware and software that its
practitioners have, until recently, constituted a distinct class of specialized enthusiasts
within the larger context of electronic music. The challenge that early computers and
computing environments presented to creative musical work was immense. In retrospect,
the task of learning to program and pit one's musical intelligence against the machine
constraints of those early days now takes on an almost heroic air. In fact, the development
of computer music composition is directly linked to the evolution of greater interface
transparency, such that the task of composition could be freed from the other arduous
tasks associated with programming. The first stage in this evolution was the design of
specific music-oriented programs such as MUSIC IV. The 1960's saw gradual additions to
these languages such as MUSIC IVB (a greatly expanded assembly language version by
Godfrey Winham and Hubert S. Howe); MUSIC IVBF (a FORTRAN version of MUSIC IVB); and
MUSIC360 (a music program written for the IBM 360 computer by Barry Vercoe). The
composer Charles Dodge wrote during this time about the intent of these music programs
for sound synthesis: "It is through simulating the operations of an ideal electronic music
studio with an unlimited amount of equipment that a digital computer synthesizes sound.
The first computer sound synthesis program that was truly general purpose (i.e., one that
could, in theory, produce any sound) was created at the Bell Telephone Laboratories in the
late 1950's. A composer using such a program must typically provide: (1) Stored functions
which will reside in the computer's memory representing waveforms to be used by the unit
generators of the program. (2) "Instruments" of his own design which logically interconnect
these unit generators. (Unit generators are subprograms that simulate all the sound
generation, modification, and storage devices of the ideal electronic music studio.) The
computer "instruments" play the notes of the composition. (3) Notes may correspond to
the familiar "pitch in time" or, alternatively, may represent some convenient way of dividing
the time continuum." By the end of the 1960's computer sound synthesis research saw a
large number of new programs in operation at a variety of academic and private
institutions. The medium, however, was still quite tedious to work in and, regardless
of the increased sophistication in control, its final product remained a tape.
Some composers had taken the initial steps towards using the computer for realtime
performance by linking the powerful control functions of the digital computer to the sound
generators and modifiers of the analog synthesizer. We will deal with the specifics of this
development in the next section. From its earliest days the use of the computer in music
can be divided into two fairly distinct categories, even though some compositions blur
them: 1) the use of the computer predominantly as a compositional device, to generate
structural relationships that could not be imagined otherwise; and 2) its use to generate new synthetic
waveforms and timbres. A few of the pioneering works of computer music from 1961 to 1971 are the following:

1961) Tenney: Noise Study
1962) Tenney: Four Stochastic Studies
1963) Tenney: Phases
1964) Randall: Quartets in Pairs
1965) Randall: Mudgett
1966) Randall: Lyric Variations
1967) Hiller: Cosahedron
1968) Brün: Indefraudibles; Risset: Computer Suite from Little Boy
1969) Dodge: Changes; Risset: Mutations I
1970) Dodge: Earth's Magnetic Field
1971) Chowning: Sabelithe

3) Live Electronic Performance Practice

A definition: For the sake of convenience I will define live electronic music as that in which
electronic sound generation, processing and control predominantly occurs in realtime
during a performance in front of an audience. The idea that live
performance with electronic sounds should have a special status may seem ludicrous to
many readers. Obviously music has always been a performance art and the primary usage
of electronic musical instruments before 1950 was almost always in a live performance
situation. However it must be remembered that the defining of electronic music as its own
genre really came into being with the tape studios of the 1950's and that the beginnings of
live electronic performance practice in the 1960's was in large part a reaction to both a
growing dissatisfaction with the perceived sterility of tape music in performance (sound
emanating from loudspeakers and little else) and the emergence of the various
philosophical influences of chance, indeterminacy, improvisation and social
experimentation. The issue of combining tape with traditional acoustic instruments had been a
major one ever since Maderna, Varèse, Luening and Ussachevsky first introduced such
works in the 1950's. A variety of composers continued to address this problem with
increasing vigor into the 1960's. For many it was merely a means for expanding the timbral
resources of the orchestral instruments they had been writing for, while for others it was a
specific compositional concern that dealt with the expansion of structural aspects of
performance in physical space. For instance, Mario Davidovsky and Kenneth Gaburo have
both written a series of compositions which address the complex contrapuntal dynamics
between live performers and tape: Davidovsky's Synchronisms 1-8 and Gaburo's
Antiphonies 1-10. These works demand a wide variety of combinations of tape channels,
instruments and voices in live performance contexts. In these and similar works by other
composers the tape sounds are derived from all manner of sources and techniques
including computer synthesis. The repertory for combinations of instruments and tape
grew to immense international proportions during the 1960's and included works from
Australia, North America, South America, Western Europe, Eastern Europe, Japan, and the
Middle East. An example of how one composer viewed the dynamics of relationship
between tape and performers is stated by Kenneth Gaburo: "On a fundamental level
Antiphony III is a physical interplay between live performers and two speaker systems
(tape). In performance, 16 soloists are divided into 4 groups, with one soprano, alto, tenor,
and bass in each. The groups are spatially separated from each other and from the
speakers. Antiphonal aspects develop between and among the performers within each
group, between and among groups, between the speakers, and between and among the
groups and speakers. On another level Antiphony III is an auditory interplay between tape
and live bands. The tape band may be divided into 3 broad compositional classes: (1)
quasi-duplication of live sounds, (2) electro-mechanical transforms of these beyond the
capabilities of live performers, and (3) movement into complementary acoustic regions of
synthesized electronic sound. Incidentally, I term the union of these classes electronics,
as distinct from tape content which is pure concrete-mixing or electronic sound synthesis.
The live band encompasses a broad spectrum from normal singing to vocal transmission
having electronically associated characteristics. The total tape-live interplay, therefore, is
the result of discrete mixtures of sound, all having the properties of the voice as a common
point of departure." Another important aesthetic shift that occurred within the tape studio
environment was the desire to compose onto tape using realtime processes that did not
require subsequent editing. Pauline Oliveros and Richard Maxfield were early practitioners
of innovative techniques that allowed for live performance in the studio. Oliveros
composed I of IV (1966) in this manner using tape delay and mixer feedback systems.
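The tape-delay-with-feedback technique can be simulated digitally in a few lines. This is a generic sketch of the principle only — the delay length and feedback gain are arbitrary toy values, not a reconstruction of Oliveros's actual studio configuration:

```python
# Illustrative simulation of a tape delay whose output is fed back
# into its input through a mixer, so a single event repeats and decays.
# (Toy values; not Oliveros's actual setup.)
DELAY_SAMPLES = 4        # tape-head spacing, in samples
FEEDBACK = 0.5           # mixer feedback gain (< 1.0, or the loop runs away)

def tape_delay(dry, delay, feedback):
    """Mix each input sample with the delayed, attenuated output."""
    out = []
    for n, x in enumerate(dry):
        delayed = out[n - delay] if n >= delay else 0.0
        out.append(x + feedback * delayed)
    return out

impulse = [1.0] + [0.0] * 15               # one event fed into the system
echoes = tape_delay(impulse, DELAY_SAMPLES, FEEDBACK)
# The event recurs every DELAY_SAMPLES samples at half the previous level:
# 1.0, 0.5, 0.25, 0.125, ...
```

Because the feedback gain is below unity, each recurrence is quieter than the last, producing the gradually decaying repetitions characteristic of such realtime tape-delay pieces.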
Other composers discovered synthesizer patches that would allow for autonomous
behaviors to emerge from the complex interactions of voltage-control devices. The output
from these systems could be recorded as versions on tape or amplified in live performance
with some performer modification. Entropical Paradise (1969) by Douglas Leedy is a
classic example of such a composition for the Buchla Synthesizer. The largest and most
innovative category of live electronic music to come to fruition in the 1960's was the use of
synthesizers and custom electronic circuitry to both generate sounds and process others,
such as voice and/or instruments, in realtime performance. The most simplistic example
of this application extends back to the very first use of electronic amplification by the early
instruments of the 1930's. During the 1950's John Cage and David Tudor used
microphones and amplification as compositional devices to emphasize the small sounds
and resonances of the piano interior. In 1960 Cage extended this idea to the use of
phonograph cartridges and contact microphones in Cartridge Music. The work focused
upon the intentional amplification of small sounds revealed through an indeterminate
process. Cage described the aural product: "The sounds which result are noises, some
complex, others extremely simple such as amplifier feed-back, loud-speaker hum, etc. (All
sounds, even those ordinarily thought to be undesirable, are accepted in this music.)" For
Cage the abandonment of tape music and the move toward live electronic performance
was an essential outgrowth of his philosophy of indeterminacy. Cage's aesthetic position
necessitated the theatricality and unpredictability of live performance since he desired a
circumstance where individual value judgements would not intrude upon the revelation
and perception of new possibilities. Into the 1960's his fascination for electronic sounds in
indeterminate circumstances continued to evolve and become inclusive of an ethical
argument for the appropriateness of artists working with technology as critics and mirrors
of their cultural environment. Cage composed a large number of such works during the
1960's, often enlisting the inspired assistance of like-minded composer/performers such
as David Tudor, Gordon Mumma, David Behrman, and Lowell Cross. Among the most
famous of these works was the series of compositions entitled Variations, of which there
were eight by the end of the decade. These works were really highly complex and
indeterminate happenings that often used a wide range of electronic techniques and
sound sources. The composer/performer David Tudor was the musician most closely
associated with Cage during the 1960's. As a brilliant concert pianist during the 1950's he
had championed the works of major avant-garde composers and then shifted his
performance activities to electronics during the 1960's, performing other composers' live-
electronic works as well as his own. His most famous composition, Rainforest, and its
multifarious performances since it was conceived in 1968, almost constitute a musical
sub-culture of electronic sound research. The work requires the fabrication of special
resonating objects and sculptural constructs which serve as one-of-a-kind loudspeakers
when transducers are attached to them. The constructed "loudspeakers" function to
amplify and produce both additive and subtractive transformations of source sounds such
as basic electronic waveforms. In more recent performances the sounds have included a
wide selection of prerecorded materials. While live electronic music in the 1960's was
predominantly an American genre, activity in Europe and Japan also began to emerge. The
foremost European composer to embrace live electronic techniques in performance was
Karlheinz Stockhausen. By 1964 he was experimenting with the straightforward electronic
filtering of an amplified tam-tam in Mikrophonie I. Subsequent works for a variety of
instrumental ensembles and/or voices, such as Prozession or Stimmung, explored very
basic but ingenious use of amplification, filtering and ring modulation techniques in
realtime performance. In a statement about the experimentation that led to these works,
Stockhausen conveys a clear sense of the spirit of exploration into sound itself that
pervaded much of the live electronic work of the 1960's: "Last summer I made a few
experiments by activating the tamtam with the most disparate collection of materials I
could find about the house --glass, metal, wood, rubber, synthetic materials-- at the same
time linking up a hand-held microphone (highly directional) to an electric filter and
connecting the filter output to an amplifier unit whose output was audible through
loudspeakers. Meanwhile my colleague Jaap Spek altered the settings of the filter and
volume controls in an improvisatory way. At the same time we recorded the results on
tape. This tape-recording of our first experiences in "microphony" was a discovery of the
greatest importance for me. We had come to no sort of agreement: I used such of the
materials I had collected as I thought best and listened-in to the tam-tam surface with the
microphone just as a doctor might listen-in to a body with his stethoscope; Spek reacted
equally spontaneously to what he heard as the product of our joint activity." In many ways
the evolution of live electronic music parallels the increasing technological sophistication
of its practitioners. In the early 1960's most of the works within this genre were concerned
with fairly simple realtime processing of instrumental sounds and voices. Like
Stockhausen's work from this period this may have been as basic as the manipulation of a
live performer through audio filters, tape loops or the performer's interaction with acoustic
feedback. Robert Ashley's Wolfman (1964) is an example of the use of high amplification of
voice to achieve feedback that alters the voice and a prerecorded tape. By the end of the
decade a number of composers had technologically progressed to designing their own
custom circuitry. For example, Gordon Mumma's Mesa (1966) and Hornpipe (1967) are
both examples of instrumental pieces that use custom-built electronics capable of semi-
automatic response to the sounds generated by the performer or resonances of the
performance space. One composer whose work illustrates a continuity of gradually
increasing technical sophistication is David Behrman. From fairly rudimentary uses of
electronic effects in the early 1960's his work progressed through various stages of live
electronic complexification to compositions like Runthrough (1968), where custom-built
circuitry and a photoelectric sound-distribution matrix are activated by performers with
flashlights. This trend toward new performance situations in which the technology
functioned as structurally intrinsic to the composition continued to gain favor. Many
composers began to experiment with a vast array of electronic control devices and unique
sound sources which often required audio engineers and technicians to function as
performing musicians, and musicians to be technically competent. Since the number of
such works proliferated rapidly, a few examples of the range of activities during the 1960's
must suffice. In 1965, Alvin Lucier presented his Music for Solo Performer, which used
amplified brainwave signals to articulate the sympathetic resonances of an orchestra of
percussion instruments. John Mizelle's Photo Oscillations (1969) used multiple lasers as
light sources through which the performers walked in order to trigger a variety of photo-cell
activated circuits. Pendulum Music (1968) by Steve Reich simply used microphones
suspended over loudspeakers from long cables. The microphones were set in motion and
allowed to generate patterns of feedback as they passed over the loudspeakers. For these
works, and many others like them, the structural dictates which emerged out of the nature
of the chosen technology also defined a particular composition as a unique environmental
and theatrical experience. Co-synchronous with the technical and aesthetic advances in
live performance just outlined, the use of digital computers in
live performance began slowly to emerge in the late 1960's. The most comprehensive
achievement at marrying digital control sophistication to the realtime sound generation
capabilities of the analog synthesizer was probably the Sal-Mar Construction (1969) of
Salvatore Martirano. This hybrid system evolved over several years with the help of many
colleagues and students at the University of Illinois. Considered by Martirano to be a
composition unto itself, the machine consisted of a motley assortment of custom-built
analog and digital circuitry controlled from a completely unique interface and distributed
through multiple channels of loudspeakers suspended throughout the performance space.
Martirano describes his work as follows: The Sal-Mar Construction was designed, financed
and built in 1969-1972 by engineers Divilbiss, Franco, Borovec and composer Martirano
here at the University of Illinois. It is a hybrid system in which TTL logical circuits (small and
medium scale integration) drive analog modules, such as voltage-controlled oscillators,
amplifiers and filters. The SMC weighs 1500lbs crated and measures 8'x5'x3'. It can be set-
up at one end of the space with a "spider web" of speaker wire going out to 24 plexiglass
enclosed speakers that hang in a variety of patterns about the space. The speakers weigh
about 6lbs. each, and are gently mobile according to air currents in the space. A changing
pattern of sound-traffic by 4 independently controlled programs produces rich timbres that
occur as the moving source of sound causes the sound to literally bump into itself in the
air, thus effecting phase cancellation and addition of the signal. The control panel has 291
touch-sensitive set/reset switches that are patched so that a tree of diverse signal paths is
available to the performer. The output of the switch is either set 'out1' or reset 'out2'.
Further the 291 switches are multiplexed down 4 levels. The unique characteristic of the
switch is that it can be driven both manually and logically, which allows human/machine
interaction. The most innovative feature of the human/machine interface is that it allows the
user to switch from control of macro to micro parameters of the information output. This is
analogous to a zoom lens on a camera. A pianist remains at one level only, that is, on the
keys. It is possible to assign performer actions to AUTO and allow the SMC to make all
decisions. One of the major difficulties with the hybrid performance systems of the late
1960's and early 1970's was the sheer size of digital computers. One solution to this
problem was presented by Gordon Mumma in his composition Conspiracy 8 (1970). When
the piece was presented at New York's Guggenheim Museum, a remote data-link was
established to a computer in Boston which received information about the performance in
progress. In turn this computer then issued instructions to the performers and generated
sounds which were also transmitted to the performance site through the data-link. Starting in
1970 an ambitious attempt at using the new mini-computers was initiated by Ed Kobrin, a
former student and colleague of Martirano. Starting in Illinois in collaboration with engineer
Jeff Mack, and continuing at the Center for Music Experiment at the University of California,
San Diego, Kobrin designed an extremely sophisticated hybrid system (actually referred to
as Hybrid I through V) that interfaced a mini-computer to an array of voltage-controlled
electronic sound modules. As a live performance electronic instrument, its six-voice
polyphony, complexity and speed of interaction made it the most powerful realtime system
of its time. One of its versions is described by Kobrin: "The most recent system consists of
a PDP 11 computer with 16k words of core memory, dual digital cassette unit, CRT
terminal with ASCII keyboard, and a piano-type keyboard. A digital interface consisting of
interrupt modules, address decoding circuitry, 8 and 10 bit digital to analog converters
with holding registers, programmable counters and a series of tracking and status registers
is hardwired to a synthesizer. The music generated is distributed to 16 speakers creating a
controlled sound environment." Perhaps the most radical and innovative aspect of live
electronic performance practice to emerge during this time was the appearance of a new
form of collective music making. In Europe, North America and Japan several important
groups of musicians began to collaborate in collective compositional, improvisational, and
theatrical activities that relied heavily upon the new electronic technologies. Some of the
reasons for this trend were: 1) the performance demands of the technology itself which
often required multiple performers to accomplish basic tasks; 2) the improvisatory and
open-ended nature of some of the music was hospitable to, and often philosophically biased
towards, a diverse and flexible number of participants; and 3) the cultural and political
climate was particularly attuned to encouraging social experimentation. As early as 1960,
the ONCE Group had formed in Ann Arbor, Michigan. Comprising a diverse group of
architects, composers, dancers, filmmakers, sculptors and theater people, the ONCE
Group presented the annual ONCE Festival. The principal composers of this group
consisted of George Cacioppo, Roger Reynolds, Donald Scavarda, Robert Ashley and
Gordon Mumma, most of whom were actively exploring tape music and developing live
electronic techniques. In 1966 Ashley and Mumma joined forces with David Behrman and
Alvin Lucier to create one of the most influential live electronic performance ensembles,
the Sonic Arts Union. While its members would collaborate in realizing one another's
compositions, and those of other composers, the group was not concerned with
collaborative composition or improvisation like many other ensembles that had formed about
the same time. Concurrent with the ONCE Group activities were the concerts and events
presented by the participants of the San Francisco Tape Music Center such as Pauline
Oliveros, Terry Riley, Ramon Sender and Morton Subotnick. Likewise a powerful center for
collaborative activity had developed at the University of Illinois, Champaign/Urbana where
Herbert Brün, Kenneth Gaburo, Lejaren Hiller, Salvatore Martirano, and James Tenney had
been working. By the late 1960's a similarly vital academic scene had formed at the
University of California, San Diego where Gaburo, Oliveros, Reynolds and Robert Erickson
were now teaching. In Europe several innovative collectives had also formed. To perform
his own music Stockhausen had gathered together a live electronic music ensemble
consisting of Alfred Alings, Harald Boje, Peter Eötvös, Johannes Fritsch, Rolf Gehlhaar, and
Aloys Kontarsky. In 1964 an international collective called the Gruppo di Improvvisazione
Nuova Consonanza was created in Rome for performing live electronic music. Two years
later, Rome also saw the formation of Musica Elettronica Viva, one of the most radical
electronic performance collectives to advance group improvisation that often involved
audience participation. In its original incarnation the group included Allan Bryant, Alvin
Curran, John Phetteplace, Frederic Rzewski, and Richard Teitelbaum. The other major
collaborative group concerned with the implications of electronic technology was AMM in
England. Founded in 1965 by jazz musicians Keith Rowe, Lou Gare and Eddie Prévost, and
the experimental genius Cornelius Cardew, the group focused its energy into highly
eclectic but disciplined improvisations with electro-acoustic materials. In many ways the
group was an intentional social experiment the experience of which deeply informed the
subsequent Scratch Orchestra collective of Cardew's. One final category of live electronic
performance practice involves the more focused activities of the Minimalist composers of
the 1960's. These composers and their activities were involved with both individual and
collective performance activities and in large part blurred the boundaries between the
so-called "serious" avant-garde and popular music. The composer Terry Riley exemplifies
this idea quite dramatically. During the late 1960's Riley created a very popular form of solo
performance using wind instruments, keyboards and voice with tape delay systems that
grew out of his early experiments with pattern music and his growing interest in
Indian music. In 1964 the New York composer LaMonte Young formed The Theatre of
Eternal Music to realize his extended investigations into pure vertical harmonic
relationships and tunings. The ensemble consisted of string instruments, singing voices
and precisely tuned drones generated by audio oscillators. In early performances the
performers included John Cale, Tony Conrad, LaMonte Young, and Marian Zazeela.

A very brief list of significant live electronic music works of the 1960's is the following:

1960) Cage: Cartridge Music
1964) Young: The Tortoise, His Dreams and Journeys; Sender: Desert Ambulance; Ashley: Wolfman; Stockhausen: Mikrophonie I
1965) Lucier: Music for Solo Performer
1966) Mumma: Mesa
1967) Stockhausen: Prozession; Mumma: Hornpipe
1968) Tudor: Rainforest; Behrman: Runthrough
1969) Cage and Hiller: HPSCHD; Martirano: Sal-Mar Construction; Mizelle: Photo Oscillations
1970) Rosenboom: Ecology of the Skin

4) Multi-Media

The historical antecedents for mixed-media connect
multiple threads of artistic traditions as diverse as theatre, cinema, music, sculpture,
literature, and dance. Since the extreme eclecticism of this topic and the sheer volume of
activity associated with it are too vast for the focus of this essay, I will only be concerned
with a few examples of mixed-media activities during the 1960's that impacted the
electronic art and music traditions from which subsequent video experimentation
emerged. Much of the previously discussed live electronic music of the 1960's can be
placed within the mixed-media category in that the performance circumstances
demanded by the technology were intentionally theatrical or environmental. This emphasis
on how technology could help to articulate new spatial relationships and heightened
interaction between the physical senses was shared with many other artists from the
visual, theatrical and dance traditions. Many new terms arose to describe the resulting
experiments of various individuals and groups such as "happenings," "events," "action
theatre," "environments," or what Richard Kostelanetz called "The Theatre of Mixed-
Means." In many ways the aesthetic challenge and collaborative agenda of these projects
was conceptually linked to the various counter-cultural movements and social
experiments of the decade. For some artists these activities were a direct continuity from
participation in the avant-garde movements of the 1950's such as Fluxus, electronic
music, "kinetic sculpture," Abstract Expressionism and Pop Art, and for others they were a
fulfillment of ideas about the merger of art and science initiated by the 1930's Bauhaus
artists. Many of the performance groups already mentioned were engaged in mixed-media
as their principal activity. In Michigan, the ONCE Group had been preceded by the
Manifestations: Light and Sound performances and Space Theatre of Milton Cohen as early as
1956. The filmmaker Jordan Belson and Henry Jacobs organized the Vortex performances
in San Francisco the following year. Japan saw the formation of Tokyo's Group Ongaku and
Sogetsu Art Center with Kuniharu Akiyama, Toshi Ichiyanagi, Joji Yuasa, Takehisa Kosugi,
and Chieko Shiomi in the early 1960's. At the same time were the ritual oriented activities
of LaMonte Young's The Theatre of Eternal Music. The group Pulsa was particularly active
through the late sixties staging environmental light and sound works such as the Boston
Public Gardens Demonstration (1968) that used 55 xenon strobe lights placed underwater
in the garden's four-acre pond. On top of the water were placed 52 polyplanar
loudspeakers which were controlled, along with the lights, by computer and prerecorded
magnetic tape. This resulted in streams of light and sound being projected throughout the
park at high speeds. At the heart of this event was the unique Hybrid Digital/Analog Audio
Synthesizer which Pulsa designed and used in most of their subsequent performance
events. In 1962, the USCO formed as a radical collective of artists and engineers dedicated
to collective action and anonymity. Some of the artists involved were Gerd Stern, Stan
VanDerBeek, and Jud Yalkut. As Douglas Davis describes them: "USCO's leaders were
strongly influenced by McLuhan's ideas as expressed in his book Understanding Media.
Their environments--performed in galleries, churches, schools, and museums across the
United States--increased in complexity with time, culminating in multiscreen audiovisual
"worlds" and strobe environments. They saw technology as a means of bringing people
together in a new and sophisticated tribalism. In pursuit of that ideal, they lived, worked,
and created together in virtual anonymity." McLuhan also had a strong
impact upon John Cage during this period, marking a shift in his work toward a more
politically and socially engaged discourse. This shift was exemplified in two of his major
works during the 1960's which were large multi-media extravaganzas staged during
residencies at the University of Illinois in 1967 and 1969: Musicircus and HPSCHD. The
latter work was conceived in collaboration with Lejaren Hiller and subsequently used 51
computer-generated sound tapes, in addition to seven harpsichords and numerous film
projections by Ronald Nameth. Another example of a major mixed-media work composed
during the 1960's is the Teatro Probabilistico III (1968) for actors, musicians, dancers, light,
TV cameras, public and traffic conductor by the Brazilian composer Jocy de Oliveira. She
describes her work in the following terms that are indicative of a typical attitude toward
mixed media performance at that time: "This piece is an exercise in searching for total
perception leading to a global event which tends to eliminate the set role of public versus
performers through a complementary interaction. The community life and the urban space
are used for this purpose. It also includes the TV communication on a permutation of live
and video tape and a transmutation from utilitarian-camera to creative camera. The
performer is equally an actor, musician, dancer, light, TV camera/video artist or public.
They all are directed by a traffic conductor. He represents the complex contradiction of
explicit and implicit. He is a kind of military God who controls the freedom of the powers by
dictating orders through signs. He has power over everything and yet he cannot predict
everything. The performers improvise on a time-event structure, according to general
directions. The number of performers is determined by the space possibilities. It is
preferable to use a downtown pedestrian area. The conductor should be located in the
center of the performing area visible to the performers (over a platform). He should wear a
uniform representing any high rank. For the public as well as the performers this is an
exercise in searching for a total experience in complete perception." One of the most
important intellectual concerns to emerge at this time amongst most of these artists was
an explicit embracing of technology as a creative countercultural force. In addition to
McLuhan, the figure of Buckminster Fuller had a profound influence upon an entire
generation of artists. Fuller's assertion that the radical and often negative changes wrought
by technological innovation were also opportunities for proper understanding and
redirection of resources became an organizing principle for vanguard thinkers in the arts.
The need to take technology seriously as the social environment in which artists lived and
formulated critical relationships with the culture at large became formalized in projects
such as Experiments in Art and Technology, Inc. and the various festivals and events they
sponsored: Nine Evenings: Theater and Engineering; Some More Beginnings; the series of
performances presented at Automation House in New York City during the late 1960's; and
the Pepsi-Cola Pavilion for Expo 70 in Osaka, Japan. One of the participants in Expo 70,
Gordon Mumma, describes the immense complexity and sophistication that mixed-media
presentations had evolved into by that time: "The most remarkable of all multi-media
collaborations was probably the Pepsi-Cola Pavilion for Expo 70 in Osaka. This project
included many ideas distilled from previous multimedia activities, and significantly
advanced both the art and technology by numerous innovations. The Expo 70 pavilion was
remarkable for several reasons. It was an international collaboration of dozens of artists,
as many engineers, and numerous industries, all coordinated by Experiments in Art and
Technology, Inc. From several hundred proposals, the projects of twenty-eight artists and
musicians were selected for presentation in the pavilion. The outside of the pavilion was a
120-foot-diameter geodesic dome of white plastic and steel, enshrouded by an ever-
changing, artificially generated water-vapor cloud. The public plaza in front of the pavilion
contained seven man-sized, sound-emitting floats that moved slowly and changed
direction when touched. A thirty-foot polar heliostat sculpture tracked the sun and
reflected a ten-foot-diameter sunbeam from its elliptical mirror through the cloud onto the
pavilion. The inside of the pavilion consisted of two large spaces, one black-walled and
clam-shaped, the other a ninety-foot high hemispherical mirror dome. The sound and light
environment of these spaces was achieved by an innovative audio and optical system
consisting of state-of-the-art analog audio circuitry, with krypton-laser, tungsten, quartz-
iodide, and xenon lighting, all controlled by a specially designed digital computer
programming facility. The sound, light, and control systems, and their integration with the
unique hemispherical acoustics and optics of the pavilion, were controlled from a movable
console. On this console the lighting and sound had separate panels from which the
intensities, colors, and directions of the lighting, pitches, loudness, timbre, and directions
of the sound could be controlled by live performers. The sound-moving capabilities of the
dome were achieved with a rhombic grid of thirty-seven loudspeakers surrounding the
dome, and were designed to allow the movement of sounds from point, straight line,
curved, and field types of sources. The speed of movement could vary from extremely slow
to fast enough to lose the sense of motion. The sounds to be heard could be from any live,
taped, or synthesized source, and up to thirty-two different inputs could be controlled at
one time. Furthermore, it was possible to electronically modify these inputs by using eight
channels of modification circuitry that could change the pitch, loudness, and timbre in a
vast number of combinations. Another console panel contained digital circuitry that could
be programmed to automatically control aspects of the light and sound. By their
programming of this control panel, the performers could delegate any amount of the light
and sound functions to the digital circuitry. Thus, at one extreme the pavilion could be
entirely a live-performance instrument, and at the other, an automated environment. The
most important design concept of the pavilion was that it was a live-performance, multi-
media instrument. Between the extremes of manual and automatic control of so many
aspects of environment, the artist could establish all sorts of sophisticated man-machine
performance interactions."

Consolidation: the 1970's and 80's

The beginning of the 1970's
saw a continuation of most of the developments initiated in the 1960's. Activities were
extremely diverse and included all the varieties of electronic music genres previously
established throughout the 20th century. Academic tape studios continued to thrive with a
great deal of unique custom-built hardware being conceived by engineers, composers and
students. Hundreds of private studios were also established as the price of technology
became more affordable for individual artists. Many more novel strategies for integrating
tape and live performers were advanced as were new concepts for live electronics and
multi-media. A great rush of activity in new circuit design also took place and the now
familiar pattern of continual miniaturization with increased power and memory expansion
for computers began to become evident. Along with this increased level of electronic
music activity, two significant developments became evident: 1) what had been for
decades a pioneering fringe activity within the larger context of music as a cultural activity
now began to become dominant; and 2) the commercial and sophisticated industrial
manufacturing of electronic music systems and materials that had been fairly esoteric
emerged in response to this awareness. These new factors signaled the end of
the pioneering era of electronic music and the beginning of a post-modern aesthetic that is
predominantly driven by commercial market forces. By the end of the 1970's most
innovations in hardware design had been taken over by industry in response to the
emerging needs of popular culture. The film and music "industries" became the major
forces in establishing technical standards which impacted subsequent electronic music
hardware design. While the industrial representationist agenda succeeded in the guise of
popular culture, some pioneering creative work continued within the divergent contexts of
academic tape studios and computer music research centers and in the non-institutional
aesthetic research of individual composers. While specialized venues still exist where
experimental work can be heard, access to such work has become progressively more
difficult. One of the most important shifts to occur
in the 1980's was the progressive move toward the abandonment of analog electronics in
favor of digital systems which could potentially recapitulate and summarize the prior
history of electronic music in standardized forms. By the mid-1980's the industrial
onslaught of highly redundant MIDI interfaceable digital synthesizers, processors, and
samplers even began to displace the commercial merchandising of traditional acoustic
orchestral and band instruments. By 1990, these commercial technologies
had become a ubiquitous cultural presence that largely defined the nature of the music
being produced.

Conclusion

What began in this century as a utopian and vaguely
Romantic passion, namely that technology offered an opportunity to expand human
perception and provide new avenues for the discovery of reality, subsequently evolved
through the 1960's into an intoxication with this humanistic agenda as a social critique and
counter-cultural movement. The irony is that many of the artists who were most
concerned with technology as a counter-cultural social critique built tools that ultimately
became the resources for an industrial movement that in large part eradicated their
ideological concerns. Most of these artists and their work have fallen into the anonymous
cracks of a consumer culture that now regards their experimentation merely as inherited
technical R & D. While the mass distribution of the electronic means of musical production
appears to be an egalitarian success, as a worst case scenario it may also signify the
suffocation of the modernist dream at the hands of industrial profiteering. To quote the
philosopher Jacques Attali: "What is called music today is all too often only a disguise for
the monologue of power. However, and this is the supreme irony of it all, never before have
musicians tried so hard to communicate with their audience, and never before has that
communication been so deceiving. Music now seems hardly more than a somewhat
clumsy excuse for the self-glorification of musicians and the growth of a new industrial
sector." From a slightly more optimistic perspective, the current dissolving of emphasis
upon heroic individual artistic contributions, within the context of the current proliferation
of musical technology, may signify the emergence of a new socio-political structure: the
means to create transcends the created objects and the personality of the object's creator.
The mass dissemination of new tools and instruments either signifies the complete failure
of the modernist agenda or it signifies the culminating expression of commoditization
through mass production of the tools necessary to deconstruct the redundant loop of
consumption. After decades of selling records as a replacement for the experience of
creative action, the music industry now sells the tools which may facilitate that creative
participation. We shift emphasis to the means of production instead of the production of
consumer demand. Whichever way the evolution of electronic music unfolds, it will depend
upon the dynamical properties of a dialectical synthesis between industrial forces and the
survival of the modernist belief in the necessity for technology as a humanistic potential.
Whether the current users of these tools can resist the redundancy of industrial
determined design biases, induced by the cliches of commercial market forces, depends
upon the continuation of a belief in the necessity for alternative voices willing to articulate
that which the status quo is unwilling to hear.

Nature, Sound Art and the Sacred

David Dunn

"In the
sound of these foxes, if they were foxes, there was nearly as much joy, and less grief. There
was the frightening joy of hearing the world talk to itself, and the grief of incommunicability.
In that grief I am now as then, with the small yet absolute comfort of knowing that
communication of such a thing is not only beyond possibility but irrelevant to it..."

In the
conclusion to his book, Let Us Now Praise Famous Men, James Agee describes the depth
of meaning and intelligence conveyed through the late night calls of two foxes. In his nine
page description of these calls he invokes archaic sentiments and a profound
contradiction that humans must have always felt. We hear in the world talking to itself a
sense of otherness that simultaneously mirrors our deepest sense of belonging. Agee
compares the quality of laughter in these fox calls to the genius of Mozart, "at its angriest,
cleanest, most masculine fire." Somehow we have always intuited that music is part of our
reflection to and from the non-human world. We hear the alien quality of the non-human in
our music and the humanity of music in nature. The following discussion is an attempt to
wrestle with the "grief of incommunicability" that arises through our attempts to both hear
and talk to the world.

Part One: Assumptions

Each of us is constructed as a miraculous
community of systems that function together to form the coherent totality of a living thing
capable of sensing the external world. Since that coherence is finite there are real limits on
what we can sense. All of the sound we hear is only a fraction of all the vibrating going on in
our universe. What we do hear is the result of a dance between the world and how we are
made. In a real sense, we organize our reality out of this dance. Since this is true for all
living things, and since each thing is made differently, each form of life hears a slightly
different multiverse. Each species of insect, frog, bird and mammal listens to a distinct
reality that arises from the constraints of how they are constructed. When we look at the
world, our sense of vision emphasizes the distinct boundaries between phenomena. The
forward focus of vision concentrates on the edges of things or on the details of color as
they help us to define separate contours in space. We usually see things as one window
frame of visual stimuli jumping to the next. The sounds that things make are often not so
distinct and, in fact, the experience of listening is often one of perceiving the inseparability
of phenomena. Think about the sound of ocean surf or the sound of wind in trees. While we
often see something as distinct in its environment, we hear how it relates to other things.
Take for instance the image of an airplane in flight. What looks like a distant pinpoint object
in the sky is heard as a web of sound that spreads out through the terrain beneath it,
reverberating from the contour of the land into and around our bodies. I do not mean to
imply that our hearing is somehow less discriminating than our vision. Actually the number
of nerve fibers that connect our ears to the brain is greater than the number that connects
the eyes. Our ears are better at discriminating certain kinds of complex phenomena and
we can often hear relationships between things that our eyes require external
instrumentation to accomplish. The ease and exactness of matching two frequencies
when tuning is something musicians take for granted. To do the same in the visual domain
requires sophisticated tools. Mathematics in western culture was born from the sense of
sound and not vision. Pythagoras heard the ratios of the monochord vibrating that became
arithmetic. Since then philosophers from Plato to Adorno have discussed the sacred
properties and special responsibilities of music to society. I wonder if music might be our
way of mapping reality through metaphors of sound as if it were a parallel way of thinking to
the visually dominant metaphors of our speech and written symbols. I think that most
musicians can relate to the idea that music is not just something we do to amuse
ourselves. It is a different way of thinking about the world, a way to remind ourselves of a
prior wholeness when the mind of the forest was not something out there, separate in the
world, but something of which we were an intrinsic part. I think music may be a
conservation strategy for keeping something alive that we may now need to make more
conscious, a way of making sense of the world from which we might refashion our
relationship to nonhuman living systems. Personally I believe that we have yet to articulate
the importance of music and the immense cognitive and social terrain that it addresses.
The fact that we have yet to discover a human society without it says something very
profound. Recent discoveries about the ability of music-making to alter the very hard-
wiring of brain development say even more. I have a gut intuition that music, as this vast
terrain of human activity and inheritance of our species, will provide us with clues to our
future survival and that is a responsibility worth pursuing. Most of us listen to recorded
sounds in the form of music or broadcast media. Seldom is this done with direct
concentration. As distinct from their former role in traditional societies as a primary social
integrating mechanism, most forms of music are now used merely as a means of
distraction. The merchandising of music has become what Jacques Attali has called a
"disguise for the monologue of power... never before have musicians tried so hard to
communicate with their audience, and never before has that communication been so
deceiving. Music now seems hardly more than a somewhat clumsy excuse for the self-
glorification of musicians and the growth of a new industrial sector." Music as a discipline
has generally failed to transcend the constraints of its status as entertainment. Gregory
Bateson has discussed an essential distinction between art and entertainment: while
entertainment is the food of depression, being easy to engage but lacking long term
interest, art requires discipline to engage but leaves one richer in the end. In this time of
ecological crisis we need to embrace every tool we have to remind us of the sacred. Not
only can aural and musical metaphors provide us with a means to describe the world in
ways that remind us of our physical connection to the environment, but the physical act of
using our aural sense, in contrast to entertainment, can become a means to practice and
engender integrative behavior. Attentive listening to the sounds around us is one of the
most venerable forms of meditative practice. It has been used to concentrate awareness
on where and what we are, and to quiet the incessant chatter of the mind. What we hear
from other forms of life, and the environment they reside in, is information that is unique
and essential about patterns of relationship in context. It is an experiential basis from
which we can shape an understanding of what Gregory Bateson has called the sacred: "the
integrated fabric of mind that envelops us." The attempt to expand our ears toward a
greater receptivity to our aural environment has been the major focus of some of the 20th
century's most important musicians. Edgard Varèse, Pierre Schaeffer, and John Cage
sought to expand the resources of music beyond the vocabulary of pitch and harmony that
had previously defined it. Through the "musical" manipulation of the noises of everyday
life, they achieved an understanding of the meaning of these sounds as aesthetic
phenomena, opportunities for a deepened awareness of the world we live in. Perhaps
because of their contribution to art we now can understand the need to extend these ideas
further. The sounds of living things are not just a resource for manipulation, they are
evidence of mind in nature, and patterns of communication with which we share a
common bond and meaning. When Cage expressed that the emancipation of music
required the use of all sounds as a resource for composition, he unfortunately was also
establishing a precedent for the exploitation of "sound" as a decontextualized commodity
that could be defined, and manipulated, by a set of cultural codes called music. The result
of this ideological stance has been to set in motion a tautological game: the expansion of
"music" becomes synonymous with an additive process of simply commandeering new
phenomena into its cultural framework. Parallel to this process has been the asking of a
supposedly profound question: are these sound-making activities music? Underneath the
surface triviality of this question is the disturbing assumption that attaining the mere
status of music itself forms a meaningful discourse. The complex of activities that have
formed the emergence of environmental music and sound art as artistic genre is in part a
response to this dilemma. Such activities share a general impulse to not only differentiate
themselves from traditional musical activities but to also ask a different question: what is
the meaning of these sound-making activities if they are not traditional music and not
intended to be? My answer to this question is in part the explicit content of my sound art
work: to recontextualize the perception of sound as it pertains to a necessary
epistemological shift in the human relationship to our physical environment. My belief is
that there is an important role for the evolution of an art form that can address the
phenomenon of sound as a prime integrating factor in the understanding of our place
within the biosphere's fabric of mind. As the ecology movement has repeatedly
articulated, we must develop a participatory relationship between humanity and the
greater environmental complexity of the biosphere that is mutually life-enhancing. The
traditional epistemological dichotomies between humans and nature are no longer
tenable. As we appear to be moving further away from a somatic relationship with a
biological environment that we have irreversibly altered, we must confront the realization
that if the biosphere is going to survive in a manner inclusive of human beings, then human
beings must not only allow more room for the non-human, but face responsibility for the
role of environmental maintenance that our technologies have already engendered. This
realization must include an understanding that we have so altered our environment that
back-to-nature campaigns will not suffice to solve our problems nor those of the
biosphere. The political implications of the preceding ideas seem poignant: 1) issues of
freedom and dignity must now include the total fabric of life within which we reside and 2)
we require new modes of experience that can help recover those aspects of human
integrity that are rooted in a fundamental sense of connectedness with the non-human
world. These demands not only require a heightened awareness of the role of art and the
artist but of the very metaphors we use to organize reality. Francisco Varela has pointed
out that visually-based spatiotemporal metaphors are the worst for describing the
denseness of interpenetration of phenomena that gives rise to the world. When we
predominantly speak of the world in topological terms we impose a fixed time/space
relationship on the rich dance of living things. We constrain our understanding of the true
interdependence of life. In Buddhism the concept of Sunya (a Sanskrit word translated as
"emptiness") describes the complex chain of connection that forms the world. Each "thing"
is so densely connected to everything else that it resides nowhere. We cannot isolate the
thing from all the states of matter or energy that preceded it or that it will become.
Music as a language of vibration is one of the best means we have for thinking about this
fabric of mind that resides everywhere. Sound as a vibrant plenum reminds us of the
profound physical interconnectedness that is our true environment.

Part Two: My Work

Over the past twenty-five years most of my creative work connected with the relationship
of sound and nature can be described as fitting into two fairly separate categories. In the
first category are environmental performance works intended for outdoor performance.
The second category consists of tape compositions derived from environmental sounds
that are a hybrid between electroacoustic composition and soundscape recording. What
follows are descriptions of representative works from each of these two categories:

Category 1: Environmental Performance Works

Through these compositions it has been
my goal to deconstruct the materials and attributes of music as a means to explore and
demonstrate the emergent intelligence of non-human living systems. As distinct from
John Cage who wanted to decontextualize sounds so as to "allow them to be themselves,"
I have focused upon the recontextualization of the sounds of nature as evidence of
purposeful minded systems: the song of a bird is not just grist for compositional
manipulation, it is a code of signification not only between members of that particular
species but also for the extended fabric of mind that forms the biohabitat within which that
species resides. While Cage wanted to abstract these sounds, I'm interested in regarding
these as conscious living systems with which I'm interacting. These sounds are the
evidence of sentient beings and complex-minded systems. Many of my compositions have
consisted of establishing an interactive process through which a collaborative dialogue
emerges that is inclusive of this larger pattern of mind. The resulting projects are not only
descriptive of their environmental context but generate a linguistic structure intrinsic to the
observer/observed relationship. They are an expression of the composite mind immanent
in a particular connective instance. I refer to much of my work as "environmental language"
so as to distinguish it from the more general term "environmental music." The issue is not,
how can one bring out latent musical qualities in nature but rather, what is necessary to
stipulate an intrinsic sonic structure emergent from a specific interaction with non-human
systems? My process has been to set up an interaction with the environment using sound
as the vehicle or medium through which the interaction unfolds. Since I cannot know what
the outcome of these interactions will be, I am often gaining information from an
experimental situation that can't be arrived at otherwise. While such a process is similar to
what experimental refers to in the scientific sense, I am only making a claim for
experimentation within the domain of an experiential exploration of sound and
consciousness from a trans-disciplinary perspective. Through combinations of analog,
digital and traditional sound-generating devices, I have designed realtime performance
interactions in wilderness spaces where the resulting events reflect a larger system of
mind inclusive of myself and these other living systems. Two of these works are described
as follows: 1) Mimus Polyglottos Mimus Polyglottos was an experiment in interspecies
communication that Ric Cupples and I initiated in 1976. Both of us were fascinated by the
mimicry of mockingbirds. Ric had been photographing them in their urban milieu, usually
on top of one of their favorite perches, television aerials. I was living at one end of Florida
Canyon with the famous San Diego Zoo at the other end. Some nights I would be awakened
by the inexplicable sounds of monkeys and tropical birds from my backyard. It took me
a while to figure out that the sounds didn't come from zoo escapees but from the
mockingbirds who travelled up and down the canyon. Ric and I spent several months
researching the literature on mockingbirds and recording them in the city. Our idea was to
formulate an audio stimulus that could engage the birds but also challenge their ability to
mimic. At first we did a variety of experiments in locating the birds by playing back
recordings of one bird to another. This allowed us to acquire essential knowledge of proper
mockingbird etiquette, how to approach the birds and what sort of proximity to maintain.
The final stimulus tape was made out of frequency-modulated square waves, a notoriously
problematic waveform for audio systems. We made the tape with the mockingbird
frequency range in mind and ratios of sound to silence that were characteristic of their
song. The tape was first played without warning to a single bird at approximately 3:00 AM.
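The construction of such a stimulus tape can be sketched in code. The following is a minimal illustration, not a reconstruction of the original tape: the carrier frequency, modulation rate and depth, sound-to-silence ratio, and cycle length below are all assumed values chosen only to show the idea of a frequency-modulated square wave gated into song-like phrases.

```python
import math

SAMPLE_RATE = 44100  # samples per second

def fm_square(carrier_hz, mod_hz, mod_depth_hz, seconds):
    """Generate a frequency-modulated square wave as float samples in [-1, 1].

    The instantaneous frequency swings around carrier_hz by +/- mod_depth_hz
    at the modulation rate mod_hz; the square wave is just the sign of the
    accumulated sinusoidal phase.
    """
    samples = []
    phase = 0.0
    for n in range(int(seconds * SAMPLE_RATE)):
        t = n / SAMPLE_RATE
        inst_hz = carrier_hz + mod_depth_hz * math.sin(2 * math.pi * mod_hz * t)
        phase += 2 * math.pi * inst_hz / SAMPLE_RATE
        samples.append(1.0 if math.sin(phase) >= 0 else -1.0)
    return samples

def with_silence(samples, sound_ratio=0.6, cycle_seconds=2.0):
    """Gate the signal into alternating sound and silence, roughly mimicking
    the phrase/pause structure of mockingbird song. The 0.6 ratio and 2 s
    cycle are illustrative guesses, not values from the original tape."""
    cycle = int(cycle_seconds * SAMPLE_RATE)
    on = int(cycle * sound_ratio)
    return [s if (n % cycle) < on else 0.0 for n, s in enumerate(samples)]

# A burst centred in a plausible mockingbird range (roughly 1-4 kHz).
burst = with_silence(fm_square(carrier_hz=2000, mod_hz=6,
                               mod_depth_hz=800, seconds=4.0))
```

The square wave's harmonic-rich, obviously electronic timbre is the point: it engages the bird's frequency range while remaining unmistakably artificial.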
The bird's response was typical of the reactions we got from several different
mockingbirds. It initially reacted with enthusiasm trying to match various parameters of the
electronic sound: pitch, rhythm, and timbre. At a certain point it appeared to withdraw but
slowly began to build its confidence until it was interacting with an extraordinary range of
accommodation to the stimulus sounds. The result of this experiment is one of my favorite
examples of the unexpected ability of humans and animals to be aware of each other and
to engage creatively. I'm also fascinated by the fact that this occurs through something
generally regarded as artificial. While humans often reject aspects of technology as
something evil when compared to the rest of nature, the bird does not. To my ears the
mockingbird is just as fascinated by the sound made by these dancing electrons as by
another bird. Of course I've also heard them imitate washing machines and Volkswagen
motors so there's no accounting for taste even among mockingbirds. 2) Entrainments 2
Entrainments 2 was composed for and performed in a specific wilderness site. Three
performers prerecorded stream-of-consciousness descriptions and observations of the
surrounding environment from three mountain peaks in the Cuyamaca Mountains of
California. These recordings were subsequently mixed with static drones derived from an
astrological chart for the time and location of the performance. Playback of these sounds
occurred from portable cassette recorders with self-amplified loudspeakers and sufficient
amplitude to be audible from the center of the performance configuration. In the center of
the space was placed a computer programmed to sample and immediately output
periodic sound blocks through a central loudspeaker. The input signal to the computer was
from a parabolic microphone. A performer carried this microphone while walking slowly
around the perimeter of a large central circle. This performer also recorded the overall
performance with binaural microphones. Three other performers carried portable, self-
amplified oscillators while walking slowly around the perimeter of three outer circles. The
performance took place at Azalea Glen, Cuyamaca State Park, California, on May 19, 1985.
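The computer's part in this scheme, periodically capturing a block of the parabolic-microphone signal and immediately replaying it through the central loudspeaker, can be sketched as follows. This is a hypothetical illustration of the sample/replay logic only; the block and period lengths are arbitrary and not the values used in the 1985 performance.

```python
def sample_and_replay(stream, block=8, period=24):
    """Sketch of a periodic sample-and-replay process: from a continuous
    input stream, capture `block` samples once per `period`, immediately
    replay the captured block, and output silence (zeros) the rest of the
    time."""
    out = []
    buf = []
    for n, s in enumerate(stream):
        pos = n % period
        if pos < block:            # capture phase: record input, stay silent
            buf.append(s)
            out.append(0.0)
        elif pos < 2 * block:      # replay phase: echo the captured block
            out.append(buf[pos - block])
        else:                      # rest of the period: silence
            out.append(0.0)
        if pos == period - 1:      # reset the buffer for the next period
            buf = []
    return out

# With a ramp as input, each captured block reappears immediately after it
# was recorded, surrounded by silence.
echoed = sample_and_replay(list(range(48)))
```

The interest of such a process lies in the feedback it creates: the replayed blocks themselves become part of the acoustic environment that the microphone, the oscillators, and the surrounding wildlife respond to.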
While Entrainments 2 intentionally borrows metaphors from a variety of archaic
philosophical traditions (feng shui, geomancy), it can most readily be understood as an
attempt to be in contact with the "spirit of a place." More precisely this spirit can be
defined through a cybernetic definition of mind that serves as a heuristic hypothesis. An
important scientific concept of the later 20th century has been the idea of emergent
properties: patterns can arise from a complex process that appear to transcend the agents
that bring the process into being. In the case of this composition, mind can be
understood to reside in all of the pathways of interaction that arise from the system of
sound making that we specified and in which we participated. Experientially this was most
evident in the relationship to time that the prerecorded voices evidenced. Observations
made days before the performance coincided exactly with realtime events occurring during
the performance. The resulting compression of time and memory was experienced as if, on
their prior visits, the speaking voices had described events that would happen in the
future, and then those events took place. These descriptions illustrate a transition that my work has
pursued over the past two decades: a progressive expansion of context, moving from
interactions with a single member of another species toward interactions with complex
environments. In a very direct way I have tried to expand the sense of "mindedness" that
I'm working with. My idea of environmental language is an experiential, dynamic process
that explores whatever tools and metaphors are available toward a greater understanding
of the profound interconnections between sound, language and the environment. It is my
contention that the exploration of these linkages suggests an essential role for the
evolution of sound art and music: the creation of human actions which reinforce the
inclusiveness of the larger systemic mentality resident in the interactions of environment
and consciousness. Category 2: Hybrid Soundscape Compositions There are many
parallels in the collecting of sounds to other means by which we document and "bind time"
in order to study, intensify experience, or cherish the past. The similarity of recorded sound
to photography has been considered but "phonography" has yet to be taken seriously as a
discipline beyond its commercial or scientific applications. Its status as an artistic genre is
still quite tentative despite appropriate efforts in this direction. While the best known and
most serious work in this area has been the soundscape recording movement initiated by
R. Murray Schafer and his colleagues at the World Soundscape Project, and later the World
Forum for Acoustic Ecology, the audio documentation of "natural" acoustic environments
has become a commercial success story. Several recordists market their recordings as
purist audio documentation of pristine natural environments with particular appeal to the
armchair environmental movement. Personally I find something perverse about many of
these recordings, as if the encoding of a semiotic referent in the form of an audio
description of place could ever be something other than a human invention. Sometimes
the sounds are intrinsically beautiful but are too often marketed as if their mere
existence were somehow doing the environment a big favor. I can certainly understand
arguments for the preservation of actual biohabitats but not as recorded sonic objects. The
premise appears to be that these recordings will somehow sensitize the listener to a
greater appreciation of the natural world when in fact they are more often perpetuating a
19th century vision of nature and at best merely documenting a state of affairs that will
soon disappear. There were two experiences in particular that charged my cynicism about
soundscape work and the aesthetic role of phonography: Several years ago I was hired to
do audio field recordings for a new aquarium project. Since the focus of the exhibition was
on two of the major watersheds of North America, the Mississippi and Tennessee river
basins, my job (along with a colleague) was to gather sound from the corresponding
biohabitats that would later be mixed and correlated to the aquarium exhibits as canned
audio playback. We were to provide the raw source materials that would later be used to
sonically conjure a portrait of these places. This specifically meant that we were to travel
to the remaining sites of virgin hardwood forest in the Smoky Mountains of Tennessee and
the cypress swamps of the Atchafalaya Basin of Louisiana and document the acoustical
environments. Both of these expeditions turned out to be extraordinarily difficult since
these environments were, for our purposes at least, non-existent. What remained were
small vestiges of these once grand habitats. So small, in fact, that there was simply no
unique acoustical identity left to capture. We subsequently spent weeks in each location
waiting out the long periods of incessant automobile, plane and boat traffic in order to
capture enough snippets of wildlife sounds that phoney mixes could be constructed as
convincing audio portraits of places that do not actually exist. While doing sound
recordings in an African game park for a zoo project, I travelled to remote waterhole
habitats. The fantasy I had been nurturing for weeks about my impending great African
safari experience, and a confrontation with true wilderness, was instantly shattered when I
set up my equipment. As I put on my headphones I immediately heard the sound of a
kerosene driven pump used to bring water up from the aquifer to the watering hole. I was
later told that this was the rule and not the exception. These pumps are a common feature
in many game parks, a result of the artificial boundaries imposed upon wildlife by
man. Without them much of the wildlife would perish. At first I found their presence to be a
real disturbance to my wild safari fantasy. Later I understood that Africa is no different
from the rest of the Earth in its fast transition of wilderness into global park. The important
thing to understand is not only how humanity has radically altered the biosphere but the
depth of the responsibility we now carry for its future survival. In both of these situations I
came away feeling that my involvement supported something duplicitous. My job was to
pretend that I was not present in the situation in order to create a false representation of
the reality and then foist this upon a naive public. It would have been a classic example of
mistaking the map for the territory, except that the map wasn't even in the right ballpark.
Such fakery is even more reprehensible because it lures people into the belief that these
places still fulfill their Romantic expectations and that all is well. As an alternative position
I have preferred to apply a compositional aesthetic to the creation of soundscape works. I
am interested in evolving an intrinsic relationship to a subject rather than "inventing" or
fantasizing a musical event. This is the idea of composition as a strategy for expanding the
boundary of what is reality itself. If I want to transcend the limiting conditions that my
current state of knowledge imposes upon me, to invent or improvise something from that
condition will obviously not suffice. I will merely reiterate the previously known conditions.
By paying close attention to the reality of what actually is, there arises the opportunity to
participate in the emergence of something that is mutually created between the subject
and myself. I have then danced toward a definition of the reality that I am participating in,
rather than from a preconceived one that is probably no longer relevant. Given this
philosophical stance, it is obvious that I will be very "present" in the editing process, but
this does not mean that I wish to impose myself or some fantasy on the materials. Instead,
I seek to invoke patterns of relationship intrinsic to the materials themselves. Discussion
of two such works follows: 1) The Lion in Which the Spirits of the Royal Ancestors Make
Their Home The title derives from the Shona phrase "Mhondoro Dzemidzimu" that I ran
across in David Lan's book, "Guns and Rain: Guerrillas and Spirit Mediums in Zimbabwe."
Lan is a writer and social anthropologist who was born in South Africa. His book is a
brilliantly written account of the role that traditional spirit mediums played in Zimbabwe's
war for independence. He details the facts concerning the profound significance of an anti-
colonial war fought with the guidance of the Shona royal ancestors communicating
through these spirit mediums. The title specifically refers to one of the traditional beliefs of
Shona religion that I took as emblematic of African religious beliefs in general. The concept
for the disc originated after the fact. I went to Zimbabwe partially as a tourist and partially
on "assignment" (as described above) for a friend whose sound design firm specializes in
audio installations for large public institutions. We needed to gather some sounds from
African watering hole habitats for a couple of projects then under development. The
concept of how to "compose" the overall piece came a few years later in response to a
request for an extended radio piece from the Australian Broadcasting Commission. The
unifying premise arose from the realization that none of these sounds were "pure" in the
sense of simple naturalistic representations of the African environment or traditional
culture. All of the sounds I recorded were clearly problematic and contradictory. They were
recordings of the current reality of social and environmental change and not
representations of a fantasy Africa that no longer exists. In that sense it was the reality of
Zimbabwe that led me to the piece and not a preconceived idea. My foremost interest was
in composing an articulation of those patterns of the sacred which emerge or persist within
(and despite) the contradictions and conundrums of rapid cultural change. While these
sounds can be heard as further evidence of an environment, nation and world undergoing
mutation and threat of annihilation, they also can be heard as evidence for processes of
dynamical adaptation where the tribal and wilderness voices speak not only as something
under siege but as phenomena capable of survival in a way that may inform our collective
survival here on Earth. For example: in one of the recording cuts we hear a human
habitation wedged between the African wilderness and a two lane paved highway that
serves as a major trucking route. The length of this cut is just about the average time
between passing vehicles. In the foreground are various nocturnal insects. In the
distance are frogs and the village ambience itself: voices, drums, and a braying donkey.
This recording reinforces one of the most powerful impressions I had of the relationship
between African culture and environment: an overwhelming sense of the persistence of
spirit as an intrinsic component of the African ecology. For many African people the
sounds of animals are not merely the calls of separate organisms. They are the voice of a
spirit form resident in that individual but also present in all the members of its species.
That spirit is like a persistent and collective intelligence that defies geographic separation.
This concept is not only present in the beliefs of the traditional religious practices but
appears as an essential trait of domestic life. It can even be understood to include the
influence of the dead (both human and animal) as a resonance from the past that not only
informs all aspects of daily life but is essential to the vitality and interaction of all living
things. 2) Chaos and the Emergent Mind of the Pond We usually associate the intelligence
of life forms with how big they are or with their proximity to us on the evolutionary tree. The
tiny size and alien quality of insects and spiders presents us with a challenge. How could
they possess anything but the most rudimentary of mental functions, tiny automatons
without thought or feeling? The amazing sophistication of social insects belies this
assumption. Ant societies are particularly impressive while the observed behavior of bee
colonies has taken on mythic proportions. We know that bees communicate a large range
of information about the details of their environment through dance (along with sound and
smell). While this “waggle dance” is regarded as the only insect “language” yet known,
there are clues that others await discovery. One candidate is a water beetle of the genus
Berosus. These little critters appear to have a vocabulary of faint sounds that they emit
underwater for purposes of warning and mating. This work was composed entirely from
underwater sound recordings made in vernal pools in North America and Africa. My intent
was to articulate the amazing complexity and apparent intelligence that these sounds
signified. After a couple years of listening to these small ponds and marshes, I came to
understand a pattern to their underwater sound-making. The one consistent factor is how
beautiful and complex these miniature sounds are. I have finally reconciled myself to the
gut feeling that these sounds are an emergent property of the pond, something that speaks
as a collective voice for a mind that is beyond my grasp. I know that this is not a scientific
way of thinking but I can’t help myself. Now when I see a pond, I think of the water’s
surface as a membrane enclosing something deep in thought. Even for someone who has
had a lot of experience listening to animal sounds, the feeling that these pond sounds are
some sort of alien language is irresistible. The philosopher Wittgenstein once said: "If a
lion could talk, we could not understand him.” He meant that the schism between human
culture and the lion’s world is so great that a mere linguistic code cannot bridge the gap.
What I like about this statement is how it respects the otherness of the animal world and
recognizes how codes of communication, like these insect sounds, arise from the unique
organization of living things. Science has begun to probe deeply into the possibility that our
assumptions about animal intelligence and communication have been too simplistic. For
centuries much of humanity has claimed superiority over the nonhuman world and our
older models of evolution have guaranteed this view. The justification for this argument
was often based upon an assumption that since animals did not possess language, they
were simply organic machines to be ruthlessly exploited. New evidence suggests that
thinking does not require language in human terms and that each form of life may have its
own way of being self-aware. Life and cognition might be considered synonymous even at
the cellular level. We can embrace the alien for its right to exist without destroying it or
demanding that it either serve us or exhibit human traits. Along with humans, other forms
of life exist as co-conspirators in a mystery of which we only have a small glimpse. Perhaps
the most important feature of their being alien is that they are part of a puzzle through
which we can truly know what we are.

AN EXPOSITORY JOURNAL OF EXTRACTIONS FROM WILDERNESS: notes toward an environmental language
David Dunn (1981)

1.
Is my primary responsibility as a composer merely the creation of substantive concepts
and structures, or am I responsible for the formation and maintenance of a proper
environment for such structures? Beyond this the question must be asked: what are the
contextual limits for such an environment and of what might appropriate maintenance
consist? I assert that the significance of this question is particularly relevant to
discussions addressing the musical acquisition of technology at a time when the sheer
immensity of technological resources constrains the survival of living systems in such a
way that I must confront the technological acquisition of music. Despite what may be seen
as obvious predispositions intrinsic to certain technologies, rendering their function more
readily exploitable by industrial and political power structures, it remains a commonly held
belief that the user has autonomous responsibility to determine if the signals generated
from use of a particular tool are input to or output from a given social context. But to what
extent do I have free choice in either selection of my tools or in what I make with them? The
culture of technology asserts its values with relentless force. It constrains behavior in
convention with such values while maintaining the facade of its neutrality. Machines are
not neutral objects, they are vestiges of thought empowered with the force of intent. Thus,
the issue of what is appropriate maintenance begins to occupy a larger context, namely:
the mere creation of structures is not sufficient if either the means for their making or the
environment in which they must reside contributes to the negation of those structures.
Additionally I must recognize that all technology is part of a larger structure: the adaptive
whorls of organic energy blossoming into living systems; and that the dialectic of
exploitation surrounding the influence of technology must therefore include the whole of
the biotic world. 2. We reside in a fabric of communication, the environment's language
encoded in the patterns of its living systems. As our species moves forward with the
purposeful extinction of other forms of life at the current rate of one species per day, it
appears that how we converse with this fabric has much to do with the continuation of life
on this planet. Whatever understanding we may have of our place among these systems, it
must be directed toward the hope that this earth has spawned us for some other purpose
than its own destruction. 3. Energy from the sun to the earth seems destined to cause an
increasingly ordered state in the organization of matter. The compounding of structures of
matter into more complex organizations which cannot be described in terms of their
simpler components, stops at the level of simple molecules. Living organisms, however,
continue this buildup integrating more complex patterns of organization such that
molecules become macromolecules, then organelles and finally cells. The rather
mysterious processes of evolution continue with the combining of cells into higher
organisms. Various terms have been proposed to describe this phenomenon such as
negentropy or syntropy, but what they fundamentally refer to is this innate drive in living
organisms toward interaction, growth, and complexity. The Gaia Hypothesis, proposed by
James Lovelock, theorizes that the biosphere has strategically programmed its evolution
for three billion years. The extraordinary implication is that the whole of the biosphere is
akin to one incredibly large living organism. Support for such a contention is based upon
observations about the Earth's extremely unlikely atmospheric makeup, suggesting that
the composition of the atmosphere is itself a biological construction resulting as a
consequence of an immense cybernetic system termed Gaia which seeks optimal
conditions for the totality of planetary life.1 Inherent in the interaction of these systems is
the exchange and transformation of communication energy. More precisely this could be
termed the transmission of difference. The inevitable complex increase in condensation of
this energy within societies of higher organisms generates language. What seems evident
about the extreme compression of these energies in human languages is that the various
realities in which we are engaged are themselves shaped and constrained by language
constructs. The mental world does not stop at the boundaries of the flesh, nor is it inside
my head. Mind is a compound phenomenon of interacting parts bounded arbitrarily by
what I either wish to or am capable of understanding. In other words, mind consists of
organism in and of environment. Although I recognize my subjectivity to be inescapable, I
am willing to contend that the dimensions of self actually consist of a vast interlocking
network of eco-mental systems. How much of these larger systems are incorporated into
self is a function of my language, determining at what point I limit connection with what
appears to be the outer world. If I reduce the dimensions of self to extreme exiguity, I
subsequently decrease the interaction with those systems necessary to sustain life. 4. The
act of description is not passive, I speak in the place of what is described and in one sense
become its representative. Responsible representation demands accuracy gained through
interaction: listening as expansion of connection within the biotic world. It is not trivial to
assert that when humanity ceases to listen to the voice of wolf or whale, hindering their
survival, we help to limit the biosphere's potential reality toward our own destructive short
term advantage. Biologist Gregory Bateson has stated: "There is an ecology of bad ideas,
just as there is an ecology of weeds, and it is characteristic of the system that basic error
propagates itself. It branches out like a rooted parasite through the tissues of life, and
everything gets into a rather peculiar mess."2 The making of creative connections between
phenomena involves the disassembling of reality constructs with which I operate in blind
assumption. I consist of more than I recognize. Freedom is not just having choice among a
set of contrived possibilities, it is fundamentally the expansion of what I do not know,
expanding the connection with what I previously thought outside myself. Most current
socio-economic systems reward attempts to make social and biotic systems predictable.
Predictability is achieved through redundancy introduced as subsequent loss of choice.
High predictability yields low information and therefore less freedom. For example, the
diversity of the food we currently eat diminishes almost daily. Large corporate takeovers of
the patented seed industry have recently put pressure on world governments to centralize
the manufacture of seeds in order to guarantee industry profit. Laws have been passed in
both the United States and Europe which outlaw certain unpatented plants. The European
Common Catalog lists all varieties which remain legal to grow, and over a year's time
literally hundreds of plants are removed from the list. Stiff fines are levied against
gardeners who attempt to grow these illegal varieties. It has been estimated that these
attempts to ensure corporate profits will result in three-quarters of all European vegetable
varieties becoming extinct by 1991.3 5. Human consciousness of nature is itself an event
in nature which contributes to its transformation. As consciousness, in the form of culture,
folds back upon the biosphere pushing toward civilization, the energy absorbed from the
surrounding environment, necessary to sustain the decrease of internal entropy within
consciousness, is subsequently excreted not only as waste but as disruption of the
surrounding organic systems. This would seemingly result in a consumption of energy
exceeding what the environment is capable of sustaining. In other words, there is probably
an essential point of equilibrium between the growth rate of civilization and the capability
of supporting life systems to supply energy, beyond which breakdown of the total system
begins. For example, the 1978 Conservation Biology Conference predicted the probable
end of vertebrate evolution by the turn of the century, including massive extinction of many
species.4 Perhaps the point beyond where equilibrium is maintained is also the point at
which redundancy sets in: culture becomes negative feedback generating more waste
than knowledge. Technology is a culture which by its overwhelming power either absorbs
or eradicates biological, cultural, and linguistic diversity. In view of this, it seems trivial to
ask what effect technology has had upon music instead of asking, what of music might
remain unaffected? To find such a phenomenon is probably also to absorb it since as a
member of such a culture I begin to hear with technological ears. The very choice of
whether to use or not use technology to disseminate my ideas has largely been taken away
from me. Thus the question remains: do my ideas attempt to disintermediate this cultural
redundancy or merely reinforce it? By now it must seem obvious that the naive fascination
with new machines is not only trivial but dangerous. The well-worn assertion that
technology is neutral, awaiting specific use by good or evil people, is a cliche whose idiocy
is only compounded by equating advances in music with advances in machines. It places
music in a status similar to mineral resources where values await the strip-mining
mentality of commercialization. International industrialization and the energy
consumption which feeds it have unfortunately become synonymous with social evolution.
Discussions about technology inevitably link it to notions of progress which demand
consumptive and centralized economies. Machines are somehow thought to signify the
future while skills derived from living interactively with the biotic environment are thought
to represent the past. Technology is not merely the manufacture and use of tools: it is a
residue of how we imagine the world into being. It is an environment of symbols against
whose institutions we must each day pit our needs or conform to that environment's
mechanization. Beyond the residue of our imaginings is the freedom of the yet unknown. At
best technology is merely the collective debris upon which we may stand in further
imagining; at worst, technology is the refuse within which to bury choice. 6. Near where I
lived is a coastal estuary set aside as a bird refuge. This estuary lies north of a small group
of hills and canyons covered in the indigenous chaparral (Southern California coastal
scrub). But surrounding this patch of uninhabited terrain is the suburban sprawl of
Southern California: condominiums to the east; private homes to the south; and Interstate
Highway 5 to the west, with the Pacific Ocean just beyond. Standing on these hills alone at
night, no matter in what direction I turn, I see lights flashing: automobile headlights,
advertising searchlights, airplanes, streetlights, and the eerie glow of television sets in
windows. Close to my feet are living things, their presence illuminated by these abrupt and
disparate bursts of light. Everything that struggles for life here must listen continuously, all
day and all night, to the roar of nearby traffic. It is beyond my imagination to believe that
what lives here is not changed by all of this; or not changed by the web of communication
networks which surrounds and entangles the biosphere. It is an interesting activity to try to
listen to what this place has to tell me, because for all my effort I cannot hear it; the din of
humanity is too loud. It is a lonely thought that this disconnectedness has been chosen by
us. Of what shall humanity consist when all that is left to hear are the sounds of our
isolation? 7. My composition entitled MADRIGAL: (The Language of the Environment is
Encoded in the Patterns of Its Living Systems) began with a reticular notion: perhaps each
instance of environmental ambience which I perceive is part of a much larger structure, that
within the patterns of communication between living organisms there is a larger
communication logic which each separate utterance combines with to form an
environmental language. To decode a moment of this pattern might generate an
appropriate language not only descriptive of a specific place and time, but more precisely
a language descriptive of the mentality implicit in this connective instance: a composition
of this environment and not merely about it. The compositional process for MADRIGAL
entailed the phonetic transcription of an environmental ambience recording made in the
Cuyamaca Mountains of Southern California. One minute of recorded ambience provided
the entire source material for the notated score. The transcription procedure involved
attempting to bring the ambience into my physiology through both aural sensing and vocal
emulation. Compositional organization of this transcription was made according to
structural relationships intrinsic to the material itself. In one sense MADRIGAL juxtaposes
a primitive function of language (namely, to interact with the external environment) with
one of the most recent analytical notations for language. Additionally my intention has
been to combine multiple descriptions of a particular environment in order (1) to convey a
resonant sense of the richness of information contained in one spatial and temporal
location; and (2) to exemplify the notion that most definitions of wilderness are not based
upon interaction but are generalized abstractions which may or may not apply to a
particular place. MADRIGAL requires seven vocalists and a two-channel audio tape. The
audio tape consists of filtered transformations of the original ambience recording. The
score is notated in the International Phonetic Alphabet (American Dialect of English) with
additional signs.

NOTES:
1. James E. Lovelock, Gaia: A New Look at Life on Earth (Oxford University Press, 1979).
2. Gregory Bateson, Steps to an Ecology of Mind (New York: Ballantine, 1972), p. 484.
3. See Cary Fowler, "Sowing the Seeds of Destruction," in Science for the People (September/October 1980), p. 8.
4. Science News, vol. 114, no. 13 (September 23, 1978), p. 215.

This paper was first presented on August 26, 1981 at the International Music and Technology Conference, University of Melbourne, Victoria, Australia.

TABLE OF CONTENTS

Preface and Acknowledgements
Introduction
I. Seeds
II. Collage #1 ("Blue Suede") and Monody
III. Computer Music
IV. New York City 1964-70
V. Three Piano Rags
VI. Quiet Fan for Erik Satie and Hey When I Sing These Four Songs Hey Look What Happens
VII. Postal Pieces
VIII. Clang
IX. Quintext
X. Chorales
XI. Spectral CANON for CONLON Nancarrow
XII. The Drum Quartets
XIII. Harmonia
XIV. Three Indigenous Songs
XV. Nancarrow, Ruggles, Ives, et al.
XVI. Meta Hodos
XVII. Harmony
Appendix I.A: Annotated List of Works
Appendix I.B: Writings
Appendix I.C: Selected Performances, Recordings, Activities, etc.
Appendix I.D: About Tenney
Appendix II: List of Examples
Appendix III: Glossary of Selected Terms

Preface

James Tenney's work, as a composer, theorist, performer and teacher,
is of singular importance in American music of the last twenty-five years. He is by nature a
quiet, almost publicity-shy musician, but his musical and theoretical works are steadily
becoming widely known, despite the fact that few have been published and almost none,
to this date, have been recorded on disk. META/HODOS seems to have the widest
"underground" readership of any treatise of its kind, although it has never appeared in print
in any readily available form. The drum quartets, For Ann (rising), and a few other works are
also familiar, in a wide variety of contexts, to contemporary musicians. However, general
knowledge of Tenney's total oeuvre, and of the integrities found therein (to borrow a term
from Fuller, in whose work Tenney has always been interested) is at best spotty. To some,
Tenney is known as one of the first composers to successfully make use of the digital
synthesis techniques developed by Max Mathews at Bell Labs, and to make these ideas
known to the music world. He is also known for his groundbreaking work in the
development of compositional algorithms. To others, he is the pianist who plays the
Concord Sonata so wonderfully from memory, and who, as a conductor and pianist has
long been a courageous pioneer and advocate of contemporary music, particularly
American. He is known solely as a theorist to some, and as a composer to others. Very few
have the opportunity to appreciate the "complete" James Tenney, and I intend this current
effort as a small token toward this end. In my attempt to provide an overview of the music
and the theoretical works of Tenney, several disclaimers need to be made. First, time and
space permit only brief analyses/descriptions, even of major works. It is my hope that
these small introductions will stimulate further consideration of this music. I am painfully
aware that because most of these works have not been discussed in print, and few
musicians are familiar with the majority of them, much of what I have to say might prove in
some ways incomplete and even slightly inaccurate, or at best only a part of the story. Yet,
since I believe that a sincere first effort is both necessary and better than none at all, I have
simply tried to include much of what I know or can determine about some of these works.
Second, much of the music is not recorded. Of those recordings that do exist (due to
performance or recording problems), few are adequate representations of the music. In
many cases, we have only our eyes and imagination (but not our ears!) to make use of
when considering the pieces. Once again, I hope that this brief essay might stimulate more
frequent and careful performances of Tenney's work. Tenney's own strong critical and
analytical abilities make him the best authority on these works, and though he has been
generous and detailed in explaining many of the musical ideas to me, my own
understanding remains at best that of a careful, interested and educated listener. It would
indeed be a wonderful thing to read Tenney's analyses of his own music some day;
something which we have all had a taste of in his published remarks on Ives, Nancarrow,
Ruggles, Varese, and others. Third, Tenney is still a young composer, and quite prolific. His
output of the last twenty years will probably require another twenty before its historical and
musical significance is properly appreciated. I have tried to confine my comments mainly
to the descriptive, and to avoid historical and critical conclusions as much as possible. Yet
the reader will no doubt sense quite quickly my admiration for the man and his work, and
my feelings that the work represents a musical statement of unique importance in the
latter half of this century. "I know nothing I can say about any of these pieces can possibly
replace the extraordinary experience of listening to them, but I shall try ... to communicate
some of my own observations, impressions, thoughts and feelings, in a way that may make
it easier for others to 'hear into' the music." (From Tenney's introduction to his article on
Nancarrow)

Acknowledgements

Many people deserve my sincere thanks for their help in the preparation of this essay. Peter Garland, composer, indefatigable editor, publisher and supporter of new music, proposed the idea in the first place, and has been tremendously supportive and patient with me. Philip Corner, Malcolm Goldstein, Michael
Byron and Alison Knowles all talked to me at length about Jim and his music, and their
remarks were invaluable. David Rosenboom, who has been my teacher, friend and, of late, colleague, has been extremely supportive intellectually, emotionally and editorially. Mark Haag and Ken Gaburo contributed invaluable and prolific editorial advice, and Phil Stone, a composer at the Mills CCM, was instrumental in the preparation and editing of the manuscript. Barbara Golden, Richard Povall and Jody Diamond were also valuable in this regard. The omnipresent Lou Harrison has supported this paper and my own works (as well as that of many others) in numerous significant ways. I would also like to thank my colleagues and students at the Mills College Music Dept. and Center for Contemporary Music, for their frequent input and for creating a stimulating and congenial environment in which to work. My deepest appreciation is to Alyssa Hess, who served as primary editor, and without whom this work might not exist in its present form. Finally, James Tenney himself has been careful and quick to respond to my various questions and requests, except to the one that he refrain from composing any new works until the article was completed. The author would also like to thank Jeanne Jambu for her work on the corrections and paste-up.

Introduction

There are several important ideas which seem
to pervade and unite Tenney's work, and the understanding of them can aid in the proper
appreciation of the music. This "list" is by no means exhaustive. Not only does it
necessarily omit certain "spiritual" and perhaps less definable qualities, it also cannot
describe the synergy of his work (again, from Fuller: "The behavior of whole systems
unpredicted by the behavior of their parts taken separately;" Synergetics, page 3) nor the
multiplicity of ways in which these ideas relate and interact, like the vertices of a complex
polyhedron.

Economy

Economy of idea, musical material, and above all "dramatic
embellishment" is extremely important in Tenney's music. "Avoidance of drama" is a
concept we will see again and again in the pieces that follow. Tenney is interested in
generative studies which are in themselves metaphors, representations, or even
invocations of philosophical, physical, or perceptual processes. His music is an attempt to
free these processes, to let them "resonate" - and he utilizes all his considerable
compositional skill towards this end. David Rosenboom has put it beautifully: "... Tenney is
a formal, conceptual purist, believing that ultimately a greater musical universality may be
achieved by sticking to the inspirations of nature and its evolving forms, rather than
clouding our perceptions with one man's emotive point of view." (private communication)
In a sense, many of the pieces (like For Ann (rising), the Chorales, the Harmonia, ...) are
monothematic in that they systematically and exhaustively explore the ramifications of a
particular sonic idea, using the various musical parameters to directly re-enforce the
perception of that idea. Thus the direct, large structures perhaps suggest what has lately
been called minimalism. Certainly Tenney was part of the musical "scene" from which that
school was born, but in his music I believe that the term would be a misnomer. In every other way,
these pieces present the listener with a maximally complex set of musical events, in many
cases achieved by an equally maximal compositional effort (as in the string trio). Tenney is
constant in his fealty to the single idea, and all decisions in a given piece seem to be made
so that that same idea might be most clearly perceived, as well as most resonantly heard.
For example, once the harmonic idea of the Chorales has been envisioned, the act of
writing the beautiful melody is a secondary but extremely important compositional task,
and one in which we can even find integral relationships to the harmonic "meta-theme".
Formal Ideas

As a result of his quest for economy, simplicity and clarity, Tenney has
sometimes embraced ergodic and canonical forms, and in other cases has drawn the form
directly from some pre-existing material. I think that this is his way, in a Cagean sense, of
freeing the composer from the act of imposing a formal structure on sonic material, when
in fact the composer has no interest in or reason for doing so. In an ergodic structure, any
given temporal "slice" is equally likely to have the same parametric or morphological
statistical characteristics as any other slice. The listener realizes very early on that certain
things will not change, and that no surprises are in store along at least one given
axis. The listener is then free to concentrate on his or her perception of the resultants of a single set of
ideas. Examples of ergodic forms are the "koans", some of Quintext, For Ann (rising), the
Chorales, and several of the computer pieces. Canonical forms, such as the drum
quartets, the Harmonia, Quiet Fan, and Spectral CANON also free the listener from certain
dramatic and formal surprises, and allow the composer another way in which to realize
patterns, processes, and complexities from simpler, limited material. Tenney's mastery of
canon is wonderfully evident in everything from Seeds, which uses imitation in more subtle
but traditional ways, to the Harmonia, in which contrapuntal and harmonic ideas are
integrated in virtuosic ways reminiscent of the "masters". Examples of works in which pre-existing structural or formal information is simply translated into the piece are the Three
Indigenous Songs, Saxony (the harmonic series!), Hey When I Sing..., and to some extent
Collage #1-("Blue Suede"). In these pieces, Tenney is happily "yielding" to the form of the
music or text that he at once pays homage to and transforms. This type of non-imposition
of dramatic intent is certainly consistent with both the canonic and ergodic structures, and
also with the revolutionary ideas of John Cage which have been of such tremendous
importance to Tenney. Several early works, particularly 13 Ways..., Seeds, Monody, and
the rags, employ more dramatic forms, as do parts of the later works. Tenney's facility with
this aspect of composition is quite evident. This perhaps finds its way into all his music,
touches of different poetic style occurring here and there. An important aspect of this
formal economy is the frequent use of what might be called, borrowing from literary
usage, "multiple perspective," in which the same facts are presented in several different
narrative personae (as in, for example, Faulkner's As I Lay Dying). This affords Tenney
the opportunity to elicit variant "dramatic" perspectives of a single generative idea. In the
Chorales, a single melodic/harmonic idea is stated four times, with only the orchestration
changing. The Harmonia are, in a sense, several different personalities of the modulatory-intonational scheme, and the Three Indigenous Songs can be seen as an attempt to alter
the meanings of the given musics/texts by placing them in a different "narrative" context.
Tenney's intent, I believe, is to allow the listener to extract his own aural "truths" from the
sonic "arguments" and in this way it emerges as still another device by which the
composer can free himself from imposing his dramatic will upon the audience.

Historical Sense

A third important facet of Tenney's work is its strong sense of history. He often uses
and investigates the act of homage as a kind of aesthetic motif. Not only the titles of many
of the pieces, but the particular forms and questions asked in them point to his
tremendous sense of musical continuity, both with his contemporaries and with the past.
These references are not simply dedications - Tenney makes the things he loves into
essential, integral parts of his own works. Often his pieces take the form of a kind of public and artistic communication with another artist. This respect is also shown in his
pervasive sense of American cultural and musical heritage. Tenney has made it his
business to promote American music in all of his several capacities. This is not, of course,
blind chauvinism, nor is it a reaction to a perceived oppression by European culture -
rather, it is an affirmation of his own background and knowledge, a sense that one can
perhaps make the deepest contribution if one is transforming what he knows best. Implicit
in this is the frequent use of quotation, which is again embedded in the very fabric of the
musical idea (like the drum quartets, or Quiet Fan). When quotations do appear, they are
usually the seed of the particular process at hand, although in several instances they are
juxtaposed with another, related idea (examples of both occur in Quiet Fan).

Koan

The
koan, a traditional zen question in which the answer is less important than the processes
stimulated by contemplation of an apparent paradox, is also important in most of the
pieces since 1964. Tenney likes to set a process in motion and let its aural manifestations
be a kind of meditative fabric, as in the music of Pauline Oliveros, LaMonte Young and
others. His processes/questions are often rather complex in their formulation - usually
outgrowths of the tireless investigation of deeper, perhaps "simpler" musical and
perceptual problems. I have tried to illustrate in many of the pieces not only how the
immediacies of the music are beautiful and powerful, but that the theoretical formulations
that lie beneath are of tremendous interest and intricacy. In this sense, they are not unlike
the wonderful complement of intellectuality and sensuality one finds in the music of
Schoenberg, Webern, Ives, Ruggles, and a few others.

"Clang" and "Swell"

Two unique and important formal ideas are common in Tenney's music, perceptible both as simple sonic events and as formal/philosophical "generators". The first is the clang, a term and idea which has several shades of meaning in Tenney's music. Its "formal" ramifications are explored in theoretical detail in META/HODOS, but Tenney uses it frequently in a much simpler way, in what might be called "aggregates" of indivisible sound combinations. In
this idea, we can see the powerful influence of the sonorities and techniques of both Cage
and Varèse. A frequent "Tenneyism" along these lines is a percussive attack followed
immediately by a sustained pitch and/or sound which seems to arise out of that attack.
This sonority, or some minor variation of it, is found in nearly every work. It is at once a kind
of philosophical integrity and simply a sound that Tenney likes. It is a consistent
compositional choice which contributes to the individuality of his music. The second
related formal idea is the swell (pun intended, I'm sure), found in so many pieces. The
"swell" is one of the simplest geometric forms: an arch with no plateau, the two sides of an
isosceles triangle, the movement from nothingness to existence and then back again.
Often it is the entire work, as in several of the postcard pieces (which can be considered
either a single clang, a swell, or both). Tenney's awareness of his own interest in these sonic
ideas is reflected in the fact that several works draw their titles from them. On a more
mundane level, they are identifiable musical signatures.

Orchestration

Something that has
often been overlooked in Tenney's style is the importance of his instrumental technique.
Tenney's mastery of instrumental nuance is essential to the clarity and uncompromised
quality of his work. Pieces like Seeds are more obvious examples of this, but the subtle use
of orchestration is even more important in Clang, Three Indigenous Songs, the Harmonia,
Quiet Fan, and several other works in which the orchestrational virtuosity is not quite so
much in the foreground. Yet the instrumental choices are of paramount importance to the
final clarity of these pieces, and to their particular emotive effects.

Musical "Personality"

One final aspect of Tenney's work that should not be overlooked but is not
so easily demonstrated is the fundamental good-natured quality of it. He is genuinely glad
to be composing and making music, and this joy is as present in the music as the complex
musical and intellectual ideas. His music is a kind of exultation of musical truths and the
joy of experiment, and he is not above even poking fun at himself (punning shamelessly
and often). This childlike quality of Jim's music most absorbs my own interest, and is
perhaps what makes it so attractive to many others as well.

I. Seeds (1956-1961; for flute, clarinet (in A), bassoon, horn, violin and 'cello)

Before discussing the larger work,
Seeds, I would briefly mention another work which originates in this period (later revised, in
1971), and which has much in common with it. Thirteen Ways of Looking at a Blackbird,
based on a poem by Wallace Stevens, and originally scored for two flutes, violin, viola,
'cello, and tenor voice (later rescored for flute/alto flute, oboe, viola, 'cello, bass and bass
voice) is a fine example of Tenney's early writing. Though not quite as developed in
contrapuntal and orchestrational technique as Seeds, it remains a fresh and beautiful
piece, and should certainly be performed more often than it is (almost never that I know of,
with the exception of one performance at Cal. Arts, the source of the recording I've heard).
If time and space permitted analysis of it here, I think we would discover many of the same
intervallic, formal and contrapuntal explorations that Tenney was later to use in Seeds,
Monody, and other works. Seeds, one of Tenney's earliest works (though it was revised
over a five year period), remains one of his most satisfying. In one sense, it is a six
movement study in the use of simple melodic motives (notably the minor second and
unison) to generate dense, yet lyrical musical structures. There are several techniques
used consistently throughout the work, and indeed, these techniques become the "seeds"
of many of his later musical ideas. For example, the use of klangfarbenmelodie, especially
in the case of one instrument attacking and a second sustaining a given pitch (the "clang"
spoken of above) is found in several later pieces, including the Swell Pieces, the Harmonia,
Clang, Hey When I Sing..., Crystal Canon, and in a way the Chorales and Three Indigenous
Songs. This idea is also something of a distinguishing feature in the music of Varese, as can
be seen in this example from Deserts (Example I.1; note the use of minor seconds as well).
Each of the six movements, especially the first three, is a rather singular, focused
development of a simple idea, although in IV - VI this idea is not so easy to put into words,
and is perhaps better understood aurally. This monothematic trend in Tenney's music
grows more and more pervasive later on, finally becoming one of the central points of
Tenney's aesthetic. A strong interest in timbre, and, in a related way, vertical harmonic
relationships is present in Seeds, and this also presages much of Tenney's future
exploration. Seeds clearly shows Tenney's earliest musical influences: Webern (whose
complete works were first appearing on record in the late 1950's), and Varese (whom
Tenney came to know personally, and who must certainly be considered Tenney's most
dominant influence. In fact, Tenney is, at the present time, assisting in new editions of
Varese's 128 Example I.1 Pian EL) Psf2. (5 14-15) music). There are four rather specific
musical ideas central to this work:

1) the frequent use of the minor second (minor ninth, major seventh), which indirectly, as in the case of Varese and Ruggles, generates quasi-serial structures with little pitch class repetition;
2) short, transparent motives, continually transformed (as in Webern);
3) the special case of klangfarbenmelodie where a unison is passed from one timbre to the next ("note-passing");
4) very little extended melodic development of any kind, combined with a delicately interwoven contrapuntal texture.

(Both #3 and #4 are characteristic of Varese as well.) Formally, Tenney seems to have
set up a rather deliberate restriction for himself: the brevity of each movement does not
allow him to work with extended forms. In fact, I can think of no work, with the possible
exception of the earliest pieces (like Thirteen Ways..., Essay for Chamber Orchestra,
Sonata for 10 Wind Instruments) in which Tenney has been interested in the type of
"dramatic" development conventionally associated with a large scale musical form.
Although this avoidance of a certain mode of composition has become more common in
the last twenty years, in the late 1950's and early 1960's it was not so - John Cage and
others had begun to experiment with the removal of the composer from composition, but
Tenney was one of the first to devote his explorations to what might be called "self-generating" pieces. Tenney's debt to and friendship with Cage are profound and long-lasting, and I believe he was one of the earliest composers to fully embrace and
understand the latter's ideas, and then to develop and expand upon them. The brevity is, I
think, clearly out of interest and intent. In these short movements, Tenney is not thinking
"formally," but in terms of musical essence: stripping the music down to motives, timbres
(aggregates?), and what he later called clangs. Where traditional formal notions appear
(sectionality, recapitulatory ideas, drama), they are straightforward and elegant. There is as
well a certain toying with serial procedures, though I think that Tenney devises his own
concepts of seriality in much the same way Wolpe and Ruggles did - using a defined set of
compositional and motivic primitives to organize the pitch material. This is not serialism
per se, but rather the result of a systematic avoidance of pitch class repetition to negate
tonal tendencies. (The following analyses/description should, in an ideal world,
accompany the score and/or a recording of Seeds. In the absence of these, this chapter
might appear relatively dense, but still (I hope) comprehensible to the reader. I have
attempted a more traditionally detailed analysis in this chapter because of the similar
intent of the work itself.) Movement I utilizes all of the above mentioned techniques. The
opening flute motive (Example I.2), a series of "minor-second" intervals, generates nearly
the entire movement. It reappears exactly four times, the last bringing in a sort of
recapitulation at measure 12 (accompanied by a return to the initial tempo). Although it is
split up, inverted, condensed, etc. throughout, its most perceptible variations occur in
ms. 7-8 (Example I.3). Note the Varese-like quality of this opening solo
flute entrance (as in Example I.4, taken from Octandre). Note-passing is common as well,
beginning in the first measure where the final E in the flute is preceded by a sixteenth-note
anticipation in the 'cello, which holds the pitch through measure two. These passings
occur in almost every measure, and the structure of the overall timbre is in some ways
simply a network of the minor second and note-passing sounds. A third motif in this
movement is a vertical texture derived from interlocking minor seconds, first heard in
measure 5 with the entrance of all the instruments for the first time (Example I.5). The
sonority recurs, like a chime tolling the musical progress, every few measures. Another
interesting aspect of note-passing, or timbral melody, is the passing of the minor second
motive through different instruments in complex ways. Often, the source of the first note in
a half-step pair is in another instrument, as in the E in the violin to the D# in the flute in
measure six. (This measure shows several uses of all these ideas). Measure six also marks
the end of the first "section", and the low 'cello and bassoon theme in the next measure
begins the next, characterized by sustained minor seconds (Example I.6a; I.6b) and more
extended melodic passages (such as the already cited figure in measure 8).

[Examples I.3 (ms. 7), I.4 (Octandre, ms. 1), I.5 (ms. 5)]

Measures 10-11, a slow duet for clarinet and bassoon in minor
seconds and rhythmic unison, return the piece to the initial theme, in measure 12, a kind of
third section similar to the first in texture and motivic figuration. Note that in measures
10-11 (see Example I.6b), the clarinet is several times voiced under the bassoon, in much the
same way as Varese does so often; for example, the frequent voicing of the piccolo under
the Eb clarinet in the second movement of Octandre, and in the final measure voicing of
the clarinet under the bassoon in that same movement. Example I.7 shows this same type
of voicing in a common chord from Integrales.

[Examples I.6a (bassoon, ms. 7), I.6b, I.7 (Integrales)]

Much later on, in "A History of 'Consonance' and
'Dissonance'", Tenney himself analyzes this same voicing in Varese in a discussion of
Helmholtz's idea that the instrumental voicings of certain dyads affect the degree of
dissonance because of the particular spectral configurations of each instrument.
Helmholtz declares that, for acoustical reasons, a major third will "sound better" between
a clarinet and oboe when the clarinet takes D and the oboe F# (because of the coincidence
of the 5th partial of the clarinet with the fourth of the oboe). If it were voiced the other way,
the clarinet's lack of even partials would alter this sonority. Tenney relates this nicely to
Varèse: "Now the question as to which of these two arrangements sounds "better" than the
other obviously depends on what I have called "esthetic attitudes" toward consonance and
dissonance, and it is possible to cite musical examples - especially from the 20th-century
literature - in which the same acoustical considerations (and perhaps, therefore, the same
form of the CDC) may well have determined the composer's decisions regarding
instrumentation, even though the esthetic attitudes have been reversed. Thus, for
example, the wonderfully searing dissonance (in the sense of CDC-5) created by the
piccolo and Eb clarinet at rehearsal number 1 (measure 16 in the revised edition) near the
beginning of the second movement of Varese's Octandre would have been far less
effective (assuming, as we may, that a strong dissonance is what Varese wanted here) if
the parts had been arranged in the more "normal" way, with the piccolo above the clarinet,
since the latter instrument has very little if any energy in its second partial (i.e. at the
octave) for the production of beats with the high F, whereas most of the energy in the
piccolo's tone is probably concentrated precisely in that second partial." (p. 113)
Movement II is, for lack of a better phrase, a kind of textural rondo. It begins with a single
note being "passed" through all the instruments (horn, clarinet, bassoon, 'cello, flute, and
violin), a kind of natural extension of one of the generating motifs of the piece (like the
famous "single-note" movement of Carter's Eight Etudes and a Fantasy for woodwind
quintet (1952)). This D above middle C is the sole pitch of the first six measures. At
measure 7, there is a unison flute and violin theme, consisting of the three notes E, Bb, and
Eb, and this two measure duet can be seen as either a release from the first six measures
or as a separate variation in itself. The following three measures (9-11) are again middle D,
with the order and rhythmic material being almost the same as the first three measures
(though the late entrances of the flute and violin are shifted earlier; each more or
less takes on the rhythm the other had before). The next section, at measure thirteen, is
similar to the flute/violin variation. The 'cello's theme, B-F-C descending, is the inversion
of the earlier theme, and the bassoon's C#-C ascending is followed closely in canon by the
'cello. Note that when these "variations" of the unison theme occur, they are in half step
relationship, or at least heavily based on half steps to the D natural. Measure 14 marks the
beginning of the final section, which commences with the D being passed once again, but
now all the other themes enter above it: the flute and violin theme in measure 16, the C#-C
'cello and bassoon idea of ms. 14-15. In measures 17-18, there is a sort of climax of
sustained D's, earlier motives, and the use of minor ninths. At measure 19 (through 22)
there is a sudden thinning of texture, and once again each instrument in its turn sounds the
D natural, ending softly in the 'cello. This movement seems to have the most transparent
form of the six, yet the subtleties of rhythmic, dynamic and orchestrational manipulation
are quite brilliant. An astonishing variety of musical ideas is packed into a two minute (22
measure) span, and in terms of overall elegance of design and sheer beauty of form and
technique, this movement must certainly be considered one of the most interesting
examples of the miniature form. Movement III comes close to using a row, though it does
not seem to employ any other standard serial techniques. The opening four-and-one-half
measures (ending in ms. 5 with the C#/D sonority in the bassoon and 'cello) state the pitch
sequence E-D#-G-Ab-F#-F natural-B-C, with some minor overlappings (like the bassoon
repeat of E). The timbral shifts are quite beautiful and inventive, as in the first two
measures between the horn and bassoon (Example I.8). Also, for the first time there seems
to be a use of the "natural" pairings of the
instruments: violin/'cello, horn/bassoon and flute/clarinet. In contrast to the quite airy and
spacious textures of the first four measures, the second presentation of the "row" happens
quite rapidly, in the next two measures (5-6), beginning on the clarinet D#, in almost the
same sequence. The last two measures consist of one sustained chord (Example I.9), with
an underlying soft pulsating bassoon, whose intervallic structure is predominantly half-step relations. The tri-sectional form of this movement is rendered transparent by the
contrasting textures of each part, yet there is an underlying coherence to it all, effected by
the pitch system and by very ingenious transitional timbres at the "seams" (e.g. the violin
glissando at the end of measure four and the sustained notes in the flute, 'cello, and violin
arising from measure six). As in the other movements, there is an economy of rhythmic
material that is easy to see in the score, and obvious to the ear, but hard to define. Certain
simple motives are used consistently, as in Example I.10a,b and c, and in the midst of all
the other complex contrapuntal activity this helps to retain (in all of the movements) a
sense of stability and directness not unlike that achieved by the use of a fixed set of
skeletal rhythmic prototypes in Webern's Concerto (opus 24). Although the "essentials" -
the germinating ideas and motives - are substantially the same, the forms of the first three
movements are relatively precise and well defined, while in the last three these same
primitives are used to generate larger and freer structures. Movements four [a gap in the source follows; Examples I.9 and I.10a-c are score excerpts, and the text resumes mid-discussion of Quiet Fan. The annotations to example VI.6 read "moving bass (slow)", "moving bass (quick)", "bass in harmony, slow moving".] The numbers in example (VI.6) indicate the number of beats (an eighth equals approx. 104, making the total length a little
over 14 minutes). We can see as well that there is an overall dynamic "envelope" over the piece, which also is self-replicating at lower levels (as in the last section). In Quiet Fan,
we can see clearly Tenney's interest in careful, consistent hierarchical structuring of
simple processes, not unlike his formal procedures in the computer works. Like a large
snowflake, Quiet Fan can be seen to have the same shape in several levels of detail. It is a
more complex work, upon inspection, than I'd thought after an initial hearing or two, and its
particular place in the chronology and history of Tenney's music reveals much about the
development of many of his musical ideas and techniques.

Hey When I Sing These Four Songs Hey Look What Happens (SATB) is my personal favorite of Tenney's music, and in
many ways it is one of the simplest pieces he has written. It is also one of the most joyous,
and it contains many of the more interesting characteristics of all his work: a stripping
away of unnecessary dramatic materials to produce an even greater, starkly dramatic
effect; a very simple and predictable form; and the probing, sensitive, and imaginative use
of indigenous materials. The whole score is printed in Example VI.7. Analytically, only a few
things need be said aside from the fact that it deserves to be performed more often (I don't
know of any recent performances). The technical aspects of the rhythm and melody are
similar to much of his later work: the rhythms are derived from the natural speech rhythms
(this may be most clearly seen in the last line of the basses), and the pitch materials are
derived from a limited set, a pentatonic closely related to the harmonic series (in this case:
C-D-E-G-Bb, or 1,9,5,3,7 - the first five odd numbers of the harmonic series). This latter is a
simple, early usage of the type of harmonic thinking that would become the predominant
tonal theme in virtually all of his later music. The "swell" and "clang" ideas are also present
here, in the sustain and phrasing of the upper voices from the initial attacks of the bass. I
cannot overstate the need to hear this short but powerful work - music more illustrative of
James Tenney than any analysis or description could ever aspire to be.

(Example VI.7: the full score of Hey When I Sing These Four Songs Hey Look What Happens; the text is a translation from the Iroquois by Jerome Rothenberg. Alpheus Music, Hollywood, Calif., © 1971 James Tenney. A note on the score reads: "tenors and basses sing each line through twice; sopranos and altos join in the second time only.")

VII. Postal Pieces

The postal pieces, written between 1965 and
1971, but actually produced in 1971 (with the help of Alison Knowles and Marie McRoy at
Cal. Arts), are a series of ten short works printed on post cards. Several of the pieces were
written in and around 1971 for a few of Tenney's friends at Cal. Arts. His explanation of the
set is that he hated to write letters, and since he had a number of very short compositions,
what could be easier than to make postcards out of them. Whether this was an idea
original to Tenney or not is rather academic (Pauline Oliveros' wonderful postcards are the
only other example I know), but musically, I think that the series is certainly unique. The set
consists of:

Scorecard #1: Beast (7/30/71)
Scorecard #2: A Rose is a Rose is a Round (3/70)
Scorecard #3: (night) (8/6/71)
Scorecard #4: Koan (8/16/71)
Scorecard #5: Maximusic (6/16/65)
Scorecard #6: Swell Piece (12/67)
Scorecard #7: Swell Piece #2 and Swell Piece #3 (March 1971)
Scorecard #8: August Harp (8/17/71)
Scorecard #9: Cellogram (8/17/71)
Scorecard #10: Having Never Written a Note for Percussion (8/16/71)

(The entire set is reproduced in the following pages). Seven of the ten pieces were written in 1971 (the same year as Quiet Fan and Hey
When I Sing..., with the second two "swell" pieces in the same month as the latter), and of
these, Beast, (night), Having Never Written a Note..., (two percussion pieces written on the
same day), Koan, August Harp and Cellogram (the latter two again written on the same
day, Koan the day before) are all written within about two weeks of each other. On the back
of each is the indication 1954-1971 for the set, which is somewhat confusing since none of
the pieces seem to have been written that early. He had originally intended to include two
small songs written on short poems by the important experimental filmmaker Stan
Brakhage, but that was never completed. Most of the pieces deal with one of three fundamental ideas: intonation; the swell idea, which we have seen earlier but which here becomes explicit; and the unadorned use of musical structures which will produce meditative perceptual states. In these latter, the listener, and to some extent the performer, have to create their own "dramas" and interpretations (in this sense, Tenney and others sometimes refer to all the pieces as "koans", although only one is so named). None of these ideas are new in the context of Tenney's work, but in these pieces he is presenting them almost as theorems, and leaves no doubt as to their intent. Tenney elucidated these ideas
at some length in an interview (1978) with Canadian composer, writer and instrument
builder Gayle Young: "GY: How do you deal with musical form, in that light? You obviously
wouldn't be concerned with the release of tension which is the conclusion of the usual
type of classical music? JT: No. I think of form as the same thing, on a larger temporal
scale, as what's called content on a smaller scale. That old form/content dichotomy is, to
me, a spurious one, because they involve the same thing at different hierarchical levels of
perception. What we take to be the substance or content of some sound - say, a string
quartet - is really the result of forms - formal shapes and structures at a microscopic, or
'microphonic' level: particular envelopes, wave-forms, and sequences of these - details in
the signal. All form is just the same thing at a larger level, involving spans of time over, say,
five or ten or twenty minutes or more. It's precisely the same thing physically. When you
begin to see it that way, you can begin to feel it musically. So my interest in form is identical
to my interest in sound (LAUGHS). GY: Your postcard pieces, for example, are essentially a
single musical gesture that continues until it's over. JT: Those pieces have a lot to do with
this attitude toward sound, but also with something else, which is the notion of the
avoidance of drama. They involve a very high degree of predictability. If the audience can
just believe it, after they've heard the first twenty seconds of the piece, they can almost
determine what's going to happen the whole rest of the time. When they know that's the case, they don't have to worry about it anymore; they don't have to sit on the edge of their
seats... GY: Waiting for the big bang. JT: What they can do is begin to really listen to the
sounds, get inside them, notice the details, and consider or meditate on the overall shape
of the piece, simple as it may be. It's often interesting how within a simple shape there can
be relationships that are surprising. It's curious - in a way, the result in this highly
determinate situation is the same as in an indeterminate one, where things are changing so
rapidly and unpredictably that you lose any sense of drama there, too. Now people react to
that in two different ways: some are angry about it, because they expect, and demand,
meaningful drama. But if you can relax that demand and say 'no, this is not drama, this is
just 'change' (LAUGHS) - 194 then you can listen to the sounds for themselves rather than
in relation to what proceeded or what will follow. GY: Would you go so far as to say, 'Sound
for the sake of sound'? JT: It's sound for the sake of perceptual insight - some kind of
perceptual revelation. Somehow it seems to me that that's what we're all doing - searching
to understand our own perceptual processes. In a way, science is about the same thing,
but its enterprise seems to understand the nature of reality through thought and
intellection. It seems to me art is about understanding reality to the same extent, and as
singularly, but through a different modality - through perception. (p. 16 "Only Paper Today"
- June 1978) Beast, written for Buell Neidlinger, the great jazz and classical bassist, is one
of the most well known and performed of the set. It is a study in rhythm, using the low
frequency first order difference tones (or slow beats) produced by the simultaneous
sounding of two bass strings whose relative intonation is constantly changing. The bass
low E-string is tuned to Eb, and assuming an A=440 c.p.s. (55 c.p.s. three octaves below), a
little calculation shows that, as Tenney says, the open tritone below it has a frequency of
about 38.8 c.p.s. The maximum number of beats produced, or the "quickest" tempo of the
piece is about 16 per second. (In comparison, a just tuned E below would have a frequency
of 41.25 c.p.s., producing 13.75 beats per second). One can see that the number of beats
per second produced is directly proportional to the distance from the unison, since the
frequency differences increase accordingly. (This should not of course be confused with
the relative consonance of an interval, which might be related more closely to the entire
system of beat frequencies between the spectra of two tones. See Tenney's "consonance-
dissonance" theory for more on this). Beast, whose title is a double-entendre on the word
"beats" and on jazz vernacular (in homage to Neidlinger's virtuosity), is seven minutes
long, and its form is rather simply related to the Fibonacci series and to the idea of
recursive replication of inner forms (as in so much of his other music, like Quiet Fan, and
the computer pieces). Incidentally, this type of thinking predates by many years the recent
interest of many composers in the use of fractals, functions whose "shape" is replicated at
infinitely many levels of detail. The score indicates the "target" values for the beat
frequencies, connecting them sinusoidally, with each of the four large humps made up of
smaller ones which resemble the "swell" type shape. The durations of the four large
humps, whose respective target beat-per-second values are 3, 6, 10, and 15 (in a roughly exponential series), are 1 minute, 1 minute, 2 minutes and 3 minutes (as in the Fibonacci series). In addition, the intermediate values in each of these larger shapes are
1,3,6,10 and 15. The intervals that roughly correspond with these target values are: a 53 cents flat major third (55/45 = 11/9 = 10 beats per second); a slightly flat major second (55/49 = 6 beats per second); a 3 cents flat semitone (55/52 = 3 beats per second); and a sixth-tone (55/54 = 1 beat per second). In performance, this is all done of course by the players' ears; they need not be versed in the intonational arithmetic. I have now heard Beast several times, and my impression is that of a stark and unassumingly beautiful sonic meditation that, like the other pieces, asks more questions than it answers.

A Rose is a
Rose is a Round is written for Tenney's old friend Philip Corner who, for a short period in the
late 1960's, composed rounds almost exclusively. Tenney's postcard (the only one in color
- rosy pink) is a very direct homage to his friend's interest. (Corner has told me that
originally the intention was that of an exchange of pieces - a Tenney round for a Corner rag,
but Corner has been delinquent in his end of the trade). It is, I think, meant as a kind of
amusement, and is a clever use of simple diatonic melody that cycles out of phase with
itself. It is written in circular notation for reasons more visual than musical, and could just
as easily be written conventionally. Each successive musical phrase starts on a different
word of the three word pattern, resulting in the three repeating lyrics (A ROSE IS/ ROSE IS
A/ IS A ROSE), since the melody has 11 notes in it (non-divisible by three until it is repeated
three times). The only remaining "trick" is the traditional canonic requirement of finding the
best place to start the repetition (I use the word "start" loosely here). Tenney's solution,
beginning the "inner" melody six beats behind the outer, minimizes the number of vertical
seconds in the melody and emphasizes a conventionally consonant contrapuntal texture.
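This kind of canonic search can be mimicked mechanically. The sketch below is only an illustration of the procedure, not Tenney's melody (which is not reproduced here), and the offsets are counted in notes of the cycle rather than beats: for a hypothetical 11-note diatonic tune, it counts the vertical seconds produced by each possible entry offset of a two-voice round.

```python
# Degrees 0-6 stand for the diatonic scale steps (0 = tonic).
# MELODY is a made-up 11-note tune, NOT Tenney's.
MELODY = [0, 2, 4, 3, 1, 5, 4, 2, 6, 5, 3]

def is_second(a, b):
    """A vertical 'second' between two diatonic degrees: the two
    voices sit on adjacent scale steps (in any octave)."""
    return abs(a - b) % 7 in (1, 6)

def seconds_per_offset(melody):
    """For each entry offset k, count the vertical seconds formed
    when the second voice trails the first by k notes of the cycle."""
    n = len(melody)
    return {k: sum(is_second(melody[i], melody[(i - k) % n])
                   for i in range(n))
            for k in range(1, n)}

counts = seconds_per_offset(MELODY)
best = min(counts, key=counts.get)
print(counts)
print("least clashing entry offset:", best)
```

An exhaustive tabulation like this makes it easy to see why one entry point stands out, though Tenney's actual solution also weighs the harmonic, metrical, and textual alignments discussed above.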
The second best solution (Example VII.3, beginning on the third eighth) will not observe the
metrical and lyrical structure and will also result in two fourths. There are other reasons
why Tenney's is the "optimal" solution. As Philip Corner has pointed out, Tenney's canon is
also interesting in terms of its harmonic (tonic-dominant) implications, symmetry,
contrapuntal obliquity, and textual alignment. What can be seen from this is how carefully
Tenney explored someone else's idea, and I think this meticulousness is central to the idea
of proper homage prevalent in so much of his music. (night), for the composer Harold
Budd, whose lush and lyrical music made a deep impression on Tenney at Cal. Arts, and
who became a good friend, is a piece about which little can be said. It seems to be a kind
of musical poetic evocation of the nature of Budd's music, and is rather singular in the set and in Tenney's opus as well.

(Example VII.1: Beast, for string bass, for Buell Neidlinger - the score, whose instructions call for the low string to be tuned down to Eb and indicate the gradual changes in the rate of the beats produced between the two strings.)

(Example VII.2: A Rose is a Rose is a Round, for Philip Corner - James Tenney, March 1970; the round in its circular notation.)

(Example VII.3: Tenney's solution and the alternative solution for the canonic entry.)

Koan is written for violinist and composer Malcolm
Goldstein, one of the co-founders of Tone Roads with Tenney and Corner. A kind of
miniature For Ann (rising), it consists of a perpetually ascending tremolando double-stop.
The continuity is effected by dovetailing the glissandi on adjacent strings (e.g., the G rises
to an A above D before the D string begins to ascend). In a sense, it is a tribute to and study
of the rather personal and introspective nature of Goldstein's work, both as performer and
composer. It can be quite long, and Goldstein has said that although at first it was
physically difficult to perform, on successive playings the piece became much easier, as
he relaxed and ceased to worry about it. As I've mentioned above, a koan is a "question" in
classical zen tradition which a teacher or master poses to his student, not so much to
answer as to ponder. Typically, it involves some apparent paradox or inconsistency, as in
"There is a high mountain in a range where all others have snow on top, yet this one is
snowless". Something that has interested me about this piece, after several hearings, is
the question "In this koan, who is the teacher and who is the student?" Maximusic was
written for another good friend of Tenney's: percussionist, composer, sculptor, etc., Max Neuhaus. This piece is an inversion of the swell idea, with the attack happening in the
middle. It is the earliest of the postal pieces, and I think that the era in which it was written
(the middle 60's, when Tenney was involved in various artistic movements in N.Y.C. like
FLUXUS and the "art happenings") has something to do with the form and nature of the
piece. (Tenney has said that it is also a "parody on European music of that period"). Swell
Piece for Alison Knowles, N.Y. artist, sculptor, composer and poet, is perhaps the
expression of the swell idea in its simplest form. It is an early example of what is currently
called "minimalism", though I think Tenney would likely reject that description of any of his
work. It was about this time, 1967, that Alison Knowles created the famous House of Dust,
a computer aided poem/sculpture, with Tenney's assistance. (The poem/computer
program actually grew out of an informal "course" in FORTRAN Tenney gave to several of
his friends, including Philip Corner, Dick Higgins, Alison Knowles, Jackson MacLow, Max
Neuhaus, Nam June Paik, and Steve Reich). Swell Piece #2 and #3 were written for Pauline
Oliveros and LaMonte Young respectively, two composers whose work Tenney admires.
These are two lemmas (or variations) on the swell "theorem". The first stresses personal
sonic/perceptual processes (with respect to Oliveros' sonic meditations), and the second
is a "parody" of LaMonte Young's famous "B-F# (hold for a very long time)". August Harp
was written for the harpist Susan Allen (in August), a study of possible pedal combinations
of an adjacent diatonic tetrachord. Each one of the combinations is to be played four
times, until the harpist feels she has run out of combinations. Since each of the four strings
can take three possible values, there are 81 possible combinations (thus 324 notes at a
slow tempo). Note that many of the pedal combinations produce enharmonic octave
doublings, with seconds being the most predominant interval, as a kind of secondary
statistical resultant. Cellogram, written for Joel Krosnick the same day as August Harp, is
similar to Beast in its use of resultant tones, similar to Koan in instrumental technique, and
strangely similar to Quiet Fan in its use of a kind of aborted coda at the end. Once again,
the ideas of inner canonical form and replication of small shapes at large levels are
present. Having Never Written a Note for Percussion is my favorite of the postal pieces,
and extremely popular among many percussionists I've known.

(Example VII.4: (night), for Harold Budd - James Tenney, 8/6/71. The score reads: very soft ... very long ... nearly white; it is headed "for percussion perhaps, or....".)

(Example VII.5: Koan, for solo violin, for Malcolm Goldstein - James Tenney, 8/16/71. A very slow glissando in a fairly slow tremolo (8-10 note-pairs per bow), with the instruction "gradually move toward bridge, until nothing but noise is heard.")

Example VII.6: Maximusic, for Max Neuhaus - James Tenney, 6/16/65:
(1) Soft roll on large cymbal; constant, resonant, very long.
(2) Sudden loud, fast improvisation on all the other (percussion) instruments except the tam-tam(s) - especially (but not only) non-sustaining ones; constant texture; continue until nearly exhausted from the physical effort, but not as long as (1); end with tam-tam(s) (not used until now) - just one blow, as loud as possible.
(3) Same as (1), but now inaudible until all the other sounds have faded; continue ad lib but not as long as (1) or (2), then let the cymbal fade out by itself.

Example VII.7: Swell Piece, for Alison Knowles - James Tenney, 12/67:
To be performed by any number of instruments beyond three, and lasting any length of time previously agreed upon. Each performer plays one long tone after another (actual durations and pitches free and independent). Each tone begins as softly as possible, builds up to maximum intensity, then fades away again into (individual) silence. Within each tone, as little change of pitch or timbre as possible, in spite of the intensity changes.

Example VII.8: Swell Piece No. 2 (for any five or more different sustaining instruments), for Pauline Oliveros, and Swell Piece No. 3 (for any eight or more different sustaining instruments), with respect to LaMonte Young and his Composition 1960, No. 7 - James Tenney, March 1971:
Swell Piece No. 2: Each performer plays A-440, beginning as softly as possible, building up to maximum intensity, then fading away again into (individual) silence. This process is repeated by each performer in a way that is rhythmically independent of any other performer, until a previously agreed-upon length of time has elapsed. Within each tone, as little change of pitch or timbre as possible.
Swell Piece No. 3: Divide the instruments into two approximately equal-numbered groups, primarily on the basis of a treble-bass distinction. The higher-pitched group plays the F-sharp a tritone above middle C, the lower-pitched group the B a fifth below. Play as in Swell Piece No. 2, but "for a long time".

Example VII.9: August Harp, for Susan Allen - James Tenney, 8/17/71:
Let the tempo be determined by, and synchronous with, the breath. Play this figure four times with each pedal-combination. After every fourth repetition, improvise a new pedal-change for one or more of these four strings. Try not to repeat a pedal-combination already used. Continue as long as any variation still seems possible.

(Example VII.10: Cellogram, for Joel Krosnick - James Tenney, 8/17/71; the score, with each sustained note lasting 10-30 seconds.)

Written for John Bergamo, percussion teacher at Cal. Arts, the piece (and the multiple
entendre) usually consists of one continuous roll on a tamtam (although that instrument
does not appear on the score, and I think it would be interesting to perform the piece
occasionally on another instrument), with a crescendo from quadruple piano to quadruple
forte and then back down again. The only duration indication is "very long", and the several
performances I've heard range from eight minutes to about 20. All are quite astonishing, as
the gentle inaudible hum of the instrument builds into a complex and somewhat 202
frightening chaos of non-periodic spectra, room resonances, illusory tones, and
indescribable concurrences with the listener's psyche. I think it is fitting that it is the last of
the scorecards, for in a way it most clearly expresses the intent of the whole set.
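The difference-frequency arithmetic described earlier for Beast (and relevant to Cellogram's resultant tones) is easy to check. A minimal sketch, assuming equal temperament for the Eb tuning and the A string at 55 c.p.s., reproducing the figures quoted in the text:

```python
import math

A_STRING = 55.0                      # c.p.s., three octaves below A=440

# Equal-tempered Eb a tritone below the open A string:
eb = A_STRING * 2 ** (-6 / 12)       # ~38.89 c.p.s.
print(f"Eb string: {eb:.1f} c.p.s., "
      f"max beat rate: {A_STRING - eb:.1f} per second")   # ~16

# A just-tuned E below (ratio 3/4) would beat more slowly:
just_e = A_STRING * 3 / 4            # 41.25 c.p.s.
print(f"just E: {just_e} c.p.s., beats: {A_STRING - just_e} per second")

# The "target" beat rates and the ratios the text associates with them:
for hi, lo in [(55, 54), (55, 52), (55, 49), (55, 45)]:
    cents = 1200 * math.log2(hi / lo)
    print(f"{hi}/{lo}: {hi - lo} beats per second, {cents:.0f} cents")
```

The printed cents values confirm the sixth-tone (about 32 cents), the slightly flat semitone (about 97 cents), and the 11/9 neutral third (about 347 cents, i.e. roughly 53 cents flat of an equal-tempered major third).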
Incidentally, the titular claim, as far as I know, was true.

(Example VII.11: Having Never Written a Note for Percussion, for John Bergamo - James Tenney, 8/6/71. A single "very long" roll, pppp to ffff and back.)

VIII. Clang

In Clang, we see Tenney's first use of the "diminished"
mode made up of the first eight primes of the overtone series: 1, 3, 5, 7, 11, 13, 17, 19. In ascending scale order (octave reduced, over a fundamental E) it is as follows (Example VIII.1):

E (1, 0 cents), F (17, +5), G (19, -2), G# (5, -14), Bb (11, -49), B (3, +2), C# (13, -59), D (7, -31)

This scale, though not this particular justification for it perhaps, has
been of some importance in the music of the twentieth century, in everything from
Stravinsky, Ives and Lou Harrison to Herbie Hancock and jazz. Harrison calls it the
"octaphonic mode" and has used it often, most recently in the riveting second movement
of his Double Concerto for violin, 'cello, and gamelan. It is also called the "octatonic", and
jazz players know it as the "altered dominant" or the alternate mode of the diminished scale
(which is whole/half step rather than half/whole step). Its interesting property of containing
two major triads a tritone apart has been of some harmonic consequence in modern
music, and in the music of Chopin, Scriabin, and many others. Tenney's thoughts on the
ramifications of these chords and this scale are best left for him to express, but it clearly
ties in with many of his thoughts on the nature of consonance and dissonance, and the
acoustical foundations for these concepts. Since Clang, most of his music has concerned
itself with the overtone series, and this scale in particular. Clang is the first statement of
the idea, and is extremely straightforward and elegant in this regard. Tenney borrows the
title from his own earlier reference to gestalt theory, and as in Fabric for Che, conceives of
the piece as "one single modulated sonic event". That it is itself a "swell" is no surprise,
and like many other works (August Harp, Chorales) it concerns itself both on the large and
small scale with the single breath. "... each player chooses, at random, one after another
of these available pitches... and plays it very softly (almost inaudibly), gradually increasing
the intensity to the dynamic level indicated..., then gradually decreasing the intensity again
to inaudibility... this crescendo-decrescendo sequence 204 should be timed so that both
segments of the tone are of approximately the same length, and so that the total duration
of the tone is as long as it may comfortably be within one breath..." (- from Instructions to
Clang) Clang is scored for orchestra, and the score consists of available pitches for each
instrument in a set of temporal sections, gradually building up the entire scale (sections 1-
7) and then breaking it down over the course of about fifteen minutes (sections 8b-). The
buildup is achieved by gradually widening the "bandwidth" around the initial E natural, until
the entire orchestral range is filled. The rate of density increase is of course exponential, as
is the decay after about ten and a half minutes, and the timbral manipulation achieved by
the choice of instrument entrances is done with great care to achieve a smooth textural
transition throughout. The decay is a rather interesting octaval canon, beginning with the
higher primes in the lowest octaves. At section 8b, the pitches F and G drop out (17th and
19th harmonic) in the lowest register. In the next section, these same pitches drop out in
the next highest register while A# and C# (11th and 13th) drop out in the lowest. In the
following section, the pattern continues up into the next highest register (17 and 19,11 and
13,5 and 7 for the three lowest octaves starting from the top) and so on until we perceive
an approximation of the actual harmonic series, since the highest partials are only present
in the higher octaves. Eventually, they drop out as well, and the piece ends with a six
octave unison E. Note that the rate of "pitch-loss" is also exponential. Clang, one of
Tenney's finest and clearest works, awaits a valid performance. It has been played only
once, to my knowledge, in a kind of reading by the L.A. Philharmonic, and one senses from
the recording that the musicians were not entirely committed to the act of playing simple,
sustained tones. With the growing acceptance of new music by more conservatively
trained orchestral musicians, we might hope to someday hear the piece as it is intended,
and I think this will be quite an experience, though we should have waited ten years for it. In
the Aeolian Mode (reproduced in full in Example VIII.2) is one of Tenney's simplest pieces,
and was also written about this time (along with a few other experiments along the same
lines). It was written for the California New Music Ensemble, an excellent group of Cal. Arts
student performers. Like many of the postal pieces, it very simply expresses Tenney's
continued interest in soft, continuous, and unassuming textures. Tenney has never been
particularly interested in improvisation, and this piece is one of the few cases where he allows the musicians to improvise melodically, though in a very limited way.

Example VIII.2: In the Aeolian Mode (for the California New Music Ensemble) - James Tenney, 3/73. For prepared piano, marimba, vibraphone, flute and alto voice (this ensemble may be
augmented by harp, clarinet, muted violin or viola, and/or other similarly gentle
instrumental timbres). Each player improvises a continuous melodic line on these pitches
(always beginning on A, and using the G and F as neighboring tones only) -- legato, mp,
mostly in eighth-notes at about mm = 180, with all players synchronous on the eighths. Let
a performance begin with the prepared piano, the other players entering freely.
Occasionally any player may drop out for a short time, but this is to be preceded by a
"cadence" consisting of a sequence of different A's (in any octave), at any higher multiple
of the eighth-note unit (i.e. quarters, dotted quarters, half-notes, etc.). The pianist should
prepare the following strings in such a way that the aggregates produced each contain a
prominent pitch at the octave (or the twelfth). The damper-pedal should be held down
throughout the performance. The vibraphone pedal should be held down, with motor off.
Soft mallets should be used for both vibraphone and marimba. The performance may be of
any duration, but the longer the better. The end will be signalled by the pianist playing (for
the first time) his lowest A, thus: [notated example: (f), 8ve lower]. The other players then
play their own "cadences", sustaining the last note until a cut-off cued by the pianist.
1973 James Tenney

IX. Quintext (Five Textures for String Quartet and Bass)

The five movements of Quintext
are individual studies in the abandonment of melody and drama, the exploration of certain
"essential" characteristics of string instruments, and in the creation of static textural
environments in which microstructural motion is undetermined, but whose
macrostructure has a clear, precise, and powerful unification. In three of them, this
unification is an harmonic idea derived from the harmonic series, in the two others it is
mainly textural. Each is dedicated to a different composer, and in much the same manner
as Koan and A Rose is a Rose is a Round, reflects some aspect of that composer's ideas,
though all pay quite different sorts of homage. Quintext #1, Some Recent THOUGHTS for
Morton Feldman, takes both its title and much of its texture from Feldman's own pieces.
Tenney has said that "it's the closest I've ever come" to stasis, and therein lies the nature of
the experiment (though I might point out Ergodos II along the same lines). The harmonic
scale is clearly defined, and it is much the same as was described above for Clang. The
precise scale here is the first 13 odd harmonics, or in order of their appearance in the
harmonic series on F: F, C, A, D#, G, B, Db, E, F#, G#, Bb, B, C# (harmonics 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25).
Example IX.1 shows the pitches placed in scale order and with the cents deviations from
tempered tuning added above. In Some Recent THOUGHTS..., the total range (from triple
low F to double high F#, or about five octaves) is partitioned among the five instruments as
shown in Example IX.2. In the piece, each instrument plays those notes only (with one
exception), and just once. The exception is that both the second violin and the viola play
the D# (seventh harmonic) above high C, though at different times - because Tenney
wanted to end the piece on a spread out dominant seventh chord (or the first four primes of
the harmonic series). Note that there are 67 pitches used in the piece (14 in the 'cello, bass
and second violin; 13 in the viola; 12 in the first violin), though one note (G above middle C,
or the 9th harmonic) is omitted, while the first pitches of another octave are included.
[Example IX.1: the scale pitches in scale order, with cents deviations from tempered tuning]
[Example IX.2: the partition of the total range among the five instruments in Some Recent THOUGHTS...]
The reasons for this are not clear to me, and probably insignificant, as the piece was
composed using some random procedure (according to Tenney "cointosses, dice or
telephone numbers, etc...."). The durations and the particular partition of the total range
were, I think, selected by Tenney in a rather simple way - by ear. Certain things which recur a
few times and give the movement much of its characteristic form (like successive octaves
in the same instrument and quasi-canonic passages) are probably happenstance. What he
is aiming for (through vastly different techniques) is an evocation of the soft, static,
vertical, and almost inexplicably beautiful harmonic structures that are somehow peculiar
to Feldman's music. Quintext #2, CLOUDS for Iannis Xenakis, is structurally one of the
simplest of the set. The piece is diagrammed in Example IX.3, with the second half the
exact retrograde of the first. In each successive section (sound plus silence), the sound
portion increases by one second while the silence decreases by two. After six such, there
are five seconds of sound and no silence, and then begins a perfect retrograde.
[Example IX.3: diagram of the sound and silence durations in CLOUDS]
At this point the piece
"rotates" upon itself, and the seven-second sections are created by the juxtaposition of the
mirrored five second sound segments with the two second silences surrounding them.
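Read as an algorithm, the scheme is simple enough to sketch in a few lines. This is a sketch only: the opening values are my assumption, since Example IX.3 is not legible here, and only the rule (sound grows by one second, silence shrinks by two) and the exact retrograde are taken from the description above.

```python
# Sketch of the CLOUDS duration scheme: sound-plus-silence sections in which
# each sound grows by one second and each silence shrinks by two, a central
# five-second sound with no silence (per the text), then the exact retrograde.
# The opening values (1" of sound, 12" of silence) are assumed for illustration.
sound, silence = 1, 12                    # assumed opening durations, in seconds
first_half = []
while silence > 0:
    first_half.append(("sound", sound))
    first_half.append(("silence", silence))
    sound, silence = sound + 1, silence - 2

# central 5" sound with no silence, then the mirror of the first half
piece = first_half + [("sound", 5)] + first_half[::-1]
assert piece == piece[::-1]               # the whole schedule is a palindrome
```

Whatever the actual opening values, the palindrome is what produces the "rotation" described above: each silence ends up bordered by the same mirrored sounds.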
Another way to explain it is that each silent section combines with the sound preceding
and succeeding it, creating two different ordered sound/silence combinations. The effect
is that of, say, a cloud gradually covering the sun and then moving on. It is, though
extremely simple, quite an exhilarating work. The pitches are only approximately indicated
(one problem is that the notation seems to encourage players to only play "white notes"),
and are plotted randomly along what seem to be sinusoidal paths. The pitch configuration
in the second half is also the exact retrograde of the first. Example IX.4 shows an entire
page of the score (the first).
[Example IX.4: the first page of the score of CLOUDS]
Quintext #3, A Choir of ANGELS for Carl Ruggles, is a textural
parody of Ruggles' short masterpiece Angels (usually performed with four trumpets and
three trombones, although there are other versions,
including one for violins and 'celli). #3 begins on the same close position min/maj chord
with an added seventh as does Angels (though in a different key - Example IX.5), and there
are several more subtle stylistic homages embedded in the work. The distinctive melodic
writing in Angels (Example IX.6, shows the first eight measures in the top trumpets),
characterized by major and minor third leaps which wind slowly back upon themselves, is
seen in rhythmic augmentation in each of the voices in #3 (though generally the intervals
are wider - fourths and tritones). Example IX.7 shows this in the pitches of the 'cello line
(without rhythms). The contrapuntal texture also recalls the parent work in the way the
registers are used so that each instrument's range is about the same as any other (the
violins play in their low register, 'celli and basses in their high).
[Examples IX.5, IX.6, IX.7: the opening chord of #3, the first eight measures of the top trumpet in Angels, and the 'cello line of #3]
Though in Angels, the voices/instruments
almost never cross (except as brief suspensions), Ruggles always scores the lines as close
as possible, resulting in an almost constant texture of minor seconds. In #3, this
orchestrational technique is taken further, to the point where registral interweaving is quite
common. Example IX.8 shows this in the first two measures, and Example IX.9 shows the
last chord, which contains the closest possible minor-second network, as well as several
voice crossings (incidentally, this is reminiscent of a Varesian orchestral nuance that we
saw in Seeds). Note that the final chord of Angels (spelled Ab-C#-Eb-E natural-C-E natural)
is just another version of the initial "Ruggles chord". If we take C# as the root, the chord can
be seen as a minor triad with an added major seventh, and an additional diminished third
degree, creating the same ambiguous third relationship (this time in the "other direction").
The timbre of #3 is quite unusual and ethereal (sul ponticello throughout), possibly
another reference to Angels, which calls for muted brass. Even the number of measures is
similar (45 in Tenney: 47 in Ruggles). Once again, the pitches used are intonations derived
from the odd harmonics. But perhaps the clearest aspect of this homage inherent in #3 is
the almost brutal "rawness" of the sonority and form. Tenney extracts this New England
personality from Ruggles' music, and sets it on a new and rather beautifully parallel path.
[Example IX.8: registral interweaving in mm. 1-2 (violin, 'cello, bass); Example IX.9: the final chord]
#3 is, in the final
consideration, a simple kind of poetic remembrance of a great American composer.
Quintext #4, PARABOLAS and HYPERBOLAS for Edgard Varèse, is similar in form and
simplicity to Clouds. In it, Tenney is to some extent experimenting with the various
"thresholds" of our harmonic perception (the last chord, for example, is an approximate
dominant seventh). The compositional process consisted of drawing random points on the
staff for each instrument within a continually decreasing vertical range, in which all of the
instruments are eventually assigned to the small region around middle C. Tenney then
"connected the dots" via hyperboloid and paraboloid line segments. The piece is 5 minutes
and 36 seconds long, with the second half (2'48") the exact mirror image of the first. This
process is a primitive version of the stochastic computer programs written at Bell Labs,
with the "mean value" taken to be the same for each instrument (though in reality what this
does is skew their distributions slightly) so that the total string quintet range of possibilities
converges stochastically to a fixed point (zero range). Not only the title, which is a
paraphrase, but the sound itself is reminiscent of Varèse, especially of his early and
revolutionary use of instruments like the siren and the natural occurrence of industrial
sounds in his music. Example IX.10 shows the second page of the piece. Quintext #5,
SPECTRA for Harry Partch is at once the simplest and most complex of the five. It makes
the most extensive use of harmony and the ability of strings to produce complex just
intonation in a simple way (by natural harmonics), yet aurally, it is the most free and
formless. I think that the use of scordatura and natural harmonics here merits some
detailed explanation - the piece is quite visionary in its approach and important in light of
its early solution to the problem facing composers today who are interested in just
intonation. What Tenney does, by the careful scordatura and the use of the harmonic
nodes up to seven on each string (higher than that would be risky in performance) is
produce a total harmonic spectrum of 23 different pitches, the highest being the 105th
term in the harmonic series (the seventh node of the string tuned to the 15th partial).
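The arithmetic behind that claim is easy to confirm mechanically. In the sketch below (my formalization, not anything in the score), reducing each product to its odd part is simply octave equivalence, and the string tunings are the odd harmonics named above:

```python
# Numerical check: open strings tuned to the first eight odd harmonics,
# natural harmonics taken up to the seventh node on each string. Reducing
# each product to its odd part (dividing out octaves) should leave exactly
# 23 distinct pitches, the highest being the 105th harmonic (7 x 15).
def odd_part(n):
    """Divide out all factors of two, i.e. apply octave equivalence."""
    while n % 2 == 0:
        n //= 2
    return n

open_strings = [1, 3, 5, 7, 9, 11, 13, 15]   # the scordatura, as harmonics of F
spectrum = {odd_part(s * node) for s in open_strings for node in range(1, 8)}
assert len(spectrum) == 23 and max(spectrum) == 105
```

The duplications mentioned below (e.g., the seventh node on the string tuned to 5 equals the fifth node on the string tuned to 7, since 5 x 7 = 7 x 5) are exactly why 8 strings x 7 nodes collapse to only 23 distinct pitches.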
Example IX.11 shows the scordatura and the available pitches as natural harmonics up to
the seventh on each string. Roman numerals indicate the string number, and the smaller
arabic numerals under certain pitches (including the open strings - the scordatura)
indicate the harmonic number (irrespective of the octave placement) of the given node.
Tenney selected as open nodes the first eight odd harmonics, (1,3,5,7,9,11,13,15) so that
the first seven in each of their resultant series might be produced as well, as "secondary"
harmonics (e.g., 3 of 3, 5 of 7, etc.).
[Example IX.10: the second page of PARABOLAS and HYPERBOLAS]
[Example IX.11: the scordatura and the available natural harmonics (nodes 1-7) on each string]
This is of course inspired in part by Partch's method of compound scale construction ("otonality"), though Tenney utilizes it in a vastly
different way. Note that there is quite a bit of duplication in the pitches produced (for
example, the seventh node on a string tuned to five is equal to the fifth node on a string
tuned to seven, and so on). On the bass, only the E string is used, tuned to F (the
fundamental), which functions as a drone throughout, and resonates wonderfully with the
higher partials of its spectrum being sounded. No string is tuned up more than a major
second, and most are tuned down (in the case of the second violin's E string, as much as a
small minor third). All tuning can be done from higher harmonics sounded on the bass's
low F (which can quite easily produce partials well past the thirteenth). The notation is
simple and one I have also found to be effective in this usage: sounding pitches are
notated, but the nodes of the untuned strings are given in parentheses as a sort of
tablature. If the player simply knows the nodes for producing the natural harmonics
(octave produces the second, fifth the third, fourth the fourth, major third the fifth, minor
third the sixth, and diminished third the seventh), he/she can play the piece perfectly
without understanding the first thing about just intonation! The complete scale of pitches,
without octave equivalences and in their harmonic series order, is displayed in Example
IX.12.
[Example IX.12: the complete scale in harmonic series order, with harmonic numbers and cents deviations]
Numbers below the pitches are their harmonic series numbers, and those above are their
respective deviations from tempered tuning, and are easily computed for
the higher harmonics by considering them to be complex ratios, simply summing the
smaller ratios' deviations. (For example, 75 = 5 x 15, or a major seventh above the major
third, yielding G#. The cents deviation is computed from the sums of the deviations of its
components. The fifth harmonic is 14 cents shy of a tempered major third, and the 15th is
12 cents shy of a major seventh - thus the resultant "compound" minor third is 26 cents shy
of its tempered neighbor.) One interesting aspect of this tonal system, besides its being
one of the earliest manifestations of what would become one of Tenney's main interests,
is that all first order difference tones produced are members of the set (though they may be
octaves of some other pitch). This has the positive aural effect, as in the idealized version
of For Ann (rising), of ensuring that unwanted dissonances will not be produced by such
combination tones, maintaining a "purer" harmonic sonority. #5, Spectra..., has a simple,
direct form. It is nine minutes long, with the first and last minutes being a kind of outer
border for the piece. In the opening minute, only open strings are used (first harmonics on
the given strings) and they gradually enter from lowest to highest harmonic until an eight-
part chord made up of the odd harmonics 1-15 is sounded. Over the next seven minutes,
several things happen. The temporal density of pitch change (event) becomes greater and
greater, beginning with about one per four second measure and ending in about six per
measure. Note that since no instrument ever "sits out", (all pitches are sustained until they
are changed), the texture (vertical density) remains constant, but the rate of change
increases. Each possible node of each string is used at least once over the seven minutes,
with the lower nodes in general used more often (though some higher nodes, like the sixth
on the 'cello G string, are used as much as the lower ones). The general direction is from
lower nodes to higher nodes or from simpler harmonic ratios to more complex ones. That
is, first the second (octave) harmonics appear, then the third (perfect twelfth), etc., though
this is not a precise system. There seem to be two general stochastic envelopes which are
subjected to more or less random processes: the height of a node on a string (upper pitch
range), and the rate of change. Thus, as the piece progresses, it moves faster and gets
harmonically richer. The last minute is almost the mirror image of the first, as the harmonic
motion gradually builds into a recurrence of the open string chord at the eight minute
mark, and fades out in much the same way as the piece begins, over the course of the last
minute. This movement (#5) might almost be a separate work in itself, for its effect on the
listener is quite different from that of the others. For one thing, it is longer. The next
longest, #1, is only a little shorter but moves much more rapidly. #2, #3, and #4 are
about 2, 3, and 5 minutes respectively. Another distinguishing feature is that though it
shares its harmonic motivation with Clang, and #1 and #3 (to some extent), it is a more
developed use of that idea, and in some way presages the complexities of a piece like the
string trio. I think that Tenney is a little uncomfortable with the overall length of Quintext.
My suggestion might be to occasionally perform #1 or #5 by themselves. In that situation,
Spectra... especially might be heard as one of his finest and most successful works.

X. Chorales

Perhaps no piece of Tenney's is easier to explain, yet whose aural effect is more
difficult to describe than the Chorales for Orchestra. While its construction is childishly, or
elegantly, simple, its musical and emotional impact is rather awesome. Though many of
the pieces of this period have this same quality of being discovered rather than composed,
Chorales is the most transparent. Once again, Tenney is exploring the ramifications of the
octatonic scale, made up of the odd harmonics. Chorales is the one piece, however, where
the melodic aspects (the diminished mode) of this scale are explored, and indeed the only
piece since Monody where Tenney has shown a real interest in melody per se. Here, the
harmonic series is built on A, and the scale of alternating half steps and whole steps can
be seen in Example X.1, where the first half of the melody is transcribed. Chorales for
Orchestra is in four movements, each exactly the same in form but differing in
instrumentation.
[Example X.1: the melody, measures 1-32, first trumpet, Chorales (I)]
The first is scored for strings, piccolo and contrabassoon; the second for
brass, two vibraphones and harp; the third for woodwinds with harp; and the last (marked "tutti") for the
whole orchestra, with a percussion section consisting of celeste, chimes, tam-tam and
harp. Each movement is sixty-four measures long, with the last thirty-two more or less the
mirror image of the first. (I should note here in passing that several other versions of this
piece exist, all realizations of the same harmonic/melodic idea for different instrumental
combinations. I think that these are all more or less experiments, and though I have heard
one or two performed, I have not seen a final score for any of them. One very beautiful
version is for viola and piano, and this was performed by Tenney and Ann Holloway as part
of Maple Sugar in Toronto). Each movement is completely determined by two things: the
melody (which is the same for each), and the initial voicing of the first chord. Each vertical
chord in all four movements is an "inversion" of the first chord, composed of the eight
notes in the scale, with doublings only in nonsectional instruments (like the vibraphones in
the brass movement, and the piccolo and bassoon in the first movement). Given the first
chord voicing, the set of "inverted" chords and the "leading voice" melody, the remainder of
the piece is predetermined. It is a kind of extreme organum, but using (ideally) the
properties of the harmonic series to bring about certain complex consonances, and, what
Tenney expects, the feeling that we are really listening to the spectrum of one pitch, in two
dimensions. The melody itself is simply a horizontal realization of any given vertical
sonority, and so there exists a wonderful ambiguity between melody and harmony,
movement and stasis. The melody itself has certain shaping factors. As one can see from
Example X.1, it winds slowly upward, stopping periodically to breathe, and with the four
minor thirds of the diminished seventh chord as its preliminary goals before reaching the
octave. Because it does not have any intervallic leaps, and because of pervasive inner
repetition, it seems to ascend interminably, yet ever propulsive (like the glissandi in For
Ann (rising)). The melody, listened to by itself, is quite beautiful and mysterious, and it
must have taken Tenney some considerable care, effort and skill to work it out. In its shape
and modal use, it reminds one a little of Lou Harrison's music, with which Tenney is quite
familiar, and its gradual perceptual ascension bears more than a little resemblance to
Ruggles. The initial voicings for each movement are shown in Example X.2. Each
represents a simple orchestrational concept. The strings are spread-voiced approximately
in fifths, the brass are voiced in the closest possible cluster (with the vibes replicating this),
and the woodwinds are more or less in thirds. In the final movement, the melody starting
on A is played by the entire string section in octaves, second trombones, third horns, first
trumpets, contrabassoon, bassoon, first clarinet, first piccolo, and the inner voices.
[Example X.2: the initial voicings for each of the four movements]
The other pitches in the eight-part chord are divided among the remaining
instruments so that higher harmonics tend to sound in the higher registers, with the
greatest harmonic density in the middle register. The brass play in parallel dominant
seventh chords, while the woodwinds sound the higher extensions in close voicing. In the
first movement, the melody on A is reinforced on both registral ends by the piccolo and
contrabassoon, and in each of the other movements there is the added "dramatic" effect
of a "punctuating" instrument about every four measures. In the second, the harp and tuba
sound a low A, usually under the sustained pitches. In the third movement, which is in the
key a tritone higher (for reasons of range, though it has the same harmonic construction,
and in some sense is still in the same "key"), the contrabassoon and harp are the
punctuating instruments. In the final movement the punctuations are made by the tuba,
harp and tam-tam. In all movements, these become more frequent towards the midpoint
of the piece, accompanying the melodic ascension and continual crescendo, and then
less frequent from the midpoint (as everything is in retrograde). For some reason, it is the
"unnecessary" aspect of this device which attracts me so much to this work, for these
punctuations are in no way determined, as is the rest of the work, but are in every way
consistent. It is such a straightforward and simple effect that it can only be seen as
evidence of the composer's good will towards the listener! One anomaly exists in the
second movement. The initial chord has no G natural (or seventh harmonic) in it. This is the
only incomplete chord in the piece, though of course, every chord in that movement has,
consequently, one pitch missing (not always the seventh). This absence may be due to the
particular cluster voicing of the brass, where the A is in the lead trumpet, so that the G
below it would tend to obfuscate the direction of the melody. Because of this simplified
harmony, the second movement is unique in that its seven voices move in complete
parallel motion throughout. Chorales for Orchestra, like Clang, has, I believe, seen only
one performance, and only a mediocre homemade recording exists. Neither of these
pieces is at all difficult to perform, and one has to wonder at the reluctance of orchestras
to play truly contemporary music - music that completely transforms our notion of the
ensemble itself.

XI. Spectral CANON for CONLON Nancarrow for Harmonic Player Piano (1974)

Although I believe this work was composed, or at least conceived in 1972, it
was not realized until a little later because of the various technical difficulties involved.
First, the roll had to be punched, and because of the very precise durational algorithms
involved (worked out on the computer), it most likely presented an arduous task.
Nancarrow himself punched the roll on his custom-built machine, as a favor to Tenney,
and Gordon Mumma helped him record the piece on an old player piano found somewhere
near Santa Cruz, Cal. In the recording that now exists, one can even hear the electric pump
faintly in the background. The player piano is tuned to the harmonic series on triple low A,
up to the 24th harmonic double high E. For the first time in his harmonic series-related
works, Tenney was able to use the overtones in their natural octave placement. There are
accordingly, 24 voices in the canon, each having the same durational structure, and the
nature of the canonic configuration is interesting to explore. The key is to understand the
analogy Tenney draws between durational ratios and harmonic ones (as in Henry Cowell's
Rhythmicon). The successive durations for any given voice in this piece are determined by
the logarithm of the ith superparticular ratio in the harmonic series:

    Di = k log2 ((i+8)/(i+7))   (where Di is the ith duration in the sequence)

In other words, starting with the ratio 9/8,
durations decrease exactly as do the pitch intervals in the harmonic series between
"successively higher terms". "k" is a constant chosen to make the initial duration in any
given voice 4 seconds, and can be determined by simple algebra:

    k log2 (9/8) = 4, so k = 4/log2 (9/8) = 4/.1699 ≈ 23.54

One rather startling ramification is that this durational
series, primarily because of the logarithm (which maintains the relationship to frequency,
or at least our psycho-acoustic perception of frequency) forms temporal octaves at the
same places that the pitch series would - 8, then 16, 32, 64 durations/pitches, etc. Put
another way, the sum of the first eight durations is equal to the sum of the next sixteen and
so on, just as it takes "more and more" superparticular ratios to add up to an octave the
higher in frequency one goes. This fact is the basis for all the simultaneities in the piece. An
intuitive way to see it is to look at the sum of certain simpler higher ratios, for example:

    log2 17/16 + log2 18/17
    = log2 17 - log2 16 + log2 18 - log2 17   (by the properties of logarithms)
    = log2 18 - log2 16   (simple algebra)
    = log2 18/16   (log properties)
    = log2 9/8

- so
that the first two durations at the "higher" temporal octave are equal to the first in the
"lower". This indicates how the successive voices enter, the equation for the starting time
for any given voice being: ST(n) = k log2 n - where ST(n) is the starting time of voice n, and k
is as before. Looking at this more closely, we see that the starting time of the second voice
(n=2) is ST(2) = k log2 2 = k (since log2 2 = 1), and that successively higher octaves (4,8,16)
begin after 2,3, and 4 times the value of k (log2 4 = 2; log2 8 = 3; etc.), or at the temporal
octaves corresponding to their durational octaves. Note that at any point in the first part of
the piece the first voice is moving twice as fast as the second, three times the third, and so
on; and that these ratios are true for any pair of voices corresponding to their harmonic
number. The same thing holds for other intervals (the third voice enters after a durational
"twelfth", the fifth after a durational "double octave and major third", etc.). To see these
non-octave relationships, we can examine the relationship of the durations of a higher
voice to that of the first, as follows (leaving out k): For voice 3, the starting time is ST(3) =
log2 3, and this is equal to a sum of successive durations in the lowest voice which can be
expressed as:

    Σ (i = 1 to 16) log2 ((i+8)/(i+7)) = log2 3

This can be proved by writing out the successive terms in this
series, and remembering the property of logs: log2 a/b = log2 a - log2 b; thus (again
leaving out k):

    log2 9/8 + log2 10/9 + log2 11/10 + log2 12/11 ... + log2 24/23 = log2 24/8 = log2 3
- because any number which is in both a numerator and a denominator gets cancelled out
by the occurrence of opposite signs, and only the "outer" two, 8 and 24, are left. This
means that after 16 terms (an "octave and a half", since the 2nd voice enters after 8 terms
and the 4th after 16 more, or 24 terms of the lowest voice), the third voice enters. At this
point the duration of the third voice is k log2 9/8. The k "cancels", and we have the following
equation:

    log2 25/24 + log2 26/25 + log2 27/26
      (the first three durations of the first voice at the entrance of the third voice)
    = log2 27/24   (by the explanation above)
    = log2 9/8   (the first duration of any voice)

- and so at this point the first duration of the third voice is equal to
three durations of the first. We could prove the same thing for all voices with relation to the
first and with relation to each other (for example, at that point the third voice stands in the
duration ratio 3:2 with the second, and so forth). Example XI.1 shows these relationships in
the first page of the score. The upshot of all this algebra (of which I am no fonder than the
reader), is that a remarkably beautiful integrity is constructed using the very simple and
elegant idea of the analogy of pitch and duration harmonic ratios, in a way unlike any I've
ever seen. Like many of Tenney's ideas, it is remarkable in its simplicity but wonderfully
complex and multilayered in its ramifications (perhaps this is what Philip Corner meant
when he referred to Tenney's music as "resonant suchness"). The form of the piece is
simple. Each voice goes through 192 terms of its series (always increasing in tempo), and
then retrogrades. The 24th voice enters precisely when the first voice is beginning its
retrograde (192/8 = 24). The piece terminates when the 24th voice ends its forward motion,
which is, for some reason I can't quite determine, a point of total synchrony for all voices.
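For the skeptical reader, the identities above can be confirmed numerically. The following sketch implements only the formulas as quoted (not Tenney's actual computation):

```python
import math

# Numerical check of the durational algebra: D(i) = k * log2((i+8)/(i+7)),
# with k chosen so that the first duration in any voice is 4 seconds.
k = 4 / math.log2(9 / 8)                      # approx. 23.54

def D(i):
    """The i-th duration in a voice's sequence (i = 1, 2, 3, ...)."""
    return k * math.log2((i + 8) / (i + 7))

def ST(n):
    """Entrance time of voice n: ST(n) = k * log2(n)."""
    return k * math.log2(n)

assert abs(D(1) - 4.0) < 1e-9                 # the opening duration is 4 seconds
# temporal "octave": the first 8 durations last exactly as long as the next 16
assert abs(sum(D(i) for i in range(1, 9)) - sum(D(i) for i in range(9, 25))) < 1e-9
# voice 3 enters after 16 durations of voice 1, since k log2 3 = k log2 24/8
assert abs(ST(3) - sum(D(i) for i in range(1, 17))) < 1e-9
# at that moment one duration of voice 3 spans three durations of voice 1
assert abs(D(1) - sum(D(i) for i in range(17, 20))) < 1e-9
```

The sums telescope because each logarithm of a superparticular ratio is a difference of logarithms. None of this, of course, substitutes for the audible fact of the canon's final synchrony.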
This is preceded by some breathtaking "parabolas and hyperbolas" (see Example XI.2,
page 15 of the score), whose evolution I understand even less, but are somehow a natural
result of the logarithmic cross rhythms. Note that no voice except the first completes its
retrograde, so there is a kind of asymmetry to this aspect of the work. Nothing I could say
in this short description/explanation could ever substitute for the pure joy of listening to
this marvel, which is heard once again more as a fact of nature than as a composed piece.
It is, like most of Tenney's music, nearly impossible to come by, and a commercial
recording done under good technical circumstances would be very welcome indeed!
[Example XI.1: the first page of the score, Spectral CANON for CONLON Nancarrow for Harmonic Player Piano, James Tenney, 1974]
[Example XI.2: page 15 of the score]

XII. The Drum Quartets

These three pieces, which
are probably Tenney's most frequently performed ensemble works, were an outgrowth of
the music he wrote for Stephan Von Huene's mechanical drum. The drum's construction
and means of reading its "program" (encoded on large plastic disks) encouraged pieces
with canonical and cumulative structures, and the three pieces Tenney wrote for it all have
these structural traits in common. These three works for the drums are titled Wake, The
Popcorn Effect, and Tempest, and are still to be heard regularly on the drum itself at the
Exploratorium in San Francisco, where it now resides. Although I do not have access at
present to the "scores" (large plastic disks) for these works, I can offer some general
description (Von Huene's drum is described in more detail in the A.R.C. Edition of Michael
Byron's Pieces I). Wake for the drum is almost identical to the setting for four tenor drums
in the quartets, and will be described below. Tempest, besides having much in common
with Hocket, is difficult for me to describe at present, not having the score. It is a study, like
Hocket, in gradually evolving tempi, and in Tenney's own words, "achieved within the
mechanical constraints of the drum...i.e., everything was cyclic (once begun, a rhythm for
a particular beater repeated exactly until it was turned off again). A gradually changing
tempo could only be achieved by a kind of 'trick' (since the actual speed of the control
mechanism for the drum was unchanging)...", the "trick", as in the bass drum setting,
involves consistently changing durations which suggest changing tempi. Wake for Charles
Ives is the first and best known of the Three Pieces for Drum Quartet. It has a certain
appeal to professional percussionists, amateurs, handclappers and kitchen table beaters
alike. Its rhythmic concept is so simple, yet its resultant structure so interesting, that it
has almost a childlike wonder to it. It is at the same time clearly a memorial tribute to Ives,
with its evocation of the ostinato snare drum material of the last movement of his Fourth
Symphony (Example XII.1). The title itself is a typical Tenney double-entendre, for not only
does it describe a joyous remembrance of a loved one but also the musical effect of the
piece, whose rhythms move like ever accumulating waves, swells and breakers, with
smaller echoes left in their wakes. Example XII.2, an excerpt from the first page, shows the
gradual accumulation of beats toward the full rhythmic snare drum line in any given voice (here it is the first voice, the beginning of the piece).
THREE PIECES FOR DRUM QUARTET © 1982 by Caveat Music Publishers Ltd., Toronto. Reproduced by courteous permission of E.C. Kerby Ltd., Toronto, General Agent. Ed. note: the examples in this chapter are reproduced by the author from the composer's original ms., instead of from the published scores. Permission for this was kindly granted by the publisher.
Example XII.1: snare drum ostinato from the last movement of Ives's Fourth Symphony.
The complete phrase is two measures (eight beats) long, and
each voice canonically repeats this accumulation in its respective entrance. No voice
enters until the preceding voice has reached its full construction, and when a new voice
enters it is displaced one beat behind the preceding voice, so that a complex pattern of
waves, echoes, and gradual filling in of the rhythmic space is created, in a process that very quickly becomes completely predictable. This predictability is quite exciting, and as I have
explained above, is essential to Tenney's musical intent. Listening to each of the four
drums enter one can play a kind of guessing game as to what the next "gestalt" will sound
like. Example XII.3 shows the point of the piece where all four drums have finally grown to
their full rhythm, and the "wake" is readily visible from the score. The next few measures
end the piece, and Example XII.4 shows the final "wave", which is a slight alteration of the
rhythmic process, resulting in a wonderfully powerful unison rhythmic climax. Hocket for
Henry Cowell is the only piece of Tenney's that I know of (except for some of the computer
pieces) which uses spatial location as a structural parameter (though I have also heard
him conjecture in this regard about For Ann (rising), truly turning it into a "barber pole"
piece). The four bass drums are placed around the audience, which often creates performance problems (the players being able to see and hear each other) and necessitates a conductor (though I have heard this piece performed without one). The piece is in three
main sections, each being canonical, and in some way a hocket. The first section consists
of rolls which gradually "move" around the room by means of crescendo/decrescendo in
adjacent drums. From the time that all four drums have entered, each one maintains its
roll and the hocket illusion of circular movement is effected by a canon in dynamics. Tenney has long been interested, I believe, in the famous second movement of Ruth Crawford's
String Quartet, in which only the dynamics change. (He has experimented with this further
in a little-known piece entitled Canon for bass quartet, written just a little earlier than the
drum quartets). The canon increases both in volume and in tempo, and then rather quickly
softens at around measure 25, in anticipation of the next section, beginning at measure 28
with a single stroke (mezzoforte) of drum I.
Example XII.2: WAKE for Charles Ives, for four tenor drums (James Tenney, 8/74). Score note: each system (with one exception) is to be played through twice before proceeding to the next one; the notes with downward-pointing stems are to be played only at the end of the second time through.
Example XII.3. Example XII.4. © 1974 James Tenney.
The sections are
dovetailed, or overlapped, as the other drums maintain the crescendo/decrescendo
texture while underneath the roll-canon continues first in three voices, then in two, as they
gradually move on to the new texture. The next section, from ms. 28 to ms. 49, is a kind of
fragmentation, or incomplete (anticipatory) version of the full rhythmic canon to follow, and gives a rather ungainly and disjointed feel to the middle of this piece that is quite
striking in performance. Obviously intentional, the musical and structural motivations for
this section have always mystified me a little. We can find, of course, musical precedents
for this "awkward" rhythmic feel in the stop/start motions of Blue Suede, Seeds, Quiet Fan,
Viet Flakes, etc. Each voice in this section gradually states some of the material of the
following canon, three measures apart, but plays it for an abortive nine measures and then
resumes the roll, also in canon with the other rolls. Thus, by measure 46, all drums are
rolling once again, in rapid imitation. At measure 49, what is to become the complete
canon begins. The leading voice of the canon is composed generally of successively
shorter durations, as follows (in quarter notes): 7, 8+15/16, 5+35/48, 5+1/3, 3+1/2, 2+5/6... Once again, the entrances are three measures (12 beats) apart. This leads to a resultant total rhythmic complex, a monophonic gestalt pattern of: 7, 5, 3+7/8, 3+1/8, 2+2/3, 2+1/3, 1+7/8, 1+5/8, 1+1/2, 1+1/3, 1, ... (where durations are measured from one stroke to the next
considering all four drums as one voice). Example XII.5 shows the exponentially decreasing
curve that represents this duration series, the carefully planned result of the four voice
canon. Slight pathologies in the curve arise from the approximations that were needed to
transfer this curve from its original form in Tempest (where it could be realized rather
accurately on the mechanical drum) to traditional rhythmic notation. The canon more or
less ends at measure 61, cutting the series of the fourth voice short, and becomes a kind of
study in hocket and accents, as shown in Example XII.6. Subsequently, the "tempo" is
gradually decreased once again, and at measure 70 begins to decay by augmentation into
the main canonic material, over which the rolls are gradually superimposed. Measures 76
through 84 are a kind of mirror image of the introduction of the canon itself (ms. 49 - ), with
the voices entering into the roll canon from the top down. Note that in the score, this
circular motion looks like a kind of "sawtooth" wave, the circular spatial effect caused by the immediate transference of the canonic material from voice 4 to voice 1 (which are adjacent in the room). At measure 89, a curious and beautiful thing happens in the score: the "sawtooth" changes to a "triangle" wave, and the spatial effect is that the canon alternates direction. The similarity of this section to so much of Tenney's other
music which utilizes the "swell" idea should not be overlooked.
Example XII.5: TIME CURVE OF RESULTANT DURATIONS: HOCKET (ms. 49- ). Example XII.6.
The voices
drop out, at triple piano, from the top down. Hocket has a rather complicated arch
structure. Measures 1-48 constitute the first section (rolls and aborted canon); measures
49-(approx.)73 the second, being the full statement of the canon with a hocket at the end;
and measures 73 to the end being the inverse of the first section: first the aborted canon
(or in this case the decay), and then the rolls. The idea once again is that the "swell"
structure is replicated at several hierarchical levels, in both the durational and dynamic
parameters. Crystal Canon for Edgard Varèse is scored for four snare drums. The title of
this third quartet puns on both the structure of the piece (the gradual cumulative nature in
which the theme is built in the four voices resembles the formation of crystals), and on the
fact that it is all based on the famous snare drum theme from Ionisation (Example XII.7
shows the full theme as it appears in Tenney). The quartet is in three sections, the first a
canonic, gradual building up of the theme, with a little bit of the phrase added on each
iteration. Unlike Wake, the four voices follow immediately, and build the phrase
simultaneously, displaced a beat each (Example XII.8 shows one "sample" displacement).
At measure 13, they begin the complete statement of the theme for the first time (still in
canon). Example XII.7. Example XII.8.
The second section of the piece begins at the end of measure 16 (though the fourth voice has a few beats to complete its statement),
with an inverse of this process. With the snares off, and the rather distinctive idea of using
a rim shot on the accent (dividing the theme more or less in half), the four voices in canon
progressively state the theme in retrograde, shortening it each time. Each voice, after a few
iterations, turns into a short ostinato, "out of phase" with the others, and while voices II
and IV hold this, voices I and III commence the third and concluding section. It is similar to
the first, except that the theme is built more quickly, with each voice adding its part, and
each voice cumulatively including the increment of the previous voice. The first and third
voices are spaced four beats apart, as in the beginning, and by the time the other two
voices enter, the theme is nearly complete. The last few statements are shortened in
successive voices (other than the first), so that they gradually come into alignment by
measure 54, where the theme is stated once in unison (with a nice added touch in the final
measure). Crystal Canon is, along with Spectral CANON, Tenney's most extended and
successful canonic study up to this point, and as such provides us with a glimpse into the
way he would progress, especially with pieces like the string trio.
XIII. Harmonia
The
four Harmonia, along with some related pieces (Saxony, Band and Chromatic Canon)
represent in a way the current stage of Tenney's thinking (although there are several later
pieces: Septet, Listen, Glissade, Voices, deus ex machina...; see Appendix II), all of which I am omitting from this article because I have simply not had enough time to "live" with them. The Three Indigenous Songs was actually written prior to many of the Harmonia, but
was copied and premiered later. Although in many ways, the Harmonia are clearly related
to the Chorales, Clang and other earlier works, there is a kind of unity to the set that
distinguishes them. For one thing, they are even more economical than most of Tenney's
previous works, and their avoidance of musical drama, and strict adherence to canonical
and harmonic formulations is taken almost to a compositional limit. They are each
different solutions to a certain harmonic/canonic/formal puzzle, and the ways in which
each solves this puzzle is unique and fascinating. There is some confusion about their
numberings, since there have been several revisions. Acting on Tenney's wishes about the
set, I have omitted #1 (later revised to be what is now #2), and retained the current
numerical ordering even though it does not necessarily correspond to the chronological.
#2 is, as Tenney calls it, the "ur" version, in that it clearly states the harmonic idea without
any artifice of orchestration, melody, rhythm, etc. It is for any sustaining instruments, and
is simply a chorale of available pitches. The piece is dedicated to the great American
composer, theorist, teacher, instrument builder, etc., Lou Harrison, whose long interest in
intonation has been an influence on Tenney. In discussing its form, we can provide a basis
for the other Harmonia as well. It consists of a steadily growing and then decaying chord,
based on the primes of the harmonic series, which modulates roughly in the circle of fifths.
The voice leading can be seen in the diagram of Example XIII.1, in terms of the number of
the partial in relation to the new root. The basic principle is that of closest voice
movement, with each successive chord in the first half containing one higher harmonic,
and each in the second half one less. Tenney's idea seems to be one of an extended
dominant, since in the Harmonia he tends to omit the 13th and 19th partials, basing the
pieces on a chord that might be called an aug11(b9). (In the key of C: C-E-G-Bb-F#-Db). The
first half of the piece is highly ordered, with a kind of canon by partial in each voice
(1,3,5,7...; see Example XIII.1).
Example XIII.1: VOICE LEADING SYSTEM FOR HARMONIA (from #2).
The second half upsets this symmetry to some extent, mainly to facilitate the desired C dominant
ending (so that the piece might be "cyclic"). Another interesting aspect of the piece is that
in the first half (up to the Ab "tonality") the series is built up over a phantom root,
suggesting the "tonic" before it actually enters. In the second half, the tonic enters under
the old "tonality", and the higher pitches return to the new bass note. A look at the score
shows the extremely smooth and clear voice leading, with no voice (except the bass
moving in fifths) having a range of more than a major third. Example XIII.2 is the entire score
for this piece. #3 is a remarkable hocket for three harps dedicated to Susan Allen, and is
the one in the set that I have not heard in its completed version (though I am familiar with it
in an earlier sketch). Each of the three harps is at a different pitch, harp I being 14 cents
sharp of harp II, and harp III 14 cents flat. In this way, many of the intervals of the harmonic
series primes can be approximated quite closely. For example, the just third (fifth
harmonic) is exact between harps I and II, and between III and II, and the wider deviations
from tempered tuning (like the 31 cents flat seventh and 49 cents flat 11th) are
approximated by the pitch distance of the outer harps (28 cents). The most common usage
of this, from "sections" IV through X (Bb through A in the modulatory scheme), shows the way in which the full ...
Example XIII.2: HARMONIUM #2 (James Tenney, 9/76, for Lou Harrison). Performance note from the score: "The score of HARMONIUM #2 consists of seven sections (indicated by double "bar-lines"), each of which is divided into two to five segments (single "bar-lines"). The notation shows available pitches for each segment, with numbers above notes indicating deviations from the tempered pitch in cents. Each performer chooses one after another of the available pitches in the current segment, and plays it as follows: pppXppp, where "X" is the dynamic level notated for that pitch. Each tone may be from four to twelve seconds long, but its duration should be equally divided between the crescendo and decrescendo portions of the tone. After a pause at least as long as the previous tone, this process is repeated with the same or any other available pitch in that segment. Non-sustaining instruments with fixed intonation (e.g., keyboard and mallet instruments) may play only the lowest pitch in any segment, letting the tone fade away completely before sounding it again. The transition from one segment to another may be initiated by any player, simply by introducing the newly available pitch for the next segment (a "white" note). These transitions should be timed so that the total duration of each section is somewhere between one and three minutes in length."
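The cents figures quoted above for the three harps can be checked with a short calculation. The sketch below is my own illustration (not from Tenney or the essay): it computes how far each harmonic of the series lies from the nearest equal-tempered pitch, reproducing the deviations cited for the 5th, 7th, and 11th harmonics.

```python
import math

def cents_deviation(n: int) -> float:
    """Signed offset, in cents, of the nth harmonic from the nearest 12-TET pitch."""
    cents = 1200 * math.log2(n)       # size of the interval n:1 in cents
    return ((cents + 50) % 100) - 50  # fold into the range -50..+50

# 5th harmonic (just major third): about 14 cents flat of tempered, so a harp
# tuned 14 cents sharp of another sounds this interval almost exactly.
print(round(cents_deviation(5), 1))   # -13.7
# 7th harmonic: the "31 cents flat" seventh.
print(round(cents_deviation(7), 1))   # -31.2
# 11th harmonic: the "49 cents flat" 11th; the 28-cent spread between the
# outer harps (14 sharp vs. 14 flat) approximates these wider deviations.
print(round(cents_deviation(11), 1))  # -48.7
```

On this reckoning, harps I and II (14 cents apart) capture the just third almost exactly, while intervals taken between the outer harps come out 28 cents narrow of tempered, close to the 7th- and (less closely) 11th-harmonic targets.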

Tenney's Aggregates
Catherine Lamb
While I am reading James Tenney's writings, I am often transformed by his particular terminology that somehow
infuses an entire atmosphere around the thing he is describing. He fluidly links points of
phenomenological perception and conceptual clarity with the theoretical, the intuitive,
and the creative form in this atmosphere of a consistently used word. A principal one in
this text and others is his use of aggregate. As I write this I wonder how it translates into
Russian. It is related to his use of clang, which is overflowing in Meta+Hodos / META Meta+Hodos,
but here suggests a more moveable and flexible semantic. I am romanticizing the word
aggregate because it has become integrated into my own cognition on most things related
to music, only after reading The History of Consonance and Dissonance. Tenney’s writings
and drawings are similar to how he structured his classes—little crystals always
suggesting much larger and diversified forms. He only cared to initiate a deeper discourse
by digging open the field around him to share. When I first met him, I was visiting the
California Institute of the Arts. He was in the main foyer preparing a piano for Cage’s
Sonatas and Interludes. He had no idea who I was, but saw that I was watching what he
was doing, so invited me over to take a closer look. Knowledge was something to
investigate with others, with anyone who was interested and present. This pedagogy
appears in his music as well, while guided by profound perceptual states of being—
praising phenomenology in its action. In class we mostly talked about other people’s
music, but one day he brought in a recording of the Bozzini playing Koan for String Quartet.
He spoke about transparent structures where immediately one understands how
something is to unfold, so one can simply perceive what is happening and not wonder to
where. The piece left a big impression on me. I found it to be radical—it opened up a whole
world of formalized beauty I had not been made aware of before, forever altering my own
approach to materiality and music. To me, this is due to the structure itself, being both
inviting and generous. After that I continued to be transformed by his music and ideas.
Tenney’s aggregate encapsulates all senses around musical discourse. Harmony, timbre,
melody, structure, and speed accumulate together in time and shape what and how we
perceive in a given moment, defining points in a whole field of possibilities. The aggregate
of timbre accumulates from a single tone or instrument resonating. The aggregate of
harmonic interactions between sounding sources, be it the Pythagorean concept of
alignment or the conglomerate of harmonic space. The aggregate as it relates to shifting
historical rules of trade and as it falls from their various graces. The aggregate of micro and
macroscopic time fusing material together. The aggregate in relation to what came before
(or what lingers beyond). An incision into the phenomenology of movement determines the
aggregate of these elements. Always referring to sound as an aggregate, Tenney
immediately places a sensation (sound) in its place from the perspective of the person
listening. Aggregate suggests the accumulation of matter within a perceived unit of
harmonic space. Precisely demonstrating that the mere intervallic description is not
enough. He is referring to a living structure and its totality in combination with others. He
reminds us again and again that harmony is not about pitch, but rather concerns the
totality of shifting systems and aesthetic perceptions. Tenney leads us through five phases
of historical perception, between what he calls qualitive and entitive oscillations in
symphonious practice. His writing is limited to European history initiated by the ancient
Greeks, but it suggests something on a much more expanded scale, urging us to go further
and to draw our own parallels between other historical perceptions so that we might
understand how to move beyond our current realities. As he indicates, he leaves us with
Helmholtz, omitting the most recent (complex and diverse) history for us to connect this
disparate opening on our own. Tenney praises the labor of investigation, the forces behind
these people he looks to in search of his own understandings. Rameau’s radical sonorous
body concept was developed after his death, but as Tenney indicates it was so big it
initiated its own phase. Some fragmented thought-perceptions from the philosopher
Rousseau (a close, younger contemporary of Rameau): The beauty of sounds is from
nature; their effect is purely physical, it results from the interaction of the various particles
of air set in motion by the sounding body, and by all its aliquots, perhaps to infinity….by
reinforcing one consonance and not the others, you disrupt the proportion….By nature
there is no other harmony than unison….M. Rameau claims that treble parts of a
comparative simplicity naturally suggest their basses, and that a man who has a true but
unpracticed ear will naturally intone this bass…. -Jean-Jacques Rousseau, ‘Essay on the
origin of languages’, trans. J.T. Scott, The Collected Writings of Rousseau (vol. 7,
Dartmouth, NH: University Press of New England, 1998) Rousseau goes on to be critical of
Rameau’s position, but he seems to simultaneously acknowledge its profundity by the
manner in which he describes it, while reinforcing a Pythagorean sense of unity. It is not
only in formalized artistic results, but in these little exchanges between human beings
where Tenney seems to find gestalt. The oscillations between people perceiving are active
structures in and of themselves, forming our realities. Even in the neurophonic music of
Maryanne Amacher (one of the few composers to yet expand phenomenological
aesthetics), or in La Monte Young's rejection of the 5th prime (the basis of Major-minor
constructs in European History), Tenney reveals to us the contemporary musician’s
situation in the greater historical aggregate. Tenney drew geometric structures for his own
musical and conceptual organization. Returning to his essay, I immediately turned to the
charts and began to follow an investigative exercise. He laid out two seemingly simple
pathways: an ordering of two parallel configurations fitting within a diatonic scale,
displaying the differences between Pythagorean and Just intervallic contents. He shows
that the increase in complexities through these two systems could be an argument for
what becomes increasingly dissonant to our ears, which is logical. So I’ve been sounding
through these progressions and comparing them, over and over, in succession. The
branches between these two progressions, he seems to suggest, become a determining
factor of functional harmony in the European traditions. My immediate perceptual
response is that the Pythagorean progression remains in a bright, activated quality,
through a multitude of sympathetic, spectrally aligned resonances. The further and further
it expands (multiplies), the more and more the harmonic beatings compress in range and
intensity, all the while maintaining their particularities. There is a threshold for each
individual as to where the intensity and compression becomes too much, too saturated to
distinguish the particular qualities clearly. For me, at this moment in time, the 243 touches
on an activated and expressive vibrancy, encapsulating all that has come before it as well
as suggesting what is beyond. Yet going further, to the last one in the list (the 729), the
quality begins to dissipate, or rather, its beauty is beyond my current comprehensibility,
too painfully compressed for my physical response to properly find pleasure in it, or the
vibrancy has actually disappeared (too many aggregates accumulating?). The progression
could continue beyond the intervallic system, thus defining an infinite number of qualities I
have yet to aesthetically comprehend or sound. I must consider these limitations, then,
that consonance and dissonance are regarding one’s own comprehension of the world
itself, and that what I understand now, others will understand more acutely (thus
differently) later. Since we cannot define beauty in concrete terms (other than Helmholtz
describing the roughness resulting from certain beatings and distortions), we are still left
with how to connect logic and intuition with a kind of searching intention into what we
cannot comprehend. If we are able to better name the thing being perceived as dissonant
more clearly (that is, by describing its acoustical properties—beatings, timbres, and
resonances), then we become more familiar with the sounding thing. The most profound
music to me has always first arrived from a place of confusion and in some cases disgust,
later transforming into incredible shades of beauty to be returned to. This was Tenney’s
intention with his last piece Arbor Vitae (referring to pathways leading to the cerebellum).
He constructed it beginning with harmonies beyond his own comprehension, slowly
descending to the roots of comprehensibility, aligning within the sounding body and then
out again. -May 2017, Stuttgart

ATMOSPHERES TRANSPARENT/OPAQUE
"To be listening is to be at the same time outside and inside, to be open from without and from
within, hence from one to the other and from one in the other."1 * In Magdeburg late this
summer, I witnessed, for the first time, a secondary rainbow. I realised this was its name, and was assured of its reality, because of my acquaintance with Catherine Lamb's Prisma
Interius series, which features an instrument built by her together with her partner Bryan
Eubanks, the 'secondary rainbow synthesizer.' Standing next to the organic structure of
Hundertwasser's Grüne Zitadelle, I saw the double rainbow light up the cloudy sky,
illuminating the Kunstmuseum Kloster Unser Lieben Frauen from behind. The primary
rainbow displayed a wide golden hue across the inner side of its arc, while the space
between it and its paler secondary twin was brought perceptually closer through the
colors they displayed in inverted order. Their mirrored edges (the outer curve of the one and the inner curve of the other) were highlighted in pink, and an infusion of pink spread across
the cloudy space between them, semitransparent on a field of blue. An eleventh-century
Romanesque monastery, the Kloster is now a contemporary art museum and concert
venue. Fitting that my experience should take place there, on the threshold between the
urban life of the city street and the flowing form of the Elbe River.
"We see no colour in its pure state, but every hue is variously intermingled with others: Even when it is uninfluenced by other colours, the effect of light and shade modifies it in various ways, so that it undergoes alterations and appears unlike itself. Thus, bodies seen in shade or in light, in more pronounced or softer sunshine, with their surfaces inclined this way or that, with every change exhibit a different colour."2
Imagine looking to the outside world through a long glass panel of changing color and density so that the bodies and objects on the other side appear as blurry colored forms. In a similar way, the secondary rainbow synthesizer filters the live environmental 'atmosphere' (microphones are placed outside the performance space) so that sonic information shifts between abstraction and recognition (narrow or wide filter) and the 'coloring/highlighting' of this atmosphere (through the filter's resonance) takes place in the 'tonal temperament' of each piece. This grounds environmental atmosphere (which has no center or sense of periphery) into a system of co-ordinates related to a fundamental tone. The outside world is thus incorporated into the musical work, and so too is the musical work incorporated into the outside world, connected and illuminated through extended harmonic space.3
"... listening opens (itself) up to resonance... resonance opens (itself) up to the self: That is to say both that it opens to self (to the resonant body, to its vibration) and that it opens to the self (to the being just as its being is put into play for itself)."4
1 Nancy, Jean-Luc, Listening, translated by Charlotte Mandell, Fordham University Press, New York, 2007, p. 14.
2 Aristotle, Treatise on Colours, in Johann Wolfgang von Goethe, Theory of Colours, translated by Charles Lock Eastlake, Dover, New York, 2006, p. 217.
3 The term 'extended harmonic space' refers to specific tones arrived at through what Ben Johnston has called 'extended just intonation', a term he introduced to describe compositions involving ratios that contain prime numbers beyond 5 (7, 11, 13, etc.). In 2003-4, composers Marc Sabat and Wolfgang von Schweinitz developed a notation system that continues Johnston's step, the "Extended Helmholtz-Ellis JI Pitch Notation," which Lamb utilizes in her compositions.
When starting a
piece by Catherine, I tune my instrument to a particular frequency and relearn the
geography of its tube; I alter the spaces of my body to increase the possibility of precision-
the shape of my mouth (formants), the cavity of my nasal passage, the position of my body-
all the time being led by my ears. My body learns its change in state between intervals and
my ears remember. When my sound is combined with others (resonating with others), I
listen or feel for combination tones, ear tones, beatings, shared partials. Being bound only
by the length of my breath, not by metrical time, I have the freedom to observe sound as a
phenomenon shared with others within structured form. I surf unisons that appear like
thick lines, I visualise vibrating patterns of complex ratios, hear my sound transforming
others and their sound transforming my own. I observe my own listening state, which, in its
purest mode, is light and detached and open. From the interior space of a room (perceived
as a wholeness around our bodies), the 'space outside' is imagined as a continuous spatial
area. Carrying the atmospheric effects of air and light, natural and man-made things (mountains, trees, rivers, and buildings, vehicles, streets) are also perceived as being
situated in the unending flow of this atmospheric space.5 The role of the secondary
rainbow synthesizer is like a perceptual bridge between the two spaces. At its narrowest
point, the filter produces the effect of concentrating the musical object inside the walls of
the space (while conceptually referring outwards, towards infinite space). As the filter
opens, our ears extend outward and begin to identify the sounds (construction works, children's voices, car engines).
4 Nancy, Jean-Luc, Listening, translated by Charlotte Mandell, Fordham University Press, New York, 2007, p. 25.
5 See Meisenheimer, Wolfgang, "Of the Hollow Spaces in the Skin of the Architectural Body," in Daidalos #13: Between Inside and Outside, September 15, 1984, Berlin, p. 103.
When the filter opens completely and
the synthesizer stops playing, our ears reach beyond the walls, localize the sound source
as being outside and identify place (while conceptually referring inward, to the contained
musical work and to enclosed space). (Later, we notice our heightened perception: the
wonderful S-bahn glissando, the tram tracks singing when a tram is approaching or
departing, the bedroom lamp emitting a high-pitched hum.) "In a moment, the shades of a tone coalesce with the others: the tone that opens into others, the tone that shifts and holds, the tone that splits into two. Tones becoming a wash (perhaps color exchanges in the wash, passing). Tone and timbre as separated or as combined elements; becoming an area, within space."6
* In rehearsal,7 the piece sounded new, like a different piece
(perhaps it will always be this way). We had moved rooms and different spectral
information was apparent to our ears; we heard each other differently. We were sitting in a
new arrangement in space (sound was being produced and reflected in new ways); we
ourselves were different from last time (emotionally, physically, psychologically); the more
familiar we are with the material, the more our listening is transformed; the filtered
environmental atmosphere sounded different (it was a different day, after all).
6 Lamb, Catherine, The Interaction of Tone (2012), p. 4.
7 Rehearsal for Prisma Interius VIII,
November 6, 2018, Berlin, with musicians Catherine Lamb (viola), Lucy Railton (cello), Jon
Heilbron (double bass), Rebecca Lane (tenor recorder), Xavier Lopez (synthesizer) and Joe
Houston (synthesizer). A single print is attached, quite low, to one white wall in the room
where Catherine works, Josef Albers' Study for Homage to the Square: Departing in
Yellow (1964). In this work I see four squares of yellow each diminishing in size and
gravitating around a descending center point. The largest square is dark mustard, then the
second, a lighter hue of this color. The third is a pale lemon yellow and the fourth, the
smallest, is a shade darker (almost imperceptibly so). Although I am observing four flat,
colored squares which are layered from largest to smallest, I cannot say which appears
closer to me. Is it the darkest or the smallest? Are the squares moving towards me or am I
looking into them? Are they four distinct shapes, or are they merging into one another? My
eyes cannot fix on a singular central point, causing my vision to expand toward
simultaneity and the movement between. Perhaps this is what Albers meant when he said
of this series in 1965 (which he worked on for 20 years until his death): "Choice of the
colours used, as well as their order, is aimed at an interaction-influencing and changing
each other forth and back."8 9 A prism is a transparent form that refracts light and
produces a rainbow (like sunlight through a raindrop). When we look at any object, our
eyes continually move around it as we visually comprehend its shapes and surfaces. In the
case of a transparent object like a prism, this complex process becomes even more so,
since we view some of its exterior edges and surfaces by looking through its interior.
Moreover, a prism absorbs the forms and colors of its immediate environment and
transforms them through the surfaces of its body. 8 Albers, Josef, in
[Link]
accessed November 9, 2018. 9 The title of Lamb's text The Interaction of Tone directly
references Josef Albers' book Interaction of Color (1963). In this book he outlines his
experimental approach to the perception of color, much like Lamb's text describes her
experimental and subjective approach to the perception of tone. Among the potted
plants on the windowsill in Catherine and Bryan's kitchen, hang small prisms of various
shapes; each one transmutes the outside world, reflects light internally and projects a tiny
rainbow, each of varying intensities, widths and lengths across the white wall. When I paint
I think and see first and most-color but color as motion Color not only accompanying form
of lateral extension and after being moved remaining arrested But of perpetual inner
movement as aggression-to and from the spectator besides interaction and
interdependence with shape and hue and light Color in a direct and frontal focus and when
closely felt as a breathing and pulsating -from within —J. A., 195910 10 Albers, Josef, "Words of a Painter," Art Education, Vol. 23, No. 9, Dec. 1970, p. 35. (the prisma pieces often deal with a shift in harmonicity, a shift in coloration. for each section is unfolding into a new harmonicity and then shifting, unfolding into a new. i'm trying to find this feeling of something rotating, something that is not quite linear. something more total. you could be
looking around it. or it's the air. i think a lot about crystal forms. for structures. if i'm
drawing out a shape it is often a crystal shape. you might get these very distinct triangles or squares or rectangles. if you look at it from the side you see this distinct form but if you turn
it, it becomes a slightly different shape and then you turn it again. but it is all completing
the whole structure. which is both very precise and also imprecise. it's precise in how it is
put together and how it is formed but then also one part is stretched or one part is
lopsided. each form is slightly different but at the same time the same. also finding this
point to this point to this point to this point. what is the role of sound makers, people
making sound and how those points collectively determine the perception of the shape.
the key is how to make an unfolding of the form so that it unfolds the perceptual space.
that's the thing that i'm most interested with form is how can a space be transformed or
how can a space be unfolded into. i feel like when i am in a musical performance what i
really want the most is to take away something and move inward and be present. so how to
activate that kind of a space. expanding the space. going inward but at the same time
expanding.)11 In Prisma Interius IX (2018),12 we
hear the gradual unfurling, section-by-section, of points in upper harmonic space
transposed down into human dimensions. Starting out within a narrow harmonic range (as
a quartet) and expanding outward (towards tutti), the musicians trace harmonic space
together. Woven lines (melody)-revealing the materiality of the instruments, parts of their
bodies stretched, lengthened or made shorter so that they can voice these precise
frequencies-are illuminated and dissolved by transparent and opaque clouds (harmony).
(Sharp cuts at the end of tones and sections help us to perceive the edges of these layers
and prevent them from resolving into habitually expressive "musical" endings.) (We hear
echoes of musical traditions in the scalic traversing of the chant, folk music, symphonic
music, opera, electronic music-clothed in opaque light.) (We discover the
interchangeability or fine shadings between traditionally distinct categories: verticality and
horizontality, consonance and dissonance, musical pitch and noise, the space between
two notes. Where does one become the other?) When the acoustic instruments gradually
reduce their presence in section 5, this designates time, in section 6, for the secondary
rainbow filter to more distinctly reveal itself/ its space (harmonic field)/ the place (in this
recording, Centre Pompidou-Metz). It is not until section 8 that the ensemble is gathered in
tutti, and this is the first and only time it takes place. The synthesizer does not reappear for the next and last section; however, we can perceive its aural stain coloring the remaining
material like an afterimage.13 Prisma Interius IX, excerpt of score, p. 1 11 Lamb, Catherine,
conversation with the author, Berlin, November 5, 2018. 12 Prisma Interius IX was
commissioned by Ensemble Dedalus in 2018 and premiered by the ensemble on October
4, 2018, at the Grand Théâtre, Albi (France). 13 Lamb, Catherine, conversation with the
author, Berlin, November 5, 2018. In her notation, Catherine rarely indicates
an exact dynamic or timbral technique; rather she uses language borrowed from color
theorists (Albers/Goethe/Wittgenstein) to describe the quality or intensity of sound. Key
words such as shadow, spectral, interaction, dispersed, vibrancy, coloration, saturation
appear in the explanatory notes, almost like the setting forth of a text score. For Catherine
it is a precise but intuitive way to describe how interaction between the instrumental parts
takes place so that the "overall harmonic space is allowed to unfold and shift as a totality."14 Utilizing this language frees the musician from implementing a learned, automatic gesture such as 'pp' or 'ponticello', instead allowing them to make individual
choices based on listening (through relating to others).15 On the wall, to the right of me,
hangs a square photograph of the ocean by Uta Neumann. When observing from the
bottom of the frame, this silvery-blue rippling body of water appears solid and close. As it
recedes into the distance, the water's corporeality diminishes as a silvery-grey descending
fog dissolves its horizon line. This vapory substance, an opaque transformation of water
and salt crystals in the air, blurs spatial distinctions. As with Turner's clouds, or an Agnes
Martin painting, dimensionality appears simultaneously close and deep and luminous.
Here, seeing can generate an inner sense of expansiveness. * Overlays Transparent/
Opaque, Overlay arrangement no. 6 (alto saxophone part) In the score for Overlays
Transparent/Opaque (2013),16 arcs of various lengths and heights represent individual
instruments: their "overlay" in linear time. These arcs are graphic representations of
"gradations of presence," the highest point of an arc representing a clear resonant tone
(presence in the tonal field) and the lowest, a quiet spectral noise (less present in the tonal
field). The composer asks not for a shift in amplitude between these two points, but rather
for a shift in perception from transparency into opaqueness, where the core of the sound
becomes clear, or less clear, to reflect "the presence of the relational material between
instruments."17 Since its pitch material is derived from the frequencies of the electrical currents in our everyday environment (50 hertz or 60 hertz, or a mixture, depending on the country), one can say that the instrumental parts in Overlays Transparent/Opaque act as an overlay to the
environment, as well as to each other. 14 Lamb, Catherine, Prisma Interius IX, score, Sacred Realism, 2018. 15 Lamb, Catherine, conversation with the author, Berlin, November 5, 2018. 16 Overlays Transparent/Opaque was commissioned by Ensemble Dedalus in 2013 and premiered by the ensemble on September 9, 2013, at Roulette, Brooklyn (USA). 17 Lamb, Catherine, Overlays Transparent/Opaque, score, Sacred Realism, 2013, pp. 4-5. In each overlay arrangement, we hear the layers arising and receding, pushing and
pulling but also embracing and cutting across one another. In this recording of Overlay
arrangement no. 6 (it appears last on this CD), the stringed instruments shade, in diverging intensities, a constant present, while the wind instruments insert two pedal notes (the
trombone) and a glistening shape (alto saxophone and flute lines), forming a shifting
conglomerate whole. (overlays are literally like little miniatures. like a shape each. what is
transparent, what is opaque. something you can see through to another tone. what is solid.
like color. the line between. do they remain distinctly two things or do they combine.
combining into harmonicities and harmonic space.)18 Traditional notions of virtuosity have
no place in Catherine's music, instead a different, subtle kind of virtuosity is called for; one
founded in perception. It is an invitation to listen, to open up a sensitive listening space
where boundaries between musicians, audience, and the environment are fluid. The
material asks me to listen and when I listen, others are invited to do so too. Catherine does
not expect precision from musicians (“... the imperfections of instruments and tools, the
changes in air density, and environmental chaos... pure ratios are exactly between the
unnatural and the natural world")19 but is concerned, rather, with a clear attempt towards intonation, and "failing beautifully."20 18 Lamb, Catherine, conversation with the author, Berlin, November 5, 2018. 19 Lamb, Catherine, Interaction of Beings, 2017, [Link] accessed November 7, 2018. It is through this
attempt that a heightened listening space is activated. Familiarity with the material (the deeper you get into it) is proportionate to how subtle and activated the sonic material appears, but the "beautiful" listening state still remains.21 * (the intention is to narrow the
filters and to approach a kind of thread that could have a feeling of an infinite [Link] your inner points of listening that is very individual and personal, from your point you could listen with the others into the outer atmosphere and see the connectivity of everything, that's ideal. that's what i am trying to find. that space. what is the limit of connectivity from your point to the absolute, outside.)22 tri-forms no. 2, Catherine Lamb 20 See Lamb,
Catherine, [Link] In
The Interaction of Tone, Lamb writes about her former teacher: "Mani Kaul described the
musician/being as a moving, fluctuating consciousness, and in her striving for perfection,
she fails in unusual and distinctly personal ways. The sound is interacting with the being
making it." 21 Lamb, Catherine, conversation with the author, November 5, 2018, Berlin. 22
Ibid. Atmospheres are surfaceless spaces.23 As opposed to looking at a painting or
an object where one sets up a relationship of distance, to be in an atmosphere is to be
amidst something.24 Being of the air, sound also has no surfaces and so in this sense can
be called an atmosphere, like the weather. It moves in us and around us. But sound or
more specifically, tones, can also be the carrier of an atmosphere of feeling. In this sense,
musical works, as collective experiences, can generate a home for the emergence of such
atmospheres.25 In the concentrated listening to the shifting interaction of these tones and
their interior dimensions, an atmosphere of expansive relations can be felt, linking inner
and outer worlds and opening us to the interrelatedness of all things.26 * "Through
elemental, relational, layering, we begin to listen to the reality of the world, more closely,
more intimately. Through that intimate space is transformation."27 -Rebecca Lane 23 See
Schmitz, Hermann, "Intensität, Atmosphären und Musik," in Hermann Schmitz, Atmosphären, Karl Alber Verlag, Germany, 2014 (English translation forthcoming). 24 Cobussen, M., Schulze, H., and Meelberg, V. (eds.), "Towards New Sonic Epistemologies,"
in Journal of Sonic Studies, vol. 4/1, May 2013,
[Link] accessed November 10, 2018. The authors
refer to the essay by Peter Sloterdijk, "Wo sind wir, wenn wir Musik hören?". 25 Schmitz,
Hermann, "Intensität, Atmosphären und Musik." 26 See Lamb, Catherine, Interaction of
Beings, 2017, [Link] accessed November 7,
2018. 27 Lamb, Catherine, The Interaction of Tone, p. 5. Rebecca Lane is a musician
who explores intonation using various flutes (microtonal flutes, recorder) and voice. She is
a colleague of Catherine Lamb and has performed many of her works. Composer's Note I
have been attempting to describe, in more elemental terms, the perceptual roles between
musicians who are activating interactions in harmonic space. Overlays
Transparent/Opaque (2013) was an initial attempt (as was Material/Highlight) towards
showing forms aside phenomenological clarities in which to enter from relational and
therefore parallaxical points, in this case through shifting overlays. As though to place
individual crystals, one by one, amongst the musicians, and to have them find their place
of vibrancy or shadow due to the angle in which they are seeing the form. Rather than
terms like loud/soft or foreground/background, opaque might suggest a tone that is filled,
dense, and vibrant, whereas transparent might indicate a tone that is losing its
fundamentality, becoming fused into the intensity of opacity; or that one might see through
its sound, becoming atmospheric. The seven overlays are in constant flux, but the forms
are synoptic, placed on their own and in their own space, as objects. Prisma Interius IX
(2018), in contrast, would be one large crystal placed amongst the musicians, rotating with
filtered light. So that each unfolding of the tonalities illuminates the form that is always
present, allowing for a feeling of constant expansion. Here the roles have become tertiary
distinctions: "clear," "shadow," and "spectral." An individual with their instrument adds
further complexity to the reduced terminologies once they produce sound in the air, and
that sound is combining with another's. So if the attention focuses on producing a vibrant
tone that can interact clearly with another, that in itself is enough. "Shadow" and "spectral"
indicate that an individual's tone situates within the atmosphere of the total sound,
activating what is already there by highlighting it or becoming transparent to it. Prisma
Interius IX is the culmination of a series of pieces written between late 2016 and summer 2018, examining particular (perhaps archaic) musical roles, and how they situate within the phenomenological/perceptual space my work has been growing into for the past fourteen years. Elemental questions have been important in the series, like how is one
tone a pivot between activating a total harmonic space as well as expanding a contour in
time? There were many threads in the series, such as how to create structural changes
through various conceptual shifts of a prism, the role of the voice, but the most obvious
was the development of the secondary rainbow synthesizer, in collaboration with Bryan
Eubanks since 2014, named after the faint shadow to the more brilliant primary visual. The
instrument filters the adjacent environment to the listening space by literally fusing
harmonically with chaotic atmospheric elements being picked up by the microphones
outside. The role becomes a kind of highlighting continuo or tanpura to the more clearly
articulating musical activity played by the ensemble, while also attempting a bridge for the
listener towards an infinite, expanding space (in ideal terms). It is felicitous that the last
piece in the series is a large-scale aggregate for Ensemble Dedalus, who are friends.
Catherine Lamb (b. 1982, Olympia, WA) is a composer exploring the interaction of
elemental tonality and their shades. She began her musical life early, later abandoning the
conservatory in 2003 to study Hindustani music in Pune, India. She received her BFA in
2006 under James Tenney and Michael Pisaro at CalArts in Los Angeles, where she
continued to compose, teach, and collaborate with musicians such as Laura Steenberge
and Julia Holter on Singing by Numbers. In 2008 she received a W. A. Gerbode Foundation and W. & F. Hewlett Foundation Emerging Composers Initiative grant for Dilations, premiered at
the Other Minds festival in San Francisco. She mentored under the experimental
filmmaker/ Dhrupad musician Mani Kaul until his death in 2011. In 2012 she received her
MFA in music/sound from the Milton Avery School of Fine Arts at Bard College in New York.
She toured Shade/Gradient extensively and was awarded the Henry Cowell Research
Fellowship to work with Eliane Radigue in Paris. In 2013 Lamb relocated to Berlin,
Germany, where she currently lives, and has written for ensembles such as Konzert
Minimal, Dedalus, NeoN, Ensemble Proton, and the London Contemporary Orchestra,
while collaborating regularly with Marc Sabat, Johnny Chang, Bryan Eubanks, and Rebecca
Lane. Her first orchestral work, Portions Transparent/Opaque, was premiered by the BBC
Scottish Symphony Orchestra at the 2014 Tectonics Festival in Glasgow and was
conducted by Ilan Volkov. She is a 2018 recipient of the Grants to Artists award from the
Foundation for Contemporary Arts, a Staubach Fellow for the 2016 Darmstadt Summer
course, as well as a 2016–2017 Schloss Solitude Fellow. Her writings/recordings are
published in KunstMusik, Open Space Magazine, QO2, NEOS, Another Timbre, Other
Minds, Winds Measure, Black Pollen Press, and Sacred Realism.
[Link]/catlamb Since 1996, Ensemble Dedalus has been formed, one by one, by those interested in the experimental nature of the work, the egalitarian atmosphere, or simply the depth of musicality. First initiated by guitarist Didier Aschour and flutist Amélie Berson, it is now a highly regarded, modular ensemble known for its
long-term relationships with such composers as Tom Johnson, Christian Wolff, Pascale
Criton, and Michael Pisaro. Collectively, the ensemble finds camaraderie in the work that
invites the interpreter into expanded creative roles, such as open/improvisatory elements,
geometric/non-linear forms, or the total listening space that is asked of the musicians,
sometimes even playing different instruments. One could say the group functions together
more like a rock band, which becomes more apparent in their committed realizations of
Moondog, but also in the striking way they interpret Music With Changing Parts (Philip
Glass). Though its members have arrived together from vastly varied and skilled musical
lives, be it baroque, free-improvisation, spectralist, jazz, or minimalist, each of these unique, high-caliber individuals brings something exceptional to the group. Based in
France, its members have expanded to other regions (Italy, Spain, Switzerland, Germany...). The ensemble is not defined by a particular aesthetic, but rather by the process of realizing a piece of music together. They also choose to work with composers who are
blurring the edges between artistic forms and hierarchical roles, and as a result become
part of the collective ensemble. Ensemble Dedalus: Didier Aschour, electric guitar, music
director Amélie Berson, wood and metal flutes Cyprien Busolini, viola Yannick Guédon,
voice, treble viola da gamba Thierry Madiot, trombone Pierre-Stéphane Meugé, saxophone,
synthesizer Christian Pruvost, trumpet Silvia Tarozzi, violin Deborah Walker, cello, voice
SELECTED DISCOGRAPHY in/gradient. Sacred Realism sr004. Mirror. Neos 11501.
shade/gradient. Black Pollen Press BLKPLN03. three bodies (moving). Another Timbre
at53r. untitled 12 (after agnes). Sacred Realism sr001. SELECTED BIBLIOGRAPHY Lamb,
Catherine. "The Interaction of Beings." Schloss—Post/Schlossghost#2 ([Link]
[Link]/the-interaction-of-beings/) -. "The Interaction of Tone." KunstMusik #17 (Spring 2015): pp. 14-21. -. "Moments of Air." The Open Space Magazine, issue 10: pp. 20-24.
Produced by Dedalus & GMEA—National Center for Musical Creation (Albi, France)
Engineered and mixed by Benjamin Maumus Recorded July 2-7, 2018 at Centre Pompidou,
Metz, France. Mixed October 16–19, 2018 at GMEA, National Center for Musical Creation,
Albi, France. Digital mastering: Paul Zinman, SoundByte Productions Inc., NYC Front
cover: Seaside On A Soft Day, from the series "No Naked Lights," 2005, C-Print. Copyright
Uta Neumann. Used by permission. Back booklet cover: Turmalin/Splitter (scattered
stone), 2014/2017, C-Print-Paper on PhotoCardboard. Copyright Uta Neumann. Used by
permission. Design: Bob Defrin Design, Inc., NYC. All compositions published by Sacred
Realism. This recording was made with the support of FACE and SPEDIDAM. Thanks also to
Association Fragments (Metz) and Centre Pompidou Metz. This recording was also made
possible by a grant from the Francis Goelet Charitable Lead Trust. FOR NEW WORLD
RECORDS: Lisa Kahlden, President; Paul M. Tai, Vice-President, Director of Artists and
Repertory; Paul Herzman, Production Associate. ANTHOLOGY OF RECORDED MUSIC,
INC., BOARD OF TRUSTEES: Amy Beal, Thomas Teige Carroll, Robert Clarida, Emanuel
Gerard, Lisa Kahlden, Herman Krawitz, Fred Lerdahl, Larry Polansky, Paul M. Tai. Francis
Goelet (1926–1998), In Memoriam For a complete catalog, including liner notes, visit our
Web site: [Link]. New World Records, 20 Jay Street, Suite 1001,
Brooklyn, NY 11201 Tel (212) 290-1680 Fax (646) 224-9638 E-mail:
info@[Link] & © 2019 Anthology of Recorded Music, Inc. All rights reserved.
Printed in U.S.A.

Harvard Lecture: the Form of the Spiral


Catherine Lamb I come here as an active composer and an amateur theorist/musicologist
(I look to the French definition, “the love of…”) I would like to discuss the future of musical
experimentation, through reimagining the elements of musical form and our perceptions of
harmonic space. We are transitioning into a new era of music. How we listen and how we
perceive is different than it was one decade ago, 50 years ago, 100 years ago, 500 years
ago... We cannot and do not know what the music of Gioseffo Zarlino or Mian Tansen was
really like, because they were experimenting from their current positions in the world, (16th
century Italy/India), but we can imagine some things from our own position towards them
now. So that everything begins from our individual point outward
(forward/backwards/up/down/in/out), through filtering and windows of reality and in our
own interactions with others, throughout our musical lives. As James Tenney said, if one is
investigating and furthering their work, then one is in the act of experimentation. So that if
we question what we do and push it forwards in some capacity, we are experimenters.
Experimental Music as a term has become tainted with its association in historical
semantic meaning, but if you look at thousands of years of experimental music all over the
world, from Pythagoras to Zia Mohiuddin Dagar to Sun Ra, then the term becomes less
tainted and more accurate to the act of what we are doing when we are composing music.
Maryanne Amacher in the early 90s indicated that we were moving into a new era, the one
of the listener’s music, based on the listener’s initiative. So are we there now? and if so,
what separates this from any other musical phase that has already become? She argues
that the active listener is the experimenter. Further, that the active listener has an entirely new neurological structure compared to the previous listener, and therefore perceives differently. I
am going to follow this logic, that we are indeed in the era of the new listeners or the new
perceivers. In this new era, we need to divorce ourselves from habitual musical thinking
and terminologies. Not that we can’t describe rhythm, melody, harmony as we have, but
that we need to expand what these elemental musical terminologies really mean and
function as. Standard harmonic theory simply does not apply anymore to the modern
composer. It was helpful 100 years ago when looking towards a particular past, and then
when serialists took that particular theory and inverted it and warped it, or how Ruth
Crawford Seeger approached her own dissonant counterpoint. It has been helpful for
songwriters, and a more complex variation of it for jazz musicians over the past 100 years,
but now in our new era of the listener’s music, this simply does not apply anymore. Let’s
consider the intersections between, for instance, Cage’s idea that the world itself is a
psycho-acoustic space with the ancient Sanskrit term Sruti, or, “that which is heard”. Our
collective perceptions have changed, just as our collective neurological pathways contain
a new architecture. What we listen to in the chaos of the world is particular to our own and
collective investigations, and the chaos (our surrounding environment) changes. How
many of you follow Gioseffo Zarlino’s study of consonance and dissonance in contrapuntal
harmonic progressions? How many of you follow Schoenberg’s analysis? I would imagine a
very rare composer today utilizes these theories in their own work, other than trying to
understand through a new modernized filter, looking to the past, to understand where we
are now. I mention the need to re-imagine harmonic theory because Tenney was asking
this in the 90s and we have still not universalized the concept of multi-dimensional
harmonic space as opposed to flattened harmonic verticality, which is one very critical
step into the new era of listeners. To the modern experimental composer, harmony is often
avoided altogether. Instead the composer turns towards
rauschen/noise/timbre/chaos/de-tonation/spectrality, pursues randomized or
approximated harmony (as in happened upon microtonal variations or using standardized
equal tempered tuning as a measuring tool to go against). Composers have either taken
standardized harmony for granted, OR have gone completely in another direction more
linked to the practice of other art forms by utilizing conceptual, theatrical, performative, or
visual pursuits in a manner where harmony is not an important factor. Electronic
composers (as Eliane Radigue and Maryanne Amacher have demonstrated) have arguably
been most directly capable of divorcing themselves from standardized harmonic musical
theory by diving rather into pure acoustical materiality. However, acoustic composers are
just as capable as demonstrated by many who are currently composing today. So let us
describe together multi-dimensional harmonic space rather than speak of a flattened
vertical one (which standard pedagogical training would have us visualize). The vertical
harmonic thinking certainly still persists today (as we all know, because of our training),
but it is the very thing that most modern composers hit a brick wall with when approaching
harmonicity in their writing, due to its imprecise and narrow world vision (this universalized
tempered logic of the past 200 or so years AND its theoretical genetic makeup). Not that
the evolution of the piano, for instance, doesn’t have its musical place today, but we
simply don’t need to be so attached, as composers, to the piano anymore. The piano does
not inform us of all that we need to know about harmonic space, it simply does not. It can
suggest and approximate and inspire us. It can be used as an instrument, but not as TRUTH
(particularly regarding harmonic structure). Let there be death to the romantic vision of the
composer composing at the piano. It's not useful for us anymore. Instead, I propose that you re-imagine with me today the elemental shape that organizes our harmonic structure. The
slide show* that has been happening might give you a clue as to what shape I might be
referring to. This is a rather universal and primal form to all of us. Not just that it references
the galactic as well as the molecular, or is a great inspiration for artists and thinkers in
every century, but also that you can pretty much find this shape anywhere, particularly in
the manner in which things move and grow and are inherently structured. This shape could
be more useful than a straight line. In actuality, it is the result of a straight line. If we were
to perform together La Monte Young’s 1960 Draw a Straight Line and Follow It, if we really
truly performed it to perfection, we would, in fact, be drawing a spiral. When we listen to
two tones in interaction, we can also define their meeting points on an x/y axis, suggesting
horizontal/vertical constructs, but the singular crossing point generates a multiplicity of
x/y points that extend and hover, in space, and in our inner being. We can draw straight
lines between the interacting points, and from there pull the x/y into other dimensions.
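The shape the lecture invokes can be stated concretely. As a minimal sketch (the parameters a and b below are illustrative choices, not drawn from the lecture), the logarithmic spiral r = a·e^(bθ) is self-similar: advancing by one full turn multiplies the radius by the constant factor e^(2πb), wherever on the curve you start, which is why the same form can describe the galactic as well as the molecular.

```python
import math

def log_spiral(a: float, b: float, theta: float) -> tuple:
    """Point on the logarithmic spiral r = a * e^(b * theta), in Cartesian coordinates."""
    r = a * math.exp(b * theta)
    return (r * math.cos(theta), r * math.sin(theta))

# Self-similarity: one full turn scales the radius by e^(2*pi*b),
# regardless of the starting angle. a and b here are arbitrary.
a, b = 1.0, 0.1
growth = math.exp(2 * math.pi * b)
for theta in (0.0, 1.0, 5.0):
    x1, y1 = log_spiral(a, b, theta)
    x2, y2 = log_spiral(a, b, theta + 2 * math.pi)
    assert abs(math.hypot(x2, y2) / math.hypot(x1, y1) - growth) < 1e-9
```

This constant-ratio growth is what distinguishes the logarithmic spiral from an Archimedean one, whose radius grows by a fixed amount, not a fixed proportion, per turn.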
Let’s imagine together for a moment, that there are no more letters in musical language
(a/b/c), when conversing about tonal relationships. The “A” string, or the “A” fingering, or
the “A” key, doesn’t exist anymore. We are post-alphabet. Rather, let’s humor Plato for a
moment and simply talk about the numbers. So we have the number 1, in which an
instrument is centered, (or isolated/filtered with finger or air) and from there, multiple
shapes are swirling outwards, in the form of logarithmic spirals. Between these spirals we
can find absolute points where interconnecting lines might be drawn. These are points
where harmonic and spectral alignments might be found as common partials and
fundamentals between two or more shapes. (one tone = one shape or spiral). The
logarithmic spiral suggests the ideal image that our neurological selves complete internally
while listening to an approximated relation and their corresponding series. (Absolute
perfection only exists inwardly, never outwardly). If a tam-tam, for example, were placed in
relation to what I will describe as one focal point (where a series is generated, or a
fundamental sounding tone), the spiral of that tam-tam might contain some bends and
contortions, however when placed next to the bowed, open string of a cello, or a low
frequency of a bassoon, which would both have various portions of their spirals enlarged or
emboldened over the others (such as formants), between these two or three sounding
spirals, logarithmic shapes that are very clear begin to form, and our inner beings complete
the missing links the materiality is generating.
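The common partials and fundamentals between two such shapes can be sketched numerically. In this illustrative example (the base frequency and the 3:2 ratio are my own choices, not taken from the lecture), two tones in a just ratio share a first common partial at the least common multiple of their fundamentals, alongside the difference and summation tones:

```python
from math import gcd

def two_tone_points(f1_num: int, f2_num: int, base: float) -> dict:
    """For two tones in a just ratio f2_num:f1_num above a shared base
    frequency, list the additional points their interaction generates."""
    f1 = base * f1_num
    f2 = base * f2_num
    # The first common partial is the lowest frequency present in both
    # harmonic series: the least common multiple of the two fundamentals.
    lcm = f1_num * f2_num // gcd(f1_num, f2_num)
    return {
        "tone 1": f1,
        "tone 2": f2,
        "difference tone": abs(f2 - f1),
        "summation tone": f1 + f2,
        "first common partial": base * lcm,
    }

# A perfect fifth (3:2) over a 100 Hz base:
points = two_tone_points(2, 3, 100.0)
# tone 1 = 200.0 Hz, tone 2 = 300.0 Hz, difference tone = 100.0 Hz,
# summation tone = 500.0 Hz, first common partial = 600.0 Hz
```

Counting the two sounding tones themselves, a single interval already yields five audible points, before timbre and room resonance add their own.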

* this lecture is accompanied by a slide composition made up of 125 spirals

This brings me back to
Amacher’s reference to the listener’s visualizations of shapes when listening to music. It is
through relation that we find these ideal forms. One tone in relation to another suggests
triangular motions or formations. In fact, when you have two tones, five already immediately
exist (the difference tone, the summation tone, and the first common partial), in addition to
the unique spiraling timbres of the two generating points themselves and the extended
common alignments, not to mention room resonances etc... When we talk about these
points that exist we immediately are describing both an inner and an extended outer
experience. The inner ear and the mind together, interacting with the acoustics of the
space one is listening in and responding to resonances being produced, are also in relation
to the greater atmosphere (or the known surrounding environment). All of these things
come together to find corresponding, simplified, and idealized points and lines forming
shapes. The closer the focal points are situated in clear and defined relation to one another,
the clearer the shapes appear in the mind's eye of the perceiver. In Peter Ablinger’s
Weiß/Weißlich, for solo cymbalist and ensemble (one piece in his extended series), the
unique cymbal is first recorded and sent through spectral analysis software, which then
reads out a field of approximated tones for the ensemble to read from. The open ensemble
softly sounds this open field below that of the cymbal, which is sounding for a long
duration at an even and full volume. What happens then is a sense of highlighting towards
points that are accumulating together over time in the space. In this case, precision of
pitch is not necessary because it is pointing towards an impure spectrum that in
aggregated time is also suggesting a pure spectrum… so that you have instruments
suggesting the spectra of the cymbal and the cymbal suggesting the spectra we would
imagine in our own neurological structure. As Ablinger would say, the real is imaginary. So
what, then, is the experience of a piece of music? I will focus attention on form since that is
the subject today. James Tenney defined form in four categories. I’m sure we could all
come up with variations of these but for now let’s look at his four simply because they are
useful.

The first is the element. I would consider this to be one tone, one timbre, one articulation, one
syllable, or one materiality that in some manner is singular to itself (even if it might suggest
something more complex, its function in the sounding environment is singular). In this
sense, perhaps a noise, particularly a computer generated one, could also be described as
an element, depending on how it is perceived, just like the slow roll of a tam-tam might
already move towards the next category, depending on whether you are listening to it as a
singular element, or a complex event.

To me, Tenney’s reference to clang (which he
describes in META Meta+Hodos and other texts, yet it is still somewhat undefined to
me…) moves into the direction of interaction between elements. For instance, I like to
describe the study of tonality as the interaction of tone, simply because the focus of the
listener moves towards the relation rather than the absolute. So here, clang, is where
elements begin to aggregate together. The most simple is one tone sounding with another,
but it also refers to the being's perception of one thing or event with another, so the term
takes on a more generous position. We could call it the complex sounding event of the
tam-tam element or one noise containing multiple parts. It also includes how we perceive
the past in a given moment, so that the memory element might mix with a current element
and therefore create an inner sounding clang. Regardless, an element is always in
interaction with another, but we need to define an element as such so that we might use it
as a particular focal point to our own construction of a temporal form. The listener
constructs the form. So we create our own filters. Maryanne Amacher often described the
listener hearing shapes in music. Amacher’s shapes would be Tenney’s clang. The critical
instrument in Indian Classical music, the tanpura, functions both as element and as clang,
where it focuses the attention on a particular clang that is in constant motion, for the
musician to be in direct interaction with. A melody can also be a clang depending on how it
is used. A melody can define and highlight shapes and reveal the clang more clearly to the
listener. A melody can also link the memory space with the actual space. A melody is the
movement of the elements, occasionally generating clearly defined interactions as it
moves.

I describe a sequence as a shift or transition from one clang/element
to another, and then another, and so on. Depending on how microscopically you are defining a sequence, even the change of a bow or breath playing the same tone could be a shift to a new sequence, just as a new complex event in Xenakis’s music might be. The point is more to define where an element, a clang, and a sequence
might have differing perceptual characteristics. A melody can be perceived as a clang if it
is reinforcing the elements to create a moment-form event. A melody can also be a
sequence as it unfolds the piece itself.

That brings us to the final category, the
piece, or the total recognition of form made up of smaller proportions. A piece is what you
are left with or walk away from after having an experience. Without trying, temporal form is
created any time a listener has an experience within a kind of frame. If the listener steps
into a sound installation, what they leave with is the frame of their experience in the
temporal happening of their arrival and exit. Therefore, total form is unavoidable, but it can
also suggest something much larger than itself. One of my favorite sound installations is
Max Neuhaus's Times Square, simply because of what it asks of the perceiver on a
concrete island in the middle of total chaos. It places its own frame on the space through
subtle tones emerging from air vents below, and when the perceiver focuses on the
elements, clangs, and sequences that the piece directs attention toward, the
surrounding space transforms. When the perceiver walks away, they are left with the
memory of that transformation. (Gestalt is an organized whole that is perceived as more
than the sum of its parts). In my own work, I am often interested in the long introduction
form. This is most likely inspired by my personal interest/study of a particular dhrupadi
approach to establishing a raag. In this dhrupadi approach, (referencing Zia Mohiuddin
Dagar) every element is a clang, every clang a sequence, and every sequence is the total
form or gestalt perception. When performed well, the small defines the large through its
own subtle suggestion of movement and timbre. When performed as such, the listener
only requires the experience of the alaap and nothing else. In standard Hindustani music,
alaap is generally the first of three major portions within a musical piece, and is generally a
short introduction to what comes later in the sequence. In some practices, alaap is the
total piece and can last around 45 minutes. “In order to make explicit the concept of a total
musical space, I employ Akbar Padamsee’s formulation of the included/excluded space in
painting. The notes included in the melodic structure constitute the consonant space
whereas the excluded make up the remaining into a dissonant and absent space. The
melody when restricted to its ‘sweet’ character, in fact, excludes the excluded space and
therefore in its elaboration fails to achieve the status of what has been earlier termed the
perspectiveless totality. The absent notes impinge on those present and threaten to
disintegrate that melodic structure if brought out in the open. Between any two included
notes in a raag lies in darkness the excluded area. But in a way similar to how all-rational
discourse has eternally addressed itself to the irrational, the structured melody addresses
itself to the unstructured dissonance. The dissonant area for a specific raag permits a
specific path to be traversed by the luminous shruti and of course traversed in a certain
way. When the included-excluded space is thus brought together to actively shape the
elaboration, the total space, the unified space or the integral whole seems to emerge from
the individual features of a raag, without in any manner mutilating the sensuous extensions
of these individual features…. To the whole form of perspectiveless rendering, a duration
or half and quarter of that duration prescribes no structural measures to be faithfully
observed. Nor working within a finite space of time, the music obviously does not, in terms
of its expansion, strive for an infinity. On the contrary, absence is felt as a real experience
of space and the purpose of such an experience is to unfold an ultimate quality in
attention….” - Mani Kaul, “Seen from Nowhere”, in Kapila Vatsyayan (ed.), Concepts of Space: Ancient and Modern, New Delhi, 1991.

square wave and its progeny

The square wave is the ‘electronic signature’ of anamorphosis: an alternation between two states without an
intermediary position. What if this diagrammatic form should also apply to the alternative
between ‘action and exposition’ in narrative, or to the opposed positions of the Hegelian
dialectic? As if that’s not enough, consider the Bergsonian time sequence, which Henri
Bergson compared to the cinematic progression of still images that psychologically
produce the illusion of movement, via the phi (really ‘beta’) phenomenon. But, wait!
There’s more! The temporal sequence contains, by virtue of its coherence, an uncanny
relation to the forms of discourse Lacan outlined in The Other Side of Psychoanalysis.

1. square wave basics

The square wave expresses anamorphosis’s center-most feature: its
ability to waver between one form and another without passing through a middle,
composite version. It’s either a duck or a rabbit in the classic psychology class illustration.
In Bergson’s classic/infamous adaptation of cinema projection to describe the relationship
between ‘time sections’ and the phenomenon of durée, he appropriated the so-called ‘phi
phenomenon’ (really the ‘beta’ phenomenon say purists) to explain how smooth motion
could be perceived in the face of evidence that was fragmentary and static. Even with a
notable and irreducible gap between time ‘slices’ (the individual still photographs), the
mind stitched the images together. This metaphor was so compelling that Bergson
extended it to normal perception: our idea of ‘what’s happening’ — the meaning and
structure of an event — is a stitch job. The ‘phi’ (φ) in the case of ordinary time sense is the
metaphoric continuity that identifies an event as what it is, distinguishes it by attributing an
intentionality, a past, and a future. In this way, the immediate and fragmentary evidence of
the senses finds a ‘ready-made’ place in a framework that is learned from culture and
experience. ‘What’s happening?’ is the tonality behind the particular individual formation
of any otherwise unique action. What could we gain by comparing the phi progression
(although we may, with Deleuze, reject the logic behind Bergson’s comparison) to the
‘anamorphic’ square wave? The basic anamorphic condition proposes an active role of the
‘negative’ or ‘antipodal’ counterpart of the more normative perception. The inverse of the
positive perception of reality is not, in this case, the black bar separating the photo images
on celluloid. It is whatever is provided by the perceiver, at the ‘invitation’ of the Real, so to
speak. Where does it come from? Memory and imagination. In the classic alternation of
action and exposition in narrative, the imagination is pulled into the ‘real time’ of the story
to flesh out the interactions of characters and framing of the setting. Exposition —
knowledge that frames action to let us know what is happening, how, and why — is added
through a variety of techniques. A character can function as an ‘informer’ about the past or
context. That informer can be truthful or not. The device of a narrator can use an objective
or subjective voice to frame events and consequences. The story-in-a-story (mise-en-
abîme) can subtly serve as a backstory or analogy of what is happening in the ‘present’ of
the story. In the anthology, a linking tale gives a context to a sequence of stories that may
be strangely connected. In all cases, a hypnotic effect comes from the movement back
and forth across the line separating fiction from reality, and fiction from a fiction within it.
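The wave described here can be written down directly: an ideal square wave is sign(sin ωt), and its Fourier series, (4/π)·Σ sin((2k+1)ωt)/(2k+1), contains only odd harmonics, so any finite sum overshoots at the jump (the Gibbs phenomenon) rather than settling into a composite middle value. A brief sketch; the sample points and term count are my own, purely for illustration:

```python
import math

def square_wave(t: float) -> float:
    """Ideal square wave: jumps between -1 and +1 with no middle value."""
    s = math.sin(t)
    return 1.0 if s > 0 else (-1.0 if s < 0 else 0.0)

def fourier_square(t: float, n_terms: int) -> float:
    """Partial Fourier sum of the square wave: only odd harmonics appear."""
    return (4.0 / math.pi) * sum(
        math.sin((2 * k + 1) * t) / (2 * k + 1) for k in range(n_terms)
    )

# Away from the discontinuity the partial sum approaches the two flat states;
# near the jump it overshoots instead of passing through an intermediate value.
print(square_wave(math.pi / 2), round(fourier_square(math.pi / 2, 500), 3))
```

However many terms are summed, the alternation never becomes a blend: the approximation hugs +1 or -1 and rings at the transition, which is the diagrammatic point being made about anamorphosis.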
2. storyboarding

The shots in a film typically hold to a principle of continuity. Rules
must be followed to avoid disorienting the audience in what in reality may be a set of
disjunct filming constructions made at different places and times. How is continuity
provided in real life, where the scenes are what the psychologist Roger Barker called
‘behavior settings’ — stages where subjects meet, respect their roles, and obey systems of
decorum? Generalizing from film to life acknowledges a fundamental link: the square wave
operates at all levels in the life of the mind — in ‘live theory’ as well as life looked at from a
theoretical perspective. This is the fundamental presence of the ‘uncanny’ as a blurring of
dichotomies that, like inside/outside, dark/light, life/death, etc. are not simply overlapped
or confused but structured ‘like a language’, that is to say, topologically. This is a topology
that, unlike other spatial models, must implicitly involve the temporal succession of
events, the structure of events, and the notion of time sections that motivated Bergson’s
theory of the durée, duration. The temporal succession, viewed as a square wave, involves a
gap that refuses mediation between the two opposed states. In the classic live theater
auditorium, the house lights are either on or off. The audience in the darkness watches the
lit scenes on stage; the lit auditorium ‘turns off’ the centrality of the performance. When,
as in Woody Allen’s The Purple Rose of Cairo, an actor attempts direct contact with the
audience, this attempt is ‘blind’ — a stare into a theoretical as well as real darkness, an
absolute barrier that, unable to see normally, induces a ‘blind sight’ within the viewers in
the auditorium. In effect, the blindness of the actor is a counterpart to the conditioned
sight of the audience, which ‘sees what it’s supposed to see’ through some mysterious
rule that interpellates the imagination to follow preformed paths.

[diagram (square wave, donald kunze): the classic duck-rabbit illusion drawing is interpreted either as a duck OR a rabbit, but there is no hybrid species in between the two. This is, graphically, the ‘square wave function’, a wave that jogs abruptly between two states, leaving no middle ground in between.]

This radical dichotomy-plus-gap is
embedded in the word and idea temps, which in French means both time and weather, two
systems, one ‘fast’ and one ‘slow’, that operate simultaneously. In the fast system, space
is divided between viewer and viewed, split by a proscenium plane of representation which
the imagination crosses, disembodied, and reincarnates the body within the terms of the
work of art. The square wave, like any temporal sequence, is subject to the effect of ‘mise-en-abîme’ (story in a story, image in an image, etc.). The upper or lower horizontal bars can
themselves be square waves, as the effect of mise-en-abîme explores the depth of the
segment. This fractal quality further ‘blurs’ the division between the Enlightenment
opposition of terms. There can be ‘darkness shining through the light’, as the poet William
Blake put it (and James Joyce quoted), the Lacanian extimate (subjective objectivity,
objective subjectivity), an externalized interior and vice versa. In such classic films as
Kurosawa’s High and Low and Hitchcock’s North by Northwest, directionality and space
are referenced directly. In High and Low, a kidnapper spies on a rich industrialist from the
slums of Yokohama spread out below the villa. This interpellation (the kidnapper’s
demands) leads to the ‘police procedural’, the exposition of the film, which forms an
antipode with the kidnapper’s controlling actions. In North by Northwest, hysterical Cary
Grant moves from scene to scene where he is mistaken by KGB agents for a non-existent
US agent, Kaplan; in the exposition, he is guided by the CIA to play along with their double
agent to trap the Russian spies. Each episode comes with its own oscillation between
interior and exterior that creates the story’s depth. The 1945 film, Dead of Night, offers an
especially intensive example of square-wave structure. An architect, Walter Craig, has a
déjà vu experience when he visits a prospective client during a house-party weekend.
Guests sympathize with his initial bewilderment and each offers a confirming example
from their own experience. The final tale involves a schizophrenic ventriloquist whose
dummy has taken control.

3. discourse as the fuel of forward motion

The square-wave
alternation between subjective and objective states, analogous to the film’s creation of
illusory motion, is sustained by the ‘fiction’ of discursive form which we must embrace in
order to remain within the gravitational pull of the event before us. Lacan’s four discourses
(university, hysteric, master-servant, psychoanalysis) drive events within their own self-evident
settings. For our purposes, the discourses of the university and the hysteric are especially
useful. The university discourse is particularly instrumental in forming the ‘ideological’
narratives present in daily life and politics. The hysteric’s discourse is almost always the
dominant form used by works of art, because its division of space follows a ‘theatrical’
model, with the ‘subject on stage’ in the place of the Agent. A square wave sequence could
be defined typographically: \ agent / production \ agent / production / … or \ A / (p) \ A / (p)
\ A / … or even \ / \ / \ / … where each upwardly opening space of the agent corresponds to a
stage structured like an auditorium. In between are the φ-related segments of ‘exposition’
that orient the audience to the context and direction of the action in narrative examples. As
a part of the work of art, this segment materializes the audience’s experience and puts it
‘on stage’ although the scenery portrays a ‘backstage’. In North by Northwest, Roger
Thornhill ($) is running from the police, S1, and he hides in / (p) \ locations, where the
other two elements of the Lacanian discourse model combine in a cyclic way. In the
discourse of the hysteric, these are knowledge (S2) and truth (desire). Thornhill is,
characteristically, caught in the ‘headlights’ of false accusation. At the Plaza Hotel bar,
KGB agents ‘spot him’ by paging Kaplan, the non-existent American agent. In the United
Nations scene, the murdered UN official falls into his arms and photographers ‘catch him
red-handed’. When he tries to catch a train to Chicago, the station is filled with
policemen’s eyes looking for the suspect. The (p) position is dominated by the CIA/FBI, S1,
who generate the cyclical exchange between knowledge and desire. They ‘pull the strings
in the background’ so that while Thornhill is running from the police, he must
simultaneously pursue the KGB, via the double agent, Eve Kendall (the theme of doubles
dominates here as it does in Hitchcock’s earlier film, Shadow of a Doubt). The cycle is the
gapped circle, the graphic representing the logic of Lacanian desire: a linear project turned
on itself so that it continually returns to the same impasse, the gap occupied by the object-
cause of desire, S2→|a|→S2. The 1945 British masterpiece, Dead of Night, uses hysterical
discourse to structure the sequences of its anthology of ‘Gingrich tales’ that support the
architect’s déjà vu claims. Craig’s dream functions as the S2 element. His memory is both
‘super’ and ‘failed’, in that it predicts the future by remembering the past, but cannot recall
the past clearly. This super-failed memory returns to the theme of darkness (the crisis is
instigated by a power failure), blindness (the psychiatrist breaks his glasses), and voice
(the theme of the voice is the central theme of the final story, told by the psychiatrist). The
hysterical sequence of \ $ /S1\ $ /S1… with the cycle of S2→|a|→S2 is useful everywhere the
protagonist must simultaneously run from one thing and pursue another, as in The Wizard
of Oz.

4. temps moderne

So-called ‘theories of everything’ should connect at points where
connections provide insights. In this case of discourse + square wave, temps, ‘weather’ or
‘time’ in French (the Italian ‘tempo’ has the same ambiguity) captures correctly the
contrast between something that is happening in a ‘now’ mode and what seems to
endure at a slower pace. Although speed is the hallmark of the modern in the propaganda
of the Futurists, the modern could more correctly be seen in terms of the hysterical
opposition of subject-as-agent and master signifier as the ‘slower’ instigator of desire
portrayed through projects of knowledge. Fast/slow is more the case than either element,
but the combination is never blurred, only alternated.

acousmatics, song-lines, raising the dead

METALEPSIS


SEMINARIANS, 9-ERS, AUXILIARY MEMBERS, and OCCASIONAL GUESTS
[image: Troy Bennel, Noongar Song Lines, Part 1, acrylic and sand on canvas]

bruce chatwin revisited

Those who have forgotten the impetuous, manic traveller Bruce Chatwin, whose travels
through Patagonia, Eastern Europe, Morocco and Northern Africa, Australia, etc.
sometimes resulted in semi-autobiographical novels, sometimes photo albums, will want
to at least remember the principle central to all architects and geographers that he
manifested in Songlines. This is about the idea of the cosmogram, an ideal geometry that
activates, charges, and/or otherwise vivifies (cf. the old animus and anima idea of the
Stoics) spaces and times. The aboriginal Australians believed that the landscape left on its
own would die. It had to be lifted up from the dead by a walk guided by a song that was
partly learned, partly invented as the walker went along. The song called nature into being,
it named its parts, turned everyday words into love songs, as the lyrics of La vie en rose
remind us. Vergil's Georgics has much of the same flavor. Nature is fine but it needs
something, the human touch. Nowadays we see that the human touch is more likely to
spell disaster, degradation, the end of the world. It is hard to recover Vergil's idea of care,
an investment in what is found to occur, to turn it into more of "what it wants to be." Louis
Kahn authored the famous line about letting a brick "want to be a brick," vastly overquoted
and misunderstood. This is not a naive crude materialism, but rather the perception of a
small remainder or margin, a place for human attention, or, dare we say, love? What
creates an affinity between people can also be an attraction to the world, where the world
waits at a margin, the edge of the stage, so to speak, for our attentions. Whether it's bee-
keeping or picking up trash, this intervention can amount to more than a good deed. In fact,
it was the original motive of architecture, Vitruvius's "opportunism" to make a great thing
out of a good thing. Fire? Yes, let's create conviviality, cooking, and culture. Overhanging
branches? Yes, let's make them a bit better so that they fend off fierce storms. But, wait,
there's more. Nature is affordance. Lacan tells us that it is also interpellation, a message
from the thunder (Vico) that horrifies us and throws us into doubt. We are not simply in the
garden, gardeners. We are thrown into a wasteland. The opportunities we find sometimes
have to be wrestled out, forced out, worked out by ingenuity. In the process, the mandate
of nature/God has made us into hysterics. The "body" is the body of nature we load,
through cathexis, investment. This body of nature is interpellated by our idea of the divine
(spooky idea coming up) and its voids are sites of exceptions — new theory of sacred
space! one to "beat the band" of phenomenology — where miracles, epiphanies, love just
happens. Who are we to … well, who are WE? Our subjectivity is involved in this brand of
sacred-space making, because cathexis is all about ideology but these voids are all about
what comes next after ideology. From Mladen Dolar to Žižek, we know that the "end of
analysis" has two things to it: (1) we see that Truth is in the first and fourth, or last position;
this means that we find at the end what we put there at the beginning; this has to do with
the subject-in-pieces, the corps-morcélé retroactively established by the damn image in
the mirror, the ideal ego. The ideal ego is imaginary (hence the mirror image plays a critical
role), the ego-ideal is symbolic: it is what amounts to "keeping up appearances." The ego-
ideal contrasts with the superego, which enjoins us to "Enjoy!" in an obscene manner. It is
the "dirty little fantasy" that is the counterpart of the official view that "nothing happened."
Humphrey Bogart and Ingrid Bergman in Casablanca (1942) There is an interesting essay by
Žižek that uses the film Casablanca as an example. Near the end of the film, Ilsa comes to
Rick to get the essential letters of transit she and her husband Victor need to escape
Morocco. There is a portion of the scene, in the bedroom, that the audience doesn't see.
Instead, the camera shows the view of a lighthouse tower (don't go there, amateur
Freudians!!!) and afterward Ilsa comes out with the letters while Rick smokes a cigarette,
Hollywood code for "they did it." But, there are equally strong signals that they didn't, that
the ego-ideal injunction was kept. RICK: You said you knew about Ilsa and me? VICTOR:
Yes. RICK: You didn't know she was at my place last night when you were... she came there
for the letters of transit. Isn't that true, Ilsa? ILSA: Yes. RICK: She tried everything to get
them and nothing worked. She did her best to convince me that she was still in love with
me. That was all over long ago; for your sake she pretended it wasn't and I let her pretend.
VICTOR: I understand. Well, no one else understands. The critic Richard Maltby gives a
partial answer: both interpretations are correct, and the film allows us to see the reality,
the Real of both. They both did it and didn't do it. The absence of the bedroom scene
constitutes one of those holes in the inventory matrix of cathexis, a site of exception. The
exception is the co-presence of both aspects of the fantasy, the superego dirty fantasy and
the ego-ideal cleaned up version. Neither is an exception, the combination is the
exception. Places where two opposite things can be true at the same time, and now we are
into Alireza Moharar's insistence that we think about time, and the times inside of times
and times outside of time. These allow for +x/-x convergences, coincidentia oppositorum,
but unlike phenomenology, we must be explicit about these and not simply quote Jung. It's
often necessary to substitute the goal of dissensus for consensus. In consensus we must
all agree about which is the correct reading. With dissensus, we agree that we can't agree.
This is the condition necessary for places where contradictions abound: memory places,
sacred sites, terrain vague, disaster sites, ruins, haunted landscapes — in short, the
uncanny. NOW REMEMBER, CATS AND KITTENS, the uncanny and the canny convert. This
is what makes the uncanny really really uncanny! The home and the unhomely — "that
which should not have been exposed/let out has been exposed/let out." In other words, the
sexual space that Actæon stumbles across during his stroll through the forest, certainly
one kind of hole in our inventory grid of cathexis! When you see sites of exception do not be
picky. They are here there everywhere. They can appear and vanish. They are timeless but
also time dependent (Dirac function). They can be parts of stories where the narrator does
not know what the reader can imagine (Raymond Carver's "Cathedral"). There is not at
present any critical methodology adequate to account for them, THIS IS YOUR JOB!
summer

At the end of school semesters, terms, whatever, we all have travel plans or work
or play plans of some sort. The Newslitter will shift to an irregular publication schedule.
Basically, it's the schedule of "whenever!" There are two events that some members will
want to look out for, however. Summer Vico Institute. Some members have requested a few
days of Vico study, a bit like the "retreat" held in the fall of 2013 for the combined
Alexandria/PennState members. We will try to accommodate all who may wish to come.
The general limit on our sunroom is 14, and again we will try to get Mary McLaughlin to cook
meals for us. The general date will be around June 21, Vico's birthday, midsummer's day
(John the Baptist = Giambattista = midsummer, the saint's day, get it?). We will talk about
the humors in relation to Vico's theory, go over the role of the graphic materials at the
beginning of The New Science, and review the basics of the theory. My book, Thought and
Place: The Architecture of the Imagination is available on-line, and you should always refer
to the Bergin and Fisch translation of The New Science, not that other schlock edition, whose
translator seems to know nothing about Vico's concern for the union of imagination and
memory. The shorter works, The Autobiography of Giambattista Vico, The Ancient Wisdom
of the Italians, The Study Methods of Our Times, and the Inaugural Lectures are important,
in that order (in my view). Don Verene's book, Vico’s Science of Imagination (Cornell, 1992)
is a fine book. Verene was my dissertation advisor, and the author of other good books on
Vico. My view is a bit skew of the standard. It comes from conversations with Ernesto
Grassi, Eugenio Battisti, Ivan Illich, Giuseppe Mazzotta, and Giorgio Tagliacozzo. My book
got some good reviews, even from David Leatherbarrow. Not to toot my own horn, but I just
want to assure you that sometimes little books can be more important than Big Books, if
you know how to read them. Before you "go there," let me advise you that almost
everything Alberto Pérez-Gómez says about Vico in his book, Built upon Love, is
dangerously misleading or flatly wrong. Sorry, Alberto, you need to take a hit on this one,
from someone who actually read The New Science, something you weren't planning most
readers to actually do. Bitter tirade aside, we will try to reach personalized understandings
of The New Science (whose Italian title could easily suggest "the science of nines") directed
towards the "spatial studies" of architecture and geography. The Ivan Illich Lovefest. Some
local friends, including Sajay Samuel, have long wanted to convene discussions from
topics introduced by our one-time guru, Ivan Illich. Anyone not knowing his work will be
amazed at the span and depth of his interests. Illich came to Penn State and Penn for over
four years back in the 1990s. He was already suffering from a cancerous tumor on his jaw,
which he treated for pain only, saying that any doctor would have killed him years ago. He
was right … he lived long enough to convene many interesting discussion groups,
seminars, and large events — he liked to pay for everything and often sent people plane
tickets to come to events to make sure they would arrive. We don't know the topic exactly.
Gender? Love? Politics? Illich's themes were broad, but of course his main theme was
conviviality: what it means to talk together, food and wine in close proximity, with friends
bound together by love rather than professional interests. All are welcome to come; locals
will take up about 8 of the "slots," and we have room for about 16 in total. This will probably
take place in August. Impromptu visits. After June 15 (London/UK trip) Elaine and I will be
around the house mostly and visitors are welcome. We can accommodate about six
people at a time in the house but have friends who also will put you up (think of the
Kleindorfers' farm, for example). I expect some people to come up just to fantasize about horses. MISS YOU ALL!


Some Ontological Remarks about Music Composition Processes

Horacio Vaggione

Music composition processes can be envisioned as complex systems involving a plurality of operating levels.
Abstractions of musical ideas are manifested in myriad ways and degrees, one of which is of course their suitability for implementation as algorithms, enabling musicians to explore possibilities that would otherwise lie out of reach. However, the role of algorithms (finite computable functions, in Turing's sense) is not to be simply reified in a composition. Composers use computers not only as "number-crunching" devices, but also as interactive partners performing operations whose output depends on actual performance. Composers are concerned with the creation of musical situations emerging concretely out of a critical interaction with their materials, including their algorithms. This task cannot be exhausted by a linear (a priori, non-interactive) problem-solving approach. Interaction here matches an important feature of musical composition processes, giving room for the emergence of irreducible situations through non-linear interaction.

Irreducibility is perhaps a key word in this context, as we are dealing with music's categories and ends. Music is not dependent on logical constructs unverified by physical experience. Composers, especially those using computers, have learned (sometimes painfully) that the formal rigor of a generative function does not by itself guarantee the musical coherence of a result. Music cannot be confused with (or reduced to) a formalized discipline: even if music actually uses knowledge and tools coming from formalized disciplines, formalization does not play a foundational role in regard to musical processes.

I will refer in this article to a "realist" ontological principle relying on "commitment to action" which can shed light on the nature of musical compositional processes in regard to formal constructivism. Additionally, musical processes, at least from the composer's point of view, are not situations "out there" waiting to be discovered: they are rather to be composed (since they did not exist anywhere before being composed), and hence they cannot properly be considered modeling activities, even if they use, and deeply absorb, models, knowledge, and tools coming from scientific domains (acoustic and psychoacoustic modeling, for example). In fact, music transforms this knowledge and these tools into its own ontological concern: to create specific musical situations (musical "states of affairs"). To this end, a palette of diverse compositional instances is needed, including strategies for controlling and qualifying results and choices, according to a given musical project. These compositional instances, to reiterate, are not envisaged here in the frame of the traditional approach to algorithmic (automatic) composition: they are instead seen in the light of the ongoing paradigm shift from algorithmics to interaction (Wegner 1997; Bello 1997), where the general-purpose computer is regarded as one component of complex systems (Winograd 1979), and where the composer, being another component of these complex systems, is embedded in a network within which he or she can act, design, and experience concrete tools and (meaningful) musical situations.

It is under this perspective, I believe, that the formal status of musical processes can be approached, in a certain way "revisited," as I will try to do in this article, focusing on ontological questions. Computer music practice (computer-generated and computer-assisted composition) is of course the underlying frame of the discussion offered here, because these reflections have arisen from the author's daily exposure, as a composer, to a situation in which algorithms, choices, and "musical theses" are themselves confronted within an "action/perception feedback loop" which seems to constitute definitively the pertinent instance of validation of musical processes.

[Computer Music Journal, 25:1, pp. 54-61, Spring 2001. © 2001 Massachusetts Institute of Technology.]

Approaching Music's Ontology

Schoenberg's Criticism of "External Calculus"

Schoenberg states in his Style and Idea that "a purely external calculus system calls for a formal construction whose primitive nature is suitable only to primitive ideas" (Schoenberg 1951). This remark points, in the particular language of its author, to the mismatches that may be caused by literal application of operations which may be successfully applied in other fields, but which are not guaranteed to function pertinently in a musical context, as long as they are not absorbed and transformed into elements proper to "music itself."

The Difficulty of Defining "Music Itself"

However, it can be argued here that the very idea of "music itself" encounters a major difficulty: nobody can say what music is, other than by means of a normative proposition, because "music itself" is in fact a non-demonstrable thing, and its practice is neither arbitrary nor based on physical or metaphysical foundations:

It is not because we know, in one manner or another (and without being able to say how), what music is that we also speak of atonal or concrete music as music. We use the word "music" according to certain rules, and these are neither very precise nor based on the "nature of things," even if they cannot be considered as arbitrary. (Bouveresse 1971, p. 318)
Certainly, we know that there is no necessity to define the concept of music completely in order to create, play, or listen to music. Furthermore, we know that the very existence of music as a shared practice would in fact be impossible if one should previously have to define the concept of music completely. This being the case, an ontology of music should refer to music's status cautiously, taking care not to fall into reductionist traps.

"Universals" Are Not Needed

On one side, there is no necessity to affirm the existence of "universals" standing above musical practices, whatever these universals might be: a Platonic Idea, the dogmatics of proportion, a normative foundation of harmony, and so on. Of course, there are primitive principles underlying musical practices, but these should not be qualified as foundations of "music itself," for this would negate the possibility of developing other musical practices related to different assumptions. Schoenberg's famous statement about the "liberation of the dissonance" can be seen in this light: "the expressions 'consonance' and 'dissonance', if referred to an antithesis, are erroneous; it depends only on the capacity of an analytic hearing to become familiarized with the higher harmonics" (Schoenberg 1951, p. 16). Evidently, there are many musical practices (including functional tonality) that are based precisely on the antithesis that Schoenberg does not accept, as he is looking here for another reference concerning musical relationships. But this does not invalidate his statement about analytic hearing: on the contrary, his statement affirms the possibility of "music" beyond the musical world based on a given functionality (tonality, in this case) by stressing the fact that there may be other equally conceivable musical assumptions and constraints to which the perceptions of a given musical world are to be related.

Music Reveals Its Own "Creation Principle"

On the
other side, there is an ultra-relativist thesis affirming that "music is everything we call music"; but to follow this line would mean falling into another reductionist trap, analogous to the first one. The example just referred to, showing the relationship between hearing (lower or higher harmonics) and specific musical assumptions and constraints (specific kinds of relationships and functionalities, such as consonance and dissonance), tells us why it is so. We can understand, then, that in spite of many attempts at reduction, music-making remains an activity revealing its own "creation principle" where, to paraphrase Finsler (1996), "consistency implies existence," taking the word "existence" to mean the presence of a given state of affairs. We continue to use the word "music" according to certain rules, which are "neither very precise nor based on the nature of things" (in the words of Bouveresse, quoted above), to refer to musical practices that cannot be considered arbitrary. We do this while focusing on certain operations, categories, facts, and ends that we determine to be specific to music, or at least to musical "possible worlds." Of course, this use of the word "music" does not bring up the ultimate argument about the nature of music, but only refers to its existence in ontological terms, referring to a given state of affairs. A complementary "anthropo-logistic" argument may also be considered here, as musical practices exist within a given "style of life," or "a culture of one period," as Wittgenstein (1953) would say. On another account, Goodman's nominalism (Goodman 1976) may be evoked as well. But I will not discuss these matters further, as the aim of this article is not to engage in a discussion about current philosophical approaches: the aforementioned "creation principle," I think, may be sufficient to assess music "as is," without falling into reductionism.

Formalization Versus Commitment to Action: A Realist Ontology

As stated earlier, music uses knowledge
from formal disciplines and creates a myriad of abstractions (operations encapsulating operations, etc.). However, we should assume that what falls under the heading of formal abstraction becomes, in music, part of the reality in which music develops its productive categories. A musical process includes a plurality of layers of operations of diverse kinds: it can certainly use formal tools as generative and transformative devices; however, other instances are needed, involving concrete actions and perceptions, in order to qualify results and choices according to a given musical project. Here, formalization is not foundational, but operational, local, and tactical (see Sinaceur 1991 and Granger 1994). A (musical) system of symbols can be formally structured (i.e., built as a system including functions manifesting diverse degrees of abstraction) without being completely formalized, the latter case arising, strictly speaking, when all non-defined symbols present in the system are properly enumerated (or, if preferred, when nothing is hidden). As Wegner noted with respect to other domains, the key argument against complete formalization of such things as musical composition processes is "the inherent trade off between logical completeness and commitment to action," because "committed choice to the course of action is inherently incomplete" (Wegner 1997). We can recall here Finsler's ideas, expressed in the 1920s and cited by Wegner as pioneering a "realist ontology," where a "creation principle" is posited: "concepts exist independently of formalisms in which they are expressed" (Finsler 1996). Finsler "went beyond Hilbert's formalism in applying the principle 'consistency implies existence', accepting the existence of concepts independently of whether they are formalized" (Wegner and Goldin 1999). We can easily paraphrase Finsler, substituting "musical ideas" for "concepts," to reinforce a "realist ontology" affirming that musical ideas exist independently of their possible formalization or even "constructability" (since they can emerge from a plurality of interactive factors).
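Wegner's trade-off between logical completeness and committed choice can be made concrete with a small sketch (an illustration only, with invented names; the article itself proposes no code). A closed algorithm computes its output entirely from its initial input, whereas an interactive variant commits to choices supplied during the run, so its output is not derivable from that input alone:

```python
# A closed (non-interactive) generator: output is a pure function of its input.
def closed_melody(seed: int, length: int) -> list[int]:
    pitches = []
    state = seed
    for _ in range(length):
        state = (state * 17 + 7) % 12   # deterministic recurrence
        pitches.append(60 + state)      # MIDI-style pitch numbers
    return pitches

# An interactive generator: each step commits to a choice made *during* the
# run (here modeled by a callback standing in for a performer or composer).
def interactive_melody(seed: int, length: int, choose) -> list[int]:
    pitches = []
    state = seed
    for step in range(length):
        candidates = [60 + (state + i) % 12 for i in (0, 3, 7)]
        picked = choose(step, candidates)   # committed choice: not derivable
        pitches.append(picked)              # from the initial input alone
        state = picked
    return pitches

fixed = closed_melody(5, 4)
# A scripted "performer" that always takes the last candidate:
performed = interactive_melody(5, 4, lambda step, cs: cs[-1])
```

Replacing the scripted lambda with a function that waits for real performer input turns the second generator into the kind of "committed choice" process Wegner describes: logically incomplete, because the choices do not exist before the run.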
Algorithms, Interaction, and Complex Systems

Evidently, using computers (the most general symbolic processors that have ever existed) drives music activity to an expansion of its formal categories. Computer algorithms (whatever the paradigm on which they are based) can be considered as formal constructs where reasoning is embodied in machines. Computer algorithms differ, however, from their pure logical (disembodied) ancestors by an important feature: they are dynamically oriented, involving networking with other machines as well as human interaction. Computer algorithms are embedded in complex (and heterogeneous) systems, within which they are used as processing tools. As Winograd pointed out 20 years ago:

[C]omputers are not primarily used for solving well-structured problems, but instead are components in complex systems.... Programming in the future will depend more and more on specifying behavior. The systems we build will carry out real-time interactions with users, other computers, and physical systems (e.g. for process control). In understanding the interaction among independent components, we will be concerned with detailed aspects of their temporal behavior. The machine must be thought of as a mechanism with which we interact, not a mathematical abstraction which can be fully characterized in terms of its results. (Winograd 1979)

Computer music can be envisioned as one such complex system in which the processing power of computers deals with a variety of concrete actions involving multiple perspectives, in terms of time scales and levels of representation. This situation leads us to rethink basic issues related to composer-machine interaction, as Bello remarks:

Traditional approaches toward composer-machine interaction have been fundamentally based on the machine itself, with perhaps very little consideration placed on our external experiences in the world, particularly our interactive experiences. Many of the traditional approaches appeared to have been concentrating on a micro-world perspective, whereby well defined problems in composition and sound design have been explored. Such an approach ignores, or at least fails to acknowledge, the existence of an external interactive environment in which the composer is definitely a part. (Bello 1997, p. 30)

Constraints and the Composer's Posited Relationships

Composers build musical situations by creating constraints that act as
"reflecting walls" inside which a tissue of specific relationships is spun (Vaggione 1997). I use the expression "constraint" in the sense of its etymology: limit, condition, force, and, by extension, definition of the degrees of freedom assumed by an actor in a given situation within self-imposed boundaries. In this broader sense, the composer's constraints are specific assumptions about musical relationships: multi-level assumptions that can be in some cases translated into finite computable functions (algorithms), and in other cases satisfied only by means of the composer's interaction (performance). Constraints are embedded at every level in the "world" posited in the musical work. We can also say, particularly à propos in this case, that a musical work presents, as Adorno has noted, a "thesis": a musical thesis which encompasses all its dimensions, even the most elementary materials: "Everything that might appear in music as being immediate and natural ... is, in reality, the result of a 'thesis'; the isolated sound cannot escape this rule" (Adorno 1963, p. 319).

Can we say, in this case, that this thesis (posited world) and constraints (embedded specific assumptions) are specifications? Surely, but we must consider carefully the kinds of things (the classes) that are specified: local computable functions are on one side, with the classical condition of consistency satisfying a specification. On the other side, we find global instances (actors) controlling the multiplicity of local computable functions through interaction, with the non-classical condition of consistency as a state of affairs, and the satisfaction of a specification as something that is not formally granted, but must be reached through action: consistency "performed" by the composer. So musical thesis, constraints, and specifications (referring to the same "reflecting walls" metaphor from different perspectives) are not categories encapsulating linearities, but vectors of posited relationships that may or may not become satisfied, depending on a certain way of interactively matching inputs and outputs. The role of the composer here is not one of setting a mechanism and watching it run, but one of setting the conditions that will allow him or her to perform musical actions.

Being Cautious with "Rules"

Debussy's saying, "The work
makes its own rules," summarizes well the situation of the composer's constraints alluded to above. However, it seems necessary to be cautious when using the word "rule" in an artistic domain:

To be considered rightly as such, a rule must necessarily be followed many times. A private rule is already in a certain sense a contradictio in adjecto. (Bouveresse 1976, p. 429)

Computer algorithms (which compute outputs non-interactively from their inputs) are generally quite consistent in regard to rules, in the classical (Hilbertian, so to speak) sense, in any case to an extent that musical works never show. Concerning the latter, we can recall Donald Byrd's statements on common music notation:

The point is that the supposed rules of common music notation are not independent; they interact, and when the situation makes them interact strongly enough, something has to give way. It is tempting to assume that the rules of such an elaborate and successful system as common music notation must be self-consistent. A problem with this idea is that so many of the "rules" are, necessarily, very nebulous. Every book on common music notation is full of vague statements illustrated by examples that often fail to make the rule clear, but if you try to make every rule as precise as possible, what you get is certainly not self-consistent. (Byrd 1994, p. 17)

Someone can perhaps argue that the above description applies to a system of notation, and not to musical processes themselves. This criticism can also point to the existence of non-notateable music processes (tape music, improvisation). Facing these arguments, I shall make the following remarks: (1) I consider that the intelligibility of music is always revealed in the hearing, and not in the score; and (2) if music were a "self-consistent formal system" in a Hilbertian sense, music notation would reflect this status, as, for example, Hilbertian notations (of logical reasoning systems) do. Of course, another matter is considering musical notation from the point of view of Finsler's realist ontology, as referred to above, where consistency implies existence. Byrd acknowledges the necessary vagueness or nebulosity of music notation "rules," as they articulate a complex system where heterogeneous referents (some discrete, some analogue) are strongly interacting. Even an operation which seems to be mechanical, such as orchestral part extraction, is difficult to realize with an algorithm of average complexity, owing to the superposition of information, some precisely quantified, some only globally qualified, some dependent on the simple graphical space of the page, some inscribed in a much more precise topological space. Only the musician who reads the score knows, for example, when it is time to turn the page, a function of the context conditioning his or her actions. This point is not irrelevant: it shows that music is constituted of actions and perceptions, and that these actions and perceptions are what is actually transmitted in the score and in the playing.

A Plurality of Representational Systems

There is no musical composition process (instrumental, electroacoustic, or
otherwise) without representational systems at work: a plurality of representational systems, depending on which level or time scale we are operating at. The problem that music composition gives rise to is the articulation of these representational systems, because the outputs of music's processes are interactively related to their (multi-level) inputs. A "note," for example (especially if we consider it from the perspective of an interaction between macro-time and micro-time scales allowed by computer means), can be seen as a chunk of multi-layered events covering many simultaneous temporal levels, each one having its own morphological features that can be captured, composed, and described using adequate representational systems. We must take into account, however, the fact that some types of representation that are valid on one level cannot always retain their pertinence when transposed to another level (see Vaggione 1998 and Budon 2000). Composing music (creating musical morphologies) includes defining, articulating, and bringing into interaction these varieties of levels.

"Of What Use Is It To Know Before..."

Of course, every musical process contains "primitives"
which derive from a specific common practice. One can say that "constraints" become "rules" if they exceed their use within a particular musical work to become part of a common practice. (In this sense I use the distinction, in order to avoid reference to "private rules," as discussed above.) The rules we learn at the conservatory are the result of a long historical effort of codification of evolving practices (each codification representing a vertical cut in this evolving body, freezing a given state in order to clarify its main characteristics). These rules (at least a good number of them) are pedagogical in nature. Their purpose lies in describing a certain musical practice so that we may imitate it to become "cultivated" musicians. As such, they must be collectively understood and validated. Often, the analyst-musicologist follows (albeit unconsciously) this approach, which lies at the root of much confusion concerning the role of musical analysis (to find the rules of a given work). Debussy's expression refers to this and was directed precisely against this amalgam, which reduces music to rules, thus ignoring the ontological status (the "creation principle") of a work. With regard to artistic creation, an "insidious question," as Bouveresse would put it, comes to mind: "Of what use is it to know before, in whatever sense of the expression 'to know', what we will do later in a concrete case?" (Bouveresse 1971, p. 235). This is the kind of question often posed (to themselves) by young students who desire to become composers (this has been my personal case), as they struggle to gain musical craftsmanship without yet realizing its inherent heterogeneity, i.e., the fact that music's "primitives" can always be modified, and that new significations may emerge during a compositional process, changing and "enriching" the sense of any chunk of musical knowledge.

Beyond an Exercise in Style

Here lies what seems to be one of the
sources of confusion regarding the nature of music composition processes: on the one hand, we must make as careful a distinction as possible between the collective rules and the composer's own constraints; on the other, this distinction seems irrelevant because, according to the "creation principle," the terms can always be modified. That is to say, any primitive (coming from a common practice or postulated ad hoc) is to be considered as a part of what is to be composed, in order to produce a musical work affirming itself as a singularity, beyond an exercise in style. Adorno was of course conscious of this dialectic: his statement about sound material considered not as something "given" but as a "result" of a musical thesis clearly points to this fact.

Action and Perception

I must recall that I am considering an ontology of music where action and perception are principal components. In any case, I assume that such things as thesis, constraints, choices, and so on would not be musically pertinent if they were devoid of implications touching directly on questions of action and perception, i.e., revealing a commitment to action that relies on perception as a controlling instance, hence as an ontological feature of the interactive situation itself.

So thesis and constraints are revealed through perception. They are to be heard, first of all, by the composer, who is also a listener. The composer as a listener is the correlate of the composer as a producer: in order to produce music, an act of hearing is necessary, whether it be the "inner hearing" (the silent writing situation) of pure instrumental music composition, or the "concrete hearing" of electroacoustic music composition. These situations involve variants (there are many others) of an "action/perception feedback loop" which can be defined as an instance of validation proper to musical processes.

Multi-scale Processes Validated by Perception

We must
now consider a new situation arising from the use of computers for building musical processes. By using an increasingly sophisticated palette of signal processing tools, composers are now intervening not only in the macro-time domain (which can be defined as the time domain standing above the level of the "note"), but also in the micro-time domain (which can be defined as the time domain standing within the "note") (Vaggione 1998). The micro-time domain is manifest at levels where the duration of events is on the order of milliseconds (Roads forthcoming). Operations realized at some of these levels may of course not be perceived when working directly: in order to perceive (and therefore validate) the musical results, the composer should temporarily leave micro-time, "taking the elevator" to macro-time. As a painter who works directly on a canvas must step back some distance to perceive the result of his or her action, validating it in a variety of spatial perspectives, so must the composer dealing with different time scales. This being so, a new category must be added to the action/perception feedback loop: a kind of "shifting hearing" allowing the results of operations to be checked at many different time scales. Some of these time scales are not audible directly and need to be validated perceptually by their effects over other (higher) time scales.

Any computer program dealing with audio data includes some kind of zooming facility. This is not a trivial feature, though. Since the different time levels present in a musical situation strongly interact, morphologies can circulate from one level to another. However, such circulation cannot take place, in many cases, except under non-linear conditions: as noted, some types of representation that are valid on one level cannot always retain their pertinence when transposed to another level. Thus, multi-level operations do not exclude fractures, distortions, and mismatches between the levels. To face these mismatches, a multi-syntactical strategy is "composed." Object-oriented programming strategies, as I have noted elsewhere, can help to encapsulate diverse syntactical layers into a multi-level entity (an object) able to integrate a given compositional network (Vaggione 1998). But this kind of situation needs to be constantly checked from a musical point of view. The action/perception feedback loop is here the pertinent instance where this situation can be musically controlled and validated.

Conclusion

What a composer wants comes from
the "singularity" of his or her musical project, from the composer's manner of performing a critical act with relationships. Hence, composers can, at will, reduce or enlarge their operational categories or their field of control, producing and applying constraints as well as making the numerous choices necessary during the compositional process. In this article, I have stressed the fact that a musical process involves a plurality of layers of operations of diverse kinds. Musical processes can be produced using formal tools (algorithms) as generative and transformative devices, yet other compositional instances call for strategies relying on interaction in order to control and qualify results and choices. Using computers drives musical activity to an expansion of its formal categories.
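The "shifting hearing" described earlier, validating micro-time operations by their effects at higher time scales, can be illustrated with a minimal sketch (invented names, Python for illustration; this is not the author's software). An edit of a few milliseconds inside the "note" is a micro-time detail, but it surfaces at a coarser time scale as a change in an amplitude envelope computed over larger windows:

```python
import math

def rms_envelope(signal, window):
    """Amplitude envelope: one RMS value per analysis window (a coarser time scale)."""
    return [
        math.sqrt(sum(x * x for x in signal[i:i + window]) / window)
        for i in range(0, len(signal) - window + 1, window)
    ]

# A 1 kHz test tone, 2048 samples at a nominal 44.1 kHz rate.
sr = 44100
tone = [math.sin(2 * math.pi * 1000 * n / sr) for n in range(2048)]

# Micro-time operation: attenuate about 1.5 msec (64 samples) inside the "note".
edited = list(tone)
for n in range(512, 576):
    edited[n] *= 0.1

# At the micro scale the edit is a local detail; at the macro scale
# (512-sample windows) it appears as a dent in the envelope.
before = rms_envelope(tone, 512)
after = rms_envelope(edited, 512)
changed = [i for i, (b, a) in enumerate(zip(before, after)) if abs(b - a) > 1e-6]
```

Only the window containing the edit changes; inspecting `before` against `after` is a crude analogue of "taking the elevator" from micro-time to macro-time to validate the result.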
These categories are dynamic, precisely owing to the use of computers: vectorized, presupposing networking and interaction, including hidden terms, without which music creation would be reduced to the exploitation of a linear mechanism. There is no musical process without representational systems at work: a plurality of representational systems, depending on which level or time scale we are operating at. Algorithmic representations cover a substantial part of this plurality and are certainly pertinent, as they can match at least some of the assumptions underlying a given music production system, especially when including the condition of interaction, revealing its many simultaneous levels of articulation as well as its direct anchoring in perception. This leads us to valorize what is perhaps the most important issue for an ontology of music: the fact that situations organized around the production of music would not be pertinent if they were devoid of implications touching directly on questions of action and perception. So the approach presented here presupposes a basic assumption, namely, that the meaning of any compositional technique, or any chunk of musical knowledge, arises from its function in support of a specific musical action, which in turn has a strong bearing on the question of how this action is perceived. Action and perception lie at the heart of musical processes, as these musical processes are created by successive operations of concretization having as a tuning tool, as a principle of reality, an action/perception feedback loop.

Composing Musical Spaces By Means of Decorrelation of Audio Signals

Horacio Vaggione
Université de Paris VIII, Centre de Recherche Informatique et Création Musicale (CICM)
[Link]@[Link]

Since the seminal work of John Chowning on the simulation of moving sound sources (Chowning 1971), there has been in computer music research a tremendous amount of work concerning perceptual sound space definition.
Chowning's model implemented some complementary cues regarding localization,
distance, and movement, among which we can recall (1) an interaction between spectral
brightness and loudness; (2) a Doppler shift, well known technique for simulating speed
and movement in terms of time-varying frequency rates; and (3) a control over the Azimut,
the horizontal plane regarding physical separation between channels. Chowning himself
referred at the last year's DAFx Conference (Chowning 2000) to the first aspect (the second
being already well understood). I will deal myself this year with another aspect apparently
related to the third point, concerning inter-channel temporal decorrelation of audio
signals. I do care specially about this subject because it constitutes for me, as a
composer, an important aspect of my own concern with layered musical activity arranged
in a variety of space perspectives (Vaggione 1989, 1998; Budon 2000). This aspect
underlies a detailed articulation of sound objects and textures, which can be enhanced
through techniques controlling degrees of temporal decorrelation of waveforms in a multi-
channel setting. Moreover, decorrelation is actually embedded, in one way or another, in
many sound spatialization systems (see for example Lindemann 1986). Hence one of the
goals of this presentation is to uncover the role of decorrelation, besides showing my
personal use in electroacoustic music composition. Decorrelation can be realized by
direct manipulation of waveforms with the help of any sound editor program featuring an
extended zooming facility, or more algorithmically by means of convolution and FIR or IIR
filters. Phase specification can be used for the synthesis of decorrelated signals. Offsets
between channels can be straight or interpolated. They can also be controlled dynamically
by means of functions stored in an array of look-up tables (Vaggione 1984). To introduce
the subject I will like to present an example taken from one of my electroacoustic pieces:
Agon, for multi-channel tape (Vaggione 2000). I hope it will illustrate the features of the
dynamic sound images which are postulated here: a multiplicity of layers belonging to
different time scales, merged in a kind of virtual soundscape (Sound example 1).

Figure 1: Final gesture of Agon.

Addendum of the COST G-6 Conference on Digital Audio Effects (DAFX-01), Limerick, Ireland, December 6-8, 2001

Figure 1 shows the waveform display of the final gesture of the piece, where an object (composed of three
waveform display of the final gesture of the piece, where an object (composed of three
different figures) is replicated in a second channel and decorrelated with a 31 msec offset.
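The operation described here, replicating a mono object into a second channel with a fixed millisecond offset, can be sketched in a few lines. This is a minimal illustration (the function name and the test signal are my own assumptions, not the tools actually used for Agon):

```python
import numpy as np

def decorrelate_offset(mono, sr, offset_ms):
    """Replicate a mono signal into two channels, delaying the second
    copy by offset_ms to create inter-channel temporal decorrelation."""
    offset = int(sr * offset_ms / 1000.0)
    left = np.concatenate([mono, np.zeros(offset)])
    right = np.concatenate([np.zeros(offset), mono])
    return np.stack([left, right])  # shape: (2, len(mono) + offset)

sr = 44100
t = np.arange(int(0.270 * sr)) / sr              # a 270 msec test gesture
gesture = np.sin(2 * np.pi * 440 * t) * np.exp(-10 * t)
stereo = decorrelate_offset(gesture, sr, 31)     # 31 msec offset, as in Figure 1
```

Played over two loudspeakers, the channels carry the same waveform shifted by 31 msec; summed to a single channel, the same offset would instead produce comb-filter coloration.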
Note the slight differences between the channels with respect to phase relationships, as well as to the respective global amplitudes. This gesture flows very quickly, lasting only 270 milliseconds.
It would not have the same musical effect without decorrelation (Sound example 2). The next example, taken from near the beginning of Agon, features the same gesture, but enriched by another superposed gesture that adds not only more morphological diversity but also a richer interplay of spatial activity, including asymmetric inter-channel crossed movements (Sound example 3, Figure 2):

Figure 2: Another gesture from Agon.

Realizing the Importance of Decorrelation Through Algorithmic Sound File Mixing

I became aware of the importance of temporal decorrelation of audio signals for music composition while working in an electroacoustic studio situation. The use of performable diffusion systems based on arrays ("orchestras") of multiple loudspeakers helped to develop this insight. I learned then to accommodate my perception to slight variations in the synchronization of multi-channel mixed signals. Moreover, having used computers since the 1970s (especially the Music-N family of programs), I was confronted with the problems and advantages of pure digital sound file processing, where a "note" statement could instantiate many sound files in precise ways, but at the price of heavy control over the amplitudes and phases of the sounds to be mixed, in order to avoid clipping and phase distortion. This led me to define algorithmic mixing procedures (Vaggione 1984, Roads 1996). Paul Lansky was at that time also involved with algorithmic mixing (Lansky 1990).
Control over the decorrelation of audio signals arose naturally from this algorithmic file handling, where synchronization was of paramount importance. I realized then that slight deviations in the scheduling of data rows, especially when handling many different sound files simultaneously in the time domain, produced a sensation of space.

About Simultaneousness

In music, as in everything else, there are no temporal coincidences other than relative ones: we perceive simultaneities because our perception is not fast enough to hear microscopic temporal differences. These time intervals are nevertheless acting on and influencing our perception of musical facts. Roughly speaking, "any micro-temporal analysis of onset times for supposedly simultaneous attacks in musical performance would reveal asynchronisms on the order of dozens if not hundreds of milliseconds (this...is exploited by the Musical Instrument Digital Interface (MIDI) protocol in which simultaneous musical events are impossible, even in chords!)" (Roads 2001). I suppose, however, that this impossible simultaneousness is manifested not only at the macro scale of MIDI definition, but is pervasive all along the humanly perceivable temporal range,
and, moreover, that it is a positive factor in making music sound "alive". S. McAdams reminds us that "tone onset asynchrony is a useful technique in musical practice for distinguishing certain 'voices', and it is obvious that this cue is used with great versatility by many jazz and classical soloists" (McAdams 1984). McAdams cites research conducted by R. Rasch (1978, 1979) describing "how asynchronization allows for increased perception of individual voices in performed ensemble music, which also may be used in 'multi-voiced' instruments such as guitar and piano. Across these studies, asynchrony values in the range of 30-70 msec have been found to be effective in source parsing" (McAdams, op. cit.). On another register, Gerald Strang stressed, many years ago, the necessity of incorporating "imperfection" in computer music (Strang 1970), referring mainly to the "dry, boring nature" of fixed (periodic) waveforms, something that Risset was trying at the same time to overcome by articulating microtime inside spectra (Risset 1969). We can generalize this need for "imperfection" (i.e., for decorrelation) to temporal intervals of any size.

Some Perceptual Thresholds

Our experience (see for example Green 1971 and, for an overview of the
subject, Roads 2001) indicates that clicks of a few milliseconds can already be perceived as having a spectral content, as well as a global intensity. More interestingly, these clicks can already be perceived as decorrelated sources creating spatial sound images. Let us introduce inter-channel temporal decorrelation in the simplest way: by direct waveform manipulation. Consider a single sound produced by a Spanish castanet, lasting 150 msec (Sound example 4). In order to verify the thresholds mentioned above, we can select a portion of the attack lasting 2 msec, very short but still bearing a spectral content (Sound example 5):

Figure 3.

We can then replicate this grain in a second channel and decorrelate the two signals by 1 msec (Sound example 6):

Figure 4.

We can continue in this way, decorrelating the same grain with different, increasing offsets, but under the condition of maintaining two constraints: (1) a stereo setting, and (2) a "unary shape" (no repetitions); if these constraints are not respected, we switch from the time-space domain to the frequency domain (see next section). In terms of space, the 2 msec grain sounds completely "dry". However, when
replicated and decorrelated by an interval of about 3 msec, we introduce space perception; that is, we create a field where sounds are localized at more than one point in the listening space (Sound example 7). This constitutes an elementary example of what is called here inter-channel temporal decorrelation. Now, if we increase the inter-channel decorrelation more substantially, another threshold arrives: not only can we hear "space", but we begin to perceive "directionality", that is, movement (through the azimuth, the horizontal plane) inside the sound image. This is a quite obvious effect, but nonetheless very important for our purpose here. The next examples take the full source sound again (Sound example 4) and perform various degrees of decorrelation, up to a 48 msec offset (Sound example 8), where directionality can be evidently grasped (I recall that this is a straight example, without any interpolation, filter, or analysis/synthesis device controlling the trajectory between the channels).

The Space Domain

We must stress again
and again the fact that the creation of space by decorrelating audio signals is effective when the decorrelated replicas are placed in different channels, that is, when they enter into an inter-channel relationship. As we will see in a moment, a multi-channel setting, where many different signals are decorrelated at different rates, is the most effective. Of course, the examples shown here are stereo mix-downs of multi-channel originals, but they still work as expected: as complex waveforms built up of several decorrelated layers. Indeed, stereo is here the minimum setting: in the case of a monophonic output, as I said, decorrelations will not cause space perception but a series of effects mostly related to the frequency domain: the signals will be "colored" or "combed" (as in flanging, etc.). Figure 5, taken from Kendall (1995), summarizes these perceptual results.

Figure 5.

Decorrelation of musical signals as a technique to create perceptual space is well known in the audio industry: stereoizers and all kinds of spatializers work on the principle of a monophonic input signal which is replicated without changes and then decorrelated and routed to different channels. Gary Kendall, cited above, has applied the principle in very sophisticated ways (Kendall 1994, 1995). He used phase specification in order to perform an FFT analysis and an IFFT to get controlled decorrelations, creating libraries of FIR (Finite Impulse Response) filter coefficients to get precise measures (from 0.9 to 0.0; see Figure 5). He also used IIR (Infinite Impulse Response) filters to create "dynamic decorrelation", by far more interesting, as Kendall says, because "dynamic variation produces a spatial effect akin to the sound of an environment with moving reflecting surfaces" (Kendall 1995).
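The phase-specification approach can be sketched as follows: pair a flat magnitude spectrum with random phases and invert the FFT to obtain all-pass FIR coefficients. This is a schematic reconstruction of the general principle, not Kendall's published filter designs:

```python
import numpy as np

def decorrelation_fir(n_taps, seed):
    """All-pass FIR: flat magnitude spectrum, random phase spectrum.
    Convolving two copies of a signal with differently seeded filters
    yields channels with identical spectra but decorrelated waveforms."""
    rng = np.random.default_rng(seed)
    phases = rng.uniform(-np.pi, np.pi, n_taps // 2 + 1)
    phases[0] = 0.0          # keep the DC bin real
    phases[-1] = 0.0         # keep the Nyquist bin real (even n_taps)
    return np.fft.irfft(np.exp(1j * phases), n_taps)

mono = np.random.default_rng(0).standard_normal(4096)
left = np.convolve(mono, decorrelation_fir(256, seed=1))
right = np.convolve(mono, decorrelation_fir(256, seed=2))
```

Each filter has unit energy (by Parseval's theorem), so both channels keep the source's loudness and color while their fine time structure diverges; "dynamic decorrelation" would interpolate between such filter sets over time.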
We can recall here the work done at
CNMAT's Sound Spatialization Theatre about Volumetric Modeling of Acoustic Fields (Kaup
et al., 1999), where the authors incorporate, among other things, decorrelation techniques
as control features of both magnitude and phase spectra, in order to make equal signals
"differ in the speakers where the vector panning is operative", reducing in this way the
"precedence effect" (Haas 1951) and improving the perception of volumetric sound
projection for all people in the audience, no matter where they are seated. They
acknowledge the fact that "decorrelation techniques give rise to considerable ambiguity as
to the location of the source", concluding that "there appears to be a real trade-off between the enveloping nature of the spatial audio experience and the precision of the localization" (Kaup et al., op. cit.).

Phase Distortion

Inter-channel decorrelation is certainly not a product of a negative phase value. A situation where many decorrelated signals are summed does not necessarily cause phase distortion phenomena. When significant phase distortion appears, it is a sign that at least one of the inter-channel decorrelated signals has lost phase coherence. This is why it is important, when working in a computer music studio, to have continuous access to information about the global phase status by means of a phase correlation display. Once the source of phase distortion is detected, we can slightly move the signal in the azimuth (inter-channel) plane, looking for the point where the distortion occurs, in order to suppress or attenuate its negative value; usually this takes a very small offset adjustment, so small that the decorrelation effect does not suffer any significant alteration. Moreover, many moments where slight negative differences in phase relationship occur can be kept without any post-correction, as they appear in quick time-varying situations, contributing in fact to enhancing the dynamic effect of decorrelation (see Figure 1 for an example).

Panning, Delay,
and Decorrelation

Decorrelation, as described here, is of course very different from simple panning. The latter aims at positioning sounds in a stable field, and also at moving them from one channel to another, but always inside this stable field. By "stable field" I mean a space where inter-channel settings are not subject to time-varying decorrelation.
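A "stable field" in this sense can be illustrated by ordinary constant-power panning, sketched below (a generic textbook formulation, not any of the systems cited here):

```python
import numpy as np

def constant_power_pan(mono, position):
    """Place a mono signal in a stable stereo field.
    position: 0.0 = hard left, 1.0 = hard right.
    The two channels differ only in gain, so they stay fully correlated."""
    angle = position * np.pi / 2
    return np.stack([np.cos(angle) * mono, np.sin(angle) * mono])

mono = np.sin(2 * np.pi * 220 * np.arange(4410) / 44100)
stereo = constant_power_pan(mono, 0.25)   # a quarter of the way to the right
```

The inter-channel correlation remains exactly 1 for any position: panning positions a source, but never defines the space between the channels.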
However, decorrelation can work together with panning, as in the CNMAT Sound Theatre's "vector panning" cited above (Kaup et al. 1999; see also Pulkki 1997). In this particular case, the purpose of using decorrelation is to break the "precedence" effect: the "first wavefront" (Lindemann 1986), which makes the sound space image collapse if the audience is too near to one particular loudspeaker. Kendall has demonstrated that decorrelation preserves the sound space image, no matter where the audience is seated.
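By contrast, a decorrelated field can itself be made time-varying: the sketch below reads a second channel through a linearly interpolated delay line whose offset changes over the excerpt (an illustrative sketch of the idea, not taken from any of the systems cited here):

```python
import numpy as np

def time_varying_offset(signal, offsets):
    """Read a signal at a per-sample fractional delay (in samples),
    using linear interpolation, so the inter-channel offset can vary."""
    out = np.zeros(len(signal))
    for i in range(len(signal)):
        pos = i - offsets[i]
        if pos < 0:
            continue                          # before the signal begins
        j, frac = int(pos), pos - int(pos)
        nxt = signal[j + 1] if j + 1 < len(signal) else 0.0
        out[i] = (1 - frac) * signal[j] + frac * nxt
    return out

sr = 44100
sig = np.random.default_rng(3).standard_normal(sr // 10)
offsets = np.linspace(0, 0.048 * sr, len(sig))   # ramp offset from 0 to 48 msec
stereo = np.stack([sig, time_varying_offset(sig, offsets)])
```

Because the offset itself moves, the relation between the channels is continuously redefined rather than fixed once.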
This is something that panning alone cannot do. In fact, panning only does source positioning, fixed or mobile, without defining the space itself, that is, without altering and controlling the inter-channel offsets of the signals. Concerning delay: as we already saw, delay often alters the frequency content of the signal (especially when the offsets are small), unless inter-channel decorrelation is also involved. Delay becomes decorrelation from the moment that control is allowed over inter-channel offset times, which includes constantly updated information about phase status (information that delay does not need). Some algorithms of the GRM Tools package (Favreau et al., 1999; Teruggi 2001), for example, where access to time offsets between delay lines as well as to separate channel assignments is possible, can achieve space definition. Starting from
here, the interesting thing would be to obtain time-varying control of offsets, along with time-varying routing of decorrelated signals through multi-channel arrays, so as to get precise control over multi-layered (polyphonic) textures. The plug-in version of this package allows something of this, while depending on the flexibility of the host program (normally a multi-channel graphic sound file editor).

Multi-local Strategies

However, there are other options that may correspond to diverse compositional strategies. As I have already said, decorrelation can be controlled dynamically by means of functions stored in an array of look-up tables, or in a phase specification library. These functions can be seen as "attractors", linear or non-linear. They can currently be implemented using open packages such as MSP, SuperCollider, or OpenMusic. I have for years used some
interpolation algorithms based on polar coordinates, starting with some designed by Stephen McAdams in 1982, which contained specifications for several spectral configurations. I managed to include in these specifications some data affecting waveform decorrelation, to control the spectrum's space definition (McAdams and Wessel 1981, Vaggione 1984). My piece Fractal C (1984) was built algorithmically by defining inter-channel decorrelation as changing offset rates in a "multi-local" fashion, in order to give each "unary shape" (belonging to any time scale: spectrum, sound object, texture) a particular, "scintillating" spatial quality (Vaggione 1989). "Multi-local" strategies have later been developed in fractal theory under terms like "multi-resolution", "multi-fractal", etc., sometimes using wavelet-like representations (see for example Arneodo 1996). These strategies are evidently based on a multi-scale approach, but do not necessarily show the typical properties of rough self-similarity. In a somewhat parallel way, I have myself used the concept of "singularity", in the mathematical sense, to denote the power of single events (morphologies, "unary shapes") to structure a space in which many fractional dimensions are present (Vaggione 1997). Compositional decorrelations of waveforms can be based on these concepts if controlled dynamically, that is, concerning not single global processes but "multi-local" ones.

Spatial Polyphonies

In any case, inter-channel temporal decorrelation of audio signals constitutes an interesting way of working toward a musical situation developing an approach to "poly-spatiality". Let me introduce another musical example,
taken from my electroacoustic work Schall (Vaggione 1995). This fragment is based on a diatonic glissando performed on an acoustic piano, sampled so as to be processed by diverse means. What interests me here is the constant interplay of layers having different decorrelation offsets (Figure 6, Sound example 9). Note that this interplay creates spatial activity not only in the horizontal dimension (with diverse rates of speed and channel-split positions resulting in crossed sound trajectories) but also in depth (in the perception of the far and the near). Hence layering decorrelated textures also constitutes a means of controlling perceptual image distance.

Figure 6 (example from Schall).

Conclusion

I have referred in this presentation to
a simple and straightforward (but somewhat difficult to conceptualize) way of defining musical spaces. Temporal decorrelation of audio signals is embedded in many spatialization systems. However, my point has been to show its direct use in electroacoustic music composition. Starting with the very fact of an impossible simultaneousness, and the observation that music benefits from this fact to create "alive" sounds, I pointed out that our perception is sensitive to very slight temporal decorrelations. The difference between delay with a monophonic output (which leads to operations in the frequency domain) and inter-channel decorrelation (which leads to operations in the space domain) was stressed. Diverse ways of controlling inter-channel decorrelation were mentioned (direct waveform manipulation, interpolation, variable look-up table design, convolution, FIR and IIR filters, and direct phase control through FFT/IFFT). Finally, I have recalled that this approach is interesting for electroacoustic music composition when used in a multi-layered manner, where diverse channels with diverse time-varying inter-channel decorrelation values create not only a perceptually diffuse space, but also spatial depth and multi-directional dynamics. Combining decorrelated audio signals is a means of creating complex sound space images, internally articulated and controlled, aiming to approach the idea of composed poly-spatiality.

The Art of Articulation: The Electroacoustic Music of Horacio Vaggione

Curtis Roads

The composition of music has evolved into an interactive process of directly
sculpting sound morphologies on multiple time scales. A prime example is the
electroacoustic music of Horacio Vaggione. This music’s complexity and subtlety
challenge mere textual description, posing formidable problems of discourse. This article
traces the aesthetic and technical path followed by the composer during his career, and in
so doing begins the task of developing a new analytical vocabulary. Fortunately, Professor
Vaggione has written a considerable amount about his aesthetic approach. For this article,
I have relied on Vaggione’s texts as well as his extensive comments on a draft of this
article.

Keywords: Composition; Vaggione, Horacio; Electroacoustic Music; Multiscale Composition; Algorithms; Micromontage

The composition of music has evolved into an
interactive process of directly sculpting sound morphologies on multiple time scales. A
prime example is the electroacoustic music of Horacio Vaggione, whose music’s
complexity and subtlety challenge mere textual description, posing formidable problems
of discourse. This study traces the aesthetic and technical path followed by the composer
during his career. In so doing, it begins the task of developing a new analytical vocabulary.
Fortunately, Professor Vaggione has written a considerable amount about his aesthetic
approach (Vaggione, 1984, 1995, 1996a, 1996b, 1996c, 1996d, 1998, 1999, 2002). For this
article, I have relied on these texts as well as his extensive comments on a draft of this
article.

Contemporary Music Review, Vol. 24, No. 4/5, August/October 2005, pp. 295-309. ISSN 0749-4467 (print)/ISSN 1477-2256 (online). © 2005 Taylor & Francis. DOI: 10.1080/07494460500172121

Algorithms and Interventions: Early Encounters with Technology

At an early stage of his career, Vaggione recognized the pertinence to composition of emerging digital technology. Computers capable of generating sound were very rare in the 1960s. It required unusual persistence to gain the necessary programming expertise as well as access to such facilities. At the age of 23, Vaggione had the opportunity to visit the University of Illinois, where Lejaren Hiller and Herbert Brün first showed him how
computers could be applied to music composition (Vaggione, 1967). He studied the
stochastic composition algorithms used in Hiller’s Computer Cantata (1963) as well as the
coding language of the CSX-1 Music Machine, the first program to produce digital sound at
Illinois. Later he became acquainted with the programs that Hiller wrote to produce the
piece HPSCHD in collaboration with John Cage. Hiller gave Vaggione the source code of
these programs (written in the Fortran language), and introduced him to the Music N series
of sound synthesis programs written by Max Mathews and his colleagues. Vaggione began
his own experiments with computer-generated sound in 1970 at the Computer Research
Center of the University of Madrid (Budón, 2000). From the start, he explored a musical
aesthetic based on a fabric of short duration events scattered in time. This approach,
which Vaggione refers to as an ‘aesthetic of discontinuity’, is equally present in his
instrumental music of the same period. In the compositions Modelos de Universo (1971)
and Movimiento continuo (1972), the composer used a digital sound synthesis program
called ‘Papova’ (Briones, 1970; Vaggione, 1972) running on a large IBM 7090 mainframe
computer, to generate up to 20 sounds per second in each of four voices. He had followed
a similar procedure—worked out manually—in composing Triadas for orchestra (1968),
the last piece realized by Vaggione before leaving his native Argentina. In these early
pieces, Vaggione was already extending his compositional discourse into the micro time
scale, and the power of the computer became essential for the full development of his
musical ideas. The score of Modelos de Universo IV (Figure 1) provides an early example of
the principle of micromontage—the assembly of many short sounds in high densities. A
collection of musical figures was generated in common music notation, using several
strategies, going from simple algorithms to direct handwriting, and then assembled in
diverse patterns which were in turn agglutinated so as to form finite sequences. Each
measure of the score had a duration of one to two seconds.

Figure 1: Excerpt of the input score for Modelos de Universo IV (1970). Each measure lasts less than one second.

I wanted, through high-density sequences of discrete steps, to produce continuous sound phenomena arising at the edge between corpuscular and undulatory
representations, including transient intermodulations, differential sounds, foldovers and
so on. Hence, as I realized later, I was already dealing, through macroscopic notation, with
the micro-time domain. The score sheets were translated into machine language (the first
version was realized on punched cards), in order to be entered as data into the computer,
which produced the sound synthesis. The reason I began by writing a score in music
notation derived from the inherent noninteractivity of the system, and the necessity of
developing a strategy to produce the wanted sounds before entering the data for synthesis.
(Vaggione, 1982) Vaggione’s output in the 1980s can be seen as a consistent development
of these initial explorations. Examples from the 1980s involving microsonic techniques and
multi-scale perspectives (using computer languages for synthesis and transformation)
include several pieces realized in Paris at IRCAM: Octuor (1982), Fractal A (1983), Fractal C
(1984), Thema (1985); and later at the Technische Universität Berlin Elektronisches Studio: Tar (1987) and Sçir (1988). To these we must add Ash (1989), realized in Paris at
INA/GRM using the SYTER sound processor. Octuor was composed with the Music-10
programming language developed at Stanford University’s Artificial Intelligence
Laboratory, which ran on IRCAM’s DEC PDP-10 mainframe computer. The work, which won
the first prize at the NEWCOMP competition in Cambridge, Massachusetts (1983), is well
documented in an article written by the composer for Computer Music Journal: The main
compositional goal was to produce a musical work of considerable timbral complexity out
of a limited set of sound source materials. The process began with the generation of five
synthesized files, employing additive synthesis and frequency modulation (FM) algorithms.
Once this collection of sound files was completed, the next step was to analyze, reshape,
multiply and combine its elements through relatively simple software manipulations, using
the program S as the main analytical tool, SHAPE for control of the overall amplitude
envelopes, MIX as a means for blending sound objects into complex timbral entities and
KEYS for immediate random-access playback. With the help of these programs, the sound
files were segmented into small portions, regrouped into several pattern and timbral
families, processed, and mixed into medium and large sound textures. The product of
these compositional procedures was stored as a set of new sound-object files. Then, using
the KEYS program, these files were organized and finally played automatically in eight
channel polyphony according to a score that specified the overall form of the piece.
(Vaggione, 1984) The interaction between formal algorithmical control and direct
intervention is a hallmark of Vaggione’s compositional strategy. Specifically, he combines
both algorithmic procedures and purely manual, interactive operations, the latter realized
on the products of the first. The philosophy behind manual intervention on algorithmically
produced morphologies was affirmed by Vaggione in these terms: A composer knows how
to generate true singular events, and how to articulate them in the larger sets without
losing the sense (and control) of these singularities. This
is why purely global causal formulas are problematic in musical composition, if their
automation is not compensated by other levels of articulation, notably unique
compositional choices, as much global as local, as much relational as functional, thus
being integrated explicitly in a compositional strategy. (Vaggione, 1989; see also Vaggione,
1992) Vaggione’s description of one of the source sounds used in Octuor illustrates his
preoccupation with the micro time scale: The durations were, in general, very short.
Silences of different lengths were placed between events. The density (or speed of
succession) was very high: more than 20 events per second. This rate exceeds the limit of
applicability of the Poisson law, which is valid to control sound distributions whose densities are lower than 10–20 events per second. Beyond 20 events per second, one is no longer
dealing with sounds as individual entities. However, the goal in building this linear
structure by combining high density of sounds with highly contrasted parametric values
was to create a texture showing a kind of kaleidoscopic ‘internal’ behaviour. (Vaggione,
1984) Another work realized at IRCAM, Fractal A (1983), is one of the few pure algorithmic
compositions that Vaggione ever realized. The theoretical model was Cantor’s triadic set, a
set of points obtained on a given interval by throwing out the middle third and iterating this
operation on the remaining intervals. The composer’s goal was to create a multilayered
tapestry of microsounds. He wrote code in the programming language AWK (Aho et al.,
1983), a scripting language with C-like syntax, to generate scripts that acted as sound
granulators. (A sound granulator chops a continuous sound into tiny sound particles.) The
result was a systematic ‘powdering’ of the sound material (Vaggione, 1983). Taking the
simplest solution, one could make each of Cantor’s segments correspond, determined by
the temporal size, to a window or grain of sound. To each step of iteration will correspond
an increasingly contracted window; hence one obtains an increasingly sparse object,
comprising—if one suitably regulates amplitudes of the different strata of iteration—a
particular flutter, presenting itself like a particle of sonic dust: granular textures which,
even if the density tends towards the infinite, will never arrive at any laminar state, but to a
space saturated of void. The paradox here is that Cantor’s set, of an infinitely divisible
appearance, is only this in the grains, and not in the space that surrounds them. Thus this
process generates flows of grains of different sizes, flows which are at the same time irregular
and intermittent. According to whether it is closer to one edge (time scale) than to another,
there will be denser granulations, figural or turbulent, or sparser, at the same time emptier
and more homogenous. It is thus a criterion which can be applied to the generation of
granular textures and figures with precise quantitative descriptions that can be driven by
strict algorithmic means. (Vaggione, 1989) In his next piece Fractal C (1984), Vaggione
returned to the approach of Octuor, combining pure algorithmic methods with manual or
direct interventions, using the interactive tools of the CARL system—a
software package for sound synthesis and sound processing originally developed at the
University of California, San Diego (Loy, 1984). Using a DEC VAX-11/780 mainframe
computer at IRCAM, the composer used UNIX commands (such as pipes) to chain together
a series of musical processes. Another feature of the CARL system used in Fractal C was a
‘fast interactive mode’—a set of commands that the composer used to select portions of a
sound file and create new files containing only these selected portions. According to the
composer (Vaggione, 2004), this kind of selection and subdividing technique was from this
point on a typical feature of his compositional strategy. Micromontage In these early works
and continuing to the present day, the technique of micromontage is an essential
component of the Vaggione style. In micromontage, the composer extracts particles from
sound files and rearranges them in time and space. The term ‘montage’ derives from the
world of cinema where it refers to cutting, splicing, dissolving and other film editing
operations. The term ‘micro’ refers to the manner in which a composer can position each
sound particle precisely on the canvas of time. ‘Digital micromontage’ refers to operations
dealing with small sound particles, belonging to the micro-time domain (usually less than
100 ms). In this detailed manner of working, we have the musical equivalent of the
Pointillist painter. It is notable that in music, the term ‘Pointillism’ has long been
associated with the sparse serial style of Webern and his followers. Ironically, the
technique of the Pointillist master Georges Seurat was anything but sparse. His canvases
present a dense sea of thousands of meticulously organized brush strokes (Homer, 1964).
Granulation techniques share many similarities with micromontage (Roads, 2002).
Perhaps the best way to draw a distinction between granulation and micromontage is to
observe that granulation is inevitably an automatic process: the composer’s brush
becomes a refined spray jet of sound color. By contrast, a sound artist can realize
micromontage by working directly in the manner of a Pointillist painter: particle by particle.
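This particle-by-particle procedure can be sketched in a few lines of Python (a hypothetical illustration, not the composer's actual tools): particles are cut from a source sound and pasted at precise points on a silent "canvas of time".

```python
import math

SR = 44100  # sample rate in Hz

def extract_particle(source, start_s, dur_s):
    """Cut a short particle (typically < 100 ms) out of a source sound."""
    a = int(start_s * SR)
    return source[a:a + int(dur_s * SR)]

def paste(canvas, particle, at_s, gain=1.0):
    """Mix a particle into the canvas at a precise point on the time line."""
    at = int(at_s * SR)
    for i, s in enumerate(particle):
        canvas[at + i] += gain * s

# A two-second "canvas of time", initially silence.
canvas = [0.0] * (2 * SR)

# Stand-in source material: a synthetic 440 Hz tone
# (a real session would read particles from sound files).
source = [math.sin(2 * math.pi * 440 * n / SR) for n in range(SR)]

# Particle-by-particle placement, each paste a single brush stroke.
particle = extract_particle(source, 0.10, 0.050)  # one 50 ms particle
for k in range(8):
    paste(canvas, particle, at_s=0.2 * k, gain=1.0 - 0.1 * k)
```

Each call to paste is one stroke; a realized piece multiplies this gesture by the thousands.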
It therefore demands unusual patience. Of course, micromontage and granulation
techniques can be seamlessly intermingled. Thema for bass saxophone and tape (1985)
and Tar for bass clarinet and tape (1987) are early examples of micromontage. Thema
features streams of microsounds, such as resonant bass saxophone breath-bursts,
scattered in both synchronous and asynchronous patterns along the time line. Once again,
the composer used the CARL software in the realization, writing Cmusic instruments and
scores in the form of alphanumerical texts. The construction of Thema by script meant that
the material could be organized on an unprecedented level of micro detail. Figure 2 shows
an excerpt of the code for Tar, in which the composer defined operations dealing with
micromontage. In particular, Figure 2(c) shows an excerpt of the note list that functioned as a script for micromontage.

In realizing Tar, the composer developed what he called ‘object-based’ composition methods—that is, by means of the scripting language built into the CARL system, the composer was able to create subclasses of a specific sound object through transformations such as time-stretching or pitch-shifting. The transformed sounds inherit the morphology of the original sound. The composer has written extensively about this approach (Vaggione, 1991).

[Contemporary Music Review, 299]

Figure 2. Cmusic example from Tar (1987). (a) List of sound files to be processed. (b) An instrument for reading sound files. Note: in the full listing of the program the composer designed twelve additional instruments. (c) Excerpt of the note list. Each note is a microevent. In the listing shown, no note lasts more than 58 ms. The first two notes start at time 0. The rest of the notes start at indicated values in seconds, with durations indicated in milliseconds. They have individual amplitudes and locations in quadraphonic space. The full score stipulated 870 notes.

Emergence of a New Direction

An important transition took
place with the spread of personal computers in the mid-1980s. By 1988, inexpensive
personal computers had become powerful enough to support high-quality audio recording
and synthesis. Experiments on the micro time scale—granular or particle synthesis—
became more feasible (Roads, 1978, 1985a, 1985b; Truax, 1990a, 1990b). (My book
Microsound (Roads, 2002) traces the history of particle synthesis from the theories of
Gabor (1946) and Xenakis (1960) to the first implementations on digital computers.) In
addition, two essential software tools became available in this period: the graphical sound
editor and the graphical timeline audio mixing program. It is difficult to overestimate the
significance of these advances that are so commonplace today. The simple ability to align
multiple sounds along a timeline, to zoom in and out, and jump across time scales with the
click of a button changed the nature of electroacoustic composition. As Vaggione (in Budón, 2000) has observed, composition on multiple time scales involves no distinction
between music structure and sound materials: ‘I assume that there is no difference of
nature between structure and sound materials; we are just confronting different operating
levels, corresponding to different time scales to compose’. With the new interactive sound
tools, suddenly it was possible to apply directly any kind of sound transformation, on any
time scale. The sound material itself became a composed structure. Vaggione’s Till (1991),
for piano and tape, signals the emergence of a new direction. As personal computers
replaced shared mainframe computers, Vaggione and others began to use graphical sound
editors, furthering the dialectic between algorithmic and direct operations, which in turn
influenced his way of dealing with the micro-time domain. In Till, what begins as a spiky, sharp-angled piano etude starts, by 8 minutes and 21 seconds, to melt into a dense cloud
of sound energy, driven by the torrential flow of thousands of tiny sound particles. This new
direction crystallized in his 1994 electroacoustic composition Schall. In the rest of this
article, I would like to focus my attention on this piece and the subsequent compositions
Nodal (1997), Agon (1998), Préludes Suspendus (2000) and 24 Variations (2001).

Schall

The raw material of Schall consists of thousands of sound particles derived from sampled
piano, which are granulated and transformed by such operations as convolution,
waveshaping and the phase vocoder. The work plays
essentially with tiny textures of feeble intensity, composed of multiple strata, which
contrast with some stronger objects of different sizes, in a kind of dialog between the near
and the far—as an expression of a concern with a detailed articulation of sound objects at
different time scales. (Vaggione, 1995)

A fascinating aspect of style in Schall, Nodal, Agon, Préludes Suspendus and 24 Variations is the use of continuously dithering or scintillating
textures, composed of more or less dense agglomerations of short-duration grains. These
sometimes crackling, frying or creaking textures serve as a stationary element in the
mesostructure of the pieces, holding the listener’s attention. By keeping these grainy
textures low in amplitude (usually over 10 dB down from the foreground peaks and
resonances), their background (or ‘far’) role is evident. The composer sustains these low-level textures for 20 seconds or more at a time, keeping the listener engaged while he
prepares the next explosive release (the ‘near’). Like any highly detailed background
pattern, their intricate design emerges into the foreground only when there is nothing else
superimposed upon them for several seconds. Schall is an outstanding example of the use
of creative micromontage. The sound material consists of thousands of sound particles
distributed on multiple layers of time. The music is focused on a limited collection of
objects of different sizes, which appear in diverse perspectives. The work plays essentially
with contrasts between textures composed of multiple strata, as an expression of a
concern with a detailed articulation of sound objects at different time scales. (Vaggione, 1999)

What makes Schall unique is its brilliant use of the notion of switching between
different time scales: from the microsonic (<100 ms duration) up to the sound object level (>100 ms) and down again into the microsonic. The laws of physics dictate that the shorter
the particles, the more broadband their spectrum, as in the noisy section between 2:10
and 2:28, or the final 30 seconds of the work. Thus the interplay is not just between
durations, but also between pitch and noise. In Schall, the micromontage was mediated
through interactive sound editing and mixing software.

Considering the hand-crafted side, this is the way I worked on Schall (along with algorithmic generation and manipulation of
sound materials): making a frame of 7 minutes and 30 seconds and filling it by ‘replacing’
silence with objects, progressively enriching the texture by adding here and there different
instances (copies as well as transformations of diverse order) of the same basic material.
(Vaggione, 1999)

Here each microsound in a track is a kind of sonic brush stroke. As in a
painting, it may take thousands of strokes to fill out the piece. Graphical sound editing and
mixing programs offer a multiscale perspective. One can view the intimate details of sonic
material, permitting microsurgery on individual sample points. Zooming out to the time
scale of objects, one can edit the envelope of a sound until it has just the right weight and
shape within a phrase. Zooming out still further, one can shape large sound blocks and rearrange macrostructure. The availability of dozens of tracks lets the
composer work extremely precisely on every time scale. In 1997, at his studio on the Île-Saint-Louis in Paris, Maestro Vaggione demonstrated to me some of the micromontage
techniques used to make Schall. These involved arranging microsounds using a sound
mixing program with a graphical time-line interface. He loaded a catalog of previously
edited microsounds into the program’s library. Then he would select items in the library
and paste them onto a track at specific points on the time line running from left to right
across the screen. By pasting a single particle multiple times in succession, the particles
fused into a sound object on a higher temporal order. Each paste operation was like a
stroke of a brush in a painting, adding a touch more color. The collection of microsounds in
the library was the palette of colors. Since the program allowed the user to zoom in time,
the composer could paste and edit on different time scales. The number of simultaneous
tracks was essentially unlimited, which permitted a rich interplay of events, even if they
were not rendered in real time.

Nodal

With Nodal (1997), the composer elaborated the
materials used in Schall several steps further, while also opening up the sound palette to a
range of sampled percussion instruments. The identity of these instruments is not always
clear, however, since they articulate in tiny particles. The composition lasts 13:06. For the
purpose of this discussion, I divide it into three parts: Part I (0:00 to 5:46), Part II (5:49 to
9:20) and Part III (9:21 to 13:06). These three sections are separated by silences that are
clearly visible in a sound editor. The strong opening attack establishes immediately the
potential force of the sound energy and sets up a dramatic tension. Although the
continuously granulating texture that follows is often quiet in amplitude, one realizes that
the floodgates could burst at any moment. This effect is highly enhanced by ‘creaking’
sounds that give the impression of reins being strained. Part II begins with a warm fluttering
texture that turns into a chaotic noise. While the ear tracks this low-frequency rumbling, at
6:18 a distinct mid-high crotales ‘roll’ with a sharp resonance at 1600 Hz sweeps across.
The overall texture becomes unpredictably turgid and chaotic, until at 7:11 the composer
introduces an element of stasis: a rapidly repeating piano-like sound during which the
granulation background briefly lets up. This leads to a section of tactile noise, soft like a
wet snowstorm. At 8:46 another wood-tapping pattern appears. This part cadences on an
incongruous major chord from what sounds like a toy piano. According to the composer,
this sound was the product of a variable time-stretching function applied to a short
percussive sound, manipulated in time and frequency with a phase vocoder algorithm
(Vaggione, 2004). Part III introduces a ‘drum-gong’ sound deformed by means of a
waveshaping technique. Waveshaping selectively bends sound waveforms according to a
user-specified shaping function. As a result of this deformation, the waveform’s
timbre changes (Roads, 1996; see Vaggione 1996b, 1998,
for an explanation of the composer’s application of this technique to sampled sounds).
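The principle of waveshaping is easy to illustrate. The Python sketch below (using a hypothetical tanh shaping function; the composer's own functions are not documented here) passes a sine wave through a shaping function: the period, and hence the pitch, is preserved, while the bent waveform acquires new harmonics.

```python
import math

SR = 44100

def waveshape(signal, shape):
    """Apply a user-specified shaping function sample by sample."""
    return [shape(s) for s in signal]

def shaper(x):
    """A hypothetical shaping function (not Vaggione's actual one):
    a normalized tanh curve that bends the waveform toward its peaks."""
    return math.tanh(3.0 * x) / math.tanh(3.0)

# One cycle of a pure 100 Hz sine...
sine = [math.sin(2 * math.pi * 100 * n / SR) for n in range(SR // 100)]

# ...bent by the shaper: same period (same pitch), new harmonics.
bent = waveshape(sine, shaper)
```

Because the operation is applied pointwise, it works just as well on a sampled drum-gong as on a test tone, which is what makes it attractive for deforming concrete material.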
The background texture in Part III is high in frequency, sounding like rain on a thin roof. Its
density gradually builds, as new bursts and resonances sweep into view. The background
texture ebbs at 11:35, letting up until 12:09. The closing texture (a low-frequency rumbling
that also concludes Agon) is a long 39-second fade out. This texture continues (at a low
amplitude) for several seconds after the final gesture of the piece—a concluding three-
event percussive tag ending.

Agon

Agon (1998) refines the processes and materials heard
in Nodal. This virtuoso composition opens with a continuously fluttering band of sound in
the range between 6 kHz and 16 kHz. The rate of the fluttering modulation is between 10 Hz
and 20 Hz. The continuity of the high-frequency band is broken up by various and sundry
colored explosions at key moments. It is as if different percussive sounds are being
dropped into a gigantic granulator to be instantaneously mulched into bits of microsound.
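A rudimentary granulator of this kind can be sketched as follows (a generic Python illustration, not Vaggione's software): the source is cut into short enveloped grains that are scattered over an output buffer.

```python
import math
import random

SR = 22050
random.seed(0)

def granulate(source, grain_ms=20, out_dur_s=2.0, density=200):
    """Mulch a source into short enveloped grains scattered over an output buffer."""
    glen = int(SR * grain_ms / 1000)
    env = [math.sin(math.pi * i / glen) for i in range(glen)]  # smooth grain window
    out = [0.0] * int(SR * out_dur_s)
    for _ in range(int(density * out_dur_s)):
        src_at = random.randrange(len(source) - glen)   # where to read a grain
        out_at = random.randrange(len(out) - glen)      # where to drop it
        for i in range(glen):
            out[out_at + i] += env[i] * source[src_at + i]
    return out

# Stand-in percussive source: a decaying noise burst.
source = [random.uniform(-1, 1) * (0.9995 ** n) for n in range(SR)]
cloud = granulate(source)
```

The grain length and density parameters here are arbitrary; varying them is what moves the result between sparse tapping and the dense "mulched" clouds described above.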
On first hearing, Agon appears to present a continuous stream of new material. Repeated
listening reveals that the work recycles sound material in an efficient manner. For
example, the penultimate gesture of the work—a turgid swirling mid-low frequency band—
is already heard in the first 35 seconds. The final gesture of the work, a triple stroke ‘tom-
click-hiss’, appears first at 2:59 and again at 3:08. Certain of the recycled sounds in Agon
are strange mutations of other sounds, while others are drawn by hand in a graphical
sound editor and derive from no original source. Consider the sound first heard 40 seconds
into the piece that seems like a small metal bell. According to the composer, the origin of
this sound was not a bell, but was the result of a convolution cross-synthesis procedure.
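The principle of convolution cross-synthesis is simple to state: the output of convolving two sounds carries the spectral imprint of both. The following Python sketch (with stand-in signals; the composer's actual sources are not documented) convolves a click train with a decaying 750 Hz resonance, so that each click rings like a small bell.

```python
import math

SR = 8000

def convolve(a, b):
    """Direct convolution: every sample of one signal triggers a scaled
    copy of the other, so the output blends both spectra."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

# Hypothetical inputs: a sparse click train and a decaying 750 Hz resonance.
click_train = [1.0 if n % 400 == 0 else 0.0 for n in range(800)]
resonance = [(0.99 ** n) * math.sin(2 * math.pi * 750 * n / SR) for n in range(400)]

hybrid = convolve(click_train, resonance)  # each click now rings at 750 Hz
```

In practice the two inputs are both rich concrete sounds, which is why the result often resembles neither source.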
The bell-like sound first appears with a resonance at 750 Hz; then at 59 seconds it shifts up to
1080 Hz (approximately an augmented fourth). Another frequently recycled sound is like a
tom-tom stroke. According to the composer, it was actually a hand-drawn waveform. The
The tom-tom-like sound is first heard in a burst of strokes at 34 seconds. Both the ‘bell’ and the ‘tom-tom’ reappear at many points in Agon. A shimmering cymbal-like sound interweaves
throughout the work—a component of the high-frequency band that streams through most
of the piece. A ‘piano tone cluster’, which originated according to the composer as a
mutation of a percussion sound, first appears at 2:01. It then signals the end of a quiet
zone at 5:54, and marks a turning point of the finale at 8:10.

Préludes Suspendus

Préludes Suspendus (2000), dedicated to Jean-Claude Risset, is well worth analysis. In
concert (especially when diffused by the composer), its impression is one of almost
overwhelming power and dynamic energy. By contrast, in the controlled environment of
the studio, we can carefully study the pattern of its intricately embroidered design. Beneath the dramatic rhetorical flourishes is a delicate arrangement of elements. I
recommend listening at moderate amplitude to catch the details. While Schall was limited to highly processed sampled piano tones, Préludes Suspendus incorporates coloristic
resources from Nodal and Agon (such as percussion samples), as well as adding new
ensemble samples of brass instruments, sometimes used in sweeping arpeggiated figures.
At other times, these samples are radically mutated by analysis-resynthesis techniques. In
these techniques, a given sound is analyzed, the analysis data can be altered and a
mutation of the original sound is then resynthesized from the altered data. Jean-Claude Risset was a pioneer in analysis-resynthesis (Risset, 1966; Risset & Mathews, 1969), as Vaggione pointed out in the program notes of the piece:

Préludes Suspendus is an
electroacoustic work using as basic material some instrumental (mostly brass and
percussion) sampled sounds, which were processed and transformed by means of
analysis-resynthesis procedures. In designing these procedures I was often inspired by
Risset’s pioneering work on ‘analysis by synthesis’, especially regarding brass sound
modeling, including detailed spectral and phrasing articulations. Thus the musical figures,
sometimes assembled additively so as to form virtual ‘symphonic’ images, were written
(specified) at several time scales, including note-by-note articulation, by means of these
synthesis procedures. (Vaggione, 2002)

It is not surprising, given Vaggione’s predilections
for mutating sounds, that only some of the sound objects used in the work retain the
gestural or morphological features of the original sources. Certain sounds in the Préludes are detached from any perceivable source. The work opens violently with a series of 21
forceful attacks—some of which smear together—in the first 22 seconds. The
characteristic mesostructural syntax of Préludes is based on long sections of background
scintillation interjected with swells of low frequency energy that emerge from the
background. A prime example is the swell that begins at 46 seconds and lasts until the
climax at 59 seconds. Another example is the relentless series of eight successive swells
that carry the energy through the peak of the piece, which transpires in the section
between 6 minutes and 7 minutes 35 seconds. Two specific sound objects stand out in Préludes, and deserve further commentary for their symbolic and structural roles. One is a deep resonant sound, like a cross between a bass drum and a
gong, with a slight downward pitch bend. It is one of Vaggione’s signature sounds,
appearing for example in the opening of Part III of Nodal. When this drum-gong sound first
appears at 6:34 (the piece is already half over), it comes as a foreboding surprise, like the
unexpected toll of a funeral bell. It tolls four more times in the next minute. It only
reappears once more: as the final sound in the piece at 9:40. The other object is a brass
flourish ascending melodically, reaching a peak, and then either sustaining, trilling or
arpeggiating downward. It first appears 11 seconds into
the piece, and reappears many times, never quite the same. The flourishes stand out
because they launch and release major swells of energy in the piece. Vaggione’s
deployment of these melodic flourishes is quite clever. First, they emerge out of an
ongoing texture. Second, their ending is always ambiguous; he inevitably superimposes
other sounds at the peak of the flourish so the pitch trajectory simply merges with the
ongoing texture. In effect, pitched melodies coalesce and disintegrate as a natural part of
the flux and flow of the noise.

24 Variations

24 Variations was composed in 2001. If Préludes Suspendus is Dionysian in its raucous energy, 24 Variations is the cool and
restrained Apollonian. This is, to me, the most gracefully poetic of Vaggione’s
electroacoustic compositions. In order to appreciate this, I also recommend listening at a
moderate volume in order to savor its subtleties. One is drawn in not by the expectation of
spectacular climaxes, but by the originality and virtuosity of the articulations as they pass
by. To realize 24 Variations I used various programs written in the SuperCollider II and Max/MSP languages. For the second version of the piece I also used IRIN, a micromontage and sound file manipulation program developed in Max/MSP by Carlos Caires at the University of Paris VIII (Caires, 2003, 2004). Figure 3 shows a 40-second fragment of the score for 24 Variations.

Figure 3. Excerpt of the score of 24 Variations (version 2), showing the timeline designed with the IRIN program. Each rectangle represents a sound clip or sample. The vertical position of a sample within a track is not significant (i.e., it does not correspond to pitch). IRIN lets one encapsulate figures within tracks and represent them as a single fragment, permitting a hierarchical building up of mesostructure.

The narrative of 24 Variations unfolds deliberately, as the composer parsimoniously
scatters dabs of energy over a ubiquitous background stream. Much of the sonic material
has been distilled down to timbral residues: residue of piano, cymbal, tom-tom, maracas
and so on. The raucous horns of Pre´ludes are absent. Other objects stand out as
electronic artefacts: jagged clicks and sinusoid-infused residues of radical spectral
mutations. The odd percussive resonance at 1 minute 52 seconds is an example of the
latter. This is a hollow shell of a concrète sound, perhaps the remains of a convolution.
The sound lexicon features the classic Vaggione foreground versus background contrast.
In the foreground are attack-resonances (piano chord, drum), pops, claps, up and down
sweeps. The rhetoric of 24 Variations is dominated by interjection. Instead of grand swells
and accumulations, the foreground and background dance together. Each foreground
gesture eventually dissolves into the background, while the masked background emerges
into the foreground. It is in the arrangement of the carefully chosen elements repeating at
just the right moments that this work stands out. A prime example is the constant-pitch
asynchronous grain stream, which sounds like a kind of Morse code tapping in the texture
between 4 minutes 40 seconds and 4 minutes 50 seconds, returning again and again.
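Such a constant-pitch asynchronous grain stream can be sketched generically in Python (hypothetical pitch and rate values): one grain at a fixed pitch, with inter-onset intervals drawn from an exponential distribution, produces exactly this kind of irregular tapping.

```python
import math
import random

SR = 22050
random.seed(2)

def grain(freq, dur_ms=15):
    """One short grain at a fixed pitch, under a sine envelope."""
    n = int(SR * dur_ms / 1000)
    return [math.sin(math.pi * i / n) * math.sin(2 * math.pi * freq * i / SR)
            for i in range(n)]

def async_stream(freq, dur_s=3.0, mean_rate=12.0):
    """Constant pitch, irregular onsets: exponentially distributed
    inter-onset intervals give the Morse-code-like tapping."""
    out = [0.0] * int(SR * dur_s)
    g = grain(freq)
    t, onsets = 0.0, 0
    while True:
        t += random.expovariate(mean_rate)  # asynchronous spacing
        at = int(t * SR)
        if at + len(g) >= len(out):
            break
        for i, s in enumerate(g):
            out[at + i] += s
        onsets += 1
    return out, onsets

stream, n_onsets = async_stream(880.0)
```

A synchronous stream would simply replace the exponential draw with a fixed increment; the contrast between the two is audible immediately.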
Another subtle touch is the triple dose of silent intervals inserted between 6 minutes 30
seconds and 6 minutes 55 seconds. As in all of Vaggione’s electroacoustic compositions
considered here, the work concludes with a characteristic ending tag or flourish, as if the
composer were closing the door on a virtual world.

Conclusion

I am interested in investigating further the relationship between meter (as a cyclic force) and rhythm (as a non-cyclic movement) and this is not only at the level of macrotime, but at the most microscopic level reachable with our present tools. (Vaggione, in Budón, 2000)

Horacio
Vaggione’s path to composition has been particularly focused. Early in his career, he
recognized the pertinence of combining computer technology with the technique of
micromontage. Like Xenakis, he also recognized the need for a balance between
algorithmic composition and direct intervention: ‘To articulate a highly stratified musical
flux by statistical means is unthinkable. On the contrary, it depends on singularities:
discontinuities, figures, contrasts and details’ (Vaggione, 2003). Through their strategies,
certain high talents have a baffling ability to make fine art look like an easy game. The
elements are well defined, the structure is clear, the technique is obvious. Anyone should
be able to make it! Of course, this is not so. We do not fully understand, and we eventually realize that there are deeper, unaccounted-for layers. We will never
comprehend the choice or the timing of singularities that break the symmetry, shatter
expectation, and liberate the energy. I am convinced that what we call ‘talent’ is a
combination of aptitude with an intuitive sense of choosing the right problems to solve.
Horacio Vaggione consistently chooses Contemporary Music Review 307 the most
pertinent problems. In so doing, he sets the standard for the electroacoustic music of
today.

Acknowledgments

I would like to thank Brigitte Robindoré for her careful and insightful comments on a draft of this manuscript, which led to improvements in the presentation. I would also like to thank Horacio Vaggione for his substantial comments on a draft of this text, which in particular targeted his early works.

Computers and Music as a Complex System

"Computers are not primarily used for solving well-structured problems ... but instead are components in
complex systems" (Winograd 1979). Music composition can be envisioned as one of these
complex systems, in which the processing power of computers is dealing with a variety of
concrete actions involving multiple time scales and levels of representation. The
intersection of music and computers has created a huge collection of possibilities for
research and production. This field represents perhaps one of the highest areas of cultural
vitality of our time. It would be somewhat presumptuous to attempt to sum up such richness
in a few lines. Hence I will dedicate this article to surveying some of the musically
significant consequences of the introduction of digital tools in the field of sound processing, allowing musicians, for the first time, to articulate (to compose) at the level of microtime, that is, to elaborate a sonic syntax.

Surface Versus Internal Processing

To clarify these notions, consider using a MIDI note processor (a typical macrotime protocol)
and increasing the density of notes per second to the maximum that it can handle. In this
way, we can obtain very rich granular surface textures, and even provoke morphological
changes in the spectral domain as side effects of these surface movements. However, we
cannot, by this procedure alone, directly reach the level of microtime, by which I mean we
cannot explicitly analyze or control the time-varying distribution of the spectral energy. The
difference between surface and internal processing is well understood today. We can
recall, among the disciplines studying the macroscopic domain, the recent developments of a macrophysics of granular matter (Guyon and Troadec 1994), aiming to define its territory by taking its distance from both micro-physics and chemical analysis-synthesis.

[Articulating Microtime. Computer Music Journal, 20:2, pp. 33-38, Summer 1996. © 1996 Massachusetts Institute of Technology.]

But already Antoine Lavoisier had clearly traced the
edge between these domains (Lavoisier 1789):

Granulating and powdering are, strictly speaking, nothing other than mechanical preliminary operations, the object of which is to divide, to separate the molecules of a body and to reduce them to very fine particles. But however far one can push these operations, they cannot reach the level of the internal structure of the body: they cannot even break the aggregate itself; thus every molecule, after granulation, still resembles the original body. This contrasts with the true chemical operations, such as, for example, dissolution, which changes intimately the structure of the body.

Naturally, once this distinction is clearly stated, there is room to
define all kinds of intermediary (fractional) levels where the different domains can interact.
To refer again to our example concerning MIDI macro-processing: the fact that we can bring about changes in the spectral domain as side effects of surface movements can be useful if we also have the necessary tools to analyze and resynthesize the morphologies thus obtained.
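A minimal analysis-resynthesis loop makes the point concrete. The Python sketch below (a plain DFT, standing in for the more refined tools discussed in this article) analyzes a tone, deletes one partial in the spectral domain, and resynthesizes a mutated waveform.

```python
import cmath
import math

N = 256  # analysis frame length

def analyze(frame):
    """Analysis: plain DFT, waveform -> complex spectrum."""
    return [sum(frame[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def resynthesize(spectrum):
    """Resynthesis: inverse DFT, (possibly altered) spectrum -> waveform."""
    return [(sum(spectrum[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N).real
            for n in range(N)]

# A two-partial tone; both partials fall on exact analysis bins.
tone = [math.sin(2 * math.pi * 5 * n / N) + 0.5 * math.sin(2 * math.pi * 20 * n / N)
        for n in range(N)]

spec = analyze(tone)
for k in (20, N - 20):   # mute the second partial (and its conjugate bin)
    spec[k] = 0j
mutated = resynthesize(spec)  # only the first partial remains
```

Any alteration of the analysis data (scaling, shifting, muting of bins) yields a corresponding mutation of the resynthesized sound, which is precisely the internal access that surface processing alone cannot provide.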
What is interesting for music composition is the possibility of elaborating syntaxes that might take into account the different time levels, without trying to make them uniform. In
fact, the sense of any compositional action conscientiously articulating relations between
different time levels depends essentially on the general paradigm adopted by the
composer. Evidently, he or she must make a coherent decision concerning the status and
the nature of the levels involved. This means placing them in a continuum organized as a
linear hierarchy, or assuming the existence of discontinuities (or simply non-linearities) and then considering microtime, macrotime, and all intermediary dimensions as relative (even if well-defined) domains. [Vaggione, 33]

In this article I will first recall some of the steps
leading to control over the microtime domain as a compositional dimension, citing some
examples of multi-scale approaches deriving from this perspective.

The Edge

When computers were first introduced, the musical field was concerned only with composition at the level of macrotime: composing with sounds, with no attempt to compose the sounds
themselves. This holds true even in the case of early musique concrète, which basically
consisted of selecting recorded sounds and combining them by mixing and splicing.
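These two macrotime operations are easily sketched (a generic Python illustration, not a model of any particular studio): mixing sums recordings, and splicing butts them together with a short crossfade; neither touches the internal structure of the sounds.

```python
SR = 8000

def mix(a, b):
    """Mixing: sum two recordings sample by sample."""
    n = max(len(a), len(b))
    a = a + [0.0] * (n - len(a))
    b = b + [0.0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def splice(a, b, fade_ms=10):
    """Splicing: join two recordings with a short crossfade,
    like angled razor cuts on tape."""
    f = int(SR * fade_ms / 1000)
    out = a[:-f]
    for i in range(f):
        w = i / f
        out.append((1 - w) * a[len(a) - f + i] + w * b[i])
    return out + b[f:]

# Two stand-in "recordings" (constant test signals).
take1 = [0.5] * SR            # 1 s of one sound
take2 = [-0.25] * (SR // 2)   # 0.5 s of another

tape = splice(take1, take2)   # sounds placed one after the other
duet = mix(take1, take2)      # sounds superimposed
```

However elaborate the montage built from these operations, every sample of the sources passes through unchanged (up to summation), which is why such work remains composition with sounds rather than of sounds.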
Operations in the spectral domain were reduced to imprecise analog filtering and
transposition of tonal pitch by means of the variable-speed recorder, which never allows
the separation of the time and spectral domains, and only attains spectral redistributions
in a casual way. "Electronic music," as developed in the West German radio studio in
Cologne (Eimert and Stockhausen 1955), did have the ambition of composing the sound
material after the assumptions of parametric serialism, theoretically appropriate to be
transferred to the level of the "internal structure of the body," as Antoine Lavoisier would
say. However, the technique at hand, approximate because it was purely analog, in fact contradicted these assumptions. Analog modular synthesizers improved the user
interface, but were especially inconvenient due to their lack of memory. The control
operations possible with them were not supported by any composition theory; articulation
(which is mainly a matter of local and detailed definition of unary shapes) was not allowed
beyond the case of some simple (and yet difficult to quantify) inter-modulations. It was
only the development of digital synthesis, as pioneered by Max Mathews (1963, 1969), that
finally allowed composers to reach the level of microtime, that is, to have access to the
internal structure of sound. One of the first approaches to dynamic spectral modeling to
emerge from the Mathews digital synthesis system was developed by Jean-Claude Risset.
In this work, trumpet tones were analyzed/synthesized by means of additive clustering of
partials whose temporal behavior was represented by piecewise linear segments, i.e.,
articulated amplitude envelopes. Given the complexity of the temporal features embedded
in natural sounds, reproduction of all these features was an impossible task. Risset
therefore applied a data-reduction procedure rooted in perceptual judgment, what he called analysis by synthesis, reversing the usual order of these terms (Risset 1966, 1969,
1991; Risset and Wessel 1982). Beyond its success in imitating existing sounds, the
historical importance of the Risset model resides in the formulation of an articulation
technique at the microtime level, giving birth to a new approach for dealing with the syntax
of sound. The panoply of digital synthesis and processing methods that we have at our
disposal today is rooted in the foundations provided by Max Mathews and the first sonic-
syntactical experiences of Jean-Claude Risset. Global synthesis techniques such as
frequency modulation (Chowning 1973) and waveshaping (Arfib 1978; Lebrun 1979) share
these roots. Long considered only as formulae for driving synthesis processes (in a
non-analytical manner), they have recently been reconsidered as non-linear methods of
sound transformation, strongly linked to spectral analysis (Vaggione 1985; Beauchamp
and Horner 1992; Kronland-Martinet and Guillemain 1993). On the other hand, the
morphological approach derived from the qualitative (non-parametric) assumptions of
Pierre Schaeffer (1959, 1966) has crossed the time barrier, gaining access to microtime control since the development of Mathews's digital system. In the mid-1970s,
the Groupe de Recherches Musicales in Paris developed a digital studio whose goal was to transfer to algorithmic form the strategies developed previously with analog means (Maillard 1976). Specifically, the goal was to process natural sounds, carrying this
processing to "the internal structure of the bodies" in a way never envisaged with the
former analog techniques. That trend continued with the SYTER real-time processor (Allouis 1984) and the recent DSP-based tools (Teruggi 1995). We can also recall here the work of Denis Smalley on what he has called spectro-morphology
(Smalley 1986). I have myself employed parametric (elementary) and morphological
(figurative) strategies combined into the same compositional process to link features
belonging to different time domains. An early example of this is described in (Vaggione
1984), and some of the conditions allowing one to think of the numerical sound object as a
transparent category for sonic design are stated in (Vaggione 1991). Another approach
rooted in the idea of the sound object as a main category for representing musical signals is
being developed around the Kyma music language (Scaletti 1989; Scaletti and Hebel
1991). This is an important area of experience where the idea of sound object meets some
of the assumptions underlying the object-oriented programming paradigm (Pope 1991,
1994). The MAX block-diagram graphic language developed at IRCAM (Puckette 1988) was
strongly inspired by Mathews's family of programs. It has been used to define complex interactions using MIDI note processing (a typical macrotime protocol, as we noted), finally crossing the edge of microtime with the addition of signal processing objects
(Puckette 1991; Settel and Lippe 1994). This allows one to create control structures that include significant bridges between different time scales.

New Representations of Sound

Accessing the microtime domain has confronted composers with the necessity of using a
variety of sound representations. A survey of this subject must include the important work
of Dennis Gabor (1946, 1947), who was perhaps the first to propose a method of sound analysis derived from quantum physics. Gabor followed Norbert Wiener's propositions
of 1925 (see Wiener 1964) about the necessity of assuming the existence in the field of
sound of an uncertainty problem concerning the correlation between time and pitch
(similar to the one stated by Heisenberg, regarding the correlation between the velocity
and position of a given particle). From this, Gabor proposed to merge the two classic representations (the time-varying waveform and the static frequency-based Fourier
transform) into a single one, by means of concatenated short windows or "grains." These
grains do not have the same status as the MIDI note-grains discussed earlier, since they
constitute an analytical expansion into the microtime domain. Meanwhile, the engineering
community had been improving techniques for traditional Fourier analysis, attenuating its
static nature by taking many snapshots of a signal during its evolution. This technique
became known as the "short-time Fourier transform" (see e.g., Moore 1979). However, the
Gabor transform still remains conceptually innovative, because it presents a two-
dimensional space of description (Arfib 1991). This original paradigm, theoretically
explored by Iannis Xenakis (Xenakis 1971), has been taken as the starting point for
developing granular synthesis (Roads 1978), and, later, the wavelet transform (Kronland-
Martinet 1988). While the first granular-synthesis technique used a stochastic approach (Roads 1988; Truax 1988) and hence did not touch the problem of local frequency-time analysis and control (though this aspect was considered later; see Roads 1991), the wavelet transform offered a straightforward analytical orientation from the start. The main difference between
the wavelet transform and the original Gabor transform is that in the latter, the actual
changes are analyzed with a grain of unvarying size, whereas in the wavelet transform, the
grain (the analyzing wavelet) can follow these changes (this is why it is said to be a time-
scale transform). The wavelet analytic approach, while still in the early stages of its application to sound processing, is also interesting because it is being applied in other fields; for example, in modeling physical problems such as fully developed turbulence, and in analyzing multi-fractal formalisms (Arneodo 1995; Mallat 1995). Thus it contributes to extending the study of non-linear systems, where the problem of scaling is crucial. The
somewhat artificial attempts made to date to relate chaos theory to algorithmic music
production can find here a significant bridge between different levels of description of
time-varying sonic structures. It is to be stressed that all these new developments are in fact enriching the traditional Fourier paradigm, rather than replacing it. In
other words, they do not free us of the uncertainty problem concerning the correlation
between time and pitch, but rather give a larger framework in which to deal with it. Another
recent technique used to explicitly confront the basic acoustic dualism was developed by
Xavier Serra and Julius Smith (1990). They proposed a "spectral modeling synthesis"
approach, based on a combination of a deterministic and a stochastic decomposition. The
deterministic part included the representation of Fourier-like components (harmonic and
inharmonic) in terms of separate sinusoidal components evolving in time, and the
stochastic part provided what was not captured within the Fourier paradigm, namely, the
noise elements, often present in the attack portion, but also throughout the production of
a sound (think of the noise produced by a bow, or by the breath, etc.). The mention of these
latter elements leads us to recall the existence of a different approach to sound analysis and synthesis, one which cannot be characterized in terms of spectral modeling but must instead be identified as physical modeling. Pioneered by the work of Lejaren Hiller and Pierre
Ruiz (1971) and later expanded by Claude Cadoz and his colleagues (1984), today this
approach has a considerable following, with many systems attempting its development
(Smith 1992; Morrison and Adrien 1993; Cadoz et al. 1994). I regard physical modeling as a
field in itself, which seeks to model the source of a sound, and not its acoustic structure.
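This source-versus-spectrum distinction can be made concrete with a toy physical model. The sketch below is my own illustration, not one of the systems cited above: it implements the well-known Karplus-Strong plucked-string algorithm, which simulates the mechanics of the source (a delay line standing in for the string, plus a loss filter) and never specifies a spectrum directly.

```python
import random

def karplus_strong(frequency, duration, sample_rate=44100, damping=0.996):
    """Simulate a plucked string: a noise burst circulates in a delay
    line whose length sets the pitch; averaging adjacent samples models
    energy loss in the string (the 'physics' of the source)."""
    period = int(sample_rate / frequency)  # delay-line length in samples
    # The 'pluck': fill the delay line with random noise
    delay = [random.uniform(-1.0, 1.0) for _ in range(period)]
    out = []
    for n in range(int(duration * sample_rate)):
        current = delay[n % period]
        nxt = delay[(n + 1) % period]
        # Loss filter: average two neighbors and damp slightly
        delay[n % period] = damping * 0.5 * (current + nxt)
        out.append(current)
    return out

samples = karplus_strong(220.0, 1.0)  # one second of a 220 Hz 'string'
```

No spectral target appears anywhere in the code; the harmonics and their decay emerge from the simulated mechanics, which is precisely why such models call for separate analytic tools on the side of the sonic results.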
However, I think it gives a complementary and significant picture of sound as an
articulated phenomenon. Physical modeling can be effective in creating very interesting
sounds by extending and transforming the causal attributes of the original models. On the other hand, it lacks acoustical and perceptual analytic power on the side of the sonic results. Spectral
modeling brings us the tools for such analysis, even if we have to pay for this facility by
facing certain difficulties in dealing with typical time-domain problems. In spite of these
difficulties, spectral modeling has the advantage of its strong link with a long practice, that
of harmonic analysis, and hence the power to give an effective framework in which to
connect surface harmony (the tonal pitch domain) with timbre (the spectral frequency/time domain).

Multi-Scale Approaches: Beyond Microtime

In any case, it is quite
possible that, in years to come, the two main paradigms (spectral and physical modeling) will be increasingly developed into one comprehensive field of sound analysis, synthesis,
and transformation. To reach this goal, it is perhaps pertinent to introduce simultaneously
a third analytical field based on a hierarchic syntactic approach (Strawn 1980; Vaggione
1994). This approach can serve as a framework for articulating the different dimensions manipulated by the concurrent models, as well as for dealing with the many nonlinearities that
arise between microtime and macrotime structuring. Object-oriented software technology
can be utilized here to encapsulate features belonging to different time levels, making
them circulate in a unique, multi-layered, compositional network (Vaggione 1991).
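This encapsulation idea can be sketched in object-oriented terms. The classes below are purely hypothetical (the names Grain and Texture are my own, not drawn from any of the systems cited); they merely illustrate how a single macrotime operation can propagate down through encapsulated microtime objects.

```python
class Grain:
    """Microtime object: a windowed sinusoidal burst, milliseconds long."""
    def __init__(self, frequency, duration_ms):
        self.frequency = frequency
        self.duration_ms = duration_ms

class Texture:
    """Macrotime object: a cloud of grains spanning seconds.
    It exposes gesture-level parameters (density, center pitch) while
    encapsulating the microtime details of each grain."""
    def __init__(self, center_freq, density_per_s, duration_s):
        self.grains = [
            Grain(center_freq * (1 + 0.01 * i), 40)  # slight detune per grain
            for i in range(int(density_per_s * duration_s))
        ]

    def transpose(self, ratio):
        # A macrotime operation propagated to every microtime object
        for g in self.grains:
            g.frequency *= ratio

cloud = Texture(center_freq=440.0, density_per_s=50, duration_s=2.0)
cloud.transpose(0.5)  # one gesture edits a hundred grains at once
```

The point is not the specific classes but the way one message ("transpose") circulates through the layered network, touching every time scale it encapsulates.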
Moreover, several complementary approaches are in progress for dealing with
intermediary scales relating microtime and macrotime features, such as Larry Polansky's
morphological mutation functions (Polansky 1991, 1992), and Curtis Roads's pulsar
synthesis (Roads 1995). One can also cite, among others, some recent integrated systems, such as Common Music/Stella (Taube 1991, 1993) or the ISPW software (Lippe and Puckette 1991). These systems support different multi-scale approaches to composition, allowing a parallel articulation of different (and not always linearly related) time levels, defining specific types of interaction, and amplifying the space of the
composable. Having reached microtime, we can now project our findings to the whole
compositional process, covering all possible time levels that can be interactively defined
and articulated. This situation, as Otto Laske says (Laske 1991a, but see also Laske 1991b)
"paves the way for musical software that not only supports creative work on the microtime
level, but also allows for acquiring empirical knowledge about a composer's work at that level, with an ensuing benefit for defining intelligent sound tools, and for a more sophisticated theory of sonic design."

TRANSCENDENT MACHINE: AN ANALYSIS OF ÉLIANE RADIGUE'S ARP 2500 SYNTHESIZER MUSIC

Daniel Alexander Silliman

A DISSERTATION PRESENTED TO THE FACULTY OF PRINCETON UNIVERSITY IN CANDIDACY FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

RECOMMENDED FOR ACCEPTANCE BY THE DEPARTMENT OF MUSIC

Adviser: Dmitri Tymoczko

May 2023

© Copyright by Daniel Alexander Silliman, 2023. All rights reserved.
Abstract

From 1971 to 2000, the French composer Éliane Radigue (b. 1932) recorded a
singular body of work using the ARP 2500 analog modular synthesizer, an exceedingly rare
electronic musical instrument. Drawing on interviews, archival research, spectral analysis,
and my own expertise with modular synthesis, this dissertation offers an account of
Radigue’s method of composing and recording the music she made with her ARP 2500. I
argue that while Radigue worked mostly alone on these compositions, the working
relationship she had with her ARP 2500 takes the form of an essentially collaborative,
intersubjective process. Along the way, I locate Radigue’s aesthetic thought in a broader
context of French and American (post)modernism. I also provide an in-depth discussion of
what is perhaps Radigue’s best known electronic work, Kyema (1988), a composition for
ARP 2500 synthesizer and traditional Tibetan wind instruments, which comprises the first
movement of Trilogie de la Mort (1988-1993).

Acknowledgements

To my adviser Dr.
Dmitri Tymoczko, I am grateful to have shared for the better part of a decade a most
productive dialogue on music and philosophy. Our conversations have challenged my
convictions, compelled me to craft stronger arguments, and played a pivotal role in my
development as a writer and a musician. I am grateful to my dissertation’s second reader,
Dr. Gavin Steingo, who provided thoughtful, constructive critiques of my dissertation. With Dr. Steingo's comments I have been able to meaningfully expand the scope of my arguments,
which led to marked improvements of the present text. I offer my gratitude to the faculty
and staff of the Department of Music at Princeton University, in particular Dr. Donnacha
Dennehy, Dr. Steven Mackey, Dr. Juri Seo, Mr. Gregory Smith, Dr. Jeff Snyder, Dr. Dan
Trueman, Dr. Dmitri Tymoczko, Mr. Andrés Villalta, and Dr. Barbara White, under whose
leadership, trust, and support I have grown as an artist, scholar, teacher, and individual.
There’s truly no place like our Department, and I will always be grateful for the years that I
spent there. I am in the debt of Mr. François J. Bonnet at the Groupe de Recherches
Musicales in Paris, who has been a most valuable interlocutor. It’s difficult to overstate
how essential Mr. Bonnet’s support has been, from answering technical questions on the
most minute scale all the way to facilitating dialogue with Radigue herself. To Mr. Charles
Curtis, I am grateful for early support and interest in my project, highly productive
conversations, and a priceless scan of an early Radigue score. It was through those early
dialogues with Mr. Curtis that I began to see the potential in my research. I doubt I’d have
gotten past the first page without them. Many images that appear in this dissertation would
not be here without the kind support of their owners. I am grateful to Ms. Dina Pearlman-Ifil
and the Alan R. Pearlman Foundation for their permission to use ARP 2500 promotional
and technical materials, and must express my admiration for their commitment to
stewarding Mr. Pearlman’s legacy for future generations of musicians and scholars. I also
gladly offer my thanks to the Fondation A.R.M.A.N. for their permission to use scans of
Radigue’s ‘patch scores’; these images greatly improve the text. At the invitation of Dr.
William F. Dougherty and Dr. Luke Nickel, I contributed a segment of my research for
Contemporary Music Review, which greatly motivated improvement of the text, due in no
small part to that singular pressure to appear competent under the scrutiny of experts.
Early drafts were also permitted brief excursions away from my desk thanks to the support
of some key individuals: Dr. Benjamin Piekut and the students of his Fall ’21 minimalism
seminar at Cornell University offered thoughtful comments and questions that advanced
my project apace; other early versions of the text received encouragement from Ms. Julie
Zhu and GP Klangtest. Ms. Jenny Beck provided critical info on another intrepid synthesist,
Laurie Spiegel; Mr. Lawrence Kumpf filled in some of the blanks on Blank Forms' 2019
Radigue retrospective; and Mr. John Brien offered some important insights on IMPREC’s
contribution to documenting Radigue’s compositions on compact disc. And not least to
family and friends near and far, I offer my deepest gratitude for their love and care.

Table of Contents

Abstract
Acknowledgements
Table of Contents
Introduction
Chapter 1: Éliane and Jules: The beginnings of a partnership
1.1 Radigue at a glance
1.2 Intersubjectivity
1.3 A brief account of modular synthesis
1.4 The ARP 2500 in general, and Jules in particular
Chapter 2: Presence and absence: notes on Radigue's ARP 2500 synthesizer technique
2.1 What, how, and why?
2.2 Don't touch that dial!
2.3 The game of partials
2.4 Sustained tones with a certain roughness
2.5 Close encounters with a third kind of feedback
2.6 The recording process
2.7 The woman behind the curtain
Chapter 3: Being and nonbeing: an analysis of Kyema (1988)
3.1 Kyema from afar
3.2 In the beginning
3.3 Timbre as melody
3.4 Harmony as experience
3.5 Into the breaks
3.6 Crossing and returning
Epilogue: The limits of an avant-garde
Bibliography
Partial Radigue Discography
Introduction

How did Éliane Radigue make her synthesizer music? We are fortunate to know as much as we do, yet there is still much that is lost to time, memory, and natural disasters.[1] What we can piece together yields an impression of a remarkably deliberate and consistent artist, one whose working method appears to have varied little in the period 1971-2000. During that time, Radigue produced, with what she later called "an ant's obstinacy" (avec une obstination de fourmi),[2] a substantial body of unusually still and
slowly moving music, mostly using her ARP 2500 modular synthesizer, an analog mixing
console, three reel-to-reel tape machines, and a stopwatch (Holterbach 2013, 30). In
Chapter 1, “Éliane and Jules”, I will introduce the two main characters of this dissertation:
the French composer Éliane Radigue (b. 1932), and her instrument, an ARP 2500 analog
modular synthesizer that she named Jules. While biographical sketches of Radigue are
generally quite well represented in existing literature, and particularly in Holterbach (2013)
and Eckhardt (2019), I would like to articulate further some salient connections between
Radigue’s aesthetic thought and her studio practice. Specifically, I will give an overview of
Radigue’s aesthetics as they apply to her synthesizer music. I will also describe Radigue’s
ARP 2500, detailing some of its important functions, and sketch out some preliminary
technical details that will be employed throughout this dissertation. While general
knowledge of modular synthesizers is more widespread than ever before, I would like for
this material to be useful for theorists and musicologists who don't have a technical background in electronic music.

[1] Bonnet: "Most of [Radigue's] 'patch scores' have been lost. They've been shared for an exhibition, a long time ago, and the gallery's basement got flooded" (email correspondence with the author, January 8, 2021).
[2] Unless otherwise noted, all translations from the French are my own.

I would also like to reach people with
generally nonmusical technology backgrounds who might otherwise be interested in
unusual music. With those goals in view, Chapter 1 will establish some technical terms
that will appear throughout the text; locate the ARP 2500 in a rapidly-changing history of
electronic instrument design; and address Radigue’s interest in acquiring the ARP 2500 in
particular, ultimately providing a context within which we can evaluate her singular
approach to using this instrument. Throughout this initial discussion, I will suggest that
Radigue demonstrates a commitment to fostering intersubjectivity in her creative practice,
a notion that will also frame some of my arguments in the following chapters. In Chapter 2,
“Presence and Absence”, I will dig deeper into Radigue’s synthesizer practice, from
composing to patching to recording, treating each of these technical aspects of her
process as a ‘case study’ in order to develop further a duophonic theme of self-affirmation
and self-abnegation. Analyzing sketches, recordings, and technical information about her
studio practice, I will fill in some gaps in currently available accounts, such as those
offered by Rodgers (2010a), Holterbach (2013), and Eckhardt (2019), outlining the process
she undertook to commit her synthesizer compositions to tape. Through this endeavor, I
hope to evoke in the reader’s imagination a vivid portrait of Radigue and Jules at work. I see
this articulation of the body that was always there as one task of electronic music
historians in particular. This is not least because so much about the body’s presence in
electronic music is often obscured by a lack of familiarity with the connections between
haptic actions and sonic outcomes. The problem is rather well framed by Deniz Peters
(2012, 19), who writes “[electronic] music becomes an interrogation of human presence or
absence by the very difficulty that composing this presence in fact entails". Along those
lines, Chapter 2 offers an interrogation of Radigue’s simultaneous presence and absence
in her synthesizer music. In Chapter 3, “Being and Nonbeing”, I will discuss in greater
detail what is perhaps one of Radigue’s best-known works, 1988’s Kyema from the cycle
Trilogie de la Mort. While I examine passages of other Radigue works throughout the
dissertation, Chapter 3 concerns a deep and prolonged examination of a single
composition, drawing on the technical and philosophical precepts outlined in Chapters 1
and 2, and applying them to Kyema on both small and large scales. In the preceding
chapters, contradictions and points of tension arise with respect to Radigue’s synthesizer
practice. In a brief conclusion, “Epilogue: the limits of an avant-garde”, I will explore in
greater depth some of these paradoxical aspects, and argue for their integration in future
critical studies of Radigue’s electronic music. 3 Chapter 1 Éliane and Jules: the beginnings
of a partnership 1.1 Radigue at a glance One of the most important creative relationships in
Éliane Radigue’s life is the one that she shared with an ARP 2500 modular synthesizer that
she named “Jules”. Over the course of nearly three decades—1971 to 2000—they
produced together a substantial body of compositions recorded onto magnetic tape.
These works are the main object of study in this dissertation; however, to meaningfully
discuss this music, we must first deepen our understanding of the two subjects who
created it: Éliane and Jules. The life and work of the French composer Éliane Radigue (b.
1932) is generally well documented. Notable publications include the book-length
interview she produced with Julia Eckhardt in 2019’s Intermediary spaces/Espaces
intermédiaires, an invaluable text that has made possible much of the work I carry out
here. Holterbach's 2013 biography of Radigue in the INA/GRM book Éliane Radigue:
Portraits Polychromes is excellent, tracing the composer’s beginnings as an assistant in
the musique concrète studios of the 1950s, all the way up to her recent turn towards
composing acoustic chamber music. With these publications at our disposal, it would be
somewhat redundant to attempt a full sketch of Radigue’s life here. Instead, I will draw on
these and other biographical texts, as well as some accounts by Radigue’s close
collaborators, to articulate proclivities in her aesthetic thought as they pertain to the work
she made with Jules. I will also offer an account of “intersubjectivity”, which informs the
arguments I make in this and subsequent chapters. Though we are concerned with
different periods in Radigue’s creative life, in a general sense I am following the model laid
out by William F. Dougherty in his 2021 dissertation Imagining Together: Éliane Radigue’s
Collaborative Creative Process. There, Dougherty uses interviews, writings, and archival
materials to inform a critical discussion of the acoustic chamber music that Radigue
began composing first with the cellist Charles Curtis. The result of that initial
collaboration, Naldjorlak I (2005), would mark a decisive shift in Radigue’s career, in which
she would move away from composing electronic works with her ARP 2500 and towards
the exclusive production of acoustic chamber music for classically-trained musicians.
Looking back, one could be tempted to say that Radigue’s twenty-first century turn toward
acoustic chamber music represents a culmination of a way of working: an actualization of
desires and ambitions never fully realized when she was working with Jules. A recollection
Radigue offers to Eckhardt strengthens this impression, in which she refers to her
electronic period as a “detour” (Radigue and Eckhardt 2019, 36). In her 2008 theoretical
text “L’mystérieuse puissance de l’infime”, published in English in 2009 as “The
mysterious power of the infinitesimal”, Radigue similarly looks back on the electronic
pieces as preparations for her work with acoustic instrumentalists (2009, 49). This all
makes good sense. After all, the course of one’s life surely makes more sense in
retrospect. And if we were to now imagine the composer no longer toiling alone in her Paris
apartment to produce electronic works in relative obscurity (for she is certainly better known these days, if the 2019-20 and 2022 retrospectives and festival appearances throughout Europe and North America are any indication), then we might similarly dismiss
these electronic works as mere stepping stones towards her present-day notoriety. I
have no intention of questioning the satisfaction Radigue now takes in reflecting upon the
path her creative life has taken. That satisfaction is amply evident in her comment to Julia
Eckhardt on this very matter when she says, “Now, I’m rejoicing!” (2019, 53). At the same
time, in undertaking any retrospective analysis of a life—whether one’s own or another’s—
there is always the risk of producing an all-too-teleological account which ignores the
contingencies of life at its various moments. The contingencies I have in mind relate
specifically to Radigue’s working partnership with Jules the Synthesizer, for although we
are ultimately speaking about a partnership between a person and a machine, I think we’d
be remiss to ignore the intersubjective dynamics of their working relationship.

1.2 Intersubjectivity

An intentional, curated intersubjectivity was a major facet of Radigue's creative practice, and well before her present period of collaboratively devised acoustic chamber music. But what do I mean by "intersubjectivity"? Some elaboration and a brief
review of the literature on this term will be necessary; however, at root, I am talking about a
state of affairs between two or more beings in which the subjective agency of each and
every other is mutually acknowledged. Because “intersubjectivity” names many different
kinds of interrelation, it can be difficult to state when and where the concept first
appeared. The meaning of intersubjectivity as I intend to use it here has its origins in the
work of the German phenomenologist Edmund Husserl. My selection of Husserl as a
starting point is not arbitrary. As Brian Kane notes, it was Husserl, more than Merleau-
Ponty or any other phenomenologist, who exerted the greatest philosophical influence on
Pierre Schaeffer in his development of a theory of concrete music (musique concrète)
(2014, 18-23), while Schaeffer himself had a most formative influence on Radigue: upon
first hearing his Étude aux chemins de fer on a radio broadcast, she instantly felt a deep
rapport with him. “It was my eureka moment”, Radigue recalls to Julia Eckhardt, “I realized
that my way of listening to the planes [flying overhead in Nice] had a similarity with this
[concrete] music” (2019, 66). She sought Schaeffer out for an apprenticeship in 1955
(2019, 66), and at the conclusion of their three years of working together in an official
capacity, she would visit him every time she traveled to Paris (2019, 68); strikingly, Radigue
would later give lectures on musique concrète in Nice (2019, 70). To say that Radigue was
familiar with Schaeffer’s theories would be an understatement, so let us take a moment to
understand one of the philosophers who most informed them. In the fifth and last of his
Cartesian Meditations (1960), Husserl contends that the recognition of other subjects as
subjects takes place through empathy, a process on which Christian Beyer elaborates
when he writes that “intersubjective experience [according to Husserl] is empathic
experience; it occurs in the course of our conscious attribution of intentional acts to other
subjects, in the course of which we put ourselves into the other one’s shoes” (2022,
emphasis mine). Husserl ultimately uses “intersubjectivity” to support a theory about a
presupposed, objective reality, the experience of which is shared by multiple subjects
acting intentionally. I find Husserl’s rooting of intersubjectivity in what Beyer (2022) later
calls “acts of empathy” to be intuitively appealing, and a good basis from which to develop
a working definition of the concept. Frustratingly, Husserl’s account of intersubjectivity is
incomplete: we don’t seem to have a very good idea of how these acts of empathy can give
rise to a sense of a shared world. At the same time, while there are various means by which
Radigue would attribute "intentional acts" to Jules the Synthesizer over the course of
their work together, these empathic attributions of intent were always mediated by an
overarching sense of control and constraint imposed on both Radigue and her instrument.
This more constrictive aspect of intersubjectivity, and the mechanisms by which
intersubjectivity can take place at all, come more clearly into view when we consider the
revision of the term as a psychoanalytic concept in the post-war period. Jessica Benjamin
credits child psychologist Colwyn Trevarthen (1980) with first bringing intersubjectivity out of
philosophy, and she builds upon his arguments and those of Daniel Stern (1983) to
produce a definition of the term: “the intersubjective view, as distinguished from the
intrapsychic, refers to what happens in the field of self and other. Whereas the
intrapsychic perspective conceives of the person as a discrete unit with a complex internal
structure, intersubjective theory describes capacities that emerge in the interaction
between self and others” (1988, 20, emphasis mine). On the one hand, we have a view of
the person as somehow ultimately separate from the world (what Benjamin calls “the
intrapsychic perspective”); on the other, a description of selfhood which is largely
contingent on one’s relation to others.[3] What is the mechanism of this interrelation?
Benjamin elaborates: “intersubjectivity theory postulates that the other must be
recognized as another subject in order for the self to fully experience his or her subjectivity
in the other's presence. This means, first, that we have a need for recognition and second,
that we have a capacity to recognize others in return—mutual recognition” (1990, 35,
emphasis mine).

[3] It’s been pointed out (for example in Cavell 1993) that this conflict between “internalist” and “externalist” views of mind is a foundational philosophical tension in psychoanalytic theory and practice.

Something akin to Husserl’s “empathic
experience” is clearly present in Benjamin’s mechanism of “mutual recognition”, but as
Benjamin notes, there’s a problem here: “recognition is a capacity of individual
development that is only unevenly realized. In a sense, the point of a relational
psychoanalysis is to explain this fact” (1990, 35, emphasis mine). It would seem, then, that
the full and ongoing recognition of an other as a complex, autonomous subject seems ever
elusive and imperfect. Why might this capacity for the recognition of another’s subjectivity
be “unevenly realized”? Though Benjamin credits Trevarthen with first bringing
intersubjectivity out of philosophy, the contributions of the French psychoanalyst Jacques
Lacan to this matter—which technically precede Trevarthen by some years—may help us
to understand some of the shortcomings of a supposedly mutual recognition. This is not
least because Benjamin’s ideas have deep resonance with Lacan’s argument that
subjecthood develops as an essentially interpersonal process. Not without controversy,
Lacan describes an initial, “primordial symbolic recognition” that gives rise to an
“imaginary struggle for power”, in which subjects “enter, so to speak, into a symbolic pact
that defines them as ‘slave’ and ‘master’” (Lacan 1977; Vanheule et al. 2003, 323-
324). Lacan’s reading of intersubjectivity necessarily implies unequal power dynamics of
repression and control. Although a discussion about Lacan’s specific invocation of the
symbols of ‘master’ and ‘slave’ is perhaps better suited to other domains of inquiry, his
general argument goes a long way toward explaining how the presence of arbitrary constraints—potentially lying outside conscious awareness—can complicate what might have otherwise been a fully empathic, intersubjective encounter. There remains much to be
said about the specific ways in which the constraint of self and other figure into Radigue’s
compositional process, but for now, we seem to have arrived at a serviceable working
definition of “intersubjectivity”: first, a subject shares space with others (socially,
spatiotemporally, etc.); then, through the attribution of intentionality to others (Husserl’s
“empathic experience”), a subject comes to recognize these others as subjects in
themselves, each possessed of their own complex inner world (Benjamin’s “mutual
recognition”); however, this process is inherently imperfect (“unevenly realized”), and
there may always be some failure to adequately understand the other as a fully
autonomous subject; this act of mutual symbolic recognition may reduce the other to
a state of objecthood, which can then be put to some arbitrary use in service of the
symbolizer (Lacan’s “imaginary struggle for power”). So, how specifically does our idea of
intersubjectivity as mutual symbolic recognition emerge in Radigue’s working process? I
would suggest there are three main avenues: for one, there is the attention paid to the
shared subjectivity between the composer and the audience, evidenced by Radigue’s
commitment to perceptually ambiguous, spare, and underdetermined musical materials.
In this context, Radigue evinces a remarkable attention to the autonomy of the subjects
which comprise her audience, an attitude that would be further amplified by nontraditional
recital and presentation formats: consider for example Vice Versa, etc… (1970), which was
distributed not in concert but as a limited edition set of tape reels for private use, to be
played in “any combination of two tracks, in one way or another, on several tape recorders,
ad libitum” (Holterbach, trans. Guitton 2021). Or consider Labyrinthe Sonore (1970), a
proposed sound installation involving an actual physical labyrinth through which the
audience could freely move (Radigue and Eckhardt 2019, 97); and
Transamorem/Transmortem (1974), which specified a carpeted room in which four
copies of the tape would be diffused from every corner, each listener composing their own
version of the work through movements of the head (Radigue 2011). We find yet another
means by which intersubjectivity emerges in Radigue’s exploration of the connection
between composer and composition, whereby sound—in the composer’s imagination—
takes on the properties of autonomous living beings worthy of respect (Radigue and
Eckhardt 2019, 32). Third and finally, there is the connection between composer and
instrument, demonstrated by a particular recording practice and a way of relating to her
ARP 2500 modular synthesizer. One way of understanding this dissertation, then, is as a
discussion of these various forms of intersubjectivity as they take place in and around
Radigue’s electronic works. In what follows, I will discuss Radigue’s intersubjective
conception of sounds as forms of life, and introduce the complex, entangled relationship
between the composer and her instrument.

1.3 A brief account of modular synthesis

We
can situate Radigue’s collaboration with Jules within a history of synthesis and electronic
music as articulated by Tara Rodgers in her 2010 PhD thesis, Synthesizing Sound:
Metaphor in Audio-Technical Discourse and Synthesis History. In that text, Rodgers
introduces the idea of an “audio-technical discourse”, a suite of metaphorical precepts
first developed in the mid-19th century, which would go on to influence the design,
manufacture, and marketing of electronic musical instruments—as well as the artists who
used them—over the next one hundred and fifty years (2010b). As the reader may know,
modular synths such as Radigue’s ARP 2500 are a variety of synthesizer, so named
because they contain modules that typically perform dedicated functions. Generally, each
module can be controlled with potentiometers on its faceplate, typically taking the form of
knobs or sliders, which adjust the module’s behavior; meanwhile, jacks or other
mechanical points of connection facilitate communication with other modules in the
system. This realtime management of signal flow defines the modular synthesizer in
particular, affording the synthesist the ability to control various parameters of their
instrument’s behavior in a conceptually unified manner. Permit me this metaphor: as a
pipe organist directs through their instrument the flow of air under pressure to produce
various timbres, the modular synthesist directs through their instrument the flow of
electric pressure, also known as voltage, to produce unique instrumental configurations
known as “patches”. Rodgers notes that the dedicated functions of synthesizer modules
have generally corresponded strongly to Hermann von Helmholtz’s tripartite analytical
classification of sound, laid out in his treatise On the Sensations of Tone, first published in
1863 and translated into English in 1875 by Alexander Ellis. In Helmholtz’s original model,
sound is composed of three constituent elements: frequency, timbre, and loudness. In
theory, each of these elements can be specified within a conceptually unified system in
order to describe individuated, particular sounds. Joseph Fourier—who in 1807 first
posited that any periodic function could be described as the sum of infinitely many
sinusoids of various frequencies, amplitudes, and phases— provided a mathematically
rigorous foundation for the development of Helmholtz’s theories, giving rise to the latter’s
notion of sound as an aggregate of specifiable, constituent elements (Helmholtz and Ellis 1895, 34). (Although Fourier did not publicly publish his results until 1822’s Théorie analytique de la chaleur, he had presented his work on heat propagation and trigonometric expansions of periodic functions to Lagrange and Laplace in December of 1807; see Herivel 1975, 153-154.) For our present discussion—Radigue’s approach to working
with the ARP 2500—I will invoke two constitutive ideas from Rodgers’ audio-technical
discourse: 1) anthropomorphizing electronic sounds as “differentiated, lively individuals”;
and 2) subjecting those individuals to control by specifying in various doses the three
salient parameters of frequency, timbre, and loudness (Rodgers 2011). We will soon see
how this dual concept—that of thinking of sounds as alive and autonomous, while
simultaneously subjecting them to precise behavioral constraints—animates much of
Radigue’s thought and her compositional technique. In Chapter 2, I will consider these
same precepts alongside Radigue’s demonstrated commitment to a discipline of self-
abnegation. Taken together, an impression emerges of the artist as an unseen supervisor,
subtly directing the course of events in a given composition, while leaving minimal traces
of her own physical involvement through a technologically-occluded form of self-
expression. While each module within the prototypical modular synthesizer is traditionally
specialized to address one or more of the three Helmholtzian parameters in some way—
for instance oscillators and frequency, filters and timbre, amplifiers and loudness—in
Radigue’s synthesizer practice, these distinct categories, as well as a linear conception of
a synthesizer’s inputs and outputs, are typically blended into complex manifolds of
interrelation that defy a neat, top-down understanding. (In Chapter 3’s analysis of the
composition Kyema (1988), we will also see how, in a similar fashion, Radigue blurs
together otherwise analytically distinct elements of European classical music theory—
namely melody, harmony, and timbre.) When we consider Radigue’s approach to working
with Jules, we will find that these complex, interlocking systems characterize her
synthesizer technique generally. Within this ecological sense of the synthesizer’s signal flow—a living system bound by rules of relation—I hear echoes of Rodgers’ audio-technical
discourse in Radigue’s words when she says to Julia Eckhardt, “I considered sound as an
autonomous life that needed to be respected” (Radigue and Eckhardt 2019, 32); however,
Radigue’s respect was not unconditional: in works bearing her name, sounds are arbitrarily
constrained to behave in specific ways, shedding light on an important internal conflict
between acceptance and control that, as we will see, is deeply characteristic of her
aesthetic thought. Furthermore, by constructing out of the ARP 2500 synthesizer
interdependent networks of action and reaction, I think we can identify in Radigue a
commitment to bestowing or at the very least acknowledging the inner life of her
instrument—as she remarks to Julia Eckhardt, “we tamed each other” (2019, 115).
Radigue’s language around taming and control should bring to mind our concept of an
“unevenly realized” intersubjectivity, one in which each subject takes on a role of the
superior and/or the inferior in a scheme of mutual symbolic recognition. Crucially,
however, there is a reciprocity of roles here: Radigue acts in deference to Jules, and vice
versa. This reciprocity recalls Benjamin’s conception of mutual recognition as, finally,
“reflexive. . . it includes not only the other’s confirming response, but also how we find
ourselves in that response. We recognize ourselves in the other, and we even recognize
ourselves in inanimate things” (1988, 21, emphasis mine). Prior to meeting Jules myself, I
was all-too-inclined to take Radigue’s anthropomorphizing gestures towards her synth as
mere jest or gentle self-deprecation. And to be sure, Radigue is possessed of a love for
word-play and humor that I think is too easily ignored in the face of her rather austere,
serious compositions. Nonetheless, I think we would be mistaken to ascribe only irony to
Radigue’s comments about her instrument. An opportunity I had to meet and study
Jules myself concretized that very impression. During the visit, I was informed by Jules’
caretakers of a strong protectiveness that Radigue feels towards her instrument: for
although Jules is no longer kept in her direct care, she has entrusted the ARP to an entity
whose identity I am not presently permitted to disclose. After my extraordinary visit, during
which I carefully studied and played her instrument, I found myself possessed of a
newfound appreciation for the depth of feeling and concern that Radigue extends towards
Jules, no less than that which a human being would extend to another. Before this meeting,
I’d intuited that the music she made with Jules was essentially co-emergent from the
unique human-machine relationship she had devised with her synthesizer. My visit
strengthened this sense that Radigue’s working process evinced a kind of distributed
cognition, which concerns a transference of mind into objects in the external world (Clark
1998). As my hands followed the movements Radigue herself made to produce this
unusually still and slowly moving music, I recalled Radigue’s comments that Jules has a
“voice” all his own (Radigue and Eckhardt 2019, 114).

1.4 The ARP 2500 in general, and Jules in particular

While I eventually heard Jules’ voice for myself, I ought to be describing
how Éliane met Jules, not how I met him—so allow me to paint that picture. In 1970,
Radigue capped off her experiments with tape machine feedback and
microphone/loudspeaker feedback with a suite of pieces summarizing her home-brewed
techniques in that domain, Opus 17. With the completion of this rather substantial project,
Radigue was on the hunt for something new. In New York City, she sampled a few
modular synthesizers—then cutting-edge tools for music composition— including the
Buchla 100, the Putney (a.k.a. EMS) Synthi, and the ARP 2500. In 1971, she would buy an
ARP 2500, serial number 71001, and spirit it away to Paris (Radigue and Eckhardt 2019,
115). Why this instrument? By Radigue’s own account, it came down to how it felt and how
it sounded: familiar metrics for anyone trying out a new musical instrument. Elaborating
further, Radigue’s longtime assistant and biographer Emmanuel Holterbach notes the ARP
2500’s apparent ease of use with respect to its unique pin matrix patching interface, as
well as its singular sonic qualities, which offered to Radigue’s ears the most graceful
harmonics of any of the synthesizers she’d tried (Holterbach 2013, 30). Most prior accounts
are content to leave it at that, but I want to dig deeper into what makes the ARP 2500 a
unique instrument, with a view towards ultimately understanding how Jules’ design and
sound influenced Radigue’s music-making. The ARP 2500, completed in 1970 by Alan R.
Pearlman with circuit designs by Dennis P. Colin and additional contributions by Gerald
Shapiro, was notable in that it was a large format modular that made use of a pin matrix
patching system, as opposed to the jack and patch cord format of Buchla and Moog (Pinch
and Trocco 2004, 257-259; Colin 1971, 927). To briefly summarize the idea of pin matrix
patching: each column of the sliding pin matrix corresponds to a particular input or output
on a particular module. By sliding a given pin onto a particular row, one can route these
inputs and outputs to specific destinations in the synthesizer. The ARP 2500 cabinet itself
is laid out with a single row of modules and two sliding pin matrices, one above the
modules and one below (Fig. 1.1); Radigue herself generally preferred to make all of her
patches using the matrix below the modules (Radigue and Eckhardt 2019, 116), as the
upper matrix was optimized for use with a traditional keyboard controller, a device which
Radigue famously left behind when spiriting Jules away to Paris (Holterbach 2013, 30).
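To make the matrix scheme concrete, here is a toy Python model of pin-matrix routing. The module names, signal values, and data layout are illustrative assumptions of mine, not ARP’s circuitry; the model simply treats each matrix row as a junction that averages the outputs pinned to it and delivers the result to every pinned input.

```python
# Toy model of pin-matrix routing (module names and values are hypothetical).
# Each matrix row acts as a junction: it averages every module output
# pinned to it and delivers that mix to every module input pinned to it.
outputs = {"osc1": 0.8, "osc2": -0.2, "noise": 0.1}  # instantaneous levels

pins = {  # row number -> which output/input columns carry a pin
    0: {"sources": ["osc1", "osc2"], "destinations": ["filter_in"]},
    1: {"sources": ["osc1"], "destinations": ["vca_cv", "ringmod_in"]},
}

def route(outputs, pins):
    """Return the value each pinned input receives from its row."""
    delivered = {}
    for row in pins.values():
        mix = sum(outputs[src] for src in row["sources"]) / len(row["sources"])
        for dest in row["destinations"]:
            delivered[dest] = mix
    return delivered

signals = route(outputs, pins)
# osc1 is pinned into two rows at once: one output feeding many inputs.
assert signals["vca_cv"] == signals["ringmod_in"] == 0.8
# Row 0 mixes osc1 and osc2 on the way to the filter: (0.8 - 0.2) / 2.
assert abs(signals["filter_in"] - 0.3) < 1e-9
```

The same fan-out on a cord-patched synthesizer would require stacked or split cables—the “spaghetti” discussed later in this section—whereas on the 2500 it costs only one pin per row.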
Ergonomically, the pin matrix patching system confers a few benefits over patch cords,
some of which had direct repercussions for Radigue’s music-making. One such advantage
of the system is that a user could very easily route one output to multiple inputs, as well as
freely combine multiple outputs on their way to the same input. While the ARP system did
have dedicated mixing functions on several of its modules, each row of the pin matrix
essentially functions as its own sub-mixer, averaging the instantaneous amplitude of all
signals in that row before passing them on to their next destination, no cables required
(ARP 2500 Owner’s Manual 1970, 21). When looking through the patch drawings that
survive, one can see that in a typical Radigue ARP 2500 patch, a signal was often widely
distributed throughout the system: the same oscillator’s output may be routed to several destinations at once, while also acting as a sound source in itself (Fig. 1.2).

Fig. 1.1 Detail from a sample ARP 2500 modular system. Note the pin matrices above and below the middle row of modules. (Image courtesy of the Alan R. Pearlman Foundation.)

This
blurring between roles—audio and control signals—characterizes Radigue’s patching
technique in general, yielding a deep interdependence between modules in a system. That
interdependence would naturally generate a significant degree of nonlinearity, one in
which small changes to a patch’s parameters could yield disproportionately large changes
in timbral, rhythmic, or harmonic content. Although Radigue’s synth work prefigures the
concept by some decades, here I want to introduce synthmaker and philosopher Peter
Blasser’s notion of the “form-flow synthesizer” as a comparative framework by which we
can understand Radigue’s relationship to Jules. Building on an analysis of historical
synthesizers such as the Moog, Buchla 100, and Serge Modular, Blasser writes that in
contrast to these devices, the design of a “form-flow synthesizer seeks to explode the fixed
organs—timing, pitch, timbre—of the embodied signal and instead create a synthesizer without organs. Like Deleuze’s ‘Body without Organs,’ the fixed functions of the historical synthesizer become temporary and contingent, neither input nor output, nor inside or outside. The outside of the embodied signal—manifesting its internal rhythms and pitch signifiers—can convolute such that its structural pulsing, its internal form, becomes heard within the flow of the music” (Blasser 2015, 31-32).

Fig. 1.2 From 7th Birth (1971). Oscillator 1, producing a sine wave at about 8000 Hz, is routed to no fewer than five destinations: the FM input of Oscillator 3; the audio input of a ring modulator; the control input of an amplifier; the audio input of one of the multimode filters; and the control input for modulating the resonance of another filter. (Image courtesy of La Fondation A.R.M.A.N.)

Blasser
provocatively suggests a concept of modular synthesizer design that breaks away from the
top-down, analytical concept of sound as espoused by Fourier, Helmholtz, and their
intellectual progeny. And though she precedes Blasser, Radigue’s approach to the
modular synthesizer surely embraces this nonhierarchical spirit; and yet, crucially, she
abandons neither organizational musical structures nor a desire to regulate, albeit
indirectly, the flow of events within a given composition. By contrast, Blasser, a musician in his own right, abandons the notion of self-expression through musical performance entirely, instead channeling his output into creating instruments for other people to play, in order to express what might be a more genuinely anarchic ideal of sonic expression. For
Radigue, the ARP’s atypical matrix patching interface would facilitate the creation of highly
complex systems within her synthesizer, over which she could execute precise haptic
interventions to produce infinitesimally gradual timbral variations. I can show this idea with
a basic example, not drawn from any particular piece, but which is generally representative
of her patching technique (Fig. 1.3). Here, the sine wave outputs of two closely tuned
oscillators at 100 Hz and 101 Hz are mixed together; they are also ring modulated,
producing new sinusoidal partials with frequencies that are the sums and differences of
the two inputs’ original frequencies. (Though the excerpt quoted above may suggest that Blasser credits Gilles Deleuze with originating the concept, “body without organs” was first introduced by Antonin Artaud in his 1947 play To Have Done With the Judgement of God, before later being adapted by Deleuze in The Logic of Sense (1969). A post on Blasser’s Instagram page eloquently summarizes the idea: [Link])
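The sum-and-difference arithmetic of ring modulation is easy to verify numerically. The following sketch—a Python check of the trigonometric product-to-sum identity, not anything drawn from a Radigue patch—multiplies the two closely tuned sines and confirms that the product equals equal-amplitude partials at the difference (1 Hz) and sum (201 Hz) frequencies.

```python
import math

F1, F2 = 100.0, 101.0  # the two closely tuned oscillator frequencies (Hz)
SR = 48000             # sample rate used for the numeric check

def ring_mod(t):
    """Ring modulation: the pointwise product of the two sine waves."""
    return math.sin(2 * math.pi * F1 * t) * math.sin(2 * math.pi * F2 * t)

def sum_and_difference(t):
    """Equivalent form: equal-amplitude partials at F2 - F1 and F2 + F1 Hz."""
    return 0.5 * (math.cos(2 * math.pi * (F2 - F1) * t)
                  - math.cos(2 * math.pi * (F2 + F1) * t))

# The two expressions agree at every sample over one second: multiplying
# 100 Hz and 101 Hz sines yields partials at 1 Hz and 201 Hz.
max_error = max(abs(ring_mod(n / SR) - sum_and_difference(n / SR))
                for n in range(SR))
assert max_error < 1e-9
```

The 1 Hz difference component lies below the range of pitch perception; it is heard not as a tone but as a slow amplitude undulation.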
The mixed and ring modulated signals are then summed together and sent to the output.
Even though the notions of inputs and outputs are still in play, a preliminary version of
Blasser’s form-flow synth is evident in the blurring between audio and control signals, and
the routing of individual signals to multiple inputs. As we will discover in the chapters to
come, this sort of technique effects a complex, interdependent system of relations, one in
which Radigue, serving as Jules’ unseen operator, functions as one node of many in a
nonlinear, dynamic system. It should be remarked that the ARP 2500 was not the only
modular synth of its time that readily facilitated splitting one output to many inputs, or vice
versa. In addition to the Putney/ EMS Synthi which also featured a matrix, multiplexed
routings were also possible with the Serge and Buchla 100 series modular by means of
stackable “Banana” cables. The Moog Modular’s 984 Matrix Mixer also allowed single
inputs to be routed to many outputs; with the exception of the Synthi, however, these
designs almost always yield a tangle of wires which can obscure the modules and their
panels’ controls. The sliding pin matrix of the ARP 2500 produces none of that characteristic cable spaghetti—and spaghetti would have been a problem for Radigue in particular.

Fig. 1.3 The ARP 2500’s matrix system allows for routing one signal to multiple destinations.

As I demonstrate in the following chapters, composing her synthesizer music
required immense care to yield musical results that would be acceptable within a narrowly
defined set of aesthetic preferences. Unimpeded access to a few dozen potentiometers on
the synth was an ergonomic necessity for Radigue’s compositional praxis, and cable
spaghetti would have introduced needless complexity and struggle into a process that was
sufficiently demanding of sure and dextrous hands. Even the color-coded spaghetti touted
by Don Buchla, with each cable’s color corresponding to arbitrarily defined musical
parameters (Blasser 2015, 28-30), apparently did not appeal to Radigue. In her early
experiments with Don Buchla’s eponymous 100 series modular at Morton Subotnick’s
studio at New York University (Gluck 2012), Radigue recalls that more effort could be spent
determining the path of cables in the system than exploring the sonic possibilities of the
instrument itself: “if you weren’t careful, you would accidentally disconnect one of the
cables slightly, and finding it again became the real exploration” (Radigue and Eckhardt
2019, 114). While Radigue’s experiments with the Buchla 100 wouldn’t net a sale for Don,
Chry-ptus (1971), the single piece she composed using that instrument, nonetheless
proved to Radigue that she could create the sorts of sounds that she preferred with
modular synthesizers in general (Holterbach 2013, 28), making way for her eventual
meeting with Jules. Radigue’s involvement with the Subotnick studio also offered one of
several important opportunities for the composer to socialize with other artists involved in
New York’s downtown experimental scene, including Laurie Spiegel and Rhys Chatham
(Radigue and Eckhardt 2019, 78). (I will have much more to say about Radigue’s
connection to the New York avant-garde in Chapter 2.) The ARP 2500’s patching
interface was distinctively ergonomic, but it was ultimately the “voice” of the synthesizer
that sealed the deal for Radigue. The ARP 2500’s distinctive sound almost certainly comes
down to the filters, for which Radigue is unambiguous in her love, describing them as “the
reasons I chose the ARP 2500” (Radigue and Eckhardt 2019, 110). For the synth novice,
filters can be thought of as electrical resonators, passing some frequencies at greater or
lesser amplitudes than others, in order to shape the timbre of sounds that pass through
them. With voltage-controlled filters, such as those found in ARP, Moog, Serge, and Buchla
modular systems, users can dynamically change the timbres of signals in their systems
over time, adjusting the region in the spectrum at which certain frequencies are boosted or
attenuated. It stands to reason that the ARP filters would play a substantial role in
winning Radigue’s affection. Through their inherent nonlinearity, filters impart a good deal
of character to any sound which passes through them; sonically, they’re of comparable
importance to the synthesist as pickup, pedal, and amp selection is to the electric
guitarist. The ARP 2500 module line contained two kinds of voltage-controlled filter: the
1006 lowpass filter, and the 1047 multimode filter. The impact that the latter of these filters
had on Radigue’s music cannot be overstated. A work of art in its own right, the 1047
multimode filter was designed by Dennis P. Colin for ARP, and featured what was at the
time a singular take on voltage-controlled filters. Offering four filter responses
simultaneously—namely lowpass, highpass, bandpass, and variable-width notch—the
1047 may have been the first multimode voltage-controlled filter that was also
“unconditionally stable”, meaning that regardless of the filter’s resonance settings, it would never spill over into screeching, self-oscillating feedback unless certain patching conditions were met (Colin 1971, 923). (The user effects such timbral changes by a turn of a potentiometer, or through a dedicated control signal, such as from another oscillator or a timed voltage contour, also known as an envelope.) In addition to its highly
stable resonance circuit, I wager that the 1047’s slope was also an important factor in
winning Radigue’s affection. Generally speaking, a filter’s slope can be thought of as the
rate at which frequencies decay in loudness above or below the cutoff frequency. The
1047’s slope is quite gentle, especially compared to something like the Moog transistor
ladder filter, whose slope is very steep, and which produces drastic drops in volume
around the cutoff point. By contrast, the 1047’s slope might be considered gently blurry
and not overly harsh—and this quality in particular allows for smoother transitions while
moving between the spectral peaks of a given input signal. As a gently-sloped filter with
stable resonance, the 1047’s bandpass response, in particular, excels at ‘selecting’ the
various partials of an input signal without causing self-oscillation of the filter. In a paper describing the theory behind his filter design, Colin highlights this exact use case, noting
that with the 1047 the user may “select any single harmonic up to at least the 30th from an
oscillator, so that any note played will have this same harmonic emphasized” (Colin 1971,
926). At around the 15’00” mark in Kyema (1988)—perhaps Radigue’s best-known ARP composition, and the subject of Chapter 3—we hear the bandpass output of the
1047 front and center, gently scrolling through partials of an input signal to produce a
melody out of the input’s overtones. (A bandpass filter can be thought of as a combination of a highpass and lowpass filter, which only allows through frequencies in a certain “window”, attenuating all others. This dissertation’s reference recording of Kyema is available to purchase or stream for free here: [Link].) Furthermore, unlike most other voltage-controlled filters of the time, the 1047 also featured a dedicated transient generator that, when
excited by an external trigger or gate signal, would ‘ping’ the filter to produce a characterful
plucking sound not unlike striking a metal or wooden bar (Colin 1971, 926). This unique
feature of the 1047 would also play a tremendously important role in Radigue’s music, as
she would use its distinctive bell tones to mark important structural moments. Two
examples of this technique may be found starting around 25’00” in Adnos I (1974), and around 38’00” in Kyema. Radigue loved the 1047 so much that she put two of them in
her ARP 2500 cabinet, which naturally brings us to the topic of module selection. After all,
the appeal of the modular synthesizer lies not only in the user’s ability to freely patch
modules together in any way they see fit, but also in their ability to choose which modules
they include in the system to begin with. Like the patching which follows it, this initial
process of module selection is certainly a mode of instrument design in itself. What was in
Radigue’s synth? While Radigue does not definitively answer this question even when it is
posed directly by Eckhardt, who asks, “[very] concretely, how was your ARP constructed?”
(Radigue and Eckhardt 2019, 116), archival photos and access to ARP 2500 documentation
prove enough to answer the question unambiguously (Holterbach 2013, 73; ARP 2500 Owner’s Manual 1970). From left to right, these are the modules in the cabinet of Éliane Radigue’s ARP 2500:

1023 Dual Oscillator
1004-T Oscillator
1023 Dual Oscillator
1047 Multimode Filter/Resonator
1005 ModAmp (Ring Modulator + Voltage Controlled Amplifier)
1046 Quad Envelope Generator
1016 Dual Noise/Random Voltage Generator
1006 FiltAmp (Lowpass filter + Voltage Controlled Amplifier)
1005 ModAmp
1047 Multimode Filter/Resonator
1050 Mix-Sequencer (Dual 4-to-1 sequential switch)
1027 Clocked Sequential Control

(The reference recording for Adnos I is available to purchase or stream for free here: [Link].)

Altogether that’s five
oscillators, two ring modulators, three filters, three voltage-controlled amplifiers, two
noise sources, and a suite of utility functions, which include: a pair of fluctuating random
voltages, four envelopes, and two 4-to-1 voltage-controlled switches, each of which can
cycle through four input signals, and output the selected input signal at the module’s
dedicated output pin. Rounding out the modules is the sequencer, which produces a
series of stepped, programmable voltages at a rate determined by the user. Compared to
manufacturer-recommended systems of the same size from a 1972 ARP promotional
brochure (Fig. 1.4), Radigue’s synth skews further in the direction of filters and ring
modulators: the ARP brochure recommends no more than one 1047 (multimode filter) and
one 1005 (ring modulator) for any of their 12-module configurations. Radigue has two of
each, plus the 1006 lowpass filter, in all constituting more than one-third of the modules in
her system, a choice which clearly demonstrates her preference for these particular signal
processors. Memorably, Radigue also alludes to a bit of push-pull with the ARP
salespeople when she recalls how “they had almost imposed a sequencer on me”, even
though in her own music she rarely found a use for it (Radigue and Eckhardt 2019, 115).
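It is worth pausing on what a sequencer clocked at audio rate actually does, since that is how the module surfaces in the one piece where Radigue used it. The sketch below is hypothetical Python with made-up step values, not anything from a Radigue patch: a sequencer cycles through a short list of voltages, and once the clock rate climbs into the audio range, the repeating staircase itself becomes a waveform, its pitch fixed by the clock and its timbre by the step heights.

```python
# Hypothetical sketch: a four-step sequencer clocked at audio rate.
steps = [0.2, 0.9, -0.4, 0.5]  # made-up step heights ("voltages")
clock_hz = 4000                # clock rate: well inside the audio range
SR = 48000                     # sample rate used for the check

def step_at(n):
    """Sequencer output at sample n: one step value held per clock tick."""
    tick = (n * clock_hz) // SR  # integer math avoids float edge cases
    return steps[tick % len(steps)]

# The staircase repeats every len(steps) ticks, so its fundamental is
# clock_hz / len(steps) = 1000 Hz, i.e. every 48 samples at SR = 48000.
period_samples = SR * len(steps) // clock_hz
assert period_samples == 48
assert all(step_at(n) == step_at(n + period_samples) for n in range(480))
# Changing a step height reshapes the staircase (the waveshape, hence the
# timbre) without changing the 1000 Hz fundamental set by the clock.
```

In other words, edits to individual steps alter harmonic content rather than rhythm once the clock reaches audio rate, which is precisely the behavior Buchla describes in the interview quoted later in this section.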
Only in 7th Birth (1971) does one find that the switch and sequencer modules are
unambiguously utilized, and they are used in quite an unconventional way, clocked nearly
at audio rate to produce stepped, discontinuous waveforms. Because sequencers are
conventionally used to produce rhythms and melodies, not timbres per se, Radigue’s unusual application of the ARP 2500’s sequencer in 7th Birth traverses the threshold from the time domain into the timbral domain.

Fig. 1.4 Two “sample” ARP 2500 systems from page six of the ARP 1972 promotional brochure. The systems shown are housed in the same-sized cabinet as Radigue’s synth, so while the modules inside Jules are different, the samples above give a general idea of the synthesizer’s proportions. The keyboard console below each system would have been included with Radigue’s purchase of the instrument, but she left the keyboard interface in New York (Holterbach 2013, 30). (Images courtesy of the Alan R. Pearlman Foundation.)

Radigue’s sequencer use also
calls to mind a comment Don Buchla made on this very subject: “If you build a sequencer
that will run fast enough, you can listen to the stepped output directly rather than using it
to control an oscillator, and by changing the height of the various steps you can change the
waveshape. That sounds interesting in theory, but the first people who built sequencers
that could run in the audio domain. . . found that all they were doing when they turned a
knob was varying the amplitude of some dominant harmonic, practically independently of
which knob they turned. The lesson there is that what we hear in the temporal domain we
hear one way, but in the harmonic domain we hear in a different way.” (Buchla and Aikin
1982). As we can see in the foregoing, Buchla strongly dichotomized the temporal and
timbral domains in his synthesizer designs, a split made manifest in his use of different
jack and patch cord designs for audio and control signals respectively: audio signals are
passed through minijacks, and control signals are passed through larger “banana”
connectors. We may recall that it is this very dichotomy that Blasser seeks to unravel in his
latter-day notion of the “form-flow synth” by turning pulses, discontinuities, and rhythmic
devices into audible rather than strictly organizational or structural conceits. Though she
would generally ignore the sequencer and switch in later synth compositions, Radigue’s
creative ‘mis-application’ of the ARP’s rhythmic and melodic devices in 7th Birth points to
a kind of ‘middle path’, one in which the physical fact of sound’s constituent elements,
namely its timbre, is foregrounded through a patching technique that transforms what is
typically an organizational musical device into a texture in its own right. In this way, the
‘voice’ of the instrument itself is resuscitated out of any merely structural or organizational
role—the instrument is allowed to sing. What would be the stuff of Éliane and Jules’ duets?
Chapter 2
Presence and absence: notes on Radigue’s ARP 2500 synthesizer technique

2.1 What, how, and why?

In the previous chapter, I articulated some of the essential
characteristics of Radigue’s aesthetic thought. In a general and high-level way, I also
showed how these ideas informed her relationship with the ARP 2500 synthesizer that she
named Jules. I demonstrated through writings and interviews that Radigue thinks of sound
as alive and deserving of respect, a philosophy that places her in direct conversation with
Tara Rodgers’s audio-technical discourse, one in which sound takes the form of
“differentiated, lively individuals” (2011). I also noted how Radigue enjoyed a reciprocal,
intersubjective relationship with Jules the Synthesizer, over whom she has since exerted a
measure of protective and self-preservational instinct, even after the cessation of their
working partnership. Finally, I showed how Radigue utilized a complex, nonhierarchical
patching technique, becoming part of the flow of the music herself and engaging in a form
of self-imposed, deferential recognition of sounds as living. In this chapter, I will explore
this interplay between composer autonomy and its deference in greater detail. To do so, I
will examine a handful of ‘case studies’ drawn from Radigue’s synthesizer technique and
recording practice: 1) Radigue’s approach to tuning her ARP 2500’s oscillators 2) her use of
filters to analyze complex spectra created by frequency modulation and ring modulation;
3) Radigue’s manipulation of feedback paths created through a technique called circular
FM; 4) the process she undertook to commit her compositions to magnetic tape; and 5)
the performance practice surrounding the presentation of these particular works in
concert. My motivations for this endeavor are both practical and personal. As a
practitioner of modular synthesizer design, performance, and composition, I wanted to
better understand the work of an established yet relatively unheralded master of the form,
and was frustrated to find that previous accounts either lacked sufficient critical distance
from the composer or, in the case of interviews, featured interlocutors who were
uninterested in asking about, writing up, or publishing the muddy details of modular
synths and magnetic tape recording. In one notable case, I found an account of Radigue’s
recording practice that, if not misleading, is outright inaccurate (an account I will address later in this chapter). This state of affairs is
understandable. To a previous generation who witnessed the proliferation of digital audio
workstations and all the modern conveniences of powerful personal computers, these
older techniques may seem outdated or even useless. One unfortunate outcome of these
attitudes, however, is that these classic production techniques risk becoming lost to time
and memory, although they increasingly find their latter-day adherents—the present
author among them. It was in the face of these dead ends and vague technical
explanations that I set out to explore, with as much detail as I reasonably could muster, the
details of Radigue’s method. This was tough going. One of the things that made this
exploration so tricky was the fact that Radigue went to substantial lengths to hide her
technique, and her own haptic contributions to the music-making process. We can locate
this commitment to self-abnegation on the part of Radigue within a wider context of
postwar American experimentalism, a strain of avant-gardism which would exert lasting
influence on her development as an artist during her visits to New York in the fifties and sixties (Radigue and
Eckhardt 2019, 56, 72). To understand what these experimentalists were about, I turn to
Georgina Born, who in Rationalizing Culture offers an extremely useful definition of these
artists, among whom we may include Alvin Lucier, Steve Reich, and many others: “often
composer-performers themselves, experimentalists gestured toward effacing the
composer’s authoritative role and wanted to lessen the hierarchical musical division of
labor between composer as creative authority, performer as constrained interpreter, and
passive audience. The emphasis was on the performance process, music as an unfolding
and participatory ritual event structured by time” (1995, 58). I’ve already alluded to some
of the ways in which Radigue would efface her own authoritative role as composer,
preferring to distribute the creative responsibilities with Jules and the “living” sounds she
held space with (Chapter 1). In the present chapter, we will see this abnegation of the self
taken even further, while a discussion of Radigue’s concern with empowering an otherwise
“passive audience” will be taken up in my analysis of Kyema (Chapter 3). Curiously,
Radigue would not recast the performer as a free agent—towards the end of this chapter,
when I discuss how Radigue performed, recorded, and presented her ARP 2500 works, we
will see that she placed immense constraints on herself. Those constraints would be
specifically applied through the composer’s use of technology. To understand this
mediated aspect of Radigue’s self-abnegation, it will be useful to consider the attitudes
towards technology expressed by the postwar institutional French avant-garde. First,
however, a disclaimer: the relevance of these institutions to Radigue’s thought has
typically been cast in terms of a lack, and there are very good reasons for this. As
Dougherty (2021) notes, gender discrimination played a decisive role in Radigue’s
professional attainment: 30 she simply was not afforded the same opportunities as her
male coworkers while employed by Schaeffer and Pierre Henry in their musique concrète
studios, and even after she began having success with her original compositions in the
United States especially, her institutional colleagues in France paid her little mind. The
result? “For decades I had no other choice than to fly solo…I am far from having had a
precocious career” (Radigue and Eckhardt 2019, 80). Complicating matters further,
Radigue ended her official ties with Schaeffer in the year of GRM’s founding (1958), and
never had direct involvement with IRCAM. How can a discussion of these two
organizations help us to understand Radigue? I would submit that as the largest
institutions in France dedicated to avant-garde music—although IRCAM has always had
more autonomy and funding—the values upheld by these two institutions can tell us a lot
about the intellectual climate of France in the postwar period, and which particular ideas
had currency among its culture workers. What were some of those ideas? As Born notes,
Boulez’s IRCAM and Schaeffer’s GRM were for decades embroiled in what on the surface
appeared to be mutually incompatible philosophies: GRM espoused an “experimental
empiricism”, while IRCAM valorized a “postserialist determinism” (1995, 59). This
antagonism played out in many ways, even extending to their preferred technologies: “the
contempt for analog technologies [e.g., magnetic tape]. . . embodied IRCAM’s rejection of
the previous generation of music technology, which was therefore seen as useless to
IRCAM compared with digital technology” (Born 1995, 262). IRCAM’s vision of a forward-
thinking modernism made the mixing of newer digital and older analog technologies
unthinkable. (The thought goes something like, why bother with outdated magnetic
tape, when we have these powerful computers?) Accordingly, each institution preferred
technologies which supported majorly different aesthetic precepts. In IRCAM’s case, that
meant computer-based workflows most suited to realtime synthesis and live performance
(Born 1992, 259), whereas Schaeffer and the GRM would historically favor analog storage
media such as phonograph discs and magnetic tape. In contrast to an a priori,
deterministic, postserial, or “abstract music”, which “gave the ideal note a sonorous body
through the realization of scores by performers or engineers”, Schaeffer’s concrete music
“began with sounds recorded from the world and sought to perceive in them… musical
values” (Kane 2014, 17, emphasis mine). Clearly, there are serious and enduring
differences in the philosophies of IRCAM and GRM, differences which support their chosen
tools of the trade. At the same time, however, both are publicly-funded institutions that
were driven by modernist programs which, in imagined breaks with the past, used
technological research in music and acoustics to determine new directions for music
composition (Born 1995, 66-67). This, of course, is a characteristic of modernism in
general, wherein the personal creative act is recast as a rational scientific inquiry. The
institutional French avant-garde’s concept of musical research finds some resonance in
Radigue’s own thoughts on technology, and I think accounts for her pursuit of a self-
occlusion through technology. A telling example of this way of thinking can be found in
Radigue’s recollection to Eckhardt on her early experiments with electronic feedback in
the late sixties. Radigue remarks that “gradually, I established my initial vocabulary,
developing a particular way of formulating these electronic sounds. I felt myself to be in the
time of the Arcadian shepherds and Greek philosophers discovering the laws of natural
acoustics” (2019, 89). I find the composer’s use of “vocabulary” and “laws” to be very
striking here. In locating the object of her very personal artistic inquiry inside
supposedly neutral terms implying validation through rational consensus, Radigue evinces
an essentially modernist conception of technology’s purpose to ‘reveal’ hidden truths in
the pursuit of progress (Piekut 2012, 12). At the same time, Radigue’s decidedly
postmodern attitude as an “experimentalist” means that she will ultimately resist any sort
of glorifying Promethean self-concept, even if, as the work’s true sole author, “the division
of labor [between composer, performer, and audience] remained intact” (Born 1995, 58).
In concert, Radigue would prefer to create electronic works that invited an open-ended
interpretation on the part of the audience; further, her role as the sole creator is effaced
through an intersubjective conception of sounds as living, and her appraisal of Jules as
a collaborator. Through this very technology, however, she would mediate her presence,
making any trace of her involvement difficult to discern.

2.2 Don’t touch that dial!
Radigue’s commitment to ascetic self-abnegation may complicate the analysis of her
working method, but if you know what to look for, evidence of her mediated ‘absence’
abounds, from patching up her synthesizer, to recording and finally presenting the music
she made with it. To articulate this theme of ascetic self-denial in her work, I will primarily
address the what and the how of Radigue’s composition, recording, and performance
practice. As for the why, Radigue herself would later say on the matter of her synthesizer
compositions to Julia Eckhardt, “Don’t ask me too many theoretical questions…explaining
the why of the how is not possible” (2019, 115). I do not take Radigue’s imperative to mean
that the why is a dead end. On the contrary, I find that compelling explanations arise all on
their own when considered with respect to certain technical constraints of Radigue’s
working method, the tools at her disposal, and the conditions outlined by her general
aesthetic ethos. By ‘aesthetic ethos’, I simply mean the general set of preferences and
values which shaped the sorts of sounds Radigue would commit to tape, and the sounds
to which she would lend her name as their author. This ethos was concretely established in
the late sixties during her early experiments with microphone and loudspeaker feedback,
as well as tape machine re-injection feedback; I will also show the ways in which it
subsequently informed Radigue’s approach to the ARP 2500 modular synthesizer in particular.
Emphatically, Radigue was not trying to broaden her sonic vocabulary with the modular
synthesizer. She recalls to Emmanuel Holterbach that it wasn’t until she found ways to
produce sounds similar to those she had honed during her early experiments with
microphone and loudspeaker feedback, as well as tape machine re-injection feedback,
that she felt as though she was onto something (Holterbach 2013, 28). A closer study of the
works which precede her synth period shows that Radigue’s conversion to synth-ing in
1971 yielded no major changes in the aesthetic ethos she had developed during her
experiments with various instruments of the tape music studios in which she had trained.
She was evidently as interested as ever in minimally varying, slowly developing
soundscapes comprised of “pulsations, beating, and sustained tones with a certain
roughness” (pulsations, battements, sons tenus avec une certaine rugosité) (Holterbach
2013, 28). What can we gather from this continuity of interest? For one, I think it means
that we can interpret some of these earlier feedback works, like Usral (1969) and Opus 17
(1970), as exemplars of a particular aesthetic ethos. Said another way: how those earlier
works sound tells us a lot about the kinds of sounds Radigue liked to make. While it’s true
that a composer may change stylistic conventions during a long career (e.g. Stravinsky,
Miles Davis), in Radigue’s case, she’s quite consistent. As Julia Eckhardt puts it:
“jokingly, Radigue says she has made the same music all her life…the next piece has
always been a way of resuming the previous” (2019, 36). Gentle self-deprecation aside, I
would largely agree with Radigue here. Usral—an abbreviation of ultrason ralenti, or
“slowed-down ultrasound” (Radigue and Eckhardt’s translation)—is a key work in understanding how Radigue would later
approach working with the oscillators—the main sound-producing components—of her
ARP 2500 synthesizer. To create Usral, Radigue generated ultrasonic tones through
electroacoustic feedback techniques, then iteratively rerecorded the material onto
magnetic tape, slowing the playback speed during each successive iteration in order to
transpose the ultrasonic tones into a human-audible range. Radigue would have control
over how many iterations each feedback tone would be subject to; and, as the composer
and recording engineer, she naturally would select which tones to include, in which order,
and in whatever proportions of loudness and duration relative to one another. By starting in
the ultrasonic range, however, the point of origin for this process was nonetheless always
beyond her direct control because she literally could not hear what she was doing.
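The transposition at the heart of this procedure is simple octave arithmetic: each slowdown divides every recorded frequency by the speed ratio. A minimal sketch of that arithmetic, assuming half-speed playback per pass (a common tape-machine option; the 24 kHz starting tone is hypothetical, not drawn from Radigue’s notes):

```python
# Octave arithmetic behind iterative tape-speed transposition.
# Each half-speed playback pass halves every frequency (one octave down).
import math

def passes_to_audible(f_ultrasonic_hz: float, target_max_hz: float = 4000.0) -> int:
    """Smallest number of half-speed rerecordings bringing a tone to or below target_max_hz."""
    if f_ultrasonic_hz <= target_max_hz:
        return 0
    # We need the smallest n with f / 2**n <= target, i.e. n = ceil(log2(f / target)).
    return math.ceil(math.log2(f_ultrasonic_hz / target_max_hz))

# A hypothetical 24 kHz feedback tone:
n = passes_to_audible(24000.0)
print(n, 24000.0 / 2**n)  # three passes bring it down three octaves, to 3 kHz
```

Three half-speed passes would thus carry this hypothetical ultrasonic tone well into the audible range, with every partial transposed by the same interval.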
Strikingly, it was only through watching the needles on her recording equipment that
Radigue could even be certain she wasn’t damaging her hearing while inducing these
ultrasonic phenomena (Radigue and Eckhardt 2019, 90). In listening to Usral (a reference
recording is available to purchase or stream for free at [Link]), a few characteristics merit
emphasis. While quite noisy, each layer of slowed-down feedback typically adheres to a
general spectral range; there’s practically nothing in the way of glissandi or sliding
pitches, short of some occasionally jarring, discontinuous bifurcations of pitch caused by
the nonlinear dynamics inherent to analog electronic feedback
networks in general—these striking effects can be found starting around 7’45”. Notably,
each layer also tends to fade in and out quite gradually: tones seem to emerge as if from
nothing, and then return to nothing. As we will see, the lack of glissandi—or stated
affirmatively, the stability of each sound layer’s general pitch area—as well as a
preponderance of gradual transformations in time by way of slow fade-ins and fade-outs,
would each go on to define Radigue’s synthesizer music in general. This brings us to the
matter of how Radigue used the oscillators in her synthesizer compositions. In my view,
this is one of the most compelling facets of her process. Once she started working on a
piece, she would not manually alter the frequencies of any of the ARP 2500’s five
oscillators until the work was completed (Rodgers 2010a, 57). Her commitment to this
course was clearly in place from 1971 onwards, a trend substantiated by sketches for the
very first ARP 2500 compositions, 7th Birth (1971) and Geelriandre (1972). Radigue herself
does not offer an unambiguous explanation as to why fixing the frequencies in advance
appealed to her, but with knowledge of her prior work involving electronic sounds, an
understanding of modular synthesis in general, and the ways in which Radigue approached
her modular synth in particular, I believe we can close in on some possible explanations to
varying degrees of certitude. Let’s begin with the obvious. When you rotate the frequency
potentiometer of a voltage controlled oscillator, the resultant pitch changes. This is
extremely easy to do: even a musician with no experience in synthesizers can quickly
determine that clockwise rotations raise the pitch and counterclockwise rotations lower it.
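The knob’s response is typically exponential rather than linear: on ARP-era instruments, oscillator pitch conventionally tracks control voltage at one volt per octave, so equal rotations yield equal musical intervals. A minimal sketch of that convention (the base frequency and voltages are illustrative):

```python
# Sketch of the exponential (volt-per-octave) frequency response conventional
# on ARP-era voltage-controlled oscillators. Values are illustrative.

def vco_freq(base_hz: float, control_volts: float) -> float:
    """One volt of control voltage raises the pitch by one octave (doubles the frequency)."""
    return base_hz * 2.0 ** control_volts

print(vco_freq(440.0, 0.0))   # 440.0 — base pitch
print(vco_freq(440.0, 1.0))   # 880.0 — one octave up
print(vco_freq(440.0, -2.0))  # 110.0 — two octaves down
```

The same mapping explains why sweeping the knob produces a smooth glissando: frequency changes multiplicatively, so the ear hears a steady interval-per-degree of rotation.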
The process of changing an oscillator’s frequency, known as frequency modulation or
FM, can be accomplished manually as just described, or by way of an external signal, such
as from another oscillator, called the modulator. At sufficiently slow rates of change, the
resultant effect on the modulated oscillator, called the carrier, might sound like either a
glissando, or vibrato (the latter especially if the modulator is triangular or sinusoidal). Such
effects are, more or less, entirely absent in Radigue’s ARP 2500 compositions. On the
basis of their absence, and the great ease with which these effects could be implemented,
it stands to reason that she evidently had little interest in hearing this sort of thing in her
music; however, when considered with respect to early works like Usral, I would suggest
that this is more than ‘just a preference.’ Rather, this refusal to alter the frequencies of the
ARP 2500 oscillators once work on a new composition began ultimately displays a
commitment to the discipline of self-abnegation that characterizes Radigue’s aesthetic
ethos. Notably, the tunings for 7th Birth and Geelriandre correspond precisely to legends
on the ARP oscillators’ faceplates (Fig. 2.1). As early works with a new instrument,
undoubtedly Radigue was still getting to know her ARP, and using the faceplates as
guideposts makes a fair amount of sense as a form of self-instruction. At the same time,
this correspondence makes these patch drawings seem more like schema for executing a
formalized, anonymous process, rather than a plan for the subjective, freewheeling
activities of a curious artist. Another example of this idea may be found in Fc 2000/125
(1973), whose title is likely a reference to markings on the filter module 1047’s faceplate,
which are used to determine its cutoff frequency. This very detachment and subsequent
deference to an already-established precedent (in this case, the graphical layout of the
instrument and the segmentations of frequency space as determined by Alan R.
Pearlman and Dennis P. Colin), echoes the Radigue of just a few years prior, setting off
from inaudible points of origin when inducing ultrasonic feedback tones for Usral. While it
might be too strong to say that Radigue outright rejected the label of composer during her
electronic explorations, the label of “researcher” did seem to come more naturally, at
least in the early years. Collected from Radigue’s archive for the liner notes to the 2011
Important Records release of Transamorem/Transmortem, a program note offered for the
March 6 and March 9, 1974 concerts of some synth works at Phill Niblock’s loft and The
Kitchen in New York City shines some light on Radigue’s fluctuating sense of identity
during this period: “This series of works [Arthesis, Biogenesis, Chry-ptus,
Transamorem/Transmortem] represents a cycle of researches with electronic sounds
through Moog, Buchla, and Arp [sic] synthesizers” (emphasis mine).

Fig. 2.1 At left, 7th Birth; at center, Geelriandre; at right: panel for the ARP module 1023
dual oscillator. For reference, 4000 Hz corresponds to roughly the highest notes on the
piano—8000 Hz is an octave above that. 31 Hz is roughly the lowest B on the piano. 16 Hz
is on the threshold between rhythm and pitch. (Images courtesy of La Fondation
A.R.M.A.N. and the Alan R. Pearlman Foundation.)

By
simply using the legends on the oscillators’ faceplates as guideposts for tuning her
instrument, Radigue evinces, in these early works at least, a kind of detached acceptance
of whatever sounds her oscillators might happen to produce. This posture of detachment
naturally puts Radigue in dialogue with other American experimentalists, who, in the
tradition of Cage, espoused “non-intention” in a postmodern rejection of European
classical teleology (Born 1995, 57). At the same time, Radigue’s self-abnegation can also
be connected to a larger trend in modernist musical aesthetics: the composer’s
direct involvement in the work must be obscured and mediated by the tools of music
production, subsuming what is otherwise a subjective creative praxis into a supposedly
dispassionate program of research. Radigue’s identification with ‘research’ also places her
in intellectual conversation with other technologically-minded American experimentalists
of the period, including Maryanne Amacher, David Behrman and James Tenney. Radigue
acknowledges Tenney directly when providing to Eckhardt the names of her major
influences—for context, the only other names on that list are Pierre Schaeffer, Pierre
Henry, and John Cage (2019, 60). Radigue also recounts with evident gratitude Tenney’s
collegiality during her visits to New York, noting how he made many introductions and
provided technical support (2019, 72, 109). Artistically, Tenney was known for his
instrumental works exploring perceptual phenomena, as well as experiments in electronic
sound conducted with Max Mathews at Bell Labs (Polansky 1983). Given Tenney’s evident
conviviality and their overlapping artistic concerns, it stands to reason that Radigue
would see fit to mention him so often when recounting her life. Fixing the ARP 2500’s
oscillator frequencies in advance provokes another question: namely, whether there are
any trends or general strategies Radigue might have employed when tuning her oscillators.
The composer states that in the early ARP works (which I take to mean 7th Birth,
Geelriandre, Transamorem/Transmortem, Psi 847, and Adnos I especially), she was mostly
interested in exploring extremes: that is to say, very high and very low frequency spaces
(Radigue and Eckhardt 2019, 115). Looking at the oscillator tunings for 7th Birth and
Geelriandre, we can see this concern laid out rather clearly (Fig. 2.1). Even without yet
understanding how Radigue would patch these oscillators together, when we listen to
some of these early ARP works, I think we can get a sense as to how these particular tuning
systems would generally lend themselves to extremely austere soundscapes. With relatively
little happening in the middle of the spectrum, Radigue evinces in some of the early synth
works an overt concern with researching the extremes of human auditory perception: the
highest highs, and the lowest lows. The later ARP works do not have patch scores
associated with them, as Radigue eventually no longer needed them in order to recall the
state of her synth between sessions (Holterbach 2013, 30). The lack of a paper trail makes
the task of determining how she tuned the oscillators for those works much more difficult,
but not impossible: in a spectral analysis of any given work, there will always be certain
frequencies emphasized over others. Subsequently, any trends in that analysis may point
towards the initial tunings of the five oscillators. Nonetheless, I would submit that it
somewhat misses the point to try and recreate the synth’s oscillator tunings exactly.
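As a sketch of how such a spectral analysis might proceed, the following synthesizes a mock mixture of three hypothetical fixed tones and reads the strongest peaks back off its spectrum; a real recording would additionally call for windowing, time-averaging, and care in distinguishing oscillator fundamentals from FM sidebands:

```python
# Peak-picking sketch: the strongest spectral peaks of a recording hint at the
# underlying oscillator tunings. The three tones below are hypothetical
# stand-ins, not reconstructions of Radigue's tunings.
import numpy as np

sr = 8000                          # sample rate (Hz)
t = np.arange(sr * 4) / sr         # four seconds of signal
tones = [100.0, 500.0, 2000.0]     # hypothetical fixed oscillator frequencies
x = sum(np.sin(2 * np.pi * f * t) for f in tones)

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1 / sr)

# Take the three strongest bins as candidate oscillator frequencies.
peaks = sorted(freqs[np.argsort(spectrum)[-3:]])
print(peaks)  # recovers the three tunings
```

On clean synthetic input the three tunings come back exactly; on an actual Radigue recording, the analogous peaks would only ever be candidates, to be weighed against what the ear and the surviving sketches suggest.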
Instead, I think we can be content to make the following generalization: the fixed base
tuning of the five oscillators provided a generative framework, from which Radigue could
contravene, explore, and discover combinations of sounds in order to produce
harmonically and timbrally rich compositions. The fixed tuning scheme, and Radigue’s
subsequent playing and exploring within that scheme, further underscores the inherent
tension in her process between discipline and recreation; acceptance and control. (I will
have much more to say about this tension later.) What about those plus and minus
symbols next to some of the oscillators’ tunings? Here I offer some measured speculation.
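Before speculating, it helps to fix the arithmetic of beating itself. The sketch below (with hypothetical frequencies, not Radigue’s tunings) checks both the beat-rate rule and the trigonometric identity that produces the effect:

```python
# Numerical check: two close tones beat at a rate equal to their frequency
# difference, with a sinusoidal amplitude envelope. Frequencies are hypothetical.
import numpy as np

f1, f2 = 100.0, 100.1             # two tones, 0.1 Hz apart
beat_period = 1.0 / abs(f1 - f2)  # one loudness swell every ten seconds
print(round(beat_period, 6))      # 10.0

# The identity behind beating: sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2).
# The slow cosine term is the sinusoidal amplitude envelope heard as beating.
t = np.linspace(0.0, 20.0, 20001)
lhs = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
rhs = 2 * np.sin(2 * np.pi * (f1 + f2) / 2 * t) * np.cos(2 * np.pi * (f1 - f2) / 2 * t)
print(np.allclose(lhs, rhs))      # True
```

The cosine envelope is what gives beating its gentle, undulatory character, a point taken up below.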
When any two tones are close together in frequency (Heller 2013, 487), they engage in an
alternating pattern of constructive and destructive interference that is heard as periodic
undulations in volume known as beating (what Radigue calls battements). The rate of the
beating is equal to the difference between the two tones’ frequencies: tones that are 1 Hz
apart will complete a cycle of beating once a second. Tones that are 0.1 Hz apart will
complete a cycle of beating once every ten seconds. Notably, the variation in amplitude
caused by beating is sinusoidal, which means that the transformations in loudness over
time conform rather nicely to the pseudologarithmic nature of human perceptions of
magnitude, thereby creating a gentle, undulatory effect. As beating is an operative
principle in Radigue’s aesthetic priorities, I believe these plus and minus symbols indicate
her intent to generate beating by tuning the respective oscillators close enough together in
order to produce the effect. Given that the difference in frequency determines the rate of
beating, it follows that any oscillator tuning scheme for a particular piece would yield not
only its harmonic resources, but also its rhythmic resources. You can find beating in
pretty much every moment of any Radigue ARP 2500 composition to a greater or lesser
extent. As a technique, beating achieves two important ends vis-à-vis the composer’s
aesthetics. First, beating animates what might otherwise be in Radigue’s case a static,
opaque musical texture—thus lending sounds a lively character. At the same time, by
sounding out a difference between two or more tones, beating brings the listener into close
apprehension of the physical fact of sounds propagating in a space. This effect is quite
prominent in the opening minutes of Kailasha (1991), which allegorizes a
circumambulatory pilgrimage around the sacred Mount Kailash (a reference recording is
available to purchase or stream for free at [Link]). Each tone beats at a different rate, and
as listeners we stand, perhaps, apart from the action, detachedly observing each tone
coming and going in their individual trips ‘around the mountain’.

Fig. 2.2 Top: spectrogram of the opening ninety seconds of Laurie Spiegel’s Kepler’s
Harmony of the Worlds (1977). Bottom: spectrogram of the opening ninety seconds of
Radigue’s Kailasha (1991).

Compared to a
synthesizer work like Laurie Spiegel’s Kepler’s Harmony of the Worlds (1977), in which the
frequencies of six oscillators continuously oscillate in a model of polyphonic orbital
motion (Spiegel 2012), a work like Kailasha exemplifies Radigue’s approach to slow,
gradual, and periodic transformations in the time domain. These transformations are
coextensive with a revelation of timbral components through very gradual fade-ins, giving
the impression that these soft undulatory sounds were already there, dancing in epicyclic
bliss long before the piece began, and that they will continue to do so long after the
work’s purported conclusion. Comparing spectrograms of the two works makes this
difference in Spiegel and Radigue’s technique quite clear (Fig. 2.2). In Radigue’s process of
spectral revelation, I think we can begin to see how her interest in disclosing a sound’s
inner life —founded upon an a priori assumption that sound is, in fact, autonomous and
living—exerts a deep influence on her compositional technique.

2.3 The game of partials
Let’s back up for a moment. As we know, sufficiently slow changes to an oscillator’s
frequency tend to produce the kinds of musical effects Radigue evidently had little use for
(e.g., glissandi). When the rate of change enters the audible spectrum, what was once a
transformation of pitch gives way to a transformation of timbre—that’s so-called “audio-
rate FM” in a nutshell. Sketches and interviews both indicate that Radigue made
substantial use of this effect. Furthermore, by varying the index of modulation over time—
that is to say, the extent to which one signal affects the frequency of another, a value
independent of the rate of change—Radigue could exert direct control over the relative
complexity of a given FM timbre. As the number of sidebands (additional partials above
and below the signal being modulated) generated by FM generally scales with the depth
of modulation, musically this means that higher indices of
modulation generally produce thicker textures. On the ARP 2500, the indices of modulation
could be controlled via dedicated potentiometers, facilitating realtime manipulation of the
FM’s depth; archival evidence substantiates Radigue’s interest in performing this exact
technique. In a detail from the patch score for Geelriandre, part A (Fig. 2.3), the four lines
shown correspond to four different FM index knobs on the ARP; with the leftmost “0”
meaning off, the subsequent numerals correspond to increasingly clockwise knob
positions, indicating a gradual increase in the FM index. While there are a number of other
parameters being altered in the opening minutes of Geelriandre (the reference recording
can be purchased or streamed for free at [Link]), I do think one can hear the spectrum
gradually widen, introducing more tones as the indices of modulation increase.

Fig. 2.3 Detail from score for Geelriandre, part A shows choreographed, gradual
movements of FM index knobs. (Image courtesy of La Fondation A.R.M.A.N.)

Supervising this organic transformation of timbre is
Radigue herself, who can determine the rate at which the process unfolds by the speed at
which she rotates the potentiometer. Nonetheless, one might ask: doesn’t frequency
modulation break Radigue’s self-imposed rule of leaving the oscillator’s base frequencies
unchanged? After all, by definition FM changes the oscillators’ frequencies. If there is a
workaround here, it’s contingent on the fact that audio-rate FM is generally not perceived
as a change in pitch. Traversing this boundary, Radigue thereby accomplishes a musically
generative workaround by constraining FM’s flux to the audible spectrum, yielding an
impression of timbral transformation, not one of pitch transformation. That being said, it is
important to address one aspect of frequency modulation that under certain conditions
may introduce an audible shift in pitch. To understand this phenomenon it must be
established that there are two main types of FM, exponential and linear, which refer to how the modulating voltage is mapped onto changes in the carrier’s frequency. Most
modern voltage-controlled oscillators (VCOs) can accomplish both FM types
simultaneously, but the ARP 2500’s VCOs can only implement one or the other. Any ARP
2500 oscillator could be set to either exponential or linear FM, but this procedure would
have to be carried out at the ARP factory, as different submodules were needed to convert
the modulating voltages exponentially or linearly (ARP 2500 Owner’s Manual 1970). All of
Radigue’s ARP 2500 oscillators could only carry out exponential FM, which can be
confirmed by looking at the printed circuit boards on the rear of Radigue’s ARP 2500. There
you will find only one converter submodule per oscillator. As ARP’s implementation of
linear FM required two converters to function (one for positive voltages, one for negative),
linear FM is thereby ruled out (email correspondence with François J. Bonnet, 8 January
2021). The ubiquity of exponential FM in Éliane’s ARP 2500 is germane to our discussion
of her music in part because of its unexpected effects on the perceived pitch of the carrier.
The electrical engineer Bernard A. Hutchins (1974) observed that higher indices of
modulation will cause the perceived pitch of the carrier’s fundamental to rise. Even with a
harmonic carrier/ modulator frequency ratio, this effect can produce a generally
inharmonic result by shifting the fundamental up with respect to the sidebands. With
sufficiently high indices, the entire harmonic structure of a Radigue synth patch could
shift, producing tones that can have little to do with the original frequency of the carrier.
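Hutchins’ observation can be illustrated with a small numerical sketch, my own toy model rather than a description of the ARP’s circuitry. In exponential FM, an octave-per-volt style mapping means the instantaneous frequency is f0 · 2^(d·sin θ) for a sinusoidal modulator of depth d octaves; because 2^x is convex, the cycle-averaged frequency always exceeds f0 and grows with d, which is one way to rationalize the rising perceived pitch:

```python
import math

def mean_exp_fm_freq(f0_hz: float, depth_octaves: float, steps: int = 10_000) -> float:
    """Average instantaneous frequency of an exponentially FM'd oscillator.

    Models the exponential mapping f(t) = f0 * 2**(d * sin(theta)), with the
    depth d in octaves. By Jensen's inequality the average always exceeds f0,
    so the spectrum as a whole is dragged upward as the index grows.
    """
    total = 0.0
    for i in range(steps):
        theta = 2 * math.pi * i / steps
        total += f0_hz * 2 ** (depth_octaves * math.sin(theta))
    return total / steps

for depth in (0.0, 0.5, 1.0, 2.0):
    print(f"depth {depth} oct -> mean freq {mean_exp_fm_freq(440.0, depth):8.2f} Hz")
```

At zero depth the average is simply the carrier frequency; as the depth increases, the average (and with it the perceived fundamental) climbs well above it, even though the knob nominally tuning the oscillator has not moved.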
Though utilized less frequently in her practice than frequency modulation, the technique of
ring modulation (RM) was another important method by which Radigue could expand the
harmonic and timbral resources of a given composition (email correspondence with
François J. Bonnet, 10 January 2022). As the reader may know, the ring modulator is a
specific kind of circuit that takes two inputs, multiplies them together, and outputs the
sums and differences of the two inputs’ constituent frequencies. Figure 2.4 shows an
example: if you had a sine wave at 440 Hz (note A4) and ring modulated it with a sine wave
at 110 Hz (A2), the output of the ring modulator would be two sine waves, one at 550 Hz (~C#5, the sum) and one at 330 Hz (~E4, the difference).

Fig. 2.4 Basic ring modulation example in European classical notation. Pitches are approximate.

The relative loudnesses of the sum and difference tones are positively correlated with the relative loudnesses of the two
signals that enter the ring modulator. Notably, the volume of each signal passing into the
ARP 2500’s ring modulators can be specified under voltage control—this means Radigue
would be able to exercise some discretion over the ring modulated result, even if the base
frequencies of the oscillators heading into the ring modulator were fixed. From the
perspective of harmony, ring modulation is a form of derivation, but the effect also
discloses something essential about the mechanics of hearing, in particular the
otoacoustic phenomenon known as combination tones. Undetectable to conventional
microphones, these otoacoustic emissions occur inside the ear—one theory as to their
origins concerns distortion caused by the mechanics of eardrum movement (Heller 2013,
498-502). Not everyone hears the same combination tones when two pitches are played
together, but the most commonly heard combination tone can be expressed as the difference of the two frequencies. Notably, ring modulation ‘sounds out’ this very process by setting the
combination tones in motion in the air, outside of our ears, making the effect detectable to
microphones and other analog recording processes. This meaningful connection between
ring modulation and otoacoustic combination tones is well articulated elsewhere: Robert
Hasegawa builds an entire music theory around the concept in his analyses of spectral
music (2019); and the Romanian composer Horațiu Rădulescu freely mixes the concepts
of RM and combination tones when describing to Bob Gilmore his own approach to
composing with otoacoustic emissions (2003). Radigue herself speaks of otoacoustic
emissions in a somewhat oblique way, if at all. She uses the word “sub-harmonics”
(subharmoniques) in her treatise “The Mysterious Power of the Infinitesimal” (2009), and
remarks to Rodgers that she 47 worked “with a great insistence on the game of the partial,
of the subharmonic and overtones” (Rodgers 2010a, 57); however, it’s not clear whether
she knowingly refers to otoacoustic emissions/combination tones in these passages. In
any case, I think we can generally describe her approach to using ring modulation in this
way: by multiplying the signals generated by various pairs of oscillators, Radigue could
derive harmonically and acoustically related timbres, from which she could build
musically coherent continuities in a given composition. Unlike other voltage-controlled
ring modulators that were commercially available at the time, the ARP 2500’s ring
modulator had the unique ability to feed a pair of fixed voltages back into the
oscillators. The intention of this design was to allow the user to store ‘presets’ in the form
of frequency ratios, which could be engaged and disengaged either manually or by means
of voltage control (ARP 2500 Owner’s Manual 1970). More than simply an interesting
design choice, this unique feature of the ARP 2500 ring modulator bears particular
relevance to our discussion of Radigue. In a comment to Julia Eckhardt, she remarks that
“it was enough that I vary, for example, the proportion of the two main signals in a ring
modulator, for the whole structure to change” (2019, 114). While it’s not totally clear what
she means by “proportion” (la proportion), she may have been referring to this feature of
the ring modulator. At my request, François J. Bonnet asked Radigue about this during a
January 2022 meeting, during which she clarified that she occasionally, though rarely, used
the ARP 2500 ring modulator’s fixed voltage outputs in order to gently nudge the
frequencies of her oscillators. In the available accounts, Radigue acknowledges no
contradiction between this unusual technique and her oft-stated commitment to never
alter the base frequencies of Jules’ oscillators once work on a new composition began.
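The sum-and-difference arithmetic described earlier (440 Hz ring modulated with 110 Hz yielding 550 Hz and 330 Hz) can be verified numerically. The sketch below is my own digital illustration of the general principle, not a model of the ARP’s analog circuitry: two sines are multiplied sample by sample, and a single-bin DFT probes the product for energy at the relevant frequencies.

```python
import math

SR = 8000  # sample rate (Hz); one second of signal gives exact integer bins
N = SR

# Ring modulation is (ideally) just multiplication of the two inputs.
ring = [math.sin(2 * math.pi * 440 * n / SR) * math.sin(2 * math.pi * 110 * n / SR)
        for n in range(N)]

def magnitude_at(signal, freq_hz):
    """Magnitude of a single DFT bin: how much energy sits at freq_hz."""
    re = sum(s * math.cos(2 * math.pi * freq_hz * n / SR) for n, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq_hz * n / SR) for n, s in enumerate(signal))
    return math.hypot(re, im) / N

for f in (110, 330, 440, 550):
    print(f"{f} Hz: {magnitude_at(ring, f):.3f}")
```

Only the 550 Hz sum and 330 Hz difference carry energy; the 440 Hz and 110 Hz inputs themselves are suppressed, which is what distinguishes ring modulation from simple mixing.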
What to make of this apparent contradiction? Rather than trying to ‘catch’ Radigue, I think of this aspect of her
practice as an exception that proves the rule. It was only through an indirect, technological
mediation that a preference for changing the oscillator’s tunings could be expressed. If
techniques like frequency modulation and ring modulation increase the harmonic and
spectral resources available to Radigue, it is the voltage-controlled filters, which
selectively boost and attenuate certain frequencies, that further specify and shape these
complex tones. Through RM and variable-depth FM there is already plenty of timbral control in play, but with the ARP’s filters Radigue could sculpt FM- and RM-derived spectra with pinpoint accuracy. For Radigue, filters generally came last in the signal processing
chain, with the composer remarking to Tara Rodgers that “I work after [tuning oscillators,
frequency modulation, ring modulation, etc.] on all the partials. And of course, to end it up,
the two beautiful filters on the ARP really make it” (Rodgers 2010a, 57). The 1047
multimode filter on the ARP features a relatively shallow slope, along with a high
resonance factor (Colin 1971, 923). Together, these qualities facilitate a unique combination of highly accurate spectral shaping and very blurry, gentle spectral transformations over time. Radigue utilized these very qualities of the 1047 to great effect
in some of her most well known compositions for synthesizer. A wonderful example of this
realtime filtering technique in action can be heard starting at 8’55” of Kyema, where she
crafts a melody out of contiguous partials, gently pulling them out of the fabric of a hazy
drone. As the partials drift in and out of focus during this passage, I am reminded of
a statement made by Radigue, collected on the promotional webpage for the 2019
performances of Adnos I-III presented by the NYC-based nonprofit Blank Forms: “In the
conch formed by the flow of sounds, the ear filters, selects, privileges, as would an eye
fixed on shimmering water. Only listening is required, like a gaze that is absent and
double, oriented toward an exterior image that lives as a reflection in the inner universe”
(2019). While she speaks here of the ear as metonymically representative of a listener’s
attentional focus over time, this linking between gaze and focus, between interiority and
exteriority, seems especially fruitful to understanding Radigue’s concept of sounds as
forms of life, and the central role that filters would play in disclosing what she sometimes
called the “inner life” of sound. Along those same lines, another phrase of Radigue’s feels
important to mention: “Le jeu des harmoniques” (the play of partials or perhaps the game
of partials). This phrase appears several times throughout her interviews and writings,
including her treatise “The mysterious power of the infinitesimal” (2009), and the
interviews with Rodgers (2010a), and Eckhardt (2019). This curious verbal construction
certainly calls to mind Rodgers’ audio-technical discourse, within which sounds could be
described as differentiated, lively individuals: after all, games and play are the domain of
the living. In the case of Radigue’s resonant filtering technique —and, come to think of it,
her polyphonic approach to beating as exemplified in works like Kailasha—we witness
interlocking, continuously variable aggregations of various partials. These so-called
constituent elements of sound, following Helmholtz and Fourier, each come and go in a
continuous exchange between figure and field. In these moments of periodic spectral
emphasis, Radigue depicts within each partial a fleeting subjectivity, emerging from the
manifold before disappearing again beneath the waves.

2.4 Sustained tones with a certain roughness

Running through some of the discrete functions of Radigue’s synthesizer clarifies each technique’s general contribution to her music; however, in
practice Radigue often intertwined these synthesis techniques in complex manifolds of
interaction. With a sense of how some of these techniques work on their own, we can now
return to a more general, system-wide view of Radigue’s approach to working with Jules. To
think through Radigue’s overall conception of her synthesizer, I want to ponder her notion
of “sustained tones with a certain roughness”. I contend that this concept illuminates an
important connection between Radigue’s synth music and her earlier feedback works. To
articulate this connection, I will turn again to 1969’s Usral. We know that Radigue created
the materials for Usral by recording ultrasonic feedback tones onto magnetic tape, and
then pitched the tones down into the audible range by altering the playback speed of the
tape (Radigue and Eckhardt 2019, 90). This iterative process of pitching down and re-recording would, over time, emphasize the analog artifacts of the recording medium. For instance, as the tape’s characteristic hiss is also pitched down, it adds an audible, noisy coloration to the pitched-down, sustained feedback tones. Any ground loops or radio
frequency interference captured on the magnetic tape would also be introduced into
subsequent passes through the recording setup, adding a rich yet unpredictable
complexity to the resulting sound. The notion of iterative re-recording as a form of sonic
degradation was the operative principle in another feedback work, Opus 17 (1970). In the
first two movements of that piece, Radigue records, plays back, and re-records snippets of classical music in a process comparable to Alvin Lucier’s I am sitting in a room (1969), but which she herself called “electronicized erosion” (érosion électronisante). This
phrase, attributed to Radigue, is taken from Holterbach’s liner notes for INA/GRM’s 2021
release of Opus 17, provided in both French and English. As a neologism-by-way-of-
translation, I think “electronicized” gets this idea across rather well: the inherent
nonlinearities and distortions of analog electronic equipment impart unique colorations
onto the recorded sounds that pass through them. Across iterations, “electronicization”
erodes away a sound’s initial purity to create something far more complex, unstable,
and—to Radigue’s ears—alive. Radigue notes to Julia Eckhardt that the ARP’s voltage-
controlled oscillators (VCOs) were much easier to control than the convoluted setups she
had used to produce sustained tones with microphone/loudspeaker feedback and tape
machine re-injection (Radigue and Eckhardt 2019, 110); at the slightest provocation, the
latter techniques could collapse into extremely loud, even dangerous situations. It stands
to reason that VCOs would be easier to manage: with a potentiometer, one can precisely
tune a voltage-controlled oscillator to pretty much any pitch; and, as oscillators contain
built-in thresholds to limit the amplitude within the circuit, the risk of punctured eardrums
or burning plastic is substantially reduced (Holterbach 2013, 21). It may be safer and more
predictable, but just listening to the raw output of a VCO is quite dull and fatiguing to the
ear—just think of those abject test tones from the early days of broadcast television!
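To make this contrast concrete, here is a toy sketch of my own, unrelated to any specific Radigue patch: a lone digital sine holds a perfectly flat loudness over time, while two sines tuned 1 Hz apart produce the slow pulsing of beats, one simple species of the liveliness a static test tone lacks.

```python
import math

SR = 8000

def rms_windows(signal, win=2000):
    """RMS level of successive windows; a crude loudness-over-time trace."""
    out = []
    for start in range(0, len(signal) - win + 1, win):
        out.append(math.sqrt(sum(s * s for s in signal[start:start + win]) / win))
    return out

t = [n / SR for n in range(2 * SR)]  # two seconds of signal
static = [math.sin(2 * math.pi * 440 * x) for x in t]
beating = [0.5 * math.sin(2 * math.pi * 440 * x) + 0.5 * math.sin(2 * math.pi * 441 * x)
           for x in t]

print("static :", [f"{v:.2f}" for v in rms_windows(static)])
print("beating:", [f"{v:.2f}" for v in rms_windows(beating)])
```

The static tone’s level trace is constant to the last digit, while the detuned pair swells and fades once per second: a minimal, audible departure from the test-tone condition.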
Radigue’s VCOs produced sustained sounds with unprecedented ease, but this ease
would have come at the cost of that “certain roughness” which she so highly prized. From
where would this roughness come? There are surely a few ways: as Jules Negrier noted to
me during a visit to Radigue’s archive at the GRM office in Paris, Radigue generally did not consider herself to be an “audiophile” (personal communication with the author, 3 February 2022); certain projects may have had lower signal-to-noise ratios than others, and Radigue was eminently accepting of the irregularities of
the tools she used. Archival evidence suggests another method Radigue could have used
to introduce characteristic liveliness to her synthesizer music: a third kind of feedback,
wholly different from the microphone/loudspeaker or tape machine re-injection techniques.

2.5 Close encounters with a third kind of feedback

Taken together, the paradigm of voltage control in general and the analog modular synthesizer in particular
afforded an as-yet unprecedented form of feedback. Though this form of feedback does
not have a textbook-official term, I will refer to it as “circular frequency modulation” (one
may also encounter the term “feedback frequency modulation”). In circular FM, the
outputs of two or more oscillators modulate the frequency of one another, blurring the
boundaries between carrier and modulator by compelling each oscillator to function in
both roles simultaneously. This produces chaotic—and by chaotic, I precisely mean: highly
complex, but not purely random—timbral effects characterized by both stable and
unstable states. Intrepid synth-maker Peter Blasser offers this eminently useful definition
of chaos, which rather nicely describes what might happen in circular FM: “The meme of
chaos as a sort of random noise conflates all three of these concepts; random, noise, and
chaos…”; which he then clarifies by noting that “[chaos] can sound like noise, but with
more ‘eddies’ or characteristic turbulent patterns in it” (2015, 22-23).

As with microphone/loudspeaker feedback and the tape machine re-injection technique, one can only generalize about the effect of circular FM; and, like other chaotic systems, circular FM is extremely
sensitive to changes of state, producing disproportionately large changes in system
behavior in response to small changes in the initial conditions. We know the initial
conditions of FM in general are the following: 1) the ratios of the oscillators’ frequencies to
one another, and 2) the depth of modulation. Since we already know that the base
frequencies for the oscillators in a given Radigue composition would remain fixed, the
frequency ratios are likewise unchanged; however, the indices of modulation could be
adjusted throughout a piece to yield a variety of timbres. Unlike regular FM, the effect of
altering the indices of modulation in a circular FM patch is not terribly predictable: simply
stated, there’s not a direct correlation between the thickness of the texture and the depth
of modulation. In circular FM, there will always be steady states and peculiar nonlinearities
that can cause the sidebands to cancel each other out, only to suddenly reemerge once
the index knob is tilted ever so slightly. In this kind of interdependent feedback network,
the dull and static sound of a VCO roars to life with the characteristic roughness and
wildness that Radigue evidently favored. To hear Radigue’s synth music is to occasionally
hear tones that are on the threshold of breaking apart, bifurcating into chaotic
nonlinearities at their edges. When looking closely at Radigue’s patch drawing for
Geelriandre (Fig. 2.5), we can readily see this patching technique demonstrated with a bit
of detective work. In the upper half of the drawing, find the horizontal dashed line just
above the legend O1, written in graphite pencil. Along this line, you will see various colored
dots. Each of these dots represents a destination in the synthesizer towards which the
output of Oscillator 1 is being routed.

Fig. 2.5 Patch drawing for Geelriandre. (Image courtesy of La Fondation [Link].)

The leftmost dot on this dashed horizontal line aligns with a solid vertical line, labelled O5. This indicates that Oscillator 1 is modulating the
frequency of Oscillator 5. Now, jump down to the dashed horizontal line just above the
legend O5, written in graphite pencil. The dots along this dashed horizontal line show the
destinations towards which the output of Oscillator 5 is being routed. The leftmost dot on
this dashed horizontal line aligns with a solid vertical line, labelled O1, which means that
Oscillator 5 is modulating the frequency of Oscillator 1. Because each oscillator is
modulating the frequency of the other, Oscillator 1 and Oscillator 5 are engaged in circular
FM (Fig. 2.6). If you follow the same process for the dashed horizontal lines above the
legends O2 and O4, you will find a similar circular modulation arrangement for those two
oscillators. This means that altogether we have two distinct sets of circular FM: between
O1 and O5; and between O2 and O4. In the following diagram (Fig. 2.7), I’ve extracted some
salient information from the patch drawing for Geelriandre, part A. Two pairs of oscillators
(O1 and O5; O2 and O4) are each engaged in circular FM, modulating one another’s
frequencies to produce chaotic, nonlinear timbral systems.

Fig. 2.6 Oscillator 1, a sine wave at 16 Hz, modulates the frequency of Oscillator 5, a sine wave at 4000 Hz; simultaneously, Oscillator 5 frequency modulates Oscillator 1 (pitches are approximate). The circular symbols intersecting the curved arrows indicate potentiometers which can attenuate each oscillator’s signal, thereby changing the indices of modulation and facilitating dynamic-depth circular FM.

Different pairs of these same four oscillators are then
combined together in unique ways (O4 and O5 are ring modulating one another; O1 and O2
are simply added together). The mixture of O1 and O2 is then amplitude modulated by O4
and O5 separately, then this amplitude modulated result passes through a filter. At the
same time, the ring modulated result of O4 and O5 passes through a second filter.

Fig. 2.7 Diagram showing some of the interdependent signal flow found in Geelriandre, part A. Note the potentiometers at each node in the system.

Looking at this patch as a whole (Fig.
2.7), I think we can deduce some further generalizations about Radigue’s synthesizer
technique. We can see that at each node in the system—oscillator to oscillator, mixer to
ring modulator, ring modulator to filter—there are potentiometers that would allow
Radigue to attenuate, in real time, the amplitude of the various signals passing through the
network. Because of the deeply interdependent nature of her patching technique—both in
the sense that one signal may be routed to many outputs, and in the chaotic sense of
voltage-controlled feedback networks in general—one such adjustment could have
dramatic results on the sound. She addresses this very aspect of her synthesizer
technique when she says to Eckhardt that “it was enough that I vary, for example, the
proportion of the two main signals in a ring modulator, for the whole structure to change.
Everything could change everything” (Radigue and Eckhardt 2019, 114). In this sense, we
might think of Radigue as a sort of supervisor who is guiding, though not outright
controlling, a semiautonomous network of actors engaged in unruly yet interdependent
behaviors. These actors each play distinct roles in the ecosystem of a patch, but to
Radigue’s ear, there were aesthetic limits that could not be crossed: “For each track, I had
between twenty-five and forty parameters, if not more, with which I could work. I started by
measuring, dosing, and determining the thresholds that should not be crossed, to maintain
the progression of the sound from within, with and by its own components, and to avoid
slip-ups. The aim was to make the sound progress through slightly changing one of the
parameters of the constant parts of this mass of sound. What interested me was the
internal structure of the sound matter” (Radigue and Eckhardt 2019, 115). Through
techniques like circular FM and beating, sounds could take on the qualities of unruly, living
organisms caught in a complex web of interdependence. Beating sounds out an acoustic comparison between two tones, while circular FM lends to sounds a certain danger, as they are always on the threshold of breaking out of their steady states and into chaotic
singularities. The fates of these sounds, which are given life in Radigue’s aesthetic ethos,
are then subtly adjusted by the composer in the measured exploration of a synthesizer
patch. At the same time, Radigue’s play was highly disciplined by a desire to avoid “slip-ups” or sudden changes, mostly through the careful, gradual adjustment of
certain parameters such as indices of modulation, and the proportions between different
sound layers routed into filters and ring modulators. In avoiding sudden moves that would
meaningfully connect Radigue’s body to a specific change in the music—for instance,
rotating a frequency knob on an oscillator, re-patching a connection in the synth mid-
recording, or simply turning a filter cutoff knob too quickly—I find that Radigue’s own hand
is largely hidden. By mostly attenuating the flow between fixed elements in her synthesizer,
to Radigue the patch becomes a terrarium of sorts, where she, hidden from sight, can
subtly direct the lives of the sounds without explicitly controlling them. Radigue herself
often made pastoral, possibly even ecological metaphors about her music, writing of
Adnos II for its November 22, 1980 premiere at Mills College in Oakland, California: “To
move stones around in the bed of a river does not affect its course, but can only change the
play of the waves on the surface. And so, the sound energy alters the course of the flowing
fields of resonance” (Adnos CD Liner Notes, 2013). As a stone might divert, impede, or
alter the speed of the river, the potentiometers on her synth attenuate the flow of voltages,
relieving or increasing the amount of electrical pressure between the synthesizer’s
constituent elements. Nonetheless, the stones can’t reverse the river’s course, nor can an
attenuator alter the direction of signal flow, which is already set in advance by Radigue’s
patching. As a form of conditional acceptance, I find that Radigue’s composer-as-supervisor approach strongly recalls the notion of an unevenly realized intersubjectivity
that I developed in Chapter 1. In a mutual symbolic recognition of the sounds as living
equals, Radigue nonetheless constrains them and puts them to use; at the same time,
perhaps in recognition of her own symbolic role as the ‘supervisor’, she withdraws from
view, insisting on the autonomy of sounds even as she gently guides them towards her own
ends. There are fascinating paradoxes and contradictions in play here, and I will return to
them in some detail in Chapter 3, as well as in the epilogue.

2.6 The recording process

I’ve so far attempted to paint a vivid picture of Radigue in front of her synthesizer, carefully adjusting and managing complex webs of interaction through voltage control; but we’re still some distance from a full understanding of how she composed the pieces we hear on
CDs and in concert. To complete the story I will describe, in as much detail as I reasonably
can, the process that Radigue undertook to assemble these explorations into complete
musical works. How did Radigue’s open-ended synthesizer sessions turn into full, fixed
compositions often exceeding an hour in length? The accounts offered by Radigue to
Rodgers and Eckhardt both offer important signposts, but even in aggregate, they are not
terribly clear on how everything fits together. The lack of details in some prior accounts
may have led to Richard Glover’s misleading 2013 description in The Ashgate Research
Companion to Minimalist and Postminimalist Music. In this text, Glover potentially gives
the impression that Radigue largely composed her synth works by multi-tracking, writing
that “when realising her music, Radigue mixes the full length of the work in whole takes for
each single layer (often there are at least 15 layers): hence the reason why compositions
take months, or even years, to complete” (Glover 2013, 172). Radigue’s recording
sessions did take months or years to complete, but this is not because Radigue was multi-
tracking. As I will show, she recorded each of her synth works in a single take. Far from a
feat for its own sake, it was the constraints of magnetic tape as a recording medium, as
well as Radigue’s aesthetic ethos, that offered no better solution than one-take recording. It
was not until my correspondence with François J. Bonnet—who manages Radigue’s
archive at the Groupe de Recherches Musicales in Paris—that the details of Radigue’s
recording process finally slid into place. Through our correspondence, and some gently
persistent follow-up queries to Radigue herself by way of François, I was able to understand
the general approach that Radigue took to recording these works. Drawing on Rodgers,
Eckhardt, and Bonnet’s accounts, I break the general process down into five discrete steps:

1) Pre-composition phase: some unifying metaphor, image, or initial structure is established. In some ways, this is arguably the most important step, as it is through this image or metaphor that Radigue will evaluate and determine the course for a particular project.

2) Exploration phase: the basic settings for the synthesizer are determined, including the tuning of the oscillators. The bounds of acceptable variation through potentiometer movement within a patch are also determined. In the case of her earlier ARP compositions, drawings of the patch and its knob settings may be used in order to recall the synthesizer’s state between multiple sessions.

3) Preliminary recording phase: short, roughly 10-minute segments of music are recorded directly onto quarter-inch magnetic tape. Each segment may be said to explore different aspects of the synthesizer’s initial settings/conditions.

4) Assembly phase: after some time away, usually a few months, the various recorded segments are then considered in the context of the overall structure or unifying metaphor (see pre-composition phase). Not all session tapes are kept, and those that make the cut are placed in an order. Radigue prepares two separate reels: each reel contains alternating sections of the final piece, separated by blanks, and the lengths of each section are precisely measured using a stopwatch (Fig. 2.8).

5) Final recording phase: the piece is recorded in real time from two reel-to-reel tape machines (two Revox machines that I call “A” and “B”), which write onto a third reel-to-reel machine containing the master reel (Telefunken). Radigue slowly crossfades between the two playback machines in order to produce a seamless montage of interlocking segments of music (Fig. 2.9). Aided by a stopwatch, a mastering ‘score’ marked with precise timings guides Radigue through this delicate process of both crossfading and superimposing program materials from the two playback machines. If any mistakes are made during this process, Radigue has to start over from the beginning. (Rodgers 2010a, 57-58; email correspondence with François J. Bonnet on January 8, 2021 and January 14, 2021; Radigue and Eckhardt 2019, 114-119.)

Before pressing on, I offer this important disclaimer: while the steps
outlined above account for the process in general, they do not account for the inevitable
exceptions and peculiarities that inhere in any creative process. Addressing the quirks of
making a particular recording will have to be taken up elsewhere. (Pieces like Geelriandre,
which also includes prepared piano, or Songs of Milarepa, which feature the voices of
Lama Kunga Rinpoche and Robert Ashley, could be worthy objects of study in this regard.)
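Before moving on, the two-reel layout of the assembly phase (step 4) can be sketched as a small scheduling computation. Everything specific here is invented for illustration: the segment durations, the 30-second overlap, and the assumption that the crossfade overlap is constant throughout; only the alternating A/B principle comes from the accounts above.

```python
# Hypothetical segment lengths (seconds) for a final piece, in playback order.
segments = [612.0, 540.5, 598.0, 571.25]
CROSSFADE = 30.0  # assumed overlap (seconds) during which both reels sound

def reel_layout(durations, overlap):
    """Assign alternating segments to reels A and B on a master timeline.

    Each reel carries every other segment; while one reel plays program
    material, the other runs through blank tape, then enters early enough
    to overlap by `overlap` seconds for the crossfade.
    """
    cues = []  # (reel, start_time, end_time) on the master timeline
    t = 0.0
    for i, dur in enumerate(durations):
        reel = "A" if i % 2 == 0 else "B"
        cues.append((reel, t, t + dur))
        t += dur - overlap  # next segment starts before this one ends
    return cues

for reel, start, end in reel_layout(segments, CROSSFADE):
    print(f"reel {reel}: {start:8.2f}s -> {end:8.2f}s")
```

Even this toy version makes the stopwatch’s role obvious: every entry point depends on the measured length of everything before it, so a single timing error propagates through the entire montage.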
Instead, I would submit that the value of this general account is not that it points to a
specific piece, but rather that it allows us to complete our picture of Radigue’s
compositional process for the ARP 2500 works. Taking a few steps back from that picture, I
see a rather persistent commitment to ascetic self-denial through technologically-
mediated touch. I’ve already shown how this commitment to self-abnegation could be
articulated through the exploration and preliminary recording phases (what I call Steps 2
and 3); upon closer inspection, we can also see it in the final recording phase (what I call
Step 5) through Radigue’s implementation of the extremely slow crossfade. In what sense
does the infinitesimally slow throwing of faders abnegate the self? I will attempt to
articulate this connection by way of a comparison to another analog editing technique that
might be considered the opposite of the crossfade: the tape splice. While the crossfade
creates smooth transitions between two sounds, the tape splice creates disjunctions.
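The signal-level difference between the two techniques can be sketched numerically. This is a digital toy standing in for the analog reality: two sine ‘takes’ that are misaligned in phase at the cut are joined once by a butt splice and once by a linear crossfade, and the largest sample-to-sample step, a rough proxy for an audible click, is measured for each.

```python
import math

SR = 8000
FREQ = 440.0

def tone(n_samples, phase=0.0):
    return [math.sin(2 * math.pi * FREQ * n / SR + phase) for n in range(n_samples)]

def max_jump(signal):
    """Largest sample-to-sample step; a proxy for an audible click."""
    return max(abs(b - a) for a, b in zip(signal, signal[1:]))

a = tone(4000)                     # first 'take'
b = tone(4000, phase=math.pi / 2)  # second 'take', out of phase at the cut

spliced = a + b  # butt splice: abrupt junction between the two takes

# Linear crossfade over the last/first `xf` samples of the two takes.
xf = 400
faded = (a[:-xf]
         + [a[-xf + i] * (1 - i / xf) + b[i] * (i / xf) for i in range(xf)]
         + b[xf:])

print("splice max jump   :", round(max_jump(spliced), 3))
print("crossfade max jump:", round(max_jump(faded), 3))
```

The splice produces a step several times larger than any step inside a continuous sine, while the crossfaded junction stays within the normal sample-to-sample range of the tone itself.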
When a segment of magnetic tape is cut and pasted to another, a jarring discontinuity can
occur if the two segments contain even minimally different material.

Fig. 2.8 General concept showing how the two playback reels are prepared for the final recording session. Manually and in real time, Radigue would create the crossfade points using faders on the mixing console.

Fig. 2.9 Three-tape-machine setup with mixing console for the final recording session.

This kind of discontinuity poses no aesthetic problems if you
have silences and other musical discontinuities in your work—Pierre Schaeffer’s raucous
Étude aux chemins de fer (1948) demonstrates that principle well enough. If you want a
continuous sound with no breaks or jarring discontinuities, however, music like Radigue’s
is perhaps the least congenial to tape splicing. In a recollection offered to Holterbach, one
can hear almost Radigue’s frustration when she says, “Du montage, du découpage de
bande et du collage j’en avais assez fait avec Pierre Henry ou au Studio d’Essai!” (2013,
23). Beyond Radigue’s evident frustration with cutting and pasting tape, it’s clear from
her aesthetic goals that these techniques simply wouldn’t have offered the effect she
needed in her own compositions. Even if she were to cut one drone tape and adhere it to
another similarly-sounding drone tape, it’s practically impossible to perfectly line up all the
phases on the cut, such that the transition sounds seamless. Small pops and clicks would
result from the instantaneous leaps between the disparate phases on each tape, and for
Radigue, the resulting discontinuities would simply be unacceptable. So, splicing is a no-
go, and while cutting tape at an angle relative to its edge can produce a smoother stitching
effect—what’s sometimes called a crossfade punch—this technique is best suited to short
intervals of time, generally less than a second. Reducing the tape recording speed to a
craw—7.5 inches per second—might extend that punch to a few seconds, but given the
continuous nature of Radigue’s synthesizer music—and the psychological ‘time dilation’
effect that seems to inhere in listening to similar, slowly-changing sounds for a very long
time—even this technique would likely prove inadequate. If the goal is to preserve for the
listener the illusion that any two session recordings are seamlessly connected in a process of gradual, organic evolution through time, splicing’s simply out of the question. (In English: “Editing, tape splicing, collage. . . I’d done enough with Pierre Henry or at the Studio d’Essai!”) If splices disrupt the flow of the music, thereby making audible the sound of
editing, perhaps they also make audible the sound of a composer’s musical decision-
making outright. In this sense, I would argue that the notion of illusory continuity,
generated by means of the slow crossfade, ultimately reveals Radigue’s commitment to
hiding her hand as the composer. In producing sounds that unfold by infinitesimally
gradual degrees, her technologically-occluded musical decision-making—while no less
deliberate than Schaeffer’s—suggests the spontaneous appearance of music without an
author, emanating from some place other than the self. This is not to say that Radigue
avoids transients outright. As an exception which proves the rule, consider the case of the
bell-like tones which occasionally appear in Radigue’s music, such as those which begin
around 25’00” in Adnos I (1974), and around 38’00” in Kyema. Typically produced by the
dedicated transient generators within the ARP 2500’s 1047 multimode filters, the
appearance of these resonant, bell-like tones often marks important structural moments in
these works (Colin 1971, 926). Considered with respect to Radigue’s avoidance of the tape
splice, I think we could reasonably claim that Radigue’s selective and restrained use of
transients is constitutive of her compositional technique with the ARP synthesizer in
general.
2.7 The woman behind the curtain
Once a synthesizer work was completed, how
would it have been heard? We have a pretty good idea of how Radigue presented her
synthesizer music in concert. The accounts offered in Rodgers (2010a), Holterbach (2013), Eckhardt (2019), and Anaïs Prosaïc’s documentary film L’Écoute virtuose (2012) are, in
aggregate, sufficiently detailed and clear in this regard. I will briefly summarize the general
idea: in advance of a concert, Radigue would collaborate with the sound technicians at the
venue to place the array of loudspeakers in such a way so as to: 1) nullify stereophonic
effects; and 2) create essentially equivalent listening experiences no matter where
audience members were located in the space; then, once the speakers were set up and
the music’s volume was properly tuned to the acoustics of the venue, Radigue would
typically conceal herself in a small room or booth with the tape reels and
mixing console, where, hidden from view, she would diffuse her works in the venue
(Radigue and Eckhardt 2019, 120-123). I can think of no stronger evidence for Radigue’s
commitment to self-abnegation than voluntarily concealing herself and remaining unseen
during a concert of music that she spent months or years to make! In contemplating this
image of Radigue off to the side and out of sight, I am tempted to make a playful
comparison to Victor Fleming’s The Wizard of Oz (1939), though I will not be the first to do
so in the context of avant-garde electronic music (see Chion 1999, 28-29). In a scene towards the end of that
film, a pesky little terrier tears down a curtain to reveal an ambitious charlatan hemming
and hawing at the height of his charade. The eponymous Wizard of that film, his
machinations exposed, evidently has an interest in intimidation, self-aggrandizement, and
melting the local competition. Such ambitions, it need not be said, are certainly not shared
by Radigue: beyond the traditional moniker of authorship, she apparently had little interest
in any direct representation of herself through her music. Nonetheless, in each artist’s
method we find some rather impressive prestidigitation. In Radigue, this might take the
form of immensely long crossfades, indirect manipulation of parameters in synthesizer patches, and barricading herself in a utility closet. As Dorothy &
Co. were instructed to “pay no attention to that man behind the curtain”, so too are we
called by the austere placement of speakers in the corners of an auditorium or nondescript
art gallery to pay no attention to the person who made this soft, beautiful, and barely-
moving music. It is surely a grand illusion: music whose source is hidden, whose
mechanics of composition are mysterious, and which seems to emanate from some place
far, far from Kansas. What happens to such an illusion once we understand how it works?
What happens to us? Salman Rushdie offers a poignant reflection on the nature of magic
and illusions at the close of his essay on The Wizard of Oz, entitled “Out of Kansas” from
the May 1992 issue of The New Yorker. “In the end,” he writes, “ceasing to be children, we
all become magicians without magic, exposed conjurers, with only our simple humanity to
get us through”. I hope the reader will join me in seeing a cautious optimism in Rushdie’s
beleaguered conclusion. If our enchantment is somewhat muddled by the torn curtain,
perhaps the revelation of the mechanisms behind the illusion can also endear us to the
complex and imperfect realities of being here, now, today. Éliane Radigue went to
extraordinary lengths to obscure her own bodily involvement in her synthesizer music—
from composition to recording and performance. Even as this process of self-abnegation
plays out in novel ways in her studio and performance practice, we can nonetheless
situate her approach in a broader French and American cultural context, one that sought to
negate the expressive qualities of musical production in favor of a more dispassionate,
research-oriented program, while also attempting to challenge the received hierarchy of
European high modernism through unconventional methods of performance and composition. If Radigue evidently had no interest in disclosing her physical involvement
in the work, what’s to be gained by spelling it out? I believe there are at least a few merits.
For those of us who might be interested in producing electronic music of our own, I think
it’s instructive to study the methods of other artists; and in the technological limitations of
Radigue’s own time and place, we find evidence of rather incredible ingenuity and
commitment to a singular aesthetic ethos. An account such as the one offered here may
also allow us to remember the essentially human elements of the creative process—the
irregularities, the private moments of reflection, and the intimacy of working with a familiar
instrument—even if the creator herself had little to no interest in disclosing her own bodily
presence in the music. Finally, I hope this account facilitates further close study of specific
Radigue synthesizer works. I will attempt this very endeavor in the following chapter with
1988’s Kyema.
Chapter 3 Being and nonbeing: an analysis of Kyema (1988)
3.1 Kyema from afar
With a fuller understanding of Radigue’s general approach to the ARP 2500 in
hand, we now turn to a detailed analysis of a particular synthesizer composition, 1988’s
Kyema, the first movement of what would become a three-hour cycle called Trilogie de la
Mort (“Death Trilogy”). This chapter begins with a general introduction to the piece,
including an overview of its poetics and inspirations. I also describe its structure on a large
scale, and discuss in brief the different kinds of thematic material Radigue employs in the
work. Emboldened by some of these ‘big-picture’ takeaways, we then embark on an open-
ended, curiosity-driven journey through a handful of moments from the work that merit
further attention. As far as musical analysis goes, Radigue herself offers a warning,
describing her ideal of music as standing “in opposition to the analytic nature of trying to
cut things up into little morsels in order to examine them”; “on the contrary,” she says,
“it’s about the life that engenders things… [a] human body can be divided up into all its
roles but if it isn’t innervated it is just a corpse” (Radigue and Eckhardt 2019, 54). Though
my goals are to inform and deepen a curious listener’s appreciation of this work, I should
be wary of ruining for anyone the pleasure of hearing this music. In her preparation for the
composition of Kyema, Radigue sought inspiration from passages in the “Tibetan Book of
the Dead” (Radigue and Eckhardt 2019, 184), a syncretic document based partially on a
17th-century compendium of Buddhist funerary texts assembled by Rigdzin Nyima
Dragpa called the Bardo Thödol (Kemp 2016). With Kyema, Radigue sought to evoke the six
intermediary states between death and rebirth as outlined in six verses found in a version
of the text that Radigue was instructed to read by her mentor, Pawo Rinpoche (Radigue
and Eckhardt 2019, 140). Other teachings of Tibetan Buddhist origins came to influence
many of her compositions following her 1974 conversion: in addition to Trilogie de la Mort
(1988-1993), preceding works that foreground Buddhist poetics in either their titles or their
programmatic descriptions include Adnos III (1982), Songs of Milarepa (1983), and Jetsun
Mila (1986). While a discussion centered on how specifically these Tibetan Buddhist
poetics inform works like Kyema far exceeds my expertise, I nonetheless draw from time to
time upon the ideas of death and rebirth as a metaphorical framing in my analysis. This is
not least because, as François J. Bonnet was insistent to clarify in our correspondence
from 8 January 2021, the seeding metaphor or picture which Radigue devises in the pre-
composition phase for each synthesizer work is, in many ways, the most important aspect
of her process: “the first step of the composition is always a mental image or story in her
head. She always insists on this. She doesn't start to explore from nowhere. She already
has a compositional idea, even if vague” (emphasis mine). A meaningful analysis of any
Radigue synthesizer work therefore ought to at least consider these precompositional
notions, if any record of them can be found. Fortunately, Radigue is quite effusive on the
matter of Kyema’s poetics and inspiration. Given her remark that “there’s a meaning to be
found in [a work’s] title, as often is the case,” we will start there (Radigue and Eckhardt 2019, 131). (“The Tibetan Book of the Dead is essentially a Western invention based on selections from the Bardo Thödol. No text actually titled Tibetan Book of the Dead ever existed in Tibet…A plurality of Tibetan funerary texts could have been given the same title, but it is these passages of the Bardo Thödol that have come to represent how the West understands Tibetan notions of dying, death, and the process of taking rebirth” (Kemp 2016).) With customary linguistic erudition, Radigue describes “Kyema” as both a
Tibetan expression evoking “the sigh we make when faced with inescapable fatality”; and
as a neologistic vernacular Latin construction “[referring] to beings born from a mother:
‘ma’ is the suffix that marks the feminine, the maternal, and ‘kye’ refers to birth” (Radigue
and Eckhardt 2019, 140). Radigue thereby inscribes in the work’s title a dual concept of
birth and death, in which each is constitutive of the other. (We may pause here for a
moment to note the resonance between the themes of this work, and the composer’s
nonlinear and ecological conception of signal flow in her synthesizer, one in which she is
as much a part of the music as Jules.) In this bold engagement with some of the most
powerful forces known to our kind—life and death—Radigue offers with Kyema one of the
more dramatic, telos-driven structures among her synthesizer compositions. While her
works are never, despite their slow progression of change, truly ‘stationary’, Kyema’s
overall scale of contrast in nearly all musically meaningful metrics—including register,
dynamics, timbre, and rhythm—outpaces many of her other electronic works. Notably,
Kyema also features some outright dramatic conceits: these include the use of repeated
and cyclically-developing, contrasting sections, as well as dynamic swells and even a
climax right around two-thirds of the way through the piece. This much is apparent when
we take in a bird’s-eye view of Kyema (Fig. 3.1)—a basic plot of Kyema’s amplitude over
time. There’s clearly a high point just past the center mark (around 36’00”) which is then
followed by a massive drop in amplitude. Accordingly, when we look at the spectrogram
(Fig. 3.2), we see that a spectral change from very saturated to thin accompanies this high
point in the work. There are a few more insights we can gather from the spectrogram: for
one, the distinctively blurry edges of each section beautifully demonstrate Radigue’s use
of the slow, real-time crossfade during the recording session. Additionally, as the equally-spaced bands on either side of the noisy climax indicate, there is a large-scale timbral contrast between periodicity and aperiodicity found in Kyema.
Fig. 3.1 Kyema amplitude over time.
Fig. 3.2 Kyema spectrum and amplitude over time (linear scaling).
Building
upon the previous two figures, Fig. 3.3 shows a plot of Kyema’s amplitude over time, filled
in with colors and labels using a Roman and Arabic numeral convention; e.g., I.1, I.2, and I.3 are all thematically related. With the 2021 INA/GRM recording of Trilogie de la Mort serving as a reference, Fig. 3.3’s time markings show the temporal divisions of the work; each time marking shows the rough beginning of the fade-in for its respective material,
as well as the conclusion of its fade-out. As an example, what I will call section II.1 in the
following analysis fades in around 5’50”, and fades out completely around 14’45”. In order
to better get a sense of how each section relates to the other, I’ll now offer brief
descriptions of each of the six kinds of music heard in the piece, along with reduced
transcriptions of each passage’s main harmonic ideas in European classical notation.
Though the archival evidence I’ve so far presented suggests that Radigue tended to
organize her synthesizer compositions by ear using frequencies rather than discrete
musical pitches, I think that pitch can nonetheless help us in our analysis of Kyema,
particularly when we are trying to interpret a passage from the perspective of harmony and
its relation to a large scale structure. To that end, I’ll occasionally invoke the notions of
‘pitches’ and ‘chords’, with the understanding that this analytical approach does not imply authorial intent. (Due to the spectrogram’s linear scaling, bands spaced equidistantly in the frequency plot will generally indicate a periodic signal, showing, for instance, a timbre with energy at integer multiples of a fundamental with frequency F (e.g., 2F, 3F, 4F, etc.); on the other hand, regions of unequally spaced bands of spectral energy generally indicate a noisier, or aperiodic signal. For colorblind readers: the Roman/Arabic numeral combinations show related sections; e.g., I.1, I.2, and I.3 are all thematically related and therefore share the same color. This recording is available to purchase or stream for free at this link: [Link].)
Fig. 3.3 Kyema’s amplitude over time, with material color coded to show cyclical structure. Bracketed regions denote crossfades between contiguous sections. All time markings are approximated within a few seconds.
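A brief note on the “m≈” values that appear throughout the following analysis: they gloss raw frequencies as floating-point MIDI note numbers, with the fractional part expressing cents. A minimal sketch of that conversion, assuming the standard 440 Hz reference for A4 (function name illustrative):

```python
import math

def freq_to_midi(f_hz: float, a4_hz: float = 440.0) -> float:
    """Map a frequency in Hz to a floating-point MIDI note number.
    Digits after the decimal point read as hundredths of a semitone (cents)."""
    return 69.0 + 12.0 * math.log2(f_hz / a4_hz)

# Section I's implied ~110 Hz fundamental lands exactly on A2:
print(round(freq_to_midi(110.0), 2))  # -> 45.0
# The low partial near 60 Hz in I.3 sits just above Bb1:
print(round(freq_to_midi(60.0), 2))   # -> 34.51
```

Because the chapter’s frequencies are themselves approximate (marked with “~”), values recomputed this way may differ from the printed glosses by a cent or two.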
I’ve designed this section to serve as a high-level listening guide for grasping the work’s
overall structure, an essential task given its 60-plus minute length. Exhaustive details are
not present in these snapshot transcriptions. Deep dives and divertissements take place
later in the chapter. What I will call I.1, I.2, I.3 (00’00”-07’15”, 12’30”-20’53” & 42’25”-
45’40”) is the first material heard in the piece. Including the beginning, it is stated a total of
three times throughout the work in varied forms. Harmonically this material consists of
detuned, approximately integer multiples of an implied and occasionally audible ~110Hz
fundamental (~A2, m≈45.0). This music generally entails the gradual revelation of
harmonics through resonant bandpass filtering, iteratively and rhapsodically building
melodies out of the harmonics of this implied fundamental. This music possesses a
generally stable and periodic character, and to my ears functions structurally as a kind of
return or refrain; although in its final appearance at 42’25”, a low partial at around 60 Hz
(~Bb1, m≈34.51) produces a subtle yet unyielding tension by ‘suspending’ the harmony
over a dissonant bass. (The “m≈” convention approximates raw frequencies in terms of floating point MIDI numbers; the numbers after the decimal point indicate hundredths of an equally-tempered semitone, i.e., cents.)
Fig. 3.4 Reduction of I.1, including the main drone, and the melody of partials 1’30”-2’30”.
II.1, II.2 (5’50”-14’45” & 17’30”-28’40”)
is mostly centered around an implied ~67Hz fundamental (~C2, m≈36.41) (Fig. 3.5).
Generally of a thicker, mellower, darker timbre than I, over its two iterations II further
develops the idea of melodies constituted by resonantly filtered partials. Rhetorically, I
think of II.1 and II.2 as the “shadows” of I.1 and I.2, respectively, existing in a kind of axial
opposition to that earlier music. I mean axial in the sense that II and I have many shared
characteristics—such as a generally harmonic distribution of partials, and the use of
resonant bandpass filtering to emphasize these partials—while also exhibiting a strikingly
different affect. I’d attribute this difference in affective valence to II’s thicker texture,
generally lower pitch level, and more elaborate partial melody. II is stated twice in the
piece, with subtle variations between each statement. (Notably, when II occurs a second time (II.2), the pitch is shifted up between 1 and 2%, a change which naturally causes the beat frequencies to adjust in response. This change in pitch may be related to analog oscillator pitch drift due to fluctuations in ambient temperature, or slight variations in the playback or recording speed of the tapes used to capture these two sections of music.)
Fig. 3.5 A transcription of a part of II.2, with approximations of the main partials of the drone and the partial melody heard between roughly 9’00” and 10’00”.
III (26’15”-37’28”) only
occurs once, or possibly twice depending on how a later passage is interpreted.
Constituting the most dramatic departure in Kyema, III contains a teleological swell
towards the work’s loudest and perhaps most ‘dissonant’ moment, superimposing a low,
harmonic 61Hz drone (~B1 fundamental, m≈34.79) with several tones that dissonate with
the harmonic tone (Fig. 3.6). Bands of noise and electronically-processed recordings of
gyaling and rag-dung, two Tibetan wind instruments, contribute to the passage’s overall
increase in both thematic and timbral complexity. (I believe there’s an argument to be
made for a brief recurrence of III starting around 57’00”, as this music bears strong
spectral affinity with the beginning of III heard around 26’30”, but it’s quite a fleeting
appearance.)
Fig. 3.6 Reduction showing the main spectral peaks in the drone
accompanied by approximations of the electronically processed gyaling and rag-dung
loops. The pitches in the Tibetan wind instrument recordings are quite difficult to pin down,
in part because they have very similar spectral profiles to the synth drones with which they
are mixed. In IV (36’30”-44’51”), we mostly hear a crackling band of noise which at first is
quite indistinguishable from the noise on the tape itself. IV occurs in a structurally
significant moment, immediately following the roiling climax of III; in this way, IV provides
one of the starkest contrasts in Kyema between saturation and emptiness. A plucking tone
accompanies the band of noise, sounding once every nine or ten seconds, corresponding
roughly to either the same general frequency region of the bandpassed noise, or an octave
below it, with approximate peaks at 330Hz (~E4, m≈64.01) and 1387 Hz (~F6, m≈88.88).
Notably, the plucking tone is not evenly spaced; there are moments where Radigue seems
to wait several seconds before sounding it again (Fig. 3.7). I would hazard a guess that
Radigue is using some sort of manually timed transient, perhaps from a channel of the
1046 Quad Envelope Generator, to accomplish this effect. As a constituent partial of the
returning I.3 music around 42’25”, this plucking tone also eases the transition into the final
restatement of the work’s opening material.
Fig. 3.7 Reduction of the beginning of section IV, from about 37’30” to 38’30”. Note the unevenly-spaced bell tones in the upper staff.
V (45’30”-58’20”) constitutes the longest unbroken section of unique material in the
work. Timbrally, it’s so dense and spectrally underdetermined that it’s easier to describe
its constituent elements than to attempt a summary description of its total harmonic
effect. Based on the spectral peaks, it appears that the texture is comprised of two
superimposed harmonic timbres with fundamentals approximately a major third apart,
at 50Hz (~G1, m≈31.34) and 62Hz (~B1, m≈35.07). Several other dissonant partials,
along with bands of filtered noise, complicate any single reading of V’s harmony. Starting
around 47’30”, filter plucks ring at approximately D#4 and G#4 (Fig. 3.8). VI (56’00”-61’07”)
is the final material heard in the piece, constituted by a band of dark, filtered noise that
gradually gives way to a brighter and softer band of noise, through which cuts a faintly
keening sinusoid around 2515 Hz (~D#7, m≈99.18). VI offers a kind of elliptical, “open” ending that doesn’t so much provide closure as it does point towards an ongoing process. I think a spectrogram better conveys the character of VI (Fig. 3.9).
Fig. 3.8 Reduction from 45’30” to about 48’00”, showing the cloudy aggregate of tones, a low partial melody, and, toward the end of the passage, the plucked filter tones.
This chapter will end with a short
reading of this final section, but before we get there, I’d like to see what can be surmised
about Kyema’s structure, now that we have synopses of its different constituent musics,
as well as their chronological arrangement in the work. After all, Radigue is a composer,
and while intuition evidently figures heavily into her working process, so too do an obstinacy and intentionality that were partially constrained by available technology. As
outlined in the previous chapter, Radigue’s three-tape machine, one-take recording
process necessitated an arrangement of the progression of musical events for a given
composition well in advance (see pages 60-65). In order to ensure a successful recording
session, all actions in this scheme would need to be precisely timed, from the length of
blank segments of tape between sections on each reel, to the length of each fade-in and
fade-out executed on her mixing console. I take these technologically-mediated
constraints to mean that we can interpret any recurrence of material in Kyema as
significant to Radigue’s intended poetic or narrative conceits.
Fig. 3.9 Spectrogram for the final section of Kyema, what I call VI (linear scaling). Note the band of noise with peak energy between 600-800Hz and the lone sine tone at 2515 Hz.
Radigue explains that Kyema “is constructed with the six stanzas at the end of the [Bardo Thödol]. These
[stanzas] summarize the six intermediary stages that constitute a continuum of the
evolution of consciousness through the transmigration and development of these
‘Bardos’, meaning ‘intermediary states’…My composition respects the text” (Radigue and Eckhardt 2019, 140). I’m unqualified to offer an authoritative Tibetan Buddhist reading of Kyema, but I do see some narratively meaningful, dramatically significant patterns in Kyema’s structure. Specifically, I would say that one of the major formal conceits of the work
concerns iterative, successively grander departures away from familiar material. We can
see that the digressions from the familiar, introductory I.1 material lengthen as the work
unfolds, up until the point that I.3 itself becomes the contrasting material against the far more
timbrally and thematically remote material of V and VI. As the familiar and unfamiliar trade
roles, the structure perhaps evinces some kind of transformation of self or other. Along those
same lines, the Bardo Thödol’s evocation of cyclical death and rebirth may have informed
Radigue’s use of a cyclically-developing structure, in which familiar material returns, albeit
transformed each time. As Radigue evidently sought to somehow evoke the six Bardo in
her work, so too have I sought in my analysis to identify six reasonably distinct kinds of
music used throughout the piece. In her annotated works list (Radigue and Eckhardt 2019,
184), Radigue names the six intermediary states in Tibetan, French, and English:
1. Kyene—la naissance [birth]
2. Milam—la rêve [dream]
3. Samtem—la contemplation [contemplation]
4. Chikal—la mort [death]
5. Chönye—la claire lumière [bright light]
6. Sippaï—traversée et retour [crossing and returning]
I do not know how my six kinds of
discrete musical material correspond, if at all, with the six intermediary states in the Bardo
Thödol. It’s entirely possible that by “intermediary state”, Radigue was referring not to any
discrete sections of music, but rather the points of overlap between contiguous sections.
Both composer and the intrepid reader may even object to the way I’ve cut up the piece in
general, hearing affinity where I hear the opposite, and vice versa. As any final reading
would depend upon an authoritative interpretation of a liturgical text in a spiritual tradition
in which I have no training, this is ultimately a task I’m not prepared to undertake. Such a
reading would also entail an authoritative interpretation of Radigue’s own program notes
for Kyema, a text which, while comparatively rich in detail, doesn’t specify whether the six
states are evoked musically through six temporally discrete movements, or as otherwise
differentiated types of musical material. Radigue’s lack of specification here is somewhat
notable, as in other multi-section works where she intends to programmatically evoke a
sequence of events, Radigue is typically quite specific about the correspondence between
musical sections and their poetic intent. Take for example Jetsun Mila (1986), wherein the
nine sections evoke nine phases of the life of the eponymous Tibetan spiritual leader
(Radigue and Eckhardt 2019, 184); or Songs of Milarepa, where several distinct songs are
labeled in the chronological order of their appearance in the composition (2019, 182-184);
or Adnos III, which is structured in four continuous movements each named by the
composer in a program note (2019, 182). While Radigue does say to Eckhardt that “[her]
composition respects the text” (2019, 140), the question of how Kyema evokes the six
Bardo is not answered definitively. With the above disclaimers in mind, I offer this highly provisional outline of the work’s structure, using the six Bardo as names for each section:
I.1: 1st Birth
II.1: 1st Dream
I.2: 2nd Birth
II.2: 2nd Dream
III: Contemplation
IV: Death
I.3: 3rd Birth
V: Bright light
VI: Crossing & returning
3.2 In the beginning
We step down from this high place to find the study of Kyema’s minutiae no less rewarding. In what follows I provide
more extensively detailed descriptions of various moments from the work, starting with the
beginning and proceeding in a mostly chronological fashion. As is typical of a Radigue
synthesizer work, Kyema opens with a slow fade-in from nothing—or, almost nothing.
Listen closely and you will find the crackling noise on the tape is quite audible. All of
Radigue’s electronic works have some amount of constant noise, but Kyema’s signal-to-noise ratio is especially low. Whether this effect is intentional or accidental is rendered moot by Radigue’s documented fondness for the irregularities of analog recording
media. Her preference for electronic aberrations is evident in the early feedback works
discussed in Chapter 2, but it is also clear in this recollection to Eckhardt, where the
composer describes the pleasure she took in controlling the level of tape hiss during live
diffusions of her synthesizer compositions: “…with analogue sounds, there is always
background noise, the noise of the tape recorders in the range of 10,000 to 12,000 hertz. I
always started playing by completely eliminating that range. Then I would put it back in,
little by little, to restore the presence of intriguing sounds in this frequency zone. At the
end, I did the reverse, in order to return to silence” (2019, 121). While the crackling noise
found throughout Kyema’s mid to lower range is, technically speaking, distinct from the
tape hiss described above, I think Radigue’s recollection bears relevance to Kyema, as
it strongly resonates with her demonstrated commitment to treating electronic sounds as
forms of life. Rather than using equalization to remove unwanted frequencies—which
would be a typical application of EQ—she instead uses it to give the noise a dramatic
entrance and exit. Framing noise as an object of aesthetic contemplation may cause more
than a few professional sound engineers to balk; but, as Jules Negrier at the Groupe de Recherches Musicales was keen to remind me, Éliane was not an audiophile (personal communication with the author, 4 February 2021). Her accepting and deferential posture towards noise in and as music, while redolent of
postwar experimentalists in the shadow of Cage, is nonetheless enveloped by one of the
precepts of her aesthetic ethos, namely that sounds should emerge and depart as
gradually as possible. On its own terms, Kyema’s persistent, crackling noise floor
constitutes an important element of the work, framing through spectral contrast the
generally periodic character of the other sounds, which, to my ear seem to emerge from
the noise itself. From Kyema’s crackling near-silence emerges a classic Radigue texture: a
polyrhythmic jostling of various tones at various frequencies, all beating at rates
independent of one another. In a section from Chapter 2 called “The game of partials”, I
suggested that this sort of texture is likely what Radigue had in mind when she invokes the
phrase le jeu des harmoniques in her writings and interviews; however, in my analysis I’ll
be calling this texture a ‘mobile’, in order to preclude any confusion from a potential
misattribution of her phrase to specific passages of music. In a Radigue mobile, some
elements are mixed somewhat louder than others, gently implying a hierarchy of
attentional focus. Within the first minute of this opening mobile, the element that we hear
front-and-center is likely the beating sinusoid at around 220Hz (A3, m≈57), which completes a cycle of beating between four and five times a second. Notably, there is a slight variation in the rate
of beating of this tone over time; listen closely around 1’55” and you’ll hear that the beating
of the 220Hz tone gradually speeds up and slows down. This variation over time has the
musically compelling effect of preventing the resultant texture from ever sounding too
rhythmically ‘even’ or predictable. As to how this technique of gradually changing beat
frequencies was achieved, one can only speculate. It's quite likely that Radigue tuned two of her ARP 2500's five oscillators within a few Hertz of one another, with fundamentals in the neighborhood of 220Hz (or perhaps an octave lower). Manually adjusting the base frequency of one of these closely-tuned oscillators would change the resultant beat
frequencies, but we already know that Radigue considered this technique verboten. And
while there is always some natural drift in analog oscillator frequencies caused by ambient
temperature changes, this explanation is also unlikely to be the cause of shifting beat
frequencies, as the ARP 2500 oscillators were generally very stable, with their pitch drifting
only about 0.1% per hour across a wide range of temperature conditions (ARP 2500 Manual
1970). Based on my own experiments with Radigue’s ARP and subsequent study with my
own modular synthesizer, I was able to recreate the effect heard in Kyema another way. I
tuned two oscillators close together and then engaged the pair in a basic frequency
modulation scheme, where one oscillator serves as the carrier and the other serves as the
modulator. The two oscillator outputs were then summed and routed to the output of the
system so that the results could be heard. Then, through careful and gradual adjustment of
the FM index knob on the carrier, I was able to exert quite a lot of control over the rate of
beating. If the lower of the two oscillators is serving as the modulator, the resultant beat frequency will speed up as the index increases and their base frequencies are driven further apart. When the higher of the two oscillators is modulating the lower of the two, the beat frequency will slow down as their frequencies get closer together, then speed up again as they pass. The trick here may have something to do with
what exponential FM does to the perceived pitch of the carrier. In 1975, Bernard A.
Hutchins demonstrated that as the depth of modulation in an exponentially-controlled
oscillator increases, so too does the perceived pitch of the oscillator’s fundamental (202).
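Hutchins' observation can be illustrated numerically without any synthesis hardware. The sketch below is my own illustration, not Hutchins' derivation: it time-averages the instantaneous frequency of an exponentially modulated oscillator, f(t) = f0 · 2^(d·sin(2πfm·t)), where d is the modulation depth in octaves. Because 2^x is convex, the average exceeds f0, and the excess grows with d:

```python
import math

def mean_freq_factor(depth_octaves, steps=100000):
    """Average of 2**(d*sin(phase)) over one modulator cycle.

    This is the factor by which exponential FM raises the carrier's
    time-averaged frequency: it is exactly 1.0 at zero depth and grows
    with depth, mirroring the pitch upshift Hutchins describes.
    """
    total = 0.0
    for i in range(steps):
        phase = 2 * math.pi * i / steps
        total += 2.0 ** (depth_octaves * math.sin(phase))
    return total / steps

for d in (0.0, 0.33, 1.0, 2.0):
    print(f"depth {d} octaves -> mean frequency factor {mean_freq_factor(d):.4f}")
```

At small depths the factor barely departs from 1.0, which matches the claim that subtle beat-rate changes require working at low index values; by two octaves of depth the time-averaged frequency has risen by roughly half.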
This change in pitch follows the curve of a modified Bessel function, in which the change in pitch is rather gradual at a lower index of modulation before becoming quite dramatic as the index increases (Fig. 3.10).

Fig. 3.10 Hutchins (1975, 202) shows how increasing the depth of modulation produces an increasingly greater upshift in carrier pitch.

One may
also recall that frequency modulation spectra increase in complexity as the index of
modulation increases. Therefore, in order to achieve very subtle variations in the rate of
beating, along with minimal timbral change or perceptible pitch variations, one would need to focus their attention on deliberate adjustments of the carrier's index of modulation at low values (which, according to Hutchins, would generally be less than one-third of a volt per
octave). In keeping with my intention to bring forth an image of the composer at work, I
would suggest with measured speculation that when we hear this variation in beat
frequencies in Kyema, we hear the results of Radigue very slowly and carefully rotating a
knob within a span of just a few millimeters. As we will soon see, that level of care extends
to nearly every aspect of Kyema's composition and recording process.

3.3 Timbre as melody

I'd like to open this section with a longer passage from Julia Eckhardt's interview
with Radigue, as I think it beautifully introduces the question of how Radigue approached
melody and timbre in her synth works in general, and Kyema in particular: “In my electronic
music, I never bothered to define the intervals, since I was working on sustained sounds,
for which the construction relied on an initial construction of various frequencies, which
stayed the same throughout a given piece. It was the proportion of the constituents
alone—[indices of modulation, mixing levels, redistribution of spectral energy through
filtering]—that made the sound evolve ‘from the inside’ through different types of internal
modulation… I’m not looking to construct a melody, but to frame the soft singing being
shaped by itself through the interactions within the sound” (Radigue and Eckhardt 2019,
164-67). On the one hand, there is the composer's intention as expressed through
“framing”; on the other, there is sound’s imagined autonomy, which “through the
interactions within the sound”, is “shaped by itself”. Radigue’s dialectic conception of
autonomy and its deference is by now quite familiar: rather than seeking to build a melody
in advance, she would look to the constituent elements of a given sound (namely, its
spectral components) as a constraining yet generative factor in the creation of melodic
ideas. Around 1’30”, we hear the first suggestions of how she would apply this aesthetic
precept in Kyema: an A5 (~880Hz, m≈81) subtly emerges from the manifold, which is then
chained together with a B5 (~990Hz, m≈83.04) and C#6 (~1100Hz, m≈84.86) to create a
sort of justly-tuned, ‘mi-re-do’ melody. Even though I call this passage a ‘melody’, it’s quite
important to distinguish this Radigue-specific notion of melody from that of melody as an
analytically distinct unit in European classical music. For Radigue, melody is deeply linked
to timbre; this much is evident when looking at a spectrogram for a segment of I.1 (Fig.
3.11). Here we see how Radigue uses resonant bandpass filtering in order to bring forth
partials that were already present in the sounding music, and by slowly and manually
varying the cutoff frequency on one of Jules’ 1047 multimode filters, subsequent partials
are emphasized one by one. By simultaneously bypassing the filter, and filtering that very
same signal, Radigue effectively blends together these otherwise distinct concepts of
melody and timbre. The resultant pitches of this ‘partial melody’ correspond strongly to
what would be the 8th, 9th, and 10th partials of a harmonic signal, perhaps a ~110Hz
sawtooth, which has energy at all integer multiples of the fundamental frequency;
however, without a patch score, it is not possible to be absolutely certain of what the
filtered signal is. The picture is complicated by the fact that in certain circumstances,
exponential frequency modulation (the only form of FM available on Radigue's synth)
yields harmonic spectra by upshifting the carrier’s frequency, such that it lies on some
integer multiple of the modulator’s frequency (Hutchins 1975, 202). In these more complex
cases, the resultant collection of sidebands doesn’t really square with our notion of
‘harmonics’ and their cardinality—that is to say, while this partial melody might sound like
the 8th, 9th and 10th harmonics, that doesn’t necessarily mean a sawtooth wave serves as
the input signal of the filter here. In opposition to the idealized example of a sawtooth
wave, whose partials lose amplitude at a rate inversely proportional to their frequency
(e.g., the 10th harmonic has one-tenth the amplitude of the fundamental), the upshifted, harmonic timbres
created by exponential FM contain a less systematic distribution of spectral energy,
meaning some of the higher sidebands can be appreciably louder than lower ones (Hutchins 1974, 203-204).

Fig. 3.11 Resonant bandpass filtering periodically emphasizes the beating partials which are already present in the mix. Note the occasional "bumps" in each line of spectral energy: these are moments in time when the bandpass filter emphasizes a particular partial.

Given that these partial melodies seem to be playing with
harmonics above what would be the 7th or 8th overtone, the upper region of a classic
sawtooth’s spectrum may have insufficient power to produce clearly audible partial
melodies. Our detective work is further obscured by Radigue’s patching technique, which
we’ve already seen to be quite complex and nonhierarchical. A single sound source could
be routed to many destinations, simultaneously producing many altered versions of a
signal acting in both the domains of audio and control voltage. In any case, the rhythmic
contour of this partial melody from around 1’30” to roughly 4’30” is certainly wandering,
unfolding in a rhapsodic way with phrases of successively greater lengths. In terms of
loudness, Radigue has mixed this partial melody quite evenly with the rest of the music,
making it difficult to even tell if this material has any current, let alone future significance;
but, as Kyema progresses, this technique of building melodies out of partials will become
an important signifying and organizational element in the composition. And although the
synthesis technique is generally the same throughout the work (i.e., using a bandpass filter
to select partials from a timbrally-rich input signal), there are meaningful differences in the
various applications of this technique. In tracing the development and variation of this
technique in Kyema, a certain resonance emerges with respect to Radigue’s dialectical
notion of sound/composer autonomy. Starting around 9’00”, II.1 further develops the idea
of building melodies out of beating, resonantly-filtered partials, but the partial melody here
is less clearly derived from a single harmonic series. I would speculate that Radigue
generated a larger collection of partials from which to build the melodic fragments by
superimposing two closely tuned harmonic spectra. An unequally-spaced scale could
then be derived using the resonant bandpass filter, yielding something more like 'ti-do-re-mi-fa-sol-le.' In deliberately rotating the bandpass filter's cutoff frequency potentiometer, various partials in this somewhat 'chromatic' collection would be selected each in turn;
however, we don’t hear every partial in this aggregate—some appear to be ‘skipped’ over
(Fig. 3.12). This curated presentation of a sound’s constituent elements surely brings to
mind Radigue’s idea of “framing” a sound’s “soft singing”; however, this connection to the
composer’s philosophy doesn’t bring us much closer to a clear and technical explanation
of how the effect 91 Fig. 3.12 The upper two staves show the tunings for partials with evenly
spaced frequencies, yielding approximations of harmonic series for 71Hz (~C#2, m≈37.42)
and 67Hz (~C2, m≈36.41). Because this figure concerns the harmonic series, I also use
Marc Sabat’s extension of the Helmholtz-Ellis notation for just intonation The lowest staff
shows an approximate reduction of the collection of partials used for the partial melody
used in section II.1, with enharmonic respellings of certain partials used in order to convey
modal or scale affiliation. m ≈ 70.9 72.4 74.5 76.3 77 79.4 80.2 might have been achieved. I
think a compelling answer to that query emerges a bit later in Kyema, when section I.2
begins to fade in around 14’40”. Over the next several minutes, we hear yet another
development of Kyema’s partial melody technique. Here, we get the impression of a
melody built out of a selection from something quite like the major scale (Fig. 3.13, lower staff); however, if Radigue was simply rotating a cutoff frequency potentiometer throughout this section, she would naturally run into other partials at equally spaced points in the spectrum, which we would then hear on the recording (Fig. 3.13, upper staff).
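As an aside, the fractional m values given in these figures and throughout this chapter follow the standard fractional-MIDI conversion m = 69 + 12·log2(f/440). A few lines of Python reproduce the cited values for partials of a 110Hz (A2) fundamental:

```python
import math

def midi_number(freq_hz):
    # fractional MIDI note number; A4 = 440 Hz corresponds to m = 69
    return 69 + 12 * math.log2(freq_hz / 440.0)

# selected partials of 110 Hz, matching m values cited in the analysis
for n in (8, 9, 10, 12, 15):
    f = n * 110.0
    print(f"partial {n}: {f:.0f} Hz -> m = {midi_number(f):.2f}")
```

Partials 8, 9, and 10 of 110Hz land on m = 81.00, 83.04, and 84.86, the same values given above for the justly-tuned A5-B5-C#6 melody.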
So, how’d she do it? I offer this measured suggestion. Using the ARP 2500’s two multimode
filters, a signal chain like the following could be created: two filters are routed in series,
with the output of the first routed to the input of the second; the first filter’s notch output
removes unwanted harmonics from the available set, and the second filter’s bandpass
output, at a sufficiently high resonance, selects partials from the reduced set (Fig. 3.14).
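The two-filter chain just proposed can be sketched schematically. The toy model below is a frequency-domain caricature of that patch, not a real filter implementation: the 'notch' simply deletes partials near its center frequency, and the 'resonant bandpass' weights the survivors by a simple resonance curve. All numbers are illustrative:

```python
import math

def notch(spectrum, f_notch, width_hz=10.0):
    # crude notch: drop any partial within width_hz of the notch frequency
    return {f: a for f, a in spectrum.items() if abs(f - f_notch) > width_hz}

def resonant_bandpass(spectrum, f_center, q=20.0):
    # weight each partial by a simple second-order resonance curve
    out = {}
    for f, a in spectrum.items():
        x = q * (f / f_center - f_center / f)
        out[f] = a / math.sqrt(1.0 + x * x)
    return out

# an idealized 110 Hz sawtooth: partials at n*110 Hz with amplitude 1/n
saw = {n * 110.0: 1.0 / n for n in range(1, 16)}

stage1 = notch(saw, 7 * 110.0)             # first filter removes the 7th partial
stage2 = resonant_bandpass(stage1, 880.0)  # second filter sweeps onto the 8th

loudest = max(stage2, key=stage2.get)
print(f"loudest surviving partial: {loudest:.0f} Hz")  # prints: loudest surviving partial: 880 Hz
```

Sweeping f_center across the reduced set, as in the hypothesized patch, would select the remaining partials one by one while the notched partial stays silent.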
The output of this second filter is then passed on to the output for capture on the tape
machine, recording the effect. When the final restatement of the I material, what I call I.3,
begins to fade in around 42’30”, it’s a mixture of old and new. On the one hand, this return
to the opening material grounds the work back in familiar territory before its murky
conclusion, but there are meaningful differences in how the partial melodies are
constructed. They are now far more diffuse, taking the form of grand yet distant spectral
swoops that reveal harmonics with each rise and fall (Fig. 3.15). These spectral swoops are
far more periodic and detached than prior emergences of partial melodies. We are just
hearing each partial, one after the other; no more melodies per se, no more contrivances
or signal processing sleight-of-hand.

Fig. 3.13 Upper staff shows the theoretical gamut of available partials based on equally spaced frequencies, approximating the harmonic series of an A2 (110Hz). The lower staff shows the curated set of partials as heard in section I.2 (14'45" to about 20'00"). The 7th, 11th, 13th, and 14th harmonics are removed from the set. m ≈ 76.02 81 83.04 84.86 88.02 91.88

Fig. 3.14 Dual filter signal path that may have been used to curate sets of partials from a complex timbre.

With the full
progression of the ‘partial melody’ technique in view, I would argue that I.3 constitutes a
narratively meaningful departure from its earlier iterations. Now, we’re just hearing the
sound ‘as it is’, without any elaboration, removal, or notch-filtered sleight-of-hand.
Drawing on Radigue’s comparison between filters and the ear in Chapter 2, I argued that
resonant bandpass filtering metaphorically models a shifting of aural focus. In this final
statement of I.3, that very notion of an attentional, directed subjectivity finds rhythmic
accord with the periodic, undulatory amplitudes of the beating partials themselves
(battements).

Fig. 3.15 Reduction of the crossfade between sections IV and I.3, followed by the crossfade between I.3 and V. Note the smooth and gradual revelation of all harmonics in I.3; none are excised using notch filters or clever mixing techniques.

Now equipped with a better technical understanding of her instrument, I think in listening to
this passage we find a beautiful embodiment of Radigue’s disciplined acceptance of
sounds ‘as they are’. At the same time, it is Radigue who, as the work’s author, devises the
context in which this acceptance of 'sound as itself' can be allegorized.

3.4 Harmony as experience

Through some close reading of passages in Kyema, I think it's clear that
‘melody’ and ‘timbre’ in Radigue’s synthesizer works are inextricably connected, though
this still leaves open the question of harmony. As with our previous discussion of ‘melody’,
we’ll need to qualify the idea of ‘harmony’ in order to describe how the concept is
applicable to Radigue’s synthesizer music in general, and Kyema in particular. Generally,
this study of harmony in Kyema addresses two questions: 1) what is combined with what;
and 2) what can be said about the effect that this combination may have on a listener? I’ll
frame this topic with a brief digression: in a 1974 program note for Transamorem/
Transmortem, Radigue engages in dialogue with an anonymously-attributed text. When her
nameless interlocutor states, "the consonant things are vibrating together" (the italicized passages are given in both French and English in the original program note), Radigue asks, "where is the changing point? Within the inner field of perception, or the external reality
of something on the way to becoming”; the interlocutor then deduces, “time is no longer
an obstacle, but the means by which the possible is achieved” (Radigue 2011, 4). On the
one hand, this text’s willful ambiguity falls squarely in line with Radigue’s ongoing and
demonstrated commitment to fostering intersubjective encounters with her audience—
there’s many ways to The italicized passages are given in both French and English in the
original program note. 27 95 interpret such an elliptical text. On the other, I believe close
study of Radigue’s approach to harmony in Kyema not only reveals an authoritative
interpretation of this text, but also discloses the composer’s rhetorical position as a
committed advocate of composer-listener intersubjectivity. An illustrative passage in
Kyema begins around 18’00”. Here we find ourselves in the middle of a long crossfade that
superimposes I.2 and II.2. I give a reduced transcription of the passage in Fig. 3.16. As a
superimposition of an ‘A major’ type harmony with some sort of ‘F minor’ harmony, this
passage produces a remarkable harmonic ambiguity through mediant relationships, even
as I.2 gradually fades. This ambiguity is surely the sort of effect Radigue had in mind while
writing her retrospective aesthetic treatise, wherein she affirms the virtue of “[the] freedom
to be immersed in the ambivalence of continuous modulation with uncertainty of being
and/or not being in this or that mode or tonality. The freedom to let yourself be
overwhelmed, submerged in a continuous sound flow, where perceptual acuity is
heightened through the discovery of a certain slight beating, there in the background,
pulsations, breath” (Radigue 2009, 49). By 21’00”, the I.2 material has faded completely,
but this does very little to actually alleviate the harmonic ambiguity. What remains is a kind
of ‘pseudo-chord’ that rather subtly confounds the boundary between harmony and
timbre. This ‘suspended’ harmony turns out to be a kind of false return, as Radigue
gradually fades in the remaining partials of II.2 starting around 21’22”, filling out the
harmony with a clear fifth and root member. Around 22’30”, the music finally feels as
though it has definitively ‘arrived’ at a new point of stability; however, a careful study of
what follows reveals an additional layer of ambiguity. Throughout her synthesizer works,
Radigue demonstrates a propensity for superimposing multiple, closely-tuned harmonic
spectra in order to produce kaleidoscopic undulations of amplitude between ever-so-
slightly dissonant partials. That much is a given, but when we consider the implications of
this approach to harmony from not only an acoustic but also an otoacoustic perspective,
complex ambiguities arise that are not easily resolved by even highly precise FFT analyses
or other quantitative measures. Due to the presence of multiple, closely-tuned harmonic
spectra, in practice it can be very difficult for the listener to differentiate between the
waveform beats caused by closely tuned partials, and dissonating combination tones
which may be produced by those very same partials. Radigue’s occasional use of ring
modulation further confounds this distinction: in producing the sum and difference frequencies of all the partials for its two inputs, a ring modulator makes audible new sounds that share a strong spectral affinity with combination-tone otoacoustic emissions. Those emissions, even if strongly perceived in the inner ear, will not show up on the FFT for Kyema; however, any sum and difference tones produced by ring modulation will be present in the FFT.

Fig. 3.16 Reduction of the crossfade between sections I.2 and II.2

Fig. 3.17 2f1-f2 combination tones for six prominent peaks in Kyema from 24'24"-24'26"
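The arithmetic behind a figure like Fig. 3.17 is easy to reproduce. The sketch below computes 2f1-f2 combination tones for every pair in a set of peaks; the peak values here are invented for illustration, not the measured peaks from Kyema:

```python
from itertools import combinations

def cubic_difference_tones(peaks_hz):
    # 2*f1 - f2 for every pair with f1 < f2 (the classic DPOAE frequency)
    tones = set()
    for f1, f2 in combinations(sorted(peaks_hz), 2):
        tone = 2 * f1 - f2
        if tone > 0:
            tones.add(round(tone, 1))
    return sorted(tones)

# hypothetical peak set (NOT the measured peaks of Fig. 3.17)
peaks = [220.0, 275.0, 330.0, 440.0]
print(cubic_difference_tones(peaks))  # -> [110.0, 165.0, 220.0]
```

Note that 220.0 reappears in the output: a combination tone coinciding with a peak already in the mix, which is precisely the 'recursive' situation described below, though here it arises by construction in the toy data.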
Further complicating the issue, close study shows that Radigue will often place additional
tones into the mix which happen to be close approximations of possible combination
tones, yielding dissonances of a recursive complexity. An example drawn from the FFT at
24’24” for Kyema demonstrates this principle in action (Fig. 3.17). This chord is a
somewhat idealized example, in which I have taken six of the most prominent peaks over a
two second window, with each pitch representing a sinusoid at a particular frequency.
Below the chord I give the frequencies for a particularly common and well-studied distortion product otoacoustic emission, which occurs at 2f1-f2 and is also known as the cubic difference tone (Frank and Kössl 1996, 105-106); in red, I've noted combination tones that, if audible, would dissonate with frequencies that are already substantially present in the FFT (within -20dB of the loudest peak). With this transcription we can readily see that
Radigue's approach to harmony and dissonance severely complicates musical analysis by
superimposing otoacoustic and acoustic phenomena. A healthy amount of skepticism
here would not be unexpected; after all, we’re just looking at two seconds of music, cut off
from all context. How could my argument possibly be relevant to a listener’s real-time
apprehension of Kyema? I would argue that both the extraordinarily gradual progression of
change and the utilization of minimal materials allows the listener to attend to these
complexities in real time. Even so, one might reasonably ask, doesn’t all music that uses
irrationally-tuned harmonies confound acoustic and otoacoustic perception? Of course
this is true, but the reason why this confounding merits particular emphasis in Radigue's case is because the music unfolds at a comparatively glacial pace, affording a level of attentiveness to these phenomena, and to the complexities of hearing. (A caveat: the FFT always introduces additional distortion. Rounding each peak to the nearest whole Hertz value introduces some complexity into the harmony that might not actually be present, for even though the bandwidth of each bin is extraordinarily small, less than 0.7Hz, the peaks are rounded to the nearest Hertz.) While any music
that uses instruments with rationally-tuned spectra playing irrationally-tuned harmonies
(for example, European classical music, hip-hop, and jazz) would produce some version of
this effect, in Radigue’s work I argue that we can take these phenomena to be constitutive
of the music, rather than merely incidental to it. This is possible only because of the work’s
selectively minimal materials, and its long durations over which changes unfold very
gradually, thereby sensitizing the listener to these phenomena in real time. There’s another
fascinating example of this intersubjective approach to harmony and dissonance found
later in the work. At 46'30" and throughout Section V, the bass comprises two
resonant sinusoidal peaks around 50Hz (~G1, m≈31.34) and 62Hz (~B1, m≈35.07). On a
purely theoretical level, this dyad would be reasonably consonant, given that it rather
closely approximates the 5/4 syntonic major third. At the same time, the difference of their
frequencies is 12 Hz, which means that a rapid beating at this difference is produced,
complicating our ability to resolve these two tones into distinct pitches, and obscuring the
sense of a strong ‘fundamental’ or ‘root’ through the rapid undulation in amplitude. The
effect is quite unsettling. If Radigue contends that there’s a freedom to be found in
ambiguity, then the relevance of Transamorem/Transmortem’s program note to our
present discussion should be sufficiently obvious: “where is the changing point?” she
asks, but does not definitively answer. Instead she says it is either “within the inner field of
perception, or the external reality of something on the way to becoming.” In practice, it is
surely both of these, for what we have in these passages in Kyema is a remarkable
articulation of multiple simultaneous modes of listening: the acoustic and the
otoacoustic, represented by waveform beats and combination tones respectively. When
we also consider the notion of resonant bandpass filtering as representative of a shifting
attentional focus, we can then add to this mix Radigue’s concern for the psychoacoustic:
resonant filtering articulates certain partials, thereby modeling a higher-level, interpretive
process of differentiation between figure and field. With respect to harmony, the
apprehension of consonance or dissonance in a Radigue composition is, therefore, a
whole being, mind and body endeavor, one in which the full and active participation of the
listener is welcomed. Even so, there are some interesting paradoxes with respect to
Radigue’s authority as composer, and the audience’s authority as participants, which I will
attend to at greater length in the epilogue which follows this chapter.

3.5 Into the breaks
For Kyema’s full duration of sixty-one minutes and seven seconds, there will be no
complete silences until the work’s final fadeout. This continuous absence of total
silence—which in affirmative terms we could call the continuous presence of sound—is an
important part of Radigue’s aesthetics in general. In Chapter 2, I argued that this resulting
illusory continuity obscures the composer’s direct interventions by disrupting any direct,
moment-to-moment connections between haptic interventions (e.g., tape splices) and
sonic outcomes (e.g., sudden changes in timbral or thematic content). Through the
technique of slow, interlocking crossfades, we are instead given the impression that the
music is emanating from somewhere else, unfolding at its own pace and on its own terms,
an impression that is strengthened by the composer’s practice of concealing herself during
presentations of these works. Although in retrospect she considered this endeavor only
partially successful, the consequent impression of infinitesimal development through
a certain recording and presentation practice nonetheless constituted Radigue’s best
attempt to create “an unreal, impalpable music appearing and fading away like clouds in a
blue summer sky” (Radigue 2009, 49). With that pastoral metaphor at hand, we might think
of the composer’s ideal encounter with her music as analogous to observing the flowing
river or the turning earth: the music just does what it does without the composer’s
involvement. Situating the composer outside the flow of the music might give the
impression that she is not in control, merely and only accepting things as they are, but I’ve
already shown that Radigue was tremendously deliberate in her technical approaches to
both synth-ing and recording. Along similar lines, as listeners to Radigue’s synth music, we
may similarly feel somewhat passive in our reception of it, simply allowing the music to
wash over us or, to borrow Tom Johnson's memorable phrase from a review in The Village Voice, watching as it seems to "ooze out of the side wall" (1973); however, we've seen how
Radigue went to great lengths to afford participatory, intersubjective encounters for
listeners who approach her compositions. This can be the case even if the capacity for
such an encounter is, by virtue of the composer’s authority, “unevenly realized”. As one of
the major gambits of Radigue's aesthetic thought, she insists on the listener's ability to perpetually re-calibrate their interpretation of a piece, supporting it in key ways: employing long durations and minimal materials; foregrounding sound's physical fact (e.g. through waveform beats and the emphasis of partials as constituents of sound); and aestheticizing oto- and psychoacoustic experience through combination tones and resonant filtering. With
all this talk of smooth continuity, it is perhaps surprising to find that Kyema is a work of
ruptures, both small and large. These moments of discontinuity merit our attention, as in
each of them we shall find complications and elaborations of the themes of
intersubjectivity and self-abnegation that I've been weaving into this account of her
synthesizer works. For the first of these moments, we return to what by now is a rather
familiar section in our analysis of Kyema: the long crossfade that begins around 18’00”.
Based on my detailed description of Radigue’s recording process in the previous chapter—
not to mention all the smooth crossfades previously heard in the work—we may
reasonably expect that I.2 will fade out imperceptibly to nothing; however, listen closely
and you will hear that I.2 suddenly drops off at 20’53”, leaving II.2 hanging in the aether. To
account for what’s happening here, I want to linger for a moment on the technical side of
Radigue’s recording process. In Chapter 2, I showed how Radigue spaced segments in a
given composition with blank sections during the final recording session (page 63, Fig. 2.8).
She did this so that while fading in the next segment of music, it would begin playback at
the proper time within her meticulously planned timing scheme. The sudden arrival of an
intercalary blank segment might be all that’s happening at 20’53”; however, Radigue’s
synth recording process has been shown to be rather unforgiving of error, in that any major
slip-ups necessitated a restart. Given both Radigue’s fastidious attention to detail, and her
position as the work’s sole author, I would then argue that we can interpret the content of a
given synthesizer piece to be congruous with her artistic intentions at the time of the
composition. Perhaps controversially, I include anything which Radigue may have
retrospectively called a mistake: as she recounts to Julia Eckhardt, “if something went
wrong at eighty minutes, I had to start all over again. That’s why, in all my [synthesizer]
pieces, there’s always a place where something goes wrong, but there’s a point where you
need to know to stop yourself, too" (2019, 119). It's of course possible that the sudden
dropout of I.2 at 20’53” is merely an unintended consequence of Radigue’s recording
technique; however, at almost every other crossfade point in Kyema, this sudden removal
of material doesn’t take place. So even if this moment happened without Radigue’s
planning, at the very least she allowed this discontinuity to be here. This has a profound
effect. For the first time in over twenty minutes of music, we hear the subtle yet sudden
removal of prior material. While the continuity of the already faded-in II.2 material
smoothes over this transition, I think that this moment challenges what may have been, up
to that point, some fundamental assumptions about the work. In so doing it facilitates
what Elizabeth Margulis elsewhere calls a “bridge” between the listener’s inner world and
the sounding music (2007): in the context of what, until now, has been only infinitesimally
smooth crossfades, this sudden deletion of prior material heightens the listener’s
anticipation of what is to come by introducing a notion of gently disrupted continuity into
the work. Furthermore, what remains after this sudden break is tremendously ambiguous
from a harmonic and timbral perspective, only gaining clarity when the lower tones are
fully faded in around 22’15”, in what turns out to be a full return to a varied form of the II
material (II.2). The sudden break at 20’53”, and the ambiguity in which it suspends us,
makes manifest that participatory bridge between the inner world of the listener and the
sounding reality of the music—in so many words, this moment of disjuncture invites
reflection, anticipation, and an increased awareness of the passing time. We can also
discern some structural significance from this disjunctive moment when we consider
Kyema as a whole. Figure 3.19 shows how this break anticipates the disruption of what up
to now has been a rather straightforward, repeated binary structure with variation (A B A’
B’). 104 As we will soon hear, this return to II.2 is also the beginning of a departure into
quite timbrally and materially distinct music. This subsequent music, what I call III,
concludes with another notable discontinuity; however, I want to spend a little bit of time
talking about III in greater depth before we get there. Timbrally, III features a less
harmonically-organized collection of partials; a higher prevalence of noise that audibly
distinguishes itself from Kyema’s constant noise floor in order to become musically
significant; and the electronically processed loops of two Tibetan wind instruments, rag-
dung and gyaling. The former is a large horn, and the gyaling is a reed instrument whose
timbre Radigue compares to the oboe (Radigue and Eckhardt 2019, 140). This inclusion of
other instruments is not only a source of relatively substantial contrast in Kyema; it is also
comparatively rare in Radigue’s output. Of the nineteen ARP 2500 works composed
since the beginning of their collaboration in 1971, only six use additional musical
instruments: in addition to Kyema, there’s the prepared piano in Geelriandre (1972); piano,
flute, and voice in Fc 2000/125 (1972); ondes martenot in Schlinen (1974); voices in Les
chants de Milarepa (1983); and finally the Serge modular in L’Île re-sonante (2000).

[Fig. 3.19: An excerpt of Kyema’s thematic structure. The rupture at 20’53” prefigures a
dramatic departure into thematically distinct material.]

What does Radigue do with these instruments in
Kyema? As the listener will note starting around 29’20”, she positively buries them beneath
a cloud of noise and electronic tones. In addition to the substantial spectral masking
caused by Radigue’s mixing technique, there is also some unusual degradation of the
original wind instrument recordings. Radigue doesn’t specify what exactly she means
when she says she “electronicized” these instruments with her ARP (Radigue and Eckhardt
2019, 140), but that needn’t prevent measured speculation. It’s possible Radigue engaged
the instrumental samples in a process of iterative degradation through rerecording akin to
the techniques used in Opus 17, what she called “érosion électronisante”; however, given
the wealth of signal processing operations available with the ARP 2500— including
filtering, amplitude modulation, and ring modulation—those might all be fair game. By
whatever means Radigue brought these recordings into the mix, they are nonetheless
quite obscured, to the extent that a new listener to Kyema might not even recognize these
instruments as themselves upon a first or even second listening. What might we infer from
Radigue’s occluded inclusion of these instruments in Kyema? As Tibetan instruments,
their connection to Radigue’s spiritual practice is rather clear. As aerophones, the rag-
dung and gyaling also necessitate a strong connection between bodily intervention and
sonic outcome, a connection that in a performance would be anatomically evident by the
expanding and contracting chests of the performers. In this way, the rag-dung and gyaling
constitute a meaningful riposte to Radigue’s modular synthesizer practice, where I’ve
shown time and again that she skillfully hid her bodily involvement through certain
patching and recording techniques. In her decision to not only include traditional
instruments but also inter them beneath a thick cloud layer of electronic sound, we find
here an unexpected elaboration of Radigue’s discipline of self-abnegation. Specifically,
Radigue allows instruments to sound that require a more conventional connection
between haptic intervention and sonic outcome, albeit she does so in a disembodied way.
That disembodiment is achieved not only materially through the phantasmagoria of
magnetic tape, but also compositionally: first, by mixing these instruments at no greater
loudness than the synthesizer, and second, by masking them with synthesizer tones
possessed of similar spectral profiles. This fleeting, heavily distorted appearance of a self
or other supplies additional significance to the discontinuity that follows it. Around 36’30”,
Section III begins to fade out as Section IV fades in, and at 37’27”, what remains of III—
gyaling, rag-dung and all—is cut off quite abruptly. The instantaneous deletion of material
is tremendously effective in part because of how rarely such a moment occurs in Kyema; in
our analysis we note only one such prior moment, occurring at 20’53”. By disrupting the
music’s flow, this exceptional discontinuity at 37’27” refocuses our attention on the
bleeding edge of the present, as one moment gives way to the next. While I can’t
specifically address how Radigue’s Buddhist poetics inform Kyema’s structure, I do find
that the work’s concerns with the passage between life and death—which are made
explicitly clear by its title—are beautifully articulated in these moments of rupture. If, as I
speculated with the earlier discontinuity at 20’53”, this moment is the accidental
consequence of using intercalary blanks in the recording process, then Radigue’s
disinclination to re-record the piece in order to remove this break elegantly allegorizes her
longstanding desire to defer to a sound’s inner life. Furthermore, by refusing to censor the
unexpected nature of a sound’s inelegant ending, this instantaneous silencing of the
rag-dung, gyaling, and their accompanying spectral mask perhaps offers a simple yet
profound comment on the nature of loss: it just happens. In total, these subtle breaks in
continuity may ultimately register as nothing more than a blip, and perhaps I ascribe far too
much significance to what are basically minor errors that an artist chose to ignore or
otherwise allow. Even so, I think as listeners we stand to gain by allowing these interstitial
moments to speak their full weight: as markers of time’s passing; as invitations to reflect
and to anticipate; and as signifiers of acceptance and imperfection on the part of the artist.
What remains in the wake of this rupture? At 37’27”, we hear a band of mid-to-high noise
around 330 Hz (~E4, m≈64.02), likely accomplished by passing noise through a resonant
bandpass filter, with its cutoff set to about 330 Hz. Emerging rather gradually from this
fluctuating noise is a ringing bell tone at roughly the same frequency, at first masked by the
sustained noise band, but eventually growing in loudness to the point of surpassing it,
marking time with an almost abject, yet inconstant simplicity. In Chapter 2, I spoke at
some length about how Radigue uses these transient phenomena to mark important
structural moments in certain works. It’s a terribly effective technique, and to find it used
so sparingly—and in music that is almost entirely defined by continuity—points to
Radigue’s full appreciation of the potentialities of her chosen musical materials. It may
also point to an earlier chapter in Radigue’s musical life. At the conclusion of her high
school studies, Radigue was sent away from her hometown of Paris to stay with relatives in
Nice; while there, she took up studies at the Conservatoire, including a harmony class and
a harp class (Radigue and Eckhardt 2019, 61). “I took a lot of pleasure from learning the
harp”, she recounts to Eckhardt, “I liked the physical contact with the instrument, but
with great regret I have never been a good instrumentalist” (2019, 62). I cannot be sure if
these filter plucks are conscious references on the part of Radigue to her background as a
harpist, but I would very much like to differ with her appraisal of her musicianship. In the
resonant ‘plucking’ of Jules’ filters, I hear something very much like a harp, played with all
the skill of a talented instrumentalist, and at just the right moments, ushering us into
Kyema’s murky, mysterious conclusion.

3.6. Crossing and returning

Starting around
56’00”, a dark and swirling cloud of noise gradually saturates the music before giving way
around 59’00”-59’15” to a thinner, brighter band of noise accompanied by a lone sine wave
at about 2515 Hz (D#7, m≈99.18) (Fig 3.20). For a full minute, that high keening drone
sounds, bright and aloof, before fading away and leaving nothing but a distant noise that
gradually dissipates into the crackling background of the tape.

[Fig. 3.20: Spectrogram of Kyema’s final minutes. A lone sine wave at around 2515 Hz
soars over a cloud of noise.]

I have long been mystified by this ending, but I will attempt a brief analysis of this
section with the full view of Kyema at hand. By superimposing a lone sine wave with a band
of noise in mutually exclusive frequency spaces, Radigue effects a kind of ‘spectral
differentiation’: the lone sine wave soars above the noise, representing in timbral terms its
opposite: the former is clear, thin, and precise, the latter disorderly and dense. If Radigue
thinks of sounds as living, is this sine wave not alive? It would be unwise to make any
definitive claims about Radigue’s intentions here; however, the pure sine wave’s
superimposition with noise, which as Tara Rodgers notes has long been considered its
opposite (2011, 525), may point to a particular reading. I would suggest, provisionally, that
the sine wave at Kyema’s close represents some idealized, willing, and nearly bodiless
subjectivity emerging, if only momentarily, from a manifold of disorder. Eventually, that
subjectivity fades away, enfolded by the crackling tape noise that delimits, however
unsteadily, Kyema’s boundaries.

Epilogue: The limits of an avant-garde

It is difficult to
overstate my admiration for Éliane Radigue. From 1971 to 2000, she produced a body of
extraordinary work in collaboration with a synthesizer she named Jules. She did so without
much in the way of institutional affiliation or support in her home country, pursuing her
single-minded vision with frank conviction. My goal with the foregoing has been to bring to
light the details of that endeavor, and some of the music which resulted from it. I explored
how Jules and Éliane came to meet; I clarified the details of Radigue’s composition and
recording processes; and drawing on my own expertise in modular synthesis, I mounted an
in-depth study of a major composition, 1988’s Kyema. In Radigue’s synthesizer practice, we
find a compelling mixture of what Georgina Born calls “the two avant-gardes” (1995, 57).
On the one hand we have a modernist conception of technology as a tool for rational
discovery, developed through Radigue’s apprenticeship in the musique concrète studios,
and evidenced by the severe constraints she placed on herself in her recording and
composition process. On the other hand, rather than presenting her work within the high
modernist aesthetic of a Boulez or a Schaeffer, Radigue embraces the precepts of
American experimentalism. Influenced by formative contact with figures of the New York
avant-garde, particularly John Cage and James Tenney, Radigue’s synthesizer work
constitutes a highly personal melange of that movement’s overt concerns with ritual, non-
teleological structures, and unconventional presentation and performance practices.
In a Cagean gesture of supposed rebellion against the tyranny of the European modernists,
Radigue effaces her authority as the work’s sole author through an intersubjective
conception of sound as living. She also claims something akin to what Benjamin Piekut
calls “distributed authorship” (2011), in that the work she produced with her synthesizer is
described in distinctively collaborative terms. And although I do not discuss it at length
here, Radigue’s conversion to Tibetan Buddhism in 1974 may have strengthened what
might be called some of the more ‘postmodern’ aspects of her practice, by reframing the
authority of the composer in terms of acceptance and compassion, rather than outright
control or constraint—even if her music after 1974, as John Rockwell notes in a 27
December 1980 New York Times piece on Adnos II, “doesn’t sound very different”.29 And
yet, there remain some paradoxical aspects of Radigue’s synthesizer practice—and the
philosophies which have supported it—that I would like to grapple with further in this brief
epilogue. Radigue’s compositional praxis with the ARP 2500 yields a dialectical conception
of autonomy and its deference. This interplay between intuition, chance encounters, and
rigorous pre-compositional schema animates much of the discourse surrounding
twentieth-century European and American musical figures in general, perhaps epitomized
in the adage attributed to Stravinsky in Poetics of Music, “the more art is controlled,
limited, worked over, the more it is free” (1970, 63). The domineering posture of the
European high modernist is by now widely acknowledged, but it is far too simple to equate
one kind of modernism with ‘control’ and another kind of modernism with ‘freedom’. As we
will soon see, such oversimplifications can be politically naive at best.

[Footnote 29: As a journalist on the front lines of the present, Rockwell doesn’t have the
benefit of hindsight, even though I mostly agree with him here; however, Radigue’s later
synth works do show a greater facility with the instrument, and are generally less austere
in form and content: the Adnos trilogy (1974-1982) and Trilogie de la Mort (1988-1993) are
night and day in these respects.]

Even so, Stravinsky’s adage articulates a key tension that not even an avant-
garde project in the tradition of Cage seems prepared to resolve. In thinking through the
limits of such an avant-garde, it will be useful to consider Benjamin Piekut’s work more
deeply. In Experimentalism Otherwise, Piekut uses a conflict between Cage and the New
York Philharmonic during the 1964 presentation of Atlas Eclipticalis to support a larger
claim about the ways in which a politically-engaged avant-garde in the tradition of Cage
may paradoxically reproduce “the lineaments of hegemonic liberalism” (2011, 64). Cage’s
“non-intention”, as expressed through chance operations, would seem to support a
tolerant utopia of self-determination and personal freedom; however, Piekut shows that
this “tolerance” is enforced through Cage’s dominant position as the composer at the
mixing desk: for example, whenever one of the New York Philharmonic musicians decided
to “get cute” and play music other than that which Cage’s chance operations had
compelled him to compose, the composer would simply mute them (2011, 48). It is ironic,
in Cage’s case, that his aspirations to political alterity fail to elude the capture of the
liberal-democratic hegemony he might otherwise have sought to undo, making manifest
Foucault’s evergreen, if pessimistic, observation that “power relations are rooted deep in
the social nexus, not reconstituted ‘above’ society as a supplementary structure whose
radical effacement one could perhaps dream of ” (1982, 791). Still, avant-gardes dare to
dream. Radigue follows Cage, “from whom,” she remarks to Eckhardt, “we’re all
somewhat descendants” (2019, 56). And while she feels a strong aesthetic connection to
him, and acknowledges Cage’s place in musical history—for instance, when she says
to Ludger Brümmer, “I consider John Cage as our father, of all musicians…for my
generation” (2020)—there are many points of distinction in their biographies that we would
be remiss to ignore. Unlike Cage, Radigue worked almost entirely alone on her pieces from
the period under present discussion, relying on no performers other than herself to bring
them about. Complicating matters further, we have every reason to think that Radigue’s
solo compositions from about 1967 to 2000 weren’t necessarily solo by choice, but rather
solo by necessity. Lacking any institutional support from a largely “macho” culture that
she was also at odds with aesthetically—for her work, on the surface at least, did not
neatly fit into Boulez’s vision for IRCAM, nor did it fit into Schaeffer’s vision for GRM—
Radigue’s experiments in electronic feedback and the later synthesizer works came about
through sheer willpower and a commitment to her aesthetic vision (Radigue and Eckhardt
2019, 80-81). Radigue’s initial rejection by Schaeffer at the GRM is especially regrettable
when we consider the work of one of GRM’s founders, Luc Ferrari, whose most famous
piece, Presque rien (1967-70), would explore territory quite similar to Radigue’s work. In
what is sometimes called an early ‘field recording’ piece, Ferrari made Presque rien by
placing his stereo microphone at the window of his bedroom in a Dalmatian fishing village
in the early hours of morning. Kane elaborates: “The lack of audible manipulations gives
the sounds a found character. There is no obvious mixing, splicing, or editing—nothing that
seems to resemble the careful manipulation of recorded sounds… The traces of the
composer’s hand are erased” (2014, 125, emphasis mine). As we have seen, abnegation
through technology found its expression in both the postwar French and American avant-
gardes, regardless of the variety of modernism espoused by their adherents. One imagines
an alternative past where Radigue enjoyed a fruitful collaboration with GRM early on,
each party benefitting from the ingenuity of the other. Regrettably, Radigue’s work did not
find much institutional support in France until the turn of the century. Another major point
of difference from Cage is that Radigue articulates no explicit political program: for
instance, while today she recognizes with gratitude her status as an icon for young women
involved in electronic music, she also says to Eckhardt that “feminism wasn’t my fight”
(2019, 78). She acknowledges the leftist revolution of May 1968 in passing, while explaining
to Eckhardt, without any ambiguity: “I have never been truly politically involved”, preferring
instead to retain her energies for her creative practice (2019, 77). Nonetheless, Eckhardt
observes in her biography of Radigue that the composer demonstrates a commitment to
values often associated with postwar liberation movements: “nonhierarchical equality,
listening instead of commanding, attention for respectful interaction and personal
relations, holistic starting points”, and the “acknowledgement of intuition and
spirituality”—values which are likewise “mirrored in her music and creative strategies”
(2019, 33-34). I throw my lot in with Eckhardt here; even if Radigue espouses no explicit
political program in the style of a Cage (or a Cardew or a Nono), this doesn’t mean her
music is politically inert. I also generally agree with Eckhardt’s analysis here: from synth-
ing, to recording, to performing, Radigue evinces many of the values associated with
liberal democracy. Even so, there are important points of tension within Radigue’s avant-
garde synthesizer practice. These tensions play out in contexts that, as I will show, are
decidedly power-asymmetric. If we accept Foucault’s contention that “power relations are
rooted deep in the social nexus”, then we will need to specifically consider the
intersubjective aspects of her practice. The first of these, which I have dealt with at some
length in my dissertation, is Radigue’s conception of sounds as forms of life. Though
we should be wary of taking her too literally here, it must be remarked that this idea is
pervasive in her reflections on her process. As she puts it in a 2006 documentary portrait
from the Institut national de l’audiovisuel (INA), as quoted in Ben Ratliff’s 20 August 2015
piece on Radigue for The New York Times: “above all I did listen to them [the sounds] with
the greatest respect, trying to understand what they had to say. Here, you’re saying this;
oh, there you’re saying that. Do you get along well together? Yes, that seems to work. So
we can go on.” In the above recollection, we get the impression of Radigue as a facilitator
of discussion or a mediator of conflict. As we have seen with Radigue’s nonlinear
conception of signal flow, mediation would have been key to prevent the entire structure of
a synthesizer patch from passing over “thresholds that should not be crossed” (Radigue
and Eckhardt 2019, 115). This is a distinctly social dynamic, and Radigue’s role as the
mediator would seem to efface her dominant position in this space, given that she
couches her involvement in terms of deference. She listens, respects, and understands
sounds. At the same time, Radigue’s capacity to listen to, respect, and understand sounds
is situated in a specifically modernist ontology which is predicated on sound’s objectively
verifiable, constituent elements (e.g., the “audio-technical discourse”, Rodgers 2010b,
2011). Drawing time and again from an analytical vein that emphasizes the physical fact of
sound through beating, combination tones, and partials, Radigue (2009) contends that
sound is alive. How are we to know that she knows sound is alive? By her allowing the
sound to speak on its own terms—in other words, by using pulsations, beating, partials, le
jeu des harmoniques, as compositional material. Nonetheless, we might like to ask, are
these the terms by which sound comes to live? And, if Radigue is the one doing the
allowing, how neutral is her position? When Radigue situates herself as an observer of
sound, this speaks of a remarkable deference, even as it belies what is an essentially
dominant position. This fascinating contradiction calls to mind what Piekut calls the
“modest witness”, the central problem of which “is its self-invisibility, which causes open
and contingent decisions about structuring the world appear to be closed and beyond
dispute. When modest witnesses ventriloquize the objective world of nature, they dictate
right actions while giving the impression that they are merely following a path of
transcendental truth” (2012, 15). Radigue’s ventriloquism of an objectively-conceived
“nature” is well supported by her interviews and theoretical writings. In her aesthetic
thought, thinking sounds as forms of life is also taken as a given which is “beyond dispute”;
and, by modestly framing her approach to working with sounds in terms of collaboration
and deference, how could we possibly see this as her doing anything but taking the “right
actions”? In the liberal-democratic context within which Radigue’s art operates,
collaboration and deference to others are virtues par excellence. As for the
“transcendental truth” which Radigue seeks, she locates this in a scientific/scientistic
creation myth at the opening of her 2009 text, “The mysterious power of the infinitesimal”:
“In the beginning, there was the air’s powerful breath, violent intimidating tornados, deep
dark waves emerging in long pulsations from cracks in the earth, joined with shooting fire in
a flaming crackling. Surging water, waves streaming into shimmering droplets… Was it
already sound when no ear was tuned to this particular register of the wave spectrum in
this immense vibrating symphony of the universe? Was there any sound if no ear was there
to hear it? The wind then turns into a breeze, the base of the earth into resonance, the
crackling fire into a peaceful source of heat, water, the surf against the bank, cooing like a
stream. Life is there. Another level, another theme begins. An organ adapts itself to
transformation of a minuscule zone from the immense vibrating spectrum decoded into
sounds captured, refined, meaningful. Crackling, roaring, howling and growling, the noises
of life—cacophony punctuating the deep ever-present rhythm of the breath, pulsations,
beating…” (47). I find it extremely provocative that the sounds most favored by Radigue
(e.g., “breath, pulsations, beating,” and to which I would add “sustained tones with a
certain roughness”) are also considered by her to be the best approximations of a primordial
soundscape from which all life emerges (the “vibrating symphony of the universe”).
Radigue’s philosophy is doing a lot of work here. To think through some of the implications
of this discussion of primordial origins, a feminist reading would not be unwarranted—
even if the composer herself claims no explicit affinity with that constellation of thought. In
a compelling analysis of Radigue’s 1973 work Biogenesis, a composition which
superimposes Radigue’s ARP 2500 with auscultated heartbeats of her pregnant daughter
and soon-to-be granddaughter,30 Rebecca Lentjes suggests that the long drones and tones
of Radigue’s synthesizer music should not be heard as an already-having-been,
foreclosed space of predetermined outcomes, but rather “as the possibility for worlds, and
for life, beyond our reach” (2017). Lentjes cautions against an objectifying gaze towards
Radigue’s synthesizer music, criticizing Timothy Morton, who in comparing long drone
music like Radigue’s to an apocalyptically enveloping and maternal, womb-like presence
in his 2013 book Hyperobjects, “[negates] any possibility of subjectivity. [Morton] gets
stuck in the cave of Biogenesis, hearing it as an all-encompassing room or environment
which is, like the female body it represents, formless and dangerous” (2017).

[Footnote 30: The reference recording for Biogenesis can be purchased or streamed for
free here: [Link]]

While I do take Radigue’s conception of sounds as forms
of life to be, in a sense, the ‘enveloping’ context for engaging with her music, a work like
Biogenesis readily complicates this reading by recording the sounds of the living. This feels
like a very important distinction to make. What I am trying to show here is that Radigue’s
overall conception of sounds as forms of life is itself an objectifying move, and as a
deliberate aspect of her work, any engagement with her electronic music must take this
very personal sonic ontology into account. Further, we must situate her ontology within a
specifically modernist conception of “nature”. As Piekut notes in his paraphrase of Latour
(2010), “nature” in modernist aesthetics “is a deeply ideological term, one that does not
describe a portion of reality so much as create it” (2012, 12). Through a purported
consonance with the “vibrating symphony of the universe”, Radigue’s ‘naturalistic’
conception of breathing, pulsating, and sustained sounds delimits its own private portion
of reality, as much as it holds space for the “possibility for worlds, and for life, beyond our
reach”. This state of affairs has interesting ramifications for the public presentation of
Radigue’s synth work. As I have shown in Chapters 1 and 2, Radigue has long
demonstrated a commitment to fostering intersubjective encounters with her audience in
the presentation of her electronic music (e.g., Vice Versa, Etc…, Labyrinthe Sonore,
Transamorem/Transmortem); and as we saw in Chapter 3, she explicitly and knowingly
uses compositional materials that engage multiple modes of listening simultaneously:
otoacoustic, psychoacoustic, and otherwise. There is a generosity and an intimacy here
that I think mustn’t be discounted. In his 29 March 1973 column for The Village Voice,
Tom Johnson remarks on these very aspects of her work after hearing a performance of Psi
847 at The Kitchen in Lower Manhattan: “There is something very special about the music
of Eliane Radigue, but after thinking about it for almost a week, I still can’t put my finger on
what it is. Is it the intimacy? The way one feels that the music is speaking only to him
regardless of how many other listeners may be sitting in the room?” The music’s intimacy
is further amplified by Radigue’s absence throughout the work’s presentation. As I
described in Chapter 2, the composer’s deliberate absence goes a long way towards
inviting us into these slowly developing sound worlds without the overwhelming presence
of their author. Slow, strange, minimal music—and the artist is nowhere to be seen. We
might ask, to what effect? I like what the artist Katie Giritlian has to say about this in a piece
reflecting on Radigue’s long association with The Kitchen. She writes, “the act of listening
carefully to her compositions inevitably pointed back to, and asked for, such sensitivity in
everyday experience” (2015). Giritlian continues: “Radigue encourages us to continue
sensitive listening as we exit the art space and enter back into the world around us.
Radigue wants for everyone to understand how much beauty there is in the mundane”
(2015). This tracks with personal experience. Upon exiting the gallery at 55 Walker Street
after a November 2019 presentation of Adnos I-III, I found that the sounds of Lower
Manhattan were enwreathed in a newfound beauty that is difficult to describe. (I would like
to say, “It sounded like music.”) As the cold air of an approaching winter stung our faces,
my friends and I collectively felt as though we’d passed through some great trial together,
listening to over three hours of minimal music unfolding at what felt like a glacial pace,
with only our thoughts for company. Seated in silence on the hard gallery floor with
winter coats as cushions, I know that this long night will stay in our shared memories for
quite some time. Even as it asks much of those who listen to it in concert, Radigue’s
electronic music is surely doing something extraordinary. Through its extreme durations,
minimal materials, and concern with the minutiae of sound, it allowed me to ‘re-enter the
world’ with a residual, if fleeting, sense of wonder in its sounding. Riding the train home later that night, I
heard something much like Radigue’s music in the rattle of the train car along decrepit
tracks, and in the long, low hum of its engines. Of course, sensitizing audiences to the
beauty of the mundane is one of the main aesthetic gambits of the New York avant-garde,
which as I and others have shown, exerted tremendous influence on Radigue. There’s a lot
to admire in this gambit, even if some of its implications are uncertain. For instance, what
exactly happens when the everyday becomes an object of aesthetic contemplation? What
is gained, and what is lost? There are other latent tensions in the presentation of Radigue’s
synth work. As I have argued, an account of Radigue’s music must be located inside her
very specific reading of sound as living. This “inner life” of sounds, to borrow Radigue’s
phrase, is disclosed through the composer’s foregrounding of combination tones,
waveform beats, and a filtering technique that reveals a sound’s constituent elements,
whereby “perceptual acuity is heightened through the discovery of a certain slight beating,
there in the background, pulsations, breath” (2009, 49). At the same time, by obscuring her
own direct involvement, this disclosure of sound’s inner life takes on an aura of
autonomy—the work seems to unfold at its own pace and on its own terms. In exchange
for heightening our perceptual acuity, Radigue asks in return for “the freedom to let
yourself be overwhelmed” and be “submerged in a continuous sound flow” (2009, 49).
To accomplish this, Radigue’s synthesizer work purports to present sounds ‘as
themselves’. While this supposedly neutral framework gives the impression of an
accepting or deferential posture—i.e., the supposed “modest” witness of modernism—
complexities and contradictions abound. Here is what we know for certain: in these works,
sound is allowed to ‘be itself’ (though some conditions apply); the audience is then
allowed to take in that sound as ‘itself’; but, as the composer, and the work’s sole author,
Radigue is the one doing the allowing. Even if her commitment to attributing acts of
intention to others readily complicates this reading—whether that is Jules, her audience,
or the ‘sound itself’—I would nonetheless contend that the presentation of this work
constitutes a disciplined performance of tolerance, in which a concept of ‘freedom’ is
inscribed by arbitrary constraints. Unlike Cage, however, rather than trying to fully cleave in
twain the subjectively ‘social’ and the objectively ‘natural’, Radigue knowingly (and, I might
add, admirably) situates her aesthetic project within an interpersonal contingency. In the
synthesizer works, this contingency was actualized through unconventional performance
practices, and contextualized by a metaphorical conception of sounds as forms of life,
which, in the course of a composition, had to be properly managed and cared for so that, in
her words, “we can go on.” Where are we going, exactly? Radigue closes her 2009
aesthetic treatise with something like an answer: “Further adventures, explorations of this
infinite mystery of the transmutation of noise into sound, of sound into music and, as with
all true questions, to receive in response only a few ‘hows,’ never a ‘why,’ thus leaving
endless freedom to trace one’s path, to find one’s voice. Pulsations, breaths, beatings….”
(49). Tracing my own path through this dissertation, I spent a lot of time on the ‘hows’ of
Radigue’s synthesizer music, and doing so has made me a more competent technician and
performer of electronic music. Throughout this process, though, I have been haunted
by the ‘whys’, and in this brief conclusion, I have started to grapple with them, even if
Radigue herself doesn’t put much stock in that endeavor. Wading into this speculative
territory, we find some deep paradoxes with respect to autonomy and its deference. For
instance, if we take seriously Foucault’s contention that, in the calculus of liberalism,
control is a co-requisite of freedom (1982, 790), what then will be the conditions of
Radigue’s “endless freedom”? These conundrums have real significance for anyone
approaching this composer’s work from a critical perspective, and future investigations
must grapple with them. These paradoxes will not be easily unraveled, nor will we find they
are unique to Radigue’s work. As a composer strongly influenced by the same French and
American modernisms which informed Radigue, I have long tussled with these very issues
of autonomy and its deference in my creative work, and always with mixed results. I have
lost count of the number of times that I felt myself engaged in a project that, at its outset,
seemed truly radical in modeling an alternative way of being, even as it reproduced
hegemonic power dynamics I was convinced would be upended. In fairness to Radigue,
she claims no such pretension, but we would be remiss to ignore the ways in which
intersubjectivity as a form of social relation comes forth—however “unevenly realized”—in
her synthesizer practice. As I ponder some of the tensions and paradoxes in this work, I am
doing so with eyes and ears pointed to the future. We don’t know for certain whether a
musical avant-garde will play a catalyzing or reactive role in the world to come, but there is
much we can learn from Radigue’s work today and tomorrow.

Bibliography

ARP 2500 Owner’s Manual. 1970. ARP Instruments, Inc. Newton, MA.
ARP Promotional Brochure. 1972. ARP Instruments, Inc. Newton, MA.
Benjamin, Jessica. 1988. The Bonds of Love: Psychoanalysis, Feminism, and the Problem of Domination. New York: Pantheon Books.
Benjamin, Jessica. 1990. “An outline of intersubjectivity: The development of recognition”. Psychoanalytic Psychology 7 (suppl.): 33-46. doi: [Link]
Born, Georgina. 1995. Rationalizing Culture: IRCAM, Boulez, and the Institutionalization of the Musical Avant-Garde. Berkeley: University of California Press.
Beyer, Christian. 2022. “Edmund Husserl.” In The Stanford Encyclopedia of Philosophy (Winter 2022 edition). Edited by Edward N. Zalta and Uri Nodelman. Accessed via web: [Link]
Blasser, Peter. 2015. “Stores at the Mall.” MA thesis, Wesleyan University.
Cavell, Marcia. 1993. The Psychoanalytic Mind: From Freud to Philosophy. Cambridge: Harvard University Press.
Chion, Michel. 1999. The Voice in Cinema. Translated by Claudia Gorbman. New York: Columbia University Press.
Clark, Andy. 1998. “Embodied, situated, and distributed cognition”. In A Companion to Cognitive Science, First Edition, 506-517. Edited by William Bechtel and George Graham. Malden: Blackwell.
Colin, Dennis P. 1971. “Electrical Design and Musical Applications of an Unconditionally Stable Combination Voltage Controlled Filter/Resonator.” Journal of the Audio Engineering Society 19 (11): 923-927.
Dougherty, William F. 2021. “Imagining together: Éliane Radigue’s collaborative creative process.” DMA diss., Columbia University.
Eckhardt, Julia. 2019. “Introduction: The Music of Éliane Radigue”. In Intermediary Spaces/Espaces intermédiaires, 29-39. Brussels: Umland Editions.
Foucault, Michel. 1982. “The Subject and Power”. Critical Inquiry 8 (4): 777-795.
Gilmore, Bob. 2003. “Wild Ocean: An Interview with Horatiu Radulescu.” Contemporary Music Review 22 (1-2): 105-122. doi: 10.1080/0749446032000134760.
Giritlian, Katie. “From the Archives: Éliane Radigue”. Accessed via web: [Link]
Glover, Richard. 2013. “Minimalism and Other Media: Minimalism, technology and electronic music”. In The Ashgate Research Companion to Minimalist and Postminimalist Music, 161-180. Edited by Keith Potter, Kyle Gann and Pwyll ap Siôn. Abingdon: Routledge.
Gluck, Bob. 2012. “Nurturing Young Composers: Morton Subotnick’s Late-1960s Studio in New York City”. Computer Music Journal 36 (1): 65-80.
Hasegawa, Robert. 2019. “Timbre as Harmony—Harmony as Timbre”. In The Oxford Handbook of Timbre. Edited by Emily I. Dolan and Alexander Rehding. doi: 10.1093/oxfordhb/9780190637224.013.11.
Heller, Eric J. 2013. Why You Hear What You Hear: An Experiential Approach to Sound, Music, and Psychoacoustics. Princeton: Princeton University Press.
Herivel, John. 1975. Joseph Fourier: The Man and the Physicist. London: Clarendon Press.
Husserl, Edmund. 1960. Cartesian Meditations. Translated by Dorion Cairns. The Hague: Martinus Nijhoff.
Holterbach, Emmanuel. 2013. “Peindre du temps et de l’espace avec des sons, la musique d’Éliane Radigue”. In Éliane Radigue: Portraits polychromes. Edited by Daniel Teruggi, Evelyne Gayou, Pierre-Albert Castanet, and Christian Zanési. Paris: Groupe de Recherches Musicales de l’Institut national de l’audiovisuel.
Holterbach, Emmanuel and Éliane Radigue. 2021. Notes to INA/GRM’s digital release of Vice Versa, Etc…
Hutchins, Bernard A. “Frequency Modulation Spectrum of an Exponential Voltage-Controlled Oscillator.” Journal of the Audio Engineering Society 23 (3): 200-206.
Kane, Brian. 2014. Sound Unseen: Acousmatic Sound in Theory and Practice. Oxford: Oxford University Press.
Kemp, Casey Alexandra. 2016. “Tibetan Book of the Dead (Bardo Thödol).” Oxford Research Encyclopedia of Religion: [Link]
Lacan, Jacques. 1977. Écrits: A Selection. London: Routledge.
Latour, Bruno. 2010. “An attempt at a ‘Compositionist Manifesto’”. New Literary History 41 (3): 471-490.
Lentjes, Rebecca. 2017. “Doom and Womb”. VAN Magazine, 29 June 2017. Accessed via web: [Link]/mag/eliane-radigue-biogenesis/
Margulis, Elizabeth H. 2007. “Moved by Nothing: Listening to Musical Silence”. Journal of Music Theory 51 (2): 245-276.
Pinch, Trevor and Frank Trocco. 2004. Analog Days: The Invention and Impact of the Moog Synthesizer. Cambridge: Harvard University Press.
Piekut, Benjamin. 2011. Experimentalism Otherwise: The New York Avant-Garde and Its Limits. Berkeley: University of California Press.
Piekut, Benjamin. 2012. “Sound’s Modest Witness: Notes on Cage and Modernism”. Contemporary Music Review 31 (1): 3-18. doi: [Link]
Peters, Deniz. 2012. “Touch: Real, Apparent, Absent.” In Bodily Expression in Electronic Music, 17-34. Edited by Deniz Peters, Gerhard Eckel, and Andreas Dorschel. Abingdon: Routledge.
Prosaïc, Anaïs. 2012. L’écoute virtuose. DVD. Paris: La Huit.
Polansky, Larry. 1983. “The Early Works of James Tenney”. In Soundings #13. Edited by Peter Garland. Santa Fe: Soundings Press.
Radigue, Éliane. 2009. “The Mysterious Power of the Infinitesimal.” Translated by Anne Fernandez and Jacqueline Rose. Leonardo Music Journal 19: 47-49. doi: [Link]
Radigue, Éliane. 2011. Transamorem/Transmortem liner notes. Important Records: MA.
Radigue, Éliane and Julia Eckhardt. 2019. Intermediary Spaces/Espaces intermédiaires. Brussels: Umland Editions.
Rodgers, Tara. 2010a. “Interview with Éliane Radigue.” In Pink Noises, 54-60. Durham: Duke University Press.
––––––. 2010b. “Synthesizing Sound: Metaphor in Audio-Technical Discourse and Synthesis History.” PhD diss., McGill University.
––––––. 2011. “‘What, for me, constitutes life in a sound?’: Electronic Sounds as Lively and Differentiated Individuals”. American Quarterly 63 (3): 509-530. doi: [Link]
Spiegel, Laurie. 2012. CD booklet liner notes from The Expanding Universe.
Stravinsky, Igor. 1970. Poetics of Music in the Form of Six Lessons (The Charles Eliot Norton Lectures). Cambridge: Harvard University Press.
Stern, Daniel. 1983. “The Early Development of Schemas of Self, of Other, and of Various Experiences of ‘Self with Other’”. In Reflections on Self Psychology. Edited by J. Lichtenberg and S. Kaplan. Hillsdale, NJ: The Analytic Press.
Trevarthen, Colwyn. 1980. “The Foundations of Intersubjectivity: Development of Interpersonal and Cooperative Understanding in Infants”. In The Social Foundation of Language and Thought: Essays in Honor of Jerome Bruner. Edited by D.R. Olson. New York: Norton.
Vanheule, Stijn, An Lievrouw, and Paul Verhaeghe. 2003. “Burnout and intersubjectivity: A psychoanalytical study from a Lacanian perspective”. Human Relations 56 (3): 321–338.
ZKM Karlsruhe, Andy Koch, and Xenia Leidig. 2019. “Éliane Radigue: Interview with Ludger Brümmer”. 2 December 2019. Accessed via web: [Link]
Partial Radigue Discography

Adnos. Released 7 May 2021 by L’Institut national de l’audiovisuel/Groupe de Recherches Musicales (INA/GRM). [Link]
Chry-ptus — Biogenesis — Arthesis. Released 30 April 2021 by INA/GRM. [Link]
Chry-ptus — Geelriandre. Released 30 April 2021 by INA/GRM. [Link]
Feedback Works 1969-1970. Released 2 July 2021 by INA/GRM. [Link]
Transamorem/Transmortem. Released 11 July 2011 by Important Records (IMPREC). Rereleased digitally on 16 February 2021. [Link]
Trilogie de La Mort. Released 21 May 2021 by INA/GRM. [Link]

Introduction by Joel Chadabe

In the 1960s, Eliane Radigue began to move away from her earlier work in musique
concrète as Pierre Henry’s assistant, with its focus on the juxtaposition of self-contained
“musical objects,” and towards an exploration of sound as an evolution with subtle
transformations. By the 1970s, she was composing sounds by performing with a
synthesizer onto a tape that was then played back in a concert. As she told me several
years ago, “I could make sounds that change almost imperceptibly, and I learned to modify
the sounds tout doucement, very lightly, almost like a caress....I use tape because my
pieces are made up of sounds that crossfade into other sounds, and at the moment of
overlap there’s an interaction between the two sounds, and it’s crucial to get the timing
right.... ” In a concert, her music floated in the air, coming from everywhere as music
without a source, just a natural part of our world, just there, and without effort. In 2001,
responding to a request from Kasper Toeplitz, she composed a work for double-bass. It
was an entry into a new world for her, with rich and inspirational collaborations, with new
ideas, and with the discovery of a new world of sound in traditional instruments. In
December 2005, she created the first part of Naldjorlak for and with cellist Charles Curtis.
The second part of Naldjorlak, composed with basset-horn players Carol Robinson and
Bruno Martinez, was finished in September 2007. The third and last part was composed
with the three musicians. Naldjorlak I, II and III was first performed in January 2009. The
evolving sounds, the mystery of the sounds, the depth and presence of the sounds, are all
there with the instruments. But with these compositions, Eliane Radigue’s focus has
changed from an ambience to a person, from the impersonal world around us to the
breath, pulsations, beating of life.

The Mysterious Power of the Infinitesimal

In the
beginning, there was the air’s powerful breath, violent intimidating tornados, deep dark
waves emerging in long pulsations from cracks in the earth, joined with shooting fire in a
flaming crackling. Surging water, waves streaming into shimmering droplets.... Was it
already sound when no ear was tuned to this particular register of the wave spectrum (Fig.
1) in this immense vibrating symphony of the universe? Was there any sound if no ear was
there to hear it? The wind then turns into a breeze, the base of the earth into resonance, the
crackling fire into a peaceful source of heat, water, the surf against the bank, cooing like a
stream. Life is there. Another level, another theme begins. An organ adapts itself to transformation of a minuscule zone from the immense vibrating spectrum decoded into sounds captured, refined, meaningful.

[Eliane Radigue (composer, artist), France. Translated by Anne Fernandez and Jacqueline Rose. ©2009 Eliane Radigue. Leonardo Music Journal, Vol. 19, pp. 47–49, 2009. Fig. 1: Eliane Radigue, spectrum of waves. (© Stéphane Roux)]

Crackling, roaring, howling
and growling, the noises of life—cacophony punctuating the deep ever-present rhythm of
the breath, pulsations, beating.... A few more million years, the noisy emissions organize
into coordinated sounds and with reflection, become a language. But breath, pulsations,
and beating remain. How, why, the sound of the wind, of the rain, the movement of clouds
across the sky as they appear and disappear against the blue of space, the crackling of fire,
how, why, through what mysterious alchemy will all this turn into a chanted recitative for
one of these beings, recently appeared; how, why does the experience of an impression
become sound, music? An ordering is underway. Breaths caught in hollow tubes become
tamed sound sources, hollow percussive objects become sources of rhythm, strings
stretched over yet other hollow objects, through the stroke of a bow, turn into sound
waves. Haunting recitative. The Voice, the Path is there. Hollow tubes with holes,
assembled in different lengths. Hollow objects with a skin stretched over cylinders of
various dimensions. Strings stretched over resonating chambers with more sophisticated
shapes, fitted with sound posts that transmit and hear, animated by “arcs” turned into
“bows.” And the Path, always more and more the mysterious “Path.” Supple and fluid,
breath, earth, heat and water, everything at once. The subtle alchemy of sounds becomes,
oh wonder, understood. One-half, one-quarter, one-third of a string’s length reveal their
perfect harmony, as later confirmed by images on an oscilloscope. Except for…the tiny,
infinitesimal difference—when left to their own devices, natural harmonics unfurl into
space in their own language. Temperament.... So many marvels came from it. It had to
happen, it was worthwhile. Then came the electronic Fairy; through the power of magnetic,
analog and digital capture, breath, pulsations, beating, and murmurs can now be defined
directly in their own spectrum, and thus reveal another dimension of sound—within sound.
The occasional accident, a disrupted relation between recorder—transmitter—recorder—
playback, and there our medium assumes some independence. How, then, does it
behave? Breath, pulsation, beating, sustained sound, depending on the mood. So much
richness in all this “feedback” and other chance or provoked “interference.” Such a
challenge to keep them under control while maintaining the correct distance, the tiny
adjustment that makes them develop until a terrible “fit” causes them to self-destruct.
This is when other splicers of four piece tubes and surveyors of variably sized strings over
resonating chambers decided to take everything back to the primary elements. The
frequencies and everything that ensues. Varying modulations giving rise to new spectra. In
short, all so called “electronic” music. In the beginning, from the beginning, the first
generators and all the possible treatments, modulating, filtering, mixing etc.... (cf. Milton
Babbitt’s studio at Columbia University, those from the time of dear Karlheinz and others).
Irascible and unreliable mastodons that required patient taming. On the other hand, by
reducing all this paraphernalia, by “modulating” it.... Another story was beginning. A story
where breath, pulsations, beating, murmurs and above all the natural production of these
marvelous, delicate and subtle harmonics could be deployed in a differently organized
manner. No acceptable intervals to tolerate or obey. No harmonic progression. No recursion or inverted series, no respect for rules of atonality tending toward “discordant.”

[Fig. 2: Eliane Radigue, montage, from left to right: Eliane Radigue and ARP synthesizer in 1974, in the late 1980s, and more recently, 2004 or 2005. (Photo: Yves Arman. © Stéphane Roux)]
[Fig. 3: From left to right: Bruno Martinez, Charles Curtis and Carol Robinson worked with Radigue on the third part of Naldjorlak. (© Delphine Migueres)]

Forget everything to learn again. The freedom to be immersed
in the ambivalence of continuous modulation with the uncertainty of being and/or not
being in this or that mode or tonality. The freedom to let yourself be overwhelmed,
submerged in a continuous sound flow where perceptual acuity is heightened through the
discovery of a certain slight beating, there in the background, pulsations, breath. The
freedom of a development beyond temporality in which the instant is limitless. Passing
through a present lacking dimension, or past, or future, or eternity. Immersion into a space
restrained, or limited by nothing. Simply there, where the absolute beginning is found.
Lending a new ear to a primitive and naïve way of listening. Breath, pulsation, beating,
murmur ... continuum. I dreamt of an unreal, impalpable music appearing and fading away
like clouds in a blue summer sky. Frolicking in the high mountain valleys around the wind,
and grey rocks and trees, like white runaways. This particular music, that always eluded
me. Each attempt ended in seeing it come closer and closer but remain unreachable, only
increasing the desire to try again and yet again to go a bit further. It will always be better the
next time.... How can sounds or words transcribe this imperceptibly slow transformation
occurring during every instant and that only an extremely attentive and alert eye can
sometimes perceive, the movement of a leaf, a stalk, a flower propelled by the life that
makes it grow? How to know a little, just a very little, simply to try, to train oneself to look
better in order to see, to listen better in order to hear and to know these transient moments
of being there, only there? Like the butterfly emerging naked from its chrysalis, with only
small white, blue or grey dots developing imperceptibly into the wings that will take flight. I
have known the enchantment of discovery by forgetting all I had learned, I have of course
also encountered doubt, denial, and the feeling of absurdity during long years, alone with
my ARP (Fig. 2) and all of the difficulties “we” had to go through, before perhaps
understanding each other ... a little. Now, it is in the iridescence of these slowly flowing
grains of sand, that some wonderful musicians have agreed to share what I call my “sound
fantasies.” Carol Robinson, Charles Curtis, Bruno Martinez (Fig. 3) and I have just
completed the third part of Naldjorlak. With their instruments, cello and basset horns, they
agreed to explore this subtle, delicate sound world fashioned from breath, pulsation,
beating, murmurs and the richness of the natural harmonics that radiate from it. The
instruments tuned almost into unison, with just a minuscule interval of a few commas to
give more freedom to the breaths, beatings, pulsations, murmurs, sustained sounds....
And above all, the wonderful experience of sharing, with the most subtle affinity,
complicity. The joy of hearing the music I dreamt of, and that these marvelous musicians
make for me, giving all of their talent, their virtuosity, their souls. What a strange
experience after so much wandering, to return to what was already there, the perfection of
acoustic instruments, the rich and subtle interplay of their harmonics, sub-harmonics,
partials, just intonation left to itself, elusive like the colors of a rainbow. Simply returning to
my first loves, those never forgotten. And yet it is clear that this long journey through
uncertain lands also enabled me to simply recognize what was already there, buried,
hidden. May it lead to yet others. Further adventures, explorations of this infinite mystery of
the transmutation of noise into sound, of sound into music and, as with all true questions,
to receive in response only a few “hows,” never a “why,” thus leaving endless freedom to
trace one’s path, to find one’s voice. Pulsations, breaths, beatings....
Jeremy Grimshaw, Draw a Straight Line and Follow It: The Music and Mysticism of La
Monte Young (Oxford: Oxford University Press, 2011), pp. 99–113.

Divergent Dreams

Despite working closely together for several years during the 1960s, the core
members of The Theatre of Eternal Music seem to have disagreed in certain
fundamental ways about the nature of their collaboration and the significance of
just intonation. Young and Zazeela saw their work as deeply, if eclectically, spiritual
and even religious in nature and considered just intonation a kind of esoteric,
acoustical alchemy with an ultimately cosmic purpose. They also asserted that the
group's improvisations comprised realizations of compositions, to which Young
alone could claim authorship and ownership. Conrad and Cale found that Young's
neo-Pythagorean mythologization of number (as embodied in sound by just
intonation), combined with what they saw as a tendency toward authoritarianism,
turned what was supposed to have been a communal activity into a cultish one.
They felt that Young's assertion of musical authorship over the group's work, and
the spiritual authority implied by that assertion, challenged Young's supposed
reputation as a radical, and, more important, directly contradicted the ideals of
equality and resistance to authority (musical and religious alike) that the
countercultural movement ostensibly embodied. This fundamental
disagreement even manifested itself in the names by which the two camps
preferred identifying the ensemble. As Tony Conrad later wrote,

At the time, the numerical frequency ratios we used for the microtonal
intervals . . . appeared so intimate with ancient Pythagorean numerology that
it was easy for us to be seduced into fantasizing that our system of pitch
relationships was "eternal," as in La Monte Young's preferred designation,
"The Theatre of Eternal Music." For my part, I preferred "Dream Music,"
which was less redolent of a socially regressive agenda . . .

The nascent idealism of the early 60s made it easy to fall for Pythagorean
number mysticism without having a clear perception of the anti-democratic
legacy which Pythagoreanism brings with it.38

This terminological disagreement,
and the ideological divide it reflected, fueled a bitter war of words that continued for
decades. In 1987 Young tried to interest record labels (including Gramavision, with
which he had an established relationship) in releasing some of The Theatre of
Eternal Music's recordings from the early and mid-1960s, but Conrad and Cale,
insisting on the collectivity of the group's work and asserting rights of ownership as
coauthors, foiled Young's proposals by threatening a lawsuit. Conrad even
publicly voiced his grievances to concertgoers arriving for Young's appearance at
the 1990 North American New Music Festival in Buffalo, New York, by passing out
leaflets outside the venue stating that "Composer La Monte Young does not
understand 'his' work."39
In 1995, with the release of Slapping Pythagoras, a drone-based recording
for amplified strings, guitars, bass clarinet, accordion, and various found sounds,
Conrad offered his most vitriolic, if indirect, critique of Young. The liner notes to
the recording offer a lengthy diatribe ostensibly against Pythagoras's ancient
number mysticism and the cultural elitism that it fostered among his followers.
"How was it," Conrad asks in the notes, "that the esoteric religious knowledge of
the Egyptian and Babylonian priests was transformed into an antidemocratic
force which achieved a hegemonic role in Western thought?"40
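The "numerical frequency ratios" at issue throughout this dispute are small whole-number ratios, and their distance from equal temperament is easy to quantify. The following sketch is my own illustration, not Grimshaw's; the `cents` helper and the chosen intervals are hypothetical examples of the standard conversion 1200 × log2(ratio):

```python
# Illustration only: just-intonation ratios expressed in cents, next to
# their 12-tone equal-tempered neighbors (a tempered semitone = 100 cents).
import math

def cents(ratio: float) -> float:
    """Size of a frequency ratio in cents: 1200 * log2(ratio)."""
    return 1200 * math.log2(ratio)

just_intervals = {
    "octave (2/1)":           2 / 1,  # 1200.00 cents, same in both systems
    "perfect fifth (3/2)":    3 / 2,  # ~701.96 cents vs. 700 tempered
    "major third (5/4)":      5 / 4,  # ~386.31 cents vs. 400 tempered
    "harmonic seventh (7/4)": 7 / 4,  # ~968.83 cents, no close tempered match
}
for name, ratio in just_intervals.items():
    print(f"{name}: {cents(ratio):.2f} cents")
```

The just fifth overshoots its tempered counterpart by about 2 cents and the just major third undershoots by about 14; discrepancies of this size are the audible stakes of the tuning positions described above.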
The two movements that comprise Conrad's piece take their titles from
folkloric legends surrounding Pythagoras's death, supposedly at the hands of
an angry mob who resented his esotericism: (1) Pythagoras, Refusing To Cross
The Bean Field At His Back, Is Dispatched By The Democrats; (2) The
Heterophony Of The Avenging Democrats, Outside, Cheers Of The Incarceration
Of The Pythagorean Elite, Whose Shrill Harmonic Agonies Merge And Shimmer
Inside Their Torched Meeting House.
Conrad's liner note commentary alternates between an expository voice,
directed to the reader, and a first-person voice, directed toward Pythagoras
himself. In the former, he gives summaries of various aspects of Pythagorean
thought; in the latter, he fantasizes himself as one of the democrats confronting
Pythagoras near the bean field. As Conrad castigates Pythagoras for his
misdeeds (sometimes using derisive nicknames such as "Pythie" and "Python"), it
becomes clear to those familiar with his career that his attack on the ancient
thinker serves as a thinly disguised tirade against Young. Pythagoras's elitism,
his taking credit for mathematical innovations borrowed from the Orient or
contributed by his own students, his mystification of number, simply serve as
stand-ins for the charges Conrad himself had leveled against Young: that he had
abandoned countercultural communality for hegemonic ritual, that he had
asserted unwarranted authority over and authorship of the activities of The
Theatre of Eternal Music, and that he had cosmologized just intonation in order
to deify himself. Pythagoras reads as Young, the cult of mathematikoi that
studied with the ancient master reads as The Theatre of Eternal Music, and, in
the following passage, philosophy might read as "minimalism" and/or "just
intonation":

Pythagoras, Pythagoras! You've been so destructive, you and all your
ideals of Perfection! . . . What could you possibly have been trying to do
but walk all over democracy? No- it's much worse than that. It was you,
Pythaggie, it was you who showed how to use "philosophy" to fight democracy!
You invented the word "philosophy," for shit's sake! And why? Why? Because
you could use it to justify your own personal sect, your cult of personality,
where everything is credited to you. Everything is run by you. Talk about
"elite" and "exclusive." Sure, your cult is open-armed to anyone! Anyone who
will take your shit for five years without singing out!

Near the end of his lengthy essay, Conrad becomes somewhat more explicit about
the real subject of his anger. Stepping outside the narrative for an aside to the
reader, he writes,

The number-juggling, system-building, arithmetical mumbo-jumbo, and
technical precision in which [some modern] microtonalists may be found to
indulge has inclined them toward cultural absolutism. They feel that they can
use their Western abstract (arithmetical) tools to grasp and encompass non-
Western microtonal traditions (in India, Cambodia, "Persia," etc.), much as
Western ethnomusicologists tried to colonize these traditions with European
notational efforts.

Having located his target in the twentieth century, Conrad then jumps back into his
ancient fantasy for its final, eponymous conclusion:

This slap is to crack apart the voices that you forced to blend as "One." And this
slap is to smack down the imperial dominion of Number. ... And here's a slap, too,
for stealing the names of all your sect members, and taking credit for their works
...
"Pythein-agora": Filth market. The assembly of rot.

As strident as the timbre of the argument between Young and Conrad had become
by this point, its audience remained small and obscure. In 2000, however, the
dispute found its way into a feature article in the New York Times after Young
threatened to sue Table of the Elements, the label responsible for Slapping
Pythagoras, for releasing Inside the Dream Syndicate Volume I: Day of Niagara, a
bootleg recording of a version of The Tortoise, His Dreams and Journeys from
1965.41 This recording continued an effort on the part of Conrad and Cale to
write their version of the history of The Theatre of Eternal Music, including the
application of the name "The Dream Syndicate," a name that Conrad had coined
in 1966, but which had not actually been used by the ensemble. Conrad and Cale
followed this with other drone-based recordings of their own works from the
time.42 One of Conrad's projects with Table of The Elements took a particularly
brash revisionist stance: a three-disc set bearing the title Early Minimalism Vol.
1 and containing one piece, Four Violins, from 1964, and several other pieces "in
the style of" the '60s drone pieces, but actually composed in the 1990s.43
In addition to filing a lawsuit over the release of Day of Niagara, Young
released statements on his website condemning the Table of Elements
recording on artistic grounds. Not only was the recording unauthorized, Young
complained, it also was remastered, poorly, from a low-quality dub.44 The
statement further asserted his authorship over The Theatre of Eternal Music's
recordings and chided Conrad's complaints as so much revisionist sour grapes:

Since Conrad believes there was no underlying musical composition, there is
nothing for him to have a co-copyright in, since the ©-copyright in a sound
recording applies to the underlying musical composition. Conversely, since I
recognize the structure of the underlying musical composition, it is obviously
my composition . . ..
If Conrad and Cale were so deep into music composition during this
period, why didn't they record more themselves without the encumbrance
of Big Brother watching over them? What did they need me hanging around
for? The answers appear to be simple. Without the work I had done then and
continued to do over the next thirty-seven years to make it famous, without
my name to continue to publicize it (even via a controversy), they would not
be able to sell it. And without my guidance, they must have been able to only
produce comparatively weak free improvisations without the controlled
structure and unprecedented level of compositional sophistication that
drove The Tortoise at its own slow but steady pace into music history.45

In his response to Conrad, Young also solicited the opinions of other artists and
musicians who had known or worked with the members of The Theatre of
Eternal Music in the 1960s. Their responses reaffirm Young's position of
authority within the group; that is, they recognize precisely the kind of
authoritarianism that so bothered Conrad and Cale, but insist that anyone
working with Young should have recognized the hierarchical nature of the
collaboration. As Dennis Johnson, Young's former classmate, observed,

I have never seen it fail in any arrangement that La Monte had with anyone
who entered into a collaborative creative venture with him, that it was never
collaborative in terms of the conception; it was always La Monte's conception
in the first place. He always consistently guided the others so that the project
would never get too far away from his conception. ... One virtually had to see
oneself as a student.46

The poet Diane Wakoski, Young's former girlfriend, gave an even more blunt
assessment:
The thought that anyone, including such talented men as Cale and Conrad,
could ever be collaborators or co-composers in any La Monte Young project
seems laughable to me. It simply wouldn't happen. It may be dear to John
Cale's personal vision of himself, or his aesthetic, that he was part of a
democratic collaboration with La Monte, but no one who has spent any time
around La Monte could ever perceive him as a collaborator. ... Everyone who
knows La Monte is aware of the fact that you either play his game, or he
doesn't play with you.47
The extraordinarily strident argument over the work of The Theatre of Eternal
Music transcends the bickering over a tinny secondhand drone recording and
symbolizes a much broader argument about the ideological underpinnings of
early minimalism and just intonationism. Conrad and Cale insisted that the
rejection of traditional notation and tuning went hand in hand with the
rejection of the traditional concept of the composer and the work. For Young,
these developments in compositional practice reinforced the conviction that
music came from a higher source and thus lent even more authority to the
composer: the acoustical purity of just intonation created a site of interface
between the physical, psychological, and spiritual realms, and endowed the
composer with the solemn responsibility of traversing those realms.

Discovery of a Guru

These differing efforts to ideologize just intonation and drone music reflect
something of a paradox within '60s counterculture as well, for in the circles in
which Young, Zazeela, Cale, and Conrad moved, a resistance to traditional
authority paradigms coexisted alongside a fascination with Indian classical
music, a tradition with deeply etched hierarchies of its own. Conrad, along with
countless others of his generation, had first become interested in Indian music
after hearing Ali Akbar Khan's famous recording, with narration by Yehudi
Menuhin, that appeared on Angel Records in 1955.48 However, although Conrad
"found in Indian music a vindication of [his] predilection for drone-like
performing," he rejected the particulars of the Indian classical tradition itself,
wondering instead "what other new musics might spring from a drone, set
within a less authoritarian and tradition-ridden performance idiom."49 Young
traced his interest in Indian music to the same 1955 recording, and during the
ensuing years he maintained something of a cultivated exoticist attitude toward
Indian music.50 Young's interest in Indian music eventually progressed far beyond
Western stylizations, however, and, as he undertook a serious and prolonged study
of Indian music, he found a model of musical composition and musically oriented
spirituality that coincided closely with his own.
Psychedelic writer Ralph Metzner, as it turned out, played an inadvertent but
crucial role in Young's immersion in Indian music. In 1967 Metzner took Young
and Zazeela to a concert featuring the famous shehnai player, Bismillah Khan. At
the concert Metzner introduced Young and Zazeela to Shyam Bhatnagar, an Indian
musician and spiritual practitioner. Upon making their acquaintance, Bhatnagar
played them tapes of an Indian musician, still living in India, and still virtually
unknown in the West, named Pandit Pran Nath.51
Nath was born in 1918 into a prominent family in Lahore in present-day
Pakistan, and had shown great musical promise as a young man. His family did not
approve of his musical aspirations, so at the age of thirteen he left home and set out
on his own. Nath eventually became one of only a handful of students of Ustad Abdul
Wahid Khan, cousin of the founder of the Kirana gharana, Abdul Karim Khan. As was
the tradition among gurus and their shishyas, Nath served in the household of Abdul
Wahid Khan in exchange for instruction; his duties included cleaning, running
errands, making tea for his master in the early morning, and, occasionally, sitting
before his master with a tambura for a lesson. After several years of study Nath
adopted the lifestyle of a hermit; he sang only at the temple in the Tapkeshwar
Caves, his naked body covered with ash, the current of the nearby stream
substituting for the drone of the tambura.52
After five years of ascetic isolation, Nath's guru told him to reenter public life,
marry, start a family, and take his musical gift beyond the walls of the Tapkeshwar
Temple; as expected, he obeyed.53 He developed a distinctive style and a vast
repertoire of ragas, to the point that better-known musicians would visit him to
study the nuances of a particular raga. Eventually he became an instructor in
Hindustani vocal music at Delhi University. Nath remained something of an
obscure specialist, however, a "musician's musician," as Young put it,
increasingly at odds with the stylistic trends and institutional politics of the
Indian music scene. In fact, David Claman, questioning Young's "myopic"
fascination with Nath, points out that in several collections and listings of
musicians of the Kirana gharana Nath's name and work are conspicuously
absent.54 The Oxford Encyclopedia of the Music of India does have a short entry
on Nath, but mentions only that he was a student of Abdul Wahid Khan and
that "he migrated to the U.S. [...] where he earned a name as a performer and
teacher." In other words, it posits that his most notable work occurred after his
departure from India.55 Claman also recognizes Nath's "musician's musician"
status, however, and quotes the recollections of Sheila Dhar, who studied with
Nath in Delhi in the 1960s: "It was true that [Nath] had not received the
recognition he deserved in his own country," Dhar writes, "except from a
handful of erratic connoisseurs."56 Dhar also recalls from her lessons with Nath
the same emphases that initially attracted Young to Nath's style:
Though [Nath] was fanatical about the purity of a raga, he was unbelievably
unorthodox and impractical as a performer. His entire concentration was on
the spiritual and emotive intention of music. He could spend hours
exploring and elaborating on the tonal nuances of the melodic phrase of a
raga, but had only a fleeting interest in rhythmic accompaniment. As a
result, his concept of presentation was considered wayward by all but
research-minded connoisseurs.57

Not only did the uniqueness of Nath's style set it apart from the other Indian
music making its way to the West from India in the 1960s (indeed, Nath's
relative obscurity may have made him all the more intriguing to Young), but the
particulars of that style, with its intonational precision and relative de-emphasis
of regular, patterned rhythm, resonated directly with the compositional style
Young had developed during the 1960s. After hearing the tapes of Nath provided
by Shyam Bhatnagar, Young and Zazeela contacted Nath and eventually
arranged for him to travel to the United States. Nath eagerly accepted the
opportunity; he had three daughters who needed wedding dowries, and he
recognized the financial advantages of taking on students in the United States. A few
weeks after his arrival on January 11, 1970, Nath officially accepted Young and
Zazeela as disciples by tying red threads around their wrists in a traditional guru-
shishya ceremony.58
Young and Zazeela studied with Nath for the remaining quarter-century of his
life. Nath took on several additional American students as well, including Terry
Riley, experimental trumpeter Jon Hassell, jazz musicians Don Cherry and Lee
Konitz, and a number of other musicians and artists from among Young's New York
milieu. For several years Nath split his time between New York and the Bay Area,
where he taught at Mills College; during his stays in New York, Young and Zazeela
hosted Nath in their home, waiting on him in a manner reminiscent of Nath's own
discipleship with Abdul Wahid Khan. (Photos 5 and 6 show Pran Nath and Young in
performance together in 1977.)
In addition to the sonic affinities that drew Young and Zazeela to Pran Nath,
certain broader aesthetic ideas spoke to them as well. Nath's subtle approach to
developing a raga's rasa, its "flavor," or its particular emotional state, was not unlike
the indelible particularity of feeling that Young associated with sustained, complex
just-tuned harmonies. This acute attention to emotional state compelled Pran Nath
to perpetuate and refine a part of the Hindustani vocal tradition that many other
musicians, in the face of a modernizing world and music industry, had neglected: the
performance of a particular raga at the particular time of day deemed most
appropriate to its character. He instructed his American disciples in this practice as
well. Midnight / Raga Malkauns, recorded in 1971 and 1976, features two late-night
performances sung by Pran Nath, with Riley, Young, and Zazeela among the
supporting performers.59 For a performance series at Paris's Palace Theatre in
1972, Nath, accompanied by Young, Zazeela, and Riley, sang a cycle of time-
appropriate ragas on a Friday night, Saturday afternoon, and Sunday morning.60
The interpersonal dynamic of Young's relationship to Pran Nath arguably
shaped his artistic development and self-perception as profoundly as did the stylistic
resonance between the two musicians. Alexander Keefe discerns a symbiosis in Nath's
initial encounters with Western musicians, Young in particular:

It must have come as a relief to [Nath] when a new type of student started
trickling into Delhi in the mid-1960s, seekers without the usual baggage,
looking for someone to revere. These Westerners found a stubborn middle-aged
man with a limited but oracular command of English, a voice of
astonishing power, and an otherworldly mien. Pandit Pran Nath became
gurujee, and then a few years later he was gone, leaving behind an Indian
cultural scene increasingly hostile to a performer of such suspect religious
leanings (he was a devotee of the Chishti Sufi saints, as well as a Nada yogi
and mystic), not to mention such stubbornly contrarian tastes.61

Was Young, in fact, looking for someone to revere? His career to that point had
been characterized by cycles of idolatry turning to rivalry: his serial works tried to
transcend Webern, his indeterminate works tried to transcend Cage, his jazz
improvisations sought to transcend so eminent an authority as the twelve-bar
blues itself. He was still a student when his correspondence with his mentor
Leonard Stein took on the precocious tone of counselor rather than pupil. He had
already assumed a "guru" persona of his own.62 Yet when Pran Nath arrived in
New York, Young treated his new guruji with utmost reverence, even
subservience. Perhaps what Young saw in Pran Nath, aside from the intensity of
Nath's artistic vision and its resonances with Young's own musical activities, was
a model not only for how one should make art but for who an artist should be.
Nath's extremity of style as a musician was tied indelibly, like Young's, to the
breadth and profundity of his cosmic vision.
Just prior to their discovery of Pandit Pran Nath, Young and Zazeela were
themselves discovered, by the wealthy and magnanimous arts patron Heiner
Friedrich. Friedrich had begun visiting Young's and Zazeela's early experimental
electronic sound environments in 1966, and hosted their first public Dream
House environment at his Munich gallery in 1969. During the subsequent
decades he granted them a level of patronage virtually unprecedented among
twentieth-century artists. This afforded Young and Zazeela the freedom to
pursue their interests without concern for the demands of the marketplace.
Nath became an additional beneficiary of Friedrich's generosity, eventually
enjoying a level of adoration likely well beyond his expectations (and in sharp
contrast with his past life as an ascetic). In 1979, thanks to the largesse of Dia Art
Foundation, which Friedrich had founded, Young and Zazeela moved operations to
the Harrison Street Dream House. The spacious building, reportedly purchased for
over a million dollars, was also generously appointed and fully staffed. Young lived
and worked there under circumstances virtually unrivaled for artistic freedom and
creative accommodation; Nath lived there as well during his stays in New York.
Visiting Nath in the early 1980s at the Harrison Street Dream House, his former
student from Delhi, Sheila Dhar, was taken aback by the elegant circumstances in
which she found her teacher. After signing in with the greeter at the door and
proceeding past numerous students and staff members shuffling quietly between
the building's numerous rooms, she found Nath in one of the Dream House's upper
studios. "He sat serenely on a divan in an enormous loft with a thick, snow-white
wall-to-wall carpet," she observed. "At the far end, about twenty tanpuras,
obviously newly exported from India, lay side by side. The sunlight streamed in
through tall glass windows. There was no furniture. ..."63 Pran Nath used the
Harrison Street space as his New York headquarters until April 1985, when Dia Art
Foundation underwent an organizational change that resulted in the liquidation of
the Harrison Street property and Young's and Zazeela's relocation back to their
apartment on Church Street.
Young treated Nath not only as a musical master, but as a seer, with actual
premonitional capabilities bordering on the supernatural.64 Young's devotion was
such that, in 1996, when Nath's health deteriorated and his death seemed imminent,
he and Zazeela traveled to the home Nath kept in Berkeley to see him one last time.
Nath passed before they arrived; along with Nath's wife, they watched over the body
for two days in situ while waiting for one of Nath's daughters to make the journey
from India. They were joined by many of Nath's students, who joined them in
singing over the body of the deceased. In fact, the crowd of mourners grew so large
that some camped in tents in the yard. On the second day, they brought the body
down the stairs for the transport to the crematorium. Young came down the stairs
last, carrying Nath's head.65 They followed the hearse to the crematorium, and
joined the others in placing sandalwood paste and holy water from the Ganges River
on Nath's forehead before the casket entered the furnace. In the days that followed,
they reported receiving several dreams and visions from their guru.66
After Nath's death Young and Zazeela continued Nath's work through their
stewardship over the Kirana Center for Indian Classical Music, the instruction studio
Nath had founded in New York City in 1970, though they did not yet begin giving
public raga performances. A few of their students observed some of the traditional
protocols of the guru-shishya relationship; one insisted on arising to make them tea
at 3:00 A.M., as Young and Zazeela had done for Nath, and as Nath had done for his
guru.67 Young continued his other (non-Indian) musical projects, but gradually
devoted more and more of his musical efforts to singing raga. In June 2002, Young
was pronounced Khan Sahib by Ustad Hafizullah Khan Sahib, the only surviving
child of Pandit Pran Nath's teacher, Ustad Abdul Wahid Khan Sahib, and the Khalifa
of the Kirana gharana.68 This apparent honorific is mentioned in all program notes
for Young's subsequent raga concerts.
Arguably, however, the mantle had already been passed from guru to disciple
even before Young's attainment of Khan Sahib status. A few years before, during a
period in which Young had stopped singing raga altogether (first in mourning over
Nath's death, and then because of a serious illness that subsequently befell Zazeela),
Nath purportedly appeared to Young in a dream and urged him to take up singing
again. According to Young, Nath also indicated the great promise of a young artist
who had recently asked to be taken on as a student. Jung Hee Choi thus became a
disciple of Young and Zazeela in 1999. The three of them became the core, founding
members of what would become The Just Alap Raga Ensemble, and gave their first
performance together in November 2002; a few months later, Jung Hee Choi became
joined to Young and Zazeela in the ceremony of the red thread-the same ceremony
that had formalized their discipleship with Pran Nath. The Just Alap concerts
eventually became Young's primary mode of musical performance and creativity,
and took on a decidedly ritual air; promotional photographs from performances in
2003, 2005, and 2008 all show Young at the center of the performance space, an
illuminated circle from Zazeela's light installation hovering above him like a
magenta halo, his hand reaching into the air above his head.69
Throughout his years of study with Nath, Young had continued his own work
with sound environments and also brought to fruition The Well-Tuned Piano
through a commercial recording and numerous public performances. Young
consciously established a separation between his performance of Indian music with
Pran Nath and his own compositions, however, and initially maintained that
distinction quite clearly. While singing raga, for example, he deviated very little from
performance practices as taught to him by Nath: namely, his performances focused
overwhelmingly on the alap sections of performance, the improvisatory melodic
development in which the facets of the raga are unfolded. The tabla players enlisted
for his performances might wait over an hour for the alap to end and the rhythmic
tala of the drums to begin.
After Pran Nath's death, perhaps emboldened by the visions of his guru and his
attainment of the status of Khan Sahib, Young not only began leading public
performances of raga but also began to take some license with North Indian
performance practice. In the improvisatory alap sections of the 2003 performance
described at the beginning of this chapter, for example, Young introduced a novel
harmonic technique: arriving at a particular note in the raga, Young would signal to
one of the accompanying vocalists (his wife, Marian, or his assistant-disciple, Jung
Hee Choi) to sustain the note. This created a kind of sustained vocal harmony
quite outside traditional raga performance.
Young's boldest deviation from Indian classical practice occurred in March 2009,
in a pair of performances with The Just Alap Raga Ensemble given at the Guggenheim
Museum in New York. Before the beginning of the concert I attended, a prerecorded
tambura drone filled the performance space. It was clear from the moment the
musicians entered the venue that Young's ensemble had taken further liberties with
North Indian tradition. Instrumentation was the most immediately apparent area of
experimentation. In addition to the prerecorded tambura drone, the voices of Young
and Zazeela, those of their disciple Jung Hee Choi and fellow Pran Nath disciple John
Da'ud Constant, and the spare tabla playing of Naren Budhkar, the ensemble also
included Young's longtime interpreter Charles Curtis on cello and former Forever
Bad Blues Band member Jon Catler on fretless sustained electric guitar. The visual
novelty of the cello and guitar was not matched by any stark musical incongruity,
however; both instruments followed the same subtle melodic contours and sustained
tones that had characterized the Just Alap performances I had heard on earlier
occasions. Soon after starting the performance Young initiated the series of sustained
tones emphasizing certain notes in the raga, sometimes passing the responsibility
around to different members of the ensemble with a nod or simple gesture. During
some improvisational passages Young exploited the timbral diversity of the group by
engaging in call-and-response with members of the ensemble; Zazeela and Choi
featured prominently in this regard. The concert consisted of the premiere
performance of a single piece, Young's own Raga Sundara, a work in twelve-beat
ektal and in the raga known as Yaman Kalyan or simply Yaman. Young's two
stanzas of Sanskrit text offered up praise, first to raga itself, then to Young's guru,
for their ability to manifest divine, cosmic harmony through sound.
Yaman is a very well-known raga within the North Indian tradition (it is one
of the first a student learns from his or her guru), but it has distinctive features
that stand out to the Western ear. For convenience, I will describe these features as
if Yaman were rendered above a Western C tonic (or, in Hindustani solfege, "sa").
Above the tambura drone notes, C and G, the notes of the raga proceed as if in a
Lydian mode, with a raised fourth scale degree, or F sharp. However, the ascending
scale starts on the seventh scale degree, B, and while the C and G are present in the
drone, they are often absent in the ascending melodic configurations of the raga. The
raga tends to emphasize the seventh and third scale degrees, B and E; in fact, their
distance a perfect fourth apart sometimes suggests a kind of "tonicization" of E, with
the F sharp and G suggesting E natural minor. This creates a stunning bifurcated
tonal orientation, as the B and E seem to occasionally escape the gravitational pull of
the ever-present C-G drone of the tambura.
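The scale description above can be summarized in a short sketch (purely illustrative and not part of the original analysis; the mapping above a C "sa" follows the convention adopted in this chapter, and the equal-tempered note names merely label intervals that would be justly tuned in performance):

```python
# A minimal sketch of the pitch material of raga Yaman above a C tonic,
# as described in the text. Note names are labels only; in practice the
# ensemble tunes these intervals in just intonation.

# Hindustani solfege mapped to Western note names above sa = C.
yaman_svaras = {
    "sa": "C",    # tonic, present in the tambura drone
    "re": "D",
    "ga": "E",    # third scale degree, often emphasized
    "ma#": "F#",  # raised fourth, the Lydian inflection
    "pa": "G",    # fifth, present in the tambura drone
    "dha": "A",
    "ni": "B",    # seventh scale degree, often emphasized
}

drone = ["C", "G"]

# The ascending line characteristically begins on ni (B) and often omits
# the drone notes sa (C) and pa (G) in its melodic configurations.
ascending = ["B", "D", "E", "F#", "A", "B"]

# B and E lie a perfect fourth apart, hinting at a second tonal center on E.
emphasized = ["B", "E"]

print("drone notes:", drone)
print("characteristic ascent:", ascending)
print("emphasized dyad:", emphasized)
```

The omission of C and G from the ascent, against their constant presence in the drone, is what produces the bifurcated tonal orientation the text describes.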
The ensemble's performance of Raga Sundara exploited these features quite
ingeniously. In the alap section, the unmetered improvisational passage in which
the raga is gradually introduced in order to prepare the ear for the composition
proper, Young used occasional sustained notes to emphasize the competing tonal
allegiances of raga Yaman, including the perfect fourth dyad between the seventh
and third scale degrees. At one point, these pitches were actually sustained in four
parts across two octaves, combining with the tambura drone to create a rich chord.
After the introductory alap, the musicians initially presented the text of the
composition proper in traditional monophonic fashion against the drone. Later on,
however, the ensemble revealed its most striking innovation: in another bold
deviation from traditional North Indian monophony, they rendered the composition
in two-part harmony. The perfect fourth between the seventh and third scale
degrees, already emphasized ordinally in raga Yaman and occasionally sustained
during the alap, suddenly became audible as part of a dynamic harmonic
progression. Furthermore, as the various instruments proceeded in this
harmonic fashion, they followed lines in conjunct motion separated by sonorous
thirds and fourths. In the context of raga performance, this harmonization,
combined with the ethereal polytonal quality of raga Yaman, lent the ensemble
a breathtakingly lush quality with each return of the refrain.70
Young's program notes for the March 2009 concerts suggest that he had
begun to see the two previously separate strands of his musical life,
experimental ("minimalist") composition and Indian classical singing, as
intertwined. Or, as Young would describe it, the two strands revealed
themselves to have come from the same divine loom:

The parallels between the Kirana style . . . and my music with long
sustained tones, the focus on one work over long periods of time, and just
intonation, are remarkable, a set of shared concerns that seemingly
evolved independently but actually derived from a common source of
higher inspiration and resulted in a merging of East and West that now
continues with informed awareness.

Young then quotes the text of the evening's composition, Raga Sundara, which
seems to represent a reconciliation of Young's polymusical pursuits: Anahata
Nada. Raga Ahata. The inaudible vibrations of universal structure become audibly
manifest through Raga.
Young provided further evidence of this reconciliation in a series of concerts
in 2010, which featured Pandit Pran Nath's arrangement of "Hazrat Turkaman," a
traditional piece in raga Darbari, rendered in the kind of harmony Young had
introduced in the 2009 performances of Raga Sundara. He also started including
these raga-based performances as compositions in his works list.
Still, despite this late-career reconciliation of previously distinguishable
pursuits, and despite the earlier commonalities between Young's "Western" and
"Eastern" styles (such as a general similarity between his improvisational style in
The Well-Tuned Piano and the Kirana approach to alap), Young's most important
nonraga compositions avoid explicit borrowing from Indian music. The scale for The
Well-Tuned Piano, for example, finds no remotely similar scalar relatives in the
multitude of North Indian ragas. Beneath the surface stasis they share, his Dream
House sound environments bear little harmonic resemblance to a tambura drone.
The mystical discourse with which Young has surrounded his music, however,
has moved freely between the timeless tradition he hopes his music will inaugurate
and the established musical genealogy into which he, through Nath, had been
grafted. The mystical persona Young had already adopted before his first encounter
with the Kirana gharana found additional validation in Young's discipleship with
Pran Nath: the most devoted shishya, after all, one day takes the place of his guru.
The authority granted by just intonation's acoustical positivism, and the
concomitant psychophysiological path to transcendence that Young saw as the
promise of rational tuning, merged with the mystical and musical lineage brought by
Nath from India.
The cynic might call this double identity a case of hedging one's
cosmological bets: praying simultaneously to both the rational Western god of
number as well as the ethereal author of the Eastern OM (not to mention the
disparate deities of counterculture and Mormonism, LSD and LDS). These
spiritualities cohabitate comfortably in Young's universe; his religiosity is
cumulative. God, Young states,

. . . [is] like this multifaceted jewel. ... Each facet, of course, is
extraordinarily brilliant. If a prophet catches the light of this facet, it's just
like enlightenment, indeed. And maybe some prophets catch a few facets.
But my feeling is that there are so many facets that it's been difficult for
any prophet to get the whole picture, and that's why I think you have these
interesting overlaps and these interesting differences between so many
different spiritual paths.71

At the conclusion of the raga performance I attended in June 2003, the
performers were greeted with solemn silence rather than applause. Several
minutes passed before Young rose from his position on the floor, and even as the
audience got up to leave they moved toward the door slowly and quietly. One
woman, a former student of Young's, knelt, touched his feet in a traditional
Indian gesture of respect, and presented him a mango as an offering; he paused,
thinking, then placed it on the shrine against the wall, in front of the pictures of
his raga ancestry.
More recently, Young and Zazeela have sought to exert a guru's control not
only over the performance of Young's music, but over the musicological and
music-theoretical study of it as well. Just as this book neared production,
publicity materials appeared for a ten-day seminar on Young's and Zazeela's
work to be held in the summer of 2011. The workshop is to be led by Charles
Curtis (Young and Zazeela no longer travel), and held at Kunst im
Regenbogenstadl in Polling, Germany, the longtime site of installations of
Young's and Zazeela's work. The slated program features lectures, workshops,
screenings, and performances, all featuring or examining Young's and Zazeela's
works. Perhaps the most distinctive element of the publicity brochure is this
quote from Young, which appears in the second paragraph:

I am from the school that believes the guru should stand at the top of the
hill and throw rocks at the would-be students and disciples as they ascend
toward him. In this way, it is assured that only the most strong and serious
devotees will reach the top of the hill to learn the tradition and carry it on
into the future.72

To reiterate: this was not just one of Young's many strident statements about
his own importance; it was the text chosen to entice prospective attendees.
The reverence bestowed upon Young by his most devoted listeners and
students, the devotion he demands from those who would study his music (and,
it must be said, the working relationships with him that have soured) shed a
particular light upon the final refrain of the “Song to Guruji” from the 2003
performance: Allah-ji, give Guruji to me. Given the context of the performance,
there, in that space normally devoted to the continual and complex drones of
the Dream House, Young's words transcended the memorial nature of the song
and expressed more than affection for his deceased guru. Having responded to
what he considered his own divine mandate, and having founded what he
considered a new but nonetheless ageless musical tradition, Young spoke in the
words of both a shishya and a mystic. As I heard Young paying homage to Pran
Nath (and, by extension, to Ustad Abdul Wahid Khan, and Ustad Abdul Karim Khan),
and as I likewise observed Young gesture to his own shishya, Jung Hee Choi, I
perceived the makings of a ritual ordination, a mantle being bestowed, a musical
priesthood being passed on through a lineage of ancient authority. Young had not
found the guruji he had sought so fervently through song. He had become it.
ΑΛΘ = Φ

Composer/Researcher: Dimitri Voudouris

Composition: ΑΛΘ=Φ

Text-to-speech synthesis with computer processing for a 24-speaker
interactive robotic ensemble, with a designed space for performance.

Performance Space: ΘΩΡΑΞ

Duration: 25 min 29 sec

Composed: 2005-2008

INDEX

1 Neural Networking
2 Pathways of communication
3 Components of symbolic representation in thinking
4 Alexithymia [ΑΛΕΞΙΘΥΜΙΑ]
5 ΑΛΘ=Φ
6 TTS - Text-to-speech synthesis
  a Mbrola, Demosthenes, Praat
  b Text modelling
  c Problems encountered in TTS processing
7 Modular synthesis construction
  a Singing and expressive sound modules
8 Cross mapping
9 Composing strategies
  a Micro, Mezzo, Macro sound environments
10 ΘΩΡΑΞ
  a Origin
  b Construction

1 --- Neural network
Social Networks
The phenomenon of small-world networks suggests that there is a hidden principle at work that
organizes our world, a combination of randomness and order that has not been fully explained.
The small-world network theory turns out to be applicable to anything from social networks and power
networks to cell structure -- that is, the communication between specialized cells -- as well as the WWW.
The Internet and WWW as Small-World Networks
The Internet and World Wide Web are networks that have evolved without any centralized control --
potentially, everyone can connect a server to the network or create their own website. The small-world
architecture of these self-organizing networks suggests that this structure embodies a kind of
evolutionary principle: a particularly efficient form of communication (in the broadest sense) that allows
quick transmission of signals and keeps the network stable even if links are removed.
The Brain as Small-World Network
The neural network of the brain exhibits the same fundamental structure as that of social or computer
networks. The brain can be understood as an assembly of distinct modules, each of them responsible for
different tasks, such as speech, language, and vision. In neuroscience labs, magnetic resonance imaging
techniques -- which use radio waves to probe the pattern of blood flow in the brain, revealing how much
oxygen its various parts are using at any moment -- are used to see these modules in action, since blood
flow reflects the level of neural activity.
The processing centres of the brain reside in the cerebral cortex, which contains most of the brain’s
neurons. The modules of the brain have to communicate in order to coordinate overall brain activity. A
region of the human brain no larger than a marble contains 287,400,000 neurons. Each neuron is a single
cell with a central body from which numerous fibres project. The shortest fibres (dendrites) are the
neuron’s receiving channels; the longer fibres (axons) are the transmission lines.
Axons from any neuron eventually link up with dendrites of other neurons, and some axons link up with
neurons in neighbouring brain areas. The brain also has a small number of 'long-distance' axons.

Neural Networks, Evolutionary Computation, and Artificial Life and Intelligence Projects

Models of brain and behavioural processes are commonly applied to computer technologies and networks in
fields including computer science, neurobiology, and cognitive science.
The effort of building naturally intelligent systems has become its own area of research. Computational
neural networks or neurocomputers are designed to mimic the architecture of the brain. They are
information processing systems inspired by the structure of biological neural systems and mimic the
functions of the central nervous system and the sensory organs attached to it. Humans are estimated to
have 10 billion neurons, while the largest neurocomputers currently have a few million.

Computational neural networks are distinguished by the following characteristics:

• They are not programmed in computer languages as conventional computers are, but are trained to
behave in the way we want them to.
• Their neurodes communicate through interconnections with variable weights and strengths.
• The information in neural networks is processed by constantly changing patterns of activity.

As opposed to having a separate memory and controller like a digital computer, a neural network is
controlled by 3 properties:

• The transfer function of the neurodes.


• The structure of the connection among the neurodes.
• The learning law the system follows.
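The three controlling properties listed above (transfer function, connection structure, learning law) can be illustrated with a minimal single-neurode sketch. This is an assumption for illustration, not the text's own system: a step transfer function, two weighted interconnects, and the classic perceptron learning law, trained on the logical AND function.

```python
def step(x):
    """Transfer function of the neurode: fire (1) or stay silent (0)."""
    return 1 if x >= 0 else 0

def train(samples, epochs=20, rate=0.1):
    """Perceptron learning law: nudge each interconnect weight by the
    output error until the neurode classifies every sample correctly."""
    w = [0.0, 0.0]   # interconnect weights
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            w[0] += rate * err * x1
            w[1] += rate * err * x2
            b += rate * err
    return w, b

# Logical AND is linearly separable, so one neurode suffices.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
for (x1, x2), target in data:
    print((x1, x2), step(w[0] * x1 + w[1] * x2 + b))
```

Note that nothing here is "programmed" with explicit rules: the behaviour emerges from the weights, which is exactly the distinction the surrounding text draws between neural networks and conventional computers.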

Neural networks have 3 basic building blocks:

• Neurodes. (Artificial models of biological neurons)


• Interconnects. (Links between neurodes)
• Synapses. (Junction where interconnect meets neurode)

Neural networks deal with:

• Sensory tasks. (Such as the processing of visual stimuli)


• Motor tasks (controlling arm movements) or the decision-making by which sensory tasks drive
motor tasks.

Neural networks imitate behaviours and are better suited to processing at the cognitive level -- for
example:

• Motor control.
• Association.
• Speech recognition.

Small-world Architecture in the Structure of Human Language

Language, speech, and association are clearly important areas of an intelligent human system. The
architecture of a small world also seems to form the basic structure of human language, which has been
described in two ways:

• As the product of a homogeneous associative memory structure. Associationism describes the brain
as a homogeneous network of interconnected units, which are modified by a learning mechanism.
This mechanism records correlations among frequently co-occurring input patterns.
• As a set of genetically determined computational modules in which rules manipulate symbolic
representations. Rule-and-representation theories describe the brain as a computational device in
which rules and principles operate on symbolic data structures. (Some rule theories further propose
that the brain is divided into modular computational systems that have an organization that is
largely specified genetically.)

The above-mentioned two principles connect to the different models employed by neural networks (the
computational kind) and Artificial Intelligence.
Neural networks basically act as an associative memory, while AI attempts to generate heuristics or rules
to find solutions for problems of control, recognition, and object manipulation. The underlying assumption
is that problems can be solved by applying formal rules for symbol manipulation -- a task digital computers
handle well.
Neural networks attempt to solve these problems at the level of the structure of the machine itself: in
neural networks, symbolic processing is a result of the low-level structure of the physical system, whereas
AI imitates behaviours with explicit rules and symbols.
Genetic and evolutionary computing (GEA) comprises computer methods based on natural selection and
genetics, used to solve problems across the spectrum of human endeavour. Evolutionary computation and
artificial life are two relatively new but fast-growing areas of science. Some people believe that artificial
life and evolutionary computation are very distinct areas which only overlap in the occasional use of
evolutionary computation techniques, such as genetic algorithms, by artificial life researchers; others argue
that the two are very closely related, and that evolutionary computation is an abstracted form of artificial
life, since both strive to represent "solutions" to an environment, deciding which "solutions" get to
reproduce and how they reproduce.

2 --- Pathways in communication

Different levels of communication take place around us every microsecond. Below I will demonstrate
this phenomenon with three different examples.

Example one:
The cell

The blood vessels are the roads of the body: they deliver products to the cells in the form of
macronutrients or micronutrients.
Cell-to-cell communication represents how cells coordinate their physiological behaviours so as to create a
cooperative whole, one that is greater than the sum of its cellular parts. When cell-to-cell communication
is unsuccessful, the result can be a harmful absence of cooperation, defection, which between cells
within a multicellular organism we might recognize as tumours or cancer, as adult-onset diabetes, or as
developmental abnormalities.
The pathways of communication run through chemical signalling: hormones, by-products, local regulators,
signal-transduction pathways, reception, transduction, response, G-protein-linked receptors, tyrosine-
kinase receptors, protein kinases, etc.

Pathways of communication at micro and mezzo levels in the human body: Figure 1

Example two:
The factory

In a factory there is an entrance that faces the road. Trucks enter the factory from the road to the
delivery area. The raw materials to be processed are delivered to the appropriate parts of the factory and
the workers in the factory use various machines to manufacture particular products.
The factory needs a constant and reliable power supply. It also has strong, sturdy walls to protect against
the weather and robbery. The management team, headed by the boss, works in separate offices in the
factory so that regular, consistent instructions can be given to the workers. The quality control section is
also housed in the management offices. There is a health and safety surveillance team, along with
security, to maintain the well-being of the workers and to ensure that the workers, the property and the
products are safe. There is also a cleaning and maintenance team that ensures the factory is spotless and
that any waste products are transported out of the factory and taken to an area away from the factory for
adequate disposal. The management team also has to organize the regular delivery of products to the
factory. It is the job of the management team to ensure the factory has an adequate, though not
excessive or depleted, supply of raw materials to ensure maximum efficiency within the factory.
There are many factories supplying similar and different goods. There is also integrated, smooth
communication between all the factories in one area, between all the areas of one region, between
all the regions of one city, and between all the cities of one country.
The pathways of communication between micro, mezzo and macro levels are interdependent for the
system of the factory to function and survive.

Pharmaceutical Factory
Figure 2

Example three:
An accident and a robbery.

Figure 3

At 7 a.m. there is a collision at point X, which has resulted in serious traffic congestion,
indicated by the red arrows [pathway A is a two-way road, pathway B is one-way and pathway C is a
two-way road].
The traffic has come to a standstill for one hour; no vehicles have been able to cross the intersection in
this time.
On the micro level, pathway A has a police vehicle which cannot move out of the situation. The policeman
is communicating with the station; he is frustrated, for he needs to get to the supermarket up the road
that has been held up by robbers. This is at the mezzo level, as more than one person is involved in the
robbery; the accident is also on the mezzo level.
The ambulance needs to get to the intersection to attend to the injured. It also cannot get through [the
crew are at the micro level]; the hospital has contacted a helicopter to land at the scene of the accident at
point X.
The pathway of communication used by the police and ambulance is by cell phone or radio. This will
involve more police vehicles, both at the scene of the accident and at the place of the robbery, and a
rescue helicopter at the scene of the accident.
The drivers in the vehicles held up by the accident are communicating via cell phone with their
destinations.
On the macro level, the helicopter, the vehicles held up, the robbery, the ambulance, and the injured
people and bystanders delayed by both the accident and the robbery are all prevented from reaching their
points of destination. This example reflects how a situation of obstruction can lead to delay in the
pathways of communication when the levels are interdependent.

[Figure 4 is a diagram of email exchange between recipients/producers A, B and C: outgoing and incoming
email in text form.]

An interactive network of communication via email [involving chemical, electrical and digital
processing]: Figure 4

3 --- Components of symbolic representation in thinking


Images, Concepts and Language are three symbolic components that represent our way of thinking.
I will focus on Language, as AΛΘ=Φ was constructed around the impact that language has on the way we
and the computer think, and the control that it has over emotional change.
The mechanics of language involves a system of symbols that is employed for making sense of our world
and for communicating with others: the intercommunicative function of language.
When we intercommunicate via words, we reveal that we are thinking and what we are thinking about.
This leads us to another aspect: that of internalising speech as a means of ordering and clarifying our
thoughts and feelings about something. This aspect of internalising allows us to reflect on, explore and
understand situations better: inner and implicit speech.
Inner speech differs from the explicit use of language [inner speech is fragmentary consisting of key
words and phrases and simple grammatical constructions]. Inner speech is a by-product of thinking and it
is not itself a thought process.
Grammatical sensitivity involves prepositions and conjunctions [and, however, therefore, but and if] and
also functional signs [comma, full stop, question mark, semicolon, exclamation mark and accent signs];
man has applied this notion of language to music scores, for example. Grammatical sensitivity thus allows
us to learn, remember and manipulate more complex concepts; therefore thinking and linguistic
competences are identical.

Concepts are learned pre-verbally; thus words refer to, or serve as symbols for, concepts. Words cannot
be said to be the same as concepts: they are rather representative of concepts, and concepts are more
comprehensive than words.
Concepts can be intensional [connotation] or extensional [denotation], and the experience thereof and the
result will differ from person to person.
The denotative meaning of a word, e.g. 'send me an email', is a standardized meaning whereby the
members of a language community can understand one another and act accordingly. The denotative
meaning of a word is based on generally accepted rules.
The connotative meanings of words are subjective: e.g. pain and love are words serving as symbols of
concepts whose meanings the individual has built up subjectively from his or her own experiences.

Relationship between language and reality: -

Words are not identical with the reality they represent; the relationship between language and
reality is similar to the relationship between a map and the territory it represents.

Emotional Tension Threshold: -

The term emotional tension threshold refers to the amount of emotional tension a person can endure or
cope with before his effective functioning becomes impaired. It corresponds to the meaning of the term
elasticity limit.
The person’s basic tension level is dependent on homeostatic regulation in the autonomic nervous system.
The greater the degree of autonomic homeostasis or balance, the lower will be the intensity of emotional
tension and the higher will be the emotional tension threshold.
Other influences are namely: 1] emotional lability 2] temperament

! Emotional Lability: refers to the ease [speed and intensity] with which homeostasis in the
sympathetic and parasympathetic divisions of the autonomic nervous system becomes disturbed
because of synaptic malfunctioning at various levels of the nervous system. Impinging stimuli are
converted at receptor level to electrochemical impulses that are then conducted along afferent
paths to hierarchically higher levels in the nervous system. In the nervous system every synapse
offers a degree of resistance to impulse transmission. In some people the inhibition is greater than
in others; if the inhibition is low, the impulses are transmitted more rapidly, resulting in a quick and
intense reaction by the person. People with a labile nervous system react with greater speed and
intensity to a stressor than people with a stable nervous system. People with a labile nervous
system have a high basic tension level and a low emotional tension threshold; the reverse applies
to people with a stable nervous system. Various other factors influence the lability of a person:
the different stages in endocrine development, and physical exhaustion [fatigue, heavy workload,
lack of sleep, illness, chemical stimulants such as drugs, etc.].
! Temperament: refers to the relatively consistent and characteristic emotional nature, general
mood and reaction pattern of a person; these can be inherited attributes of the nervous and
endocrine systems. There are four dimensions of temperament:
general activity level, with the extremes of high activity and high passivity; emotionality, with
the extremes of high emotional perturbability and high emotional imperturbability;
social disposition, with the extremes of gregariousness and detachment; and impulsiveness, with
the extremes of self-control and lack of self-control. Socialization and learning can regulate the
manifestations of the person's temperament potential. How temperament tendencies are
manifested depends largely on the interaction between the person's genotype and environmental
influences. In the computer network, similarly, the sensitivity of the speaker receptors can vary.

4 --- Alexithymia [ΑΛΕΞΙΘΥΜΙΑ]
Definition: A condition where a person is unable to describe emotion in words.

We do not need to search for extreme examples to understand that there is a division between thought
and language. We experience such events every day. Writing papers, we struggle to find the perfect words
to convey ideas; feeling an intense passion for another person, we struggle to find simple words to
express our depths of emotion. In the wake of horrendous tragedy, we are without words to describe the
pain of the “horrible sights [we] have witnessed.” The way in which we experience the world and how we
choose to communicate those experiences are two very different aspects of the human mind. An inability
to find words doesn’t make us stupid or inarticulate or even less likely to experience “human enjoyment,”
it merely makes us human.

There are two types of alexithymia: Primary, which has a physical cause such as a genetic abnormality or
an injury, and Secondary, which occurs in reaction to severe psychological trauma, whereby a patient
suppresses painful emotions as a temporary defence against the trauma; when the psychological stressor
is removed, the alexithymia disappears.
It is in Secondary alexithymia where my research lies, which led me to analyse the socio-cultural
implications that society might have on an individual.
Many people in society as a whole have difficulty talking about their feelings, whether they are
alexithymic or not. Materialism -- money, appearances, grades, test scores -- offers examples of what Man
places high value on, but not feelings. The feelings of people from school level up to the workplace need
to be valued. Our society is so dysfunctional, and we are in so much pain most of the time, that we could
not handle it if we stopped to either really feel our pain or really talk about it.
People know they can't handle their real feelings, so they learn not to talk about them. Adults don't talk
about them, so how could we expect children or teens to learn to? Living in a progressively
alexithymic society, Man constantly alienates himself from emotional engagement. The
computerized world that he has created is a mere mirror image of this, the extended self of Man. This
computerized world is emotionless: when text-to-speech programs read text, they read with no emotion,
yet Man is continually striving to create emotion in these programs. Does this action mean that Man is
trying to find answers within, or trying to combat and even reduce alexithymia?
I identify alexithymia more as a psychological state of mind. The physiological state can rather be termed
aphasia, an inability to express oneself and to understand. I have chosen language as a tool of failed
communication, where language fails emotion in self-expression.

5 --- AΛΘ=Φ
A= lack
Λ= word [lexis]
Θ= emotion [thymos]
Φ= sound [phone]

AΛΘ=Φ is a comparative study of pathways in communication between Man and Machine and is composed
using fragments of processed speech synthesis [TTS]. The reason is mainly speech quality, which is a
multi-dimensional term; the evaluation method must be chosen carefully to achieve the desired results. I
created pre-linguistic expressions, rather than actual words, in the areas of emotion, i.e. pain, frustration,
anxiety, confusion, love, etc.: expression that is not predetermined by language but is a precursor, an
expression that language fails to address.
AΛΘ=Φ attempts to attach a language to emotions, an area where normal language fails, while at the
same time attempting to address an emergency in a world where imperfection is becoming less tolerable
due to social pressure, and where perfection is measured on a materialistic and superficial level. At this
level both man and machine are interdependent.
Has man become psychologically weak? Is man not developed enough to survive this ongoing pressure
that both society and technology offer? To attempt to answer those questions we need to look at
another question: how can man remove himself from this reality when his state of conflict is created by
none other than himself?

6 --- TTS-Text to speech synthesis

TTS, short for Text-To-Speech, is the creation of audible speech from computer readable text.

AΛΘ=Φ used the following text to speech synthesis programs:

Mbrola

The aim of the MBROLA project, initiated by the TCTS Lab of the Faculté Polytechnique de Mons (Belgium),
is to obtain a set of speech synthesizers for as many languages as possible, and provide them for non-
commercial applications. The ultimate goal is to boost academic research on speech synthesis, and
particularly on prosody generation, known as one of the biggest challenges taken up by Text-To-Speech
synthesizers for the years to come.
Central to the MBROLA project is MBROLA, a speech synthesizer based on the concatenation of diphones.
It takes a list of phonemes as input, together with prosodic information (duration of phonemes and a
piecewise linear description of pitch), and produces speech samples on 16 bits (linear), at the sampling
frequency of the diphone database used (it is therefore NOT a Text-To-Speech (TTS) synthesizer, since it
does not accept raw text as input). This synthesizer is provided for non-commercial, non-military
applications only. Diphone databases tailored to the Mbrola format are needed to run the synthesizer.
French voices have been made available by the authors of MBROLA, and the MBROLA project has itself
been organized so as to incite other research labs or companies to share their diphone databases.
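MBROLA's input, as described above, is a list of phonemes with durations and piecewise linear pitch: in its ".pho" file format, each line carries a phoneme name, its duration in milliseconds, and optional pairs of (position within the phoneme in %, pitch in Hz). A small sketch of generating such input follows; the phonemes and values are illustrative, not taken from the score:

```python
def to_pho(segments):
    """Format (phoneme, duration_ms, [(pos_percent, pitch_hz), ...])
    tuples as MBROLA .pho lines: one phoneme per line, its duration in
    milliseconds, then optional piecewise-linear pitch points."""
    lines = []
    for phoneme, dur, pitch_points in segments:
        parts = [phoneme, str(dur)]
        for pos, hz in pitch_points:
            parts += [str(pos), str(hz)]
        lines.append(" ".join(parts))
    return "\n".join(lines)

# An illustrative pre-linguistic fragment: a vowel whose pitch climbs,
# the kind of non-word expressive gesture the composition works with.
fragment = [
    ("_", 100, []),                      # silence
    ("a", 300, [(0, 120), (100, 220)]),  # pitch rises from 120 Hz to 220 Hz
    ("_", 100, []),
]
print(to_pho(fragment))
```

Because the input is phonemes plus prosody rather than raw text, the pitch contour of each fragment can be shaped directly, which is what makes the format useful for expressions that are "not predetermined by language".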

Demosthenes speech composer Version 2

DEMOSTHeNES Speech Composer is a general-purpose, multilingual and polyglot software text-to-speech
(TtS) system that supports the Greek language. DEMOSTHeNES targets the delivery of intelligible and
human-like speech from a wide variety of e-text sources. Its open, component-based architecture
offers great flexibility, customization and expandability.
DEMOSTHeNES is appropriate for multimedia applications (spoken encyclopaedias, presentations, etc.),
voice technology applications (e.g. telephony services) and aids for the disabled, and it can be
embedded in or linked to other applications to provide spoken output. Its novel design is very efficient
(approx. 200 times real time, in version 2), and thus it can serve many channels in server applications.
Moreover, support for several interfaces, such as MS-SAPI, provides easy linking to other applications.

Praat

A program for speech analysis and synthesis, written by Paul Boersma and David Weenink at the
Department of Phonetics of the University of Amsterdam.

b --- Text modelling:

Text was typed into Mbrola and Demosthenes TTS programs and the results were imported into Praat.

c --- Problems encountered in TTS processing:

Numerals

Digits and numerals must be expanded into full words, as must fractions; dates are also problematic.

Abbreviations

Abbreviations may be expanded into full words, pronounced as written, or pronounced letter by letter.
For example, kg can be either kilogram or kilograms depending on the preceding number, St. can be saint
or street, Dr. can be doctor or drive, and ft. can be fort, foot or feet.

Special characters

Special characters and symbols, such as '$', '%', '&', '/', '-' and '+', also cause special kinds of problems.
In some situations the word order must be changed. For example, $71.50 must be expanded as seventy-one
dollars and fifty cents, and $100 million as one hundred million dollars, not as one hundred dollars million.
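The numeral, abbreviation, and symbol expansions described above can be sketched as a tiny text normalizer. The rules and word lists here are hypothetical simplifications of what a real TTS front end does, covering only single digits, one unit abbreviation, and the dollar sign:

```python
import re

ONES = ["zero", "one", "two", "three", "four", "five",
        "six", "seven", "eight", "nine"]

def expand(text):
    """Toy TTS text normalizer: expand '$N', number-sensitive 'kg',
    and remaining single digits into full words."""
    # $N -> "N dollars": the word order changes, as with "$100 million".
    text = re.sub(r"\$(\d)", lambda m: m.group(1) + " dollars", text)
    # Unit abbreviation agrees in number with the preceding digit.
    text = re.sub(r"(\d) kg", lambda m: m.group(1) +
                  (" kilogram" if m.group(1) == "1" else " kilograms"), text)
    # Spell out any remaining single digits.
    text = re.sub(r"\d", lambda m: ONES[int(m.group(0))], text)
    return text

print(expand("1 kg"))  # singular expansion
print(expand("5 kg"))  # plural expansion
print(expand("$7"))    # reordered expansion
```

Even this toy version shows why normalization is context-sensitive: the same token "kg" expands differently depending on the digit before it.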

Pronunciation

Some words, called homographs, cause perhaps the most difficult problems in TTS systems. Homographs
are spelled the same way but differ in meaning and usually in pronunciation (e.g. lives). The word
lives is, for example, pronounced differently in the sentences "Three lives were lost" and "One lives to
eat". The pronunciation of a certain word may also differ due to contextual effects: some sounds may be
either voiced or unvoiced in different contexts. For example, the phoneme /s/ in the word dogs is voiced,
but unvoiced in the word cats.

Prosody

Finding correct intonation, stress, and duration from written text is probably the most challenging
problem; these features may be considered the melody, rhythm, and emphasis of the speech at the
perceptual level. Intonation means how the pitch pattern, or fundamental frequency, changes during
speech. The prosody of continuous speech depends on many separate aspects, such as the meaning of
the sentence and the speaker's characteristics and emotions. The prosodic dependencies are shown in
Figure 5. Written text usually contains very little information about these features, and some of them
change dynamically during speech. Timing at sentence level, or grouping words into phrases correctly, is
difficult because prosodic phrasing is not always marked in text by punctuation, and phrasal accentuation
is almost never marked. If there are no breath pauses in speech, or if they are in the wrong places, the
speech may sound very unnatural, or the meaning of the sentence may even be misunderstood. For
example, the input string "John says Peter is a liar" can be spoken in two different ways, giving two
different meanings: "John says: Peter is a liar" or "John, says Peter, is a liar". In the first sentence Peter
is the liar, and in the second the liar is John.

[Figure 5 shows the prosodic dependencies: speaker characteristics (gender, age), feeling (anger,
sadness, happiness) and the meaning of the sentence (neutral, imperative, question) all determine the
prosody, i.e. fundamental frequency, duration and stress.]

Prosodic dependencies: Figure 5

7 --- Modular synthesis construction

The sounds generated from the TTS were imported into a modular synthesis program.

a --- Singing and expressive sound modules

Basic modular synthesis modules, including a granular sampler, filters, envelopes, an LFO, an ADSR
envelope with vibrato, and an amplifier, were created and used to produce singing voices in sound
fragments of 10 to 60 seconds in duration.
Each sound fragment was analysed in Praat and manipulated so as to create expression and phonetics.

8 --- Cross mapping

Plotting paper was used to map and link sound commands, where the X-axis represents sound layers and
the Y-axis represents time. The plotting paper represents a puzzle where each block can contain up to 20
sound fragments.

Plotting paper with sound fragments: Figure 6

9 --- Composing Strategies
a --- micro, mezzo, macro sound environments

Each one of the 20 sound fragments can have one of three possibilities assigned to it: a negotiable or
non-negotiable presence can be attributed to each environment.

micro  mezzo  macro
  2      1      1
  1      2      1
  1      1      2

Various possibilities were measured; each row sums to 4, but the sound would differ in each of the rows.

micro  mezzo  macro
 0.5    1.5    2
 1.5    0.5    2
  2     1.5    0.5
 0.5     2     1.5
 1.5     2     0.5
  2     1.5    0.5
  2     0.5    1.5

Again, each row sums to 4, but the sound would differ in each of the rows.

Each sound fragment has micro, mezzo and macro components. Thus if we increase the micro component,
the mezzo and macro components change; similarly, if we increase the mezzo component, both the micro
and macro components change, and if we increase the macro component, both the micro and mezzo
change. Each instance shows that the environments are interdependent.
This exercise played an important role in the deployment and positioning of the speakers in the final
instance.
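The constraint behind the two tables above (weights drawn from a fixed set, with each micro/mezzo/macro row summing to 4) can be enumerated mechanically. The sketch below is an illustration of that constraint, not the composer's actual working method; the weight sets mirror the two tables:

```python
from itertools import product

def rows_summing_to(weights, total=4.0):
    """All (micro, mezzo, macro) assignments drawn from `weights` whose
    sum is `total`: presence is traded between the three environments
    while the overall budget stays fixed."""
    return [r for r in product(weights, repeat=3)
            if abs(sum(r) - total) < 1e-9]

integer_rows = rows_summing_to([1, 2])        # first table's weight set
half_rows = rows_summing_to([0.5, 1.5, 2])    # second table's weight set
print(integer_rows)
print(half_rows)
```

The enumeration makes the interdependence concrete: raising one environment's weight forces the other two to shrink so the total stays at 4.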

10 --- ΘΩΡΑΞ
a --- Origin

About the human thorax: the human thorax extends from the neck to the diaphragm. The skeleton of
the thorax or chest is an osseo-cartilaginous cage, containing and protecting the principal organs of
respiration and circulation. It is conical in shape, being narrow above and broad below, flattened from
before backward, and longer behind than in front.

The diagram above shows us the histology and muscular dispersal of the thorax. Figure 7

The ΘΩΡΑΞ is a spherical structure whose function is to centralize the senses; we could refer to it as an
encephalic centre where the mechanical and the cellular meet. It is a performance space designed for the
performance of ΑΛΘ=Φ.
The ΘΩΡΑΞ is not a psychological cross-section of self-analysis but a collective space that encompasses
society as a whole.
By drawing a triangle from the base of the diaphragm to the tip of the skull in a human body, and creating
14 asymmetric spheres with a central axis not extending beyond the perimeter of the triangle, we develop
the ΘΩΡΑΞ.

b --- Construction

The structural representation and mechanics of ΘΩΡΑΞ are as follows:

Dimensions:

Length = 20meters
Width = 20meters
Height = 23meters

Metal ring structure:

14 metal rings of different angular placements in space, each 20 meters in diameter; supportive
crossbars will be used to enhance the stability of the structure.

Floor:

An elevated surface cut to the size of the bottom metal ring


Floor = Diaphragm

Top circle:

The top circle represents the tip of the skull. This circle is closed, not allowing sound to escape.

ΥΜΗΝ [membrane] to cover the ΘΩΡΑΞ:

Fibre must consist of at least 50% rubber and 50% synthetic material, a petroleum by-product.

Fibre = Muscle

About nature of structure and fibre:

The circular shape of ΘΩΡΑΞ allows sound waves to move in a circular manner within the space.

The semi-synthetic fibre reduces the weight of the material. The fibre needs to offer some elasticity, so
that sound is absorbed at a particular height but then released a few meters further up the structure. Like
the thorax, whose elasticity due to muscular contraction increases the pressure of air released from the
lungs through the vocal cords and affects the duration of sound produced versus the lung capacity, the
ΘΩΡΑΞ allows the sound energy produced in the space to vary in intensity in direct relation to air-pressure
dispersion and compression [as more molecules collide at a given time in a given space, the pressure will
increase].

Public seating and space layout in ΘΩΡΑΞ:

• 50 people per performance are allowed in the performing space.
• The public is allowed to move through the space during a performance.
• All entrances and exits will be closed during a performance.
• The space will have no formal seating available.
• A few benches are positioned in and amongst the partitions in the space.
• Poles with speaker-monitor attachments will be present.
• In figure 8, HSDF, ISDF and OSDF will be clearly marked on the floor for the public to see where the
field spatialization of the sound will be.
• The space will mainly be in the dark, except for slight illumination on the sides of the structure,
extending up the cone, that will be manipulated by the lighting technician.

Figure 8

Deployment of robots:

In ΘΩΡΑΞ the ground plan for the deployment of the robots is conducted by means of a network, shown in
figure 9

Figure 9

Copyright © 2008 Dimitri Voudouris. All rights reserved

ΑΛΘ = Φ

Composer/Researcher

Dimitri Voudouris

Composition

ΑΛΘ=Φ

Text-to-speech synthesis with computer
processing for a 24-speaker robotic ensemble,
with a designed space for performance.

Performance Space

ΘΩΡΑΞ

Duration

25min29sec

Composed

2005-2008

INDEX Page

11 Robotic ensemble 6

a structure and specifications 6

12 Sound projection 8

a frequency spectrum 8
b sound elasticity, density and velocity of transmission 8
c Rubber elasticity 9
d Sound mixing console 10
e Execution of ΑΛΘ = Φ 12

13 Lighting Technician 12

14 References 13

Figure 10 shows the connections of the speakers and how they follow a central route into the space via
neurodes of various weights and strengths.

Figure 10

At Unit B is a sound engineer, and at Unit A a computer engineer and robot technician. Signals reflecting
the behavioural characteristics of the robots are received and passed to the computer engineer, whose
system then decides whether to raise or lower the volume and which manoeuvre possibilities to exercise
for each of the robots; the information is passed on to the sound engineer, who releases the sound
spectrum.

The 14-ring structure of ΘΩΡΑΞ together with a speculative sound performance: Figure 11

11 --- Robotic ensemble
a --- Structure and Specifications

Vibration Sensor: Figure 12

Seismic, low-frequency, ceramic flexural ICP accelerometer, 10 V/g, 0.03 to 500 Hz, top exit, 2-pin connector.
Broadband Resolution: (1 to 10000 Hz) 0.5 µg (5.0 µm/s²)
Electrical Connector: 2-Pin MIL-C-5015
Electrical Connection Position: Top
Weight: 22 oz (624 g)

Acoustic Pressure Sensor: Figure 13

Measurement Range: (±5 V output) 3.33 psi (181 dB)
Sensitivity: (±15%) 1500 mV/psi (217.5 mV/kPa)
Low Frequency Response: (-5%) 5 Hz
Resonant Frequency: ≥ 13 kHz
Electrical Connector: Integral Cable

Figure 14

1. Vibration and acoustic pressure sensors detect the ground noise and adjust the sound volume accordingly (each speaker can function as an independent unit).
2. The speaker head can rotate 180 degrees on its axis.
3. The speaker head can tilt forward and backward through 90 degrees.
4. Via hydraulic action the speaker can collapse to half its original height.
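The movement capabilities listed above can be sketched as a simple state model. This is an illustrative sketch: the class, its default height and the clamping behaviour are my assumptions, not part of the score.

```python
from dataclasses import dataclass

@dataclass
class SpeakerRobot:
    """Minimal sketch of one robotic speaker unit (names are illustrative).
    Angles are in degrees, heights in metres."""
    full_height: float = 2.0   # assumed full height of the unit
    rotation: float = 0.0      # head rotation about its axis
    tilt: float = 0.0          # forward/backward tilt of the head
    collapsed: bool = False    # hydraulically lowered or not

    def rotate(self, angle: float) -> float:
        # The head can rotate 180 degrees on its axis: clamp to [-180, 180].
        self.rotation = max(-180.0, min(180.0, angle))
        return self.rotation

    def tilt_head(self, angle: float) -> float:
        # The head tilts forward/backward through 90 degrees: clamp to [-90, 90].
        self.tilt = max(-90.0, min(90.0, angle))
        return self.tilt

    @property
    def height(self) -> float:
        # Hydraulic action collapses the unit to half its original height.
        return self.full_height / 2 if self.collapsed else self.full_height

robot = SpeakerRobot()
robot.rotate(270.0)      # out-of-range request is clamped to 180
robot.collapsed = True   # hydraulically lowered to half height
```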

12 --- Sound Projection
A sound projectionist is required who is completely familiar with the score and who has ample experience in mixing electronic music. The sound projectionist chooses the collaborators from the sound equipment company and inspects the performance venue ahead of time. At that time it must be arranged that no listener is to enter the performance space before the performance, or to enter the control booth where the multi-track equipment is situated. The sound projectionist decides all details of the sound projection, organises the necessary rehearsals with the equipment and directs the technical installation [testing that the speakers are performing correctly].
In addition to two sound technicians, he needs a musical assistant who checks the acoustics of all the positions in ΘΩΡΑΞ and establishes how clearly the sound directionality can be heard throughout the space.
It is his responsibility that the listener hears a balanced whole.

a --- Frequency spectrum


The sound spectrum is one of the determinants of the timbre or quality of a sound or note: the relative strength of the partials (harmonics and other frequencies, usually above the fundamental). Care was taken to take all of the above into consideration during the sound construction of ΑΛΘ = Φ.

b --- Sound elasticity, density and velocity of transmission


Sound waves travel through any medium at a velocity that is controlled by the medium; varying the frequency and intensity of the sound waves does not affect the speed of propagation. The elasticity and density of a medium are the two basic physical properties that govern the velocity of sound through it.
Elasticity is the ability of a strained body to recover its shape after deformation, as from a vibration or compression. The measure of elasticity of a body is the force it exerts to return to its original shape.
The density of a medium or substance is its mass per unit volume. Raising the temperature of the medium (which decreases its density) has the effect of increasing the velocity of sound through the medium.
The velocity of sound in an elastic medium is expressed by the formula

v = √(E/ρ)

where E is the elastic modulus of the medium and ρ is its density.

Even though solids such as steel and glass are far denser than air, their elasticities are so much greater that the velocities of sound in them are about 15 times greater than the velocity of sound in air. Using elasticity as a rough indication of the speed of sound in a given medium, we can state as a general rule that sound travels faster in harder materials such as steel [the metal rings in the construction of ΘΩΡΑΞ], slower in liquids, and slowest in gases. Density has the opposite effect on the velocity of sound; that is, with other factors constant, a denser material (such as lead) transmits sound more slowly.
At a given temperature and atmospheric pressure, all sound waves travel in air at the same speed: the velocity of sound through air at 0°C is 1,087 feet per second, though for practical purposes the speed of sound in air may be taken as 1,100 feet per second.
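As a rough numerical check of v = √(E/ρ), the sketch below uses representative handbook values for steel and air; these figures are my assumptions, not values from the score, and the steel/air ratio comes out near the factor of 15 quoted above.

```python
import math

def sound_velocity(elastic_modulus: float, density: float) -> float:
    """v = sqrt(E / rho): velocity of sound in an elastic medium, in m/s."""
    return math.sqrt(elastic_modulus / density)

# Representative handbook values (assumptions, not from the score):
v_steel = sound_velocity(200e9, 7850.0)   # Young's modulus and density of steel
v_air = sound_velocity(1.42e5, 1.225)     # adiabatic bulk modulus and density of air

ratio = v_steel / v_air                   # roughly the factor of 15 quoted above
```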

8
Figure 15 - Comparison of the Velocity of Sound in Various Mediums

MEDIUM          TEMPERATURE (°C)   VELOCITY (FT/SEC)
AIR             0                  1,087
AIR             20                 1,127
CARBON DIOXIDE  0                  856
HYDROGEN        0                  4,219
STEEL           0                  16,410
STEEL           20                 16,850

Elasticity is involved whenever atoms vibrate. A sound wave consists of energy that pushes atoms closer together momentarily. The energy moves through the atoms, causing the region of compression to move forward; behind it, the atoms spring further apart as a result of the restoring forces. The speed with which sound travels through a substance depends in part on the strength of the forces between its atoms. Strongly bound atoms readily affect one another, transferring the "push" due to the sound wave from each atom to its neighbour. Therefore, the stronger the bonding force, the faster sound travels through an object.
c --- Rubber Elasticity
Rubber bands are made from polymers whose chains are crosslinked to form a network. The amorphous phase is also said to be rubbery; constrained by the surrounding crystals, it cannot be said to be liquid-like. For rubber bands it is the crosslinks that determine the properties: the crosslinks provide a 'memory'. When the network is stretched, entropic forces come into play which favour retraction, returning the network to its original unstretched/equilibrium state.

Figure 16

The loss of entropy upon stretching means that there is a retractive force driving recovery when the external stress is removed; this is why a rubber band returns to its original shape.
Network elasticity
Each chain does not deform individually, however, but as part of a network. This means that for entropic elasticity (unlike enthalpic elasticity) the modulus increases with temperature, and the material gets stiffer rather than softer. As the crosslink density goes up, the modulus goes up: a highly crosslinked rubber is stiffer than a lightly crosslinked one.
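The entropic behaviour described here can be sketched with the ideal (neo-Hookean) network formula, σ = n·kB·T·(λ - 1/λ²): stress grows with temperature and with crosslink (chain) density. The chain densities used below are illustrative assumptions.

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def entropic_stress(chains_per_m3: float, temperature_k: float, stretch: float) -> float:
    """Nominal stress of an ideal (neo-Hookean) rubber network, in Pa:
    sigma = n * kB * T * (lambda - 1/lambda^2)."""
    return chains_per_m3 * K_B * temperature_k * (stretch - 1.0 / stretch ** 2)

n = 2.0e26                                # crosslinked chains per m^3 (assumed)
cold = entropic_stress(n, 280.0, 2.0)     # band stretched to twice its length
hot = entropic_stress(n, 340.0, 2.0)      # same stretch, warmer band: stiffer
```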

d --- Sound mixing console

Sound Mixer: Mackie T24, 24 channel Digital Mixer Figure 17

Sound Mixing Console for live performance-Figure 18

e --- Execution of ΑΛΘ = Φ

• Each speaker identifies its position in the space and sends a signal back to the robot engineer.
• During the performance each speaker analyses the situation in a 0.5-metre radius and decides, by negotiating with the computer, whether to increase or decrease its volume.
• Information regarding the volume and manoeuvring of the robots is passed to the sound engineer, who is responsible for producing the sound spectrum.
• The music in ΑΛΘ = Φ should be played loud.
• The sound direction can be altered by up/down or left/right movements of each speaker.
• The sensors play a vital role in producing the final product.
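The volume negotiation between each speaker and the central computer might be sketched as follows. The target level, step size and function names are assumptions, not specified in the score; the idea is only that local sensor readings propose a change which the computer clamps before the sound engineer acts on it.

```python
def propose_volume_change(local_noise_db: float, target_db: float = 85.0) -> float:
    """A speaker samples its 0.5 m radius and proposes a change (in dB)
    toward an assumed target level; the numbers are illustrative."""
    return target_db - local_noise_db

def negotiate(current_volume_db: float, proposal_db: float,
              max_step_db: float = 3.0) -> float:
    """The central computer clamps each proposal to a safe step before the
    result is passed on to the sound engineer."""
    step = max(-max_step_db, min(max_step_db, proposal_db))
    return current_volume_db + step

volume = 80.0
# A quiet local reading proposes +10 dB, clamped to a +3 dB step:
volume = negotiate(volume, propose_volume_change(75.0))
```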

13 --- Lighting Technician

Special lighting is necessary to illuminate the space and create a cellular appearance in ΘΩΡΑΞ; the lighting technician also needs an assistant. He/she will be seated next to the sound engineer in the control booth.

14 --- References
Guyton, A. [Link] Physiology, Second Edition. Department of Physiology and Biophysics, University of Mississippi Medical Center, pp. 393-440. W. B. Saunders Company, 1977. ISBN 0-7216-4383-3. [Link]

Stryer, L. Biochemistry, Second Edition. Stanford University. W. H. Freeman and Company, San Francisco, 1975, 1981. ISBN 0-7167-1306-3. (Conformation and dynamics; generation and storage of metabolic energies.)

Pinker, S. The Language Instinct: How the Mind Creates Language. Harper Collins, 1994.

Sciamarella, D. and D'Alessandro, C. "On the acoustic sensitivity of a symmetrical two-mass model of the vocal folds to the variation of control parameters." Acta Acustica 90, pp. 746-761, 2004.

Sciamarella, D. and D'Alessandro, C. "Stylization of glottal-flow spectra produced by a mechanical vocal-fold model." Proc. of Interspeech 2005, p. 2149.

Graps, A. "An introduction to wavelets." IEEE Computational Science and Engineering 2(2) (Summer 1995), pp. 50-61.

The Engineer's Ultimate Guide to Wavelet Analysis. [Link]

Ohala, J. J. "Aerodynamics of phonology." Department of Linguistics, University of California, Berkeley, CA 94720, USA.

Suen, C.-Y. and Beddoes, M. P. "The silent interval of stop consonants." Language and Speech 17 (1974), pp. 126-134.

Copyright © 2008 Dimitri Voudouris. All rights reserved

ONTA
Composer / Researcher:

Dimitri Voudouris

[*1961]

Composition:

Voice

Alecia Van Huysteen

and

Electronics

Duration:

28 min 10 sec

Composed:

2003-2005
PART
A

Content

Abstract
Introduction
Organic and inorganic environments
Micro environment
Meso environment
Macro environment
Abstract

ONTA explores the use of public space and everyday behaviours for creative purposes, in particular the city as an interface and mobility as an interaction model for the composition. The city is an extension of Man: a comparative study was carried out placing an inorganic world model [the city] in direct comparison with an organic model [Man], noting the implications of the micro, meso and macro environments in the construction of both Man and the city. A multi-disciplinary design process resulted in the implementation of a wearable, context-aware prototype.

Introduction

This project discusses the daily tensions encountered in the city and focuses on the energy building up, exploding or imploding from these tensions, be it organically or inorganically created; this promotes musical creativity integrated into everyday life, familiar places and natural behaviours. ONTA was constructed by sensing bodily and environmental parameters. Considering the city as an interface and mobility as musical interaction, everyday experiences become an aesthetic practice: encounters, events, architecture, weather, gesture, (mis)behaviours all become means of interaction. ONTA is not a precise documentation of a sampled recording of city life; rather, it concentrates on addressing the psychological encounters experienced by Man in attempting to express himself within the parameters described below. Through the use of voice and electronics I "created a world" where most of such encounters are addressed.
I will first outline the design methods and issues of the comparative study of Man and the City [I used Johannesburg, South Africa, the city I live in, as reference material; ONTA is not a direct reflection of Johannesburg but a universal generalisation of a personal encounter with a universal city] and the implementation of the prototype in my approach to the musical interaction and composition of the city.

Organic and Inorganic environments

Micro environment

Diagram 1
Diagram 2

Organelle          | Function                                           | City operator              | Function
cell membrane      | controls movement of materials in/out of the cell; | Government offices /       | protection and support of ministers and the CEO's
                   | barrier between the cell and its environment;      | Private companies          | associated with companies who engage in what is
                   | maintains homeostasis; support and protection      |                            | legislated
nucleus            | controls cell activities                           | Government                 | legislation is passed and executed
nuclear membrane   | controls movement of materials in/out of nucleus   | Security                   | allows representatives and affiliates from each
                   |                                                    |                            | party to move in and out of parliament
cytoplasm          | supports/protects cell organelles                  | Economy                    | monetary system from government for the support
                   |                                                    |                            | and protection of infrastructure
endoplasmic        | carries materials through the cell                 | Transport services         | transport of material needed for building a city,
reticulum (E.R.)   |                                                    |                            | by road, sea and river, air and railroad
ribosome           | produces proteins                                  | Industry                   | cement, tar, steel and all material necessary for
                   |                                                    |                            | building a city
mitochondrion      | breaks down sugar molecules into energy            | Powerhouse                 | coal to produce electricity for the city
golgi apparatus    | secretory organ                                    | Industrial [dispatch]      | production and dispatch of building material
lysosome           | selectively breaks down larger food molecules      | Mining [for raw materials] | generates a sound monetary system
                   | into smaller molecules                             |                            |

Diagram 3

The cell in Diagrams 1 and 2 is discussed, with all its organelles, in detail in Diagram 3.
Diagram 4

Diagram 4 indicates how the human body's cell enzymes and chemical processes function in energy production (through the conversion of ADP to ATP) and in gene formation, and how this extends to the city's building blocks: the production of bricks, cement, steel and tar for road construction, and the building of computer systems which can run industry, banks, traffic, telecommunication, shopping, business, security etc.

Diagram 5

Diagram 5 shows how the human autonomic nervous system is made up of both the sympathetic and parasympathetic nervous systems, and how this relates directly to the city's communication network, with IT and computer network systems branching out into all parameters of the city. Communication is made easy through phone/fax and electronic mail, which in turn interact with the community, security, ATMs and banking, industry, government, transport services, the commercial sector, television, radio, traffic lights etc. Just as the body's nervous system monitors everything relating to the functioning of the body, with strategic decisions concerning both the internal environment and the immediate surroundings depending on back-and-forward communication with the brain, so IT deals with matters of communication both within its internal environment and with the surrounding environment of the city.
Meso environment

Diagram 6

Diagram 6: metabolism comprises the chemical processes that make it possible for cells to continue to live. Food and liquid beverages ingested by the body are broken down and transported to different areas of the body, where they can be used for storage or energy production. In an inorganic environment the processes are slightly different: instead of metabolism, chemical processes are present to convert e.g. coal into electrical energy, and in industry and manufacturing plants chemical processes are necessary to produce the end product.

Diagram 7

In Diagram 7 the police, security, army and navy defend the city by protecting the borders of the country, preventing crime, and patrolling the air and the seas, the places where an intruder can infiltrate. In an organic environment the immune system protects the body, where the defence is the prevention of bacterial, viral and fungal infection.
Diagram 8

Diagram 8 focuses on the special senses. In an organic medium we can point to sweet, sour, bitter, salty, touch, vision and smell; in an inorganic environment, however, computerized sensors are present, such as CCTV, touch screens, telecommunication (voice detection and voice recognition systems), electronic communication, etc. These in turn stimulate the vision, hearing and touch senses.

Diagram 9

Diagram 9 shows temperature regulation. In an organic environment it is controlled by sweating, insulation and the hypothalamic centre in the brain; in an inorganic environment, through the use of air conditioners in well-insulated buildings that can store internal heat for the winter months and keep the internal temperature cool for the summer months.
Macro environment

Diagram 10

Diagram 10: the circulation system in the human body acts to transport oxygen, supply nutrients and remove waste material from the body; the circulation is also important in the formation of clots after injury to the body. In an inorganic environment what circulates is the different transport services, using road, rail, aviation and water to supply the different areas of the city with defence and all other communication requirements.

Diagram 11

Diagram 11: the heart functions as a pump, pumping blood to the different locations in the body. In an inorganic environment the heart of the system is the financial sector of the economy; a healthy economy together with healthy government policy can only benefit the system.
Diagram 12

Diagram 12: in the gastrointestinal tract, absorption and excretion of different foodstuffs occur; in a person eating and drinking, the ingested beverages pass through the gastrointestinal system and from there go, in a variety of forms, as fuel for the body or are converted into fat for storage. In an inorganic environment, properly managed waste disposal can have far-reaching benefits for the sanitation of the city and good infection control; thus the monetary expenditure produced by different institutions for the city's needs must be properly allocated to the areas where it is desperately needed, e.g. the salaries of workers, building, poor areas, and the development and maintenance of the city.

Diagram 13

Diagram 13: through respiration in an organic environment, oxygen is supplied to the body and carbon dioxide is removed from it; without oxygen, organic matter dies. In a city, the equivalent is the proper management of the toxic fumes produced by motor vehicles, steam engines, aeroplanes, industry etc.; continuous inhalation of these toxic fumes can lead to respiratory disease and other irreversible ailments. A healthy planet conservation policy means less toxic waste emitted into the environment; greenhouse-gas emissions could be curbed through strong law-enforcement policy by governments. Allowing the emission of greenhouse gases could result in climate change, increasing the temperature of our planet with devastating consequences.
Diagram 14

Diagram 14: reproduction in an organic environment has to do with Man's basic need for survival; through reproduction his needs are temporarily fulfilled. Inorganically, however, the tearing down of older, less functional buildings because of fashion or an abnormal setting [the building of a road that goes through the house], the restoration of older buildings [fashionable or not] and the building of new buildings have to do with Man's socio-economic need to develop.

Diagram 15

Diagram 15: it is in the capillaries that the most purposeful function of circulation occurs, namely the interchange of nutrients and cellular excreta between the tissues and the circulating blood. In an inorganic environment this is envisaged a little differently: in a healthy economic environment, good capital growth and good monetary exchange systems within the city add dynamics through which poverty eradication becomes possible; in building, industry and side-street peddling, from the rich to the poor, everyone could make a living, almost everyone would have money, and most could afford good living conditions [education, owning a motor vehicle, living in a decent suburb, earning a decent salary, entertainment, purchasing clothes and other commodities etc.]. This, however, could also go the opposite way.
Diagram 16

Diagram 16: the kidneys' functions are twofold: first, they excrete most of the end-products of metabolism, and second, they control the concentrations of most of the constituents of the body fluids. In an inorganic environment the equivalent is what gets used up in the manufacturing of building material, medicine, petroleum products, steel, coal, rubber etc.; as in any manufacturing process, waste products are inevitably produced that could negatively and irreversibly affect the environment, so through good law enforcement a smaller amount of these waste products would be released.

Diagram 17

Diagram 17: through amino acid production and protein synthesis, muscle is built; other nutrients, e.g. carbohydrates and various salts, are needed for muscle to contract. The muscle gives shape to the body, and this is made possible through anabolism and catabolism. In an inorganic environment, muscle could be associated with the action of building and demolition; the raw material in this case could be bricks, cement, steel, wood, tar, gravel etc., the material necessary for the construction of buildings, roads, railway lines, vehicles, trains, aircraft etc.
Diagram 18

Diagram 18 Possibly the world of external facts is much more fertile and plastic than we have ventured to suppose; it
may be that all these cosmologies and many more analyses and classifications are genuine ways of arranging what
nature offers to our understanding, and that the main condition determining our selection between them is something in
us rather than something in the external world.
The human species has modified our global environment at wide regional and global scales. Global warming,
biodiversity losses, ozone and freshwater depletion, to name a few, are now recognized as human-induced wide-scale
environmental transformations. In spite of admirable efforts to arrest some of these processes and restore
environmental vitality, the pace at which humans modify their environment continues with considerable intensity. The
future health of the biosphere for sustaining all life may be drifting close to the margins as environmental crises
increase within a single generation. These destructive propensities have deep cultural and psychological roots that
divide us from the rest of the environment. Significant social change is needed for improving our collective relationship
with the earth. Humans, with our unique capacity for self-reflection, are beginning to understand that the underpinnings
to our current ecological problems lie within our attitudes, values, ethics, perceptions, and behaviours. New ways to re-
conceptualize our unity with the biosphere, understand downstream impacts, and link social behaviour with
environmental transformations are increasing with corresponding intensity. Community-based restoration is a powerful
means for facilitating this trend, by reconnecting communities with their landscape, empowering citizenry, and fostering
an environmental ethos based on ecopsychological health. In our attempts to construct a city it is important to keep in mind the implications the inorganic has on the environment and to try to eliminate the selected filtration as much as possible, reducing the detrimental impact on the way we live.
Continued

Copyright © 2009 Dimitri Voudouris. All rights reserved


ONTA
Composer / Researcher:

Dimitri Voudouris

[*1961]

Composition:

Voice

Alecia Van Huysteen

and

Electronics

Duration:

28 min 10 sec

Composed:

2003-2005
PART
B

Content

Interdependency
Response to stimuli
Composing ONTA
Microscopic simulation model
Mesoscopic simulation model
Macroscopic simulation model
References
Interdependency

Diagram 19

Diagram 19 shows the way each part of the body is compartmentalised into the different environments, the interdependency of each part of the human body, and also Man's extensions into all the environments outside its parameters.
Response to stimulus

If the stimulus concentration lies in the audible range of the composition, then, at the auditory level, the prevailing environment of an organism may be considered to be the pattern or configuration of all energies, present at any given time, that are capable of entering into lawful relationships with behaviour; influences were attained through the visual, auditory, smell, taste and touch senses. These energies are confined, at most, to those that can be detected by the specialized anatomical structures, receptors, that organisms have for receiving certain energies and transforming them into electrical nerve impulses. The eye is specialized for the reception of a limited range of electromagnetic radiation, the ear for a limited range of air-pressure vibrations, and the tongue and nose for certain chemical energies. Receptors in the skin detect mechanical pressure and thermal changes, and there are receptors within the muscles and joints of the body that detect the movement of the muscles and joints in which they are embedded. A stimulus is a part of the environment and can be described in terms of its physical dimensions: Man can respond to differences in the amplitude or intensity of light waves, and sound stimuli may likewise be analysed into a set of constituent dimensions.
With respect to the stimulus range, we concentrate on sound and try to recreate an environment which indirectly stimulates the other senses. To attain certain goals in the dynamics of ONTA it was necessary to alter the amplitude of the sound waves, producing changes in the intensity of the energy that are associated with different loudness responses. These dynamics directly represent the intensities of exploding and imploding energy combustion, relating, for example, to buildings that are new versus those that are old, to pedestrians wanting to cross a traffic intersection and to motor vehicles wanting to do the same, or to money needed by a community for education which it does not receive. Such sounds are very complex admixtures of many different frequencies. What makes ONTA unique is its living coherence: the amalgamation of both organic and inorganic environments into one, addressing the energies that make up the tensions in the chemical, mechanical and thermal portions of these environments, which can thus be noted to have cellular properties in their construction.
As part of his gestural extensions, Man in relationship to the city constructs frozen, lifeless structures; it is only due to his presence that the city takes on life and becomes a place where he can conduct his business, dwell, entertain and develop. The relationship between Man and the environment is twofold: what Man senses and absorbs he needs to give back to the environment, so that he can manipulate the results to give meaning and fulfil his materialistic needs.

Composing ONTA

The type of composing needed to be consistent with how people already perceive and experience the environment of the city. With this in mind, I developed hypothetical scenarios of user experiences, values and taste. The scenarios were based on potential users whom I knew or interviewed; they were deliberately extreme in order to represent a wide range of possibilities and design implications. Besides helping to determine the amount and nature of user control supported by the system, they revealed differing personal relationships with the city. Specifically, I considered peripheral versus foreground aspects of the experience, and musical possibilities ranging from the serial to the rhythmical. Based on the premise that it is Man's anatomical nature that creates rhythmical possibilities through motion, versus his gestural extensions creating the lifeless constructions, I defined the boundaries of the composition's space. I was interested in maintaining a close experiential relationship between the sound content and the context of music creation; thus the electronic music composition with voice addresses the urban sounds as a basis for sound and voice synthesis. Interesting processing parameters emerging from the composition process were abstracted according to the kind of musical impact they would have on the output.

They were classified into:

- Structural composition variables, relative to the number of sound layers and the temporal structure of the music.
- Spectral variables, which determine the quality of each sound (timbre, envelope, etc.).

The collection of various sampled statistical field data relating to the construction of the city resulted in the plotting of graphs for the microscopic, mesoscopic and macroscopic simulation models:
Microscopic simulation model

1] Pathways of communication ----
2] IT ----
3] Roads ----

[Frequency polygon: frequency f (0-35) against time in years (0-14) for the three series above.]

Diagram 20

The sampled data population above was plotted as a frequency polygon over a ten-year period.
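A frequency polygon of the kind plotted in Diagram 20 connects bin midpoints with straight lines. A minimal sketch of that binning step, with placeholder observations standing in for the composer's field survey:

```python
from collections import Counter

def frequency_polygon(samples, bin_width: float):
    """Bin sampled observations and return (bin midpoint, count) pairs --
    the points that a frequency polygon connects with straight lines."""
    counts = Counter(int(s // bin_width) for s in samples)
    return sorted((b * bin_width + bin_width / 2, c) for b, c in counts.items())

# Illustrative annual observations (placeholder data, not the survey itself):
observations = [1.2, 1.8, 2.5, 2.7, 3.1, 3.3, 3.4, 7.9]
points = frequency_polygon(observations, bin_width=2.0)
```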
Mesoscopic simulation model

1] Police ----
2] Air conditioning ----
3] Chemical processes in industry and manufacturing ----
4] CCTV ----
5] Television ----
6] Telecommunication ----
7] Computer sensors ----
8] Email ----
9] Army ----
10] Security ----

[Frequency polygon: frequency f (0-90) against time in years (1-10), Series 1-10.]

Diagram 21
Macroscopic simulation model

1] Banks policy ----
2] Transport ----
3] Greenhouse gases emitted ----
4] Demolition ----
5] Healthy economic environment ----
6] Waste disposal and management ----
7] Monetary expenditure ----
8] Restoration of buildings ----
9] New buildings built ----
10] Government policy ----

[Frequency polygon: frequency f (0-100) against time in years (1-10), Series 1-10.]

Diagram 22

The data collected could be categorized periodically into seconds, minutes, hours, days, weeks, months and years. The data in the various categories shows cellular properties in linear progression. In this particular instance annual data was collected and plotted, showing results over a ten-year period.
In analysing the graphs for potential similarities, I highlighted the areas that showed exploding and imploding energies; this produced further mathematical equations with which to begin constructing the analysis. The projection of sound allowed for an abundant degree of energy expenditure from all sides, and this energy, as it is dissipated, can result in exploding or imploding instances. Because I was working on a universal city model, I did not want site-specific results to occur as in UVIVI. I used the results obtained from the data collection to make the observations needed for plotting the graphs and to obtain realistic intensity, pitch, density and motion curves. I then compared my results with observations made by eight researchers from around the world, calculated an average representation of the data on hand, and adjusted the graphs accordingly. I proceeded to make further calculations in Matlab to convert the intensity, duration, speed and pitch curves into sound parameters within the working range of frequencies; the sine waves chosen represented the data analysed. It was also necessary to manipulate Alecia's voice to suit the frequency parameters.
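The conversion of data curves into sine-wave sound parameters was done in Matlab; the sketch below restates the idea in Python. The mapping ranges, sampling rate and function names are my assumptions, chosen only to illustrate one point of a curve becoming a pitch.

```python
import math

def data_to_frequency(value: float, lo: float, hi: float,
                      f_min: float = 40.0, f_max: float = 8000.0) -> float:
    """Linearly map a plotted data value in [lo, hi] to a frequency in an
    assumed audible working range [f_min, f_max], in Hz."""
    t = (value - lo) / (hi - lo)
    return f_min + t * (f_max - f_min)

def sine_samples(freq: float, amp: float, seconds: float, sr: int = 44100):
    """Render the mapped curve point as a plain sine wave."""
    n = int(seconds * sr)
    return [amp * math.sin(2 * math.pi * freq * i / sr) for i in range(n)]

# One point of a 0-100 frequency-polygon curve becomes a pitch:
freq = data_to_frequency(50.0, 0.0, 100.0)
wave = sine_samples(freq, 0.5, 0.01)
```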
Diagram 23

Diagram 24
Diagram 25

Diagrams 23, 24 and 25 show areas of exploding and imploding sound obstructions which produce accumulative energies that can act as gateways for different sound possibilities, or through which the sound becomes dissipated matter as it passes through the transformation of explosion or implosion. These areas can contain obstructions that implode or explode depending on how similar or how divergent the decisions taken are. Let us speculate that energies above frequency point 50 are explosive and those below are implosive. Such points are said to contain cross-sections which can potentiate an electronic sound signal to the point where it becomes inaudible, as at a very high pitch; the reverse happens with imploding structures, where the sound signal also becomes inaudible, at a very low pitch.
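The speculative threshold above can be written down directly. The threshold of 50 comes from the text; treating exactly 50 as a "neutral" cross-section point is my assumption, not the score's.

```python
def classify_energy(frequency_point: float, threshold: float = 50.0) -> str:
    """Speculative rule from the text: energies above frequency point 50
    are explosive, below are implosive."""
    if frequency_point > threshold:
        return "explosive"
    if frequency_point < threshold:
        return "implosive"
    return "neutral"  # exactly at the cross-section point (assumption)

labels = [classify_energy(p) for p in (20.0, 50.0, 80.0)]
```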

Diagram 26

Diagram 26 shows what happens if an explosive or implosive obstruction lies in the pathway of an approaching sound wave: its energy accumulates and causes the output sound wave to explode in various directions at higher frequency [as like charges in a field repel each other], and the sound projection is said to have an additive (+) value. The opposite happens in an accumulating process that causes the output sound wave to implode in various directions at lower frequency [as unlike charges in a field attract each other], and the sound projection is said to have a negative (-) value. Both sound directions are shown in 2D above.
Diagram 27

When there is reason to suspect the presence of small occurrences acting additively and independently [a conglomeration of points whose population is said to hold similar ideas], the flow is continuous [or, more precisely, has a continuous version], as in Diagram 27, which shows a continuous flow of events that can undergo multiplicative modifications. If there is a single external influence that produces larger flow properties in the variable under consideration, the flow can be greater; but when it reaches the obstruction point the energy flow can be combusted in an implosive or explosive manner, so the assumption of normality is not justified, and it is the logarithm of the variable of interest that is normally distributed.
The various positions indicated by the dots in Diagram 27 can act as resistance points where, at a specific instant, the decision taken can have a negative or a positive impact on various parts of the population, affecting the sound proportionally.
From the sound sourced from the data collected and plotted on graphs in the microscopic, mesoscopic and macroscopic
simulation models, together with these observations, a block diagram of the composition was formulated in Diagram 29.
The energy and momentum stored in a sine wave are proportional to the square of its amplitude. If we observe a sine wave
passing a given point, the displacement at that point varies with time as a sine or cosine. Hence each point in a sine
wave undergoes simple harmonic motion, and its kinetic energy and total energy are also proportional to the
amplitude squared. Thus, in sound, the intensity I of a wave is directly proportional to its amplitude squared.
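The amplitude-squared relation can be checked numerically. The sketch below (plain Python, not part of the composition's software) averages the square of a sampled sine over one period: the mean square equals A²/2, so doubling the amplitude quadruples the intensity.

```python
import math

def mean_square(amplitude, samples=10000):
    """Mean of (A sin wt)^2 over one full period; analytically equals A**2 / 2."""
    total = 0.0
    for n in range(samples):
        t = 2 * math.pi * n / samples
        total += (amplitude * math.sin(t)) ** 2
    return total / samples

# Doubling the amplitude quadruples the mean square, hence the intensity.
ratio = mean_square(2.0) / mean_square(1.0)
```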

Periodic waves are characterized by their frequency f, wavelength λ and velocity c, which are related by fλ = c. The
velocity depends on the properties of the medium and, in some cases, on the frequency. The amplitude of a wave is the
maximum magnitude of its displacement. Waves can interfere with one another: when two waves are present at a
point, the resulting wave is found by algebraically adding the displacements of the individual waves. Two waves in
phase add constructively, while two waves half a wavelength out of phase interfere destructively. This property of
combining waves by the addition of displacements is called the principle of superposition, or linearity.
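Both relations above can be illustrated in a few lines; the speed of sound and the 440 Hz test frequency below are assumptions for illustration, not values from the score.

```python
import math

# Wavelength from the relation f * lambda = c (speed of sound in air ~343 m/s assumed).
c = 343.0
f = 440.0
wavelength = c / f  # roughly 0.78 m

def peak_of_sum(phase_shift, samples=1000):
    """Peak magnitude of the sum of two unit-amplitude sine waves offset in phase."""
    peak = 0.0
    for n in range(samples):
        t = 2 * math.pi * n / samples
        peak = max(peak, abs(math.sin(t) + math.sin(t + phase_shift)))
    return peak

in_phase = peak_of_sum(0.0)       # constructive: displacements add, peak 2
half_wave = peak_of_sum(math.pi)  # destructive: half a wavelength apart, cancels
```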

Kinetic energy:

K = ½mv²

where K is the kinetic energy of a mass m moving at velocity v.

Tension:

T = m(g + a)

where T is the tension supporting the weight mg of an object of mass m accelerating upward at a.
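These are the standard textbook forms; a minimal numerical sketch follows, assuming SI units and an upward acceleration for the tension case (the sample masses and velocities are arbitrary).

```python
def kinetic_energy(m, v):
    """K = (1/2) * m * v**2 for a mass m (kg) moving at velocity v (m/s)."""
    return 0.5 * m * v ** 2

def tension(m, a, g=9.81):
    """T = m * (g + a): tension in a string lifting mass m (kg) upward at a (m/s^2)."""
    return m * (g + a)

K = kinetic_energy(2.0, 3.0)  # 9.0 J
T = tension(2.0, 1.0)         # 21.62 N
```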

Diagram 28

Microscopic ---- Implosion ---- Microscopic simulation model from Diagram 20


Mesoscopic ---- Explosion ---- Mesoscopic simulation model from Diagram 21
Macroscopic ---- Macroscopic simulation model from Diagram 22
Diagram 28 shows one cellular component of each of the different environments. Each environmental instance is numbered in
accordance with the ever-changing periodic moments calculated in each environment, resulting in the exponential
implosive and explosive nature that has to do with the tracing of the [sound] signal through gateways that can open
different sound possibilities, and with the influential relationship that this signal offers to the surrounding population. For example, the
sound released in Microscopic instance 1 triggers a positive explosive accumulation of energy when reaching
Mesoscopic instance 1, which triggers a positive explosive accumulation of energy when reaching Microscopic instance 2,
which triggers a positive explosive accumulation of energy when reaching Mesoscopic instance 8, which triggers a negative
implosive accumulation of energy when reaching Macroscopic instance 8, or vice versa, etc. This current scene of events
could change from one moment to another. The event changes occur from the need to produce, through usage, to the
need for implementation; e.g. Mesoscopic instance 1 triggers an implosive accumulation of energy when reaching
Microscopic instance 2, or vice versa. This occurrence happens due to a 30% drop in IT usage by the police, which could be
the result of problems with usage or of the police needing special training with the software.

Block diagram of computer score showing graphic transcription between bars 256-275 of ONTA

Diagram 29
Diagram 30

Showing composition strategy

Columns: Organic | Inorganic | Statistics | Data / graph | Formulae derivation

Micro environment (Microscopic simulation model)
  Organic / Inorganic analogues:
    Cell = Building material of a city
    Nervous system = IT, roads, pathways of communication
    Enzymes and chemical processes = Building blocks of the city
  Statistics: collection and analysis of data relating to the amount of new buildings built, roads and pathways of communication over a ten-year period.
  Data / graph: sampled data and graph analysis of field studies showing micro moments of exploding or imploding areas.
  Formulae derivation: parameters of sound (duration, intervals of intensity and pitch, speeds, frequency) were established; the simplest of laws were used, and each of the equations was used in the Micro, Meso and Macro environments. Duration.

Meso environment (Mesoscopic simulation model)
  Organic / Inorganic analogues:
    Sense = CCTV, computer sensors
    Telecommunication = TV, email
    Immunity = Police, army, security
    Temperature regulation = Air conditioning
    Metabolism = Chemical processes in industry and manufacturing
  Statistics: collection and analysis of data relating to the amount of security services, email, CCTV, computer sensors, industry and manufacturing due to an increase in population; ten-year data collectively obtained from the industry and manufacturing sectors.
  Data / graph: sampled data and graph analysis of field studies showing micro moments of expansion, exploding or imploding areas.
  Formulae derivation: Poisson's law; intervals of intensity and pitch.

Macro environment (Macroscopic simulation model)
  Organic / Inorganic analogues:
    Respiration = Greenhouse gases emitted
    Circulation = Transport
    Heart = Banks and government policy
    Muscle = Building and demolition
    Body fluids and kidney = Healthy economic environment / waste disposal
    Gastro-intestinal tract = Monetary expenditure / waste management
    Reproduction = Restoration and new buildings built
  Statistics: collection and analysis of data relating to waste management, transport, building and demolition, restoration and new buildings built, and greenhouse gas emissions over ten years, obtained through government, industry and manufacturing sectors.
  Data / graph: sampled data and graph analysis of field studies showing micro moments of exploding or imploding areas.
  Formulae derivation: speeds; frequency: 16 Hz - 12 kHz.
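Poisson's law, named among the mesoscopic formulae, models counts of independent small occurrences. A stand-alone numerical sketch follows; the mean of 4 events is an arbitrary illustration, not a value taken from the composition's data.

```python
import math

def poisson_pmf(k, lam):
    """P(K = k) = lam**k * exp(-lam) / k! for a Poisson-distributed count with mean lam."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

# The probabilities of all possible counts sum to 1 (truncated well into the tail here).
total = sum(poisson_pmf(k, 4.0) for k in range(50))
```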


References:

1] Mathematics for Scientific and Technical Students - [Link], [Link]: Longman Group Limited, London. ISBN 058241075, pages 312-320.
2] Statistics - Murray R. Spiegel: McGraw-Hill Book Company. ISBN 0708439901, pages 82-94.
3] Textbook of Physiology - [Link], D. E. Smith, C. R. Paterson: Churchill Livingstone. ISBN 044302152X.
4] Biochemistry - L. Stryer: W. H. Freeman and Company. ISBN 0716712261, pages 233-431.
5] Physics for Biology and Pre-medical Students - D. M. Burns, S. G. G. MacDonald: Addison-Wesley Publishing Company. ISBN 201043777.
6] Hashimoto, S., Bruno, B., Lew, D. P., Pozzan, T., Volpe, P. and Meldolesi, J. [1988] Immunocytochemistry of calciosomes in liver and pancreas. J. Cell Biol. 107, 2523-2531.
7] [Link], [Link], J. Phys. A: Math. Gen. 26, L679 [1993].
8] Principles of Behavioural Analysis, second edition - J. R. Millenson, Julian C. Leslie. ISBN 0-02-381280-X, pages 227-246.

Copyright © 2009 Dimitri Vourdouris. ® All rights reserved


VOZ DA
REVOLUÇÃO
CONSTRUÇÃO

1.1

1.2
1.3
1.4
1.5

1.6
1.7

CONSTRUÇÃO

2
COMPOSITION/

SCHEMATIC SCENIC REPRESENTATION

DIMITRI VOUDOURIS

[1961-]

COMPOSED

2007 – 2009

DURATION

88 min 00 sec

for

TTS CHOIR

TTS SOPRANO

ADINA SVENSSON

TTS MEZZO SOPRANO

LUDMILA MENERT

TTS TENORS

ARTHUR DIRKSEN

ALAIN RUELLE

TTS BARITONE

LUIS ALVES

--------
LIVE ACT

2 SOPRANOS, 2 TENORS, MIXED CHOIR [25 CHILDREN AND WOMEN],


30 GYMNASTS, JOURNALIST, ECONOMIST, 4 CHESS PLAYERS,
50 CHILDREN, 10 CONSTRUCTION BUILDERS, 20 YOUTHS, 30 ACTORS,
5 SLIDE PROJECTIONISTS, 7 PROJECTIONISTS, 2 POETS, 5 PUBLIC SPEAKERS
WITH MEGAPHONES, 10 REMOTE CONTROL TOY OPERATORS,
5 HEAVY DUTY VEHICLES WITH DRIVERS,
3 PERCUSSIONISTS, 2 SOUND PROJECTIONISTS,
2 LIGHTING TECHNICIANS

------------

TEXT TO SPEECH SYNTHESIS,


PREPARED NATURAL VOICE ENVIRONMENT
AND
COMPUTER ASSISTED PROCESSING
INDEX

Vocal Expression
Philosophies of Change
Phonology, Neurolinguistics, Sociolinguistics
Language and Thinking
Vocal Intelligence
Non-verbal Studies - Perception
Language Policy and Planning
Mechanization of Language
The Composition

Theatrical Performance

CONSTRUÇÃO.. 1

1.1
Slide Projectionist
Audience
Journalist and Economist

1.2
Audience and Human Scavengers
Youth / FRELIMO Soldiers
Journalist and Economist

1.3
Audience and Construction Builders
Youth / RENAMO Soldiers
Journalist and Economist
La última puerta
Cantiga del lanchon

1.4
Youth / RENAMO Soldiers
The Shaman
Children in RENAMO training village
Journalist and Economist

1.5
Gymnasts
Vanyamussoro
Journalist and Economist

1.6
Macungeiro
Sound and illumination
Journalist and Economist

1.7
The Crane
The Black Cloaked Woman
Journalist and Economist

CONSTRUÇÃO.. 2

Tenors and Sopranos
Mixed Choir
Construction Builders
Gymnasts
Libretto
Grito Negro
Tenor 1
Mixed Choir 1
Tenor 2
Mixed Choir 2
Soprano 1
Mixed Choir 3
Soprano 2
Mixed Choir 4
Mixed Choir 5
Audience
Public Speakers
Sound Projectionist
Heavy Duty Vehicle Operators
Percussionists
Slide Projectionists
The dead children
Journalist and Economist
Lighting Technicians
Sound Projectionist
References
Vocal Expression:
Every healthy person begins with the potential to express, through voice, an enormous range of feelings
and thoughts, which are a reflection of who they are in the greater context of the universe, an
enormously intertwined phenomenon, which brings the full connection of body and the inextricable
connection of mind.

We all have a fundamental frequency specific to us. All people possess the phylogenetic disposition to sing:
singing is not an extension of speech; rather, speech is a diminution of song. Voice is a complex phenomenon. It is a
product (sound) which is invisible, made in a place of the body we cannot see (the larynx) or sometimes even
feel, linked to both emotional and physical responses, with an output we hear differently from those around
us.

Vocal-Dynamics echo psychodynamics: Voice is a reflection of self. Voice is a reflection of body which is a
reflection of mind. There is no vocal change without personal change.

There is a cumulative vocal tendency which is a reflection of our culture. Organizational culture is a
particularly divisive culture for voice. If western culture is a psychic prison, the organizational culture is a
solitary confinement.

Philosophies of change:
 Vocal and emotional capacity and expression cannot be ‘taught’, but they can be ‘released’.
 Vocal perception does not match reality.
 Voice is a kinaesthetic experience first and foremost.
 There are no right or wrong answers or methods, only personal insights.
 No single path will suffice.
 Manipulating personal sound offers the opportunity to deal with habitually poor body and mind
patterning, which is a result of the way we use that energy to present ourselves in the world.
 Of the jigsaw of independent, hierarchical and necessary elements of voice, breath is the
keystone for all vocal work, for which posture is the foundation.
 Understanding your voice is understanding your personal journey.
 Any pedagogy for the development of vocal intelligence must recognise the intricate connection of
duality such as the emotional and rational brain, the effect of left and right brain hemispheres,
and the extent of unconscious and autonomic functioning as well as conscious processes.
 It is impossible to teach yourself to sing – we do not hear ourselves as others do.
 Work on voice is, therefore, by nature, sometimes directive.

Phonology, Neurolinguistics, Sociolinguistics:
Sociolinguistics is the study of language in social context. The field currently includes such areas as language and
social interaction, language contact and change, sociolinguistic variation, discourse analysis, cross-cultural
communication, narrative and oral history, language and identity, language and ageing,
endangered and minority dialects, language and health care, and forensic linguistics. Phonology is the study of how
sounds are organized and used in natural language, and neurolinguistics is concerned with the
neural mechanisms underlying the comprehension, production and abstract knowledge of language, be it
spoken, signed or written. These methods allowed me to formulate vocal expression and create protest
sounds, gunfire, shifting, mechanization and sounds of lament in the construction of VOZ DA REVOLUÇÃO.
Example          Vocal Expression

Gun fire         Boom, Tla.. Pigghh..
Mechanization    Dee Dee Dee… Grr Grr Grr…
Lament           Crying, Ah Ha… Wheee MMM…
Shifting         Krah, Mmbaka

Diagram 1: Vocal expressions used to convey lament, gunfire, shifting, etc.

Language and Thinking:


We can say clearly that language is a system of symbols which we employ for making sense of our world in
any way that makes sense for others. This involves the inter-communicative function of language - the
process whereby we get in touch with other people by expressing ourselves in words which they can
understand. In order to understand other people's messages, but also in order to produce understandable
verbal messages ourselves, we must have mastered at least the following three related skills.

 We must be able to associate speech sounds with their respective meanings.


 We must be able to associate the words we use with the things and ideas (concepts) for which
they are symbols.
 We must have learned, and must be able to apply, the rules in accordance with which the words of
a language are combined in order to achieve understandable communication.

As far as thinking is concerned, I gave particular attention to inner speech. If inner speech occurs during
thinking, it consists solely of key words, phrases and simple grammatical constructions. On the basis of
experimental findings, inner speech plays a mediatory role in thinking. Linguistic ability can promote
effective thinking, but it is not an essential requirement for it; thinking can thus take
place without the use of language.
The relationship between concepts and language allows for two kinds of word meanings, namely
connotation and denotation, which are related respectively to the intensional and extensional characteristics
of concepts. In respect of everyday thinking and the influence exerted on it by denotative and
connotative meanings, orderly thinking and exchange of thoughts is possible only when
the Principle of Reasonableness is complied with; this principle states that the words we use and think
with have meaning only within an intersubjective context.
Words are not identical with the reality they represent; the relationship between language and
reality is similar to the relationship between a map and the territory it represents.

Vocal Intelligence

Vocal Intelligence – authenticity:


Vocal Intelligence evolved from the combination of two key construct domains:

 The vocal component refers to the mobilisation and expression of energy, emotion and personal
presence through engagement with vocal processes.
 The intelligence component refers to the creating, evaluating and choosing among options for the
authentic and effective expression of self.
Non-verbal studies - perception:
'Non-verbal studies' hold the study of perception as a critical focal point; indeed, this entire concern with
voice is devoid of change possibility, authenticity or relational interaction, and is based solely on auditory
recognition. The attractive voice is defined as sounding 'more articulate, lower in pitch, higher in
pitch range, low in squeakiness, non-monotonous, appropriately loud and resonant'. People with
attractive voices, in turn, are seen to 'have greater power, competence, warmth and honesty attributed
to them. People with "babyish" voices are usually perceived to be less powerful and less competent but
warmer and more honest than people with mature-sounding voices'.
We are told that standard dialects tend to enhance credibility in formal settings, whereas ethnic and in-group
dialects are preferable in informal contexts, such as the home and bars. Moreover, when the degree of
accent is an important consideration for stereotyping and categorising people, the more intense the
accent, the more negative the impact on credibility, such that consistently mispronounced words may
impair a speaker's credibility and communicative effectiveness. Perceived competence is said to increase
as speaking rate increases, although there is a point at which speaking rate becomes so fast as to have a
negative effect on competence. There is, however, 'no current consensus on what that rate is'. The
confident voice apparently has substantial but not excessive volume, a rather rapid speaking rate,
expressiveness and fluency. Stuttering, 'ah', incomplete sentences and tongue slips are, in the mind of the
perceiver, strongly associated with high levels of anxiety. Dominant and powerful individuals "exhibit
speech that is relatively free from hesitation and hedges, but these vocal phenomena are characteristic of
the speech of submissive and low-power people."

Language Policy and Planning:

The policy was a continuation of the practice pursued by Frelimo (the Mozambique Liberation Front) during the 10-year
liberation struggle for independence. Portuguese was then chosen to unite nationalist freedom fighters
with different language backgrounds, as expressed by Frelimo at a seminar on the theme 'Influence of
colonialism on the artist, his way of life and his public in developing countries' held in Dar es Salaam,
Tanzania, in July 1971: "There is no majority language in our country. Choosing one of the Mozambican
languages as a national language would have been an arbitrary decision which could have had serious
consequences. Thus, we were forced to use Portuguese as the medium of instruction and as a means of
communication among ourselves."
“The need to fight the oppressor called for an intransigent struggle against tribalism and regionalism. It
was this necessity for unity that dictated to us that the only common language — the language which had
been used to oppress —should assume a new dimension.” (Machel, 1979)
The decision to opt for Portuguese as the official language of the People's Republic of Mozambique was a
well-considered and carefully examined political decision, aimed at achieving one objective: the
preservation of national unity and the integrity of the territory. The history of the appropriation of the
Portuguese language as a factor of unity and a leveller of differences dates back to the foundation of
Frelimo in 1962. It was President Machel who, at the launch of the National Literacy Campaign in 1978,
delivered the following words: "The spread of the Portuguese language is an important medium among all
Mozambicans, an important vehicle for the exchange of experiences at the national level, a factor
consolidating national consciousness and the prospects for a common future. In the course of the war,
some people asked: 'Why are we continuing with Portuguese?' Some will say that this National Literacy
Campaign aims at valuing Portuguese. In which language would you like us to launch this Literacy
Campaign? In Makwa or Makonde, in Nyanja, Shangaan, Ronga, Bitonga, Ndau, or in Chuabo?" Portuguese-medium
literacy planning prevailed until the end of the 1980s, and the results were felt to be mixed:
positive in some instances, but unsatisfactory in several others. It is hard to give a balanced assessment of
the whole project because most activities were deeply affected by the war.

Mechanization of Language:
The interest in symbolic logic is not accidental but is highly characteristic of our times. It expresses the
mechanization of our thinking and talking. When words are merely signs, they can be replaced by symbols,
and thinking or language can thus become a mechanical procedure. It then stands in man's service like
any other apparatus or instrument.
Such a mechanization of language goes right along with the mechanization of the earth [the harnessing of
the earth to the slavery of man]. Just as the mechanization of language cannot reveal its essence, so the
mechanization of the earth cannot reveal the essence of the earth.
Today the conception of language as an instrument of information goes to extremes. Although there is an
awareness of this fact, there is no attempt to see its meaning. Everyone knows that in the field of
constructing electronic brains, not only accounting machines but also thinking and translating machines are
now being built. However, all calculation in the narrower and broader sense, all thinking and translation, occurs
in the element of language. The phonetics of Portuguese are rather complicated. In comparison with the
related Spanish language, there is no simple rule for the pronunciation of vowels, and some consonants
also have multiple values. European and Brazilian Portuguese differ somewhat. [The tilde indicates a nasal
vowel. It occurs over two vowels, ã and õ, and in several diphthongs such as ão and ãe. Nasal sounds
may also be indicated by a following m, as in bom ('good'). Unstressed o is normally /u/, and unstressed a
is normally an open central vowel. There are the palatal consonants lh and nh (the equivalents of Spanish ll and ñ).
The consonants ch and j are post-alveolar fricatives, SAMPA /S/ and /Z/, the same sounds as in French. The
letter s, when final or followed by a voiceless consonant, is /S/, and before a voiced consonant /Z/: so
escudo (the previous currency; Portugal now uses the euro) is /@SkuDu/, plural escudos /@SkuDuS/. This
peculiarity is valid, however, only in Portugal and in the metropolitan area of the city of Rio de Janeiro in
Brazil. In other regions of Brazil and other former Portuguese colonies, the s is merely voiced (to /z/)
before a voiced consonant.]
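The s-voicing rule just described can be sketched as a toy function. This is an illustration only: the `final_s_sound` helper and the voiced-consonant set are simplifications I introduce here, not part of the composition's toolchain, and the rule as coded covers only the European/Rio pattern.

```python
# Simplified set of voiced consonant letters (an assumption for illustration;
# real Portuguese orthography-to-sound rules are considerably richer).
VOICED = set("bdgvzmnlrj")

def final_s_sound(word, next_char=None):
    """SAMPA value of a word-final 's' in European Portuguese, given the
    following sound, if any: /S/ word-finally or before voiceless consonants,
    /Z/ before voiced consonants."""
    if next_char is None:
        return "S"  # word-final, as in escudos /@SkuDuS/
    return "Z" if next_char in VOICED else "S"

print(final_s_sound("escudos"))        # S  (word-final)
print(final_s_sound("escudos", "b"))   # Z  (voiced consonant follows)
print(final_s_sound("escudos", "t"))   # S  (voiceless consonant follows)
```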

The Composition:
It allowed me to use the voice structures that were developed in AΛΘ=Φ and to do more in-depth research
into the construction of vocal nonsense words, using Portuguese as the language of choice. I used Mbrola
text-to-speech synthesis for the testing of prosody-generation algorithms; Praat, a program which aims to
construct possibilities for phonetisation and transcription; PROSE for prosody extraction; PSOLA for
prosody manipulation [this system was used to transform the target emotion into prosody parameters using a
multiple regression equation, and further into the prosody pattern using the eigenvectors of the subspace,
reducing the dimensionality; it succeeded in modeling the correlative relation between prosody
components in conveying emotion. The intended emotions were perceived from the synthesized speech,
especially "anger", "surprise", "disgust", "sorrow", "boredom", "depression" and "joy"]; a speech
synthesiser; a Portuguese keyboard; an editing program; and a modular synthesiser.
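Mbrola's input is a plain-text phoneme file: each line gives a phoneme name, a duration in milliseconds, and optional (position %, pitch Hz) targets that shape the prosody contour. The sketch below is a hypothetical illustration of that format; the `pho_line` helper, the phoneme symbols and the pitch values are mine, not taken from the score, and the voice name in the final comment is likewise an assumption.

```python
def pho_line(phoneme, duration_ms, *pitch_targets):
    """Render one Mbrola .pho line; pitch_targets are (percent, hz) pairs."""
    targets = " ".join(f"{pct} {hz}" for pct, hz in pitch_targets)
    return f"{phoneme} {duration_ms} {targets}".rstrip()

# A rising pitch contour on a single vowel, e.g. for a "surprise" prosody pattern.
score = [
    pho_line("_", 100),                       # leading silence
    pho_line("v", 60, (50, 120)),
    pho_line("O", 180, (10, 120), (90, 200)), # pitch rises 120 Hz -> 200 Hz
    pho_line("S", 90),
    pho_line("_", 100),                       # trailing silence
]
pho_text = "\n".join(score)
# The resulting file would then be synthesized with a command such as:
#   mbrola pt1 voz.pho voz.wav
```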

In VOZ DA REVOLUÇÃO the aim was to create an elasticity and expression in the Portuguese language.
Construction and deconstruction of language occurred with the use of neurolinguistics, sociolinguistics and
phonology through a myriad of dialogues all happening at once, traced through the period of the
civil war. My intention was to use nonsense language patterns to describe what the people went through in
the civil war, for with nonsense language patterns one can concentrate wholly on purity of dialect and
phonetics. This noticeable change allowed me to construct vocal sound patterns distinctive of
machinery and of animal sounds in nature, and to progress to the vocalisation of complete sentences. In
response to the mechanization of language it was necessary to address body movement: as language was no
longer spoken word but a rhythmical source of information, it touched upon issues concerned with the
mechanization of the [Link]. The tower of Babel was mankind's second engineering project after Noah's ark,
and Babel failed owing to a lack of communication and its consequent organization. With
communication out of the way, my observations fell upon people who could not speak, or the mentally
handicapped. I observed and studied their body language and body gestures in response to sound, and
how they vocally expressed the sounds that they hear, e.g. sounds of machinery, sounds of animals, physical
pain, laughter, and basically what the communicational patterns of verbal exchange are and the gestural
patterns that they make. I recreated that aspect of language that I discovered, so as to express more in a
theatrical context the pain, suffering and the basic needs, in an organised fashion through the different
scenes. In Mozambique the civil war carried on and on, consuming thousands of people; it halted industrial
flow and farming, and drained the economy. This is because the government faction, FRELIMO, and
the resistance faction, RENAMO, could not communicate, and this led to the consequences
mentioned above with respect to Babel.

VOZ DA REVOLUÇÃO had its difficulties in attaining the desired levels in both auditory and
theatrical observation. It extended the lexicon of vocal music into a new dimension of live theatrical
performance and sound projection, combining surreal linguistic systems and synthesized, computer-manipulated
voices, which helped me to gain an unprecedented practical understanding of the human
voice, its computer simulation, cognition and sound projection.
Diagram 2
Theatrical Performance
CONSTRUÇÃO.. 1

May be performed in concert form, where its duration would be 61 min 04 sec

OUTDOOR SCENE

in

SCRAP METAL YARD

The Production Manager, Sound Engineers, Sound Projectionist, Lighting Technicians, remote control toy
operators and Choreographer are to rehearse their parts according to the score.

One Sound Projectionist, two Lighting Technicians and a Choreographer are needed in this scene.

Economists, who deal with each scene separately, advise the journalists about economic procedures.

Journalists dealing with each scene produce news and video footage.

Economist and journalist move together between positions A-H.

Eight layers of increasing compression of figures of human music, extensions and pauses were used to
nullify time.

The audience is to be advised, in accordance with their involvement, before they enter the space.

Chess games are played throughout the duration of the performance, and the score results then go to the
journalists, who produce the news.

The actors can be civilians or human scavengers and are allowed to explore the Scrap Metal Yard.

The youths are FRELIMO and RENAMO soldiers.

1.1
 The whole area is covered with mist.
The metal rubble shows the destruction caused by post-war colonialism.

Slide Projectionists:

Hard Hat for Slide Projectionist

 The Slide Projectionists appear on the scene [the slides that they project show news of war,
communist propaganda and news about the RENAMO onslaught], projecting against the metal heaps.
They are dressed in grey Mao Tse Tung uniforms.

Audience:

Hard Hat for audience

 The audience follows the Slide Projectionists.

Light illumination sweeps over the barren landscape, showing naked branches; emphasis is placed
on the shadows.
The skies are patrolled by remote control aircraft.
Amongst the images from the Slide Projectionists we begin to notice body shapes [of children]
amongst the rubble; the live bodies act as a support mechanism for the collapsed metal structures.
The Slide Projectionists stop moving and abandon the audience, placing the slide projectors
down on the ground as soon as a body shape is detected; the projectors carry on with the slide
show and are set to repeat automatically.
Whilst some of the children are dead, others are still alive; the audience covers the dead children
with long white sheets and sends video messages and sms from their mobiles to one another in
separate groups, and also to the journalists.

Journalist and economist:

Hard Hat for Journalist Hard Hat for economist

 The journalists at podium A receive video messages/sms from the audience and are advised by an
economist [statistics are worked out according to the amount of refugees, displacement and
deaths].
They can either produce news or video clips as part of their report; when they have completed it,
they project it outside the perimeter of the scrap metal yard on the big screen.
Diagram 3

Diagram 4
1.2

Audience and Human scavengers:

Hard Hat for audience Hard Hat for Human scavengers

 The audience are not alone: up in the heaps of metal, amongst the dead and entrapped children,
appear human scavengers that indulge in feasting on human flesh. They remove dead children
from under the rubble and carry them off to their den, where they indulge in a cannibalistic ritual
[the den is the carcass of an enlarged human head lying with its mouth wide open to allow entrance
and exit to the human scavengers].
The audience screams for help, trying to free and revive the children from the metal rubble.

Youth = Frelimo soldiers:

Hard Hat for FRELIMO Soldiers

 From the west end of the scrap metal yard, soldiers appear dressed in FRELIMO uniform. Some join
the human scavengers in their den and partake in the cannibalistic ritual. Others capture members of
the audience, tie their legs and pull them to the centre of Platform 1. The soldiers inspect
the scene and restore the bodies to the original positions where they were found. A flash of
light appears on Platforms 2 and 4 as RENAMO soldiers are spotted trying to sabotage these
positions; they also trigger the ascending/descending platforms * between Platforms 4-H2 and
Platforms M2-M3 with a remote control, killing 2 FRELIMO soldiers and 3 civilians. The response is
quick: a truck drops off more soldiers, who are sent to attend to the problem.
Journalist and economist:

Hard Hat for Journalist Hard Hat for economist

 The journalists at podium B receive the video messages and sms from the audience and are advised by
an economist [statistics are worked out according to reducing government expenditure, phasing
out protective tariffs, relaxing minimum labour standards and control over corruption].
They can either produce news or video clips as part of their report; when they have completed it,
they project it outside the perimeter of the scrap metal yard on the big screen.

Diagram 5
1.3

Audience and Construction builders:

Hard Hat for audience Hard Hat for Construction Builders

 The audience are led through the village where construction of shacks is taking place, and are
separated into groups of 10 and allocated to positions S1-S4. The construction builders build shacks
around the audience, enclosing them in [they are responsible for the construction of the shacks and
the walking pathways, platforms and podia].

Youth = RENAMO Soldiers:

Hard Hat for RENAMO Soldiers

 In the neighbouring village RENAMO soldiers dressed in RENAMO military uniform forcefully recruit
young combatants from positions S5-S7.
They order the parents and children into the centre of the village rape woman and kill others.
They leave taking children with them [ Military training in these particular conditions constituted
a process of initiation to violence, marked by cutting the links of the children with society and
programming them to think of war and only war. These seem to have been a deliberate policy to
dehumanise the children and turn them into killing machines.]
On the way back to the main camp.
The audience through body language communicate with a faction of the RENAMO soldiers for help.
The RENAMO soldiers appear on the scene [try to pull down what the builders have constructed:
every window and every window-frame every door and every door-frame every piece of
wiring,plumbing or flooring was ripped out and carried away. Every piece of machinery that was
well bolted down or was too heavy for man to carry-pumps and generators were axed, shot,
sledgehammered, stripped or burned, thousands of relics of annihilative frenzy each tile of mosaic
was smashed, each pane or glass block wall painstakingly shattered. It was systematic
psychotically and meticulous destruction, destroying the economic infrastructure].
The construction builders seem over powered they fire warning shots with their flare guns.
FRELIMO soldiers who had joined the human scavengers in the eating of the children are the
closest and arrive on the seen.
Three flashes of light appear on Platforms 3 and 6 as RENAMO soldiers are spotted trying to
sabotage those positions; they also trigger the descending platforms * at Platform H6 and
Platform M4 with a remote control, killing 2 civilians. Another 2 civilians are injured whilst walking
to positions S1 and S3: they stand on land mines. The response is quick: a truck drops off
soldiers, who take up those positions.

 The RENAMO camp is aided by the South African apartheid government: food, medication and
combat provisions are flown in.

Journalist and economist:

Hard Hat for Journalist Hard Hat for economist

 The journalists at podium C receive the video and SMS messages from the audience and are advised by an
economist [statistics are worked out according to privatisation of state-owned enterprises, the
relaxing of minimum labour standards and control over corruption]. They can produce either news or
video clips as part of their report; when they have completed it, they project it outside the
perimeter of the scrap metal yard on the big screen.

Between Scenes 1.3 and 1.4 the poets appear, reading poems by José Craverinha

Diagram 6
La última puerta

Última puerta a la derecha.


El mundo ensordecedor de moscas de silencio
los pulsos mata-hambres del gran ratón verde de miedo
la imaginaria omnipotencia de nuestros hechizos imposibles aquí
y el táctil gusto de las puntas de los dedos en las paredes
aculturaciones en común de los hombres
mientras escafandrizados locos
respiran la ternura de los varones.
Y por dentro la puerta al medio
más ciega
más sorda
y más muda que nosotros
en el papel auténtico
de puerta cerrada.

José Craverinha, Luso-Mozambican poet

Cantiga del lanchón

Si me vieses morir
las miles de veces que nací
Si me vieses llorar
las miles de veces que te sonreí...
Si me vieses gritar
las miles de veces que me callé...
Si me vieses cantar
las miles de veces que morí
y sangré...
Te digo hermano europeo
habías de nacer
habías de llorar
habías de cantar
habías de gritar
y habías de sufrir
sangrar vivo
miles de muertes como yo!!!

José Craverinha, Luso-Mozambican poet


1.4

 Finally the RENAMO soldiers who are involved in the freeing of the audience are overpowered and
held captive by the FRELIMO soldiers; they are flogged and sentenced to public execution.

Youth=RENAMO Soldiers:

Hard Hat for RENAMO soldiers

 In the RENAMO camp the training of the children is crucial [once under training, discipline is very
harsh, and the penalty for a failed escape is execution. Sometimes recruits are given their first
military assignment: to kill a colleague who has tried to escape. To save one's own life, that order has
to be carried out. Child soldiers are urged to suck and drink the blood of the person they have just
executed. This is aimed at making them fearless and ensuring they feel no remorse for the atrocity
committed. Some young soldiers pointed out that the commanders also submitted themselves to
treatments by “kimbandas” to defend themselves against death. Some used a “mufuca” (a tail of
an animal prepared with remedies); when in danger they had to shake the “mufuca” to protect
themselves.]
The children are forced to kill their own relatives, raid and loot their own villages, or kill their
neighbours; they are forced to sing RENAMO songs the whole night and are given hallucinogenic
drugs [creating an insurgent force of the youth, the Gymnasts: cutting links and eliminating the desire
to escape and rejoin the family.]
The Shaman:

Hard Hat for shaman


The FRELIMO soldiers await the arrival of the shaman.
The shaman arrives.
The audience is taken to the audience location so as to observe what will happen to the RENAMO
soldiers.
The RENAMO soldiers are placed on Platforms 2 – 6; in the middle of Platform 1 the shaman
conducts a ritual as he dances to the rhythm of the music.
As the ritual reaches ecstatic proportions, the shaman makes hand signals in the direction of
each of Platforms 2 – 6.
The RENAMO soldiers are blindfolded, made to kneel down and finally executed.
From one of the Platforms a body part is taken to the shaman, as it is he who needs to engage with
the spirits, and he engages in the cannibalistic ordeal first.
The members of the Mixed Choir dressed in FRELIMO military uniform are allowed to enter
Platforms 2 – 6 as they engage in the cannibalism.

Children in RENAMO training village:

 In the RENAMO training village some children escape.


They change direction and go back to their village.

Journalist and economist:

Hard Hat for Journalist Hard Hat for economist

 The journalists at podium D receive the video and SMS messages from the audience and are advised by an
economist [statistics are worked out according to productivity levels and education
statistics].
They can produce either news or video clips as part of their report; when they have completed it,
they project it outside the perimeter of the scrap metal yard on the big screen.
1.5

Gymnasts

Hard Hat for gymnasts

 Spotlights are in constant search; they target 2 factions of [gymnasts]: those with red T-shirts,
FRELIMO, who have been released from the platforms, and those with black T-shirts, RENAMO
[newly trained children], who are climbing ropes from the metal heaps, engaging in the attack. The
attack is about the possession of space. They meet in a central position above the audience.
The music through speakers A-D must be played loud.

Diagram 7: Imprints on T-Shirts


Red T-Shirts Black T-Shirts
[FRELIMO] [RENAMO]
Comunismo Libertande
Isolar Anti-Semitismo
Leninismo Estado de Espírito
Matança Utilizar
Um Estado Do Partido Indigente
Grilhão Cinzilar
REVOLUÇÃO Sionista
Utopia Anti-social
Evangelho Social Contrato Social
Antagonismo Socialismo
sEgrEgaçÃo Segurança
Sofrimento Sofrimento
Funcionários Efetivos Estradal
Raiva Filosofia
Sindicato de Trabalhadores Seduzir

 The ropes are in vertical and horizontal positions, allowing the 2 factions pathways to move. As
their arms get tired, or as they collide with one another, they fall onto level 2. They are allowed to make
primitive vocal calls [the choreographer needs to familiarise the performers with this section and rehearse it.]
From level 2 the fight over the dominance of space carries on. Touching the floor means death to the
faction fighters.
Some lights fall on the signs and partially focus on the motionless bodies.
At the end of the scene, the patrolling FRELIMO soldiers switch off the projectors.
Vanyamussoro

Hard Hat for Vanyamussoro

 This procedure takes place between S5-S10, where there are three vanyamussoro and war victims
involved in each section. The cleansing ritual proceeds from the diagnosis of a carrier or non-carrier
of a spirit to the procedures that follow to reintegrate the person into society.

 The family of a person who has been exposed to war and actively involved in its atrocities decides
to take him to the vanyamussoro (healer). The declared aim of the visit to the “nyamussoro” is the
diagnosis of his/her actual situation and of the dangers that threaten that person, done
through divination [using a set of astragali, cowries, turtle carapaces, seed shells, stones and
coins called “tinholo”. The action has a double purpose: first, to establish whether the patient has become
incidentally possessed by some spirit, and whether he carries any health disorders that need
complementary treatment; secondly, to determine which actions must be undertaken in order to
clean, protect and, if necessary, treat him. The subsequent proceedings will depend on the
outcome of this initial process of divination, which, as a matter of fact, hardly differs, in its
purpose and dynamics, from any other nyamussoro's divination session, whether it be to resolve a
health problem or a social one. The reason for this is that, according to locally dominant notions,
mind and body, health and social relations, the living and the spirits of the dead do not
function independently of each other, being part of a globally integrated process. In short, it
is assumed that the person is surrounded by many material hazards, but that these can only harm us
for three possible reasons: (i) our negligence or inability to recognise and avoid them; (ii)
someone's sorcery; (iii) an absence of the ancestors' protection, in order to reprimand us or to call
for our attention].
The vanyamussoro must discover whether the patient is also afflicted by physical or mental illnesses.
The next step is called kuguiya, Changana for “to simulate a fight”. The patient must
imitate, with a pestle pole instead of a weapon, the fights and killings he performed during the
war, or those he had seen.
He thus submits to the cleansing rituals:
1] After this performance, the ritual follow-up depends on the diagnosis that has been made. If
the divination showed no evidence of possession by spirits killed or offended by actions
undertaken during the war, the regular “cleaning treatments” can start. Otherwise, an
exorcism must be performed.
2] This treatment has the general designation of kufemba, and it can take three different
forms: the patient's fumigation with specific incenses; a kind of sauna with boiling plants and
other medicines; and the so-called kufemba with xizingo, where the healer's spirits directly search
for and catch the ones who are afflicting the patient.
3] When they deal with post-war cleansings, vanyamussoro usually prefer to “play safely” and
combine all of them. The veteran is thus seated next to a burning piece of incense and covered
with capulanas, staying there until it burns out. As soon as that moment arrives, the healer,
wearing the capulana of the spirit he will be working with, grabs his tchova (a gnu tail with, inside
the handle, some hair from a hyena's tail, the xizingo) and starts sniffing the patient with it. When
he finds the afflicting spirit, he decides whether it is just a matter of sending it away, or whether it is
necessary to let it speak. In the latter case, the healer falls into a deep trance and voices the
spirit's complaints and demands, which must be fulfilled in order to appease it and to restore the
patient's well-being.
4] If it is recognised that the afflicting spirit belonged to someone the patient killed, the
performance of a formal spirit ceremony will usually be demanded, and in exceptional cases this needs to be
carried out in the spirit's home region, in addition to compensation for the deceased's family. If the spirit was
wandering in the war zone and just walked along with the patient, the most usual demand will be a
place to live, which can be just a “hut” made with a covered pot and hidden in the bush,
ritually offered to it.
[Healers have a genuine concern with the mental effects of traumatic experiences resulting from
war, and their answer to it is both the administration of specific medicines and the psychological
impact of the hlhambo, the “bath”. When there is no river nearby, the whole process may be
performed on the healer's premises: the young goat is killed over the patient's head, while the person is
covered with the animal's blood and the food it had inside its main stomach; some adaptations
are necessary in order to substitute those symbolic statements which are only possible in a river.
For instance, the patient will be seated inside a hole dug in the ground for that purpose, and the
washing up from the goat bath will be done with a mixture of river and sea water. At the end of
the ceremony the patient will get out naked, leaving the capulana inside to be burned over the
goat's remains, and the hole is covered immediately after its consumption. The conclusion of the
process will be the administration of the so-called «vaccine», intended to «close» the patient's body
to spirits and sorcery. It consists of the inoculation of a paste inside several incisions made in the
skin - nowadays with a razor blade provided by the client, due to the danger of HIV transmission.
The incisions are not random, but made in the places believed to be the main entrances of spirits
and spells into the body: the chest, the loin, and the joints of the arms and legs].

 The remote-controlled aeroplanes and helicopters circulate over the audience's seating position.

 The RENAMO faction fires their flare guns at the aircraft [thus intensifying the air combat].

 An aircraft crashes towards the middle of this scene; there are no survivors, and amongst the dead is
the body of Samora Machel. Projectors C, A, F, E show news flashes of the crash.

Journalist and economist:

Hard Hat for Journalist Hard Hat for economist

 The journalists at podium E receive the video and SMS messages from the audience and are advised by an
economist [statistics are worked out according to the functional support structures of production,
including access to information, knowledge and technical preparation].
They can produce either news or video clips as part of their report; when they have completed it,
they project it outside the perimeter of the scrap metal yard on the big screen.
Diagram 8

Diagram 9
1.6
This scene is to be conducted in partial darkness.

Macungeiro (Soothsayer):

Hard Hat for Macungeiro

 [Samantanje, a powerful soothsayer, was told that RENAMO had a new leader. Dhlakama
made a promise to Samantanje and the spirits that he would direct the military attack in a new
way and give new guidelines to the struggle; a series of miracles followed this promise.
Dhlakama worked closely with Samantanje for four years. Refugee accounts speak of people who
were accused of casting spells or being “leopard men” being identified by macungeiros, who
would dance frenetically and claim to hear spirit voices.] The victim was then made to swallow a
potion. If the potion provoked convulsions, the case was regarded as proven, and the accused was
found guilty. If the victim vomited up the liquid again, under the prevailing judicial system,
he was declared innocent. Samantanje became very powerful and had no intruders coming into his
zone wanting to do bad things such as raping women: he would order a thunderstorm, lions or a
swarm of bees to attack the intruder. Dhlakama asked Samantanje to predict the outcome of a
government offensive against his headquarters at Casa Banana. Samantanje lifted two Cerveja
Nacional [beer] bottles and filled them with spirit water; one represented the spirit of Samora
Machel and the other that of Matsanga, and the one that became red like blood would be the one the
ancestors had left. The bottles were watched overnight by Dhlakama's bodyguards. In the morning
Matsanga's bottle had turned red. The air raid started two nights later, forcing RENAMO out of the area.
Dhlakama was enraged and ordered Samantanje's death; however, none of the people wanted to
touch Samantanje, and his brother was executed instead. Samantanje's hilltop is an island of peace in a sea
of conflict.

Sound and illumination:

 The areas that remain illuminated are those where the cleansing rituals are taking place and the
area of the Macungeiro.

 The sound will travel from the forward speakers (A-D) to the background speakers (a-d).

Journalist and economist:

Hard Hat for Journalist Hard Hat for economist

 The journalists at podium F receive the video and SMS messages from the audience and are advised by an
economist [statistics are worked out according to the existence/operation of banks or formal
financial institutions]. They can produce either news or video clips as part of their report; when
they have completed it, they project it outside the perimeter of the scrap metal yard on the big
screen.
1.7

The crane:

 This scene opens with deconstruction. A crane with a swinging concrete ball destroys the shacks that
were built and also destroys some of the communication walking pathways.

The black cloaked woman:

Hard Hat for black cloaked woman

 From one of the metal heaps a black cloaked woman with illuminated eyes stands in the area
where the aircraft fell; she is at a position overlooking the scrap metal yard, and she moves slowly and
gazes with precision into every hiding place in the scrap metal yard.
She represents Mozambique: a woman who has been raped, poverty-stricken, abused, and has lost her
children.
With a lantern in one hand she walks in pain; with the other hand she makes hand gestures,
each directed at a specific corner of the scrap metal yard.
A light flashes and a revelation of death, rape, tortured children, the devastated economy, the military
leaders, and the battle between RENAMO and FRELIMO shows [on the big screens outside the
perimeter of the scrap metal yard]; she represents the ten provinces of Mozambique and the
people of the different ethnic affiliations.
The projectionists must be stationed in their positions.
Towards the end of the movement the swinging concrete ball takes her life.

Journalist and economist:

Hard Hat for Journalist Hard Hat for economist

 The journalists at podium G receive the video and SMS messages from the audience and are advised by an
economist [statistics are worked out from an institutional and infrastructure perspective, such as
the existence of communication and power grids, transport systems, etc.].
They can produce either news or video clips as part of their report; when they have completed it,
they project it outside the perimeter of the scrap metal yard on the big screen.
Diagram 10
CONSTRUÇÃO.. 2

Cannot be performed in concert

OUTDOOR SCENE

in
SCRAP METAL YARD

Scene with no music accompaniment

The audience is to be advised in accordance with their involvement before they enter the space.

Diagram 11
Tenors and Sopranos:

Hard Hat for Tenors and Sopranos

 Tenors and sopranos dressed in RENAMO uniform appear with headphones [the music is played
through the headphones and they sing Grito Negro by José Craverinha using the articulation
technique; the poem can be repeated until the illumination is turned off]. The exact synchronisation
by means of headphones, without a conductor, alone requires a completely new technique of
hearing and singing. Each podium is equipped with a table where the singers have a number of
objects that can manipulate the voice: a fan, empty containers of different sizes with different-sized
openings to be placed over the mouth, containers with different liquid volumes, etc. They make
appropriate gestures and vocal sounds. The podia are situated as follows: 1] the two for the tenors are placed at
opposite ends on top of the piping, in the south-west and north-east directions; 2] the other two for the sopranos are
placed opposite each other on top of the scrap metal heaps, in the north-west and south-east directions.
Singing starts when the light illumination at the podia goes on and stops when the light illumination goes
off. [Special rehearsal is needed by the light technicians.]
The tenors and sopranos attack the mixed choirs verbally.

Diagram 12: Voice enhancement and natural voice


Mixed Choir:

Hard Hats for Mixed Choir

 A Mixed Choir consisting of women and children dressed in FRELIMO military uniform is
situated on Platforms 2 – 6; they de-construct and sing Grito Negro by José Craverinha within one
group of singers [the poem can be sung backwards, starting from the end and finishing at the
beginning; from beginning to end; from the beginning, skipping two lines, until the end; backwards, skipping one line, to the
beginning; or skipping words or expressions from beginning to end or from end to beginning. Each poem is repeated
until the light illumination is turned off].
Each Platform is equipped with a number of objects that can manipulate the voice: plastic to be
placed over the mouth, a fan, empty containers of different sizes with different-sized openings to be placed
over the mouth, containers with different liquid volumes, etc.
Each Platform is equipped with two cassette players that are started randomly at any time.
The volume needs to be adjusted by the Mixed Choir or the sound engineer [the sound attack coming
off softly in some parts and louder in others; this matter needs to be discussed by the parties
involved].
The sound content is of the first FRELIMO conference and arias.
A choir member holding a cassette player walks in the space during the performance, manipulating
the player when coming close to the mics, e.g. placing the player in a plastic packet, altering the volume,
placing plastic tightly or loosely over the speaker, altering the direction of the speaker.
Each group of the Mixed Choir verbally attacks the tenors and sopranos, so the sound
produced on each of the Platforms can have a negative or positive impact on the final result.
Whilst singing, each member of the Mixed Choir is covered in blood; they raise their hands into the
sky, some holding body parts of the victims.
Singing and cassette manipulation start when the light illumination at the Platforms goes on and
stop when the light illumination goes off. [Special rehearsal is needed by the light technicians.]
The exact synchronisation, without a conductor, alone requires a completely new technique of
hearing and singing.
During the singing, the Mixed Choir and the singers are under attack by RENAMO and FRELIMO followers
who try to climb and demolish the Platforms and podia.
The remaining walking pathways are carefully guarded by FRELIMO soldiers.

Construction Builders:

Hard Hat for Construction Builders

 The construction builders form groups of protesters [an early sign of a union] and engage with
them in the street outside the perimeter of the scrap metal yard.
The construction builders flood the avenues on Platform 1, chanting songs of the liberation struggle,
and further kick and throw metal objects from the heaps into the avenues of Platform 1.
Gymnasts:

Hard Hat for gymnasts

 The survivors from Scene 1.5 appear, showing their handicaps through movement.
Libretto

Conceived from the poem Grito Negro by José Craverinha


Libretto written by Dimitri Voudouris

- pause for 0.05 sec

space between letters: separate the word with a very brief pause; this is to allow for breathing to take place

[] the expression must be on that sound

[] the expression must be so loud on that sound that it stands out

U capital letters: unification of expression

u lower-case letter: an expression

no space: very quick expression

a italics: a not-too-loud expression

|
Grito negro

|
Eu sou carvão!
E tu arrancas-me brutalmente do chão
e fazes-me tua mina, patrão.
Eu sou carvão!
E tu acendes-me, patrão,
para te servir eternamente como força motriz
mas eternamente não, patrão.
Eu sou carvão
e tenho que arder sim;
queimar tudo com a força da minha combustão.
Eu sou carvão;
tenho que arder na exploração
arder até às cinzas da maldição
arder vivo como alcatrão, meu irmão,
até não ser mais a tua mina, patrão.
Eu sou carvão.
Tenho que arder
Queimar tudo com o fogo da minha combustão.
Sim!
Eu sou o teu carvão, patrão.

|
Tenor 1

|
Eu-eu-e-e-e-e sou [ca-ca-ca-c-c-a-c-c-a-r-r-r-vã-o- 0]!
E tu-tu-tu-E-tu-tu-E-E-tu-tttt-E a-rrrrrr-ancas-me brutalmente do chão
e fazes-me tua mina, patrão.
Eu sou carvão!
E tu acendes-me-me- me-meme, papapatrão,
para te te te servir eterna-na na-na-namen-tetetetetete co-coco-coco-mo fofoforça mo-tritritritriz
mamama-s-s-s-s eter-erer-[NAMEN]-namentete-Te não, [PAaaapppaPApa]trrrrrrrrrão-o-o-o.
EuE-E-e-e-e-eu [SOUSO-U-S-ou] caCA-CA-r-r-r-r-r-r-v-v-vV-V-Vão-o-o
e tennn-n-n-h-hho que [AR]deeeeer-rr sim;
[quququeiqueiquei-QuEi]ma-Ma-MA-rrr [tudotudotttdodddddo] com aA-a-A-aAaA força da mimimi-MI-mi-MI-nnn-ha co--[mbuMBUmbuMmMmbbUU]-st-ãoo.
[Euuu-U-U-U] sou ca[rrrrrrrr]vão-o-o;
te-tet-e-te-te-t-en-hoh-oh-o-hoh que [A-A-ar-r- r-- r -r derrr] na exploração
[A-A-ar----r --r--derr-r ] aaat-aaté às cin-CIn--za--za-za-za---za-s da-a-dA ma-MA-ma-MA-ma-ld-ld-Ld-LD-içã-çã--çã--ição
A--a-A--a-rd-rd--rd-er vivo-vovi-vovi-vo-vo-vi-vo [co-co-mo-mo-co-como] al-l-lllcaa-a-a-trtrtr-trão, meu – meu - meuuuu-u-u ir---mã--oooo,
Aa-Aa-Aa-té- té--té--té não ser-sese-ser mais aaaaaa tttttttttt-u-u-u-u-au-au-a mina, pa--pa-[PAPA]-pa-trãooo-o-oO.
Eu sou carvão.
TenTeNtetetennnn-ho que AarAarAar..deDer
Quei-quei-QuEi-quei-qu-qu-ei-ei-mar tu-Tu-TU-tU-TUdodododoDO com o fogfo----go da [min--Min-min-MIn-MIN-mIN]ha cccc-om-CCco-com-co-m-bus-US-tão-ão-o.
Sim!
Eu –eu-eu-eeee sou o te-ttt-ee-Euu ca-ca-[cccAA]-rrrrrrrrrrrrrrrrrrrrr---------vã---oOo, pa-PA-pa[PA]---[tr--tr----tr-tr]------ão.
|

Mixed Choir 1

|
Eu-eu-uuu- souuososuossou [CA-ca-r-vã-o- 0]!
E ut-ut-tu-ut-ut-tu-ut-tuE-uut-utu-E-E-tu--E a-rr-an-na-----an--ca-s-me brrbrbrb-ut-al-men-men-te do ch-ch-----ão
eee-eEe-e faFAfefafefafefa---zes-sez-sez-zezeze-zes-me ututututuuuU-tua min---------a, pppp-PP-pa-tr-ão.
UUUu-u-ue-uE-Eu sou ca-Ca-ca-Ca-rv-RV-RV-ão!
E tu tttt aaa [c-ec-ec-ce-end-dne-end-dne-endes]me, paTRrtrtrtrtrtrtrtRTtr-----trão,
para te se te se te se te se – rv-rv-vr-vr-rv ...ir [e---te-te-te-te-te-rna-rna-anr-rna-anr-rna-me-nt-e] como força [mo-om-om-mo-om-mo-tr-rt-tr-rt-tr-iz]
mas eternamente não, patrão.
Eu ue eu eu eu ue-so-eu-ue-os-ca-rv-ã-o-ho-[nininini]-te-ge-ga-ge-ga-gat-ge-ge-geu-gat
[eeeeeeeeEEE]-te-teu-te-teu-ta-tau-te-nh-te-o-qu-e-u-u-ue-q-ar-ra-dr [siSIsiis]-m;
quei-quei-quo-q-ma-re-ra-tu-re-ra-do-de-da-co-com-ca-cac-co[mm]-a-fo-rga-rça da de du ga gage-mi-n-[hahaHA]-co-ca-mb-bm-u-st-ã-o.
ueEu-so-e-[uU]e ca-co-cac-co-m-rv-ão-ltr-lt[rrrrrrr]-R;
te-----tau-----teu-te-te---te-teu-tau----nh-teu te teo-que-ar-ra-rer-dr na na na na nana-ex-re-ro-plo-ra-re-ru-çãc------o
ar-ra-re-ra-re-[ddddd]r-a-té-teu ta te teu ta teu te tà so te teu cin-cicin-----ciz----as da de du do de da dau-ma-ldi-llldi-ç-ã-ooooo
ar---ra-ar---ra-ar-ra-dr-rd-ro-rd-ar-ra-vi-va-vi-vo-co-ca-co------cac-mo-al-cat-r-ã-o, me-u-UM-me-me-meu-me-ir-m-ã------o,
a-té-teu-tau-te-a-te-teu-te-nã-ooo-a-te-teu-se-r-es-ma-mar-ma-i-ma-s-a-tua-te-teu-min-ahaqum, pa-parra-paparra pa tr para-ã-o.
ue-Eu ue-so-e[uuUUU] ca-cac-ca-cac-rv-rv-vr-ã-[ooooOOOooooo].
Te-teu ta tau te te teu ta nh nho nheu ho-o que qui q ar ra re dra ra ra re do doo
Quei q qui ei ue eu ma ra re qi eu ue tu do de deu don da com cac ca com o fog gog gege o da de di min quee ha com cac co com cac ca coc bu co bu coc st ã coo.
Si-is-si-mi-meh!
Eu ue eu [UU]e-so-eu-os o te teu te ta tau te teu ta ca te ca te ca tet cac rv ã o te teu to, pa para pa para pa pae para tr parã-o.

Tenor 2

|
Eu-eu-e-e-e-e sou [ca-ca-ca-c-c-a-c-c-a-r-r-r-vã-o- 0]!
E tu-tu-tu-E-tu-tu-E-E-tu-tttt-E a-rrrrrr-ancas-me bru-BRU-bru-ta--taTAta-lL-mm-m-ee-me-nte-e--e do [ch--ch-CH--ã-oOOO]
[eeeeeeeeeeeeeeeeee] fa-FAz-AZ-AZ-es-me tua mina, patrão.
Eu sou carvão!
EeEeEe [tututututu] acenACENacACacen---dddddd-e-EE-s-me, [PA-PA-PA-pa-tr----tr-tr-tr-tr---tr-----------trãoO],
para te servir eternamente como força motriz
[maMA-----------maMAs] e----te-TE--te-TEteTE-r-na-NA--na-NA-ment-NT-Nt-e [n---ã--ã-ãã-Oo], patrão.
[eee----eee-Euuu------u-u-u-u-u-uu] sou SOSouu [CA-car-CAR-vvvvvvvvvvv-ã-o]
e tenTEN-TeN-te-te ten-hoHO-ho qu-QU-QUe [ardRD ---RD-Rd-rD---er ] [s-----i---I--I--I-m];
[qu-EI-ei-eI-e-IM-im-im-aaaa-r] tu-U-DUD-u-dUd-do [com] a [fOoooooRrça] da [mi---n/min-ININInHAhaHAha] com-MB-Co-cO-m-b-bu-ST-STTTT-stã-oooooooO.
[eeeeeEu] sou ca----RVRV-----rv---ão;
[TTT--TTT-T--TTT-------TTTTTTTT-t-t-t-t-t-t-t-ee-ee-n-h-H-H-o] que ar-RD-rrrr---RD--rrrr--Rd--rrr--Rd--rd--eeer [na-NA-na-NA] exEXexEX-plo-PLO-plo-PLO---RARAra-ex-ex-X-X-plo-
ra-----RA-ra---çã-ã-----çã-çã-çã-çççç------A-ããã-o
[[ARrrAAAR]ar------d------er] a-ttttttttttttttttt-é às [ciCIciCINNNZNnnzzzzzznza-a-a-a-s] daDAdaDAdADa [mal-ma-l-di-DI-di-çã-oo]
[arDEDEDEDe-D-ErrrrRr] vivo como alcatrão, meu--MEU-meu--MeU [ir-I-I-I-I-I-r-mmmm-ão],
AAAA----a--t-t-t-t-té nãonoNO----------não seSE---seSEr maiaiaiiiiaaaais a tua [mININniINninininiINina], pATRAPATRtr-trã-OOo.
eeeeeeeeeEE-Eu sou ca----RRRRRRRRRRr----vã------o.
etettteteteeteteteTenh[NHNHNHNH]nho que AAAardRDrdERer
[Qqq-ui---mar----QUIEMArrrr---eimar] tuUUDD-ooo-do com o fo-gogo-OGOG-go-OG-FOFOFO-go da minINININHAha cococococoCO----MB-mB-mb-US-uS-uuuuus-TAt-ão.
SIM?SISISISISim!
eeeee------------eee-------------eeEu sou o teuteu--TEU--teu ca-rvRVRVRVA-ão, pa---PA---pa---PA--pa--PA----trão.
|

Mixed Choir 2

Eu-e-eu-e-eu-e-eu-e-e-eu-e-eee-eu-eu-e-e-U sou [c-ca-c-ca-c-c-A-c-c-A-r-R-r-R-r-R-r-vã-o- 0]!


E-tu-e-tu-Etu-E-tu-tu-tututu-ttt-E-E-tu-e-tu-E[ttttt]ea-E-a-A-rrrrrr-AR-aAaA-nc-NC-NC-ncas-me-m-me-m-E-me br-Br-br-br-ut-Br-ut allll-ut-alll-ut-ALL-me-ME-me-ME-me-mm-me-
m-n-n-n-t---ttt-tt-te do-d-do-do--do--d--do-d--d-do ch-CH-c-c-CH-ch-ão
eE--E--e-e-E fa-E-e-fa-e-fa-e-fa-[zZZZzzzz]-z-z-z-e-[sss]-me-m-m---m-eeee-e tu-t-tu-t-tu-t-tu-tu-t-tu a-A-a-Aa- mi-a-mi-m-mi-m-a-mi-NA-N-N-a, pa-PA-pa-pppa-tr-a-TR-pa-tr-pa-
tr-pa-tr-ã-o-o-----O.
EuUU-e-U-U-eEU sou ca-ca-CA-ca-CA-rv-rrv-rv-RV-ã-o-O!
EeeeE-e--e--e--e-E tu e-tu E -tu-U-Uu--u--U-e-tu ac-ac- e-tu-ac-e-tu-U-U-u-ac-AC-e-tu-C-eu Ee-n-Tu-u-ac--de-ss-s-tu-ac-me-m-me-m-e-M-m-eE, pa-P-p-p-[ppp]-ptr-ptr-ptr-t-pr-pr-
t-pr-pr-t-ão,
pa-Pa-Par-Pa-PAR-para-Pa-para-pa-PARA-PAR-PARA-pa-pr-para-pr-para-pr te-pr-te-para-te-pr-para-te-t-t-t---t---t-e-p-a-te se-te-se-te-se-rv-se-rv-para-rv-ir-ra-pa-ra---ra-vi-rr et-
te-et-te-et-te-e-te-er-para-te-pr-naNA-na-pr-vir-par-pa-na-me-para-me-para-me-n-n-n-n-[tTTte]-e coCOCO------CO------mo-Co-cccc-Co-m for-co-m-for-fo-tu-fo-tu-fo-te-f-o-
[rrrrr]-ça mo-m-o-m-o-mo-mo---t-m-mo-tt-ri--z-Z
ma-m-ma-m-ma-sma-sma-sm-ma-sm-ma-sm-a-ma e[tttttttttttttt]-er-m-er-tu-ma-er-tu-te-tu-tu-te-na-tu-na-NA-tu-te-me-m-sm-ma-m-sma-me-ma-m-sma-me-nt-eeeee não sma-
m--nã, p-p-p----pa---p-p----p-pa-tr-p-tr-pa-t-r-pa-ptr-ão-o-O.
Eu-e-e----e--------------E-u-UU-u-eU so-Eu-sou ca-Ca-so-U-CA-ca-Su-o-Sou-SU rv-Ca-rv-so-sou-ão
e-E-E-E-E te-TE-e-e-TE-nh-oOOO-o que-ooo-q-ooo-u-ooo-q---ooo-ue ar-ooo-ar-q-oooo-ue-de-RRRRr-de-ooo-RRR- sim;
que-i-que-i-que-i-ma-que-ma-que-ir-ma-que-i-ma-ir-rrr tu-TU-tu-TU-tu-tu-tu-TU-tu-dododododo-tu-TU-dododo-tu-tu-d-o-tu-tu-do-d-od-o co-d-co-d-co-d-d-co-d-d-co-d-m-co-do-
[MMM] a for-[oooooo]-m-mi-[ooooo]-çaa-a-a da-a-da-a-da-a-da mi-a-mi-da-mi-a-nh-NH-a-mi com-mi-m-m-mi-co-m-mi-co-co-co-m-m-co-mb-mb-bm-co-mi-com-mb- us-tus-us-t-us-
co-tus-co-m-t-us-mi-ão.
eeeeeeeeeeeeeeeeeE-u-UUUUUUUU so-so-SO-so-SO-so-[uuuu] ca-[uuu]-ca-uu-So-[uu]-ca-uu-So-car---So-uuu-v-v-soã-o;
te-TE-[ttttttt]nh-te-nh-[eee]-t--[eee]-nh-oOO que [eee]-ar-Ar-[eee]de-a[rrrrr] na-a[rrrr]-n-na-a[RRRRrr] ex-na-n-a[Aaaaaa]r-pl-pl-ex-na-n-or-mi-no-mi-no-a-mi-no-a-çã-----o
ar-----Ar-mi-MI-ar-AR-ar-mi-d-d-e-r a-mi-ar-a-té às té às té às té às-[sssss]cin-té -cin-té -cin-za-té -cin-té-[sssssss] da ma-da-ma-da-ma-da-MAl-diçã---------------------------------o-O
[a-A-aaaaaaaaA]rd-[aa]-rd-[aaaaa]-rd-[eeeeeeeee]-r vi-viv-vi-viv-vo-viv-vi-vo-i co-como-co-como-mo al-co-mo-ca-co-mo-trtrtrtrtr-ão, meu-me-meu-me-meu-me-meu-me-meu ir-
meu-ir-meu-me-ir-meu-mão-meu ,
at-at-aaaaa-t-é não ser-Se-rrrr-ser-se-serr-se-serr-se mai-se-mai-se-mai-ser-s-s-s-ser-meu-me-ir-mai-ser a tu-a-atu-a-tu-a-tu-a min-a-a-tu-a-a-min-tu-a, [pppppppp]-pa-PaPa-tr-
pa-tr-pa-ãooooo.
E-uuU-[eeeeeeeeeeeeeeeeeeee]-Eu so-eu-so-u-e-u-so-e-u ca-CA-ca-so-eu-EU-so-car-so-car-ca-uu-vã-o.
[etetetetetteetteetteetTe]-n----[hohohohohohoho]-te-et qu-e-qu-e-ho-ho-HO-te-et-qu-e a[rrrrrrrrrrrrrrrrr]-[dedededede]-r
Quei-i-ue-iQ-iQ-ue-e-u-m-q-ue-m-i-ar tu-Do-do-tu-DO-od-do-do-tu-----tu-----tu co-tu-tu-co-tu-co-m o-mo-tu-mo-q-eu-mo fo-fo---do-do--ar-go da-da-da-da-do-ar-go-da min-go-da-
go-da-nah-ha co----nah-da-dah-ma-m-bu-ma-go-min-go-st-ão.
Sim!
[eeeeeeeeeeeee]Eu so-[eeeeeeee]-u-so o-so-UUU- te-[eeeeeeee]-u ca-so-rv-rv-RV-go-ã-g-g-g-g-go, paPa-ta-pa-ta-tr-pa-ta-tr-pa-ta-tr-ão.

|
Soprano 1

|
e--e--e--E-E-E------E---ee----uu---u-u---Eu-eu-e-eu-e-eu sO-OuSou-ou [car-ca-carrrrr-c-c-ca-cccccccccCC-c-a-r-r-r-vã-ããão]!
E-EeeE tu-ttttu-ttu-EeEeE-tu-tu-Ee-EeeE-tu-tt-E aaaa-rrrrrr-annn-cacacacascas-me brutalbrubrubrut-t-ttbru-brubrutalmen-n-N-n-te-e-e do ch-ch-ch-ch-ão-o-o
e faa—zaz--aza-a--ae--s [mee-mm---ee--m-me-em-me] tua-tu-a-t-t-au-ua mina, Pa-pa-pa-Pa-tr-tr-tr-t-rt-TR-ão.
Eu-eu-eu so-sou-sos--------sos--sos--sos-sos -so-sou c-a-r-r-r-r-vã-oOOO!
E tu-Tu-TU acennnnn---de-de-de-ss ...me, PA-pA-Pa-pa-tr-TR-TTTT-r-r-r-Rão,
para-Para-par- paa-par paa-par paa-par-para-paa-para te- te- tete-teng te servRV-RV-R--V-R-V-RV--RV--V-R-V-R-RV---ir et--te-et-er-er-er-nam------e--e-e-En--te [co-CO-co-mo]
força [moTRtRRRRRRRtrrr-tri-tri-triz]
mas eter------na---m---en--en--en--te não, pa--Pa--Pa-tr-tr-TR-ã-o-O-O.
Eu sou-so-so-so-s-os-o-U-U-u-u-u-u car-raC-rAC-Rac-Car-vão
e tentetentetetenttntntntntennnnnnohohhoho queeuqqueeuq ar------der si----mmmmmm;
qu---eieu----qqqqqqeui---imr-ar-ar tudo com a fffffforroorroorça da mi---Nn------ha cooccooccooccooccoocCo-mbu-stã--o.
eeee--UUU-E--u sousosou--sou-so-sou ca-rv-rv-vr-vr-rv-ão;
[tentetetetntetetenhoohohohoho] que [A-aA-aA-a-a-rder] na exXXX---plPLplPLplPl---ora--çã--oOOO
aaAarde-ed-de-ed-de-de-der até às cinCIN-NIC-az-zas da [ml-al-aal-al-ididdi-ção]
ar-dddddddddd---e---r vivo [oCocOC-co-om-mo] al---------[cecacacecace-ca-tr-ão], meu ir-----mã-o,
[a------tétetetetetetetetete-t--é] nãoo--oo--o ser m-ai-ia-ia-ai-ia-ia-ai-s a tua-tua-tua-tua-autautautautautaut-tua mina-ina-mina-min-min-mi-mina, pa----PA-aP-ap-pa-trrtrttr-tr-
ão.
Eu-UeUeUeUe-eu sou car-----------------------------------v--------ã---------o.
Te-ten-te-te-te-ten-nho que-euq-que-euq-que a-drdrrdrddr-rder
Queiieuqieuqieuqqqqieuqieqqeui-ma-r tuUUUdddo com o fff-[ogo] [ogo] da [ogo] da [ogo] da mi----im-im-mi-nha combustão.
mis-mi-mi-mi-m-m-Si-m!
Eu [sousososouuU] o [teuteteuteteuteteteuteteteteu] carRaCracRacCarVvVvVv-aaa-vão, PA-pa-PA-pa-ap-ap-PA-trTRtr-ão.
|

Mixed Choir 3

|
Eu te eu te eu te eu mina te eu te..ogo gmo ogo te eu ogo te teu te te te te teu ogo gmu te te te teu.

Eu te eu te eu te eu te eu te..ogo gmo ogo te eu ogo te teu te te te te teu ogo gmu pa rt ao te te te teu.

Eu te eu te eu te eu te ta ta tal eu te..ogo gmo ogo te eu ogo te teu te te te te teu ogo gmu te te te teu.

c-----a-rv-vr-rv-am-en-en-te-co-mo-omo- te te teu te te te te te ta te ta te teu faz fa fa faz aoooo teu.

Sou te te teu soust te te teu teteteu ogo sous me te te teu car papa pa que tua tua aut Ue teu.

Mis mi m m m m mitr rt rt traut aut aut TR nic oc co aAAAAA teu te te teu tau ti ti sou.

Vivo vi vi teu tet tet tet tet tet te te teu dr drr der nho nho nho da te te te teu te te teu.

Imr ar te imr ar te teu imr ten que euq euq te ten tet te en imir imr tua te te et et te teu.

OcoCo sou sou sssssous te et te et teu tau pa com te te bust teu te te ao.

Eu te eu te eu te eu mina te eu te..ogo gmo ogo te eu ogo te teu te te te te teu ogo gmu te te te teu.

Eu te eu te eu te eu te eu te..ogo gmo ogo te eu ogo te teu te te te te teu ogo gmu pa rt ao te te te teu.

Eu te eu te eu te eu te ta ta tal eu te..ogo gmo ogo te eu ogo te teu te te te te teu ogo gmu te te te teu.

c-----a-rv-vr-rv-am-en-en-te-co-mo-omo- te te teu te te te te te ta te ta te teu faz fa fa faz aoooo teu.

Sou te te teu soust te te teu teteteu ogo sous me te te teu car papa pa que tua tua aut Ue teu.

Mis mi m m m m mitr rt rt traut aut aut TR nic oc co aAAAAA teu te te teu tau ti ti sou.

Vivo vi vi teu tet tet tet tet tet te te teu dr drr der nho nho nho da te te te teu te te teu.

Imr ar te imr ar te teu imr ten que euq euq te ten tet te en imir imr tua te te et et te teu.

OcoCo sou sou sssssous te et te et teu tau pa com te te bust teu te te ao.

Eu te eu te eu te eu mina te eu te..ogo gmo ogo te eu ogo te teu te te te te teu ogo gmu te te te teu.

Eu te eu te eu te eu te eu te..ogo gmo ogo te eu ogo te teu te te te te teu ogo gmu pa rt ao te te te teu.

Eu te eu te eu te eu te ta ta tal eu te..ogo gmo ogo te eu ogo te teu te te te te teu ogo gmu te te te teu.

c-----a-rv-vr-rv-am-en-en-te-co-mo-omo- te te teu te te te te te ta te ta te teu faz fa fa faz aoooo teu.

Sou te te teu soust te te teu teteteu ogo sous me te te teu car papa pa que tua tua aut Ue teu.

Mis mi m m m m mitr rt rt traut aut aut TR nic oc co aAAAAA teu te te teu tau ti ti sou.

Vivo vi vi teu tet tet tet tet tet te te teu dr drr der nho nho nho da te te te teu te te teu.

Imr ar te imr ar te teu imr ten que euq euq te ten tet te en imir imr tua te te et et te teu.

OcoCo sou sou sssssous te et te et teu tau pa com te te bust teu te te ao.

|
Soprano 2

|
U--eu-Eu-eu-e-e-e-e sou-so-sos-eu-sos-eu-eue-so-u [ca-ca-ca-c-c-a-c-c-a-r-r-r-vã-o- 0]!
E tu-tu-tu-tutu-E-tu-tua-te-to-tu-E-E-tu-tttt-E a-rrrrrr-annn-cn-sacn-nnncas-me b--ru----tal--me-em-nte do chão[OOO]
e faz-Fa-FaZ-fa-got-tt-foes-me tua-te-tua-te-to-teu-mi-teu-te-na, pa-ca-qatr-ão.
ee-EuuuUU sousossou c--------a--rv-vr-rv-ãoOO!
tua-te-tueeeeee-E tutututu[utututututututut]tu a-ce-GOG-CE-ce-CE-cinq-ue-n-de-s-me, pa-ca-co-qat-r-ã-o,
para te para te para te para te para te seesseesseesserrvir ete---rn-rn-rn-am-en-en-te com-te-tua-teo fo-gor-for-gaça mo-Mo-ma-mi-tr-iz-UUU
mas mas mas mas mas et-ern ern Erna na ma ame-nt-nt-ae e n ã o, pa-que pa PA Que qatr-ã-o.
Eu s ue os ue o eu-u ca ca ca qui serv-a-o rv rv cã que te teu te ta-o
[eEEE]-----[eeee] te tau tan te tau teu nh te teu to que ar di go gog og-der si-is-m-m;
quei-que-im-ir-qua-ma-re-re RE-re-tu-tue-tau-do-te-go-gi-com-co-com-a-fo r ça da de gp fo for-min-ha-com-co com-bu-si-sh-sh-tã-o.
Eu ue eu ue eu so-eu ue-os eu ca rv ca ca [c c c c] rv go de do dat ã-o;
teTeteTete----te--tau--te--ton-te-n-ho--ho que-q-q-qui-ar-d-de-dr na ex-te-tau-ex-pl-or---ro-a--çã-o
ar-ra-ar-ra-re-do-dr ati---é--e--tia-te---teu às-ci-ci-nz-ar-r[aa]s da de do-ma-me-para-plil-di-çã-[oooooooooO]
ar-ra-red-dr-vi-viv-va-ogr-co-ca-co-cac-mom-ri-Tz-al---ca--co co co co CA tr TR tr ã o, meu-meeuow-me-me-UUU-ir-ir-mão,
a-at-é-nã-o sersi-san-mai-meu te teu tots a tua te teu ti te teu tua ta teu te teua-m-i-na-na-na-NA, para-[papapapa]tr-ã-o.
ue-ue-eu-Eu-so-eu-ca-eu-ue-eu-rv-ã-o.
Te-tea-teu-te-tau-tua-n-[hoho] que qui q-ar-dr
Quei-ei-q-u-ma-ru-qu-tudo-te-tudo-teu-te-tau-co-ca-cac m o fo-to-n-br-go da do demi-n-ha-[haha]-com-co-com-co-bu-bus-st-ã---o.
mi-mi-im-S-im!
eE-u sou-sos-sou-sos-sou O-o-o-O tetetete---te--uUUUUUUU c-a-A-A-A-rv-ã-o, patrão.
|

Mixed Choir 4

|
Ccar sou me tu tu tu grgr ca co ca car faz faz te car ac ac fo rm a pa teu te te tau ra o te ca car sou meu.
E tu tu ttt na oO gr car ca Cac me faz et ter sev sev curucu curucu curucu mo mo motr co omo for sou min Ca cac car et eter na men
Ca cac car curucu ow ouw me tr rt rt trao sou gr gr meu moo ancas do do do chao tua mi mi mina tr tr mina te tr te tr to.
OoO pra pra tri tri OoO pra tre tre cin za cin za meu que te ten netnet tua tehn ate eta eta eta ate minha Eu uuu Eu uU U U u.
Ho te ten tenh ser mais a tua meu meu miaou X X X X te te teu tan teu tanh rrr bru tal men te te te teu ta to tu ti te teu.
Qui qui im irm nao tu na tu nao tu ao nao ao nao tu eu il tu teu tu sim si mis mis sim e e e e e tern a mente da tu mis sou tenho.
Na sir ser sir ser fo fo fo go ogo min ha ogo da tu do od ut mar ram com moc oav voax eu u U U u da al alca acla ue.
MI si ma is si in te te teu than sim mis ate fo ogo ogo que euq euq euq weh ma wehma mahew sou sa saf ancas ard er.
A OoOoO te uUUUU tet Tetuuuu ate eta ate ag his sh mimao oamim sos sou sos net curucu sev ter me faz do moo meu trao.
Ccar sou me tu tu tu grgr ca co ca car faz faz te car ac ac fo rm a pa teu te te tau ra o te ca car sou meu.
E tu tu ttt na oO gr car ca Cac me faz et ter sev sev curucu curucu curucu mo mo motr co omo for sou min Ca cac car et eter na men
Ca cac car curucu ow ouw me tr rt rt trao sou gr gr meu moo ancas do do do chao tua mi mi mina tr tr mina te tr te tr to.
OoO pra pra tri tri OoO pra tre tre cin za cin za meu que te ten netnet tua tehn ate eta eta eta ate minha Eu uuu Eu uU U U u.
Ho te ten tenh ser mais a tua meu meu miaou X te te teu tan teu tanh rrr bru tal men te te te teu ta to tu ti te teu.
Qui qui im irm nao tu na tu nao tu ao nao ao nao tu eu il tu teu tu sim si mis mis sim e e e e e tern a mente da tu mis sou tenho.
Na sir ser sir ser fo fo fo go ogo min ha ogo da tu do od ut mar ram com moc oav voax eu u U U u da al alca acla ue.
MI si ma is si in te te teu than sim mis ate fo ogo ogo que euq euq euq weh ma wehma mahew sou sa saf ancas ard er.
A OoOoO te uUUUU tet Tetuuuu ate eta ate ag his sh mimao oamim sos sou sos net curucu sev ter me faz do moo meu trao.
Ccar sou me tu tu tu grgr ca co ca car faz faz te car ac ac fo rm a pa teu te te tau ra o te ca car sou meu.
E tu tu ttt na oO gr car ca Cac me faz et ter sev sev curucu curucu curucu mo mo motr co omo for sou min Ca cac car et eter na men
Ca cac car curucu ow ouw me tr rt rt trao sou gr gr meu moo ancas do do do chao tua mi mi mina tr tr mina te tr te tr to.
OoO pra pra tri tri OoO pra tre tre cin za cin za meu que te ten netnet tua tehn ate eta eta eta ate minha Eu uuu Eu uU U U u.
Ho te ten tenh ser mais a tua meu meu miaou te te teu tan teu tanh rrr bru tal men te te te teu ta to tu ti te teu.
Qui qui im irm nao tu na tu nao tu ao nao ao nao tu eu il tu teu tu sim si mis mis sim e e e e e tern a mente da tu mis sou tenho.
Na sir ser sir ser fo fo fo go ogo min ha ogo da tu do od ut mar ram com moc oav voax eu u U U u da al alca acla ue.
MI si ma is si in te te teu than sim mis ate fo ogo ogo que euq euq euq weh ma wehma mahew sou sa saf ancas ard er.
A OoOoO te uUUUU tet Tetuuuu ate eta ate ag his sh mimao oamim sos sou sos net curucu sev ter me faz do moo meu trao.

Mixed Choir 5

u-eu-e-eu-e-e-eu-e-eu-e-e-eu so-e-e-u-so-[uUU]-e-eu [ca-ca-ca-e-c-c-a-eu-e-e-e-e-c-c-a-eu-r-r-r-vã-o- 0]!


E-e-eu-E tu-tu-tu-E-tu-tu-E-Eu-eu-e-[uu]-e-tu-tttt-E a-rrrrrr-an-an-a-an-ca-an-ans-me br-mi-me-br-ut-mi-br-ut-al-uta-l-m-m-m-e-br-al-nt-al-nt-uta-tu-e-e-al-uu-ut-al-nte-e-al do-
uta-nl-chi-me-me-ma-do-go-me-do-me-mu-meia-u- ch-ão
e fa-me-mi-fe-ma-me-mi-fe-ze-ma-fe-ze-mi-go-d-d-d-d-dos-me tua-tua-min-t-t-tua-min-a-min-t-t, pa-tua-tr-tua-tr-pa-tua-ão.
e-u-u-[eee]-e-Eu so----u--so---u--soU ca-ca-ca-ca-CA-do-go-do-og-od-do-go-do-rvão!
E tu-e-tu-e-tu-tu-e-tu-tu-e-tu-tu-e-ac-ca-tu-tu-ac-en-En-ca-ac-de-do-go-ogs-[mmmmm]me, pa-tr-tr-pa-tr-pa-tr-ão,
para-te-para-te-pa-ra-te-para-te-para-te-et-te-et-te-para-par-te-tese-te-se-te-se[rrr]-vi-[rrr]- et-t-t-t-er-et-er-et-na--[nanananananaananna]-en-nam-nam-en-na-na-en-te co-coi-
co-coi-mo-te-[nana]-fo-na-fo-narça mo-narca-mo-tr-iz
mas e-mase-nana-fe-ze-eu-o-te-tu-[tttttttttttt]-rn-rn-mi-e-tu-tua-tut-tua-tut-m-tua-tut-en-te-tua-tut-en-te-nã-ooo, pa-tr---mi----ão.
[ueueueeu]-Eu-so-u-so-u-u-os [ca-cacacaca]rv------------------------------ão
e te-----------n-[hohohohohohoho] q--q---q---u--------e tu-atu-ar-a-tu-er-atu-ar---de-ta-tuar-te si-te-tua-m;
qu-q-que-q-qu-que-q-e-u-u-e-q-qei-q-ei-e-eu-m-a-r tud-tu-tud-tu-ut-ut-tud-o co-co-CO-mo-m-[aaaaaaaaa]-a-for-maça da-do-go- min-do-mi-go-ha co-CO-m-bus-bu-bus-bu-tã-o.
eeeee-Eu so-e-so-u ca-so-eeee-so-rv-ão;
te-te-te-ten-te-te-ten-nh-te-te-nh-te-te-nh-o que-te-te-nh-teA-rd-teA-rd-er na ex-na-na-ex-pl-pl-ex-q-quo-ra-quo-ra-çã--o
ar-co-quo-ra-que-de-r a[ttttttttt]-aé às as-as-cin-as-cin-as-cin-za-s da-do-da-do-da-go-gog-da-do-go-ma-gog-da-do-ldi-ldi-gog-da-do-ção
er-de-de-da-er-da-de-vi-vo-vi-de-er-da-co-da-de-gomo-al-de-da-gog-do-ca-dudi-de-teu-te-tr-ão, meu-go-og-uem-meu-go-de-da-du- ir-ri-ir-gog-og-eum-mão,
de-de-do-go-gog-og-aaaaaaaaaa-a-té-nã-o se-de-da-do-r mai-mai-mei-moi-ma-ma-mes a tu----a-te-tua-te-min-a-go, pa-te-tr-te-ti-go-gr-gw-t-r-ã-o.
[-euueeuueuuuUUUuu-Eu] so-U-u-So-U-so-so-[uu]-ca-ca-de-di-gog-gow-gla-glar-v-ã-o.
et-et-te-et-Te-et-n-ho que-qu-e-quUU-e-ar-ar-ar-de-r
Quei-que-quei-que-q-u-e-eu-q-ma-r tud-tudi-tud-tudi-o-co-tud-om- o-f-[oooooooo]-[oooo]-ogo-da-og-da-go-da-de-da-min-go-de-ha- co-co-co-de-da-du-de-da-hu-mb-mb-bm-ust-
de-de-da-mb-ão.
Sim-si-si-sim-mi-mis-sim!
[eeeeeee]-[ueueueue]-i-u-iu-Eu-so-ue-ue u o te-to-tauu-te-ca-teu-te-ca-rv-ac-ão, pa-de-da-pa-para-t-[rrrrrrrrr]-tr-de-goz-ão.
Audience:

Hard Hat for audience

 During this scene the audience is presented with plastic containers that fit over the
ears. They are to use and manipulate these [varying the container-to-ear association] to attain
different degrees and angles of the Sound Projection.

Public Speakers:

Hard Hat for Public Speakers

 Five Public Speakers [children and women] move around with megaphones reading text from
Manual do Mobilizador.

Sound Projectionists:

Hard Hat for sound projectionist

 Two Sound Projectionists are needed in CONSTRUÇÃO 2: one actively involved with voice projection
from RENAMO supporters, the other for FRELIMO supporters.
Heavy duty vehicle operators:

Hard Hat for Heavy Duty Vehicle Operators

 Operators of heavy duty vehicles: bulldozers, loaders, dump trucks, a flatbed truck and a tanker truck are
positioned in operating condition in their vehicles and at random perform actions such as the following:
Bulldozers
turn on engine
manoeuvre backwards and forward
flatten steel

Loaders
turn on engine
manoeuvre backwards and forward
lift steel
load it on dump truck and flatbed truck
move steel by paving the way

Dump trucks
turn on engine
manoeuvre backwards and forward
allow for loading to take place
accelerate truck down yard
empty the load off

Tanker truck
turn on engine
manoeuvre backwards and forward
allow to fill reserves with fuel
hoot when finished
allow tanker to leave the yard

Flatbed truck
turn on engine
manoeuvre backwards and forward
allow for loading to take place
accelerate truck down yard and out of
property

Manoeuvres must take between 5 and 10 minutes per vehicle
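The random action sequences above could be sketched as a hypothetical rehearsal aid (not part of the score); the action lists are taken directly from the directions above, while the scheduling logic and its Python realisation are assumptions:

```python
import random

# Action lists taken from the stage directions above.
ACTIONS = {
    "bulldozer": ["turn on engine", "manoeuvre backwards and forward",
                  "flatten steel"],
    "loader": ["turn on engine", "manoeuvre backwards and forward",
               "lift steel", "load it on dump truck and flatbed truck",
               "move steel by paving the way"],
    "dump truck": ["turn on engine", "manoeuvre backwards and forward",
                   "allow for loading to take place",
                   "accelerate truck down yard", "empty the load off"],
    "tanker truck": ["turn on engine", "manoeuvre backwards and forward",
                     "allow to fill reserves with fuel", "hoot when finished",
                     "allow tanker to leave the yard"],
    "flatbed truck": ["turn on engine", "manoeuvre backwards and forward",
                      "allow for loading to take place",
                      "accelerate truck down yard and out of property"],
}

def schedule(vehicle, rng=random):
    """Assign the vehicle's actions random cue times inside a 5-10 minute span."""
    total = rng.uniform(5 * 60, 10 * 60)   # whole manoeuvre length, in seconds
    starts = sorted(rng.uniform(0, total) for _ in ACTIONS[vehicle])
    return total, list(zip(starts, ACTIONS[vehicle]))

total, cues = schedule("bulldozer")
for t, action in cues:
    print(f"{t:6.1f}s  {action}")
```

Each run produces a different cue sheet, mirroring the "at random" instruction while respecting the 5-10 minute bound.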

Percussionists:

Hard Hat for Percussionists


 Three percussion players with found objects from the metal heap.
Percussion is untuned.
Playing is improvisational and very subtle.

Diagram 13

Performance Practice:

Diagram 14
Each performance requires extensive preparations for podia, platforms, sound equipment [8 active
speaker monitors and the mixing console] and lighting, vehicle management, public speakers, children,
gymnasts, tenors, sopranos, percussionists, and projectionists.

Slide Projectionists:

Hard Hat for Slide Projectionist

 Inside the enclosure of the scrap metal yard is a wasteland; amongst the mist there is a re-appearance of
the slide projectionists, who are now dressed in rags. [They collect their projectors and proceed
with the screening of post-war Mozambique and the economic news now plaguing the country.]
They hold their projectors in a begging stance as they project on the rubble, desperately in search
of food.

The dead children:

 From the heaps of metal, covered in white sheets and wearing white make-up, rise the dead children;
they come off the metal heaps to the centre of Platform 1.
Their mouths are wide open, from which a sound emanates [wind through reeds, barely heard amongst the
ongoing noise]; when heard by the others, these are forced to be silent and freeze in their positions,
wherever they are standing or whatever they are doing.
The children move through the terrain together, clearing all the rubble from the avenues of
Platform 1 left behind by the metal workers [lighting, used correctly, is very important to
make this section of the scene work well].
Anyone found in their way is consumed and possessed by them [consuming and possessing is done
by touching or breathing onto a living being; the possessed is brought back into the rubble, and the
dead child, now at peace, helps the possessed human into the metal structure to act as support;
the dead child then walks off and disappears in the darkness]. Only the living act as support structures
to the metal heaps.

Journalist and economist:

Hard Hat for Journalist Hard Hat for economist

 The journalists at podium H receive the video messages and SMS from the audience and are advised by an
economist [statistics are worked out from privatization of state-owned enterprises, reduction in
customs duties, streamlining of customs management, improved government budget, audit and
inspection capabilities].
They can either produce news or video clips as part of their report; when they have completed it,
they project it beyond the perimeter of the scrap metal yard onto the big screen.
The outcomes of the chess games are added up and presented to the media.
The world economic outcomes are represented on each big screen, projected by projectors in
positions A-H.
Lighting Technicians:

Hard Hat for lighting technician

 The lighting technicians are to be placed near the sound engineer/sound projectionist to
improve communication. Lighting technicians set up and operate lighting equipment under the
supervision of a lighting director, whether loading and operating automated colour-change
systems or programming and operating lighting consoles.

Sound Projectionists:

Hard Hat for sound projectionist

 The Sound Projectionists will be placed at a table to the left of the Lighting Engineer. Care should
be taken when diffusing the sound to enhance the spatial components through delivery of musical
gestures, phrases, or single sounds to different loudspeaker locations surrounding the audience.

 Thus, to attain this dynamic balance of the 4-channel computer projection in CONSTRUÇÃO 1 as
well as in CONSTRUÇÃO 2 with the 4 singers, mixed choir, the vehicles, public speakers, percussion
and the invasive or non-invasive outdoor environment in relationship to one another and to the
computer, the single goal would be to maintain optimal intelligibility of all layers. Naturally the
dynamics of the computer sound were produced in a nearly final balance, but only nearly, because
the mixing of the composition with the soloists requires adjustments in the dynamics.

 Nevertheless, how loud the computer composition must be played, and how the natural dynamics of
the soloists must be adjusted, depend on the acoustics of the given space, "the scrap metal yard",
the goal being that from the middle of the space, amongst the metal heaps, everything should be
heard equally well.

 The live performance projection of the composition in 3-dimensional space can be an
enhancement, presenting points of variable distance, trajectories and waves, sudden near and
distant stereo-field proximities, and effectively moving sound to the audience.

 What is added is a co-musical activity that supports and significantly expands the listening and
performance experience.
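As a minimal illustration of delivering sounds to loudspeaker locations surrounding the audience, a 4-channel equal-power panner might look like the sketch below. The square speaker layout and the sin/cos panning law are assumptions for illustration, not the composer's actual diffusion setup:

```python
import math

def quad_gains(azimuth_deg):
    """Equal-power gains for 4 speakers at 45, 135, 225 and 315 degrees
    around the listener.

    A sound placed at `azimuth_deg` is panned between the two adjacent
    speakers using a constant-power (sin/cos) law; the other two stay silent,
    so the summed power of all four gains is always 1.
    """
    speakers = [45.0, 135.0, 225.0, 315.0]
    az = azimuth_deg % 360.0
    gains = [0.0] * 4
    # Find the pair of adjacent speakers bracketing the source direction.
    for i in range(4):
        a = speakers[i]
        span = 90.0                      # adjacent speakers are 90 degrees apart
        offset = (az - a) % 360.0
        if offset <= span:
            frac = offset / span         # 0 at speaker i, 1 at speaker i+1
            gains[i] = math.cos(frac * math.pi / 2)
            gains[(i + 1) % 4] = math.sin(frac * math.pi / 2)
            break
    return gains

# A source straight ahead (0 degrees) sits between speakers 4 (315) and 1 (45).
print([round(g, 3) for g in quad_gains(0.0)])   # -> [0.707, 0.0, 0.0, 0.707]
```

Sweeping the azimuth over time produces the moving trajectories and near/far proximities described above; distance cues would need additional gain and reverberation scaling.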
References:

 Alden, C. 2002, "Making old soldiers fade away: lessons from the reintegration of demobilized
soldiers in Mozambique", Security Dialogue, 33 (3): 341-356.
Bracken, P., J. Giller & D. Summerfield 1995, "Psychological responses to war and atrocity: The
limitations of current concepts", Social Science and Medicine, 40: 1073-1082.
Lundin, I. B. 1998, "Mechanisms of community reception of demobilised soldiers in Mozambique",
African Review of Political Science, 3 (1): 104-118.
Maslen, S. 1997, The reintegration of war-affected youth - the experience of Mozambique, Geneva, ILO.
Turner, V. 1967, The forest of symbols, Cornell University Press.
West, H. 2004, "Working the borders to beneficial effect: The not-so-indigenous knowledge of the
not-so-traditional healers in northern Mozambique", paper to the Social Anthropology Seminar of
University College London.
Boyden, J. & Gibbs 1997, Children and War: Understanding Psychological Distress in Cambodia,
Geneva, UN.
Honwana, A. 1997, "Sealing the Past, Facing the Future: Trauma Healing in Mozambique", in Accord
no. 3, London, Conciliation Resources.
Honwana, A. 1998, Okusiakala Ondalo Yokalye, Let us Light the New Fire: local knowledge in the post-
war healing and reintegration of war-affected children in Angola, consultancy report for the
Christian Children's Fund.
Honwana, A. 1999, "Negotiating Post-War Identities: child soldiers in Mozambique and Angola", in
G. Bond & N. Gibson (eds), Contested Terrains and Constructed Categories: contemporary Africa in
focus, New York, Westview Press.
Minter, W. 1994, Apartheid's Contras: an inquiry into the roots of war in Angola and Mozambique,
London, Zed Books.
Nordstrom, C. 1997, A Different Kind of War Story, Philadelphia, University of Pennsylvania Press.
Vines, A. 1991, Renamo: Terrorism in Mozambique, London, Center for Southern African Studies,
University of York and Indiana University Press.
Moulines, E. and Charpentier, F. 1990, "Pitch-synchronous waveform processing techniques for
text-to-speech synthesis using diphones", Speech Communication, 9 (5-6): 453-467.
Sundberg, J. 1987, The Science of the Singing Voice, DeKalb, Northern Illinois University Press.
Ternström, S. 1989, Acoustical Aspects of Choir Singing, Stockholm, Royal Institute of Technology.
The End

Copyright © 2009 Dimitri Voudouris. All rights reserved


ANAMNHΣΙΣ

ΜΕΡΟΣ Α

ΜΕΡΟΣ Β

Macrophages
Microphages

ΜΕΡΟΣ Γ

1
Composition / Animation
Schematic scenic representation

by

Dimitri Voudouris
[1961-]

2007- 2008
for

Birds
3 Actors
Audience
24 Trumpets
Paintball Guns
8 Microphones
3 Megaphones
50 Piccolo flutes
Sound Projection
Triggered lights
3 Inflatable balls with beads
Computer assisted music processing
3 Transparent screens with projectors
16 Dancers some on roller-skates and stalls
20 Children barring banners and remote control toys
Mixed choir [split in 3 groups] with short-wave receivers

The procession takes place in an abandoned factory

2
INDEX Page

Introduction 4
Innate immunity 5
Acquired immunity 5
Antigens 5
Specific attributes to humoral immunity-Antibodies 6
Mechanism of action of antibodies 7
Agglutination 7
Precipitation 7
Neutralisation 7
Lysis 7
The complement system for antibody action 8
Lysis 8
Opsonization and phagocytosis 8
Chemotaxis 8
Agglutination 8
Opsonization 8
Neutralisation of viruses 8
Inflammatory effects 8
Activation of the anaphylactic system by antibodies 8
Histamine 8
Slow-reacting substance of anaphylaxis 8
Chemotaxic factor 8
Lysosomal factors 9
Specific attributes of cellular immunity 9
Mechanism of action of sensitized Lymphocytes 9
Direct destruction of invader 9
Indirect destruction of invader 10
Release of transfer factor 10
Attraction and Activation of Macrophages 10
Blood Brain Barrier 11
Physiology 11
The Factory 12
MEPOΣ Α 15

3
Introduction
--------------------------------------------------------------------------------------------

Inside the body there is an amazing protection mechanism called the immune system. It is
designed to defend against millions of bacteria, microbes, viruses, toxins and parasites that
would love to invade your body. To understand the power of the immune system, all that you
have to do is look at what happens to anything once it dies.

When something dies, its immune system (along with everything else) shuts down. In a matter
of hours, the body is invaded by all sorts of bacteria, microbes, parasites... None of these
things are able to get in when your immune system is working, but the moment your immune
system stops the door is wide open. Once you die it only takes a few weeks for these
organisms to completely dismantle your body and carry it away, until all that's left is a
skeleton. Obviously your immune system is doing something amazing to keep all of that
dismantling from happening when you are alive.

4
Innate Immunity
The human body has the ability to resist almost all types of organisms or toxins that tend to
damage the tissues and organs. This capacity is called immunity. Much of the immunity is
caused by a special immune system that forms antibodies and sensitized lymphocytes that
attack and destroy the specific organisms or toxins. This type of immunity is called acquired
immunity. However, an additional portion of the immunity results from general processes
rather than from processes directed at specific disease organisms. This is called innate
immunity. It includes the following:

• Phagocytosis of bacteria and other invaders by white blood cells and


reticuloendothelial cells.
• Destruction of organisms swallowed into the stomach by the acid secretions of the
stomach and by the digestive enzymes.
• Resistance of the skin to invasion by organisms.
• Presence in the blood of special chemical compounds that attach to foreign organisms
or toxins and destroy them.

This innate immunity makes the human body partially or completely resistant to some
paralytic virus diseases of animals, hog cholera, cattle plague, and distemper.

Acquired Immunity
In addition to the innate immunity, the human body also has the ability to develop extremely
powerful specific immunity against individual invading agents such as lethal bacteria, viruses,
toxins and even foreign tissues from other animals. This is called acquired immunity.
This system of acquired immunity is important as a protection against invading organisms to
which the body does not have innate immunity. The body does not block the invasion upon
first exposure by the invader. However, within a few days to a few weeks after exposure, the
special immune system develops extremely powerful resistance to the invader. The resistance
is highly specific for that particular invader and not for others.
Two basic types of acquired immunity are:

• Humoral immunity
• Cellular immunity

Antigens
Each toxin or each type of organism contains one or more specific chemical compounds in its
make-up that are different from all other compounds. In general, these are proteins, large
polysaccharides, or large lipoprotein complexes, and it is they that cause the acquired
immunity. These substances are called antigens. Essentially all toxins secreted by bacteria are
also proteins, large polysaccharides, or mucopolysaccharides, and they are highly antigenic.
For a substance to be antigenic it usually must have a high molecular weight, 8,000 or
greater. Furthermore, antigenicity usually depends upon regularly recurring prosthetic radicals on the
surface of the large molecule, which explains why proteins and polysaccharides are almost
always antigenic, for they both have this type of stereochemical characteristic.

5
Specific attributes of Humoral Immunity-The Antibodies
Antibodies are formed in the plasma cells of the lymph nodes at a rapid rate of 2000 molecules
per second for each cell. The antibodies are secreted into the lymph and are carried to
the circulating blood.
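At the stated rate, the daily output of a single plasma cell is easy to estimate; only the 2000-molecules-per-second figure comes from the text above, the rest is plain arithmetic:

```python
rate_per_second = 2000            # antibody molecules per plasma cell (from the text)
seconds_per_day = 60 * 60 * 24    # 86,400 seconds in a day

per_day = rate_per_second * seconds_per_day
print(f"{per_day:,} antibody molecules per cell per day")   # -> 172,800,000
```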

6
Mechanism of action of antibodies

Direct action of antibodies on invading agents

• Agglutination

In which multiple antigenic agents are bound together in a clump.

• Precipitation

In which the complex of antigen and antibody becomes insoluble and precipitates.

• Neutralisation

In which the antibodies cover the toxic sites of the antigenic agent.

• Lysis

In which some very potent antibodies are capable of directly attacking membranes of cellular
agents and thereby causing rupture of the cell.

7
The complement system for antibody action

• Lysis

The proteolytic enzymes of the complement system digest portions of the cell membrane,thus
causing rupture of cellular agents such as bacteria or other types of invading cells.

• Opsonization and phagocytosis

The complement enzymes attack the surfaces of bacteria and other antigens, making these
highly susceptible to phagocytosis by neutrophils and tissue macrophages. This process is
called opsonization. It often increases the number of bacteria that can be destroyed many hundredfold.

• Chemotaxis

One or more of the complement products causes chemotaxis of neutrophils and


macrophages,thus greatly enhancing the number of these phagocytes in the local region of
the antigenic agent.

• Agglutination

The complement enzymes also change the surfaces of some of the antigenic agents so that
they adhere to each other,thus causing agglutination.

• Neutralization of viruses

The complement enzymes frequently attack the molecular structures of viruses and thereby
render them nonvirulent.

• Inflammatory effects

The complement products elicit a local inflammatory reaction,leading to


hyperemia,coagulation of proteins in the tissues,and other aspects of the inflammation
process,thus preventing movement of the invading agent through the tissues.

Activation of the Anaphylactic system by antibodies

• Histamine

Causes local vasodilation and increased permeability of the capillaries.

• Slow-reacting substance of anaphylaxis

Causes prolonged contraction of certain types of smooth muscle such as bronchi.

• Chemotaxic factor

8
Causes chemotaxis of neutrophils and macrophages into the area of the antigen-antibody
reaction. An eosinophil chemotaxic factor, especially, causes chemotaxis of large numbers of eosinophils
into the area. These play a special role in phagocytizing the products of the antibody-
antigen reactions.

• Lysosomal enzymes

Elicit a local inflammatory reaction.

Specific attributes of Cellular Immunity


Release of sensitized lymphocytes from lymphoid tissue.
Persistence of Cellular Immunity.
Types of organisms resisted by sensitized lymphocytes.

Mechanism of action of sensitized Lymphocytes

The sensitized lymphocyte destroys the invader either directly or indirectly

Direct destruction of invader


The immediate effect is swelling of the sensitized lymphocyte and release of cytotoxic
substances from the lymphocyte to attack the invading cell. These cytotoxic substances are
lysosomal enzymes manufactured in the lymphocyte. However, these direct effects on the invading
cell are weak in comparison to the indirect method.

9
Indirect destruction of invader
When the sensitized lymphocytes combine with their specific antigens, a number of different
substances are released into the surrounding tissue.

• Release of Transfer factor

The sensitized lymphocyte releases a polypeptide substance called the Transfer factor. This
then reacts with other small lymphocytes in the tissues that are of the nonsensitized
type. These in turn take on the same characteristics as the original sensitized
lymphocytes. Thus the Transfer factor recruits additional lymphocytes having the same
capability for causing the same cellular immunity reaction as the original sensitized
lymphocytes. This effect multiplies the effect of the sensitized lymphocytes.

• Attraction and Activation of Macrophages

A second product of the activated sensitized lymphocyte is a macrophage chemotaxic factor
that causes as many as 1000 macrophages to enter the vicinity of the activated sensitized
lymphocyte. A third factor, called migration inhibition factor, then stops the migration of the
macrophages once they come into the vicinity of the activated lymphocyte. A single
lymphocyte can collect as many as 1000 macrophages around it. A fourth substance increases
the phagocytic activity of the macrophages. Thus the macrophages play an important
role in removing the foreign antigenic invader.
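A toy model can make this two-stage amplification concrete. The 1000-macrophages-per-lymphocyte figure comes from the text above; the seed count and the number of extra lymphocytes recruited via the Transfer factor are arbitrary illustrative inputs:

```python
def immune_amplification(seeds, transfer_recruits, macrophages_per_cell=1000):
    """Two-stage amplification sketch: the Transfer factor multiplies the
    sensitized lymphocytes, each of which can then collect macrophages
    around it (up to `macrophages_per_cell`, per the text).

    `transfer_recruits` is the number of extra nonsensitized lymphocytes
    converted per seed cell -- an illustrative assumption, not a figure
    from the text.
    """
    lymphocytes = seeds * (1 + transfer_recruits)
    macrophages = lymphocytes * macrophages_per_cell
    return lymphocytes, macrophages

# 10 original sensitized lymphocytes, each recruiting 4 more:
lymphs, macs = immune_amplification(seeds=10, transfer_recruits=4)
print(lymphs, macs)   # -> 50 50000
```

Even with modest recruitment, the phagocyte count dwarfs the original lymphocyte count, which is the point of the indirect mechanism.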

10
Blood-brain barrier

The blood-brain barrier (BBB) is a membranous structure that acts primarily to protect the
brain from chemicals in the blood, while still allowing essential metabolic function. It is
composed of endothelial cells, which are packed very tightly in brain capillaries. This higher
density restricts passage of substances from the bloodstream much more than endothelial
cells in capillaries elsewhere in the body. Astrocyte cell projections called astrocytic feet
(also known as "glial limitans") surround the endothelial cells of the BBB, providing
biochemical support to those cells. The BBB is distinct from the similar blood-cerebrospinal
fluid barrier, a function of the choroidal cells of the choroid plexus, and from the Blood
retinal barrier, which can be considered a part of the whole (Eyes' retinas are extensions to
CNS, and as such, this can be considered part of the BBB).

Physiology

In the rest of the body, outside the brain, the walls of the capillaries (the smallest of the
blood vessels) are made up of endothelial cells that are fenestrated, meaning they have
small gaps called fenestrations. Soluble chemicals can pass through these gaps, from blood to
tissue or from tissue into blood. In the brain, however, endothelial cells are packed together
more tightly by what are called tight junctions. This makes the blood-brain barrier block
the movement of all molecules except those that cross cell membranes by means of lipid
solubility (such as oxygen, carbon dioxide, ethanol, and steroid hormones) and those that are
admitted by specific transport systems (such as sugars and some amino acids). Substances
with a molecular weight higher than 500 daltons (500 u) generally cannot cross the blood-
brain barrier, while smaller molecules often can. In addition, the endothelial cells metabolize
certain molecules to prevent their entry into the central nervous system. For example, L-
DOPA, the precursor to dopamine, can cross the BBB, whereas dopamine itself cannot; as a
result, L-DOPA rather than dopamine is administered for dopamine deficiencies (e.g.,
Parkinson's disease).
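The permeability rules above (lipid solubility, the roughly 500-dalton cutoff, and specific transport systems) can be summarised as a toy screening heuristic. The function below is purely illustrative — its name, arguments, and sharp threshold are assumptions for the sketch, not a pharmacological model:

```python
def may_cross_bbb(mol_weight_da, lipid_soluble=False, has_transporter=False):
    """Toy screen based on the rules described above.

    Small (< ~500 Da) lipid-soluble molecules can diffuse through the
    endothelial membranes; other molecules may be admitted by a specific
    transport system. An illustration, not a predictive model.
    """
    if has_transporter:  # e.g. sugars, some amino acids, L-DOPA
        return True
    return bool(lipid_soluble and mol_weight_da < 500)

may_cross_bbb(46, lipid_soluble=True)     # ethanol (~46 Da): crosses
may_cross_bbb(153, lipid_soluble=False)   # dopamine (~153 Da): does not
may_cross_bbb(197, has_transporter=True)  # L-DOPA (~197 Da): carried in
```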

In addition to the tight junctions that prevent transport between endothelial cells, there
are two mechanisms that prevent passive diffusion through the cell membranes: glial cells
surrounding the capillaries in the brain pose a secondary hindrance to hydrophilic molecules,
and the low concentration of interstitial proteins in the brain prevents access by hydrophilic
molecules.

The blood-brain barrier protects the brain from the many chemicals flowing in the blood.
However, many bodily functions are controlled by hormones in the blood, and while the
secretion of many hormones is controlled by the brain, these hormones generally do not
penetrate the brain from the blood, which would prevent the brain from directly monitoring
hormone levels. To control the rate of hormone secretion effectively, there exist
specialised sites where neurons can "sample" the composition of the circulating blood. At
these sites the blood-brain barrier is 'leaky'; they include three important
'circumventricular organs': the subfornical organ, the area postrema, and the organum
vasculosum of the lamina terminalis (OVLT).

The blood-brain barrier acts very effectively to protect the brain from many common
infections. Thus, infections of the brain are very rare. However, since antibodies are too large
to cross the blood-brain barrier, infections of the brain which do occur are often very serious
and difficult to treat.

The Factory

The key to a successful operation in the years ahead is recognition of the
world as a global market and of the imperative for systems integration. Changes in the
marketplace, as well as in technology, will continue to have a great impact on operations. The
manufacturing and servicing strategy will be based on an evaluation of organizational needs;
continuous measurement of company strengths in design, manufacturing, marketing, finance,
and human resources; and a persistent reinforcement of a participative management style.

The diagram above indicates that when costs, personnel needs, and industrial management are
properly addressed, there is a positive impact on the immune system, which means that the
factory operates well. The opposite occurs when a negative impact takes place, boosting the
invasive organisms, which results in ailment or death in the human and the downfall of the
factory.

ANAMNΗΣΙΣ is no opera. The reflection is not to be narrated or quoted, only reflected in
contradictions, fragments, superimpositions, expansions and ambiguities in the music. It is for
the ear and the eye that can perceive these aspects: a fine equilibrium is portrayed between
the two sense organs, and there is enough time within and between the different scenes to
allow the individual to see and hear.
We live in an incredibly toxic world, exposed to more deadly chemicals than at any time in
history, as well as to escalating food prices and rising energy costs that have created a crisis
in the world.
Thus, if the situation in the factory and in the human body is not managed properly, this can
result in the downfall of both systems, and vice versa. This study portrays an ongoing
relationship between the non-functional factory building and what goes on in the human body.

ΜΕΡΟΣ Α (Part A)
------------------------------------------------------------------------------------------------------------

The sound projection is over 4 speakers.

The scene starts in dark surroundings; the lights revolve, searching the interior perimeter
of the factory and focusing indirectly on movement. Movement is noticed, but it is as if there
is not enough light to illuminate it. Dancers, forever shy, covering their faces with their naked
bodies, climb the pipes and scaffolding of the space like scavengers, scavenging the internal
structure of the factory and competing with one another. Their presence is felt as shifting
sounds are heard from their bodies while they threaten the mere existence of the
deteriorating structure.
The children are repairing the affected surroundings, just as, within the body, leukocytes,
eosinophils, neutrophils, and the clotting mechanism are constantly involved in the repair
work that takes place. The trumpeters are connected to the pipes and walls of the
factory via their instruments, without any movement.

The music composed for this section used the flute sounds of FM7, Native Instruments'
software emulation of the Yamaha DX7, processed through a modular synthesizer built in
Reaktor 3. The work was designed for four-speaker diffusion.
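FM7 models the Yamaha DX7's frequency-modulation synthesis, in which a carrier oscillator's phase is modulated by another oscillator. As a rough illustration only — the parameter values below are assumptions, not the actual flute patch used in the piece — a minimal two-operator FM tone can be sketched in Python:

```python
import numpy as np

def fm_tone(freq=440.0, ratio=1.0, index=0.8, dur=1.0, sr=44100):
    """Two-operator FM: a sine carrier phase-modulated by one sine operator.

    A low modulation index and a 1:1 carrier-to-modulator frequency ratio
    give a soft, flute-like spectrum (illustrative values, not the FM7 patch).
    """
    t = np.arange(int(dur * sr)) / sr
    modulator = index * np.sin(2 * np.pi * freq * ratio * t)
    envelope = np.exp(-1.5 * t)  # simple decay, a stand-in for a real envelope
    return envelope * np.sin(2 * np.pi * freq * t + modulator)

tone = fm_tone(freq=523.25, dur=2.0)  # a C5 tone, 2 seconds at 44.1 kHz
```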
The work was mapped according to the movements of the dancers in the factory; they chose
to move at angles that allowed the music to develop an amoeboid motion.
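One common way to move a sound around a four-speaker ring like the one described above is equal-power pairwise panning between adjacent speakers. The layout angles and function names below are illustrative assumptions, not the composer's actual diffusion scheme:

```python
import numpy as np

# Four speakers on a ring at these azimuths (degrees) -- an assumed layout.
SPEAKER_AZIMUTHS = np.array([45.0, 135.0, 225.0, 315.0])

def quad_gains(azimuth_deg):
    """Equal-power gains for a source at the given azimuth.

    The source is crossfaded between the two adjacent speakers, so the
    summed power of the four gains is always 1.
    """
    az = azimuth_deg % 360.0
    gains = np.zeros(4)
    for i in range(4):
        span = 90.0  # angular distance between adjacent speakers
        offset = (az - SPEAKER_AZIMUTHS[i]) % 360.0
        if offset <= span:
            frac = offset / span  # 0 at speaker i, 1 at speaker i+1
            gains[i] = np.cos(frac * np.pi / 2)
            gains[(i + 1) % 4] = np.sin(frac * np.pi / 2)
            break
    return gains

# A slowly rotating source traces a circle through all four speakers:
trajectory = [quad_gains(a) for a in range(0, 360, 30)]
```

Sampling the azimuth along the dancers' paths instead of a circle would give the kind of irregular, amoeboid trajectory the text describes.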
It is important to note that, while working on the sounds and later refining them, I always
listened to the end result at high volume, and through this I could constantly test the
incisiveness of the synthesizer sounds.

Copyright © 2008 Dimitri Voudouris. All rights reserved

