Garth Paine
Activated Space, Melbourne, Australia
garth@activatedspace.com.au | http://www.activatedspace.com.au
TITLE: Interactive sound works in public exhibition spaces, an artist's perspective
ABSTRACT: This paper explores the research and responsive environment installation works developed over the last five years by the Australian composer/installation artist Garth Paine. It addresses the area of responsive environments within the scope of an artist interested in using interactive sound to encourage a consideration of our relationship(s) to our environment. The issue of public interpretation of the artworks is discussed, and in so doing the idea of a performance practice for interactive sound works is explored. As an artist my interest lies in reflecting upon the human condition.
The digital domain facilitates the subversion, expansion, dissection, and general exploration of "real world" material. It allows the artist to search the hitherto hidden aspects of the material for new expressions. Naturally occurring sounds are often made up of extremely complex combinations of partials, which vary over time in elaborate ways. This complexity of temporal structure is what makes the sounds interesting and worth exploring.
Trevor Wishart (On Sonic Art, 1996, p. 7) points out that contemporary composition, especially within the genre of electronic music, goes well beyond "a finite lattice and the related idea that permutational procedures are a valid way to proceed . . .". Wishart proposes a "musical methodology developed for dealing with a continuum using the concept of transformation".
It is exactly this process of evolution from the pointillistic concepts of earlier approaches to sound and interactivity that is at the core of my work. I am interested in finding interactive processes and approaches that reflect the continual variation of structures, and the myriad interdependencies that alter and react to small variations elsewhere in any complex stream of interactive relationships.
NOTES:
Interaction is a widely abused term in computer mediated art.
The Oxford dictionary describes interaction
as follows:
inter-, pref. Between, among, mutually, reciprocally.
interact v.i., act reciprocally or on each other
interaction n., blend with each other
The Collins defines them as follows:
interact vb., to act on or in close relation with each other
interaction n., 1. A mutual or reciprocal action or influence 2. Physics, the transfer of energy between elementary particles, between a particle and a field, or between fields.
interactive adj., 1. Allowing or relating to continuous two-way transfer of information between a user and the central point of a communication system, such as a computer or television. 2. (of two or more persons, forces, etc.) acting upon or in close relation with each other; interacting.
The root of interaction is the prefix inter-, described as between, among, mutually, reciprocally. This definition implies that the two parties act upon each other; that they exchange something; that they act upon each other in a way that is reciprocal. The Collins definition of interaction outlines an action that involves reciprocal influence. In the field of physics, it leads us to understand that an exchange of energy takes place.
How then do these definitions translate
into the area of new media arts? Does an exchange of energy occur when one is
viewing a CD-ROM? Probably not. An exchange of information certainly occurs.
The user requests a piece of information, and the computer, through the programming
of the CD-ROM, delivers that information to a screen in such a way that the
user can comprehend it. One could argue that a transfer of energy takes place
when someone is playing a game with a computer that requires a racing-car driving
wheel and gear changer to be used. In this case the user is directly transferring
energy through the interface - by turning the steering wheel, changing gears
and operating the accelerator and brake pedals. The variation in the condition of
the interface is transferred as data to the computer program. The computer then
draws a scenario to the screen, to which the user responds. There is clearly
a causal loop here. However, does the racetrack alter because of the behaviour
of the driver? Is there actually a reciprocal transfer taking place?
If interactivity is predicated on the ability of both parties to change in a way that reflects the developing relationship or discourse between them, then we have to accept that multimedia systems that do not evolve their behaviour in relation to their experience are not interactive; they are simply responsive. In order for the system to represent an interaction,
it must be capable of changing, of evolving. The processes of evolution ensure
continually unexpected outcomes. The outcomes would be based upon the nature
of a response-response relationship where the responses alter in a manner that
reflects the cumulative experience of inter-relationship.
For this to be upheld, each exchange must
be personal. It must reflect the unique qualities of each particular dialogue.
Trevor Wishart discusses the difference between objects of fixed timbre (most
acoustic instruments) and sound objects with a "Dynamic Morphology".
This discussion has important ramifications
for the design of responsive and interactive instruments. It also provides a
profound insight when addressing the design of software-based performance instruments,
prevalent within the "laptop brigade" of electronic music performance.
For the gesture of a performer to be fully
inscribed within a realtime sound output, the sound must fulfill Wishart's concept of Dynamic Morphology. The sound must evolve and change in such a manner
as to correlate with the qualitative development of the gesture, an evolution
of momentary events that is unknown at any point prior to their execution. The
morphology of a particular gesture is unknown even at the beginning of the movement.
The player may change the direction or speed of the movement at any time, and
may alter the position of the limb in both the vertical and horizontal planes
at a rate of change that matches no previous event. This will occur even when
the player is attempting to perform the gesture in an identical way to a previous
movement event. It is only the highly trained dancer, with a spatial awareness developed over years of exacting training, who can reproduce spatial positioning, rate of change, and horizontal and vertical gesture within sufficient bounds
for us to perceive them as repeatable. To the video analysis system, even these
highly trained executions alter in subtle ways.
The model for dynamic morphology is embedded within the concepts of object-oriented programming. A programmer writes a class that has a range of functionality. In sound synthesis terms it may look for a range of variables and perform some operation on an audio input stream, or it may create an audio stream which is fed into a software-based mixer of some kind within the program (patch/instrument). The class will contain some parameters in its description. These parameters must be provided for the class to become extant. For instance the parameters may be the frequency of the oscillator, or the variable that will be mapped to frequency, or the mixer/bus input into which the audio output will be fed. The functionality of the class is the domain of the code written within it.
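The relationship between a class, its parameters, and the extant object can be sketched in a few lines. The following is a minimal Python sketch, not SuperCollider code, and every name in it (Oscillator, render, the bus argument) is illustrative rather than drawn from any actual synthesis system:

```python
import math

# Illustrative sketch: a class in the sense described above. Its
# parameters (frequency, mixer bus) must be supplied before an
# instance -- the extant object -- can exist.
class Oscillator:
    """Blueprint for a sine oscillator feeding a mixer/bus input."""

    def __init__(self, frequency, bus):
        self.frequency = frequency  # Hz, a parameter of the class
        self.bus = bus              # mixer/bus input receiving the output
        self.phase = 0.0

    def render(self, n, sample_rate=44100):
        """Generate n samples -- a stand-in for the audio stream."""
        out = []
        for _ in range(n):
            out.append(math.sin(self.phase))
            self.phase += 2 * math.pi * self.frequency / sample_rate
        return out

osc = Oscillator(frequency=440.0, bus=0)  # the class becomes extant
samples = osc.render(64)                  # the object performs its function
```

The functionality (here, sine generation) is the domain of the code written within the class; the parameters merely configure each instance.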
A class is essentially a blueprint for an
object. Once extant it becomes an object that performs a function(s), as described
in the class design, until it is no longer required. At this stage the object
may be disposed of, and a garbage collection mechanism removes the object from
the computer's memory, thereby providing memory allocation for new object creation.
The beauty of the object-oriented approach is that units of functionality can
be dynamically created and disposed of in such a way as to fulfill the momentary
requirements of the global program. An analogy is that we may have an orchestra
whereby we use instruments when required and dispose of them when no longer
required. To some extent that is the case with a symphony orchestra. A composer
will orchestrate a work to use particular instruments to create desired timbres
at points throughout the work. However, the capacity of the orchestra is essentially
fixed. If we require four flutes, as Mahler does on occasion, then the flutes
are seated in the orchestra at the beginning of the work and remain until the
end. What would happen if a work were being orchestrated in accordance with
a responsive/interactive schema? Such a situation may require 20 flutes at one
point, and no violins! A software infrastructure can allow for such occurrences
within the limits of the processing power of the host CPU, and the limitations
of the software design.
One approach therefore to the design of
interactive instruments would be to adopt a structure that created dynamic instruments
as a response to interactive input. In this approach a dynamic morphology would extend beyond filtering or otherwise altering a synthesised output from one algorithm, or even a collection of algorithms (which, no matter how the algorithm is designed, will have a finite range of aesthetic and timbral variation), to a dynamically forming orchestration. In such a dynamic orchestration,
a new sound object would be created when the morphological scope of the current
algorithm was reaching its limits. The new algorithm may exist only as long
as it is required, and may be augmented by other dynamically created instruments,
before being disposed of until the interactive input requires it again.
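The create-and-dispose cycle described above can be sketched as follows. This is a hedged Python illustration, not a working synthesis engine; the dict-based "instruments" and all names are stand-ins:

```python
# Illustrative sketch: a dynamically forming orchestration in which
# instrument objects exist only while the interactive input requires them.
class DynamicOrchestra:
    def __init__(self):
        self.active = []  # instrument objects currently in existence

    def create(self, name):
        inst = {"name": name}  # stand-in for a synthesis object
        self.active.append(inst)
        return inst

    def dispose(self, inst):
        # Dropping the reference leaves the object to the garbage
        # collector, freeing memory for new object creation.
        self.active.remove(inst)

orch = DynamicOrchestra()
flutes = [orch.create("flute") for _ in range(20)]  # 20 flutes, no violins
for f in flutes[5:]:
    orch.dispose(f)  # the demand subsides; only 5 flutes remain
```

The point of the design is that the orchestra's capacity is not fixed at the start of the work: the population of objects tracks the momentary requirements of the interaction.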
SuperCollider, although it is an object-oriented language, does not allow for the dynamic creation and disposal of synthesis processes. The reason for this is that the synthesis process must be connected to the base synth (the foundation synthesis engine and audio output structure) at the time the program is instantiated. A new synthesis process cannot be added after this point. The approach taken in MAP2 is conditioned by this limitation
in that there are multiple synthesis algorithms available within each of the
four horizontal zones; each zone being an independent instrument. The output
of those algorithms is controlled by the position and dynamic threshold of gesture
within the instrument's zone. All the algorithms run at once, from the beginning.
They may be audible or not as the input demands, but the processes are all running
all the time. Clearly this is not an efficient use of resources, and is not
a good way to encourage dynamic morphology beyond the initial capabilities of
the instruments. An analogy might be an orchestra, playing a wind quintet, but
requiring all the other members of the orchestra to remain on stage, and worse,
to attend all the quintet rehearsals and sit silently in their chairs in preparation
for the work that follows.
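The all-running-at-once arrangement just described can be caricatured in a few lines of Python. This is purely illustrative (MAP2 itself is not written this way); the Zone class, the algorithm names, and the unit "rendering" cost are all assumptions made for the sketch:

```python
# Illustrative sketch: each zone's algorithms always render, and gesture
# within the zone only gates whether the result is audible.
class Zone:
    def __init__(self, algorithms):
        self.algorithms = algorithms  # all run all the time
        self.gesture_level = 0.0      # dynamic threshold of gesture

    def mix(self):
        # Every algorithm pays its processing cost on every call...
        rendered = [1.0 for _ in self.algorithms]  # stand-in for audio
        # ...but only the gesture decides whether any of it is audible.
        return sum(rendered) * self.gesture_level

zones = [Zone(["granular", "filter", "oscillator"]) for _ in range(4)]
zones[0].gesture_level = 0.5       # movement detected in one zone only
levels = [z.mix() for z in zones]  # the silent zones still did the work
```

The inefficiency is visible in the last line: three of the four zones contribute nothing audible, yet their algorithms executed regardless, exactly like the orchestra members sitting silently through the quintet.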
SuperCollider, the realtime synthesis language for the Macintosh, adopts the predominant paradigm of composition outlined by Wishart (as do all the current software synthesis applications). It does
so in the sense that it allows the composer/programmer to create the resources
the composer expects to need for an individual composition. These resources
are created at the beginning of the work. They contain a set group of instruments,
with an inherently limited morphological scope. This is a limitation that has
no place in interactive electronic music performance. Kyma, from Symbolic Sound,
which is the programming language used to control the Capybara, a hardware-based
realtime synthesis engine, allows for dynamic restructuring through the use
of Smalltalk scripts to manage "sounds" (a "sound" in Kyma
is any algorithm that contributes to the creation of sound). In this case the
"sounds" are treated as objects which can be dynamically created and
disposed of (an approach I have yet to explore).
There are clearly some technical problems
with dynamically creating synthesis processes. One could easily comprehend a
situation where there were too many instruments created for the computer to
play in realtime. A mechanism for CPU usage management would be required. If, as I have mentioned elsewhere, an interaction is an occurrence during which both parties change, and the outcome is unknown at the point of departure, and as such is unexpected, then a dynamic synthesis infrastructure is the only way to make the engagement truly interactive.
Such a system may generate an orchestra of 300 flutes, or 500 bass drums! It may therefore be preferable to provide
some constraints or preconditions. If the most recent dynamic allocation requires
a new flute instrument, a precondition may be that there are no more than 50
flute objects in existence already. In this way a certain control may be placed
on the scope of the system, but without unreasonably limiting the potential
for variation in momentary responses.
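Such a precondition is straightforward to sketch. Again this is illustrative Python under assumed names, with the limit of 50 flutes taken from the example above:

```python
from collections import Counter

# Illustrative sketch: a precondition caps how many objects of a given
# instrument type may exist at once, without otherwise limiting variation.
class ConstrainedOrchestra:
    def __init__(self, limits):
        self.limits = limits    # e.g. {"flute": 50}
        self.counts = Counter() # objects of each type currently extant

    def create(self, name):
        if self.counts[name] >= self.limits.get(name, float("inf")):
            return None         # precondition failed: the request is refused
        self.counts[name] += 1
        return {"name": name}   # stand-in for a new synthesis object

orch = ConstrainedOrchestra({"flute": 50})
requests = [orch.create("flute") for _ in range(60)]
made = [r for r in requests if r is not None]  # only the first 50 succeed
```

Instrument types with no stated limit remain unconstrained, so the cap controls the scope of the system without foreclosing unexpected momentary responses elsewhere.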
Work in this direction promises to reward the researcher with a more adequately interactive system; one in which both parties may change in response to a historical pattern of response.
ABOUT: Garth Paine is a freelance composer, sound designer and installation artist. He has been commissioned extensively in Australia, the United Kingdom and Germany, producing original compositions and sound designs for over 30 film, theatre, dance and installation works in the last ten years. Garth was awarded the RMIT New Media Arts Fellowship by the Australia Council for the Arts for 2000, to assist him in the development of realtime interactive sound environments.
In recent years Garth's work has become increasingly involved with the design of sound and interactive exhibitions in museums and galleries. These include the Melbourne Exhibition, the East Supper Space and the Immigration Museum for the Museum of Victoria, the Australian Jewish Museum, the Performing Arts Museum, and the Eureka Stockade Centre.