Garth Paine (January 2001)

garth@activatedspace.com

http://www.activatedspace.com.au

 

Interactive sound works in public exhibition spaces: an artist's perspective

 

Abstract

This paper explores the research and responsive environment installation works developed over the last five years by the Australian composer/installation artist Garth Paine. It addresses the area of responsive environments from the perspective of an artist interested in using interactive sound to encourage a consideration of our relationship to our environment. The issue of public interpretation of the artworks is discussed, and in so doing the idea of a performance practice for interactive sound works is explored.

Context

The focus of my research over the last two years has been in the area of responsive environments. The area of interactive sound systems can be loosely categorised into three differing approaches:

The differences in intent are marked.

My work has been based in the area of responsive environments, because my research goals have been to explore forms of interaction that contain as few predetermined factors as possible. I have focused on systems that directly reflect the uniqueness of individual participants' input.

  Artistic Intention

As an artist my interest lies in reflecting upon the human condition. There are many aspects of our lives, and many facets of international relationships between differing cultures, governments and philosophical persuasions that we simply do not take the time to consider in our day to day lives. Addressing these issues on some level however is vitally important to the continuing co-existence of the many disparate parties that form the global community, and integral to the further development of understanding and insight into the relevance of differences within the global community.

I see art as the perfect platform for the consideration and expression of these issues. Visual artists, writers, dancers and musicians have explored these issues for centuries with profound results. One only has to look at the prestige attached to the great cultural institutions of the world to see that the product of this artistic endeavour has communal value. One might argue that the value attached to these institutions is purely financial, that the value placed upon the works they contain is a product of contrary economic principles - of course the economics of historical value play a part in their financial value. It would, however, be far too cynical to attribute their communal worth solely to the financial marketplace. If that were the case, the works would be housed in private environments for private enjoyment, not in public institutions, accessible to all.

If we agree that these works have a communal worth, we in turn agree that artistic endeavour is of value to society. As I have pointed out, this case is already proven for the traditional artforms: writing, music, dance and the visual arts (painting and sculpture).

One of the interesting questions is:

The explosion of computer-based technologies in the last forty years has resulted in many new forms of expression. To some extent, the reduction of all material to a binary base of zeros and ones dissolves the borders between the artforms. In fact, a software algorithm that is used to process sound may also be applied to visual material, although with dramatically different outcomes.

This reduction of all materials to binary soup has both advantages and disadvantages:

The advantage is that the synthesis of aural and visual events is possible. The artist can write an algorithm that produces a sequence of zeros and ones generating an outcome descriptive of something that occurs in the natural world, or, for that matter, something that has never occurred, and may never occur, in the natural world.

For me, though, the excitement of this ability to turn anything into a digital soup is the potential that lies in the treatment of material gathered in the natural world. The digital domain facilitates the subversion, expansion, dissection, and general exploration of the "real world" material. It allows the artist to search the hitherto hidden aspects of the material for new expressions. A sound, for instance, has an outer face that we may be familiar with. However, if one starts to dissect the sound, it exhibits many layers of interacting vibrations that can be teased out. Once the artist has control of these individual aspects of the sound's character, it becomes possible to accentuate the hidden characteristics, or to peel away the outer facade like an onion skin, thereby revealing finer and finer nuances.

The computer-based arts have been stymied by the complexity of the programming required to generate even the most simplistic, and subsequently relatively boring, image or sound. Nature in its wisdom is infinitely complex and variable. Naturally occurring sounds are often made up of extremely complex combinations of partials which vary over time in elaborate ways. These variations are determined by the many environmental factors present in any momentary event.
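The layered structure of such sounds can be sketched in a few lines. The following Python fragment is a minimal illustration, not any installation's actual synthesis code: it sums three sinusoidal partials, each with its own independently evolving amplitude envelope, the kind of inner layering described above as being "teased out" of a natural sound. All frequencies and decay rates are invented for the example.

```python
import math

SR = 8000            # sample rate in Hz -- deliberately low, for a quick sketch
DUR = 0.5            # duration in seconds
N = int(SR * DUR)

def partial(freq, decay):
    """One sinusoidal partial with its own exponential amplitude decay."""
    return [math.exp(-decay * n / SR) * math.sin(2 * math.pi * freq * n / SR)
            for n in range(N)]

# Sum three partials, each decaying at its own rate; a real natural sound
# would contain many more, with far more elaborate envelopes.
p1, p2, p3 = partial(220.0, 3.0), partial(556.0, 5.0), partial(1310.0, 9.0)
sound = [a + b + c for a, b, c in zip(p1, p2, p3)]

peak = max(abs(s) for s in sound)
sound = [s / peak for s in sound]   # normalise to the range -1..1
```

Because each envelope is independent, the balance between the partials shifts continuously over the sound's duration, which is exactly the property that makes such material rewarding to dissect.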

Fortunately, technological developments are currently so rapid that the speed of desktop computers has accelerated exponentially - the Apple Macintosh G4 presents specifications that only a few years ago were the domain of the supercomputer. The Apple Macintosh G4 and the Symbolic Sound Capybara 320 systems both provide the ability to generate audio in realtime that reflects the complexity inherent in nature.

This relatively affordable computing power has allowed artists like myself to move from off-line image and sound generation to realtime synthesis. Whilst this may not appear overly exciting, it has precipitated a much more profound exploration of the genre of interactivity.

Real time computation has allowed interactive arts to create realtime causal feedback loops.

The exploration of the cybernetic paradigm of feedback loops has been the focus of my installation works Moments of a Quiet Mind (1996), Ghost in the Machine (1997), and with the development of realtime synthesis engines, in my recent works MAP1 (1998) and MAP2 (1999/2000) and REEDS (2000).

MAP1 is an interactive sound installation commissioned in 1998 by the Next Wave Festival, Melbourne, Australia. MAP1 uses realtime granular synthesis of live audio input as the interactive sound response. The sound is gathered using a microphone in the installation space. The granulation process (generated in SuperCollider on a Macintosh computer) is controlled by gesture analysis of people within the installation. The gesture analysis is carried out using the Very Nervous System (VNS - developed by David Rokeby), and a small black and white video camera mounted in the roof of the gallery. A MAX patch converts the variations in light intensity per pixel per video frame reported by the VNS into MIDI Continuous Controllers (MCCs). These MCC values are sent to the SuperCollider patch using the OMS IAC bus.
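The sensing-to-control chain can be sketched as follows. This Python fragment is an illustrative reduction, not the actual VNS or MAX implementation: it treats a video frame as a flat list of pixel intensities, measures the total change between successive frames, and scales that activity into a 0-127 MIDI Continuous Controller value. The frame size and `max_activity` calibration are invented for the example.

```python
def frame_activity(prev, curr):
    """Sum of per-pixel light-intensity change between two video frames
    (frames here are flat lists of 0-255 grey levels)."""
    return sum(abs(a - b) for a, b in zip(prev, curr))

def to_midi_cc(activity, max_activity):
    """Scale an activity reading into a MIDI Continuous Controller value,
    clamped to the legal 0-127 range."""
    value = int(127 * activity / max_activity)
    return max(0, min(127, value))

# Two tiny 4-pixel "frames": a participant moves, so two pixels change.
prev = [10, 10, 200, 200]
curr = [10, 90, 120, 200]
cc = to_midi_cc(frame_activity(prev, curr), max_activity=4 * 255)
```

In the installation a stream of such controller values, one per sensed region per frame, would then drive the granulation parameters in the SuperCollider patch.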

The granulation process uses a cycling audio buffer as its source. The audio buffer is overwritten when sounds made by participants exceed a certain threshold, and continues to be overwritten for as long as the sound remains above that threshold. This provides the opportunity to work with sounds entered by other participants (past or present), or to enter one's own sound sources.
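A minimal sketch of this threshold-gated cycling buffer, in Python rather than SuperCollider and with invented buffer length and threshold values, might look like this: loud input overwrites the buffer at a wrapping write position, quiet input leaves the previous contents intact, and grains are later read from arbitrary positions for resynthesis.

```python
import random

BUF_LEN = 16000                  # cycling audio buffer: 2 seconds at 8 kHz
THRESHOLD = 0.2                  # amplitude above which recording occurs

buffer = [0.0] * BUF_LEN
write_pos = 0

def record(sample):
    """Overwrite the cycling buffer only while the input exceeds the
    threshold; the write position wraps around at the buffer's end."""
    global write_pos
    if abs(sample) > THRESHOLD:
        buffer[write_pos] = sample
        write_pos = (write_pos + 1) % BUF_LEN

def grain(start, length):
    """Read one grain out of the buffer, wrapping at the end -- the raw
    material for the granulation process."""
    return [buffer[(start + i) % BUF_LEN] for i in range(length)]

# Quiet input is ignored; loud input is captured for later granulation.
for s in [0.05, 0.5, -0.6, 0.1, 0.8]:
    record(s)
g = grain(random.randrange(BUF_LEN), 64)
```

Because quiet passages never overwrite the buffer, sounds left by earlier participants persist until someone exceeds the threshold again, which is what allows past and present contributions to mingle.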

MAP1 therefore provided an environment in which the sounds made within it were resynthesised in response to the quality of the movement and behaviour patterns sensed by the system. The possibility for unique system responses was multiplied by the gathering of individuals' soundings, and by the analysis of individual gestural motion as the source of the resynthesis. In these ways MAP1 provided a tight relationship between the input of each participant and the system response. No pre-made material was contained in this installation.

Pierre Boulez describes composition as a selection of notes derived from a finite predefined set.

Trevor Wishart (On Sonic Art) points out that contemporary composition, especially within the genre of electronic music, goes well beyond "a finite lattice and the related idea that permutational procedures are a valid way to proceed . . . ". Wishart proposes a "musical methodology developed for dealing with a continuum using the concept of transformation".

The concept of a stream of constantly evolving sound is directly supported by the use of realtime sound synthesis. The ever evolving - sometimes audible, sometimes not - processes of data-driven art follow the Wishart approach. The Boulez approach is more closely aligned with the commercially prevalent paradigm of interactivity as a response to a defined challenge with a prespecified finite outcome, such as the triggering of existent sound files.

Realtime synthesis provides a subtle but profound alternative. The use within the synthesis instrument of variables controlled directly by movement gestures provides a way of not simply creating a personalised mix of existent sound samples - much like a DJ - but of creating a completely unique sound stream. The temporal form as well as the pitch/timbre and "orchestration" of the score are created in realtime by the user. In my opinion, this freedom of response creates an individualised outcome that so tightly reflects the actions of the user as to be both qualitatively and quantitatively superior to an installation utilising pre-made sound sample events.

An installation using existent sound samples can be made to reflect user input by varying the playback polyphony, sample choice or small amounts of pitch bend. These relatively coarse reflections of control input do not capture the small intricacies of movement in as symbiotic a manner as realtime timbral, envelope or modulation variations.

MAP2 features the concept of stream-driven interactivity within an exploration of pure synthesis. MAP2 was commissioned by the Staatliches Institut für Musikforschung (SIM), Berlin, and was exhibited at the Musical Instrument Museum, Berlin (Dec 1999 - Jan 2000). MAP2 was developed in collaboration with Dr Ioannis Zannos.

MAP2 is based on a video sensing approach (using the VNS) that divides both horizontal and vertical planes into numerous independent fields. The horizontal plane is divided into four independent zones, each consisting of thirty-two fields, reflecting four different synthesis instruments, active at different thresholds of activity and played using the positional information provided. The vertical plane is defined as a row of interactive fields at a little over head height. These vertical fields control the playing of a physically-modelled plucked string sound.

The performance of MAP2 is controlled by data gathered using the VNS. The horizontal space in MAP2 was divided into four sections to enable participants to determine their own distinct input when playing the installation with others. This development was a response to feedback from users of MAP1, who sought to clearly identify their own contribution to the interactive sound environment. Each of the four quadrants of the MAP2 installation was totally independent, being controlled by a separate data stream and addressing a separate synthesis process.
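The spatial partitioning can be illustrated with a small Python function. The frame dimensions and the 8 x 4 arrangement of the thirty-two fields within each quadrant are assumptions made for the sketch; only the four-quadrant, thirty-two-fields-per-zone structure comes from the description above.

```python
def zone_and_field(x, y, width=320, height=240):
    """Map a sensed position in the camera frame to one of four independent
    horizontal zones (quadrants) and a field index (0-31) within that zone.
    Frame size and field layout are illustrative, not the actual values."""
    quadrant = (1 if x >= width / 2 else 0) + (2 if y >= height / 2 else 0)
    # Subdivide each quadrant into an 8 x 4 grid of fields (8 * 4 = 32).
    local_x = x % (width / 2)
    local_y = y % (height / 2)
    col = min(7, int(local_x / (width / 2) * 8))
    row = min(3, int(local_y / (height / 2) * 4))
    return quadrant, row * 8 + col

# Each sensed position resolves to an independent (zone, field) address,
# so simultaneous participants in different quadrants never collide.
corner_a = zone_and_field(0, 0)      # top-left of the frame
corner_b = zone_and_field(319, 239)  # bottom-right of the frame
```

Routing each quadrant's field data to its own synthesis process is what lets up to four participants play the installation simultaneously while still hearing their own contribution.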

The research developments illustrated in MAP2 are

These changes in the structure of the installation provided a manyfold increase in the complexity of possible interactive responses over the MAP1 installation. MAP1 provided a single synthesis process for all interactive input across the entire physical space.

MAP2 marked a progression towards multiple asynchronous feedback loops - one for each division of both the vertical and horizontal sensed spaces, and one division for each of the synthesis instruments available within each threshold range of each horizontal field.

MAP2 also provides more variation in sound aesthetic and "orchestration" by virtue of the greater number of synthesis algorithms available, and the ability of the system to allow four people to interact simultaneously and asynchronously.

Whilst the change in dynamic of sensed movement causes additional synthesis algorithms to become active, the position of the sensed body causes variation in resonant filters placed in the signal path just prior to each of the eight independent audio outputs. The audio signals are sent to the output assigned to the loudspeaker closest to the position of the sensed activity. This technique causes the sound output to track the interactive behaviour through the physical space, two speakers being allocated to each of the quadrants specified in the horizontal video sensing set-up.
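This speaker-tracking logic reduces to a nearest-neighbour choice within the quadrant's speaker pair. The sketch below is illustrative Python, with entirely hypothetical speaker coordinates; only the "two speakers per quadrant, eight outputs, choose the closer one" behaviour is taken from the description above.

```python
# Two loudspeakers per sensed quadrant; positions are hypothetical (x, y)
# coordinates within the installation space.
SPEAKERS = {
    0: [(40, 40), (120, 40)], 1: [(200, 40), (280, 40)],
    2: [(40, 200), (120, 200)], 3: [(200, 200), (280, 200)],
}

def nearest_output(x, y, quadrant):
    """Within the quadrant's speaker pair, pick the output channel (0-7)
    closest to the sensed activity, so the sound follows the participant."""
    pair = SPEAKERS[quadrant]
    dists = [(sx - x) ** 2 + (sy - y) ** 2 for sx, sy in pair]
    return quadrant * 2 + dists.index(min(dists))

channel = nearest_output(50, 50, quadrant=0)   # activity near speaker 0
```

As the participant crosses the quadrant, the chosen channel flips between the pair, which is what makes the sound appear to move with them.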

My most recent work, REEDS, was presented as part of the Melbourne International Festival of the Arts, Melbourne, Australia, in November and December 2000.

REEDS extends the approach to complexity of interaction illustrated in MAP2 by providing eight simultaneous but asynchronous data inputs. REEDS changes the focus of the past works by moving away from the human body as the central controller to the adoption of momentary weather conditions as the data source. REEDS uses two weather stations, both collecting the following data:

  1. Wind Direction
  2. Wind Speed
  3. Solar Radiation
  4. Temperature

The data is transmitted back to a land base, where it is parsed and directed as MCCs to a SuperCollider patch containing six independent instruments (two stereo and four mono), driven by eight variables and creating eight channels of audio.
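The parsing step amounts to scaling each raw sensor reading into the 0-127 controller range expected by the synthesis variables. The Python sketch below illustrates this; the calibration ranges for each quantity are invented for the example, as the installation's actual mapping is not documented here.

```python
# Illustrative calibration ranges for each sensed quantity.
RANGES = {
    "wind_direction":  (0.0, 360.0),    # degrees
    "wind_speed":      (0.0, 30.0),     # metres per second
    "solar_radiation": (0.0, 1200.0),   # watts per square metre
    "temperature":     (-10.0, 45.0),   # degrees Celsius
}

def parse_to_mcc(reading, quantity):
    """Scale a raw weather reading into a 0-127 MIDI Continuous Controller
    value, clamped so out-of-range readings cannot break the synthesis."""
    lo, hi = RANGES[quantity]
    value = int(round((reading - lo) / (hi - lo) * 127))
    return max(0, min(127, value))

# One station's momentary readings become four controller values; two
# stations give the eight asynchronous data inputs described above.
station = {"wind_direction": 180.0, "wind_speed": 6.0,
           "solar_radiation": 600.0, "temperature": 17.5}
mccs = {q: parse_to_mcc(v, q) for q, v in station.items()}
```

Clamping at the parsing stage is a pragmatic choice: a storm gust beyond the calibrated range simply pins a controller at 127 rather than sending an illegal value downstream.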

The use of momentary meteorological data allows the exploration of truly chaotic multifaceted patterns of interaction.

The weather conditions naturally scope the range of response. The chosen synthesis approach and the specifics of the incoming data mapping on to the synthesis variables generally establish the aesthetic.

The use of naturally occurring data patterns is a way of exploring both the concept of interactivity as a stream, and an attempt to discover techniques for more tightly linking the organic and human contexts with interactive systems.

The program notes I wrote for the REEDS project illustrate the central tenet of the work:

A weed, so easily crushed underfoot, can push its way up through a tarmac path, creating a sizeable fracture in what appears to us to be an impervious surface.

One might postulate that if it could see the bigger picture, it might have decided to grow two feet to the left, in the flower bed or the grass. There is clearly an analogy here to our own birth, in which we seem to have little or no say (depending on one's religious bent).

It is exactly this chaotic behaviour of the natural world that informs the Reeds project. Whilst civilisation tries to harness or tame the chaotic in nature, or to explain it in terms of quantum theory and fractals, humanity cannot perceive a truly chaotic state. The forces of nature that dictate the growth of plant life fall into this category. It is not possible for us to predict with certainty the meteorological conditions from day to day, let alone year to year, and certainly not on the micro scale of the weed in the footpath. It is precisely these chaotic variations that are used in Reeds to conduct the sound score - to control and dictate the output of the realtime synthesis process.

Of course, the software design process predetermines the general structure and aesthetic of the sound, but the momentary output is unique. It is unlikely that the combination of wind speed, wind direction, solar radiation, and temperature that occur in this instance will be precisely replicated in any other moment. This chaotic variation is the very source of diversity, which I propose is the structure that creates such beauty in nature.

Reeds uses the relatively static external facade of the sculptural form as a way of representing the paradox observed in organic plant life, where, in contrast to the apparently static external face of the plant, there is the hidden, dynamic activity of photosynthesis and nutrient gathering that keeps the plant alive.

The Reeds pod sculptures, appearing as lifelike presences on the Ornamental Lake at the Royal Botanic Gardens Melbourne, support two remote weather stations. These gather wind speed, wind direction, temperature, and solar radiation data (the meteorological conditions vital to the plant's life processes). The data is transmitted back to a land base, where it is transformed into eight channels of musical sound that are broadcast back out to the Reed pods. These sounds give a voice to the secret activity of the inner life processes of the plant.

The viscous and fluid aesthetic of the sound material is an attempt to capture something of both the dynamism of the life-sustaining processes and the ever-changing, silken thread that is the presence of life, the life force itself. The fact that the sound material is generated on the basis of meteorological conditions is a way of drawing as tightly as possible the bond between the processes of nature and the processors of the Reeds installation. The sound material cannot then be avoided, being the voice of the processes of nature.

Sound/music is in many ways a unique medium, for it is not an external artefact. Sound literally penetrates the body. Furthermore, it is impossible to concretely tie composed sound or music to a representation of anything beyond a communication of emotional states and journeys.

As an artist my interest lies in exploring ways of contextualising digital art processes within the natural organic environment. I have little interest in the purely synthetic, that is the synthesis of sound or images from a wholly academic or theoretical viewpoint. I prefer instead, as is illustrated in the Reeds project, to take a fundamentally organic source as the basis for my sounds. In so doing, I hope that some quality of that organic material will permeate the work and thereby bring the synthetic output at least a small way towards the organic world, and therefore within the human context.

   

Conclusion

It seems appropriate to ask about the value of interactive new media art works. What do we take away from them? How do they enrich our understanding of the world? Do we continue to think about the experience afterwards, thereby developing a deeper appreciation of the ways in which that experience reflects upon our own lives - as one does well after viewing a good film?

I don't pretend that my own work has achieved these outcomes, although, like many other New Media artists, I strive to create work that will facilitate them. I must also say that new media art is nowhere near its zenith. There is much work to be done in developing a language that communicates clearly and is sufficiently varied to accommodate the many individual artists working in the medium.

So I ask myself whether the experience of these works is simply one of mapping the development of the artform, and in turn the evolution of the technologies, or an unbridled expression of artistic intent. I think that we are lucky enough at this point in the development of New Media art to experience both; however, there are still many works that I experience, even at prestigious festivals, that I think communicate little more than a technical achievement.

If new media art is to be taken seriously as an artform that has the capacity to communicate something of the metaphysical, we need to lose the technology - the technology that makes the work possible - the hours, weeks and months of programming, the innovative technical development. These aspects of the work, which are often revered as great achievements, need to be translucent - conspicuous by their absence. The visitor/user should be unaware of, and unconcerned with, the technology creating the experience. They should, however, experience a symbiotic relationship with the work that permits a real sense of freedom of interaction, and an infinite scope for self-expression and exploration. This is my goal, a goal I hope you see illustrated in the descriptions of the works above.

In summary, I feel the most rewarding outcomes in responsive/interactive environments continue to be achieved through the exploration of realtime sound and/or vision generation. The realtime synthesis process reflects the small intricacies of individual interaction. The participant feels directly acknowledged by this direct reflection of individuality, which in turn encourages a deep level of commitment to exploring the installation's potential. Although technology has developed in leaps and bounds in the last decade, I feel computing power is only now sufficient for realtime interactivity. We are living at a time that encourages realtime data-driven sound synthesis through fast computing and excellent software tools.

We must shift our focus from technical achievements to a user driven experience. The technology must become both infinitely variable, and invisible to the end user.

The development of virtual reality technologies has shown a distinct partiality to the visual. In my view, sound is a much more direct and affective stimulus.

If we can make sound more responsive to the intricacies of individual interaction, I am sure we can prove responsive sound environments to be a superior form of immersive experience.

 
