locus sonus > audio in art






 

Locus Sonus is a research group specializing in audio art. It is organized as a postgraduate lab by the art schools of Aix en Provence (ESAA) and Nice (ENSA Villa Arson) in the south of France. We have a partnership with the CNRS sociology lab LAMES in Aix en Provence (who are interested in the way that practices related to new technologies are modifying artistic production and the way the public responds to these modifications), and we are continuing collaborations with CRESSON, the CNRS architecture lab in Grenoble (sonic spaces research centre), the School of the Art Institute of Chicago (SAIC), and other international partners.
Locus Sonus is concerned with the innovative and transdisciplinary nature of audio art forms, some of which are experimented with and evaluated in a lab-type context. An important factor is the collective or multi-user aspect inherent in many emerging audio practices, which necessitates working as a group. Two main themes define this research: audio in its relation to space, and networked audio systems.


The Locustream Project

In the fall of 2005 the lab started work on a group project, with the aim of involving the various members of the group in a way loose enough not to stifle individual creativity while still providing a firm basis for communal experimentation and exploration.
Locus Sonus is inherently nomadic in nature: shared between two institutions separated by several hundred kilometres, we travel regularly to meet and work together in and from different locations.
It was decided to set up some live audio streams, basically open microphones which continuously upload a given soundscape or sound environment to a server, from where it is available anywhere via the web. Our intention was to provide a permanent (and somewhat emblematic) resource to tap into as raw material for our artistic experimentation.
After setting up a first permanent stream (outside Cap15, an artists' studio complex in Marseilles), we started by using the stream in a performance/improvisation mode, with the now-standard laptop and MIDI controller and homemade patches to reinterpret the stream in real time. This proved somewhat problematic, because often nothing in particular would be happening on the stream at the moment we intended to work with it.
more info




A discussion that followed this type of presentation led us to believe that it was necessary to define more precisely the protocol (sound capture / network / local form) that we were employing. One of our problems was the choice of stream placement - should it be chosen in relation to geographical location, sound quality, or some kind of political or social situation? The decision was made to leave this up to other people, a partly practical and partly ideological choice. At this point we tidied up our Pure Data streaming patch so that other people could implement it without too much difficulty, boosted the number of streams which could be accepted simultaneously by our server, and started stripping down our ideas for installations, confident that the worldwide audio art community (with a little help from our friends) would respond to our call, which they did.
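The streamer-side chain is deliberately simple: capture the microphone, encode to Ogg Vorbis, and push the result to a mount point on the lab's streaming server. The sketch below illustrates that chain under stated assumptions - the real open mics run the Pure Data patch mentioned above, the server is assumed to be Icecast-style, and the server address, source password and mount name here are placeholders.

# Minimal sketch of the streamer-side chain (assumes Linux/ALSA and ffmpeg; the
# Icecast URL below is a placeholder, not the actual Locus Sonus server).
import subprocess

ICECAST_URL = "icecast://source:hackme@streams.example.org:8000/mystream.ogg"

subprocess.run([
    "ffmpeg",
    "-f", "alsa", "-i", "default",           # capture from the default sound input
    "-acodec", "libvorbis", "-b:a", "128k",  # encode to Ogg Vorbis at 128 kbit/s
    "-content_type", "application/ogg",
    "-f", "ogg", ICECAST_URL,                # push the stream to the Icecast mount point
])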
Various practices developed within the lab as the project evolved.





Locustream Tuner April 2006


Listening installation
more info
last version (Locus Sonus Roadshow, GMEM, Marseilles, F)


One installation with which we present the streaming project consists of a pair of wires stretched the length of the exhibition space, with a small ball threaded onto them. The position of the ball can be altered by the public, acting like a tuner: an audio promenade where users slide their way through a series of remote audio locations. Multiple loudspeakers enable us to spatialize the sound of the streams, so that each audio stream selected on the wire emanates from a different position in the local space.
In order to make the installation function reliably, we were obliged to incorporate a system allowing us to interrogate our server and update the list of current streams (people go away, use their streaming computer for a concert, a machine crashes...). We also use this list to provide visual feedback, projecting the names of the places the streams are coming from.
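A minimal sketch of those two mechanisms, under stated assumptions: an Icecast2-style server exposing its mounts as JSON at /status-json.xsl (the URL is a placeholder), and a sensor value between 0 and 1 giving the ball's position along the wire.

import json
import urllib.request

STATUS_URL = "http://streams.example.org:8000/status-json.xsl"  # placeholder server

def active_streams():
    """Interrogate the streaming server and return the currently live mounts."""
    with urllib.request.urlopen(STATUS_URL, timeout=5) as resp:
        status = json.load(resp)
    sources = status["icestats"].get("source", [])
    if isinstance(sources, dict):   # Icecast returns a single object when only one mount is live
        sources = [sources]
    return [(s.get("server_name", s["listenurl"]), s["listenurl"]) for s in sources]

def select_stream(position, streams):
    """Map the tuner position (0.0 - 1.0 along the wire) to one of the live streams."""
    if not streams:
        return None
    index = min(int(position * len(streams)), len(streams) - 1)
    return streams[index]

streams = active_streams()
chosen = select_stream(0.42, streams)       # e.g. the ball is 42% of the way along the wire
if chosen:
    name, url = chosen
    print("now listening to", name, "->", url)   # the place name is also what gets projected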





Locustream - SoundMap, Audio Tardis July 2006
Open Mikes
more info : Locustream SoundMap project (07/2006)
access to the soundmap

more info : Locustream Audio Tardis (on-line listening interface) (04/2008)
list of the available streams

At one point it seemed necessary to provide the "streamers" (as we have come to call the musicians and artists who have responded to our call) with a way to access the streams themselves, not only to hear their own stream but also those provided by other people. So we made an animated map which shows the location of all the streams and indicates those which are currently active with a blinking light. By clicking on a chosen location one can listen directly to the Ogg Vorbis stream in a browser.

Community of streamers. Another interesting development arising from involving other people in setting up the microphones is that we have found ourselves with a network of artists, musicians and researchers who are inherently interested in networked audio. This has led to the streams being used in art forms outside the lab itself (SARC in Belfast, Cedric Maridet in Hong Kong, etc.).
Much of our research concerns the emergence of listening practices based on the permanence and the non-spectacular, non-event-based quality of the streams. We have found ourselves creating a sort of variation on Cagean (as in John Cage) listening: importing a remote acoustic environment, in a way which can be chosen by the user, creates a renewed concentration on the local environment itself.
This has led us to reflect on a form which adopts a permanent or semi-permanent situation for presenting the streams publicly, and which involves a relationship between the local and the remote environment.





Locustream Promenade July 2007
Listening installation
more info
prototype (Locus Sonus Roadshow, GMEM, Marseilles, Sept 07 - Symposium Audio Extranautes, Nice, Dec 07 - Portes ouvertes, ESA Aix en Provence, Apr 2008)


Our project, which we call Locustream Promenade, uses parabolic loudspeakers that focus sound into a beam beneath a suspended dish, heard only when the listener passes through it. We have equipped each parabola with a mini computer (actually a hacked WiFi router) and a sound card. Placed within a WiFi network, each dish connects to a specific stream; provided with electricity, it can be placed anywhere.





Locustreambox July/Sept 2007

embedded computer / mini-PC (Linux, Pd, streaming)
more info
prototype Asus router (Locus Sonus Roadshow, GMEM, Marseilles, Sept 07 - Symposium Audio Extranautes, Nice, Dec 07 - Portes ouvertes, ESA Aix en Provence, Apr 2008)
prototype Alix (Workshop Le Fresnoy, Nov 2008)


We are simultaneously developing a "streambox": a mini-PC equipped with a microphone and configured to connect to our streaming server as soon as it is plugged in. These boxes have a wireless connection, use very little electricity, and are silent. Given these improvements, we hope that by sending them to streamers we will be able to ensure that the open mikes function permanently.
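The essential behaviour is a watchdog: wait for the network, start the streaming patch headless, and restart it whenever it dies. A minimal sketch of that logic follows, assuming the box boots straight into this script; the server address and the patch name streambox.pd are hypothetical.

import socket
import subprocess
import time

SERVER = ("streams.example.org", 8000)   # placeholder streaming server address

def network_up():
    """Return True once the streaming server is reachable."""
    try:
        socket.create_connection(SERVER, timeout=5).close()
        return True
    except OSError:
        return False

def main():
    while True:
        if not network_up():
            time.sleep(10)
            continue
        # Run Pd headless; the patch captures the microphone and streams Ogg Vorbis.
        proc = subprocess.Popen(["pd", "-nogui", "streambox.pd"])
        proc.wait()          # if the patch or the connection dies, fall through and retry
        time.sleep(10)

if __name__ == "__main__":
    main()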





Promenade in Paris 2008/2009
audio ambiances & field spatialization
more info (in progress project)


The project involves setting up the parabolas of the Locustream Promenade on the Parvis de La Défense in Paris. For those who are not familiar with this space, it is quite unusual in that it is almost entirely populated by people who work in the offices around the plaza (several thousand people emerge from the subway station in the middle of the area every morning, crisscross in different directions and then return underground in the evening). The parabolic loudspeakers will be distributed throughout the public space, each one relaying a remote ambiance. The public will be able to hear the evolution of the distant streams over weeks or months and at different times of the day. The parabolas will be set up progressively as permanent streams become available. The sociology lab LAMES will be studying the process of installing the streams and its impact on the population frequenting La Défense, and the way in which they react to this listening experience.





Wimicam June 2006
WiFi parabolic mike and cam
more info (in progress project)
prototype parabolic microphone (Cap15, June 2006)
prototype wimicam (DigIt Festival, USA, Aug 2006 - Symposium Audio Sites, ESA Aix en Provence, Nov 2006 - Locus Sonus Roadshow, GMEM, Marseille, Oct 2007)


Parallel to the setting up and development of the Locustream project, the Locus Sonus lab started a simultaneous and complementary line of experimentation related to the capture and amplification of sound in relatively local space: an audio survey, so to speak, of a limited perimeter around the amplification point, the auditor's position. The intention here was to experiment with making the sound flux mobile, as a counterpoint to the Locustream project, where the capture position is fixed. Inverting the principle behind the open microphone proposal by linking the point of capture to the wanderings of a person (a performer), the sound flux becomes a subjective selection and therefore a personal representation of that space, offered by the person manipulating the microphone. The hypothesis is that we (humans) already render our own personal sound space mobile through the use of cellular phones, laptop computers, iPods, etc.; could this very principle become the basis for an artistic practice?





LS in SL: Sound in Virtual Spaces May 2007
Research on remote ambient sound combined with an interest in spatialization techniques and ways to interface with them has led us to take an interest in virtual worlds and 3D environments.

Locus Sonus in Second Life (LS in SL), May 2007/ June 2008,
audio stream from a virtual space

more info
prototype Festival Seconde Nature, Aix en Provence, Oct 2007


We looked at Second Life in terms of a networked community, and we started wondering whether it would be worthwhile to create an extension of our lab there.
Our first action was to set up an interface for listening to the Locus Sonus streams in SL (Brett Ian Balogh, SAIC). We then asked ourselves what the equivalent of an open microphone might be in SL. It became apparent that the possibilities for generating audio within SL are extremely limited, so we decided to create an autonomous system which generates sound outside SL and streams it in.
Our system was created as an extension of the real world into the virtual world of Second Life. In SL, we fabricated a series of rooms adjoining a virtual representation of a real place. In these rooms, we placed objects, each linked to a sound. When an object in the virtual space is moved, the sound reverberates through the virtual architecture, and is relayed into real life, as if it were a physical object. A microphone in the physical space plays the room tone and synthesized sounds back into the virtual space, creating a closed circuit between the virtual and real.
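One plausible wiring of this closed circuit - not necessarily the one used in the project - is sketched below: the SL objects report their movements over HTTP (for instance via llHTTPRequest), and a small bridge forwards each move to a Pure Data synthesis patch listening with netreceive, whose audio output is in turn streamed back into SL as the parcel's sound. The port numbers and the message format are assumptions.

import socket
from http.server import BaseHTTPRequestHandler, HTTPServer

PD_HOST, PD_PORT = "127.0.0.1", 3000     # where the Pd patch's [netreceive] listens (assumed)

def forward_to_pd(message):
    """Send a semicolon-terminated FUDI message to the Pd patch that synthesizes the sound."""
    with socket.create_connection((PD_HOST, PD_PORT)) as sock:
        sock.sendall((message + ";\n").encode())

class MoveHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Assumed body format: "object_id x y z", sent each time an object is moved in SL.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length).decode()
        forward_to_pd("move " + body)
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    # The resulting Pd audio would itself be streamed back into SL as the parcel's audio stream.
    HTTPServer(("0.0.0.0", 8080), MoveHandler).serve_forever()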
Today we are interested in the creative possibilities offered by this project; the exploration of possible permutations between the local and the virtual space is just beginning. Using a virtual environment to manipulate relatively sophisticated audio synthesis is exciting, as is the relationship between a synthesized (imagined) sound and an object built in 3D. We now intend to start work on our own virtual world, using a different platform for which we will provide a downloadable client.





New Atlantis Dec 2008
Sonic virtual space
more info
Transatlab blog


The aim is to create a virtual space that enables us to experiment with the possibilities between virtual architecture, sound synthesis and processing. This would primarily manifest itself through the notion of making the space "playable," i.e. the notion that the architecture being created also acts as an instrument. "Play" not in the sense of an instrumental interface or of a game/narrative structure, but from an environmental point of view. A visitor can take part in "re-arranging" the overall soundscape of the virtual world by moving sound-objects through different acoustic spaces, creating a "musical" exploration. The end result would provide a playground for workshops enabling participants, or teams of participants, to model 3D objects/environments and create sounds that are specific to them, as well as explore virtual sound spaces. This would ideally be a public space in which people anywhere could explore sound synthesis.
We like the notion of creating a virtual space that is neither photorealistic nor strives to emulate reality. With a highly stylized world, the participants/creators would be freer to concentrate on the ideas rather than on the technical implementation.
We want to build a multiuser online world, but with capabilities for experimental audio work. Over the next three years, this project, code-named "New Atlantis", will grow to include a persistent, user-modifiable world and the ability to upload new content (3D objects, audio synthesis patches, etc.). Basically it will be like Second Life, but it will sound a lot better: a sonic Second Life.
The client software will be cross-platform, freely available, and we prefer it to be open-source. The same goes for the server, so that it can be used widely both for the New Atlantis world itself and for unique individual worlds.
The client/server architecture will probably follow the usual model of a distributed shared state and scenegraph, with the simulation running dynamically on a master copy of the scenegraph and the changes distributed to the clients. The server will also maintain a persistent database (e.g. MySQL) of the world. Our current plan uses Apache/MySQL and something like Python or Java servlets (on the theory that having the data structure of the world actually in memory is faster than continuous SQL queries). We can look at the structure of other multiuser game engines, and also at toolkits like CAVERN/QUANTA, with which we have some experience in tele-immersive applications.
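As a rough illustration of this model, here is a minimal sketch of a shared-scenegraph server, under stated assumptions: newline-delimited JSON messages over TCP, a hypothetical message shape {"node": id, "props": {...}}, and persistence stubbed out with a JSON dump where the production version would write to MySQL.

import asyncio
import json

scenegraph = {}     # master copy of the world state: node id -> properties
clients = set()     # writers for all connected clients

def apply_update(update):
    """Merge a client's change into the master copy of the scenegraph."""
    scenegraph.setdefault(update["node"], {}).update(update["props"])

async def broadcast(message):
    """Distribute a change to every connected client (including the sender, in this sketch)."""
    data = (json.dumps(message) + "\n").encode()
    for writer in list(clients):
        writer.write(data)
        await writer.drain()

async def handle_client(reader, writer):
    clients.add(writer)
    try:
        # On connect, send the full current state so the client can build its local copy.
        writer.write((json.dumps({"snapshot": scenegraph}) + "\n").encode())
        await writer.drain()
        while True:
            line = await reader.readline()
            if not line:
                break
            update = json.loads(line)
            apply_update(update)       # mutate the master copy...
            await broadcast(update)    # ...then push the change out to all clients
    finally:
        clients.discard(writer)
        writer.close()

async def persist_periodically(interval=30):
    """Stand-in for the persistent database: in production this would write to MySQL."""
    while True:
        await asyncio.sleep(interval)
        with open("world_snapshot.json", "w") as f:
            json.dump(scenegraph, f)

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 9000)
    asyncio.create_task(persist_periodically())
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())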
The client will have two parts - the main visual client and the sound engine. We are currently planning to write the client in Python using Panda3D, and to use Pure Data as the sound engine. The client distribution will include these as distinct executables, with a front-end shell script or similar mechanism that launches and connects them. It may at some point be possible to embed Pd directly into another executable in the form of a shared library, but for now we expect that this method will work just as well.
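A minimal sketch of that launch-and-connect mechanism follows, written in Python rather than as a shell script. It assumes Pd is on the PATH, that the audio patch is called na_sound.pd and contains a netreceive object on port 3000, and that control messages use Pd's FUDI format (semicolon-terminated text); all of these names and the message shape are assumptions.

import socket
import subprocess
import time

PD_PORT = 3000    # port the [netreceive] object in the patch listens on (assumed)

def launch_sound_engine():
    """Start Pure Data headless with the New Atlantis audio patch."""
    return subprocess.Popen(["pd", "-nogui", "na_sound.pd"])

def send_to_pd(message, port=PD_PORT):
    """Send a FUDI-formatted (semicolon-terminated) control message to the running patch."""
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall((message + ";\n").encode())

if __name__ == "__main__":
    pd = launch_sound_engine()
    time.sleep(2)                                   # give Pd time to open the patch and its socket
    send_to_pd("object 12 position 1.5 0.0 3.2")    # illustrative message shape
    # ... here the Panda3D visual client would be started and connected in the same way ...
    pd.wait()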

The New Atlantis, by Francis Bacon, 1627
"We have also sound-houses, where we practise and demonstrate all sounds and their generation. We have harmony which you have not, of quarter-sounds and lesser slides of sounds. Divers instruments of music likewise to you unknown, some sweeter than any you have; with bells and rings that are dainty and sweet. We represent small sounds as great and deep, likewise great sounds extenuate and sharp; we make divers tremblings and warblings of sounds, which in their original are entire. We represent and imitate all articulate sounds and letters, and the voices and notes of beasts and birds. We have certain helps which, set to the ear, do further the hearing greatly; we have also divers strange and artificial echoes, reflecting the voice many times, and, as it were, tossing it; and some that give back the voice louder than it came, some shriller and some deeper; yea, some rendering the voice, differing in the letters or articulate sound from that they receive. We have all means to convey sounds in trunks and pipes, in strange lines and distances.”
http://art-bin.com/art/oatlant.html





NMSAT: Networked Music and SoundArt Timeline May 2008
An overview of practices related to sound transmission and distance
more info


This timeline aims to provide an overview of the principal events and projects in the realm of networked music and networked sonic performance since the beginning of the 20th century. The overview covers various domains and types of events: technologies and software, forward-thinking literature, musicology and ethnomusicology, sound anthropology, the history of telecommunications and radio, and contemporary music through to sound art. The general form of this timeline is a database organized with various fields structuring each entry. Most of the entries, such as events, works and technologies, have a short description and source annotations, complemented by other information that can be used as criteria for searching and navigating occurrences. The idea is to build multiple interfaces connected to the database, offering various ways of navigating and editing entries and various representations of them. In the primary text version, the historical timeline is divided into two parts: the first concerns early history and literature up to the 1960s, the second is a chronological list of works and references from 1950 to the present. It is complemented by a third chapter: an alphabetical list of scientific papers and publications. The timeline maps out a rich resource for researchers and artists, and reveals the links between technical and social shifts, visionary and proleptic statements, and artworks. The objective is a better reading and analysis of the very recent history of sound art and music in networked technological environments, in the context of today's social stakes and of perceptions of world and local environments shaken and modified by the development of the Internet.
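As a rough illustration of this organization, here is a minimal sketch of an entry structure and an occurrence search; the field names and the two example entries are illustrative, not the actual NMSAT schema or data.

from dataclasses import dataclass, field

@dataclass
class TimelineEntry:
    year: int
    title: str
    authors: list = field(default_factory=list)
    domain: str = ""                               # e.g. "radio", "literature", "soundart"
    description: str = ""                          # short description of the event, work or technology
    sources: list = field(default_factory=list)    # bibliographic / web references
    keywords: list = field(default_factory=list)   # criteria used for search and navigation

def search(entries, keyword=None, year_range=None):
    """Filter entries by keyword and/or period, in the spirit of the occurrence search."""
    results = entries
    if keyword:
        results = [e for e in results
                   if keyword.lower() in " ".join(e.keywords + [e.title, e.description]).lower()]
    if year_range:
        start, end = year_range
        results = [e for e in results if start <= e.year <= end]
    return sorted(results, key=lambda e: e.year)

entries = [
    TimelineEntry(1906, "First radio broadcast of voice and music", ["R. Fessenden"],
                  "radio", keywords=["transmission", "broadcast"]),
    TimelineEntry(1966, "9 Evenings: Theatre and Engineering", ["E.A.T."],
                  "art & technology", keywords=["performance", "telecommunication"]),
]
print([e.title for e in search(entries, keyword="transmission", year_range=(1900, 1950))])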

blog & database under construction