locus sonus > Productions lab
Last changed: 2012/03/07 00:33
This revision is from 2009/02/15 18:22.
Locus Sonus is a research group specializing in audio art. It is organized as a postgraduate lab by the art schools of Aix-en-Provence (ESAA), Nice (ENSA Villa Arson) and Marseille (ESBAM) in the south of France. We have a partnership with the CNRS sociology lab LAMES, Aix-en-Provence (which is interested in the way that practices related to new technologies are modifying artistic production, and in the way that the public responds to these modifications), and we are continuing collaborations with CRESSON, the CNRS architecture lab in Grenoble (sonic spaces research centre), the School of the Art Institute of Chicago (SAIC), and other international partners.

• last version (Locus Sonus Roadshow, GMEM, Marseilles, F)

One installation with which we present the streaming project consists of a pair of wires stretched the length of the exhibition space, with a small ball threaded onto them. The position of the ball can be altered by the public, who act like a tuner: an audio promenade where users slide their way through a series of remote audio locations. Multiple loudspeakers enable us to spatialize the sound of the streams, so that each audio stream selected on the wire emanates from a new position in the local space. To make the installation function reliably we had to incorporate a system that interrogates our server and updates the list of current streams (people go away, use their streaming computer for a concert, a machine crashes...); we also use this list to provide visual feedback by projecting the names of the places the streams are coming from.

Locustream - soundmap, tardis july 2006

• access to the soundmap
• Locustream audio tardis (on-line listening interface) (04/2008)
• list of the available streams

At one point it seemed necessary to provide the "streamers" (as we have come to call the musicians and artists who have responded to our call) with the possibility of accessing the streams themselves, not only to hear their own stream but also those provided by other people. So we made an animated map which shows the location of all the streams and indicates those which are currently active with a blinking light. By clicking on a chosen location one can listen directly to the Ogg Vorbis stream in a browser.

Community of streamers.
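The text above does not document how the server is interrogated, but as a hedged illustration: modern Icecast servers (version 2.4 and later) expose their currently connected mountpoints as JSON at `/status-json.xsl`, and the list of active streams can be extracted from that page. The field names below follow Icecast's JSON status format; the server itself is an assumption, not something this page specifies.

```python
import json

def parse_active_streams(status_json):
    """Extract the stream URLs currently served by an Icecast server,
    given the raw JSON text of its /status-json.xsl page (Icecast 2.4+)."""
    stats = json.loads(status_json)
    sources = stats.get("icestats", {}).get("source", [])
    # Icecast returns a bare object when only one source is connected,
    # and a list when there are several; normalize to a list.
    if isinstance(sources, dict):
        sources = [sources]
    return [s.get("listenurl", "") for s in sources]
```

A list like this could drive both the tuner wire (mapping the ball's position to a stream index) and the projected place names.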
Another interesting development arising from involving other people in setting up the microphones is that we have found ourselves with a network of people (artists, musicians and researchers) who are inherently interested in networked audio. This has led to use of the streams for art forms outside of the lab itself (SARC in Belfast, Cedric Maridet in Hong Kong, etc.). Much of our research concerns the emergence of listening practices based on the permanence and the non-spectacular, non-event-based quality of the streams. We have found ourselves creating a sort of variation on Cageian (as in John Cage) listening: importing a remote acoustic environment, in a way which can be chosen by the user, creates a renewed concentration on the local environment itself. This has led us to reflect on a form which adopts a permanent or semi-permanent situation to present the streams publicly, and which involves a relationship between the local and the remote environment.

Locustream Promenade july 2007

• prototype (GMEM, Marseilles, sept 07)

Our project, which we call the Locustream Promenade, uses parabolic loudspeakers which focus sound into a beam beneath a suspended dish, heard only when the listener passes through it. We have equipped each parabola with a mini computer (actually a hacked wifi router) and a sound card. Placed within a wifi network, each dish connects to a specific stream; provided with electricity, it can be placed anywhere.

Locustreambox july/sept 2007

• prototype (GMEM, Marseilles, sept 07)

We are simultaneously developing a "streambox": a mini-PC equipped with a microphone and configured to connect to our streaming server as soon as it is plugged in. These boxes have a wireless connection, use very little electricity, and are silent. Given these improvements, we hope that by sending them to streamers we will be able to ensure the permanent functioning of the open mikes.
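"Permanent functioning" in practice means relaunching the streaming client whenever the connection drops or the process dies. A minimal supervisor sketch, assuming the box runs some command-line source client (the command itself is a placeholder; this page does not name the actual software):

```python
import subprocess
import time

def keep_streaming(cmd, retry_delay=10, max_restarts=None):
    """Run a streaming source client and relaunch it whenever it dies,
    so the open mike stays on air unattended.

    cmd          -- source-client command line (placeholder; any command)
    retry_delay  -- seconds to wait before relaunching
    max_restarts -- stop after this many restarts (None = run forever)

    Returns the number of restarts performed."""
    restarts = 0
    while max_restarts is None or restarts <= max_restarts:
        result = subprocess.run(cmd)
        if result.returncode == 0:
            break  # clean exit: the box was shut down deliberately
        restarts += 1
        time.sleep(retry_delay)
    return restarts
```

On a plug-and-stream box, a loop like this would be started at boot so the stream reconnects with no intervention from the streamer.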
Promenade in Paris 2008/2009

The project involves setting up the parabolas of the Locustream Promenade on the Parvis de La Défense in Paris. For those who are not familiar with this space, it is quite unusual in that it is almost entirely populated by people who work in the offices around the plaza (several thousand people emerge from the subway station in the middle of the area every morning, crisscross in different directions, and then return underground in the evening). The parabolic loudspeakers will be distributed throughout the public space, each one relaying a remote ambiance. The public will be able to hear the evolution of the distant streams over weeks or months and at different times of the day. The parabolas will be set up progressively as permanent streams become available. The sociology lab LAMES will be studying the process of installing the streams and its impact on the population frequenting La Défense, and the way in which they react to this listening experience.

Wimicam june 2006

• in progress project

In parallel with the setting up and development of the Locustream project, the Locus Sonus lab started a simultaneous and complementary line of experimentation related to the capture and amplification of sound in relatively local space: an audio survey, so to speak, of a limited perimeter around the amplification point, the auditor's position. The intention here was to experiment with making the sound flux mobile, as a counterpoint to the Locustream project, where the capture position is fixed. Inverting the principle behind the open microphone proposal by linking the point of capture to the deambulation of a person (a performer), the sound flux becomes a subjective selection, and therefore a personal representation of that space offered by the person manipulating the microphone. The hypothesis is that we (humans) are already rendering our own personal sound space mobile, via the use of cellular phones, laptop computers, iPods, etc.
Could this very principle become the basis for an artistic practice?

Sound in Virtual Spaces may 2007

Locus Sonus in Second Life (LS in SL), May 2007 / June 2008, audio stream from a virtual space
• in progress project

We looked at Second Life as a networked community, and we started wondering whether it would be worthwhile to create an extension of our lab there. The first thing we accomplished was to set up an interface to listen to the Locus Sonus streams in SL (Brett Ian Balogh, SAIC). We then asked ourselves what the equivalent of an open microphone might be in SL. It became apparent that the possibilities for generating audio within SL are extremely limited, so we decided to create an autonomous system which generates sound to be streamed into SL. Our system was created as an extension of the real world into the virtual world of Second Life. In SL, we fabricated a series of rooms adjoining a virtual representation of a real place. In these rooms we placed objects, each linked to a sound. When an object in the virtual space is moved, the sound reverberates through the virtual architecture and is relayed into real life, as if it were a physical object. A microphone in the physical space plays the room tone and synthesized sounds back into the virtual space, creating a closed circuit between the virtual and the real. Today we are interested in the creative possibilities offered by this project; the exploration of possible permutations between the local and the virtual space is just beginning. Using a virtual environment to manipulate relatively sophisticated audio synthesis is exciting, as is the relationship between a synthesized (imagined) sound and an object built in 3D. We now intend to start work on our own virtual world, using a different platform, for which we will provide a downloadable client.
New Atlantis dec 2008

• in progress project
• Transatlab blog

The aim is to create a virtual space that enables us to experiment with the possibilities between virtual architecture, sound synthesis and processing. This would primarily manifest itself through the notion of making the space "playable", i.e. the notion that the architecture being created also acts as an instrument. "Play" not in the sense of an instrumental interface or of a game/narrative structure, but from an environmental point of view. A visitor can take part in "re-arranging" the overall soundscape of the virtual world by moving sound-objects through different acoustic spaces, creating a "musical" exploration. The end result would provide a playground for workshops, enabling participants, or teams of participants, to model 3D objects/environments and create sounds specific to them, as well as to explore virtual sound spaces. This would ideally be a public space in which people anywhere could explore sound synthesis. We like the notion of creating a virtual space that is not photorealistic and does not strive to emulate reality: in a highly stylized world, the participants/creators are freer to concentrate on the ideas rather than on the technical implementation. The project uses Panda3D (3D engine) and Pure Data (sound engine). We want to build a multiuser online world, but with capabilities for experimental audio work. Over the next three years, this project, code-named "New Atlantis", will grow to include a persistent, user-modifiable world and the ability to upload new content (3D objects, audio synthesis patches, etc.). Basically it will be like Second Life, but it will sound a lot better. La Seconde Vie Sonore. The client software will be cross-platform and freely available, and we would prefer it to be open source. The same goes for the server, so that it can be used widely, both for the New Atlantis world itself and for unique individual worlds.
The client/server architecture will probably follow the usual model of a distributed shared state and scenegraph, with the simulation running dynamically on a master copy of the scenegraph and changes distributed to the clients. The server will also maintain a persistent database (e.g. MySQL) of the world. Our current plan uses Apache/MySQL and something like Python or Java servlets (on the theory that keeping the data structure of the world in memory is faster than continuous SQL queries). We can look at the structure of other multiuser game engines, and also at toolkits like CAVERN/QUANTA, with which we have some experience in tele-immersive applications. The client will have two parts: the main visual client and the sound engine. We are currently planning to write the client in Python using Panda3D, and to use Pure Data as the sound engine. The client distribution will include these as distinct executables, with a front-end shell script or similar mechanism that launches and connects them. It may at some point be possible to embed Pd directly into another executable in the form of a shared library, but for now we expect this method to work just as well.

The New Atlantis, by Francis Bacon, 1614

"We have also sound-houses, where we practise and demonstrate all sounds and their generation. We have harmony which you have not, of quarter-sounds and lesser slides of sounds. Divers instruments of music likewise to you unknown, some sweeter than any you have; with bells and rings that are dainty and sweet. We represent small sounds as great and deep, likewise great sounds extenuate and sharp; we make divers tremblings and warblings of sounds, which in their original are entire. We represent and imitate all articulate sounds and letters, and the voices and notes of beasts and birds.
We have certain helps which, set to the ear, do further the hearing greatly; we have also divers strange and artificial echoes, reflecting the voice many times, and, as it were, tossing it; and some that give back the voice louder than it came, some shriller and some deeper; yea, some rendering the voice, differing in the letters or articulate sound from that they receive. We have all means to convey sounds in trunks and pipes, in strange lines and distances."

http://art-bin.com/art/oatlant.html
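The "front-end shell script or similar mechanism" described above could be as simple as a small launcher that starts the two executables and tears the sound engine down when the visual client exits. This is a sketch only; the executable and patch names in the comment are assumptions, since the actual client distribution does not exist yet.

```python
import subprocess

def launch_pair(sound_cmd, visual_cmd):
    """Start the sound engine, then the visual client, and keep the two
    alive together: when the visual client exits (or fails to launch),
    terminate the sound engine as well.

    Returns the visual client's exit code."""
    sound = subprocess.Popen(sound_cmd)
    try:
        visual = subprocess.Popen(visual_cmd)
        return visual.wait()
    finally:
        # Stop the sound engine whether the client exited normally
        # or the launch raised an error.
        sound.terminate()
        sound.wait()

# Hypothetical invocation (names are placeholders, not the real files):
# launch_pair(["pd", "-nogui", "na_audio.pd"],
#             ["python", "na_client.py", "--server", "somehost"])
```

Keeping the two as separate processes, joined only by a launcher and a network connection, is what makes the later option of embedding Pd as a shared library a drop-in change rather than a redesign.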
Lab 2013/2014: Elena Biserna, Stéphane Cousot, Laurent Di Biase, Grégoire Lauvin, Fabrice Métais, Marie Müller, (Julien Clauss, Alejandro Duque), Jérôme Joy, Anne Roquigny, Peter Sinclair.
2008/2014 — Powered by LionWiki 2.2.2 — Thanks to Adam Zivner
© images Locus Sonus — webmaster & webdesign: Jérôme Joy — contact: info (at) locusonus.org — 2004-2014 Locus Sonus