WimiCam (2006-2007)

This page is under construction.

Introduction

In parallel to the setting up and development of the Locustream project, the Locus Sonus lab started a complementary line of experimentation related to the capture and amplification of sound in relatively local space: an audio survey, so to speak, of a limited perimeter around the amplification point, the auditor's position. The intention here was to experiment with making the sound flux mobile, as a counterpoint to the Locustream project, where the capture position is fixed. By inverting the principle behind the open microphone proposal and linking the point of capture to the deambulation of a person (performer), the sound flux becomes a subjective selection and therefore a personal representation of that space, offered by the person manipulating the microphone. The hypothesis is that, as we (humans) render our own personal sound space mobile through the use of cellular phones, laptop computers, iPods and the like, this very principle could become the basis for an artistic practice. Taking as references musical and artistic projects based on live sampling, such as those practiced by Kaffe Matthews, performance works using real-time improvised narration (Laurie Anderson (find ref)) and, to a certain extent, the narrative principles developed by artists such as Janet Cardiff in her sound walks, Locus Sonus experimented with wireless radio microphones and headphones, concentrating, in the first instance, on the area imposed by the limits of transmission as a territory to work within.

FM Microphone

The microphone used for this work is a Sennheiser EW152, sold as a "tie pin" microphone; the transmitter can also accommodate other types of microphone (handheld voice microphones, electret field mikes, etc.). The first experiments, which took place at ESAA, involved moving the microphone from one location to another at regular intervals, leaving it in place and "remixing", from a fixed position, the captured sound along with other sounds produced locally or taken from remote streams. It was found that, as with the open microphone streams, the audio content tended to be rather uneventful, or at least unpredictable in terms of when and what sounds were going to be captured and to what degree they would be "performable" material. This led to the use of the mobile microphone in a handheld context.

Experiments with narration

During one session at ESAA we attempted to develop a protocol tracing the area outlined by the transmission limits through descriptive narration. The performer with the mobile microphone (Esther Salmona) left the space where the sound diffusion was taking place and wandered around the surrounding area. Out of sight, her participation in the ongoing improvisation took the form of a vocal presence as she described the environment she was experiencing. This principle was included in a public presentation of Locus Sonus work in progress at the Villa Arson in April 2006. Although it showed promise, it was put to one side as other forms were developed, notably Esther's "journal des streams".

Live sampling

During that same early session, Lydwine Van der Hulst (a musician) used the mobile microphone to "sound out" the designated environment, capturing acoustic phenomena and "playing" with sound-producing objects while other members of the group operated on the sounds using digital processing techniques, "orchestrating" them in real time. It is this principle which has been developed subsequently.
Rotating Parabola

Locus Sonus first used parabolic dishes to increase the directionality and the sensitivity of small electret microphones, adapting them to function better as open microphones for the Locustream project. Having become interested in the spatial possibilities offered by the focusing capabilities of the parabola, we made a prototype machine which combined a rotating parabola with digital sound spatialization over 8 loudspeakers. Auditors were invited to position themselves at the center of the circle of loudspeakers; from this position it was possible to hear a seamless 360° audio pan synchronized with the mechanical movement of the parabola. The circle of loudspeakers was placed within the exhibition space in such a way that the listener could see the microphone through the adjacent window as it rotated some 30 m outside the exhibition space. Aside from the interest provoked by the phenomenon of perceiving the distant movement of the microphone while experiencing the sound it captured in a separate space, interest also developed in the acoustic phenomena (a filtering sweep) related to the movement of the parabola.
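As a rough illustration of the panning principle (not the patch the lab actually used, which was built in Pd/Max), the following Python sketch computes per-loudspeaker gains for a source travelling around a ring of eight speakers; the rotation angle of the parabola would drive the pan angle. The equal-power crossfade between adjacent speakers and all of the names are assumptions made for the example.

```python
import math

NUM_SPEAKERS = 8  # loudspeakers arranged in a circle around the listener

def speaker_gains(pan_angle_deg):
    """Return one gain per loudspeaker for a source panned to pan_angle_deg.

    Assumed pan law: an equal-power crossfade between the two speakers
    adjacent to the source angle, all other speakers silent.  Speaker i is
    taken to sit at i * 360 / NUM_SPEAKERS degrees.
    """
    spacing = 360.0 / NUM_SPEAKERS
    position = (pan_angle_deg % 360.0) / spacing      # fractional speaker index
    lower = int(position) % NUM_SPEAKERS              # speaker "behind" the source
    upper = (lower + 1) % NUM_SPEAKERS                # next speaker "ahead"
    frac = position - int(position)                   # 0.0 at lower, 1.0 at upper

    gains = [0.0] * NUM_SPEAKERS
    gains[lower] = math.cos(frac * math.pi / 2)       # equal-power crossfade
    gains[upper] = math.sin(frac * math.pi / 2)
    return gains

# Example: the parabola has turned 100 degrees, so the captured sound
# should be heard between speakers 2 and 3.
print(speaker_gains(100.0))
```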
WimiCam 1

In response to these various experiments, Locus Sonus built a first prototype of a wireless parabolic microphone and camera in July 2006. (Images) A wifi webcam (in fact a compact streaming server) was added to the parabolic dish, the visual field of the camera covering roughly the same window as that of the directional microphone. The image from the camera can be accessed over the internet. For the first version of the WimiCam it was decided to build two examples of the object, in order to enable a duplex performance or concert based on an encounter between two distant spaces. The audio interpretation of each space was assumed by a team of two people, the first manipulating the WimiCam and the second processing the captured signal and streaming it to the distant location. Monitoring was available to the four performers (through wireless headphones for the person holding the microphone), not only of the composition from the local space but also of the stream from the distant space. (diagram)

"I take hold of the WimiCam to discover the sound environment that surrounds me and to reveal its musical power. Gestures take shape (the rapid movement of the parabola through the air, moving more or less quickly through the space) and give rise to little narratives. A performance with this playing interface was presented in duplex during the New York road show: part of the team played from the Roebling Bridge (Pennsylvania), the other in an art gallery in "Triangle Below Canal Street" (NYC)." (Excerpt from a report by Lydwine Van der Hulst)

During a second performance using a single WimiCam, the area covered was that surrounding the presentation space (the amphitheater at ESAA): the performer with the WimiCam (Lydwine Van der Hulst) left the room by a side door and circumnavigated the building, while the second performer (Peter Sinclair) transformed the sounds for the seated audience, the image from the on-board camera being projected behind him. (image)

WimiCam 2

While the scope of the WimiCam object was apparent, an obvious drawback appeared: the person holding the camera had no say in the choice and manipulation of the sounds they were picking up and, similarly, the person mixing had no means of guiding or encouraging the person with the mike (in a "normal" musical or sound improvisation, two partners this dependent on each other would rely on visual signals to communicate between themselves). Several solutions to this problem of communication were considered, such as splitting the monitoring in the wireless headphones in order to transmit vocal communication on one channel and the mix presented to the public on the other. The direction finally taken involved modifying the WimiCam to allow the person holding it to manipulate an audio program, by equipping it with sensors and controls which transmit their data wirelessly to the computer. This problem was put to the musical research group STEIM (Amsterdam, NL), which has specialized in electronic interfaces for sound control for the past three decades. They agreed to produce a prototype version, inviting Locus Sonus to collaborate with their hardware and software developers and Frank Baldé (in January 2006). (image) (patch) The interface was developed using STEIM's wireless controller, which has a range of 50 to 100 m: two accelerometers allow the user to control parameters of the sound program through the movements of the object itself, along with 2 rotary controllers and 4 buttons. The prototype uses the power supply and battery from a cordless drill, which provides ample autonomy for the controller and the camera. Once it was built, the lab experimented with various software combinations to put the WimiCam to its best use: "Junxion" (from STEIM) to format the control data; LiSa, Pure Data (PD) and Max/MSP for digital audio processing. Sound processing has gradually been simplified, eliminating most "effects" which radically transform the sound, in order to concentrate on sampling as the statement of a choice, a personal selection concerning the space covered. In the most recent version, the program automatically samples sound events by detecting attacks and holds them looping in a memory buffer until they are either replaced by the subsequent sound event or chosen by the user and stored as a file to be recalled by the program; thus a new sound environment is constructed from the sounds which the performer has selected and saved.
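The sampling behaviour described above can be summarised in the following schematic Python sketch. It is not the LiSa/Pd/Max patch actually used: the block size, the RMS attack threshold, the file naming scheme and the use of the soundfile library are all assumptions made for the illustration.

```python
import numpy as np
import soundfile as sf   # assumed available for writing kept samples to disk

BLOCK = 512              # analysis block size in samples (assumed)
THRESHOLD = 0.2          # RMS level treated as an "attack" (assumed)
MAX_EVENT_BLOCKS = 200   # cap on the length of one captured event

class AutoSampler:
    """Keeps the most recently detected sound event looping in a buffer."""

    def __init__(self, samplerate=44100):
        self.samplerate = samplerate
        self.current_event = []   # blocks of the event currently being captured
        self.loop_buffer = None   # last completed event, loops until replaced
        self.recording = False
        self.saved_count = 0

    def process_block(self, block):
        """Feed one block of mono audio (a NumPy array of length BLOCK)."""
        rms = float(np.sqrt(np.mean(block ** 2)))
        if not self.recording and rms > THRESHOLD:
            self.recording = True             # attack detected: start a new event
            self.current_event = [block]
        elif self.recording:
            self.current_event.append(block)
            # the event ends when the level falls back below the threshold
            if rms < THRESHOLD or len(self.current_event) >= MAX_EVENT_BLOCKS:
                self.loop_buffer = np.concatenate(self.current_event)
                self.recording = False        # this event now replaces the old loop

    def keep(self):
        """Performer confirms the looping event: store it as a file so the
        program can recall it as part of the constructed sound environment."""
        if self.loop_buffer is None:
            return None
        name = f"sample_{self.saved_count:03d}.wav"   # naming scheme is invented
        sf.write(name, self.loop_buffer, self.samplerate)
        self.saved_count += 1
        return name
```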
Spatialization

Another important development has been the spatialization of the above-mentioned sound environment, using a multi-loudspeaker system and objects developed for the programming environment Pure Data by GMEM (Charles Bascou) in collaboration with Locus Sonus. As each sample is saved, the user of the WimiCam can use the x/y coordinates given by the built-in accelerometers to position the sample in the audio field. When the sample is saved as a file it is tagged with these coordinates, and when it is recalled it is placed in the position they indicate. The spatialization process also allows different protocols to be developed for mapping the "natural" audio environment, from which the samples are taken, onto the "electroacoustic" environment defined by the program and audio system. Sounds can be replaced according to their polar coordinates or according to the chronological order in which they were collected. Attempts have also been made to define protocols which take into account the provenance of different sound sources, for use when sound sources come from two different WimiCams in two separate and distant locations. For instance, it is possible to start a performance with each sound environment situated at opposite poles of the electroacoustic space and to bring them progressively closer together during the performance, ending up with an amalgam of the two environments and, by the same token, of both performers' "compositions".
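The bookkeeping side of this tagging and recall might look like the sketch below; the actual audio placement is handled by the GMEM Pure Data objects, so only the coordinate handling is shown. The class and function names are invented, and the even angular spacing used for the chronological protocol is one possible reading of "placed according to the chronological order in which they were collected".

```python
import math
import time

class SpatialSample:
    """A saved sample file together with the coordinates it was tagged with."""

    def __init__(self, filename, x, y):
        self.filename = filename
        self.x = x                    # accelerometer-derived x at the moment of saving
        self.y = y                    # accelerometer-derived y at the moment of saving
        self.timestamp = time.time()  # when the sample was collected

    @property
    def angle(self):
        """Polar angle (degrees, 0-360) of the tagged position."""
        return math.degrees(math.atan2(self.y, self.x)) % 360.0


def place_by_polar(samples):
    """Protocol 1: replay each sample at the angle at which it was captured."""
    return {s.filename: s.angle for s in samples}


def place_by_chronology(samples):
    """Protocol 2: spread the samples evenly around the circle in the order
    in which they were collected, ignoring where they came from."""
    ordered = sorted(samples, key=lambda s: s.timestamp)
    step = 360.0 / max(len(ordered), 1)
    return {s.filename: i * step for i, s in enumerate(ordered)}
```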
WimiCam 3

A recent development (December 2007) incorporates image sampling and spatialization synchronized with the sound: rather than simply monitoring the viewpoint of the microphone, it is now possible to record a video file into a small window and to position that window within a projection using the x/y controllers of the WimiCam (Scott Fitzgerald). This visualization of the spatialization makes the process easier for the public to understand. (image: screen shot, see Scott)

Limits and Drawbacks

Technical problems: the complexity of the whole setup and its general "clunkiness", a consequence of the fact that the WimiCam has been developed progressively rather than being thought through as a whole, mean that it takes a prohibitive amount of time to set the WimiCam up. Consequently it is difficult to work with the object on a regular basis or in different sites, so the skills needed to use it as an instrument have not been developed; at the same time the object is too complicated to hand over to an uninitiated person and too expensive to replicate in large numbers. Presentations up until now have been considered as demos rather than as an artwork or concert. The fact that it is extremely difficult to capture sound within the same space as the amplification system means that the performer is rarely present in the performance space. This presence/absence, while an integral part of the problematic in artistic terms, requires reflection on how the piece is perceived from the public's point of view. The aesthetics of the object (jokingly referred to in the lab as the "Fatal weapon") make it difficult to use in public spaces: the sum of equipment needed (headphones, parabolic mike, camera) creates an image related to spying or surveillance, which tends to generate negative reactions, especially in public spaces. Comments have been made indicating that people have an unrealistic perception of the physical image of the microphone.

Conclusion

We feel that the creative possibilities of this type of portable system are of interest to Locus Sonus. The following points seem important. Developing the WimiCam is encouraging Locus Sonus to look more closely at different forms of portable audio devices. We are joining forces with the "laboratoire des usages" at Sophia Antipolis, which has studied the use of mobile phones and of portable technology in general; we hope to combine our artistic experimentation with sociological research in order to propose projects which take multi-user functions into account. Future developments could draw on studies of existing technology (observation, reflection on users' behaviour) and their possible use in artworks. Another possibility is to develop a simpler, cheaper and easier-to-use object which can be reproduced in relatively large numbers.

Collaborations:
Physical interfaces for performance: STEIM, Amsterdam
Sound spatialization: GMEM, Marseille (Charles Bascou)