locus sonus > New Atlantis
Projet New Atlantis : documentation
- New Atlantis documentation 1 (installation - configuration) (in French only)
- New Atlantis documentation 2 (objects in New Atlantis) (in French only)
- New Atlantis documentation 3 (creating a 3D object for New Atlantis) (in French only)
- New Atlantis documentation 4 (creating a sound object for New Atlantis) (in French only)
- New Atlantis documentation 5 (New Atlantis object examples) (in French only)

Locus Sonus decided, along with the art and technology department at SAIC, to create a multi-user virtual world based on the Second Life model, but entirely dedicated to audio experimentation. Like Second Life and many online video games, each user or visitor downloads an application which renders the world locally on their computer. Each copy of the world is linked to a server, so that each user can perceive the actions of other users online. The principal difference between the proposed world and Second Life is that it incorporates relatively sophisticated audio processing possibilities, and that its navigation, architecture and esthetics are thought out primarily to enhance the listening experience.

Introduction

New Atlantis is an ongoing research project shared between l'École Supérieure d'Art d'Aix-en-Provence (ESAA) and the School of the Art Institute of Chicago (SAIC), which invites both students and faculty to explore the possibilities of using virtual reality and other game-based technologies in a networked, multi-user environment. The originality of New Atlantis resides in its focus on the acoustics of virtual spaces and a heightened awareness of the potential of sound as a means of expression. Progress is being made despite the Herculean programming task involved in starting from scratch, designing an interface, and creating algorithms to generate the physical and acoustic space and to allow interaction with the user.

The project is being developed within the context of a university exchange program between ESAA (represented by LS) and SAIC, funded over a three-year period (starting 2008/09) by PUF. Roland Cahen from ENSCI (École Nationale Supérieure de Création Industrielle) in Paris is currently collaborating in an unofficial capacity; however, we hope to include ENSCI in the exchange program sometime in the future. The major part of the research and development is being carried out by faculty from Aix-en-Provence and Chicago and by artist/researchers from LS; however, one of the main aims of the project is to provide a "sandpit" for students from both establishments, and indeed for students from other art education institutions. Much of the development is taking place in sessions on either side of the Atlantic, where students participate in a workshop-type context, building objects and "patches" to inhabit the world.

The title refers to a text by Francis Bacon, "New Atlantis", dating from 1624-1626, which describes a utopian world filled, among other things, with incredible audio phenomena:

We have also sound-houses, where we practice and demonstrate all sounds and their generation. We have harmony that you have not, of quarter-sounds and lesser slides of sounds. Divers instruments of music likewise to you unknown, some sweeter than any you have; with bells and rings that are dainty and sweet. We represent small sounds as great and deep, likewise great sounds extenuate and sharp; we make divers tremblings and warblings of sounds, which in their original are entire.
We represent and imitate all articulate sounds and letters, and the voices and notes of beasts and birds. We have certain helps which, set to the ear, do further the hearing greatly; we have also divers strange and artificial echoes, reflecting the voice many times, and, as it were, tossing it; and some that give back the voice louder than it came, some shriller and some deeper; yea, some rendering the voice, differing in the letters or articulate sound from that they receive. We have also means to convey sounds in trunks and pipes, in strange lines and distances.

http://oregonstate.edu/instruct/phl302/texts/bacon/atlantis.html

Note

Stumbled upon this great engraving from a publication of Bacon's 'New Atlantis'. Below you can see the 'Sound-Houses' he describes. The gentlemen at each end of the string marked n demonstrate the sounds in pipes and trunks (Bacon: "We have also means to convey sounds in trunks and pipes, in strange lines and distances"). I will blame the engraver for the embarrassing likeness of this marvel of the future to a tin can telephone. A sound-house is shown next to letter m, together with a selection of bell-like objects ("Divers instruments of music likewise to you unknown, some sweeter than any you have, together with bells and rings that are dainty and sweet") and a viola da gamba. The elderly gentleman with the walking stick just right of the sound-house seems to be demonstrating a hearing-aid ("We have certain helps which set to the ear do further the hearing greatly"). (Thanks to Gert Sylvest)
If much of 2008-2009 was spent developing the concepts on which to construct New Atlantis, 2009-2010 has been mostly dedicated to researching and implementing the basic programming to make the world function. As mentioned above, one of the main aims of this project is to explore possibilities for augmenting the audio capacity of a virtual 3d environment. To accomplish this, it was decided to bundle a powerful and familiar audio programming environment (Pure Data, or Pd) with a 3d visual environment (Panda3D). These environments were chosen because they are open source and already in use in our institutions. The idea is that once the basic structure of New Atlantis is established, students (including future generations of students) will be able to create 3d models and the sound-generating programs that accompany them, thus participating in the process of inhabiting the world.
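To give a concrete idea of how such a bundle can communicate, here is a minimal sketch (ours, for illustration; not the project's actual protocol) of a Python program driving a running Pd instance over a network socket, assuming the patch contains a [netreceive 3000] object; the receiver names "listener" and "sphere1" are hypothetical:

    import socket

    # Connect to a Pd patch that contains [netreceive 3000].
    # Pd's FUDI protocol is plain text: space-separated atoms
    # terminated by a semicolon.
    pd = socket.create_connection(("127.0.0.1", 3000))

    def send_to_pd(*atoms):
        # Build and send one FUDI message, e.g. "listener position 1.0 0.0 2.5;"
        message = " ".join(str(a) for a in atoms) + ";\n"
        pd.sendall(message.encode("ascii"))

    # Hypothetical receivers in the patch: update the listener's
    # position and an object's gain as the user moves through the world.
    send_to_pd("listener", "position", 1.0, 0.0, 2.5)
    send_to_pd("sphere1", "gain", 0.5)
    pd.close()

Inside the patch, a [route listener sphere1] object (or similar) would then dispatch each message to the relevant part of the audio processing.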
History of the Collaboration

The encounter between the School of the Art Institute of Chicago and Locus Sonus (ESA Aix, ENSA Nice (2004-2010), ENSA Bourges) dates back to the beginning of the century, when Peter Gena expressed an interest in collaborating with my colleague and co-director of research at LS, Jerome Joy. At the time this collaboration proved impossible due to lack of funding. SAIC is a big art school even by American standards; it has an important art and technology department with a long tradition in sound art. Locus Sonus is a post-graduate research unit whose area of research is audio in art, more specifically audio art related to space and distance (notably through networked audio). Although ESAA (like all other schools of art in France) is much smaller than SAIC, roughly half the faculty are dedicated to its specialization in art and technology, a specialization first set up at the end of the 1980s.

Our collaboration, which was among the first projects to be funded by FACE, started in 2005. We decided to concentrate our exchange on three main aspects of our programs: sound, 3d/virtual environments, and mechatronics. Over the period we have narrowed the field to focus mainly on sound and 3d virtual environments, developing a transdisciplinary approach which has largely been made possible through the PUF exchange, as should become clear.

Over the first three years of the FACE partnership we organized: student exchanges over one-semester periods; faculty exchanges, mostly for workshops in the defined areas; and remote teaching, which combined video-conferencing software with remote desktop, allowing faculty from either side of the Atlantic not only to talk but also to give demonstrations of software and such like. SAIC participated actively in Locus Sonus's experimental networked audio project (the Locustream project). In fact they were the first to take part in the project, and having a privileged partner gave us a real boost. The funding also helped us to invest in some equipment; specifically, we were able to set up the same 3d (CAVE) virtual projection space in Aix as in Chicago, creating a shared online virtual environment. (Remember, this was long before 3d cinema hit the high street.) Generally speaking, both institutions benefitted hugely from the faculty exchange, which provided complementary and enriching expertise in the chosen domains. Students travelling to Chicago benefitted from the scale and the scope of the school, while Locus Sonus was able to offer a specialized environment for students interested in audio art. And the fact that we are specifically interested in art that uses networks and remoteness meant that the distance between the institutions could be turned to our advantage.

Another project, which emerged at the end of the first FACE exchange, was LS in SL. Second Life was provoking a lot of interest at that time, and we decided that it might be worth investigating as a virtual workspace for our experimentation. We quickly discovered that Second Life is very poor in terms of the possibilities it offers for creating sounds. It does, however, accept streamed audio, and as mentioned above we are good at streaming audio. The first project in Second Life was a radio programmed by a graduate student from Chicago, Brett Balogh, which played the Locus Sonus open microphones in Second Life: LS in SL radio. We then went on to develop a program which, from data concerning the positions of objects in Second Life, sent online from Second Life to our server, could generate complex sound spaces and stream them back to Second Life. This development was shared between Aix and Chicago, splitting the load according to our different competencies. The piece was presented at the Seconde Nature festival in Aix-en-Provence.

Team

Peter Gena, Peter Sinclair, Ben Chang, Ricardo Garcia, Robb Drinkwater, Gonzague Defos de Rau, Margarita Benitez, Anne Laforet, Jerome Joy, Jerome Abel, Eddie Breitweiser, Sébastien Vacherand …

Birth of New Atlantis

Dissatisfied with Second Life, for various technical and ethical reasons, we decided to develop our own virtual world dedicated to experimentation with sound in a virtual environment.
Presentation
Aside from this work on the basic structure, it has been necessary to create 3d models of sound-producing objects, the audio programs which go with them, and architectural elements to structure the world.

Objectives
So we began with a text by Francis Bacon, "New Atlantis", an extract from which can be found above, which describes a utopian world filled, among other things, with incredible audio phenomena. Using this text as a starting point, we have defined types (or classes) of objects to be represented in the visual space, each of which can have "concordant" digital audio processes.
Other ideas which we wish to instigate, which do not figure in the original text:
An example (by Benoît Espinola)

Spheric, a hollow sphere that absorbs sound. Ricardo Garcia made the 3D object and Benoît Espinola programmed its Pure Data patch.
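As a rough illustration of the pairing between a 3d model and its audio program, a hypothetical Python structure such as the following could describe an object like Spheric (the field names and file names are ours, not the project's actual schema):

    from dataclasses import dataclass

    @dataclass
    class SoundObject:
        # Hypothetical pairing of a 3d model with its Pure Data patch.
        name: str                           # e.g. "Spheric"
        model_file: str                     # Panda3D model, e.g. "spheric.egg"
        patch_file: str                     # Pd patch, e.g. "spheric.pd"
        position: tuple = (0.0, 0.0, 0.0)   # (x, y, z) world coordinates
        absorbs_sound: bool = False         # Spheric absorbs rather than emits

    spheric = SoundObject("Spheric", "spheric.egg", "spheric.pd",
                          absorbs_sound=True)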
Research aspects

At the time of writing, work is just beginning on this project, which is programmed to develop over the next three years. We have concentrated our efforts on choosing the basic principles concerning the way in which the world is to function: the foundations, if you will, of the world. Various aspects of the task have been discussed in detail: credibility of interaction as opposed to creative liberty; esthetic realism as opposed to the imaginary. Many aspects of the project remain to be discussed and also to be verified in practice. A consensus exists in as much as everyone agrees that the world should have an abstract form while maintaining a certain acoustic credibility in relation to space, distance etc. It is clear that the objective is not to concentrate our efforts on realistic simulation; at the same time, the user needs to "believe" in the relationship which is established between the visuals and the audio synthesis.

The question was raised as to whether the notion of personalized avatars (predominant in Second Life) should exist or not. The decision is to use "camera vision", thus focussing participants' efforts on the creation of sound as opposed to the visual esthetics of an avatar. The question remains as to whether the user is represented visually in the world or not, and if so how, or indeed whether the presence is purely audio. Lengthy discussions have also taken place concerning the manner in which the audio is calculated in relation to the space, and the degree of complexity which is manageable or desirable; however, that discussion is a little too technically involved to be covered here.

Architecture of New Atlantis

Pathfinding

A lot of time has been spent defining the rules of the world and deciding on software structures, interaction and navigation: the basic architecture, so to speak. One of the most important challenges is that we have decided that all elements in the world are to have an effect on sound. This means that all the virtual spaces will have acoustics, and so will surfaces ("textures", as they are called in the jargon), openings etc. The path between each movable, sound-producing object therefore has to be calculated and updated all the time, so that the chain of audio "effects", the accumulation of different reverberant spaces, can be created dynamically. Because the idea is to give the User a rich and convincing experience (we do not say realistic, nor strive for a specific acoustic realism) in regard to their virtual position in the world, their orientation to the surrounding sound(s), and the effect of the surrounding and intervening architecture, it is necessary to treat the "buildings" not only as geometric solids but also as "acoustic solids". In this sense the idea is to use the architecture as a material that can affect, occlude, transmit and transform sound, always in respect to the User's orientation in and around such spaces.

In the very early stages of the project, standard methods were worked out to simulate/provide acoustic cues to the User/Listener about distance from a sound. It was also worked out how to, at least approximately, provide the natural-sounding reverberation of enclosed spaces (including variable surface treatments). While at present the system creates a reasonable and quite pleasing approximation of acoustic reverberation, further work is needed to be able to extract the true geometry of the modeled 3d spaces and create a heightened sense of sonic accuracy.
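As an illustration of the kind of distance cue involved (a minimal sketch with arbitrary constants, not the engine's actual code): gain can fall off with the inverse of distance, while a low-pass cutoff drops to suggest air absorption.

    def distance_cues(distance, ref_dist=1.0, min_gain=0.001):
        # Inverse-distance gain rolloff plus a falling low-pass cutoff;
        # the constants here are illustrative, not tuned values.
        d = max(distance, ref_dist)
        gain = max(ref_dist / d, min_gain)       # 1.0 at ref_dist, ~1/d beyond
        cutoff_hz = 20000.0 / (1.0 + 0.05 * d)   # duller timbre as d grows
        return gain, cutoff_hz

    # A sound 20 virtual metres away: gain 0.05, cutoff around 10 kHz.
    gain, cutoff = distance_cues(20.0)

The two values would then be sent to the audio engine (e.g. as the gain of a multiplier and the cutoff of a low-pass filter in a Pd patch).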
It further followed that it is fairly easy to apply non-standard filtering (the "Helps") to such spaces. We decided that "doors" and "windows" need not be treated as such acoustically, though they can still be rendered as such in 3d models. In fact it was decided that it is desirable that both sound artists and 3d designers treat these simply as "apertures", so as to include holes in floors and ceilings, as well as small cracks and crevices, or any other place from which sound can emit.

The next step was to tackle the challenge of how to handle the possible multitude of apertures, and possibly intervening spaces, in relationship to the User. In all but the simplest scenario, where there is a User and a sound source within the same space, even the simple relationship of a User, a sound source and an aperture between them (e.g. a sound in a room with one doorway and the User outside) poses some interesting challenges. After examining the problem, it was decided that whether the scenario was as simple as one room with one aperture (a doorway), or a much more complicated one, as in multiple rooms with numerous apertures and possibly multiple spaces leading to others, in both cases the issue of "sound to User" was a classic "path finding" problem common to computer games and VR. We concluded that more research was needed into the implementation of "path finding" and "least-cost routing" algorithms, and it was agreed that various team members would continue to look into standard implementations such as A-star, Dijkstra and others. We determined that the most challenging issue was creating methods to reasonably simulate the effects of multiple spaces, possibly with multiple apertures, and how that is conveyed (virtually/sonically) to the User.

Ben Chang presented (after considerable work) his first attempt at a path-finding algorithm written in Python. One of the more interesting points that he offered to the group was the fact that, unlike the problem most game developers face, namely finding the shortest path, our challenge was a bit different, because what we needed to do was find all the paths. To put this in context, imagine that there is a sound, a listener, and a number of rooms between them. Furthermore, imagine that some of these rooms are connected by windows and doors (apertures), while others are not, and that there could well be a "chain" of rooms, one leading to the next, leading to the listener. As it turns out, whether there is one room with one opening or many rooms with dozens, the listener must always hear the sound arriving from all the possible pathways. This in fact turned out to be a good thing, as it meant that, unlike the classic problem in games, we do not need to do the computationally expensive step of calculating the "least-cost path"; rather we need only calculate (still computationally expensive) the tree of all the paths from a sound to the listener (see the sketch below). To this end Ben Chang devised a rather elegant (although, to be precise, not especially efficient) method. Whatever the efficiency of the algorithm, we believe that with the smallish arrangements of rooms we have been talking about, it will not really matter anyway. The place where the cost would come in is if we start expanding this significantly, building whole cities, where we would want some kind of audibility culling (an audibility tree or something similar) to skip whole parts of the world that make irrelevantly small contributions. However, that is a pretty far-off scenario.
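To make the "all the paths" idea concrete, here is a small sketch (ours, not Ben Chang's actual code) that enumerates every cycle-free chain of rooms from the room containing a sound to the room containing the listener, given a table of which rooms open onto which:

    def all_paths(apertures, start, goal, visited=None):
        # Yield every cycle-free path of rooms from start to goal.
        # `apertures` maps each room to the set of rooms it opens onto.
        visited = (visited or set()) | {start}
        if start == goal:
            yield [start]
            return
        for nxt in apertures.get(start, ()):
            if nxt not in visited:
                for path in all_paths(apertures, nxt, goal, visited):
                    yield [start] + path

    # Room A opens onto B and C; both open onto D, where the listener is.
    rooms = {"A": {"B", "C"}, "B": {"A", "D"}, "C": {"A", "D"}, "D": {"B", "C"}}
    print(list(all_paths(rooms, "A", "D")))
    # -> [['A', 'B', 'D'], ['A', 'C', 'D']] (order varies with set iteration)

Each resulting path would correspond to one chain of acoustic treatments (apertures and intervening rooms) to be applied to the sound before it reaches the listener.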
Ben's actual program, given a description of a series of rooms and the possible pathways in and out of each, finds all the possible paths from a point in one room to the Listener. By way of demonstration he described a few possible scenarios (e.g. room A has apertures 1 and 2, where 1 leads to room B and 2 leads to room C) and showed how the algorithm solved them. He then went further, not only explaining how it worked in code but also showing the students how to test it for robustness, and challenged them to come up with different configurations of possible spaces that might break the algorithm. In fact students found that (except in a few aberrant cases) it was impossible to do so (see note 2 below).

Thus, with a relatively robust path-finding algorithm in hand, Robb Drinkwater proposed that what was next needed was a way for 3d modelers not just to 'describe' the connections between rooms (A -> 1 -> B) but actually to model them. From its inception, it has been imagined that participants in the New Atlantis project would not just be passive Users but also content creators, both sonic and 3d. Given the latter, there is a mechanism whereby 3d designers can create and upload models of their virtual spaces (which Users can access and place around the world). The proposal, therefore, was to use a mechanism familiar to 3d designers to "mark" or "tag" where in their 3d models the apertures (e.g. doorways and windows) should be. To this end Robb proposed using 'empty' objects, a feature of Blender 3d (it was never determined whether 3DSMax or Maya has an equivalent), to mark, in 3d coordinates, the aperture between rooms (as well as to designate, in the description field, the direction between each, another feature of the protocol).

Moving ahead, we (Margarita Benitez and Robb Drinkwater) worked up models of various configurations of different 'buildings' with different arrangements of rooms (some of which we based on Ben's lecture and the models he and the students came up with), each with our proposed "aperture markers". From there we exported these models from Blender, in the usual Egg format that Panda requires, and in turn imported them into Panda. Once in Panda we used various methods to attempt to extract this information (in Python). As it turned out, Panda does not allow (or at least did not as of January 2020) direct access to this data. However, we did eventually find ways (if somewhat complicated) to extract this data in a form that can be used in Python (see the sketch at the end of this section). (It should be pointed out here that this method is less than optimal. A better solution would be a program that could take three-dimensional models and parse them to determine the coordinates of the various apertures. However, this problem is nontrivial from a programming standpoint, and no team member has stepped up to attempt to offer a solution.)

In the meantime, Peter Sinclair worked out the mechanics of using the Pd sound engine to dynamically build patches. And while we were not able to get to the point where the NAV could parse a model and send this data to Pd when it loaded, there is every reason to believe it can, simply given more time and engineering.
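A simplified sketch of the marker-extraction approach described above might look as follows, assuming the exporter preserves each Blender 'empty' as a named node in the Egg file; the "aperture_*" naming convention and the file name are hypothetical:

    from direct.showbase.ShowBase import ShowBase

    # Run Panda3D headless; we only want to inspect the scene graph.
    base = ShowBase(windowType="none")

    # Load a building exported from Blender in Panda's Egg format.
    building = base.loader.loadModel("building.egg")

    # Collect every node whose name marks an aperture. The markers come
    # from Blender 'empty' objects named by the 3d designer, e.g.
    # "aperture_A_B" for an opening between rooms A and B.
    markers = building.findAllMatches("**/aperture_*")
    for i in range(markers.getNumPaths()):
        node = markers.getPath(i)
        x, y, z = node.getPos(building)   # marker position in model space
        print(node.getName(), x, y, z)

The names and positions gathered this way would then feed the path-finding table and, ultimately, the Pd patches built for each chain of rooms.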
NAV server

Description: NAV (New Atlantis Viewer) is the 3d viewer of this project.

NAV Client

What is more, since the idea is that the world is user-participative, i.e. that it is the art student community who create the sound objects and spaces in the world, the whole system itself has to be open and updatable. We have a prototype version running and we are now cleaning up the code and optimizing it. http://code.google.com/p/new-atlantis/

Research and documentation

A last important point is that the whole project is open source and will be available, along with documentation, to anyone who wants it; so hopefully people other than ourselves will also benefit. Along the same lines, Anne Laforet, a Locus Sonus researcher, is writing up the results of the research in detail, so all the discussions concerning both the technical and esthetic aspects of the research will be thoroughly documented.

New Atlantis documentation (Howto)

Go to this page: New Atlantis documentation 1 (in French only)