
New Atlantis

Last changed: 2012/03/07 19:12

 




Other pages

Project New Atlantis: New Atlantis

Documentation: New Atlantis documentation 1 (installation and configuration) (in French only)

Documentation: New Atlantis documentation 2 (objects in New Atlantis) (in French only)

Documentation: New Atlantis documentation 3 (creating a 3D object for New Atlantis) (in French only)

Documentation: New Atlantis documentation 4 (creating a sound object for New Atlantis) (in French only)

Documentation: New Atlantis documentation 5 (New Atlantis object examples) (in French only)






Locus Sonus, together with the Art and Technology department at SAIC, decided to create a multi-user virtual world based on the Second Life model but entirely dedicated to audio experimentation.

Like Second Life and many online video games, each user or visitor downloads an application which renders the world locally on their computer. Each copy of the world is linked to a server, so each user can perceive the actions of other users online. The principal difference between the proposed world and Second Life is that it incorporates relatively sophisticated audio processing possibilities, and that the navigation, architecture and esthetics are thought out primarily to enhance the listening experience.






Introduction

New Atlantis is an ongoing research project shared between l'École Supérieure d'Art d'Aix-en-Provence (ESAA) and The School of the Art Institute of Chicago (SAIC), which invites both students and faculty to explore the possibilities of using virtual reality and other game-based technologies in a networked, multiuser environment. The originality of New Atlantis resides in its focus on the acoustics of virtual spaces and a heightened awareness of the potential of sound as a means of expression. Progress is being made despite the herculean programming task involved in starting from scratch: designing an interface and creating algorithms to generate the physical and acoustic space and to allow interaction with the user.

The project is being developed within the context of a university exchange program between ESAA (represented by LS) and SAIC, funded over a 3-year period (starting 2008/09) by PUF. Roland Cahen from ENSCI (École Nationale Supérieure de Création Industrielle) in Paris is currently collaborating in an unofficial capacity; however, we hope to include ENSCI in the exchange program sometime in the future. The major part of research and development is being carried out by faculty from Aix-en-Provence and Chicago and by artist/researchers from LS; however, one of the main aims of the project is to provide a "sandpit" for students from both establishments, and indeed for students from other art education institutions. Much of the development is taking place in sessions on either side of the Atlantic where students participate in a workshop-type context, building objects and "patches" to inhabit the world.

The title refers to a text by Francis Bacon, "New Atlantis", dating from 1624-1626, which describes a utopian world filled, among other things, with incredible audio phenomena.



We have also sound-houses, where we practice and demonstrate all sounds and their generation. We have harmony that you have not, of quarter-sounds and lesser slides of sounds. Divers instruments of music likewise to you unknown, some sweeter than any you have; with bells and rings that are dainty and sweet. We represent small sounds as great and deep, likewise great sounds extenuate and sharp; we make divers tremblings and warblings of sounds, which in their original are entire. We represent and imitate all articulate sounds and letters, and the voices and notes of beasts and birds. We have certain helps which, set to the ear, do further the hearing greatly; we have also divers strange and artificial echoes, reflecting the voice many times, and, as it were, tossing it; and some that give back the voice louder than it came, some shriller and some deeper; yea, some rendering the voice, differing in the letters or articulate sound from that they receive. We have also means to convey sounds in trunks and pipes, in strange lines and distances.

http://oregonstate.edu/instruct/phl302/texts/bacon/atlantis.html



Note

Stumbled upon this great engraving from an edition of Bacon's 'New Atlantis'. Below you can see the 'Sound-Houses' he describes. The gentlemen at each end of the string marked n demonstrate the sounds in pipes and trunks (Bacon: "We have also means to convey sounds in trunks and pipes, in strange lines and distances"). I will blame the engraver for the embarrassing likeness of this marvel of the future to a tin can telephone.

A sound-house is shown next to letter m together with a selection of bell-like objects (“Divers instruments of music likewise to you unknown, some sweeter than any you have, together with bells and rings that are dainty and sweet”) and a viola da gamba. The elderly gentleman with the walking stick just right of the sound-house seems to be demonstrating a hearing-aid (“We have certain helps which set to the ear do further the hearing greatly”).

(Thanks to Gert Sylvest)








If much of 2008-2009 was spent developing the concepts on which to construct New Atlantis, 2009-2010 has been mostly dedicated to researching and implementing the basic programming to make the world function. As mentioned above, one of the main aims of this project is to explore possibilities for augmenting the audio capacity of a virtual 3d environment. In order to accomplish this, it was decided to bundle a powerful and familiar audio programming environment, Pure Data (Pd), with a 3d visual environment, Panda3D. These environments were chosen because they are open source and already in use in our institutions. The idea is that once the basic structure of New Atlantis is established, students (including future generations of students) will be able to create 3d models and the sound-generating programs that accompany them, thus participating in the process of inhabiting the world.





History of the Collaboration

http://transatlab.net/

The encounter between the School of the Art Institute of Chicago and Locus Sonus (ESA Aix, ENSA Nice (2004-2010), ENSA Bourges) dates back to the beginning of the century, when Peter Gena expressed an interest in collaborating with my colleague and co-director of research at LS, Jerome Joy. At the time this collaboration proved impossible due to lack of funding.



SAIC is a big art school, even by American standards, and it has an important art and technology department with a long tradition in sound art.

Locus Sonus is a post-graduate research unit whose area of research is audio in art, more specifically audio art related to space and distance (notably through networked audio). Although ESAA (like all other art schools in France) is much smaller than SAIC, roughly half the faculty are dedicated to its specialization in art and technology (a specialization which was first set up at the end of the 1980s).

So our collaboration, which was among the first projects to be funded by FACE, started in 2005.



We decided to concentrate our exchange on three main aspects of our programs: sound, 3d virtual environments, and mechatronics.

Over the period we have narrowed the field to focus mainly on sound and 3d virtual environments, developing a transdisciplinary approach which has largely been made possible through the PUF exchange, as should become clear.

Over the first 3 years of the FACE partnership we organized student exchanges over one-semester periods, faculty exchanges (mostly for workshops in the defined areas), and remote teaching, which combined video conferencing software with remote desktop, allowing faculty from either side of the Atlantic not only to talk but also to give demonstrations of software and the like.

SAIC participated actively in Locus Sonus's experimental networked audio project (the Locustream project). In fact they were the first to take part in the project, and having a privileged partner gave us a real boost.

The funding also helped us to invest in some equipment; specifically, we were able to set up the same 3d (CAVE) virtual projection space in Aix as in Chicago, creating a shared online virtual environment. (Remember, this was long before 3d cinema hit the high street.)

Generally speaking, both institutions benefitted hugely from the faculty exchange, which provided complementary and enriching expertise in the chosen domains. Students travelling to Chicago benefitted from the scale and scope of the school, while Locus Sonus was able to offer a specialized environment for students interested in audio art. And the fact that we are specifically interested in art that uses networks and remoteness meant that the distance between the institutions could be turned to our advantage.



Another project, which emerged at the end of the first FACE exchange, was LS in SL. Second Life was provoking a lot of interest at that time, and we decided that it might be worth investigating as a virtual workspace for our experimentation. We quickly discovered that Second Life is very poor in terms of the possibilities offered for creating sounds. It does, however, accept streamed audio, and as mentioned above, we are good at streaming audio.

The first project in Second Life was a radio programmed by a graduate student from Chicago, Brett Balogh, which played the Locus Sonus open microphones in Second Life: LS in SL radio. We then went on to develop a program which, using data concerning the positions of objects in Second Life sent from Second Life to our server, could generate complex sound spaces and stream them back into Second Life. This development was shared between Aix and Chicago, splitting the load according to our different competencies. The piece was presented at the Seconde Nature festival in Aix-en-Provence.






Team

Peter Gena

Peter Sinclair

Ben Chang

Ricardo Garcia

Robb Drinkwater

Gonzague Defos de Rau

Margarita Benitez

Anne Laforet

Jerome Joy

Jerome Abel

Eddie Breitweiser

Sébastien Vacherand





Birth of New Atlantis

Dissatisfied with Second Life for various technical and ethical reasons, we decided to develop our own virtual world dedicated to experimentation with sound in a virtual environment.

  • Like Second Life, New Atlantis is a multiuser virtual world with a downloadable client which runs locally on your personal computer but connects to other users via a dedicated server.
  • Unlike Second Life, New Atlantis combines a sophisticated audio programming environment with a 3d virtual environment, which allows us to generate complex audio synthesis in real time. Also unlike Second Life, there are no avatars, so you are more concerned with the environment itself than with how you look in it.





Presentation

New Atlantis is built from four main components:

  1. A client-server system: this makes the world multi-user.
  2. A 3d rendering environment, the NAV (New Atlantis Viewer): the world, its esthetics, interactivity, rules, etc.
  3. A digital audio environment, allowing the calculation of a relatively complex audio environment in correlation with the visual 3d space.
  4. A pathfinding system, which calculates all the trajectories between sound sources and the listener, especially in relation to complex virtual architecture.

Aside from this work on the basic structure, it has been necessary to create 3d models of sound-producing objects, the audio programs which go with them, and architectural elements to structure the world.




Objectives

  • To create an environment which will allow students to apprehend telematic methods of spanning physical, geographical, and cultural boundaries.
  • To question the current status of audio in the domain of online virtual environments, and to influence its evolution.
  • To provide a communal "sand pit" in which to experiment with ways in which virtual objects can relate to sound.



So we began with the text by Francis Bacon, "New Atlantis", an extract from which can be found above, which describes a utopian world filled, among other things, with incredible audio phenomena. Using this text as a starting point, we have defined types (or classes) of objects to be represented in the visual space, each of which can have "concordant" digital audio processes. A hypothetical code sketch of these classes follows the two lists below.

  • 1. Sound Objects (sound sources)
    These can integrate interaction; they are mobile and can be moved by the user.
  • 2. Sound Spaces
    These resonate or reverberate when activated by a sound object which is introduced into them.
  • 3. Helps
    An accessory which the listener can wear: ear plugs, a listening trumpet, a fish bowl over the head; something which modifies the sound, but only for that user.
  • 4. Sound Pipes
    These transmit sound from one place to another (like a long pipe) without diminishing amplitude in relation to distance.
  • 5. Zones
    These create an effect locally (other than resonance) when a sound object is introduced.



Other ideas which we wish to implement and which do not figure in the original text:

  • 6. Microphones (openings onto the physical world)
    Streams: those already existing in the Locus Sonus project, and also the possibility of transmitting a sound into a given physical space in order to recover its acoustic reverberation.
  • 7. Auras
    Something like a Help, but which also influences the audition of other users in close proximity.
  • 8. Voice
    The possibility of incorporating the user's own voice: a sort of ventriloquism which allows the user to place their streamed voice somewhere and listen to the resulting modified sound from another position.
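
To make the taxonomy concrete, here is a minimal, hypothetical Python sketch of these classes. The names and attributes are our own illustration, not the project's actual API.

    class NAEntity:
        """Anything in New Atlantis that participates in the audio scene."""
        def __init__(self, name, position=(0.0, 0.0, 0.0)):
            self.name = name
            self.position = position

    class SoundObject(NAEntity):        # 1. a mobile sound source
        movable = True

    class SoundSpace(NAEntity):         # 2. resonates when a source is introduced
        def __init__(self, name, reverb_time_s=2.0, **kw):
            super().__init__(name, **kw)
            self.reverb_time_s = reverb_time_s

    class Help(NAEntity):               # 3. worn by the listener; affects them only
        affects_others = False

    class SoundPipe(NAEntity):          # 4. carries sound without distance loss
        def __init__(self, name, endpoints, **kw):
            super().__init__(name, **kw)
            self.endpoints = endpoints  # the two places the pipe connects

    class Zone(NAEntity):               # 5. applies a local effect to entering sources
        pass

    class Microphone(NAEntity):         # 6. an opening onto the physical world
        def __init__(self, name, stream_url, **kw):
            super().__init__(name, **kw)
            self.stream_url = stream_url

    class Aura(Help):                   # 7. like a Help, but heard by nearby users too
        affects_others = True

    class Voice(NAEntity):              # 8. the user's own streamed voice, re-placed
        pass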



An example (by Benoît Espinola)

Spheric, a hollow sphere that absorbs sound. Ricardo Garcia made the 3D object and Benoît Espinola programmed its Pure Data patch.







Research aspects

At the time of writing, work is just beginning on this project, which is planned to develop over the next 3 years. We have concentrated our efforts on choosing basic principles concerning the way in which the world is to function: the foundations, if you will, of the world. Various aspects of the task have been discussed in detail: credibility of interaction as opposed to creative liberty; esthetic realism as opposed to the imaginary. Many aspects of the project remain to be discussed and to be verified in practice. A consensus exists in as much as everyone agrees that the world should have an abstract form while maintaining a certain acoustic credibility in relation to space, distance, etc. It is clear that the objective is not to concentrate our efforts on realistic simulation; at the same time, the user needs to "believe" in the relationship which is established between the visual and the audio synthesis.



The question was raised as to whether the notion of personalized avatars (predominant in Second Life) should exist or not. The decision is to use "camera vision", thus focussing participants' efforts on the creation of sound as opposed to the visual esthetics of an avatar. The question remains as to whether the user is represented visually in the world or not, and if so how, or indeed whether the presence is purely audio. Lengthy discussions have also taken place concerning the manner in which the audio is calculated in relation to the space and the degree of complexity which is manageable or desirable; however, that discussion is a little too technically involved to be reproduced here.






Architecture of New Atlantis



http://locusonus.org/documentation/img/PROJETSLAB/newatlantis/NewAtlantis_SoftwareArchitecture.jpg

http://locusonus.org/documentation/img/PROJETSLAB/newatlantis/NewAtlantis_ClientServerArchitecture.png






Pathfinding

A lot of time has been spent defining the rules of the world, deciding on software structures, interaction, navigation - the basic architecture so to speak.

One of the most important challenges stems from our decision that all elements in the world are to have an effect on sound: all the virtual spaces will have acoustics, and so will surfaces ("textures", as they are called in the jargon), openings, etc. This means that the path between each movable, sound-producing object and the listener has to be calculated and updated all the time, so that the chain of audio "effects", the accumulation of different reverberant spaces, can be created dynamically.



Because the idea is to give the User a rich and convincing (we do not say realistic, nor strive for a specific acoustic realism) experience in regard to their virtual position in the world, their orientation to the surrounding sound(s), and the effect of the surrounding and intervening architecture, it is necessary to treat the "buildings" not only as geometric solids but also as "acoustic solids". In this sense the idea is to use the architecture as a material that can affect, occlude, transmit, and transform sound, always with respect to the User's orientation in and around such spaces.



In the very early stages of the project, standard methods were worked out to simulate/provide acoustic cues to the User/Listener about distance from a sound. It was also worked out how to, at least approximately, provide natural-sounding reverberation of enclosed spaces (including variable surface treatments). While at present the system creates a reasonable and quite pleasing approximation of acoustic reverberation, further work is needed to be able to extract the true geometry of the modeled 3d spaces and create a heightened sense of sonic accuracy. It further followed that it is fairly easy to apply non-standard filtering (the Helps) to such spaces.
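
By way of illustration, the following minimal Python sketch shows the kind of distance cues described above: an inverse-distance amplitude rolloff plus a lowpass cutoff that falls with distance, crudely imitating air absorption. The formula and constants are our own assumptions, not the values used in New Atlantis.

    def distance_cues(distance_m, ref_dist=1.0, max_cutoff_hz=18000.0):
        """Return (gain, lowpass cutoff in Hz) for a sound distance_m away."""
        d = max(distance_m, ref_dist)
        gain = ref_dist / d                              # inverse-distance rolloff
        cutoff = max_cutoff_hz * (ref_dist / d) ** 0.5   # gentler high-frequency loss
        return gain, cutoff

    print(distance_cues(1.0))    # (1.0, 18000.0): full level at the reference distance
    print(distance_cues(16.0))   # much quieter and duller sixteen metres away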



http://locusonus.org/documentation/img/PROJETSLAB/newatlantis/NewAtlantis_pathfinding.png



We decided that "doors" and "windows" need not be treated as such acoustically, though they can of course be rendered in 3d models. In fact it was decided that it is desirable that both sound artists and 3d designers treat these simply as "apertures", so as to include holes in floors and ceilings, small cracks and crevices, or any other place from which sound can emit.



The next step was to tackle the challenge of how to handle the possible multitude of apertures, and possibly intervening spaces, in relationship to the User. In all but the simplest scenario, where the User and a sound source are within the same space, even the simple relationship of a User, a sound source, and an aperture between them (e.g. a sound in a room with one doorway and the User outside) poses some interesting challenges.



After examining the problem, it became clear that whether the scenario was as simple as one room with one aperture (a doorway), or a much more complicated one, as in multiple rooms with numerous apertures and possibly multiple spaces leading to others, in both cases the issue of "sound to User" was a classic "path finding" problem, common to computer games and VR. We concluded that more research was needed into the implementation of "path finding" and "least-cost routing" algorithms, and it was agreed that various team members would continue to look into standard implementations such as A-star, Dijkstra's, and others.



We determined that the most challenging issue was creating methods to reasonably simulate the effects of multiple spaces, possibly with multiple apertures, and how that is conveyed (virtually/sonically) to the User.



Ben Chang presented (after considerable work) his first attempt at a path-finding algorithm written in Python. One of the more interesting points that he offered to the group was that, unlike the problem most game developers face, namely finding the shortest path, our challenge was a bit different, because what we needed to do was find all the paths. To put this in context, imagine that there is a sound, a listener, and a number of rooms between them. Furthermore, imagine that some of these rooms are connected by windows and doors (apertures), while others are not, and that there could very well be a "chain" of rooms, one leading to the next, leading to the listener.



As it turns out, whatever the case, whether one room with one opening or many rooms with dozens, the listener must always hear the sound arriving from all the possible pathways. This in fact turned out to be a good thing, as it meant that, unlike the classic problem in games, we do not need the computationally expensive step of calculating "the least-cost path"; rather, we need only calculate (still computationally expensive) the tree of all the paths from a sound to the listener. To this end Ben Chang devised a rather elegant (although not especially efficient) program which, given a description of a series of rooms and the possible pathways in and out of each, finds all the possible paths from a point in one room to the Listener. Whatever the efficiency of the algorithm, we believe that with the smallish arrangements of rooms we have been talking about it will not really matter. The cost would come in if we started expanding this, building whole cities, where we would want some kind of audibility culling to skip whole parts of the world that make irrelevantly small contributions, like an audibility tree or something; however, that is a pretty far-off prospect.
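
The following is a minimal sketch of the all-paths idea, not Ben Chang's actual code: rooms are nodes, apertures are edges, and a depth-first search enumerates every loop-free chain of rooms from the sound's room to the listener's room.

    def all_paths(apertures, source, listener, path=None):
        """Yield every loop-free chain of rooms from source to listener.
        `apertures` maps a room name to the set of rooms it opens onto."""
        path = (path or []) + [source]
        if source == listener:
            yield path
            return
        for neighbour in apertures.get(source, ()):
            if neighbour not in path:          # never revisit a room: no cycles
                yield from all_paths(apertures, neighbour, listener, path)

    # Room A opens onto B and C; both open onto the listener's room L.
    rooms = {"A": {"B", "C"}, "B": {"L"}, "C": {"L"}}
    for p in all_paths(rooms, "A", "L"):
        print(" -> ".join(p))                  # A -> B -> L and A -> C -> L

Each yielded chain corresponds to one sequence of acoustic treatments to be applied in series before the sound reaches the listener.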



By way of demonstration, Ben Chang described a few possible scenarios, e.g. room A has apertures 1 & 2, where 1 leads to room B and 2 leads to room C, and he showed how the algorithm solved for them. He then went further, not only explaining how it worked in code but also showing the students how to test it for robustness, and challenging them to come up with different configurations of possible spaces that might break the algorithm. In fact, students found that (apart from a few aberrant cases) it was impossible to do so (see note 2 below). Thus, with a relatively robust path-finding algorithm in hand, Robb Drinkwater proposed that what was needed next was a way for 3d modelers not just to 'describe' the connections between rooms (A -> 1 -> B) but to actually model them.



From its inception it has been imagined that participants in the New Atlantis project would not be just passive Users but also content creators, both sonic and 3d. Given the latter, there is a mechanism whereby 3d designers can create and upload models of their virtual spaces (which Users can access and place around the world). Robb's proposal, therefore, was to use a mechanism familiar to 3d designers to "mark" or "tag" where in their 3d models the apertures, e.g. doorways and windows, should be. To this end he proposed using 'empty' Objects, a feature of Blender 3d (it was never determined whether 3DSMax or Maya has an equivalent), to mark, in 3d coordinates, the aperture between rooms (as well as to designate, in the description field, the direction between each, another feature of the protocol).



Moving ahead, we (Margarita Benitez and Robb Drinkwater) worked up models of various configurations of different 'buildings' with different arrangements of rooms (some of which we based on Ben's lecture and the models he and the students came up with), each with our proposed "aperture markers". From there we exported these models from Blender, in the usual Egg format that Panda3D requires, and in turn imported them into Panda3D. Once in Panda3D we used various methods to attempt to extract this information (in Python). As it turned out, Panda3D does not allow (or at least did not as of January 2010) direct access to this data. However, we did eventually find ways (if somewhat complicated) to extract this data in a form that can be used in Python. (It should be pointed out here that this method is less than optimal. A better solution would be a program that could take three-dimensional models and parse them to determine the coordinates of the various apertures. However, this problem is nontrivial from a programming standpoint, and no team member has yet stepped up to offer a solution.)
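
As an illustration of the tagging idea, the following hedged Python sketch assumes the apertures are exported as empty nodes whose names begin with "aperture"; Panda3D's scene graph can then be searched for them after the Egg file is loaded. The file name and the naming convention are our assumptions.

    from direct.showbase.ShowBase import ShowBase

    base = ShowBase(windowType="none")           # no window is needed just to parse
    model = base.loader.loadModel("rooms.egg")   # hypothetical exported model

    # Collect every node whose name marks an aperture, with its position
    # expressed in the model's own coordinate space.
    for node in model.findAllMatches("**/aperture*"):
        print(node.getName(), node.getPos(model))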



In the meantime, Peter Sinclair worked out the mechanics of using the Pd sound engine to dynamically build patches. And while we were not able to get to the point where the NAV could parse a model and send this data to Pd when it loaded, there is every reason to believe it can, simply given more time and engineering.






NAV server

Description:

NAV (New Atlantis Viewer) is the 3d viewer of this project.



Contents:

  1. A python server using Panda3d's networking libraries, which we call the NavServer. It connects clients and receives data from them, then re-sends that data to all clients other than the sender (see the sketch after this list).
  2. A python client, which we call the NavClient. It connects to the NavServer and sends and receives data to/from it. It also connects to the http server that executes php requests in order to know what to load, downloading models if needed, saving, etc. Moreover, the NavClient starts a Pure Data (Pd) patch and uses the OSC protocol on localhost to communicate between the viewer and the sound engine (written in Pd). This patch is rewritten when the NavClient starts, in order to overwrite the OSC port chosen by the user; the OSC port is the NavClient's port + 1 (a sketch of this link appears in the NAV Client section below).
  3. A mysql server that is used to record all attributes of loaded models (saving positions, orientations, etc.).
  4. An http server that allows a shared directory where we can put and download new models. Optionally, to run NAV on a non-local network, one can use an http interface to upload models. In this case, when uploading a model, the php file creates an instance of the NavClient that connects to the NavServer in order to describe the new upload, so that the server can send this information to the clients, and the clients will automatically download the new model. The download is also recorded in the CMS downloads table, so it appears in the download section under the category "NAV".
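
As an indication of what item 1 involves, here is a minimal sketch of the relay pattern using Panda3d's networking classes. The port number and the relay logic are illustrative assumptions, not the NavServer's actual code.

    from panda3d.core import (QueuedConnectionManager, QueuedConnectionListener,
                              QueuedConnectionReader, ConnectionWriter,
                              PointerToConnection, NetAddress, NetDatagram)

    manager = QueuedConnectionManager()
    listener = QueuedConnectionListener(manager, 0)
    reader = QueuedConnectionReader(manager, 0)
    writer = ConnectionWriter(manager, 0)
    clients = []

    rendezvous_socket = manager.openTCPServerRendezvous(9099, 1000)  # port, backlog
    listener.addConnection(rendezvous_socket)

    def poll_once():
        # Accept any newly arrived client and start reading from it.
        if listener.newConnectionAvailable():
            rendezvous, address = PointerToConnection(), NetAddress()
            new_conn = PointerToConnection()
            if listener.getNewConnection(rendezvous, address, new_conn):
                conn = new_conn.p()
                clients.append(conn)
                reader.addConnection(conn)
        # Relay each received datagram to every client except its sender.
        while reader.dataAvailable():
            datagram = NetDatagram()
            if reader.getData(datagram):
                sender = datagram.getConnection()
                for c in clients:
                    if c != sender:
                        writer.send(datagram, c)

poll_once would be run periodically (e.g. from Panda3d's task manager) to keep all connected clients in sync.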






NAV Client

What is more, since the idea is that the world is user-participative, i.e. that it is the art student community who create the sound objects and spaces in the world, the whole system itself has to be open and updatable.

We have a prototype version running, and we are now cleaning up and optimizing the code.
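
To illustrate the viewer-to-sound-engine link described in the NAV server section above (OSC on localhost, with the Pd port one above the NavClient's), here is a minimal sketch using the python-osc package. The package choice, the port number, and the message address are our assumptions, not necessarily what the NavClient actually sends.

    from pythonosc.udp_client import SimpleUDPClient

    NAVCLIENT_PORT = 9000                     # hypothetical port chosen by the user
    PD_PORT = NAVCLIENT_PORT + 1              # the convention noted above

    pd = SimpleUDPClient("127.0.0.1", PD_PORT)
    # Tell the Pd sound engine where a sound object now sits in the world.
    pd.send_message("/nav/object/position", ["spheric01", 1.5, 0.0, 2.0])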

http://code.google.com/p/new-atlantis/



http://locusonus.org/documentation/img/PROJETSLAB/newatlantis/NewAtlantis_NAV.jpg



http://locusonus.org/documentation/img/PROJETSLAB/newatlantis/NavScreenshot.jpg






Research and documentation

A last important point is that the whole project is open source and will be available, along with documentation, to anyone who wants it, so hopefully people other than ourselves will also benefit. Along the same lines, Anne Laforet, a Locus Sonus researcher, is writing up the results of the research in detail, so that all the discussions concerning both the technical and esthetic aspects of the research will be thoroughly documented.






New Atlantis documentation (Howto)

Go to this page: New Atlantis documentation 1 (in French only)





http://locusonus.org/documentation/img/PROJETSLAB/newatlantis/transatlab.jpg