New Atlantis

http://locusonus.org/documentation/img/NEWATLANTIS/NewAtlantis.jpg

Introduction

New Atlantis is an ongoing research project shared between the Ecole superieure d'Art d'Aix (ESAA) and the School of the Art Institute of Chicago (SAIC), which invites both students and faculty to explore the possibilities of using virtual reality and other game-based technologies in a networked, multiuser environment. The originality of New Atlantis lies in its focus on the acoustics of virtual spaces and a heightened awareness of the potential of sound as a means of expression. Progress is being made despite the herculean programming task involved in starting from scratch: designing an interface, creating algorithms to generate the physical and acoustic space, and allowing interaction with the user.

The title refers to Francis Bacon's text "New Atlantis" (written around 1624 and published posthumously in 1627), which describes a utopian world filled, among other things, with incredible audio phenomena.

We have also sound-houses, where we practice and demonstrate all sounds and their generation. We have harmony that you have not, of quarter-sounds and lesser slides of sounds. Divers instruments of music likewise to you unknown, some sweeter than any you have; with bells and rings that are dainty and sweet. We represent small sounds as great and deep, likewise great sounds extenuate and sharp; we make divers tremblings and warblings of sounds, which in their original are entire. We represent and imitate all articulate sounds and letters, and the voices and notes of beasts and birds. We have certain helps which, set to the ear, do further the hearing greatly; we have also divers strange and artificial echoes, reflecting the voice many times, and, as it were, tossing it; and some that give back the voice louder than it came, some shriller and some deeper; yea, some rendering the voice, differing in the letters or articulate sound from that they receive. We have all means to convey sounds in trunks and pipes, in strange lines and distances.

http://oregonstate.edu/instruct/phl302/texts/bacon/atlantis.html

If much of 2008-2009 was spent developing the concepts on which to construct New Atlantis, 2009-2010 has been mostly dedicated to researching and implementing the basic programming to make the world function. As mentioned above, one of the main aims of this project is to explore possibilities for augmenting the audio capacity of a virtual 3d environment. In order to accomplish this, it was decided to bundle a powerful and familiar audio programming environment (Pure Data, or Pd) with a 3d visual environment (Panda3D). These environments were chosen because they are open source and already in use in our institutions. The idea is that once the basic structure of New Atlantis is established, students (including future generations of students) will be able to create 3d models and the sound-generating programs to accompany them, thus participating in the process of inhabiting the world.
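As a rough sketch of how this bundling can work in practice (the patch name and launch command here are hypothetical, not the project's actual startup code), the viewer side can start the Pd sound engine as a child process and then talk to it over localhost:

    import subprocess

    # Hypothetical launch of the bundled Pd sound engine: run Pd headless
    # and open the world's master patch. The 3d viewer then communicates
    # with it over localhost (see the NAV server section below).
    pd_process = subprocess.Popen(["pd", "-nogui", "-open", "NewAtlantis-engine.pd"])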

History of the Collaboration

http://transatlab.net/

The encounter between the School of the Art Institute of Chicago and Locus Sonus (ESA Aix, ENSA Nice (2004-2010), ENSA Bourges) dates back to the beginning of the century, when Peter Gena expressed an interest in collaborating with my colleague and co-director of research at Locus Sonus, Jerome Joy. At the time this collaboration proved impossible due to lack of funding.

SAIC is a big art school, even by American standards; it has an important art and technology department with a long tradition in sound art.

Locus Sonus is a post-graduate research unit whose area of research is audio in art, more specifically audio art related to space and distance (notably through networked audio). Although ESAA (like all other art schools in France) is much smaller than SAIC, roughly half its faculty are dedicated to its specialization in art and technology (a specialization first established at the end of the 1980s).

Our collaboration, which was among the first projects to be funded by FACE, started in 2005.

We decided to concentrate our exchange on three main aspects of our programs: sound, 3d virtual environments, and mechatronics.

Over the period we have narrowed the field to focus mainly on sound and 3d virtual environments, developing a transdisciplinary approach, which has largely been made possible through the PUF exchange, as should become clear.

Over the first three years of the FACE partnership, we organized student exchanges lasting one semester; faculty exchanges, mostly for workshops in the defined areas; and remote teaching, which combined videoconferencing software with remote desktop sharing, allowing faculty on either side of the Atlantic not only to talk but also to give demonstrations of software and the like.

SAIC participated actively in Locus Sonus's experimental networked audio project (the Locustream project). In fact they were the first to take part in the project, and having a privileged partner gave us a real boost.

The funding also helped us to invest in some equipment; specifically, we were able to set up the same 3d virtual projection space (a CAVE) in Aix as in Chicago, creating a shared online virtual environment. (Remember, this was long before 3d cinema hit the high street.)

Generally speaking, both institutions benefited hugely from the faculty exchange, which provided complementary and enriching expertise in the chosen domains. Students travelling to Chicago benefited from the scale and scope of the school, while Locus Sonus was able to offer a specialized environment for students interested in audio art. And the fact that we are specifically interested in art that uses networks and remoteness meant that the distance between the institutions could be turned to our advantage.

Another project, which emerged at the end of the first FACE exchange, was LS in SL. Second Life was provoking a lot of interest at that time, and we decided it might be worth investigating as a virtual workspace for our experimentation. We quickly discovered that Second Life is very poor in terms of the possibilities it offers for creating sounds. It does, however, accept streamed audio, and as mentioned above we are good at streaming audio.

The first project in Second Life was a radio, programmed by a graduate student from Chicago (Brett Balogh), which played the Locus Sonus open microphones inside Second Life. We then went on to develop a program which used data concerning the positions of objects in Second Life, sent online from Second Life to our server, to generate complex sound spaces and stream the result back into Second Life. This development was shared between Aix and Chicago, splitting the load according to our different competencies. The piece was presented at the Seconde Nature festival in Aix en Provence.




Team

Peter Gena

Peter Sinclair

Ben Chang

Ricardo Garcia

Robb Drinkwater

Gonzague Defos de Rau

Margarita Benitez

Anne Laforet

Jerome Joy

Jerome Abel

Eddie Breitweiser



Birth of New Atlantis

Dissatisfied with Second Life for various technical and ethical reasons, we decided to develop our own virtual world dedicated to experimentation with sound in a virtual environment.

  • Like Second Life, New Atlantis is a multiuser virtual world with a downloadable client which runs locally on your personal computer and connects to other users via a dedicated server.
  • Unlike Second Life, New Atlantis combines a sophisticated audio programming environment with a 3d virtual environment, which allows us to generate complex audio synthesis in real time. Also unlike Second Life, there are no avatars, so you are more concerned with the environment itself than with how you look in it.



Presentation

New Atlantis is built from four main components:

  1. A client-server system - this is what makes the world multiuser.
  2. A 3d rendering environment - the NAV (New Atlantis Viewer): the world, its esthetics, interactivity, rules, etc.
  3. A digital audio environment - allowing the calculation of a relatively complex audio environment in correlation with the visual 3d space.
  4. A pathfinding system - which calculates all the trajectories between sound sources and the listener, especially in relation to complex virtual architecture.

Aside from this work on the basic structure, it has been necessary to create 3d models of sound-producing objects (and the audio programs which go with them) as well as architectural elements to structure the world.




Objectives

  • To create an environment which will allow students to apprehend telematic methods for spanning physical, geographical, and cultural boundaries.
  • To question the current status of audio in the domain of online virtual environments, and to influence its evolution.
  • To provide a communal "sand pit" in which to experiment with ways in which virtual objects can relate to sound.




Architecture of New Atlantis

http://locusonus.org/documentation/img/NEWATLANTIS/NewAtlantis_SoftwareArchitecture.jpg

http://locusonus.org/documentation/img/NEWATLANTIS/NewAtlantis_ClientServerArchitecture.png




Pathfinding

A lot of time has been spent defining the rules of the world and deciding on software structures, interaction, and navigation - the basic architecture, so to speak.

One of the most important challenges stems from our decision that all elements in the world are to have an effect on sound: all the virtual spaces will have acoustics, and so will surfaces ("textures", as they are called in the jargon), openings, etc. This means that the path between each movable, sound-producing object and the listener has to be calculated and updated continuously, so that the chain of audio "effects" - the accumulation of different reverberant spaces - can be created dynamically.
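As a minimal illustration of what such a dynamically built chain might look like (the fields and values here are hypothetical, not the project's actual parameters), each room along a path contributes its own acoustics in series:

    from dataclasses import dataclass

    @dataclass
    class Room:
        """Acoustic description of one virtual space (illustrative fields)."""
        name: str
        reverb_time: float  # seconds of reverberation in this room
        absorption: float   # 0.0 (hard surfaces) to 1.0 (fully absorbent)

    def effect_chain(path):
        """Turn a room-to-room path from sound source to listener into an
        ordered list of effect settings to be applied in series."""
        return [("reverb", room.name, room.reverb_time, 1.0 - room.absorption)
                for room in path]

    # A sound heard two rooms away accumulates both rooms' acoustics:
    hall = Room("hall", reverb_time=2.5, absorption=0.2)
    corridor = Room("corridor", reverb_time=0.8, absorption=0.6)
    print(effect_chain([hall, corridor]))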

Because the idea is to give the User a rich and convincing experience (we do not say realistic, nor strive for a specific acoustic realism) with regard to their virtual position in the world, their orientation to the surrounding sound(s), and the effect of the surrounding and intervening architecture, it is necessary to treat the "buildings" not only as geometric solids but also as "acoustic solids". In this sense the idea is to use the architecture as a material that can affect, occlude, transmit, and transform sound, always with respect to the User's orientation in and around such spaces.

In the very early stages of the project, standard methods were worked out to simulate/provide acoustic cues to the User/Listener about distance from a sound. It was also worked out how to provide, at least approximately, the natural-sounding reverberation of enclosed spaces (including variable surface treatments). While at present the system creates a reasonable and quite pleasing approximation of acoustic reverberation, further work is needed to extract the true geometry of the modeled 3d spaces and create a heightened sense of sonic accuracy. It further followed that it is fairly easy to apply non-standard filtering (the "Helps") to such spaces.
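Those standard distance cues can be approximated very simply. The sketch below (the constants are illustrative, not the values actually used in New Atlantis) attenuates amplitude with distance and lowers a low-pass cutoff, so that distant sounds become both quieter and duller, imitating air absorption:

    def distance_cues(distance_m, min_distance=1.0):
        """Approximate acoustic distance cues: inverse-distance gain and a
        falling low-pass cutoff that imitates high-frequency air absorption."""
        d = max(distance_m, min_distance)
        gain = min_distance / d                 # 1/r amplitude roll-off
        cutoff_hz = 20000.0 / (1.0 + 0.05 * d)  # farther == duller
        return gain, cutoff_hz

    for d in (1, 10, 50):
        gain, cutoff = distance_cues(d)
        print(f"{d:>3} m -> gain {gain:.3f}, low-pass at {cutoff:.0f} Hz")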

http://locusonus.org/documentation/img/NEWATLANTIS/NewAtlantis_pathfinding.png

We decided that "doors" and "windows" need not be treated as such acoustically, though they can still be rendered in 3d models. In fact it was decided that it is desirable for both sound artists and 3d designers to treat these simply as "apertures", so as to include holes in floors and ceilings, small cracks and crevices, or any other place from which sound can emanate.

The next step was to tackle the challenge of how to handle a possible multitude of apertures, and possibly intervening spaces, in relation to the User. In all but the simplest scenario, where the User and a sound source are within the same space, even the simple relationship of a User, a sound source, and one aperture between them (e.g. a sound in a room with one doorway, and the User outside) poses some interesting challenges.

After examining the problem, it became clear that whether the scenario was as simple as one room with one aperture (a doorway), or a much more complicated one with multiple rooms, numerous apertures, and spaces leading into other spaces, the issue of "sound to User" was in both cases a classic "path finding" problem common to computer games and VR. We concluded that more research was needed into the implementation of path-finding and least-cost routing algorithms, and it was agreed that various team members would continue to look into standard implementations such as A* (A-star), Dijkstra's algorithm, and others.

We determined that the most challenging issue was creating methods to reasonably simulate the effects of multiple spaces, possibly with multiple apertures, and how that is conveyed (virtually/sonically) to the User.

Ben Chang presented (after considerable work) his first attempt at a path-finding algorithm written in Python. One of the more interesting points he offered to the group was that, unlike the problem most game developers face - namely finding the shortest path - our challenge was a bit different, because what we needed to do was find all the paths. To put this in context, imagine that there is a sound, a listener, and a number of rooms between them. Furthermore, imagine that some of these rooms are connected by windows and doors (apertures) while others are not, and that there could very possibly be a "chain" of rooms, one leading to the next, ending at the listener.

As it turns out, whether there is one room with one opening or many rooms with dozens, the listener must always hear the sound arriving from all the possible pathways. This in fact turned out to be a good thing, as it meant that, unlike in the classic games problem, we do not need to perform the computationally expensive step of calculating the least-cost path; rather we need only calculate (still computationally expensive) the tree of all the paths from a sound to the listener. To this end Ben Chang devised a rather elegant (although not especially efficient) method which, given a description of a series of rooms and the possible pathways in and out of each, finds all the possible paths from a point in one room to the Listener. Whatever the efficiency of the algorithm, we believe that with the smallish arrangements of rooms we have been discussing it will not really matter. The cost would only become an issue if we started expanding this significantly - building whole cities - where we would want some kind of audibility culling (an audibility tree or something similar) to skip whole parts of the world that make irrelevantly small contributions. However, that is a pretty far-off prospect.
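A minimal sketch of this "all paths" idea (an illustration written for this text, not Ben Chang's actual code; the room names and adjacency format are invented) is a depth-first traversal that simply never revisits a room:

    def all_paths(apertures, source_room, listener_room):
        """Enumerate every acyclic chain of rooms leading from the room
        containing the sound source to the room containing the listener.
        `apertures` maps each room to the rooms its openings lead to."""
        found = []

        def walk(room, path_so_far):
            if room == listener_room:
                found.append(path_so_far)
                return
            for neighbour in apertures.get(room, []):
                if neighbour not in path_so_far:  # never loop back through a room
                    walk(neighbour, path_so_far + [neighbour])

        walk(source_room, [source_room])
        return found

    # Room A opens onto B and C, and both open onto D, where the listener
    # stands; the listener must hear the sound arriving along BOTH routes.
    rooms = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
    print(all_paths(rooms, "A", "D"))  # [['A', 'B', 'D'], ['A', 'C', 'D']]

The "never revisit" check is also what keeps cyclic arrangements of rooms - the kind of configuration one might devise to try to break such an algorithm - from recursing forever.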

By way of demonstration, Ben Chang described a few possible scenarios (e.g. room A has apertures 1 and 2, where 1 leads to room B and 2 leads to room C) and showed how the algorithm solved them. He then went further, not only explaining how it worked in code but also showing the students how to test it for robustness, challenging them to come up with different configurations of possible spaces that might break the algorithm. In fact the students found that (apart from a few aberrant cases) it was impossible to do so (see note 2 below). Thus, with a relatively robust path-finding algorithm in hand, Robb Drinkwater proposed that what was needed next was a way for 3d modelers not just to 'describe' the connections between rooms (A -> 1 -> B) but to actually model them.

From its inception it has been imagined that participants in the New Atlantis project would not just be passive Users but also content creators, both sonic and 3d. Given the latter, there is a mechanism by which 3d designers can create and upload models of their virtual spaces (which Users can access and place around the world). Therefore my proposal was to use a mechanism familiar to 3d designers to "mark" or "tag" where in their 3d models the apertures (e.g. doorways and windows) should be. To this end Robb proposed using 'empty' Objects, a feature of Blender 3d (it was never determined whether 3DSMax or Maya has an equivalent), to mark, in 3d coordinates, the aperture between rooms (as well as to designate, in the description field, the direction between each - another feature of the protocol).
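Assuming a simple naming convention for those marker objects (the 'aperture_<roomA>_<roomB>' form below is purely illustrative; the actual protocol used the description field), the markers can be turned into the room-adjacency data the path-finding needs:

    def apertures_from_markers(marker_names):
        """Build a room-adjacency map from 'empty' object names of the
        (hypothetical) form 'aperture_<roomA>_<roomB>'."""
        graph = {}
        for name in marker_names:
            prefix, room_a, room_b = name.split("_")
            if prefix != "aperture":
                continue
            graph.setdefault(room_a, []).append(room_b)
            graph.setdefault(room_b, []).append(room_a)  # sound passes both ways
        return graph

    print(apertures_from_markers(["aperture_hall_corridor",
                                  "aperture_corridor_studio"]))
    # {'hall': ['corridor'], 'corridor': ['hall', 'studio'], 'studio': ['corridor']}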

Moving ahead, we (Margarita Benitez and Robb Drinkwater) worked up some models of various configurations of 'buildings' with different arrangements of rooms (some based on Ben's lecture and the models he and the students came up with), each with our proposed "aperture markers". From there we exported these models from Blender in the usual Egg format that Panda requires, and in turn imported them into Panda. Once in Panda we used various methods to attempt to extract this information (in Python). As it turned out, Panda did not allow (or at least did not as of January 2010) access to this data. However, we did eventually find ways (if somewhat complicated) to extract this data in a form that can be used in Python. (It should be pointed out that this method is less than optimal. A better solution would be a program that could parse three-dimensional models to determine the coordinates of the various apertures. However, this problem is nontrivial from a programming standpoint, and no team member has yet stepped up to offer a solution.)

In the meantime, Peter Sinclair worked out the mechanics of using the Pd sound engine to dynamically build patches. And while we have not yet reached the point where the NAV can parse a model and send this data to Pd when it loads, there is every reason to believe it can, given more time and engineering.
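By way of illustration (the port, coordinates, and message vocabulary below depend entirely on how the receiving patch is set up; this is not the project's actual engine code), Pd patches can be built dynamically by sending text messages to a patch, for instance from Python to a [netreceive] object:

    import socket

    def pd_send(messages, host="127.0.0.1", port=3000):
        """Send FUDI messages (semicolon-terminated text) to a Pd patch
        listening with [netreceive 3000]."""
        with socket.create_connection((host, port)) as sock:
            for message in messages:
                sock.sendall((message + ";\n").encode("ascii"))

    # Dynamic patching: ask the patch to create a small oscillator chain.
    # (The numbers after "obj" are canvas coordinates; "connect" wires
    # object 0's outlet 0 to object 1's inlets 0 and 1.)
    pd_send(["obj 10 10 osc~ 440",
             "obj 10 40 dac~",
             "connect 0 0 1 0",
             "connect 0 0 1 1"])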




NAV server

Description:

NAV (New Atlantis Viewer) is the 3d viewer of this project.

Contents:

  1. A python server using Panda3d's networking libraries - we call it the NavServer. It connects clients and receives data from them, then re-sends that data to the other (non-sending) clients.
  2. A python client - we call it the NavClient. It connects to the NavServer and sends and receives data to/from it. It also connects to the http server, which executes php requests, in order to know what to load, download models if needed, save, etc. Moreover, the NavClient starts a Pure Data (Pd) patch which uses the OSC protocol on localhost to communicate between the viewer and the sound engine (written in Pd). This patch is rewritten when the NavClient starts, in order to set the OSC port chosen by the user; the OSC port is the NavClient's port + 1 (see the sketch after this list).
  3. A mysql server that is used to record all attributes of loaded models (saving positions, orientations, etc.).
  4. An http server that allows a directory to be shared, where we can put and download new models. Optionally, if one wants to run NAV on a non-local network, one can use an http interface to upload models. In this case, when a model is uploaded, the php file creates an instance of the NavClient that connects to the NavServer to describe the new upload, so that the server can send this information to the clients, and the clients will automatically download the new model. This download is also recorded in the CMS downloads table, so it appears in the download section under the category "NAV".
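As a sketch of the port convention described in point 2 above (using the python-osc package purely for illustration; the project's actual transport code may differ), the viewer side addresses the sound engine like this:

    # pip install python-osc  (an illustrative choice of OSC library)
    from pythonosc.udp_client import SimpleUDPClient

    NAVCLIENT_PORT = 9000          # hypothetical port chosen by the user
    OSC_PORT = NAVCLIENT_PORT + 1  # convention: Pd listens on the client's port + 1

    sound_engine = SimpleUDPClient("127.0.0.1", OSC_PORT)

    # Tell the sound engine where a sound-producing object now sits, so it
    # can update distance cues and the path-finding graph accordingly.
    sound_engine.send_message("/object/position", ["bell_tower", 12.0, 0.0, -3.5])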




NAV Client

What is more, since the idea is that the world is user-participative - i.e. it is the art-student community who create the sound objects and spaces in the world - the whole system itself has to be open and updatable.

We have a prototype version running and we are now cleaning up the code and optimizing.

http://code.google.com/p/new-atlantis/

http://locusonus.org/documentation/img/NEWATLANTIS/NewAtlantis_NAV.jpg




Research and documentation

A last important point is that the whole project is open source and will be available, along with documentation, to anyone who wants it - so hopefully people other than ourselves will also benefit. Along the same lines, Anne Laforet, a Locus Sonus researcher, is writing up the results of the research in detail, so that all the discussions concerning both the technical and esthetic aspects of the research will be thoroughly documented.