
Report from LAC (Linux Audio Conference) 2008, Köln, Feb. 28 to March 2

Alejo Duque

This year I had the chance to attend a conference I had only followed via audio streams over the last three years.

For the first time, the conference took place at KHM, the school of arts and media founded by Prof. Siegfried Zielinski.

The previous editions had always taken place at the ZKM and in Berlin.

Frank Barknecht and Martin Rumori were the organizers of this 6th edition, which had a strong presence of people from the Pd (Pure Data) community. Among others, we had the chance to listen to Miller Puckette, Hans-Christoph Steiner, Georg Holzmann, Malte Steiner, and IOhannes m zmölnig.

There were two presentations about Ambisonics: one encouraged a DIY system, while the other presented the CUBEmixer (a realtime universal mixing and mastering tool with room simulation for multichannel speaker systems or binaural production, built on a Pure Data control system), developed by the IEM in Austria to exploit their 24-speaker sound spatialization room.

The whole list of paper presentations can be found here:

Hans-Christoph Steiner presented a project developed for The New York Times' new building in Manhattan, a very complex piece with the objective of bringing back to life headlines, captions, and sentences from the newspaper's vast archive. The work makes use of embedded Linux devices to which LCD screens are attached, 562 in total, hanging on two walls in the hall of the new building. The sound part of the piece is generated by sound processing (run by Pda, Pure Data anywhere) and by relays that tick at different variable rhythms, giving a physical tone to the sounds coming from little speakers on the back of each node.

Miller Puckette participated with both a workshop and a keynote. In the workshop he presented patches for processing guitar strings individually; during his keynote he gave a sincere speech in which computers were not at all praised. On the contrary, he clearly stated that art made with computers is far from "human" art:

"whether or not you project your screen, when you sit down and present a work using a computer you are making art that is about your computer, not really about humans".

He began the keynote by saying he wouldn't use a computer because they get in the way of communication. Nothing could be clearer. Apart from this sort of Luddite call coming from a software developer, we had a great chance to understand the reasons for such a stand. Miller declares himself more radical today than before; he claims that Pd is a piece of software that allows users to create what they want without imposing rules, hence its tendency to allow an offspring of patching chaos.

So again, why is Pd so chaotic? Miller says it is because he has a great aversion to software pulling the composer down a certain path, under rules that would make the software cleaner and more like a tidy programming language.

Pd is entirely reactive in design: it doesn't do anything until it gets input, waiting for something to happen, entirely at the discretion of the performer. Well-"educated" and highly trained composers and musicians may experience this as "segmentation faults" in their more traditional programming paradigms and methodologies.

Could this, then, be seen as a Pure Data feature?
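As a loose analogy (in Python, not in Pd itself; the class and method names here are purely illustrative, not any Pd API), the reactive idea above could be sketched like this: objects sit idle and compute only when a message arrives from the performer.

```python
# A loose analogy for Pd's reactive design: objects hold their state
# silently and only compute when a message ("bang") reaches them.
# All names here are illustrative inventions, not part of Pd.

class Metro:
    """Like a Pd object: does nothing until it is sent a message."""
    def __init__(self):
        self.outlets = []          # downstream objects to notify

    def connect(self, obj):
        self.outlets.append(obj)

    def bang(self):                # only now does anything happen
        for obj in self.outlets:
            obj.receive("bang")

class Printer:
    """Collects whatever messages it is sent."""
    def __init__(self):
        self.log = []

    def receive(self, msg):
        self.log.append(msg)

m, p = Metro(), Printer()
m.connect(p)
assert p.log == []     # idle: no input, so nothing has been computed
m.bang()               # the performer acts; the patch reacts
assert p.log == ["bang"]
```

The point of the sketch is only the control flow: nothing runs until `bang()` is called, which mirrors the "waits for the performer" behavior described above.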

On another front, that of contacts, I had the chance to talk with Stefan Kost, one of the main developers of GStreamer, who works for Nokia on the side of embedded Linux devices.

My chat focused on getting feedback about possible devices and solutions for the movable streaming box. As usual, I mentioned Locus Sonus's intention to pay for the development of an integer-only encoder that could run on MIPS/ARM architectures. He was wise and proposed that he could help us put this task to young developers, either through the GStreamer project or via Xiph/Vorbis, in the next Google Summer of Code. I hope to continue my exchange with him and get this idea running. This way the code would be developed by a wider group and would not only get more attention but also economic support from Google.

In the meantime, there is a new test that I believe should be done: install Asterisk and stream, via any of the ported encoders, at the highest possible quality to a server where re-encoding can take place.
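As a sketch of that server-side relay (the tool names, URL, and bitrates below are my own assumptions for illustration, not a tested setup), the re-encoding step could be as simple as pulling the high-quality Ogg/Vorbis stream, decoding it, and re-encoding it at a lower bitrate. Assembling the stages in Python makes each piece easy to swap:

```python
# Sketch of the proposed relay test: pull the high-quality stream from the
# box, decode it to WAV, and re-encode at a lower bitrate on the server.
# The tools assumed here (curl, plus oggdec/oggenc from vorbis-tools) and
# the URL and bitrate are illustrative, not a verified configuration.

def relay_pipeline(source_url, out_file, kbps=32):
    """Return the three stages of a decode/re-encode relay as argv lists."""
    fetch = ["curl", "-s", source_url]                  # pull incoming stream
    decode = ["oggdec", "-o", "-", "-"]                 # Ogg/Vorbis -> WAV on stdout
    encode = ["oggenc", "-b", str(kbps), "-o", out_file, "-"]  # re-encode
    return [fetch, decode, encode]

stages = relay_pipeline("http://box.example.org:8000/stream.ogg", "relay.ogg")
# Joined with pipes, this would read roughly:
#   curl -s http://... | oggdec -o - - | oggenc -b 32 -o relay.ogg -
print(" | ".join(" ".join(s) for s in stages))
```

Once something like this works by hand, the same decode/re-encode step could be hung off whatever Asterisk hands over, with only the fetch stage changing.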

About the music performances I don't have much to tell. One of the strongest impressions worth sharing is that composers should avoid being tied to their own home-built instruments: one often ends up listening to a catalog of "effects" from the new tool, a typical technological masturbation or repetition of the experiment, instead of a musical piece.

Here are some of the pictures:

Do remember this is a wiki, so feel free to help me edit it.