The installation is accessible through two coupled access points. The first exists in real space, where visitors interact with a virtual organic projected onto a screen via a special interface box. The second uses the world wide web as its user interface.
In both systems, users evolve a three-dimensional organic object created using genetic algorithms. The organic is defined by a genom, a set of components, which is successively mutated by the users. Out of six randomly generated mutations, users select one, which in the succeeding steps becomes the starting point for new mutations. In this way users choose a thread through a space of approximately 10^80 possible forms.
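The selection scheme described above can be sketched as a simple interactive evolution loop. The genome representation and mutation operator here are illustrative placeholders, not the installation's actual component set:

```python
import random

def mutate(genome):
    """Return a copy of the genome with one randomly perturbed parameter.
    (Hypothetical stand-in for the installation's component mutations.)"""
    g = dict(genome)
    key = random.choice(list(g))
    g[key] += random.uniform(-0.1, 0.1)
    return g

def evolution_step(genome, n_offspring=6):
    """Generate six random mutations; the user's pick seeds the next step."""
    return [mutate(genome) for _ in range(n_offspring)]

genome = {"thickness": 0.5, "curvature": 0.2, "spiral_angle": 137.5}
offspring = evolution_step(genome)
chosen = offspring[0]                 # in the installation, the user selects one
next_offspring = evolution_step(chosen)
```

Iterating this choice six options at a time is what lets users trace a single thread through the vast space of possible forms.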
In real space, users additionally change the shape and dynamic behavior of the life-like organic object via an interface box. Both systems are coupled and operate on the same data set constituting the genom. Actions in the web space affect the real space and vice versa. When a change happens on the web, the organic in the real space slowly morphs towards the web selection; a change in real space directly affects the next web action.
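The gradual morph of the real-space organic towards the web selection can be modelled as a per-frame interpolation of numeric genome parameters. The rate constant is an assumption for illustration, not a documented value of the installation:

```python
def morph_step(current, target, rate=0.05):
    """Move each numeric genome parameter a small step toward the web
    selection, so the real-space organic morphs gradually rather than
    jumping. `rate` is an assumed per-frame interpolation factor."""
    return {k: v + rate * (target[k] - v) for k, v in current.items()}
```

Applied once per animation frame, this converges smoothly on the target genome while remaining responsive to new web selections that arrive mid-morph.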
Both sound and projection relate in equal parts to the same underlying abstract structure, which they make palpable to the user. The sound acoustically represents selected properties of the genoms, i.e. their structure, position, and behavior, in a non-arbitrary way. The easiest way to think of this representation is the metaphor of a musical instrument: a set of rules with associated variables for generating sound, with the possibility of controlling these variables in real time according to the underlying genoms' structures.
As one of the installation's aesthetic goals is the bodily impression the generated object makes on the user, a sound synthesis technique was required that can both render a visible object's genuine sound through all its user-induced alterations in shape and space in a plausible way, and be abstract enough where needed not to duplicate a real-world artefact. The technique of choice is known as physical modelling, which derives the emerging sound from the physical properties of an assumed object, i.e. its shape, material, excitation mode etc.
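As a minimal illustration of the physical-modelling idea, the classic Karplus-Strong algorithm derives a plucked-string sound from two physical assumptions: the string's length (a delay line, determining pitch) and its material losses (a damped averaging filter). This is a generic textbook sketch, not the synthesis engine the installation actually used:

```python
import random

def karplus_strong(freq_hz, duration_s, sample_rate=44100, damping=0.996):
    """Karplus-Strong plucked string: a delay line initialised with noise
    (the excitation) and repeatedly smoothed by a damped two-point
    average (the material losses)."""
    period = int(sample_rate / freq_hz)     # delay length sets the pitch (shape)
    buf = [random.uniform(-1.0, 1.0) for _ in range(period)]
    out = []
    for i in range(int(duration_s * sample_rate)):
        buf[i % period] = damping * 0.5 * (buf[i % period] + buf[(i + 1) % period])
        out.append(buf[i % period])
    return out

samples = karplus_strong(220.0, 0.5)        # half a second of a 220 Hz "string"
```

Changing `period` or `damping` corresponds to altering the modelled object's shape or material, which is exactly the kind of handle the installation needs for user-induced deformations.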
Based on an associative relationship to the genoms' textures, each acoustic representation was first assigned a set of material properties, determining its basic timbre. Second, the genom's shape is taken into account, controlling the representation's basic modes of vibration and its reaction to parameter-induced deformations. Third, the single graphic objects' current spatial positions are mapped to the sound space, rendering their horizontal movement as well as their proximity to the user.
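The third mapping, from an object's spatial position to the sound space, can be sketched as a constant-power pan plus a distance-dependent gain. Parameter names and ranges here are assumptions for illustration, not the installation's actual mapping:

```python
import math

def position_to_pan_gain(x, z, room_width=4.0, max_depth=6.0):
    """Map an object's horizontal position x (0..room_width) to a
    constant-power stereo pan, and its depth z to a distance gain,
    so nearer objects sound louder. All ranges are illustrative."""
    pan = max(0.0, min(1.0, x / room_width))      # 0 = hard left, 1 = hard right
    left = math.cos(pan * math.pi / 2)
    right = math.sin(pan * math.pi / 2)
    gain = 1.0 / (1.0 + max(0.0, z) / max_depth)  # proximity to the user
    return left * gain, right * gain
```

The constant-power law keeps perceived loudness stable as an object moves horizontally; the same idea extends to the four-channel projection described later.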
It is possible, and intended, to handle the installation as flexibly as a musical instrument consisting of an image and a sonic component. Observation of the system's behavior during exhibitions has shown its ability to respond to users' varying approaches, playing styles, and temperaments in a differentiated and recognizable way.
A real-time application handles the life-like animation of the object and reacts to user actions. Users interact with an interface box: the interface's eight sliders manipulate the object's dynamic parameters and the camera position, and three additional buttons trigger mutations. The application and the web server communicate via an IPC/TCP connection to distribute changes of the genom.
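Distributing genome changes over a TCP stream requires some message framing, since TCP delivers a byte stream rather than discrete messages. A minimal sketch, assuming a JSON payload with a 4-byte length prefix (the actual wire format of the installation is not documented here):

```python
import json

def frame(genome):
    """Serialise a genome-update message with a 4-byte length prefix,
    ready to be written to a TCP socket with sendall()."""
    payload = json.dumps({"type": "genome_update", "genome": genome}).encode()
    return len(payload).to_bytes(4, "big") + payload

def unframe(data):
    """Parse one length-prefixed message back into a genome dict."""
    n = int.from_bytes(data[:4], "big")
    return json.loads(data[4:4 + n])["genome"]

msg = frame({"thickness": 0.5})
```

Either endpoint (the real-time application or the web server) can send such a frame whenever its copy of the genom changes, keeping both access points on the same data set.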
A genom is composed of components, each of which defines a single structural or shape property of the organic. There are components that describe the geometry of a single limb and components that arrange limbs according to specified algorithms. Components are designed to incorporate form principles observed in nature, e.g. the proportion of the golden section is used for the simulation of spiral phyllotaxis.
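The phyllotaxis principle mentioned above is easy to sketch: rotating each successive limb by the golden angle (derived from the golden section) produces the evenly packed spiral pattern seen in sunflower heads. The placement function is an illustrative reconstruction, not the installation's code:

```python
import math

GOLDEN_ANGLE = math.pi * (3 - math.sqrt(5))   # ~137.5 degrees, from the golden section

def spiral_phyllotaxis(n, scale=1.0):
    """Place n limbs on a spiral: each element is rotated by the golden
    angle, with radius growing as sqrt(i) so density stays even."""
    points = []
    for i in range(n):
        r = scale * math.sqrt(i)
        theta = i * GOLDEN_ANGLE
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points
```

Because the golden angle is an irrational fraction of a full turn, no two limbs ever line up exactly, which is what gives the arrangement its natural look.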
There is a predefined set of components constituting the gene pool, out of which the actual genom is assembled at random. Assembly operations are insertion, removal, and change in connectivity (crossover) of components. Additionally, each component holds a set of parameters defining details such as thickness, curvature, color, texture etc.
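Random assembly from a gene pool, plus the crossover operation, can be sketched as follows. The component names and parameter set are hypothetical; only the operations (random assembly, crossover) come from the text:

```python
import random

GENE_POOL = ["limb", "spiral", "branch", "twist"]   # illustrative component names

def random_genome(pool=GENE_POOL, min_len=2, max_len=5):
    """Assemble a genome by drawing components from the pool at random,
    each carrying its own detail parameters (thickness, curvature, ...)."""
    return [{"component": random.choice(pool),
             "thickness": random.uniform(0.1, 1.0),
             "curvature": random.uniform(-1.0, 1.0)}
            for _ in range(random.randint(min_len, max_len))]

def crossover(a, b):
    """Change connectivity: swap the tails of two genomes at a random cut."""
    cut = random.randint(1, min(len(a), len(b)) - 1)
    return a[:cut] + b[cut:], b[:cut] + a[cut:]
```

Insertion and removal would simply splice a component into, or delete one from, such a list; together with crossover these give the mutation operators among which the user chooses.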
The moving sound sources were projected quadraphonically, with two speakers in front and two in the rear. The speaker mountings approached average ear level. The sound generation ran on a dedicated configuration of two Apple PPCs and Yamaha sound generators.
SonoMorphis has also been shown during the 11th Stuttgarter Filmwinter, Jan 14th-17th 1999, in the CAVE of the IAO, Fraunhofer Gesellschaft in Stuttgart, where the user interface was located in front of the CAVE; at the Media Summer Festival, Schloss Kapfenburg, 2001; and at the "Art of Immersion" Festival, CAVE der GMD Bonn, 2002.
Bernd Lintermann (linter@zkm.de).