(participation: jonny zyka, rainer fuegenstein, lukas kaltenbaeck, harald
bauer, peter koger)
...........................................................................
+> basic ideas
as far as i understood the idea of oudeis (or at least a part of it), one
goal is to draw parallels between the theatre and cyberspace as images of
the 'real' world; travelling around the world (odyssey) becomes the
travelling of information through media; actors and 'virtual actors'
perform on stages around the world, real stages - and cyber-stages (the
client's browser).
but each medium has its own features; for the theatre, time, sequential
performance and synchronization are important characteristics; the
internet 'behaves' the opposite way: transmission is asynchronous, the
order in which different pieces of information arrive is not determined,
and time does not really matter.
applications like realaudio/video, netphone or cu-seeme bypass these
'limitations' to reach a broadcast-like situation (besides, all of these
developments target only one part of the 'cyber-world': the web). -
there is nothing to be said against television, transmitted over whatever
medium, but that is tv anyhow. to 'put on stage' what the internet means
in a 'media' context, using exactly these technical work-arounds may be
an unskilful way.
+> converting audio (voice) to midi - data
visual information is not transmitted as directly as possible (ie. by
video) - why should the sound be?
in the oudeis draft, the actors are represented by moving lights; only
their movements are transmitted over the net, all other (visual)
characteristics of the actual person are lost. the 'virtual actor' is a
converted image of the real actor, translated by the means of transport,
the internet. the 'virtual actor' takes its characteristics from the real
actor on the one side, and from the net on the other (asynchronous,
non-continuous, delayed, ...);
as an analogy to the abstraction of visual information, the speech of the
actor could be converted to midi data; designed for musical instruments,
midi data carries information on pitch and velocity (the touch velocity
on a keyboard, eg. on a piano - nearly the same as volume).
midi data could be transmitted along with tracking data over the net,
fed to a synthesizer / sampler, and midi would become audio again - in a
different 'shape'...
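[a minimal sketch in python of such a voice-to-midi conversion; the pitch
detector, all names and the mapping are assumptions, not part of the
draft:]

import math

# midi note 69 = a4 = 440 hz; 12 semitones per octave
def freq_to_midi_note(freq_hz):
    return int(round(69 + 12 * math.log2(freq_hz / 440.0)))

# map a detected voice frequency and amplitude (0.0-1.0, from some
# hypothetical analysis stage) to a 3-byte midi note-on message
def voice_to_note_on(freq_hz, amplitude, channel=0):
    note = max(0, min(127, freq_to_midi_note(freq_hz)))
    velocity = max(1, min(127, int(amplitude * 127)))  # volume -> velocity
    return bytes([0x90 | channel, note, velocity])

# example: a voice at 220 hz (a3), medium volume
msg = voice_to_note_on(220.0, 0.6)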
this translation of audio information would be another (probably not
unimportant) phase in the process of morphing from the 'real' to the
'virtual' actor.
a nice side-effect is the lowering of technical requirements - the amount
of data produced by midi is minimal compared with digital audio... - and
that reduces network traffic and required bandwidth (besides financial
considerations, a small bandwidth is quite necessary for the eighth
stage, the web clients)
there is one problem left: the words themselves (ie. the content) will
get lost, too. (this may be annoying...)
- but: as mentioned in previous drafts, the text will be visualized on
the stage anyway (localized versions of the text)
[maybe text-to-speech software could be used to make the text audible
- mixed with the 'speech melody' provided by the sampler, this could
become an interesting 'virtual actor's voice']
preserving a relationship between the action on the stage and the wording
displayed (or spoken by the computer) causes an additional requirement:
paragraphs (or even words?) have to be triggered by the actor or somebody
else, eg. the stage-manager.
the variance caused by triggering (which we definitely can't avoid) and
the resulting asynchrony (within one verse only!) should be no real
problem - being a characteristic of the internet, this is probably a
message, too (i think it's not the aim to (try to) hide the 'nature' of
the web)
+> the actor's data / oudeis-protocol
the (virtual) actor's data (=vad) includes:
- tracking information: (hopefully) provided by a tracking system called
'lighting director' by martin professional. the system (for details see
http://www.martin.dk) delivers data via dmx, midi or rs232.
we'll divide the stage into a checkerboard; the number of squares is yet
to be determined (it depends on the expected stage sizes) - the same for
all stages. smaller stages will use smaller squares; this guarantees that
a (real) actor walking from stage left to stage right causes all (his)
'virtual actors' to do the same (and not fall off the stage...)
each square corresponds to a position cue previously programmed into the
light desk (compulite spark; see http://www.compulite.com). - recalling
the cues can be done via midi (a sketch of the square-to-cue mapping
follows after this list).
- bio-sensor: at the moment, nobody seems to know what kind of data comes
out of this thing, or even whether any data will ever come out of it...
- midivox: delivers midi data (surprisingly). - they have no web site,
detailed information to follow...
- special events (triggering of verses, ...): not determined yet. maybe
the actor triggers these events himself (there are midi vests and midi
shoes, too - believe it or not), maybe the stage-manager (or somebody
else) does (eg. on a computer keyboard)
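[a minimal sketch in python of the checkerboard mapping; the grid size
and the normalized coordinates are assumptions - tracking data is scaled
to 0..1 relative to each stage, so a given position maps to the same
square (= position cue) on every stage:]

GRID = 16  # squares per side - the actual number is still to be determined

def position_to_cue(x_norm, y_norm, grid=GRID):
    # clamp to the last square so positions at the far edge stay on stage
    col = min(grid - 1, int(x_norm * grid))
    row = min(grid - 1, int(y_norm * grid))
    return row * grid + col + 1  # assuming cue numbers start at 1

# example: an actor at centre stage -> cue 137 on a 16x16 grid
cue = position_to_cue(0.5, 0.5)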
all data is gathered by the main cpu, reduced, converted and combined
into a proprietary protocol. not determined yet, but it would be
something like:
4 bit: header, start/stop,...
8 bit: position number
8 bit: biosensor
12 bit: midivox data
8 bit: event data
40 bit total per event unit
events should be quantized to a rate of about 15 units per second;
total: 40 bit x 15/s = 600 bps - (that's not too much)
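[a minimal sketch in python of packing/unpacking one 40-bit event unit;
the field order follows the table above, the packing itself is an
assumption:]

def pack_unit(head, position, biosensor, midivox, event):
    unit = (head & 0xf)
    unit = (unit << 8) | (position & 0xff)
    unit = (unit << 8) | (biosensor & 0xff)
    unit = (unit << 12) | (midivox & 0xfff)
    unit = (unit << 8) | (event & 0xff)
    return unit.to_bytes(5, 'big')  # 40 bit = 5 bytes per unit

def unpack_unit(data):
    unit = int.from_bytes(data, 'big')
    return ((unit >> 36) & 0xf,    # header
            (unit >> 28) & 0xff,   # position number
            (unit >> 20) & 0xff,   # biosensor
            (unit >> 8) & 0xfff,   # midivox data
            unit & 0xff)           # event data

# 5 bytes x 15 units per second = 75 bytes/s = 600 bps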
+> asynchronous transmission...
as mentioned before, an important issue is the (seeming?) contradiction
between asynchronous transmission via the net and the synchronous
performance on stage.
on one hand there is no (real) way to transmit a continuous stream of
data over the net (and no way to guarantee synchronization); on the other
hand there has to be a connection between the moving light, the audio and
the text - otherwise the story is no longer told...
-> fortunately the piece itself is divided into paragraphs. each
paragraph is rather short (a few seconds) - this gives us the following
approach to a possible solution:
task I: vad recording and transmission
--------------------------------------------------------------------
* situation: the actor does nothing (..);
# vad - status: idle, no data is transmitted at all
* situation: the actor starts moving, speaking,... in other words: he
acts.
# vad - status: start recording
- event recording starts as the actor starts 'acting' (we'll have to
define a threshold, of course) or triggers an event (maybe we'll choose
the simplest way: the actor himself starts recording).
first of all, a timestamp is recorded (absolute time) - we need this to
define a relationship between the current scene's data and the (possible)
reaction of the audience, needed later for the 'choros'
* situation: the actor acts...
# vad - status: recording
as long as the actor does something (above threshold), these events are
recorded into a buffer, but not transmitted yet.
this status persists until:
* situation: the actor stops acting...
# vad - status: stop recording
- event recording stops as the actor stops 'acting' (a 'stop recording'
event may be triggered; or the end of acting has to be determined by
'timeout': if all sensor data (vad) stays below the threshold for 'n'
milliseconds, the 'recording' status ends.)
followed by
# vad - status: transmitting
1. the whole gathered vad message (estimated maximum: about 4.5 kbyte -
that equals more than a minute!) is coded into a tcp packet and sent to
all stages.
2. the same message is coded as text and saved as an input file for a
cgi script/program
-> the cgi script provides the information for all web stages; the
client (ie. the shockwave movie) only requests data at specific
intervals, therefore all vad (including incoming data from all other
stages) must be gathered and saved in the cgi data file - in incoming
order (ordering by timestamps would be impossible to handle).
# vad - status is idle again, and the whole procedure repeats
(indefinitely).
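[a minimal sketch in python of this recording cycle; the threshold, the
timeout and the sensor/transmit functions are assumptions:]

import time

THRESHOLD = 0.1    # sensor level that counts as 'acting'
TIMEOUT_MS = 2000  # 'n' milliseconds below threshold -> stop recording

def record_paragraph(read_sensors, transmit):
    # idle: wait until some sensor value exceeds the threshold
    while max(read_sensors()) < THRESHOLD:
        pass
    buffer = [('timestamp', time.time())]  # absolute time, for the choros
    last_active = time.time()
    # recording: buffer events locally until the timeout expires
    while (time.time() - last_active) * 1000 < TIMEOUT_MS:
        values = read_sensors()
        if max(values) >= THRESHOLD:
            buffer.append(('vad', values))
            last_active = time.time()
    # transmitting: one tcp packet to all stages + the cgi data file
    transmit(buffer)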
this draft follows the principle of asynchronous networking and keeps the
narration alive, because synchronization is provided within each
paragraph.
- of course, compared with broadcast/tv, there are some astounding
limitations: there is definitely no guarantee how long the information
'travels' from one stage to the other(s); as a consequence, no
synchronization between stages can take place. - the stages are dependent
upon each other - and all are dependent upon network traffic.
on the other hand, that's an interesting aspect: the actors have to
'play' with the net, react to net lags, learn to deal with the net's
characteristics; the internet becomes part of the dramaturgy, as another
'virtual actor';
the web is not only a means of transport or another 'virtual' stage - it
obviously influences the performance on the 'real' stage.
and because network traffic is also a sum of people's activity (all
around the world) - one could say they all participate in oudeis...
task II: receiving and interpreting incoming vad
--------------------------------------------------------------------
all incoming vad is decoded to:
midi data -> spark (position of the moving light)
midi data -> spark (color value of the moving light)
midi data -> sound unit (voice melody)
event data -> cpu, to trigger text scrolling and/or text-to-speech
the incoming packets are processed on a first-in/first-out basis, or
- if possible - simultaneously(?)
incoming vad is also processed for client requests (see above)
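[a minimal sketch in python of this decoding step, reusing unpack_unit
from the sketch above; the output functions are assumptions (placeholders
for midi out to the spark, the sound unit and the text trigger):]

def handle_unit(data, midi_to_spark, midi_to_sound, trigger_event):
    head, position, biosensor, midivox, event = unpack_unit(data)
    midi_to_spark(position)   # recall the position cue (moving light)
    midi_to_spark(biosensor)  # eg. as a color value for the moving light
    midi_to_sound(midivox)    # voice melody to the synthesizer/sampler
    if event:
        trigger_event(event)  # text scrolling and/or text-to-speech

def process_incoming(queue, *outputs):
    # first-in/first-out, as described above
    while queue:
        handle_unit(queue.pop(0), *outputs)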
task III: responding to client requests
--------------------------------------------------------------------
clients (shockwave) periodically request c-vad (a compilation of the
available vads from all stages) and the current choros data (for
visualization). data must be coded as text;
the request's search argument (eg. oudeis.org/show.cgi?200) describes the
last c-vad section the client received. c-vad sections are defined by the
store order (corresponding to incoming order) and not by timestamp.
all vad could eventually remain on the hard disk (as something between a
logfile and a live recording)
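[a minimal sketch in python of such a cgi script; the data file name and
the one-section-per-line format are assumptions:]

import os, sys

def respond(datafile='cvad.txt'):
    # the search argument names the last c-vad section the client received
    last = int(os.environ.get('QUERY_STRING', '') or '0')
    with open(datafile) as f:
        sections = f.read().splitlines()  # text-coded sections, store order
    sys.stdout.write('Content-Type: text/plain\r\n\r\n')
    for line in sections[last:]:          # everything stored after it
        sys.stdout.write(line + '\n')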
task IV: gathering choros data
--------------------------------------------------------------------
choros data comes either
- as a single event, from one client; due to a shockwave limitation, this
data is sent with method 'get' (shockwave can't post); choros data is
delivered as a search argument.
- as a 'choros update', extracted from other servers' vad; this could be
written directly to a file (vad is probably handled on the same cpu...)
all incoming data should be stored and ordered by timestamp (encoded in
the choros data).
the same task can manage the 'current choros value' update (it simply
sends its own client data to all other servers..)
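[a minimal sketch in python of gathering choros data; the
'timestamp,value' encoding inside the search argument is an assumption:]

choros = []  # (timestamp, value) pairs, kept ordered by timestamp

def single_event(query_string):
    # one client's contribution, delivered as search argument via 'get'
    timestamp, value = query_string.split(',', 1)
    choros.append((float(timestamp), value))
    choros.sort()

def choros_update(other_vad_events):
    # a 'choros update' extracted from another server's vad
    for timestamp, value in other_vad_events:
        choros.append((float(timestamp), value))
    choros.sort()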
to be continued.....
[:]