© 2017 by Andrea Mancianti & Roberto Fusco

Dreams is a music performance generated in real time from the EEG signals of the visitors of the planetarium.

By means of the EEG headsets the visitors will be wearing, software extracts and interprets the brain signals and transcodes them into parameters controlling the creation of a soundscape. The interpretation of the raw signals aims to bring out the emotions of each spectator as well as the relations among them. Other parameters translate the overall mood of the crowd.

 

A system translates emotional parameters into sonic characteristics of the music created. The number of active minds is associated with the size of the sound masses (a few droplets or a cloud of sounds), while the intensity and quality of the activity correlate with their density and textural quality, from smooth and soothing to harsh and noisy.

 

Depending on the possibilities available, the spatialization of the sound will be a further sonic dimension to be explored. Sound trajectories (the perceived movement of sound in space), the aggregation of sounds in one spot, and their separation will be used as metaphors for the differences and similarities among different minds’ activity.

 

On top of this mapping of emotions into musical parameters, there will be another layer of control that dynamically changes those associations over time. Instead of being scripted as a fixed-media composition, the relationships change on the basis of the history of previous emotions, a compositional approach that is systemic rather than prescriptive.

 

The result is a system of minds connected together through the music they create; a coupling of minds and soundscape, not a simple transformation of signals into sound.

Planetarium hall with 8 loudspeakers around the audience. Moving laser projections of the stars on the dome. Complete darkness.
10 EEG headsets connected to tablets via Bluetooth. Each tablet sends to a server computer via Wi-Fi.
Each headset sends packets of 1 second of data (250 float values).
Selection of the electrodes to listen to for each headset.
Possibility of detecting alpha waves via FFT around 8–12 Hz.
Two smoothing factors, one for rapid changes and one for slow changes.
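The notes above can be sketched in code. The following is a minimal, illustrative Python sketch (not the actual performance software, which is unspecified here): it assumes a 250 Hz sampling rate, so each 1-second packet of 250 floats gives a frequency resolution of 1 Hz and DFT bins 8–12 fall directly on the 8–12 Hz alpha band; the two smoothing coefficients are hypothetical.

```python
import math

FS = 250               # assumed: 250 samples per 1-second packet
ALPHA_BAND = (8, 12)   # alpha band in Hz

def alpha_power(packet):
    """Alpha-band power of one 1-second packet via a naive DFT.

    With a 1 s window, bin k corresponds to k Hz, so summing the
    power of bins 8..12 covers the 8-12 Hz alpha band.
    """
    n = len(packet)
    power = 0.0
    for k in range(ALPHA_BAND[0], ALPHA_BAND[1] + 1):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(packet))
        im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(packet))
        power += (re * re + im * im) / (n * n)
    return power

class DualSmoother:
    """Two exponential smoothers on the same value: a fast one that
    tracks rapid changes and a slow one that tracks the slow trend."""
    def __init__(self, fast=0.5, slow=0.05):  # hypothetical coefficients
        self.fast_a, self.slow_a = fast, slow
        self.fast = self.slow = None
    def update(self, x):
        if self.fast is None:                 # initialise on first packet
            self.fast = self.slow = x
        else:
            self.fast += self.fast_a * (x - self.fast)
            self.slow += self.slow_a * (x - self.slow)
        return self.fast, self.slow
```

Per headset, `alpha_power` would run once per incoming packet and its output would feed a `DualSmoother`, whose fast and slow values can then be mapped to different musical parameters.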


Polyphony for minds and stars is a music performance generated in real time from the EEG signals of the visitors of the planetarium.
By means of the EEG headsets the visitors were wearing, software extracted and interpreted the brain signals and transcoded them into sound and/or parameters controlling the creation of a soundscape. Eight headsets were directly sonified via wavetable synthesis; the other two were used for control only.
The interpretation of the raw signals aimed to bring out the state of each spectator as well as the relations among them (running covariance).
Meta-controls: sound trajectories (the perceived movement of sound in space), the aggregation of sounds in one spot, and their separation were used as metaphors for the differences and similarities among different minds’ activity.
The performer made decisions on the fly about how to interpret the signals and how to use them.
The intention was to give a bodily translation of brain activity.
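The running covariance mentioned above can be updated online, once per incoming packet, rather than recomputed over stored history. A minimal Welford-style sketch, assuming the per-packet feature for each spectator is a single float (e.g. a smoothed band power), with the class name and update scheme being illustrative:

```python
class RunningCovariance:
    """Online (Welford-style) covariance between two spectators' signals,
    updated with one pair of feature values per 1-second packet."""
    def __init__(self):
        self.n = 0
        self.mean_x = self.mean_y = 0.0
        self.c = 0.0  # running co-moment

    def update(self, x, y):
        self.n += 1
        dx = x - self.mean_x                  # deviation from old mean of x
        self.mean_x += dx / self.n
        self.mean_y += (y - self.mean_y) / self.n
        self.c += dx * (y - self.mean_y)      # uses the updated mean of y
        # sample covariance so far (0 until at least two packets arrive)
        return self.c / (self.n - 1) if self.n > 1 else 0.0
```

A positive value means two minds' activity is moving together, a negative one that they diverge; either can then drive the aggregation/separation metaphors in the spatialization.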
