## Hello Cicada
![[Com-Video.mp4]]
To illustrate our approach, we present **Cicada**, our first prototype instrument. In Temporal Synthesis, **environmental sound becomes the raw material of music**. Here is [[How It Works]]: the musician walks with a field recorder and captures a snippet of the world (say, a busy city corner, a forest glade, or even the studio itself). The AI then slices that recording into many short _sound units_ and clusters them by timbral characteristics. From those sound units it generates a playable synthesiser patch.
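To make that pipeline concrete, here is a minimal sketch of the analysis stage. It assumes [librosa](https://librosa.org/) for onset-based slicing and scikit-learn for clustering, and every parameter value is illustrative; the real Cicada code may work quite differently.

```python
# Sketch of the analysis stage: slice a recording into sound units
# and cluster them by timbre. Parameter values are illustrative.
import librosa
import numpy as np
from sklearn.cluster import KMeans

def slice_and_cluster(path, n_clusters=8):
    y, sr = librosa.load(path, sr=None, mono=True)

    # Cut the recording at detected onsets to get short sound units.
    onsets = librosa.onset.onset_detect(
        y=y, sr=sr, units="samples", backtrack=True
    )
    bounds = np.concatenate([[0], onsets, [len(y)]])
    units = [y[s:e] for s, e in zip(bounds[:-1], bounds[1:]) if e - s > 2048]

    # Summarise each unit's timbre as a mean-MFCC fingerprint.
    fingerprints = np.array([
        librosa.feature.mfcc(y=u, sr=sr, n_mfcc=13).mean(axis=1)
        for u in units
    ])

    # Group units with similar timbre; each cluster becomes one voice.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(fingerprints)
    return units, labels, sr
```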
## Demo
![[Interaction_Sequence.mp4]]
When playing, the musician triggers or performs phrases (via MIDI, controller, or the recorder’s live input). The system responds through the wave-sequence synth patch[^1]. For example, a car horn in the recording might become a brassy horn tone; a passing cyclist’s wheel might turn into a percussive click; wind through trees might transform into a shimmering pad. Because every sound was drawn from the live recording, the palette is _always tied to that moment’s soundscape_. The instrument effectively “plays” the surroundings: you are performing a duet with the environment you just captured.
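Here is a companion sketch of the performance stage, reusing `units`, `labels`, and `sr` from the analysis sketch above. The choice of `mido` for MIDI input and `sounddevice` for playback is an assumption for illustration; a production wave-sequence engine would also crossfade and pitch-shift the units rather than firing them raw.

```python
# Sketch of the performance stage: map MIDI notes onto timbre clusters
# and step through each cluster's units like a wave sequence.
import itertools
import mido                # assumed MIDI library
import sounddevice as sd   # assumed audio output

def build_patch(units, labels):
    # One endless sequence per cluster: each trigger advances to the
    # next sound unit of that timbre, Wavestate-style.
    clusters = {}
    for unit, label in zip(units, labels):
        clusters.setdefault(int(label), []).append(unit)
    return {label: itertools.cycle(us) for label, us in clusters.items()}

def perform(patch, sr):
    with mido.open_input() as port:  # default MIDI input device
        for msg in port:
            if msg.type == "note_on" and msg.velocity > 0:
                label = msg.note % len(patch)  # spread keys over clusters
                if label in patch:
                    unit = next(patch[label])  # step the wave sequence
                    sd.play(unit * (msg.velocity / 127), sr)

# Usage (hypothetical recording name):
# units, labels, sr = slice_and_cluster("city_corner.wav")
# perform(build_patch(units, labels), sr)
```

Because `itertools.cycle` keeps its own position per cluster, repeated key presses walk through that timbre’s units in order, which is what makes the patch behave like a wave sequence rather than a one-shot sampler.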
This approach roots AI synthesis in lived experience. Instead of plugging in abstract prompts, the musician performs within a real acoustic context. The output cannot be generic, because it literally carries the imprint of that place and time. This keeps the music fresh, personal, and unpredictable. No two performances with Cicada will ever be alike, because no two forest mornings are.
This bridges old and new practices, extending the [ethos of field recording](https://www.signalsounds.com/blog/field-recording-101) and [musique concrète](https://www.frieze.com/article/music-22) into the AI era. The musician still chooses the sounds and conducts the melody, but the computer weaves them together in unexpected ways.
---
All the software and data will be open-source, allowing any curious coder or artist to build on it. In practice, this means you might [[Contribute]] your own recordings, refine the algorithms, or connect the system to new sensors – truly collaborating in the project’s growth.
### Footnotes
[^1]: Much like the wave sequencing of the Korg Wavestate.