> “Technology should create calm.” — _Mark Weiser_
> “We shape our tools and thereafter our tools shape us.” — _John Culkin_
We are entering another paradigm shift. Just as digitalisation, social media, and mass media redefined how we live, so too will AI. *But not all shifts are chosen.*
In the music world, Generative AI arrived in a flash, promising creative liberation while quietly reshaping our relationship to sound. The CEO of a notable Gen-AI music platform famously said, "People don't enjoy making music." Tools now “compose” for us. Text-based prompts [[Generate full songs]]. And yet, we ask:
###### Was this ever the tool musicians actually wanted?
Does it make sense to hand a pen to someone who thinks in melody?
Musicians do not work in static prompts. They engage in dialogue, silence, noise, chaos, and harmony. Their craft changes dynamically. And so we believe:
1. **AI must expand, not replace, creativity.**
It should be a co-creator, augmenter, supporter, not a ghostwriter.
2. **Music tools must enable conversation, not command.**
They should be dynamic, interpretive, and alive, not pipelines that convert inputs into frozen outputs.
3. **Challenge the definition of what an instrument is and how it is played.**
Now we have a unique opportunity to make fluid instruments that can adapt, evolve, and teach.
4. **Automation should assist, not replace, intention.**
Automation can be liberating, but only when it is intentional. By design, these technologies can take over parts of the creative process, which provides value in the right context. There must be an interplay between control and automation. The intentions of the human are what create wonder.
And above all:
> “Good technology,” _Weiser_ wrote, “should not mine our experiences for saleable data or demand our attention; rather, it should quietly boost our intuition as we move through the world.”
The Resonance Machine is not a product but a movement: one that _reimagines tools_, not as static instruments, but as living collaborators.
This is the foundation on which we build [[Temporal Synthesis]]: a musical instrument that activates[^2] your environment. One that doesn’t shout over the world, but listens to it, learns from it, and lets you play it.
We invite developers, musicians, and sound enthusiasts to explore and [[Contribute]]. Our codebase and models will be publicly released under open licenses. We expect a community to form around it – sharing field recordings, building new interfaces, and imagining fresh musical experiences.
### Footnotes
[^1]: Study on the economic impact of Generative AI in the Music and Audiovisual industries
[^2]: I refer to activation as an act of "imposing" the melodic intentions of the user onto a given tool, environment, or sound.