Hi, i have an idea to take two Daisy Petal units and tempo-synchronize one of them from the audio output of the other, while still being able to use the audio signal from that master-clock unit as audio as well. i'm wondering if anyone might point me in the direction of code or math explanations (or perhaps even explain here) for how i could encode the audio signal with a timecode that might live somewhere in the 20kHz-24kHz range. the plan would be to split the signal outside the Petal unit: one copy, low-pass filtered slightly around 20kHz, would keep most of the stereo image of the audio that this first Daisy Petal also processed as a musical signal, while the unfiltered copy of the same channel would run into the next Daisy Petal as a clock signal. the receiving unit would then decode the timecode to detect accents at certain points in time.
maybe two high-frequency square waves (say, one around 22kHz and the other around 23kHz), switching between the two only on accents? at the receiving end, frequency followers tuned to these two high frequencies could detect a shift between them as an accent.
if running at 48kHz, the Daisy should be able to generate and detect frequencies this high, right? (Nyquist at a 48kHz sample rate is 24kHz, so 22-23kHz sits just below the limit, though the codec's anti-aliasing filtering may already be attenuating things up there.)
(i’ve done this to myself on these forums before: answered my own question… but if anyone feels like guiding me to better paths, i’ll appreciate it. in the meantime, i think i’ll just try working a 22kHz and 23kHz pair of square waves into the signal and see what happens (maybe they should be sine tones? i’ll try both, though a 22kHz square wave’s harmonics would all fold back as aliases at a 48kHz sample rate, so sines seem the safer bet), then detect for a change between them at the receiving end, and for the audible version, just low-pass the high-frequency clocks out.)