Architecture advice

Before I start my project I want to be sure I am on the right track, so I just want to air some questions and ideas.

  1. It seems like AudioCallback is super simple and easy to work with (see the minimal sketch after this list). Am I right to assume that the delay is (2 * buffersize / samplefreq), i.e. one buffer comes in, and we must have returned before the next buffer needs attention? What happens if we screw up? Does it start emitting garbage and wait to call us again until we have returned?

  2. It seems like people do stuff in the main function as well, but only setting globals, never reading from audio or anything. But hypothetically, if we want low latency plus heavy computation (say 5 ms latency, with filters taking their parameters from some complicated computation that takes 100 ms and runs four times per second), what do we do then? Send buffers to main guarded by semaphores or something? This is a little hypothetical; I only maybe need this.

  3. I have code running that takes a WAV file (my guitar via GarageBand), processes it, and produces a WAV output. This is the easiest way to develop, and my plan is to implement AudioCallback() in my simple offline code in my git repo and then clone/update it into the Daisy Seed source tree to put it on my Hothouse pedal. I see something about "git submodules" in the Daisy Seed documentation. Is that the way to go?
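
For reference, here is roughly the shape of callback I am picturing: a minimal pass-through sketch based on the libDaisy examples (signatures and names as I understand them from the docs, so please correct me if they are off):

```cpp
#include "daisy_seed.h"

using namespace daisy;

static DaisySeed hw;

// Called once per block: "in" holds the newest input block and
// "out" must be completely written before we return.
void AudioCallback(AudioHandle::InputBuffer  in,
                   AudioHandle::OutputBuffer out,
                   size_t                    size)
{
    for(size_t i = 0; i < size; i++)
    {
        out[0][i] = in[0][i]; // left channel, straight pass-through
        out[1][i] = in[1][i]; // right channel
    }
}

int main(void)
{
    hw.Init();
    hw.SetAudioBlockSize(48); // samples per channel per callback
    hw.SetAudioSampleRate(SaiHandle::Config::SampleRate::SAI_48KHZ);
    hw.StartAudio(AudioCallback);
    while(1) {} // nothing time-critical out here yet
}
```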

For real-time audio processing, all processing on each sample MUST be completed in under 1/SAMPLERATE seconds. Everything follows from that requirement.
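
To put numbers on it (example figures only, assuming a 48 kHz sample rate and a 48-sample block): 1/48000 s is roughly 20.8 µs per sample, so a 48-sample block gives the callback about 48/48000 = 1 ms of budget before the next block arrives, and the in-to-out latency is about 2 * 48/48000 = 2 ms.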

Wow. Thanks for the info! This means that what the documentation mentions,

" the function must take less time than it takes to transfer out the audio buffer, otherwise your audio will start to have under-run errors, which cause digital artifacts in the audio path."

is slightly misleading. Actually the requirement (from what you say) is that every sample must be written to the output buffer in time, since the output interrupt handler is reading the output buffer at the same time!!! So I better not use it as a scratchpad. Haha.

And I found an example in the docs where async work is done in the main() while-loop, so that is what I will do.
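
To make question 2 concrete, here is the kind of split I have in mind. FilterCoeffs and ComputeExpensiveCoeffs are made-up names for this sketch; the idea is just that main() does the slow work and publishes a complete coefficient set that the callback picks up atomically:

```cpp
#include <atomic>
#include "daisy_seed.h"

using namespace daisy;

static DaisySeed hw;

// Hypothetical coefficient set produced by the slow computation in main().
struct FilterCoeffs
{
    float b0 = 1.0f, b1 = 0.0f, b2 = 0.0f, a1 = 0.0f, a2 = 0.0f;
};

// Placeholder for the slow (~100 ms) analysis job.
static FilterCoeffs ComputeExpensiveCoeffs()
{
    return FilterCoeffs{};
}

static FilterCoeffs     coeff_banks[2];  // double buffer: one bank in use, one being filled
static std::atomic<int> active_bank{0};  // index of the bank the callback should read

void AudioCallback(AudioHandle::InputBuffer  in,
                   AudioHandle::OutputBuffer out,
                   size_t                    size)
{
    const FilterCoeffs &c = coeff_banks[active_bank.load(std::memory_order_acquire)];
    for(size_t i = 0; i < size; i++)
    {
        // The real filter would use all of c.b0 ... c.a2; kept trivial here.
        out[0][i] = c.b0 * in[0][i];
        out[1][i] = c.b0 * in[1][i];
    }
}

int main(void)
{
    hw.Init();
    hw.StartAudio(AudioCallback);

    while(1)
    {
        int next          = 1 - active_bank.load(std::memory_order_relaxed);
        coeff_banks[next] = ComputeExpensiveCoeffs();        // slow work, off the audio interrupt
        active_bank.store(next, std::memory_order_release);  // publish the finished bank
        System::Delay(250);                                   // roughly four updates per second
    }
}
```

Since the Seed has a single core and the callback runs in an interrupt that preempts main(), the double buffer is really just there so the callback never sees a half-written coefficient set.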

No, the documentation is correct, and so is what I wrote. I’m sorry if I didn’t write it more clearly.

All processing on each buffer MUST be completed BEFORE the next callback interrupt.

One way I’ve seen some people do it (check the Discord channels) is to do the main audio processing in the main loop and write the samples to a FIFO; the audio callback then just pops the samples and writes them to the output buffer. They were using a very small block size. That was partly to deal with audio noise caused by larger blocks, but it could also help if some processing steps take longer and others are shorter. There’s no RTOS, though, so slicing and dicing your code could get a little ugly.
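
Very roughly, the consumer side of that could look like the sketch below. SampleFifo is just a name I made up for this post (I think libDaisy also ships a ring buffer utility you could use instead), and a real version would need a second FIFO carrying input samples back to main():

```cpp
#include <atomic>
#include <cstddef>
#include "daisy_seed.h"

using namespace daisy;

// Minimal single-producer / single-consumer ring buffer:
// main() pushes processed samples, the audio callback pops them.
template <size_t N> // N must be a power of two
struct SampleFifo
{
    float               buf[N];
    std::atomic<size_t> write_idx{0};
    std::atomic<size_t> read_idx{0};

    bool Push(float s) // called from main() only
    {
        size_t w = write_idx.load(std::memory_order_relaxed);
        size_t r = read_idx.load(std::memory_order_acquire);
        if(w - r >= N)
            return false; // full: main() is running ahead, try again later
        buf[w & (N - 1)] = s;
        write_idx.store(w + 1, std::memory_order_release);
        return true;
    }

    bool Pop(float &s) // called from the audio callback only
    {
        size_t r = read_idx.load(std::memory_order_relaxed);
        size_t w = write_idx.load(std::memory_order_acquire);
        if(r == w)
            return false; // empty: main() fell behind, caller should output silence
        s = buf[r & (N - 1)];
        read_idx.store(r + 1, std::memory_order_release);
        return true;
    }
};

static SampleFifo<256> fifo;

void AudioCallback(AudioHandle::InputBuffer  in,
                   AudioHandle::OutputBuffer out,
                   size_t                    size)
{
    for(size_t i = 0; i < size; i++)
    {
        float s;
        out[0][i] = out[1][i] = fifo.Pop(s) ? s : 0.0f; // silence on under-run
        // (input samples would go back to main() through a second FIFO)
    }
}
```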

Yeah, that is more intuitive. Thanks.

And thanks to TallMike too for pointing out that people do heavy processing in the main loop. Now I can feel comfortable splitting my code between AudioCallback and main. I have a plan!

Thanks.