In an RTOS, you would have actual threads and the processor switches between them to realise multitasking. On the Daisy, there are no threads, but you can still do multitasking with interrupts.
The main() function is a little bit like an “idle thread” because it gets executed whenever nothing more urgent needs to be done. Then there are multiple interrupt sources that can trigger an interrupt service routine (ISR) in response to an interrupt request (IRQ). These ISRs interrupt the main function, do their thing and return when they’re done. ISRs typically serve a peripheral in the chip, e.g. output data to a serial bus or read the result of an A/D conversion. ISRs can be nested, meaning that they have priorities and one ISR can interrupt another one.
On the Daisy, most of the processing load lies in the calculation of the audio samples. This is done in the AudioCallback, which is an ISR; more specifically, it’s the ISR that’s triggered when the DMA needs more samples to write to the audio codec. The task of calculating the audio samples needs to be done within the time it takes the DMA to write one block of audio samples to the codec. If the AudioCallback doesn’t finish within this time frame, the DMA won’t have data to write to the codec and your audio will stutter and glitch out.
In theory, nothing prevents you from doing everything in the AudioCallback - from calculating audio samples to scanning UI controls, to writing files to an SD card. If you can ensure that you’re able to complete all the things before the DMA runs out of samples to write to the codec, then you’re fine doing all that in the AudioCallback.
But in practice, most of these things take a varying amount of time to complete, depending on the circumstances. SD card access is a particularly bad example because it may block for a long time while the SD card commits data to its internal memory. You wouldn’t want this to block the delivery of fresh audio samples to the codec. On the other hand, if writing to the SD card takes 10 ms longer, you’d never notice - it’s not real time critical.
That’s why you should consider how real time critical your tasks really are and how much their completion time varies.
IMO, calculating audio samples is the ONLY thing that should actually happen in the AudioCallback, simply because it’s the ONLY thing that may not be delayed. All other things (processing user input, reading/writing files, updating LEDs, etc.) can wait when the system is under higher load than usual. They should be done from the main() function where they can be interrupted at any time. Effectively, the main function fills the gaps between your AudioCallback and other ISRs. That’s how you give priority to the things that have a real time constraint.
There are situations where you can still do non-audio things in the AudioCallback, e.g. reading or writing a GPIO pin. That’s fine because such a task always completes very quickly and barely impacts the real time capability of the AudioCallback. You can see that the Daisy platform code (petal, patch, field, …) scans its UI inputs in the AudioCallback, for example. It’s not a super clean design, but in this case the effect is negligible and it makes things a little easier to program for beginners.