Saving DSP in [gen~]?

For the sake of example/argument
Let’s say I have a simple patch that allows several possible codebox distortion algorithms to be chosen from, and that’s all.
In an inefficient and non-optimized setting, each codebox receives the input and runs constantly and then you’d use [selector] to choose between them.
This is obviously not the “best practice” as far as actual CODE goes, if’n we were writing such a thing in full C++.
(Yes I am aware that we could lump them all into a single codebox and optimize code inside that to only process the currently chosen algorithm but let’s imagine you’re choosing between a sub-[gen] reverb and a delay)

SO, how does one “turn off” sections of code in [gen~]? Can we even do that?

The only true way to bypass computation in gen~ is via if() statements in codebox. But, it’s not always the right thing to do. A couple of reasons:

  1. The trade-offs on real CPUs of using if() versus computing all paths can sometimes be surprising. Quite often, the if() statement itself is worse for CPU performance than simply computing both sides of the branch, because of branch prediction and instruction scheduling at the machine-code level. The general advice is to be careful not to optimize too early – actual behaviour on hardware can be surprisingly different from what you expect.

This gets even more significant when running patches with very small block sizes. The smaller the block size, the greater the proportional cost of a bunch of if() (or selector, etc.) tests will be – or rather, the smaller the likely benefit of using them in the first place. And I’ve found with Daisy that a lot of the time I can drop block sizes down to around 8 samples with very little impact on CPU cost, and I’m starting to use that as my default for new projects. A lower block size means less I/O latency, higher-frequency updates of CV inputs, etc. At 48kHz, a block of 8 samples is a 6kHz block rate, less than a fifth of a millisecond, which is pretty nice. But it also means your if() tests would run every 8 samples of processing, which limits the benefit of doing it at all.

  2. Also, especially in an embedded context, the only thing that really matters is the worst case. It’s better to have a predictable and relatively constant overhead than CPU performance that is spiky and varies according to UI parameters. CPU load does tend to be far more predictable in embedded than in desktop computing, at least.
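As a rough illustration of that branch trade-off, here is a hypothetical C++ sketch – the two distortion curves and function names are made up for the example, not taken from any actual patch:

```cpp
#include <cmath>

// Branchy version: only the chosen path is computed each sample,
// but the branch itself may be mispredicted and stall the pipeline.
float distort_branchy(float x, int algo) {
    if (algo == 0)
        return std::tanh(3.0f * x);        // soft clip
    else
        return x / (1.0f + std::fabs(x));  // simple saturator
}

// Branchless version: compute both paths every sample, then select.
// The ternary typically compiles to a conditional move, so there is
// nothing to mispredict – often as fast or faster in practice.
float distort_branchless(float x, int algo) {
    float soft = std::tanh(3.0f * x);
    float sat  = x / (1.0f + std::fabs(x));
    return (algo == 0) ? soft : sat;
}
```

The only way to know which wins on your particular hardware is to measure – as noted above, skipping the unused path is not automatically cheaper.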

So, for these reasons it is almost always better to figure out how to write an algorithm that shares as much code as possible, rather than writing several different algorithms and “switching” between them. That is, try to switch only small fragments of code (such as the parameters that feed into power functions, mix coefficients, etc.) rather than entire algorithms. And the smaller those fragments are, the more likely it is better to just compute both sides.
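To make that concrete, here’s a hypothetical C++ sketch (the struct, names, and numbers are invented for the example) where the “algorithms” share one code path and only a couple of coefficients change:

```cpp
#include <cmath>

// One shared waveshaper; different "algorithms" are just different
// parameter sets, so the DSP cost is identical whichever is chosen.
struct Shaper {
    float drive;  // pre-gain into the nonlinearity
    float mix;    // 0 = dry, 1 = fully shaped
};

float process(float x, const Shaper& s) {
    float wet = std::tanh(s.drive * x);
    return x + s.mix * (wet - x);  // same code runs for every preset
}
```

Switching “algorithms” then just means updating drive and mix – there is no branch in the per-sample code at all.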

Also, if you can change an if() into a mix() then you get to blend between algorithms, rather than just switch between them. Which can be fun to modulate 🙂
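For reference, gen~’s mix(a, b, t) is just a linear interpolation, a + t*(b - a), so crossfading two distortions looks something like this (hypothetical C++ sketch, with made-up curves):

```cpp
#include <cmath>

// Linear interpolation, equivalent to gen~'s mix(a, b, t).
float mixf(float a, float b, float t) { return a + t * (b - a); }

// Crossfade between two distortion curves instead of hard-switching:
// t = 0 gives pure A, t = 1 gives pure B, and anything in between
// is a blend you can modulate.
float blend_distort(float x, float t) {
    float a = std::tanh(3.0f * x);         // algorithm A
    float b = x / (1.0f + std::fabs(x));   // algorithm B
    return mixf(a, b, t);
}
```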

(Or, if the algorithms are completely independent of each other, then perhaps just use the app selection available in Oopsy?)


(I should add – most of that applies whether you’re writing in C++ or gen~ or some other language.)


While there are cases against this, like not being able to combine things, you can upload multiple top-level gen~ operators to the Daisy Patch and then switch between them in one of the windows. I’m not sure how all Daisy products handle this, but it is a specific feature. I have made a patch with 4 distortions and a comb filter that I’m going to share soon, and you switch between them all in this way.

@ryan_pwm Yes, thank you, I am aware. Program change via MIDI too!
@grrrwaaa Thank you for a detailed exploration. I hadn’t considered all of that. Now that I think about it, switching in and out various algorithms is probably half the reason some commercial “VSTs” have terrible spikes in CPU usage, especially when UI sections are activated/deactivated.
Thanks !