How to deal with buffer latency issues

Take the example of an octave-drop function. If you output the samples at half the input rate you get an octave drop, but you also get latency: anywhere from one sample period up to the full buffer length (buffer size ÷ sample rate, in seconds). Are there common ways to deal with this?
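For concreteness, here is roughly what I mean, as a minimal sketch only (the buffer size and names like `ProcessOctaveDown` are just placeholders):

```cpp
// Minimal sketch of the naive approach: write into a circular buffer at the
// input rate, read it back at half speed for an octave down.
#include <cstddef>

constexpr size_t kBufSize = 4096;   // power of two so wrapping is a cheap mask
static float  buf[kBufSize];
static size_t writePos = 0;         // advances 1 sample per input sample
static float  readPos  = 0.0f;      // advances 0.5 samples per output sample

float ProcessOctaveDown(float in)
{
    buf[writePos] = in;
    writePos = (writePos + 1) & (kBufSize - 1);

    // Linear interpolation between the two samples around readPos.
    size_t i0   = static_cast<size_t>(readPos);
    size_t i1   = (i0 + 1) & (kBufSize - 1);
    float  frac = readPos - static_cast<float>(i0);
    float  out  = buf[i0] + frac * (buf[i1] - buf[i0]);

    // The read pointer falls behind by half a sample every sample, so the
    // latency grows until the write pointer laps it -- which is exactly the
    // glitch described below.
    readPos += 0.5f;
    if(readPos >= static_cast<float>(kBufSize))
        readPos -= static_cast<float>(kBufSize);

    return out;
}
```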

Also, in any application where the output isn’t read at exactly the input rate, you get a glitch when the write (input) pointer laps the read (output) pointer, since the two waveforms are unlikely to be lined up at that point. Again, how is this normally dealt with?

Thoughts?

Here is the basic way to do polyphonic pitch transpose, whether up or down, as described on the Spin Semi website:

“To perform pitch transposition, we will need to use variable delays, but as we already know, we cannot change a delay, increasing or decreasing its length for very long, or we will eventually run out of memory. To retain the basic character of the music while changing its pitch through a variable delay technique, we will need to occasionally change our moving delay read pointer as it approaches one end of the delay to the opposite end of the delay, and continue on. This abrupt jump in the read pointer’s position will be to a very different part of the music program, and will certainly cause an abrupt sound. The problem can be largely overcome by establishing two delays, let’s name them A and B, both with moving read pointers, but with their pointers positioned such that when the read pointer of delay A is just about to run out of ‘room’, delay B’s pointer is comfortably in the middle of its delay’s range. We then cause a crossfade, from obtaining our transposer output from delay A to the signal coming from delay B. When the delay B pointer begins to run out of space, (just prior to pointer repositioning), we crossfade back to delay A. The delays can in fact be a single delay, with two read pointers properly positioned.”

This solves the discontinuity problem, but there will still be a variable delay on the transposed signal. In addition, there can be phase cancellation during the crossfade, which is pitch dependent. With longer buffer lengths this sounds like tremolo; shorter buffer lengths reduce the delay, but the warble becomes unpleasant.
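Here is a rough sketch of the technique, just to make it concrete. This is not the Spin Semi code, only an illustration: one circular delay, two read taps half a window apart, each faded out with a sine window exactly when it is about to jump. The window size, buffer size and names are placeholders.

```cpp
// Sketch of the two-tap, crossfaded delay transposer described above.
#include <cstddef>
#include <cmath>

constexpr size_t kBufSize = 4096;      // power of two
constexpr float  kWindow  = 2048.0f;   // sweep range of each tap, in samples
constexpr float  kPi      = 3.14159265f;

static float  buf[kBufSize];
static size_t writePos = 0;
static float  lag      = 0.0f;         // how far tap A trails the write pointer

// Read the buffer `delaySamples` behind the write pointer, with linear interpolation.
static float ReadTap(float delaySamples)
{
    float pos = static_cast<float>(writePos) - delaySamples;
    if(pos < 0.0f)
        pos += static_cast<float>(kBufSize);
    size_t i0   = static_cast<size_t>(pos);
    size_t i1   = (i0 + 1) & (kBufSize - 1);
    float  frac = pos - static_cast<float>(i0);
    return buf[i0] + frac * (buf[i1] - buf[i0]);
}

// ratio < 1.0 transposes down (0.5 = one octave down), ratio > 1.0 transposes up.
float ProcessTranspose(float in, float ratio)
{
    buf[writePos] = in;

    // Tap A's lag sweeps 0 -> kWindow and wraps; tap B is half a window away.
    float lagA = lag;
    float lagB = lagA + 0.5f * kWindow;
    if(lagB >= kWindow)
        lagB -= kWindow;

    float tapA = ReadTap(lagA);
    float tapB = ReadTap(lagB);

    // Sine-window crossfade: each tap is silent exactly when its lag wraps,
    // so the pointer jump itself is never audible.
    float out = tapA * sinf(kPi * lagA / kWindow)
              + tapB * sinf(kPi * lagB / kWindow);

    lag += (1.0f - ratio);             // lag grows when pitching down
    if(lag >= kWindow)
        lag -= kWindow;
    if(lag < 0.0f)
        lag += kWindow;

    writePos = (writePos + 1) & (kBufSize - 1);
    return out;
}
```

During the overlap the two taps are reading the program material half a window apart, which is where the pitch-dependent cancellation mentioned above comes from.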

I tried to come up with more sophisticated algorithms on the FV-1 years ago, but that processor was too limited to do much more. I am sure one could do much better on the Daisy platform.


Can you provide a description of how a “crossfade” is implemented? I think maybe that’s the part I don’t understand. Also, won’t merely crossfading cause phase-shifting artifacts?

I was trying a method of moving the index pointer based on zero crossings, but it didn’t work well: with real-world signals there are lots of zero crossings that don’t correspond to the fundamental period. I’m planning to try the same idea based on peak detection, since peaks seem to be more reliable.

The idea being: whenever the input signal is at a peak and the delayed signal is also at a peak, move the delayed signal’s read pointer to the current input buffer position. The latency is then temporarily eliminated, without phase shifting.
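Something like this is what I have in mind, as a sketch only (the threshold, names and half-speed read are placeholders, and the peak test is deliberately crude):

```cpp
// Sketch of the peak-based "catch up": per sample, if the input and the
// delayed signal are both sitting on a positive peak, snap the read pointer
// forward to the write pointer, dropping the accumulated latency at a point
// where the two waveforms should be roughly in phase.
#include <cstddef>

constexpr size_t kBufSize = 4096;
constexpr float  kThresh  = 0.05f;   // ignore tiny noise peaks

static float  buf[kBufSize];
static size_t writePos = 0;
static float  readPos  = 0.0f;

static float inPrev1 = 0.0f, inPrev2 = 0.0f;    // input history
static float outPrev1 = 0.0f, outPrev2 = 0.0f;  // delayed-output history

// A "positive peak" here just means the previous sample is a local maximum.
static bool IsPosPeak(float prev2, float prev1, float curr)
{
    return prev1 > kThresh && prev1 >= prev2 && prev1 >= curr;
}

float ProcessOctaveDownResync(float in)
{
    buf[writePos] = in;

    // Half-speed read with linear interpolation (as in the earlier sketch).
    size_t i0   = static_cast<size_t>(readPos);
    size_t i1   = (i0 + 1) & (kBufSize - 1);
    float  frac = readPos - static_cast<float>(i0);
    float  out  = buf[i0] + frac * (buf[i1] - buf[i0]);

    // Re-sync when both signals are on a positive peak at the same time.
    if(IsPosPeak(inPrev2, inPrev1, in) && IsPosPeak(outPrev2, outPrev1, out))
        readPos = static_cast<float>(writePos);

    inPrev2  = inPrev1;  inPrev1  = in;
    outPrev2 = outPrev1; outPrev1 = out;

    writePos = (writePos + 1) & (kBufSize - 1);
    readPos += 0.5f;
    if(readPos >= static_cast<float>(kBufSize))
        readPos -= static_cast<float>(kBufSize);

    return out;
}
```

One caveat: even with both signals on a peak, the jump may still produce a small level step if the two peaks have different amplitudes.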

Ahh, I see there is a crossfade class. I will investigate…

Edit: I could use this crossfade class. I would be crossfading between two different points of the same signal to avoid collisions between the read and write pointers, but that would still leave me with the phasing issue. So in the end I think I need to do peak detection and regularly “catch up” the slow read pointer to the normal write pointer when both signals are at a positive or negative peak.
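For reference, the crossfade step itself just reduces to a weighted mix of the two read points. This is not the library class, only the underlying idea (equal-power version):

```cpp
#include <cmath>

// Equal-power crossfade between two taps of the same delay line.
// pos sweeps from 0 (all a) to 1 (all b) over the fade.
inline float CrossfadeEqualPower(float a, float b, float pos)
{
    const float kHalfPi = 1.5707963f;
    return a * cosf(pos * kHalfPi) + b * sinf(pos * kHalfPi);
}
```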

Because I don’t want any extraneous distortions (the phasing problem) in my application, I will write a class that is initialized with two signal arrays and, given two position pointers, returns a best new position for one of them that both reduces latency (down to one wavelength) and avoids phase shifting. Wish me luck :wink:
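Roughly the shape I have in mind for that class (completely untested, with placeholder names and a fixed search window):

```cpp
// Skeleton of the pointer-realignment class described above. Given the two
// signal arrays and the current write/read positions, it returns a new read
// position that is close to the write pointer and lands where both signals
// are on a positive peak; otherwise it leaves the read position alone.
#include <cstddef>

class PhaseAlignedResync
{
  public:
    void Init(const float *input, const float *delayed, size_t size)
    {
        in_   = input;
        del_  = delayed;
        size_ = size;
    }

    size_t BestReadPos(size_t writePos, size_t readPos) const
    {
        // Only jump while the delayed signal itself is on a peak, so we never
        // leave mid-slope (that is what causes the phase-shift artifact).
        if(!IsPosPeak(del_, readPos))
            return readPos;

        // Search back from the write pointer, over roughly one wavelength,
        // for the nearest positive peak of the input.
        for(size_t k = 1; k < kSearch; ++k)
        {
            size_t i = (writePos + size_ - k) % size_;
            if(IsPosPeak(in_, i))
                return i;   // new read position: low latency, peaks aligned
        }
        return readPos;     // nothing suitable found, keep the old pointer
    }

  private:
    static constexpr size_t kSearch = 512;  // placeholder: ~one low-note period

    bool IsPosPeak(const float *x, size_t i) const
    {
        size_t p = (i + size_ - 1) % size_;
        size_t n = (i + 1) % size_;
        return x[i] > 0.0f && x[i] >= x[p] && x[i] >= x[n];
    }

    const float *in_   = nullptr;
    const float *del_  = nullptr;
    size_t       size_ = 0;
};
```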
