Ultra long (3 min) delay time via SDRAM?

Hi there,
I’m trying to fiddle with the stereo delay effect in the MultiEffect sketch to get somewhere in the neighborhood of 3 minutes of delay time. @raf suggested (via Slack) that I look into using int16_t instead of float. I’m wondering how many spots that substitution would need to be made in?

If I change #define MAX_DELAY static_cast<size_t>(48000 * 4.5f) to #define MAX_DELAY static_cast<int16_t>(48000 * 4.5f), will that be sufficient?

Or is that the wrong spot and I should change static DelayLine<float, MAX_DELAY> DSY_SDRAM_BSS dell; to static DelayLine<int16_t, MAX_DELAY> DSY_SDRAM_BSS dell;

Or both? Or neither? :slightly_smiling_face:

Very humble thanks.

The first snippet you shared would actually be broken: the range of an int16_t is -32768 to 32767, so the cast cannot hold the number you actually want (216000 in the above example).

The second one is probably what you’d want if you do want to reduce the memory footprint of your delay line.

It is worth mentioning that, depending on what you’re already doing, there are 64MB of memory in the DSY_SDRAM_BSS section. So a 3-minute delay is totally possible without any adjustment (unless you need many more delay lines and other effects going).

Every floating point sample (float) occupies 4 bytes, while an int16_t occupies only two. One consideration is that audio in DaisySP is processed as floats so you will have to convert when reading from and writing to your new DelayLine<int16_t, MAX_DELAY>. You can do this with the libdaisy s162f() and f2s16() functions.

So for a three minute delay (180 seconds) you could declare it as either:

#define MAX_DELAY (48000 * 180)

// Floating point, 3 minute delay
DelayLine<float, MAX_DELAY> DSY_SDRAM_BSS my_big_delay;
// Signed-16bit, 3 minute delay
DelayLine<int16_t, MAX_DELAY> DSY_SDRAM_BSS my_smaller_delay;
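If you do go the int16_t route, the conversion around Read()/Write() is just a clamp and a scale. Here is a self-contained sketch of that math (reimplemented with hypothetical names for illustration; in a real sketch you would use the actual s162f()/f2s16() from libdaisy):

```cpp
#include <cstdint>

// Float in [-1, 1] to signed 16-bit, with clamping (the job f2s16() does).
inline int16_t FloatToS16(float x) {
  if (x < -1.f) x = -1.f;
  if (x > 1.f) x = 1.f;
  return static_cast<int16_t>(x * 32767.f);
}

// Signed 16-bit back to float in [-1, 1] (the job s162f() does).
inline float S16ToFloat(int16_t x) {
  return static_cast<float>(x) / 32767.f;
}

// In the callback it would look something like:
//   float outl = S16ToFloat(my_smaller_delay.Read());
//   my_smaller_delay.Write(FloatToS16((feedback * outl) + inl));
```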

Thank you @shensley! Ok, so a couple things are coming up. Wondering if anyone has seen this behavior before.

  1. I’m noticing a pretty serious audio degradation problem with MAX_DELAY set at anything above
    (48000 * 40)

  2. anything above MAX_DELAY (48000 * 170) will not compile for me; it throws an error: “.sdram_bss will not fit in region SDRAM” / “region SDRAM overflowed by 2011160 bytes”

I’m using the following lines:
static DelayLine<float, MAX_DELAY> DSY_SDRAM_BSS dell;
static DelayLine<float, MAX_DELAY> DSY_SDRAM_BSS delr;

If you’re setting the delay read position with ADC input, it will be affected by noise in that value. Using a longer buffer makes it more obvious, as you’re increasing the absolute value of this error. This can be solved by smoothing (lowpass filtering), applying hysteresis to ignore the noise, using MIDI to set an exact value, etc.

Can there be a more explicit error message to say that you’re trying to allocate more memory than is available? Just multiply 2 (number of delay lines) x 48000 (sample rate) x 4 (bytes per float) x 170 (seconds) – this gives you 62.25MB. I imagine you are using about 4MB for something else in this patch.
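For reference, the arithmetic above as a small check (plain constants from the thread, nothing measured):

```cpp
// Bytes consumed by n float delay lines of s seconds at 48kHz.
constexpr long DelayLineBytes(long n_lines, long seconds) {
  return n_lines * 48000L * 4L * seconds; // 4 bytes per float sample
}

// DelayLineBytes(2, 170) is 65,280,000 bytes, about 62.25 MiB --
// uncomfortably close to the 64MB SDRAM region before anything else is counted.
```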


Thank you! This was helpful. I tried changing the coefficient in fonepole(out, in, coeff) and successfully got rid of the ADC noise.

However, I notice now that with the new coefficient in fonepole, changing the delay time (via k1) affects the pitch of the delayed sound.

I’ve been trying to understand this all week and just can’t seem to sort it out.
fonepole(currentDelay, delayTarget, .00007f) >> accurate pitch/delay time but adc noise
fonepole(currentDelay, delayTarget, .00000052f) >> accurate delay time / no adc noise but inaccurate pitch

I’m getting the .00000052f value from 1.0 / (sample_rate * time), i.e. 1.0f / MAX_DELAY.
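For context, fonepole is a one-pole smoother of the form out += coeff * (in - out), so the coefficient sets how fast currentDelay chases delayTarget. A quick sketch (assuming that definition) of how far each coefficient gets in one second at 48kHz:

```cpp
// One-pole smoother, same form as libdaisy's fonepole().
inline void OnePole(float &out, float in, float coeff) {
  out += coeff * (in - out);
}

// Fraction of the distance to the target still remaining after n samples.
inline float RemainingAfter(float coeff, int n) {
  float cur = 0.f;
  const float target = 1.f;
  for (int i = 0; i < n; ++i)
    OnePole(cur, target, coeff);
  return target - cur;
}

// RemainingAfter(.00007f, 48000) is roughly 0.035: the smoothed value covers
// most of a step within a second, so it still tracks knob noise.
// RemainingAfter(.00000052f, 48000) is roughly 0.975: it has barely moved
// after a second, so the delay time glides for minutes and the pitch bends.
```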

Any ideas or suggestions? I’m having trouble seeing what I’m missing.

It’s hard to understand what exactly you’re doing without seeing the code.

Other than that, it sounds like you’re getting a Doppler effect due to the slow change of delay time. Changing the playback position in a delay line changes pitch; that is the expected result, as you’re just playing audio back at a faster/slower rate. Typically pitch-shifting delays exploit this by crossfading between 2 delay lines.
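A rough illustration of that idea (a hypothetical minimal implementation, not what DaisySP does internally): keep two read taps into the same buffer, and when the delay time changes, jump the tap and crossfade to it instead of sliding the read position:

```cpp
#include <vector>
#include <cstddef>

// Minimal crossfading delay sketch: jumping the read tap while fading between
// the old and new positions avoids the Doppler-style pitch bend you get from
// sliding the read position continuously.
struct XfadeDelay {
  std::vector<float> buf;
  size_t w = 0;             // write index
  size_t tap_old, tap_new;  // delay amounts in samples
  float fade = 1.f;         // 0 = old tap, 1 = new tap
  float fade_step;

  XfadeDelay(size_t max_size, size_t delay, float fade_samples = 480.f)
      : buf(max_size, 0.f), tap_old(delay), tap_new(delay),
        fade_step(1.f / fade_samples) {}

  void SetDelay(size_t d) {  // start a fade towards the new tap
    tap_old = tap_new;
    tap_new = d;
    fade = 0.f;
  }

  float ReadTap(size_t d) const {
    return buf[(w + buf.size() - d) % buf.size()];
  }

  float Process(float in) {
    buf[w] = in;
    float out = (1.f - fade) * ReadTap(tap_old) + fade * ReadTap(tap_new);
    if (fade < 1.f)
      fade = (fade + fade_step > 1.f) ? 1.f : fade + fade_step;
    w = (w + 1) % buf.size();
    return out;
  }
};
```

During the fade the two taps are briefly mixed, which can comb-filter a little, but there is no sustained pitch shift.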


Ah yes. Thank you. Here is the sketch:

#include "DaisyDuino.h"

#define MAX_DELAY (48000 * 40)
#define DEL 1

static DaisyHardware pod;

static DelayLine<float, MAX_DELAY> DSY_SDRAM_BSS dell;
static DelayLine<float, MAX_DELAY> DSY_SDRAM_BSS delr;
int mode = DEL;

float sample_rate;
float currentDelay, feedback, delayTarget, cutoff;

float drywet;

// Helper functions
void Controls();

void GetDelaySample(float &outl, float &outr, float inl, float inr);

void AudioCallback(float **in, float **out, size_t size) {
  float outl, outr, inl, inr;

  Controls(); // read knobs and update parameters once per block

  // audio
  for (size_t i = 0; i < size; i ++) {
    inl = in[0][i];
    inr = in[1][i];

    GetDelaySample(outl, outr, inl, inr);

    // left out
    out[0][i] = outl;

    // right out
    out[1][i] = outr;
  }
}

void setup() {
  // Inits and sample rate
  pod = DAISY.init(DAISY_POD, AUDIO_SR_48K);
  sample_rate = DAISY.get_samplerate();


  // delay parameters
  currentDelay = delayTarget = sample_rate * 0.75f;
  dell.Init();
  delr.Init();
  dell.SetDelay(currentDelay);
  delr.SetDelay(currentDelay);


  // start callback
  DAISY.begin(AudioCallback);
}

void loop() {}

void UpdateKnobs(float &k1, float &k2) {
  k1 = analogRead(PIN_POD_POT_1) / 1023.f;
  k2 = analogRead(PIN_POD_POT_2) / 1023.f;

  float m = (float)MAX_DELAY - .05 * sample_rate; // >>>>> Not sure what the role of 'm' is in the sketch

  delayTarget = k1 * m + .05 * sample_rate;
  feedback = .17;
}

void UpdateLeds(float k1, float k2) {
  pod.leds[0].Set(mode == 2, mode == 1, mode == 0 || mode == 2);
  pod.leds[1].Set(mode == 2, mode == 1, mode == 0 || mode == 2);
}

void Controls() {
  float k1, k2;
  delayTarget = feedback = drywet = 0;
  UpdateKnobs(k1, k2);
  UpdateLeds(k1, k2);
}

void GetDelaySample(float &outl, float &outr, float inl, float inr) {

  // fonepole(currentDelay, delayTarget, .00007f);                     // This was the original line which doesnt filter out adc noise
  fonepole(currentDelay, delayTarget, 1.0f / MAX_DELAY);           // NOTE: this gets rid of noise but affects pitch and adds a bit of delay

  dell.SetDelay(currentDelay);
  delr.SetDelay(currentDelay);

  outl = dell.Read();
  outr = delr.Read();

  dell.Write((.01 * outl) + inl);                         // cutting feedback amount to .01
  outl = (feedback * outl) + ((1.0f - feedback) * inl);   // monitor input
  //  outl = outl; // no input monitoring

  delr.Write((.01 * outr) + inr);
  outr = (feedback * outr) + ((1.0f - feedback) * inr); // monitor input
  //  outr = outr; // no input monitoring
}


Your delay is very long, therefore any tiny amount of noise in the control creates significant pitch changes or artifacts. You could try these ideas:
– use a higher-order filter (2 or 4 poles), but with a reasonable cutoff frequency.
– use a dead zone, a hysteresis effect on a small range. For example if the current delay is set to T, any target delay value in the [T-d, T+d] range won’t change the current time.
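The dead-zone idea might look like this (a hypothetical helper; the threshold d is in samples):

```cpp
#include <cmath>

// Ignore target changes smaller than the dead zone 'd', so ADC noise around a
// fixed knob position never retunes the delay.
inline float ApplyDeadZone(float current, float target, float d) {
  return (std::fabs(target - current) <= d) ? current : target;
}

// e.g. in UpdateKnobs():
//   delayTarget = ApplyDeadZone(delayTarget, k1 * m + .05f * sample_rate, 64.f);
```

The threshold trades a small amount of knob resolution for immunity to a like-sized amount of ADC noise.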