Allocating Memory (audio buffer)

Hi, I'm having trouble allocating more than 123000 samples of buffer memory in my project.

I do it this way in my graindelay class:

const size_t gBufferSize = 75000; 

then in a stutter effect class I allocate another 48000.

If I allocate more in either of these, the Daisy crashes at boot.

I can load the delayline example and allocate 192000 there, so it doesn't seem to be a hard limit.

What am I doing wrong?

The delayline example also seems to just define an array size.
Can you help? I need much bigger and many more buffers!

thanks

It would be more useful to show how you allocate the buffer, not how you create a variable holding the buffer's size.
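
For example (reusing your names, purely as an illustration), these two lines do very different things:

const size_t gBufferSize = 75000;  // just a size constant; this allocates nothing
float gAudioBuffer[gBufferSize];   // this actually creates the 75000-float buffer (~300 KB)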

Maybe I am doing it wrong? I assumed I created the buffer there in the header file.
In the cpp I just write and read it, like gAudioBuffer[writeIndex] and gAudioBuffer[readIndex].

Could you give me an example of how to create a bigger buffer?

Maybe "allocating memory" was the wrong term.

There are plenty of simple example programs which create buffers.
What effect are you trying to use?

I am building my own: stutters, slicers, all kinds of buffer-related stuff. So should I take the delayline as an example?

But I cannot really see what it does differently,
except that you can set the size in the delayline cpp,
while I go with a fixed size.

I also looked at looper.h, but that is really cryptic.

Be specific - what ‘delayline cpp’ are you referring to?

OK, you probably mean DaisyExamples/seed/DSP/delayline/delayline.cpp

OK - that example sets up a delay line of 0.75 seconds:

// this sets the size, but doesn’t actually create the delay line:
#define MAX_DELAY static_cast<size_t>(48000 * 0.75f)
// Next line actually creates the storage:
// Declare a DelayLine of MAX_DELAY number of floats.
static DelayLine<float, MAX_DELAY> del;

As written, that creates the buffer in the STM32's built-in SRAM, which limits the size to about 2.5 seconds at 48 kHz (the 512 KB of SRAM holds roughly 131,000 floats).

But if you add DSY_SDRAM_BSS to that declaration, the buffer is placed in the 64 MB external SDRAM, allowing a really LONG delay:

static DelayLine<float, MAX_DELAY> DSY_SDRAM_BSS del;
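
Putting it together, here's a minimal sketch of a complete program in that style (my own sketch, not the exact DaisyExamples code; the feedback amount and callback details are illustrative):

#include "daisy_seed.h"
#include "daisysp.h"

using namespace daisy;
using namespace daisysp;

static DaisySeed hw;

// 60 seconds at 48 kHz:
#define MAX_DELAY static_cast<size_t>(48000 * 60)

// Place the delay line's buffer in the 64 MB external SDRAM:
static DelayLine<float, MAX_DELAY> DSY_SDRAM_BSS del;

void AudioCallback(AudioHandle::InterleavingInputBuffer  in,
                   AudioHandle::InterleavingOutputBuffer out,
                   size_t                                size)
{
    for(size_t i = 0; i < size; i += 2)
    {
        float dry = in[i];
        float wet = del.Read();      // read the delayed sample
        del.Write(dry + wet * 0.5f); // write input plus some feedback
        out[i] = out[i + 1] = dry + wet;
    }
}

int main(void)
{
    hw.Init();  // initializes the board, including the SDRAM controller
    del.Init(); // only touch the SDRAM buffer after hw.Init()
    del.SetDelay(static_cast<float>(MAX_DELAY - 1));
    hw.StartAudio(AudioCallback);
    while(1) {}
}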

When you build a program for Daisy, the Makefile displays how the various memory regions are used. Here's the output when I set the delay to 60 seconds, in the SDRAM (60 s × 48,000 samples/s × 4 bytes per float ≈ 11.5 MB):
Memory region  Used Size    Region Size  %age Used
FLASH:         73280 B      128 KB       55.91%
DTCMRAM:       0 GB         128 KB       0.00%
SRAM:          12188 B      512 KB       2.32%
RAM_D2:        16 KB        288 KB       5.56%
RAM_D3:        0 GB         64 KB        0.00%
BACKUP_SRAM:   12 B         4 KB         0.29%
ITCMRAM:       0 GB         64 KB        0.00%
SDRAM:         11520012 B   64 MB        17.17%
QSPIFLASH:     0 GB         8 MB         0.00%


Thanks teleplayer, this is exactly what I was looking for.
My limit of 123000 samples sounds about right for roughly 2.5 secs at 48k.

I hope I'm not asking too much, but how would I implement this?
It should not be in main.cpp, but rather in the effect's .h or .cpp.

Here is my graindelay.h as an example; it shows errors, because I don't really know how to write the DSY_SDRAM_BSS line:

#pragma once

#include <cmath>
#include <algorithm>
#include "my_effect.h"
#include "daisy_seed.h"
#include <random>

const size_t gBufferSize = 192000; 



class gdelay : public my_effect {
private:
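    // (this next line is the one that errors out; the syntax mixes up the
    //  class name, the buffer size, and the variable being declared)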
    static gdelay<int 192000>DSY_SDRAM_BSS gBufferSize;
    
    std::mt19937 rng; // Mersenne Twister random number generator
    std::uniform_real_distribution<float> dist; // Uniform distribution
public:
    float gaudioBuffer[gBufferSize];
    float latchedGSize[8] = {0};

    float gCounter = 0;
    float gCounter_ = 0;
    int gTrig = 0;

    float walkIndex[8] = {0};
    float walkIndex_[8] = {0};
    float gwindow[8] = {0};
    float readIndex[8] = {0};
    float rawIndex=0;
    float rawIndex_ = 0;    
    int writeIndex = 0;

    float speed = .25f;
    float pitch = 1.0f;

    float gSize = 1028; 
    float sample[8] = {0};
    float index0[8] = {0};
    float index1[8] = {0};
    float frac[8] = {0};
    float xfade = 0;
    float xfadeIn = 0, xfadeOut = 0;
    float rate = 0;
    float windowedSample = 0;

    float jitter = 0;
    
    


    gdelay();
    ~gdelay();

    void process(float* in, float* out);
    float interpolate(float a, float b, float t);
    float random(float min, float max);
};

//#endif // GDELAY_H

Got it:

#pragma once

#include <cmath>
#include <algorithm>
#include "my_effect.h"
#include "daisy_seed.h"
#include <random>

const size_t gBufferSize = 192000; // Adjust to stay within memory limits



class gdelay : public my_effect {
private:
    // Allocate a buffer of gBufferSize floats in SDRAM
    static float DSY_SDRAM_BSS sdram_buffer[gBufferSize];
    
    std::mt19937 rng; // Mersenne Twister random number generator
    std::uniform_real_distribution<float> dist; // Uniform distribution
public:
    float gaudioBuffer[gBufferSize];

Unfortunately, it is not working at all.
It shows SDRAM usage while compiling, but the app crashes immediately.

This is my .h:

#pragma once

#include "my_effect.h"
#include "daisy_seed.h"

const size_t kBufferSize = 192000;

class stutter : public my_effect {
private:   

public:
    static float DSY_SDRAM_BSS kBuffer[kBufferSize];
    float* kaudioBuffer;

and the cpp:

#include "stutter.h"
#include <algorithm> // for std::fill
#include <cmath>     // for fmod

float stutter::kBuffer[kBufferSize];


// Constructor
stutter::stutter() {
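    // NB: for a global stutter object, this constructor runs before main(),
    // i.e. before the hardware (and the SDRAM) is initialized.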
    std::fill(kBuffer, kBuffer + kBufferSize, 0.0f);
    
    kaudioBuffer = kBuffer; 
     
}

// Destructor
stutter::~stutter() {

}

float stutter::interpolate(float a, float b, float t) {
    return a + t * (b - a);
}

// Process function
void stutter::process(float* in, float* out) 

It is unrealistic to expect somebody to debug that from a snippet of code.

I understand that, but this is just the relevant code, including the memory allocation.

The code works perfectly when I don't attempt to use SDRAM.

What should I do?

ChatGPT is no help here, unfortunately…

What to do?

Provide a minimal piece of complete, buildable, runnable code which demonstrates what works, and what doesn’t work.

This seems (to me, at least) obvious, when asking others to figure out your stuff.

But I suspect the problem is, you can’t access SDRAM in code which runs before the board is initialized. So, don’t use the constructor for this.

This is why, in Daisy, many classes use an Init() function, which is called after the hardware Init() is called.
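
For example, here's a minimal sketch of how the stutter class could be rearranged (my own sketch based on your snippets, not tested on hardware): keep the constructor empty, and clear the SDRAM buffer in an Init() that you call after the board is initialized.

// stutter.h (sketch)
#pragma once
#include "my_effect.h"
#include "daisy_seed.h"

const size_t kBufferSize = 192000;

class stutter : public my_effect {
public:
    static float DSY_SDRAM_BSS kBuffer[kBufferSize];
    float* kaudioBuffer;

    stutter() {} // do NOT touch kBuffer here: this can run before hw.Init()
    void Init(); // clear and attach the buffer here instead
    void process(float* in, float* out);
};

// stutter.cpp (sketch)
#include "stutter.h"
#include <algorithm>

float stutter::kBuffer[kBufferSize];

void stutter::Init()
{
    // Safe now: the board, including the SDRAM controller, is already up.
    std::fill(kBuffer, kBuffer + kBufferSize, 0.0f);
    kaudioBuffer = kBuffer;
}

// main.cpp (sketch)
// daisy::DaisySeed hw;
// stutter st;
//
// int main(void)
// {
//     hw.Init(); // brings up the SDRAM
//     st.Init(); // only now clear the SDRAM buffer
//     ...
// }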

Thanks so much teleplayer, this saved the day.