Changing audio buffer/blocksize of the Daisy Seed

I tried to change the audio buffer size by calling SetAudioBlockSize() before StartAudio(), as described in the libdaisy reference:

I set up a simple program to test it, setting the audio block size to 48000. After uploading I expected it to create a latency of 1 second, but when sending audio through it, the output was still instantaneous.

Why isn’t it delayed by 1 second? Or is my expectation wrong?

Here is my program:
#include "daisysp.h"
#include "daisy_seed.h"

using namespace daisysp;
using namespace daisy;

static DaisySeed seed;

static void AudioCallback(float *in, float *out, size_t size)
{
    // interleaved stereo buffer; copy the left channel through.
    for(size_t i = 0; i < size; i += 2)
    {
        out[i] = in[i];
    }
}

int main(void)
{
    // initialize seed hardware.
    seed.Configure();
    seed.Init();

    // set blocksize.
    seed.SetAudioBlockSize(48000);

    // start callback
    seed.StartAudio(AudioCallback);

    while(1) {}
}



This is what ends up being called by that function -

So 128 samples is the upper limit for the block size.

Thanks a lot! I have now checked the latency with a scope, and it does indeed work.

Orange is measured at daisy audio input.
Purple is measured at daisy audio output.

As you can see, the total latency at a blocksize of 1 is roughly 0.8 ms, which corresponds with the calculation:
Blocksize delay = 1/48000 = 20.83 µs
Group delay ADC-LPF = 18/48000 = 375 µs
Group delay DAC-LPF = 21/48000 = 437.5 µs
20.83 + 375 + 437.5 = 833.33 µs

The group delays are from the AK4556 datasheet.


Thanks for this post! I had an issue with aliasing because I was only updating my CV inputs one thousand times a second. Changing the buffer size to be smaller got rid of a ton of aliasing.


Came across this thread. Fully written out, the input-to-output latency calculation should be the following, which matches the scope pictures better for the 48 and 128 blocksizes.

(input group delay) + (input blocksize)/(input samplerate) + (output blocksize)/(output samplerate) + (output group delay)

blocksize of 48 -> 375 µs + (2 × 48 / 48000) s + 437.5 µs = 2.8125 ms
blocksize of 128 -> 375 µs + (2 × 128 / 48000) s + 437.5 µs = 6.146 ms


And you can generally add 1 more sample delay in each direction, because the transfer of a sample frame between the MCU and the codec via I²S or similar protocols generally takes the time of one sampling period (or slightly less, depending on the padding).

So, that would add about 2 × 20.8 µs at 48 kHz?