Hi,
I tried to change the audio buffer size by calling SetAudioBlockSize() before StartAudio(), as described in the libdaisy reference.
I set up a simple program to test it, setting the audio block size to 48000. After uploading, I expected this to create a latency of 1 second, but when sending audio through it, the output was still instantaneous.
Why isn't it delayed by 1 second? Or is my expectation wrong?
Here is my program:
```cpp
#include "daisysp.h"
#include "daisy_seed.h"

using namespace daisysp;
using namespace daisy;

static DaisySeed seed;

static void AudioCallback(float *in, float *out, size_t size)
{
    for(size_t i = 0; i < size; i += 2)
    {
        // Pass the left channel straight through.
        out[i] = in[i];
    }
}

int main(void)
{
    // Initialize seed hardware.
    seed.Configure();
    seed.Init();

    // Set the block size before starting the audio.
    seed.SetAudioBlockSize(48000);
    seed.StartAudio(AudioCallback);

    while(1) {}
}
```
As you can see, the total latency at a block size of 1 is roughly 0.8 ms, which corresponds to the calculation:
Block size delay = 1/48000 = 20.83 µs.
Group delay ADC LPF = 18/48000 = 375 µs.
Group delay DAC LPF = 21/48000 = 437.5 µs.
20.83 + 375 + 437.5 = 833.33 µs.
Thanks for this post! I had an issue with aliasing because I was only updating my CV inputs one thousand times a second. Changing the buffer size to be smaller got rid of a ton of aliasing.
Came across this thread. Written out fully, the input-to-output latency calculation should be the following, which matches the scope pictures better for block sizes of 48 and 128.
(input group delay) + (input blocksize)/(input samplerate) + (output blocksize)/(output samplerate) + (output group delay)
blocksize of 48 -> 375 us + (2 * 48 / 48000) s + 437.5 us = 375 us + 2000 us + 437.5 us = 2.8125 ms
blocksize of 128 -> 375 us + (2 * 128 / 48000) s + 437.5 us = 375 us + 5333.3 us + 437.5 us = 6.146 ms
And you can generally add 1 more sample delay in each direction, because the transfer of a sample frame between the MCU and the codec via I²S or similar protocols generally takes the time of one sampling period (or slightly less, depending on the padding).