So I did get this to work - seems a-ok.
There were some gotchas to look out for though - I was generating my filter coefficients using Octave's tf2sos(B, A) function.
Note that the difference equation used by Octave's filter is:
y(n) = - SUM c(k+1) y(n-k) + SUM d(k+1) x(n-k)   for 1 <= n <= length(x)
where the first sum runs over k = 1:na-1, the second over k = 0:nb-1, c = a/a(1), and d = b/a(1).
Whereas the ARM DSP uses:
y[n] = b0 * x[n] + b1 * x[n-1] + b2 * x[n-2] + a1 * y[n-1] + a2 * y[n-2]
i.e. the a coefficients have opposite signs, and the a0 column of 1.0s is removed.
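To make the sign flip concrete, here's a small sketch of the conversion using Python's scipy instead of Octave (the scipy SOS layout [b0, b1, b2, a0, a1, a2] matches what tf2sos produces). The biquad function below is a hypothetical reference implementation of the ARM-style difference equation, not the actual CMSIS code, and it checks that the negated-a form reproduces scipy's own sosfilt output:

```python
import numpy as np
from scipy.signal import butter, sosfilt

# Design a 2nd-order lowpass; each SOS row is [b0, b1, b2, a0, a1, a2]
sos = butter(2, 0.2, output='sos')

# Convert to the ARM-style layout: drop the a0 column (all 1.0)
# and negate a1, a2 -> each stage becomes {b0, b1, b2, -a1, -a2}
cmsis = np.hstack([sos[:, :3], -sos[:, 4:6]])

# Reference output from scipy's own SOS filter
x = np.random.default_rng(0).standard_normal(64)
ref = sosfilt(sos, x)

def biquad(coeffs, x):
    # ARM-style difference equation:
    # y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] + a1*y[n-1] + a2*y[n-2]
    # (note the + signs on the feedback terms)
    b0, b1, b2, a1, a2 = coeffs
    y = np.zeros_like(x)
    x1 = x2 = y1 = y2 = 0.0
    for n, xn in enumerate(x):
        yn = b0*xn + b1*x1 + b2*x2 + a1*y1 + a2*y2
        x2, x1 = x1, xn
        y2, y1 = y1, yn
        y[n] = yn
    return y

# Run the cascade with the converted coefficients
out = x
for stage in cmsis:
    out = biquad(stage, out)

print(np.allclose(out, ref))  # the two conventions agree
```

The same idea applies to Octave output: take the tf2sos matrix, drop column 4, and negate columns 5 and 6 before handing the coefficients to the ARM biquad init.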
This brings up a question though - I'm doing block processing using the block size of 48 that is hard-coded in all the Daisy setup files. This is good: as @antisvin points out, it should reduce overhead. However, libdaisy requires all DSP functions to be single-sample based. What is the reason for this? I'd like to include this stuff as part of libdaisy eventually, but I'm reluctant to move to single-sample calls for it.