What if a module is large and hierarchical enough to warrant splitting it into multiple source files? All existing modules seem to reside in a single source/header file pair. The only compliant way out I see is to still split it into smaller submodules and develop them enough to be usable on their own.
Where do unit tests belong? Existing modules seem to only have examples (which, according to the guidelines, should be simple and produce sounds). I’d like to include regression-style tests that verify functional correctness, but they run for several minutes and don’t produce any sounds.
I’d prefer that this be broken down into smaller submodules as you mentioned. I guess I could see how your Convolution could be complex enough to warrant multiple source files for various things, without it necessarily mandating multiple modules. So I would be open to whatever you have in mind for that. Obviously the contents of DaisySP are mostly relatively simple modules right now.
Tests should probably get a new directory since there aren’t really any unit tests yet. This is compounded by the fact that with the Makefile as-is there isn’t a way to compile the library for x86 without manually building it.
A while ago I started some work on a little daisy-sim that would just have a Daisy-style audio callback with the ability to pipe control data in over UDP/TCP for testing stuff, but I got busy and haven’t done much on it.
Very interested in your thoughts on progressing that forward. I would love to be able to have automated tests as a lot of little bugs slip through that should be able to be caught with some planning/automation.
thanks for the info!
Here’s how it’s applicable to the convolution module.
- So far I’ve been able to break it down into fully functional submodules, namely:
  - FIR filter module (time-domain convolution)
  - FFT module (block transforms)
  - FFT filter module (streaming transform, a.k.a. fast convolution)
  - Partitioned convolution module (the ultimate goal, using all of the above)
The resulting module is still quite big, and since it consists of several internal classes, it might be a good idea to split them into separate headers. I think if they are named appropriately (e.g. starting with a common root name) it shouldn’t be a big deal. Additionally, they could be put inside a custom namespace with only the main instantiable class being exported to daisysp. I still need to see how it shapes up; so far I only plan to have two files for the partitioned convolution.
For the FIR module, as requested, I’ve made two implementations, generic and ARM-specific, with one being chosen seamlessly via an #ifdef switch.
For FFT there’s a similar placeholder structure, but only the ARM version is currently being developed. As discussed in the issue tracker, there’s a good candidate for a generic FFT implementation, and I’ve already verified that it works fine and integrates without any problems. So let me know what you think: should I go ahead and include it, or should we have a separate discussion somewhere? Structurally it consists of one folder with 16 header files, so it would stand out from the rest of the modules.
- For my own purposes I’ve created a DaisySP/tests directory with a structure and contents similar to examples, but with a tst_ prefix instead of ex_. Functionally, the tests are supposed to perform automatic verification and present readable results.
From my perspective the things you mention don’t necessarily always come together. Even without an x86 simulated environment, there’s value in having homogeneous unit tests. It may still require a board and some manual steps, but the results are repeatable, predictable and require very little human interpretation. Ideally, you execute something like `make all test` in the /tests directory and it automatically builds all tests, runs them sequentially and provides an overall result. If there’s a hardware debugger, this could be made fully automatic if we require every test to return from `main` with a meaningful result.
Next, as you suggest, an x86 build system would be created. It would still use all the test code described above, but not requiring a board or manual interaction would allow automatic CI-style verification. The only real heavy lifting is writing unit tests for all the existing modules.
I don’t think we really need a generic audio-loop type simulation. I’d prefer every module test to process a predefined or runtime-generated data vector and compare the result against a pre-computed reference, repeated with several parameter configurations. For functional blocks such as filters and oscillators it’s quite easy to come up with procedural success criteria, which is even better than precomputed vectors.
Let’s wait for my convolution stuff to be done so you can judge how it looks from the outside.
This all sounds fantastic, and I look forward to seeing the result.
I like the breakdown of the four submodules you have so far. I can see us wanting to add a lot of other FFT-based transformations down the road. They can essentially be their own sub-group of modules for spectral signals.
A few points of clarification:
- Regarding the CMSIS ARM FFT: are you porting it into DaisySP, or just linking to it from the CMSIS folders? (i.e. is it unchanged, or are we already modifying it?) Obviously, it won’t be optimized for other platforms, but I haven’t seen anything in the source that looks like it would have a problem running outside of ARM (until the generic implementation is fleshed out).
- Regarding folders and structure: I am open to having some subfolders in the modules/ folder anyway. There’s an open issue discussing this in a generic context.
- I think local namespaces for internal classes is okay.
- I like the idea of something like `make tests all` being able to run on hardware or in software, as well as the proposal to have procedural and/or pre-computed vectors for comparison. As you mentioned, this can wait until the convolution stuff is in there, so we’re not juggling too many separate things at once.
On the note of tests, portability, etc.: I’m planning on spending a little time this week or next playing around with the idea of using DaisySP in VCV Rack (which actually looks like it has a very similar Make-style build that will play well with the existing Daisy project structure). So the cross-platform nature of the library could likely start being more important sooner rather than later.
Excellent points, thanks.
I’m definitely only going as far as linking the existing CMSIS bits from libDaisy, without any modifications. Although I do override it slightly to get rid of the huge pre-initialized memory sections I mentioned previously. I’m actually very close to sharing a draft version, so you’ll soon see for yourself what I’m talking about and whether it makes sense.
For the sake of experiment I’ve also added the previously mentioned portable FFT implementation. It worked out quite well, so I’m considering leaving it in place.

One problem I haven’t found a solution for yet is that different implementations have different data layouts, and adding unifying wrappers would be very inefficient. In my accuracy/performance tests I use a pair of forward and inverse transforms and check against the input vector. This is guaranteed to work for both implementations, but as soon as you want to manipulate individual samples in the frequency domain, you have to respect each implementation’s own layout.

For convolution I only needed to multiply the frequency-domain vectors, so I’ve added Multiply() methods to both FFT classes which take care of layout differences internally. That is of course only one way. Another would be to implement access functions like GetRe(i) and GetIm(i) and hope the compiler does a good job of optimizing them out; using ARM-optimized vector math operations would no longer be possible in that case, though. A third way is to try to re-engineer FFTReal to match the ARM FFT layout and hope the performance hit is not too big.
The good news is that the whole thing is indeed cross-platform. For my own convenience I’m also compiling everything on x64 Windows (using the MSVC compiler) and it works perfectly with just a small set of stub functions (like dsy_tim_get_ticks, seed.SetLed(), etc.). I have VS2019 projects for my test cases; let’s see how to share them and make them reusable in the most convenient way.
I’ll keep thinking about the testing approach. For now I’m trying to take small successive steps to not get overwhelmed by too many things at once.
Regarding FFTReal, I totally forgot until today that Emilie from Mutable Instruments has a modified-for-arm version of FFTReal (shy_fft) in her stmlib repository that might be worth looking at as well.
And yeah, that’s a good point about the data layouts. That was one of the first things I had noticed about the CMSIS version when I first started looking into it a while back. Most other implementations I had seen were mapped a bit differently.
Great news on the cross-platform-ness. And I can’t wait to see this draft!
Thanks a lot for the pointer!
At first glance that mod would be very handy.
(Un)surprisingly, I’ve already made a few of the same changes, for instance getting rid of dynamic memory allocations and allowing runtime size changes.
Of course! Yeah, I kind of figured. As soon as I re-read your post a lightbulb went off, and I had remembered seeing a version of it somewhere.