I am an acoustic engineer and sound artist, currently participating in an open project to document acoustic heritage. I would like to implement a convolution reverb on my Daisy Pod, so that I can have a standalone device that can be loaded with impulse responses recorded in different places.
I am not a programmer, but from what I have been researching it could be done either directly in .cpp code or through gen~ in Max. I have found some reference code, but it would need to be ported to the Daisy environment.
Indeed, your use case will soon be supported natively by the DaisySP library.
Hopefully our implementation will be much more efficient than the one linked above, but thanks for the link nevertheless.
Thank you very much for the link to the project. I just read it, and one of the few things I do know about this topic is that one of the greatest difficulties of a real-time convolution algorithm is how the processing is partitioned: which parts are done in the time domain and which in the frequency domain. After that comes the actual coding. Neither of these areas is my field.
As I was saying, I am participating in several projects related to the importance of acoustic heritage, its preservation through libraries of impulse responses, and the potential of auralization in the field of sound art. Here is a link to a project I am currently helping with: