Feedback Effect (Freqout, AcouFiend, etc)

Hi all,

I’m pondering how to generate a feedback effect with my Daisy, similar to the Digitech Freqout and Blue Cat’s AcouFiend. It’s a cool effect that could apply to a lot of different instruments and would be really fun to have on the Daisy.

I have a few ideas for how to take a stab at this (I’ve tinkered with the Daisy in C++ enough to start prototyping something) but I’m so new to signal processing that I thought I might make more progress by asking others for advice.

My naive ideas:

  • feed the last X samples of the incoming signal into a buffer
  • analyze volume and frequency
  • when the overall volume drops below a certain threshold and the frequency content is fairly static, ramp the mix of the buffer back into the output signal up over time (added with some gain)
  • ramp down the mix value when incoming volume/frequency changes go back over a threshold
  • shift the pitch of the delayed signal by some detected harmonic frequency (was going to brute force this detection with an FFT)
  • slowly decay the delayed buffer (could be controlled by a footswitch)
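The trigger/ramp part of that list could be sketched as a tiny envelope follower plus a mix ramp. All names, thresholds, and rates below are invented for illustration; this isn't Daisy API code, just the bare logic:

```cpp
#include <cmath>

// Hypothetical sketch: a one-pole RMS follower plus a mix value that ramps
// up while the input is quiet and ramps back down when it gets loud again.
struct FeedbackTrigger
{
    float rms       = 0.0f;    // smoothed mean-square estimate
    float mix       = 0.0f;    // 0..1, amount of buffer fed back
    float rmsCoeff  = 0.999f;  // smoothing for the RMS follower
    float threshold = 0.05f;   // below this RMS, start feeding back
    float rampStep  = 0.0005f; // per-sample mix ramp increment

    float Process(float in)
    {
        // one-pole smoothed mean-square, square-rooted for RMS
        rms = rmsCoeff * rms + (1.0f - rmsCoeff) * in * in;
        bool quiet = std::sqrt(rms) < threshold;
        mix += quiet ? rampStep : -rampStep;
        if (mix < 0.0f) mix = 0.0f;
        if (mix > 1.0f) mix = 1.0f;
        return mix; // multiply this against the delayed buffer signal
    }
};
```

The frequency-stability check from the list would gate `quiet` with a second condition once a pitch estimate is available.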

Any ideas? I’ll make a simple prototype for my Pod tonight and share the repo on GitHub.

That won’t work as-is, because an FFT reports magnitude in frequency bins. You can use it to see how much energy you have in various parts of the spectrum, but you won’t get an exact frequency value.

The second problem is that it won’t be easy to get an FFT working on the Daisy, because you need to use CMSIS with LUTs that won’t fit in internal flash. Running the FFT without those precomputed tables would likely be too slow.

You may also try using something like this to detect pitch - https://github.com/pichenettes/eurorack/blob/master/tides2/ramp_extractor.h . Or there are a few other ramp extractors in the Mutable Instruments repo; not sure which one would work better for audio.

Thanks! I was going to try the HPS (harmonic product spectrum) technique once I could compute the FFT. If my understanding of it is correct…

This article may help on how to slim down the compiled in look up tables: Reducing FFT code size of CMSIS DSP – M0AGX

I’ll check out the ramp extractors, thanks!

You could use parabolic interpolation to estimate inter-bin frequencies… That assumes, of course, a usable FFT implementation and enough cycles to do so.
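For reference, a minimal sketch of that parabolic (quadratic) interpolation step; the function names are mine, not from any library:

```cpp
#include <cmath>

// Parabolic peak interpolation: given the magnitude of a spectral peak bin
// and its two neighbours, estimate the fractional bin offset of the true
// peak. Conventionally done on log-magnitudes.
float ParabolicOffset(float magLeft, float magPeak, float magRight)
{
    float a = std::log(magLeft);
    float b = std::log(magPeak);
    float c = std::log(magRight);
    float denom = a - 2.0f * b + c;
    if (denom == 0.0f)
        return 0.0f;
    return 0.5f * (a - c) / denom; // offset in bins, in (-0.5, 0.5)
}

// Convert a peak bin index plus fractional offset to Hz.
float BinToHz(int bin, float offset, float sampleRate, int fftSize)
{
    return (bin + offset) * sampleRate / (float)fftSize;
}
```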

Sure, there are ways to increase FFT resolution. Still, there are plenty of problems here - e.g. the FFT gives you the least accurate results in the lower part of the spectrum (where the fundamental is!), since the bins are spaced linearly rather than exponentially, which would make more sense for pitch. And then there’s latency, since you need to measure over a longer period of time to gather enough data to process.

There are FFT-based algorithms for pitch detection, but none of them boils down to just “brute force with FFT”, unfortunately :wink:

It would be interesting to see how something like kissfft runs on H7, I hope it won’t perform worse than CMSIS on F4.

Thanks, that’s helpful and exactly why I’m asking for suggestions!

I’m doing some prototyping on the desktop first, so I can work without constraints (apart from my own naivete). I got a basic harness working last night with a recorded signal, using RMS volume detection to trigger the effect (which right now just feeds the delayed signal, ramped up, back into the dry signal). I think having a resonating bandpass at the detected frequency might help… I’ll post as I make progress, but please do keep the candid comments coming; I really appreciate it.

Here is another approach that works more like the real-life guitar/amplifier feedback process:

From a note trigger, do a coarse FFT or pitch detect to find approximately what the note frequency is. Then apply a peaking filter to the input at the frequency of the desired interval above the note to enhance/extract the natural harmonic content of the signal. Feed this into a delay buffer. Take an output from the buffer at a position that corresponds with one wavelength of your desired harmonic. Feed it back to the input, controlling the gain to get the desired build, sustain, and decay with respect to the dry signal envelope.
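A bare-bones sketch of that delay-plus-feedback core, leaving out the peaking filter stage and the envelope-following gain control for brevity. The class and method names are invented, not a Daisy API:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Delay line whose length is one period of the target harmonic, with the
// output fed back into the input at a controllable gain.
class ResonantPipe
{
  public:
    ResonantPipe(float sampleRate, float harmonicHz, float feedbackGain)
    : buf_((size_t)std::lround(sampleRate / harmonicHz), 0.0f),
      pos_(0),
      gain_(feedbackGain)
    {
    }

    float Process(float in)
    {
        float delayed = buf_[pos_]; // read from one period back
        float out     = in + gain_ * delayed;
        buf_[pos_]    = out;        // write input plus feedback
        pos_          = (pos_ + 1) % buf_.size();
        return out;
    }

  private:
    std::vector<float> buf_;
    size_t             pos_;
    float              gain_;
};
```

Because the delay equals one period of the target harmonic, energy at that frequency re-enters in phase on every pass and builds up while the feedback gain is close to 1, which is the "build" part of the effect.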

Of course the pitch detection would only be monophonic but the FFT version would be polyphonic, if you can do it.

Does this make any sense?

EDIT: Another thought: why not use a couple dozen 1/3- or 1/4-octave bandpass filters and detectors to find the approximate lowest fundamental of the signal, then use the appropriate harmonic-interval-up bandpass output to feed the “resonant pipe”, as described above?

This should be well within the capability of the Daisy processor. Easier IMO than an FFT and still polyphonic. :slightly_smiling_face:

Fascinating. I’m going to kind of write that back in different words to make sure I get it :slight_smile:

Bandpass the signal (some small window of the last X samples) N times (using bandpasses at increments of 22kHz/N) into a bunch of strips/buckets/bands (kind of like an EQ, I guess), to find the lowest group with the most energy/volume and guess where the fundamental lies? How do you pick the band - the lowest one with sufficient/threshold energy, or the max?

For the resonant pipe, are you suggesting using the guessed fundamental wavelength (shifted by the desired interval) as both the value for the peaking/band+resonance filter applied prior to writing to the delay and also as the read offset into the delay line? That read offset of one wavelength back is fascinating, I can kind of glean what that’ll do (some form of constructive interference with the dry signal). Can’t wait to mess with that tonight…

I’m working on a port of the bitstream autocorrelation algorithm, from the Q library for time domain pitch detection. It’s monophonic, (which is fine for my use case as a vocalist), and seems promisingly efficient at the lowest possible latency. Unfortunately the modern C++ is making it difficult for me to engineer back down to something more rudimentary. There is a simplified version here, which I’m working off of at the moment. I’d be grateful for any help with getting it into DaisySP.
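For anyone curious, here is a naive, unpacked sketch of the idea behind bitstream autocorrelation: reduce the signal to its sign bits, then find the lag at which the bit pattern best matches itself. The real Q-library implementation packs the bits into machine words and correlates them with XOR + popcount, which is what makes it fast; this version only illustrates the principle, and the names are mine:

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// Estimate pitch by sign-bit self-similarity. Assumes x.size() >= 2 * maxLag.
float EstimatePitch(const std::vector<float>& x, float sampleRate,
                    size_t minLag, size_t maxLag)
{
    // 1) reduce the signal to zero-crossing sign bits
    std::vector<uint8_t> bits(x.size());
    for (size_t i = 0; i < x.size(); ++i)
        bits[i] = x[i] >= 0.0f ? 1 : 0;

    // 2) find the lag with the fewest sign mismatches
    size_t bestLag = minLag;
    size_t bestErr = maxLag + 1; // window size is maxLag, so worse than any
    for (size_t lag = minLag; lag <= maxLag; ++lag)
    {
        size_t err = 0;
        for (size_t i = 0; i < maxLag; ++i) // fixed-size comparison window
            err += bits[i] != bits[i + lag];
        if (err < bestErr)
        {
            bestErr = err;
            bestLag = lag;
        }
    }
    return sampleRate / (float)bestLag;
}
```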

Yes @cirrus, that is what I was thinking.

I would run the 1/N-octave bandpasses and detectors continuously, then make your lowest-fundamental decision some time after the note trigger. The bandpass filters will take time to respond to the note-on transient, depending on their Q.

As to what criteria to use for deciding which bucket has the lowest fundamental, my guess is it may not be the actual highest amplitude bucket. Maybe the lowest frequency bucket that exceeds some fraction of the highest amplitude bucket?

On the other hand, picking the highest volume bucket might work more like real-world feedback, even if it isn’t the lowest fundamental. (You might actually get some interesting harmonic-hopping feedback “accidents” if you guess wrong!) Others could weigh in, or you could experiment with real-world signals.

Another thought: You may be able to refine your frequency estimate by interpolating between the lowest two bucket amplitudes. Lots of possibilities!
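The "lowest band over a fraction of the loudest band" rule above could look like this (the function name and the 0.25 default are arbitrary placeholders to tune against real signals):

```cpp
#include <cstddef>
#include <vector>

// Pick the lowest-frequency band whose detector level exceeds some fraction
// of the loudest band's level. Bands are assumed ordered low to high.
int LowestFundamentalBand(const std::vector<float>& levels,
                          float fraction = 0.25f)
{
    float maxLevel = 0.0f;
    for (float l : levels)
        if (l > maxLevel)
            maxLevel = l;
    if (maxLevel <= 0.0f)
        return -1; // silence: no decision
    for (size_t i = 0; i < levels.size(); ++i)
        if (levels[i] >= fraction * maxLevel)
            return (int)i; // first (lowest) band over the threshold
    return -1;
}
```

Swapping the second loop for a plain "index of the maximum" gives the loudest-bucket behaviour, including the harmonic-hopping "accidents" mentioned above.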

@recursinging I can take a quick stab at this tonight, at least see if I can get you something compiling!

EDIT: I got it compiling fine on my DaisyPod and I wrote a quick hacky test use-case with the audio callback generating a tone (can also use the input on the Pod, but mine is really noisy for some reason).

Some sample output:

guessing 445.454559hz
guessing 443.216095hz
guessing 445.454559hz
guessing 438.805969hz
guessing 445.454559hz

It does OK at guessing a 440Hz tone, although it’s definitely quite taxing in that callback function; you can hear skipping as it can’t keep up. Maybe we should start a separate thread, or I’m happy to chat about it in GitHub issues on that repo if you prefer.

I have an old Roland SPV-355 P/V synth… It does analog pitch-to-voltage detection to control the voice, and the service manual does a pretty good job of explaining it here:

There is a switch to control some input filtering (e.g. guitar vs. brass or woodwind) - something to think about regarding any pre-processing of your signal before you try to do pitch detection on it.

It does work, though. Well, to be fair, it works as well as something 40-odd unrestored years old could, and sometimes turns out some funky weirdness - and other times it would track really well. The portamento slider helped prevent it from occasionally going off into FM-like sounds, which can also be fun in itself.

Who knows, maybe some ideas in that description in the service manual for you!

Good luck!

@hammondeggsmusic very cool, thanks, will definitely peek (that thing is intense)

@donstavely I didn’t get to try out any of your ideas yet, but I’m super eager to give it a shot tomorrow. I got caught up toying with @recursinging’s bit stream autocorrelation stuff tonight.

Very cool, thanks for the effort.

Hmm. I was hoping for better performance, both in accuracy and resource use. I’ll give your test code a try later and see if I can profile it a bit; perhaps there is some low-hanging fruit there. Since this is getting a bit specific, I’ll open an issue on GitHub and we can move the discussion there.

I was so intrigued by the discussion that I wanted to see just what the Daisy can do. Yesterday I coded up a filter bank of (32) 1/4-octave SV filters, covering from 50Hz to 10kHz. My processor-utilization blinky LED says it uses less than 1/4 of the Daisy horsepower! (standard 48kHz sample rate, 48-sample callback). Here is a screenshot of ARTA showing the lowest, middle, and highest two bands:

This is very encouraging! :slightly_smiling_face:

Next I want to add 32 envelope detectors to use for estimating pitch. Keep in mind that my approach does not require perfect pitch detection, since I want to use the filter bank to isolate/extract the natural harmonic content of the signal, rather than synthesizing a harmonic signal. Then again, if the fundamental can be cleanly isolated, accurate pitch detection on a near-sine wave is easy.

@donstavely you hero! I really need to get more technical with this stuff. I’ve been wanting to finally dive into this deeply. I’m going to ramble off an onslaught of questions, but feel free to not answer any :slight_smile: Is an SV filter a technique for bandpassing the signal? In using ARTA, are you capturing the line output from Daisy with your audio interface? What are you sending to line-out, filtered or raw signal?

Not generating an artificial pitch should make for a more realistic effect! The only reason I thought detection was necessary was to then pitch shift to get to the desired interval ringing in the feedback signal. But isolating “the bucket” of frequencies and shifting that set would probably do the same thing, or at least get close enough. And getting it to work without the shift is enough to validate the basic concept too!

I’ve been doing something with a crossover filterbank and feel like adding my 2 cents to this.

Not sure if it was taken into consideration, but if those are 2nd-order SVFs then you’ll get a +3dB boost at the crossover frequencies. This would hurt the flatness of the summed response to some extent. To get a flat magnitude response at all frequencies you’d have to use 4th-order SVFs. That would use slightly less than 2x the CPU if done right. However, if you’re using the SVF from DaisySP, its code is not very efficient - it performs roughly 3x more operations than necessary.

You can get much better performance by using SVF implementation based on this - https://cytomic.com/files/dsp/SvfLinearTrapOptimised2.pdf
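For anyone who doesn't want to open the PDF, the core of that linked algorithm (a linear, trapezoidal-integration SVF) is only a handful of multiply-adds per sample. This sketch follows the variable names in the paper; any deviation is my own simplification:

```cpp
#include <cmath>

// Linear trapezoidal state variable filter (after Simper's
// SvfLinearTrapOptimised2 paper). One pass yields low/band/high together.
struct TrapSvf
{
    float g, k, a1, a2, a3;            // coefficients
    float ic1eq = 0.0f, ic2eq = 0.0f;  // integrator states
    float low = 0.0f, band = 0.0f, high = 0.0f;

    void SetParams(float cutoffHz, float q, float sampleRate)
    {
        g  = std::tan(3.14159265f * cutoffHz / sampleRate);
        k  = 1.0f / q; // damping
        a1 = 1.0f / (1.0f + g * (g + k));
        a2 = g * a1;
        a3 = g * a2;
    }

    void Process(float v0)
    {
        float v3 = v0 - ic2eq;
        float v1 = a1 * ic1eq + a2 * v3;
        float v2 = ic2eq + a2 * ic1eq + a3 * v3;
        ic1eq = 2.0f * v1 - ic1eq;
        ic2eq = 2.0f * v2 - ic2eq;
        low  = v2;
        band = v1;
        high = v0 - k * v1 - v2;
    }
};
```

Note that the only transcendental call (`tan`) happens at coefficient-update time, not per sample, which is what makes a large filter bank affordable.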

State variable filters have the advantages that a) lowpass, highpass, bandpass, and notch responses are available simultaneously, and b) the parameters that control frequency and damping are independent of each other. There is tons of info on the web about both analog and digital versions.

For this application, we want a peaking (high Q, low damping factor) response. The screenshot shows the bandpass responses. I may switch to highpass with the same peaking in order to get greater rejection of the fundamental when trying to isolate a higher harmonic. If you want to pursue pitch-shifting of the fundamental to generate a harmonic, you would stick with bandpass.

BTW, if you want to pursue pitch-shifting, you probably want to tailor the buffer length to the approximate note frequency in order to avoid the “warble” that happens with a fixed buffer size. This happens due to phase cancellation during the crossfade. Very doable, but I don’t think the DaisySP pitch-shift algorithm allows a settable buffer length, so you would need to code your own.
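The buffer-length idea can be as simple as rounding the buffer to a whole number of periods of the detected note, so the spliced grains stay phase-aligned across the crossfade. A purely illustrative helper (not DaisySP API):

```cpp
#include <cmath>

// Choose a pitch-shift buffer length that is a whole number of periods of
// the detected note, reducing phase cancellation during the crossfade.
int PeriodLockedBufferSize(float noteHz, float sampleRate, int periods)
{
    int period = (int)std::lround(sampleRate / noteHz);
    return period * periods;
}
```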

For ARTA, I have a USB audio interface, but right now I am just using the internal audio and headphone/mic jacks on my laptop, connected to the Pod line in and out. Also, Soundcard Scope is another free tool that I have found very useful.

@antisvin, yes, I have used 4th-order SV filters for crossover applications, but I don’t need flat response here, as I am trying to identify the approximate frequency of the fundamental, and then isolate it or a harmonic. Still, 4th-order will give better rejection, so it may be the way to go if there are cycles available.

I coded my own filters rather than use the DaisySP SVF class, like this:

void FiltBankRun(float in)
{
    for (int i = 0; i < BANDS; i++)
    {
        notch[i] = in - damp * bpass[i];
        lpass[i] += freq[i] * bpass[i];
        hpass[i] = notch[i] - lpass[i];
        bpass[i] += freq[i] * hpass[i];
        // 2X oversample - should really interpolate input
        notch[i] = in - damp * bpass[i];
        lpass[i] += freq[i] * bpass[i];
        hpass[i] = notch[i] - lpass[i];
        bpass[i] += freq[i] * hpass[i];
    }
}
(Don’t know why my proper indents disappeared)

I see, you’ve gotten rid of the “drive” parameter, apparently. But this code still computes unnecessary outputs. That would be especially costly if you tried to stack SVFs, since stacking requires something like SVF1: get LP1 and HP1; SVF2: LP1 → LP2; SVF3: HP1 → HP2 - computing unused outputs along the way. An optimized version would be SVF1: LP1, AP1; SVF2: LP1 → LP2; then HP2 = AP1 - LP2.

And yes, I’m using this with Linkwitz-Riley crossover arrangement. But if you only want to split bands for analysis and want to have steep roll-off, maybe you should be using elliptic filters instead?

If I understand correctly, you’re suggesting splitting the whole spectrum with a cascade of LP and HP filters. But a high number of bands requires stacking at least log2(N) filters (plus the allpass compensation, if used for resynthesis), and this adds significant group delay, especially at low frequencies. Wouldn’t it be better to use purely parallel BP filters for analysis, as suggested by donstavely? Moreover, these BP filters have a high Q and therefore good selectivity near the cutoff frequency, which is not the case with 4th-order Linkwitz-Riley crossovers (which split at -6 dB with a slow transition).

If you need better asymptotic behaviour from the BP filter, you can chain it with another BP filter set to a lower Q (to keep the bandwidth at -3 dB), or even a Butterworth HP with a slightly lower cutoff frequency. Maybe a slight resonance could help flatten the lower part of the passband.

Output filtering is another matter; it wouldn’t be difficult, nor significantly more CPU-intensive, to use a totally different, sharper filter better suited to the task.