I’m beginning to prototype my violin-like synthesiser idea, which consists of 4 ‘softpot’ strings, and another small softpot laid over a pressure-sensitive resistor to use as a bow. The bow will read note on/off for any ‘string’ it contacts (probably via 4 tiny plastic barrel-shaped rollers, with their axes of rotation along the ‘strings’, if you can picture that). It will also send speed and pressure data, changes in which could be used for volume, attack, harsh vs. smooth sounds, etc.
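To make the bow mapping concrete, something like this is what I’m imagining (entirely untested; the contact threshold and the volume law are just placeholders):

```cpp
// Untested sketch of the bow mapping idea only: speed from successive
// position readings, pressure gating note on/off and scaling loudness.
// The threshold and the volume law are placeholders.
#include <cmath>

struct BowState
{
    float last_pos = 0.0f; // normalised 0..1 bow softpot reading
    float speed    = 0.0f; // position change per second
};

// Call at a fixed control rate with normalised sensor readings; returns a
// 0..1 loudness, 0 meaning "note off" for whichever string is in contact.
float ProcessBow(BowState &bow, float pos, float pressure, float dt)
{
    bow.speed    = (pos - bow.last_pos) / dt; // signed bow velocity
    bow.last_pos = pos;

    bool  in_contact = pressure > 0.05f;                // note on/off threshold
    float loudness   = pressure * std::fabs(bow.speed); // crude volume law
    return in_contact ? std::fmin(loudness, 1.0f) : 0.0f;
}
```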
I’m also thinking of logging the 6 ADC values and saving the data as either a MIDI file or a .csv, just as a record of what was played, so I can replay my efforts afterwards. The sample rate for saved data could be set very low, maybe 50 Hz or less, with software interpolation before or during playback to smooth the output, which would reduce processor load when logging and playing simultaneously.
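For the logging, I’m picturing something along these lines (the frame layout and the interpolation are only a sketch of the idea, not working code):

```cpp
// Rough sketch of the low-rate log idea: one frame of six readings per tick,
// with linear interpolation between frames at playback time. The frame
// layout and the ~50 Hz figure are just the guesses above, nothing tested.
#include <array>
#include <cstddef>
#include <vector>

struct Frame
{
    float                t;   // seconds since logging started
    std::array<float, 6> adc; // 4 strings + bow position + bow pressure
};

// Value of one channel at an arbitrary playback time, linearly interpolated.
float Interpolate(const std::vector<Frame> &log, float t, int ch)
{
    if(log.empty()) return 0.0f;
    if(t <= log.front().t) return log.front().adc[ch];
    if(t >= log.back().t) return log.back().adc[ch];
    for(std::size_t i = 1; i < log.size(); ++i)
    {
        if(log[i].t >= t)
        {
            const Frame &a    = log[i - 1];
            const Frame &b    = log[i];
            float        frac = (t - a.t) / (b.t - a.t);
            return a.adc[ch] + frac * (b.adc[ch] - a.adc[ch]);
        }
    }
    return log.back().adc[ch];
}
```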
I’m not looking for a fantastic-quality instrument at this stage; something more or less playable would be fine to start with, then refined as necessary. It’s likely to have only one ‘string’ while being developed, for instance.
Is this (naive, uneducated) approach a reasonable one? Would I get on better by using MIDI events etc. and a MIDI library of some sort for this? Or some (acceptably naff-sounding) real-time synthesis? Aesthetically pleasing output isn’t a huge concern; latency and accurate pitch generation probably are. I know very little about audio data processing, so I’m learning as I go.
From my personal experience, trying to get the softpot membrane sensor working with MIDI wasn’t really fun.
I think it’ll be good to keep your goal very minimal/simple and also kinda vague so that you can explore and change direction as you prototype.
So I suggest getting started with connecting the softpot membrane (with pressure sensor strip underneath) to the Daisy and go from there. The sensor can simply be on a desk for now.
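Something along these lines is enough to see both readings arriving over USB serial; the pin numbers are placeholders, and each sensor is assumed to be wired as a simple voltage divider into an ADC-capable pin:

```cpp
// Minimal sketch for reading the softpot (position) and the FSR strip
// (pressure) on the Daisy Seed's ADC and printing them over USB serial.
// Pins 15/16 are placeholders; use whichever ADC pins you actually wire up.
#include "daisy_seed.h"

using namespace daisy;

DaisySeed hw;

int main(void)
{
    hw.Init();
    hw.StartLog(false); // USB serial logging, don't block waiting for a PC

    AdcChannelConfig cfg[2];
    cfg[0].InitSingle(hw.GetPin(15)); // softpot wiper -> position
    cfg[1].InitSingle(hw.GetPin(16)); // FSR divider   -> pressure
    hw.adc.Init(cfg, 2);
    hw.adc.Start();

    while(1)
    {
        float pos      = hw.adc.GetFloat(0); // 0.0 .. 1.0
        float pressure = hw.adc.GetFloat(1);
        // Print as scaled integers to sidestep printf float formatting.
        hw.PrintLine("pos: %d  pressure: %d",
                     (int)(pos * 1000),
                     (int)(pressure * 1000));
        System::Delay(20); // roughly 50 Hz control rate
    }
}
```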
TSF is pretty good if you don’t wanna dabble in your own synth system. I haven’t personally tried it embedded, but it’s a real breeze on PC. The Daisy has plenty of RAM and power for this, and there are a ton of free SF2s out there.
Ta Shabtronic, so that’s avoiding the MIDI route, yes? And an SF2 file is a bit like a patch? I assume if it’s C I can use the Arduino IDE for that? Not used C++ in a long time.
That makes sense, Takumi_Ogata. I must admit it felt like MIDI would be a bit of a 5th wheel for this one, but then my level of understanding here is very basic at this point. The Softpot is currently sellotaped to a plank of wood, so we’re on the same wavelength there…
Yeah, it’s C or C++. You load in your SF2 file (either via stdio or a memory chunk), and ask it to play a note using a patch in the SF2 file. A lot of the original SF2s contain the original GM patch setup, so it can be used just like a GM sound card etc. There are some really excellent SF2s out there that go beyond the 64MB that the Daisy has; I do remember having a few that are in the 100MB range. The largest violin set I have is around 5MB and the largest string set I have is 60MB. Just checked: most will fit in the 64MB, but I have a few that are huge! FireBall SoundFont V1.28s.sf2 is 400MB!!
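On PC the basic flow looks roughly like this (the file name and preset index are placeholders for whatever’s in your SF2; untested as written here):

```cpp
// Bare-bones TSF (TinySoundFont) usage on a PC: load an SF2, pick a preset,
// trigger a note, render audio blocks. "violin.sf2" and preset 0 are
// placeholders; check the presets in your own file first.
#define TSF_IMPLEMENTATION
#include "tsf.h"
#include <stdio.h>

int main(void)
{
    tsf* sf = tsf_load_filename("violin.sf2"); /* or tsf_load_memory() */
    if(!sf)
    {
        printf("couldn't load SF2\n");
        return 1;
    }

    printf("presets: %d\n", tsf_get_presetcount(sf));
    tsf_set_output(sf, TSF_MONO, 48000, 0.0f); /* mono float at 48 kHz */

    tsf_note_on(sf, 0, 69, 1.0f); /* preset 0, A4, full velocity */

    float block[256];
    for(int i = 0; i < 100; i++)         /* render ~0.5 s of audio */
        tsf_render_float(sf, block, 256, 0); /* feed this to your audio output */

    tsf_note_off(sf, 0, 69);
    tsf_close(sf);
    return 0;
}
```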
Thanks for the link; unfortunately I have no idea what any of them are. Can you steer me towards a reasonable violin-like one that’s not too big? In addition to the data logging and sensor reading/processing/audio output, the Daisy also needs to handle the pitch recognition FFT stuff, so I don’t want to stress it out. I mostly play Balkan-type stuff rather than orchestral music, don’t know if that’s an issue; sweet legato sounds are not my thing, I prefer something with a bit of bite to it. Of course that may be totally irrelevant…
Okay, so that’s online, browser-based, non-local. Good option to start with, although it makes a big point of using Dropbox etc.; I take it I can upload SF2s to it from my Linux desktop? I don’t use cloud-based storage. Also, as an aside, how would DaisySP fit into what I’m trying, do you think?
Umm, based on what you’ve said so far, the Daisy is very suitable. Buuuuuut what you’re doing requires a lot of R&D, and doing that directly on an embedded device can be very tricky, or at least make things a lot harder. I personally would develop something like that on the PC with a GPIO board, get it all working how you want, and then port to the Daisy etc.
It would be tempting to do something using Python on the PC, as you said, to test the general principles, but I’d probably have to put up with a lot of latency?
I did something similar years ago using a Picaxe microcontroller and a VB.net script on a Windows PC doing pitch recognition and playback; latency was acceptable, although the only output I could get was a square wave, so it sounded like a Stylophone. Even something as crude as that would be fine to start with.
My attempts at FFT pitch detection for human whistling came out very well, although I haven’t tried anything like that on the Daisy yet. In fact it’s still in its anti-static bag; I’m using an Arduino to test sensors etc.
I’ve not actually tried a PC GPIO board; it was just a thought after many, many embedded projects (Arduino, ESP32, Teensy, Pico, Daisy, etc.).
All that has led me to the fairly obvious conclusion that simply “plumbing” libraries together on an embedded device is relatively easy. Actually developing new algorithms directly on an embedded device is a real pain; the iteration time is painful and sucks the enthusiasm out of me. I don’t know about latency on the PC GPIO boards, but modern PCs are damn quick!
I take it I can use a GPIO board with any software on a PC that’s capable of working directly with ports etc.? So Python would be fine for that? I suppose it must be, as they mention the Raspberry Pi, which uses Python, I think…
I’ve used old-style serial connections to CNC devices on Windows XP before; it all worked very smoothly, but USB plus GPIO is simpler.
I think you may have hit on a really sound idea there, actually.
I don’t know, I don’t have one; I guess they act like some kind of serial port/serial terminal with special instructions, or something like that. I know there are some PC boards out there that use Python to drive them.
I’m having trouble seeing how a PC GPIO interface would help with developing for force-sensing resistors. An FSR would need some A/D conversion.
IMHO, the challenge in this project is the controller stuff; sound generators are easy, whether using synthesis (VCO, VCF, VCA) or samples (such as SoundFont).
I’d start experimenting with the FSR directly on the Daisy, and skip the side trip into PC stuff.
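A bare-bones “does it make a noise” sketch could look something like this, with pitch from the softpot and loudness from the FSR; the pins and the two-octave pitch range are placeholders, and it reuses the same ADC setup as the earlier snippet:

```cpp
// Untested sketch: a DaisySP oscillator with pitch from the softpot and
// amplitude from the FSR. Pins 15/16 and the G3-based pitch range are
// placeholders.
#include <math.h>
#include "daisy_seed.h"
#include "daisysp.h"

using namespace daisy;
using namespace daisysp;

DaisySeed  hw;
Oscillator osc;

void AudioCallback(AudioHandle::InterleavingInputBuffer  in,
                   AudioHandle::InterleavingOutputBuffer out,
                   size_t                                size)
{
    float pos      = hw.adc.GetFloat(0); // softpot position, 0..1
    float pressure = hw.adc.GetFloat(1); // FSR pressure, 0..1

    osc.SetFreq(196.0f * powf(2.0f, pos * 2.0f)); // G3 up two octaves
    osc.SetAmp(pressure);                         // FSR as crude volume

    for(size_t i = 0; i < size; i += 2)
    {
        float s    = osc.Process();
        out[i]     = s; // left
        out[i + 1] = s; // right
    }
}

int main(void)
{
    hw.Init();
    osc.Init(hw.AudioSampleRate());
    osc.SetWaveform(Oscillator::WAVE_SAW); // a bit of "bite"

    AdcChannelConfig cfg[2];
    cfg[0].InitSingle(hw.GetPin(15)); // softpot wiper
    cfg[1].InitSingle(hw.GetPin(16)); // FSR divider
    hw.adc.Init(cfg, 2);
    hw.adc.Start();

    hw.StartAudio(AudioCallback);
    while(1) {}
}
```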
The pitch detection already works fine in Python on the PC, via an Arduino. It’s monophonic (human) whistle detection only, which turns out to be a very easy signal to work with. So that bit seems to be okay, apart from my ropey programming skills, Python’s (probable) slowness, and the Arduino’s latency problems.
The sensors are not proving to be difficult so far (famous last words, I know…); it’s the sort of thing I’ve done before anyway. It’s really just my near-total lack of knowledge about synthesisers that’s the head-scratcher at the moment. A previous iteration of this project had a PWM square-wave output; any improvement over that would be great.
GPIO + ADCs would mean I could run the whole thing in Python, where I’m most comfortable and will probably make the fastest progress. I know Python can be slow, that’s the drawback, but there are a lot of tutorials and plenty of support for Python. Also, there’s no problem with running out of RAM or processor speed on the PC if I find I need more to get acceptable results, although, apart from minimising latency, I’m not bothered about making something that sounds wonderful (yet).
Plus I’m curious; I haven’t done any GPIO stuff since PCs stopped shipping with RS232 ports. And lastly, if I buy a Pi 5 and use it for GPIO, it’s there for me to use as a stand-alone PC replacement later if I need it. Maybe it’s even powerful enough to run all the code for this project, rather than passing it to the PC.
If it works okay in Python, I’ll at least know what I want the Daisy Seed to do, which I don’t at the minute. Then I can go for quality of output, which seems to be the Daisy’s strongest plus point.
Ahh, that part about pitch detection. How does that relate to the ‘violin’ project?
The Daisy could do it all, but an alternative might be a Pi Pico as a USB MIDI controller, using MicroPython, driving sound generation wherever you like: PC, Raspberry Pi, Bela, Daisy…
I maintain that sound generation is (relatively) easily handled, and the first step for the violin-like instrument is getting the sensors to produce usable control signals.
The pitch detection is of me whistling a tune, and involves an FFT plus code to decide which RGB addressable LED(s) to light up on the fingerboard, showing me in real time where to put my fingers; it’s an attempt to make learning to play by ear easier. I’ve done this idea before, but it was a cobble-up, and technology has moved on since. Latency with that was about 150 ms, not quite good enough, but not unusable, as it turned out. The LED side of things was very delicate and the FFT a shameful bodge, but it worked.
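The LED-choosing part is really just nearest-note maths, roughly like this, where the open-string note and LED count are placeholders for one string (the FFT peak-picking is assumed to have already produced a frequency):

```cpp
// Only the "which LED for this pitch" step; the FFT is assumed to have
// already produced a frequency in Hz. The open-string note and LED count
// are placeholders for a single string.
#include <cmath>

// Nearest MIDI note number for a detected frequency (A4 = 440 Hz = note 69).
int FreqToMidi(float freq_hz)
{
    return (int)std::lround(69.0f + 12.0f * std::log2(freq_hz / 440.0f));
}

// LED index along one string, one LED per semitone above the open string;
// returns -1 if the note falls outside this string's range.
int LedForNote(int midi_note, int open_string_note /* e.g. 55 = G3 */, int led_count)
{
    int semitones_up = midi_note - open_string_note;
    return (semitones_up >= 0 && semitones_up < led_count) ? semitones_up : -1;
}
```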
The softpot for the fingerboard works fine as is, using an Arduino sending data to Python on the PC, which outputs a tone. It’s a bit crude (badly written code, laggy), but okay as a proof of concept. I’d demo it if I could, but I’m not going home until the house rewiring’s finished, so it’s all elsewhere at the mo.