2 unison oscillators with detune, waveform selection, and PWM
Sub-oscillator
Filter and amplitude envelopes
LP filter cutoff, resonance, and envelope amount
Overdrive
MIDI Device support
Program Changes for presets
MIDI CC for parameter control
Coming from Teensy, I really like the Daisy ecosystem. There was a bit of a learning curve, but the tutorial helped out a ton. Thanks, enjoy the project, and I look forward to seeing what everyone else is working on!
Cheers!
Nick C. | Moby Pixel
AI Transparency: This project was made with assistance from Claude Code.
Thanks for checking it out, Takumi! I love your videos! Yeah, the USB_MIDI and OSC examples really helped it come together quickly. I’m looking forward to testing out some other projects with it like my buddy @Charles TapeScam.
I did include the libraries in the repo since I couldn’t get the submodule to pull them in. Is there a guide for that?
And unfortunately I did have an unpleasant encounter with a community member for not initially disclosing that I used Claude Code assistance. I have CC running in terminal all the time, and you can just type “commit” and it’ll push your changes under “Claude Code”. I guess the culture of open-source and AI coding are a bit in flux at the moment. Lesson learned, but it’s a bummer when you launch something, for others, and are immediately met with ad hominem attacks.
I’ll reply again here: it was not an ad hominem attack, it was an analysis of your project.
If you publish publicly you should expect public scrutiny.
I’m glad that you added the notice on the project readme, as it’s otherwise not clear to what extent this was your own work.
ps; the demo sounds awesome!
pps; git submodules are not that hard and I wonder why you had such struggles with it. Just look at all the examples and other projects that do this. [Edit: I’ve opened a PR]
Word. Glad to put that mess behind us. We’re all in this together.
I discussed the learning curve in the video, along with other topics like the Teensy drama. Check it out! On iOS we use Swift Package Manager, and in Arduino, local libraries. This is my first time using git submodules. Everything seems intuitive once you know how to do it.
Hi, MobyPixel! Perhaps you should try building your DIY Minimoog on the Daisy Seed platform (I recommended it to you in the comments on YouTube). All 19 potentiometers can be connected through three CD4051 multiplexers. You can also connect an encoder and an OLED display. I’m currently developing a synthesizer with a similar layout, so I can give you some suggestions.
Hey stone_voices! Good to see you. Your suggestion is what convinced me to pick up a Daisy so thanks for that.
I initially tried making this project polyphonic, but I couldn’t achieve 6 voices with 3 oscillators each plus 2 envelopes and a filter per voice. I used Daisy’s built-in synth classes with bandlimited waveforms, which typically use more CPU (at least on Teensy). I’ve seen other Daisy projects with 6-voice poly, but with a bare-bones implementation using the internal libraries I hit a max of 4 or 5 voices (each with the additional oscillators, envelopes, and filters). So if I ported my MiniMoog project I’d need to use another synth library, or just make it mono like the original MiniMoog.
I think next I’d like to explore using it as an effects pedal like some of your projects. I look forward to seeing how your synth project goes!
I have come across some polyphonic synthesizers that are built on the basis of Daisy Seed microcontrollers, for example, this one.
I also want to note that there really is a problem with Daisy’s computing power, since I simply cannot port some of my VST plugins to this platform. One option that partially solves this problem is to connect two or more microcontrollers in series via their audio connections.
I’m going to port my Marazmator VSTi synthesizer to Daisy. But this is a specialized synthesizer designed to produce ambient sound textures, and I’m planning only two voices, which would more accurately be called layers. That’s quite enough for creating ambient compositions, which you can listen to in this example.
And it’s also really cool to have the synthesizer as a physical box for live sets, instead of a VSTi synthesizer.
You can increase the block size so you have more CPU time to calculate all the voices.
There will be higher latency, but if you’re not doing audio effects that’s a decent compromise.
Thanks for the suggestions @dreamer! I tried that before, and did notice the latency, but that’s good to keep in mind. It didn’t seem to help with the voice count much though.
I think changing the waveforms to the non-bandlimited versions would be an easy win for cutting down on CPU, but you do sacrifice quality. I’m happy keeping this mono with plenty of headroom in case I want to add LFOs or effects later. A paraphonic architecture would cut down on CPU too.
Yes, that will improve the situation a bit. However, you should also optimize the code: make as few function calls as possible, or mark them inline where you can. I would also avoid the DSP classes from the Daisy libraries, as in my opinion they are suboptimal and sometimes do unnecessary work. Ideally, each voice should be an object of the same class, with all its envelopes, oscillators, and other useful things calculated only within that class, and even better, within a single Process function.