Daisy vs FV-1

Hi. As a guitar pedals hobbyist, I was thinking of starting to play around with DSPs, particularly the FV-1 until I came across Daisy.

How would you compare Daisy to the FV-1 in terms of possibilities for guitar pedals (anything from modulation to overdrive/distortion, or crazier stuff like MIDI conversion)?

Thanks in advance!

The FV-1 is ancient hardware that has overstayed its welcome. Compared to the Daisy, it is limited in processing power, available I/O, and ease of use.


Thanks a lot. I’m going to order one! Looking forward to it!

The Daisy is more effort to program than the FV-1, and doesn’t come with all the Alesis knowledge, but is more capable if you have the ideas for new sounds.

The FV-1's PCB footprint is much smaller (and cheaper), even with the additional components like the EEPROM and crystal. It is somewhat limited (128 instructions per sample, 3 pot inputs, stereo in/out, no GPIO) but is easy to set up and very efficient for effects that don't require complex logic. The instruction set is well designed. But in terms of processing power and versatility, the Daisy Seed is very far ahead.


Sorry for the necropost, but I own many FV-1-based modules and have observed the audio quality to be far worse than it should be. The Daisy is definitely superior in audio processing.

I have used the FV-1 for years, including algorithms used in some commercial guitar pedals. The tools are easy to use despite the unfamiliar instruction set. Download the assembler, paste in an example program, flash it, and it runs.
I am now trying to upgrade to a “big-boy” processor. I just ordered a Daisy Pod and am trying to get started with tools, documentation, and examples. Despite having done other assembly and bare-metal C programming, I have found the process of getting started extremely frustrating. Right out of the chute, WTF is a Git, clang-format, folder-compliance? I look at some of the example code and notice two things: first, there are rarely any comments at all. Is this modern programming practice, especially for “examples”? Second, they are often only a handful of lines long, none of which mean anything without the context of all the other code that is running before, during, and after the code shown.
So how do I get a “mental model” of the Daisy system, so I can make progress? I really want to be able to write real code for real audio algorithms, not just patch together other people’s abstractions. Can a 67-year-old retired EE make this leap from the FV-1?


Yes you can! :slight_smile: And welcome to the forum!

But I understand your frustration. There is a lack of overview documentation explaining the programming paradigm of the Daisy. It helped me that I had already worked with the Mozzi library for Arduino and studied a few other things related to audio programming. They are all based on the same model: the audio callback.

I plan to write a couple of programming tutorials for beginners to Daisy programming, just need to get a couple of other things done first.

Meanwhile, check out @andrewikenberry’s video presentation: https://forum.electro-smith.com/t/daisy-session-at-the-programmable-audio-workshop/897/2?u=staffanmelin

Thank you for replying and for the words of encouragement, Staffan. The video helped a little, but it also confirmed how much new stuff I have to learn, not having a modern CS background. The sentence “Clone the GitHub repo” contains three words that mean absolutely nothing to me.

So I need to learn about GitHub, Slack, and the VS Code IDE; then I need to learn C++ so I understand classes, methods, namespaces, templates, the standard library, etc.; then I need to learn this “audio callback” framework; and then the specific Daisy libraries. This is an order of magnitude more than the FV-1 learning curve.

I actually bought an STM32F4 Discovery board a couple years ago and spent a few frustrating weeks trying to get to where I could write a line of code. I spent days making zero progress because I had spaces in directory names instead of CamelCase. Silly me, thinking that I didn’t need to use some old Unix convention in a modern IDE on Windows!

Electro-Smith has a good thing going with the Daisy platform, and I believe they are honestly trying to make it approachable. It could be the perfect successor to the FV-1 for literally hundreds of small developers and DIYers. But if it takes a CS degree just to understand the environment, it will only appeal to the intersection of computer scientists and audio electronics fans.

I will soldier on when my Pod arrives. But if I end up in “IT hell”, and it continues to feel like real work, then it will go on the shelf next to the STM board.

Thanks again, Staffan. And I hope someone at E-S is listening.


Don’t give up! There is new stuff to learn, but I think the solution is to learn one thing at a time while still doing useful stuff: small bits and pieces that make a sound, or something.

I still remember my first steps, and everything I have learnt since. I want to share that. You don’t have to learn it all to be able to use it. And forget about Slack, the VS Code IDE, namespaces, etc. You don’t need any of that to make a synth. I write my programs in a text editor, and just type “make” and “make program-dfu” to build and upload them.

I just have to finish some things; then, in maybe 1–2 weeks’ time, I will have the first tutorial finished. BTW, what platform are you on? GNU/Linux?

And have you tried to upload a program using the Web programmer? Home · electro-smith/DaisyWiki Wiki · GitHub

Thanks for your feedback. We are always working to improve the accessibility of Daisy and ease of getting started.

Have you considered using Arduino instead of C++?

That might simplify the getting started process for you substantially.

Thanks, Andrew. I hadn’t gotten into Arduino, thinking it was for beginners just trying to blink lights and run motors on cheap little microcontrollers. It does seem like they have made the setup a lot easier than the ARM C++ universe. The audio support (hardware, libraries, examples, and documentation) seems pretty weak, though.

If I could make one request that would be hugely helpful: we need a tutorial that explains the callback framework. This seems to be the bedrock that all Daisy apps build upon. I am not talking about the function documentation or a code example on GitHub. I am talking about a real English tutorial that explains what is going on from analog into the codec, through the drivers and libraries, to the user app, and then back to analog out of the codec. A simple example program like a tremolo would be perfect. Then you can explain how slower control functions like an LFO work within the framework of the real-time audio processing.

I think it would be really instructive for E-S folks to go back and look at the Spin FV-1 website:
Spin Semiconductor - Knowledge Base
You have to admit that they did a masterful job. They taught an army of folks how to build effects on their hardware, despite the primitive tools and bizarre instruction set. And no CS degree required! :nerd_face:


I agree with this idea. As a programmer by trade who had never done C++ or hardware development, I found that entire paradigm hard to wrap my head around for a while. I am still a little foggy on what black magic happens behind the scenes, from my code to the processor to the output, but I now better understand how the callback works. It would be really helpful for people new to DSP and Daisy to see how the entire process happens.


We are working hard to make Daisy as accessible as possible.

I’ll echo @andrewikenberry in that Arduino will be a little more beginner-friendly from a code perspective. The setup is also a bit simpler. We have brought over the same callback-based audio engine from our C++ library to Arduino through the DaisyDuino library.

Regarding a few other points you brought up:

Git is a version control system used to host and maintain the code base. It makes for an easy way to download and stay up to date with changes to the library code. It is, in itself, a very useful tool, but you don’t need to know it inside and out to work with Daisy.

clang-format is not needed unless you’re trying to contribute back to the libraries; it is just a code-formatting tool (often built into the IDE).

Slack is essentially a chat room centered around the Daisy platform. There are a lot of people active there, including myself, who are happy to help answer specific questions as you come across them.
This is a link to join our Slack; feel free to chime in with any questions in the general/software channels :slight_smile:

As @StaffanMelin mentioned you don’t need to learn everything about all of the tools at once. Only what you need to achieve what you’re trying to do at that time. That said, a basic understanding of C/C++ is helpful for understanding the syntax.

All of that said, we are in the process of wrapping up a How To video that goes over installing all of the necessary tools to get started with C++ and VS Code. That should be up in the next week or so.

We will be working on many more videos to follow, including learning-based tutorials that go over how the Daisy programming model works from the user perspective. In addition, we may do some breakdown videos that go in depth with what specific examples are doing, and more technical videos of what’s happening inside libDaisy to make everything work. So we hear you loud and clear, and are working hard to provide the resources to make this stuff more accessible.

Thanks again for your feedback!


I appreciate your responses and am looking forward to seeing what you come up with. Some videos would be great.

I am not trying to be an a**hole, but I want to drive home the point. Here is my situation as I type: I am waiting for my pod to arrive so I can get started. Meanwhile, I have been a) browsing around trying to learn about the Daisy ecosystem, b) watching some online C++ tutorials, c) googling keywords like “audio callback”, and d) trying to read example Daisy code.

Let me describe d), because my tutorial says that the best way to learn C++ is to look at code. So I go to GitHub/Electro-Smith, and I see the DaisyExamples repo. Great. I hit this and see what appears to be a list of recent file changes. Luckily, I scroll down and see the README, which looks helpful. But it says:

Getting the Source
First off, there are a few ways to clone and initialize the repo (with its submodules).
You can do either of the following:
git clone --recursive https://github.com/electro-smith/DaisyExamples
git clone https://github.com/electro-smith/DaisyExamples
git submodule update --init

Dang. So I can’t even read the source code to blink an LED without “cloning the GitHub repo”? Either I am an idiot, or I am missing something very simple, or you all are expecting way too much from folks who do not live in your universe. Do I really need to access a massive revision-control system and install the entire toolchain just to read a source file?

Sorry for the rant. I know it will get better. Thanks for listening.

I guess I was just missing something very simple. I needed to open the folder for the particular hardware to find the example folders, open one, then the .cpp file. Baby steps.


Ah, that’s correct.

We’ve been using git/github for quite a while here. So it’s easy to forget what it may look like to someone new to that environment.

As you’ve noticed, the code for each example is browsable within the folders of each platform’s name.

The Blink Example for the Daisy Seed does have a fair amount of comments to describe the various keywords and what each function is doing.

It does not use the Audio callback since it is simply blinking a light.

Nearly all of the other examples use an audio callback to fill buffers of audio. The number of samples in the buffer defaults to 48, but can be set to other sizes as well. The size of these buffers determines how frequently the callback function gets called relative to the audio sample rate. At the default 48 kHz sample rate and 48-sample buffer size, the audio callback is called once every millisecond, and it presents an input and an output buffer to the user that can be manipulated before the data gets clocked out to the analog codec.

There are two options for the audio callback (you may see each in different examples): interleaved, and non-interleaved. You can tell by seeing whether there are one or two asterisks (*) prepended to the input/output arguments.

The various examples should also illustrate how to loop through the audio samples for processing or synthesis. The seed/Tremolo example, which lacks comments but is fairly concise, might be a good one to look at for working on individual samples of audio.

We’ll do a little bit more of a deep dive of how this stuff all works in a different forum post, or video tutorial in the coming weeks.

FWIW, there is also a download button at the top of the GitHub repo, labeled “Code” in green, that will let you download the source files as a ZIP file. However, this does not include the libraries (DaisySP and libDaisy) that are required to compile the examples, which is why the tutorials require using git to set everything up.


@donstavely I’m in the EXACT same spot you are. I’ve been programming the FV-1 for many years now (sweetalk in the forum) and am trying to make the jump to a bigger DSP. I know it isn’t going to be super easy, but the documentation and info on the Daisy left me disappointed. I hope new info will come to make the first steps easier.

Hello again sweetalk/@toyosm. I remember that you were an even more frequent poster in the forum than I was. Weird to be such rubes here, eh? If you have read my other posts here you know I have been pretty obnoxious in my feedback as I stumble through the startup process. A solid week into it, and I am about where I was on hour two with my FV-1 dev board back in 2010! BTW, I am also now at hour 9 of a 10-hour online C++ tutorial.

And thank you @shensley for the explanation of the callback process. Those two paragraphs about the callback process were more informative than hours of looking at both example code and at the DaisySP documentation.

Dang, I can’t help myself, but I have to comment on this last statement. First, about learning from looking at code: Apparently, C++ programmers don’t believe in commenting their code. They are taught that well-designed code with good naming conventions is pretty much self-documenting. In other words, don’t say again in English what your code already says clearly in C++. This may very well be true if a) it truly is well-designed, b) you are absolutely fluent in C++ (where one colon, semicolon or cap can completely change the meaning of the whole thing), c) you have a prior understanding of the whole system framework, and d) you have prior working knowledge of the libraries the code is using.

Second, about documentation: since the reference docs for the libDaisy and DaisySP libraries are auto-generated from the sparsely commented source code, they are not descriptive at all. Very complex classes or functions may have a single short sentence of English description. I trust that they are great references for checking syntax, parameters, etc. once one is fluent and knowledgeable, but they suck as explanatory documents to help one learn the Daisy ecosystem.

Sorry to E-S and the forum for continuing to be a PITA. As I said, I can’t help myself. :zipper_mouth_face:

I’m a retired C/C++ (and some scripting in Perl, shell, PHP, HTML) SW engineer. When writing code for work, I would comment the algorithmic details or tricky idiomatic usage, but I didn’t add comments to explain the language to somebody unfamiliar.

Now, the Daisy stuff is slightly different, as its target audience includes many more beginners. Maybe libdaisy and the examples should have more comments.

My opinion is, Daisy isn’t appropriate for C/C++ beginners OR DSP beginners OR microcontroller beginners.