Hi,
I was looking to understand whether it is possible to send control data over the SAI bus along with the audio data. If it is, is there a sample?
Thanks,
Brett
No, SAI only transfers data and deals with clocks/synchronization. Smarter codecs are typically configured with I2C.
Thank you for the info.
When looking at I2S, it looked like it had space/the ability to transport both data and audio. Thus, I was wondering if the SAI did as well.
The reason I was looking at this possibility is that I wanted to link up to 4 Daisys together to move audio across the bus, and also wanted to see if I could move some data along with it at the same time.
Thanks again for the help.
Brett
Not sure what you’re talking about, because I2S doesn’t have that capability either. All it sends is audio data, clock and L/R channel signal. SAI only adds the ability to use non-I2S protocols for audio (PCM/DSP, TDM and S/PDIF).
Now I’m curious, why do you need to process audio on 4 microcontrollers serially?
Hello @antisvin,
On the sending data via I2S, I guess I must have misunderstood and/or my friend who mentioned this must have created a custom protocol to put data in line with the audio. I’ll look at another way to send data.
Why 4 devices?
Great question: In short I am creating a device that will be capable of being chained together and having the ability to do processing across the chained devices would allow for more unique routing and, frankly, just more options for audio processing.
It is not core to the requirement of the system to process audio across all 4 devices. But, the device will need to send minimal data between them to keep in sync and know when commands happen on the different parts.
Originally I was just looking at MIDI to move the commands between the devices, but looking closer at the fact that I have the SAI bus open, I thought that may be an interesting way to extend the functionality of the device.
Mostly just a fun exploration, as I am closing in on how I may wish to complete the comms between the devices while I finish my second hardware iteration.
I have at least two more revs of the hardware before it settles out to what I believe will be the V1 product. So that implies likely four more board designs.
Overall, thank you for the feedback and help.
Once I get to the point where I have a viable prototype, I’ll share more with the group here. I am excited about creating a new device based on the Daisy system; the more I use it, and the more work the team puts into the ecosystem, the better I feel about using this system.
Looking forward to sharing more soon.
Thanks,
Brett
You can’t send data instead of audio to an audio codec, because it would look like garbage to it. And if you’re sending data to another device, it makes more sense to use something like SPI to get more bandwidth.
MIDI itself is slow, but you can send MIDI data over UART with higher clock or use I2C/SPI.
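As a rough illustration of the "MIDI over UART with a higher clock" idea: DIN MIDI is fixed at 31250 baud, but between two microcontrollers nothing stops you from running the same byte stream much faster. A minimal STM32Cube HAL sketch, where the instance, pins, and 1 Mbaud rate are all assumptions to adapt:

```c
#include "stm32h7xx_hal.h"

UART_HandleTypeDef huart;

/* Carry MIDI bytes over UART at 1 Mbaud instead of MIDI's 31250 baud.
 * Untested sketch; clock/GPIO setup omitted. */
void uart_midi_init(void)
{
    huart.Instance          = USART1;           /* assumed instance */
    huart.Init.BaudRate     = 1000000;          /* vs. 31250 for DIN MIDI */
    huart.Init.WordLength   = UART_WORDLENGTH_8B;
    huart.Init.StopBits     = UART_STOPBITS_1;
    huart.Init.Parity       = UART_PARITY_NONE;
    huart.Init.Mode         = UART_MODE_TX_RX;
    huart.Init.HwFlowCtl    = UART_HWCONTROL_NONE;
    huart.Init.OverSampling = UART_OVERSAMPLING_16;
    HAL_UART_Init(&huart);
}
```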
We have something similar as an unreleased feature for the OWL firmware. It’s based on UART connections making a P2P ring bus between multiple devices. The challenges are mostly in software, i.e. the devices have to discover their neighbors and be able to reconnect if one of them reboots. Then they can exchange MIDI, set parameters on another device, exchange data, etc.
Another approach to this is the I2C bus as used by Monome Teletype and other hardware. However, it requires one device to act as the master.
Thank you for all of the info.
The idea I really liked about being able to use the SAI bus is that it would also manage all of the timing. My thought was that I would use the SAI between the Daisy devices and then I would “decode” the channels. Now I can “pick off” the audio or the “data” channel and put the info to use… Plus… have everything in tight sync because of the SAI system. Anyway… clearly not a way to go today, but just something I was thinking about.
As for the SPI or I2C, well I’ll likely use SPI at this point.
My goal was to go for a “simple” setup where they could just plug the devices together. But maybe I’ll just let them set up the device via MIDI/SD card/USB and call it good enough for rev 1 of the hardware. The ability to chain the devices together is something I wanted to design in now, so the hardware can support it going forward and I have it in place for a later firmware update.
Overall, the more I use this little device, the more I am enjoying the project.
Thank you to all of you who make the device, the drivers, and do all of the support. You all do a great job!
Thanks,
Brett
Question for you: can I use SAI to go from Daisy device to Daisy device? If so, what I am thinking about this morning is that I could set up the number of channels to use in the audio loop and send that data over to the other Daisy device. In this case, I could use “audio channel zero” as my data channel. The rest of the channels could be the audio.
Maybe this is not possible, or not worth it, but I thought I’d complete more of the thought process that keeps picking at the back of my mind.
Thanks,
Brett
The SAI peripheral doesn’t know or care whether you have a codec or a microcontroller on the other end. It’s a matter of setting up the master/slave clock configuration correctly. And you can use SAI in TDM mode to get extra channels. It would be slower than I2C or SPI, though. I don’t think there’s an example of using libDaisy for this, so you may have to look at the reference manual or STM32CubeH7 examples to understand how to use it.
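For a rough idea of what such a setup involves at the STM32Cube HAL level (not libDaisy), a 4-slot, 32-bit TDM master-transmit configuration might look like the sketch below. Every value here is an assumption to adapt; the receiving Daisy would mirror it as a slave-RX.

```c
#include "stm32h7xx_hal.h"

SAI_HandleTypeDef hsai;

/* 4-slot, 32-bit TDM master-TX on SAI1 block A. Slot 0 could carry
 * control data, slots 1-3 audio. Untested sketch; clock/GPIO/DMA
 * setup omitted. */
void sai_tdm_init(void)
{
    hsai.Instance            = SAI1_Block_A;
    hsai.Init.AudioMode      = SAI_MODEMASTER_TX;
    hsai.Init.Synchro        = SAI_ASYNCHRONOUS;
    hsai.Init.OutputDrive    = SAI_OUTPUTDRIVE_ENABLE;
    hsai.Init.NoDivider      = SAI_MASTERDIVIDER_ENABLE;
    hsai.Init.FIFOThreshold  = SAI_FIFOTHRESHOLD_1QF;
    hsai.Init.AudioFrequency = SAI_AUDIO_FREQUENCY_48K;
    hsai.Init.Protocol       = SAI_FREE_PROTOCOL;
    hsai.Init.DataSize       = SAI_DATASIZE_32;
    hsai.Init.FirstBit       = SAI_FIRSTBIT_MSB;
    hsai.Init.ClockStrobing  = SAI_CLOCKSTROBING_FALLINGEDGE;

    hsai.FrameInit.FrameLength       = 128;  /* 4 slots x 32 bits */
    hsai.FrameInit.ActiveFrameLength = 1;
    hsai.FrameInit.FSDefinition      = SAI_FS_STARTFRAME;
    hsai.FrameInit.FSPolarity        = SAI_FS_ACTIVE_HIGH;
    hsai.FrameInit.FSOffset          = SAI_FS_FIRSTBIT;

    hsai.SlotInit.FirstBitOffset = 0;
    hsai.SlotInit.SlotSize       = SAI_SLOTSIZE_32B;
    hsai.SlotInit.SlotNumber     = 4;
    hsai.SlotInit.SlotActive     = SAI_SLOTACTIVE_ALL;

    HAL_SAI_Init(&hsai);
}
```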
Note that there’s just one SAI peripheral exposed on pins, so you’d have to make a ring bus for all connected devices.
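If you did go the TDM route with one slot reserved for data, picking that slot back out of the received frames is straightforward. A minimal sketch, assuming an interleaved receive buffer and a 4-slot frame with slot 0 as the data channel (both assumptions):

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

#define SLOTS_PER_FRAME 4  /* assumed layout: slot 0 = data, slots 1..3 = audio */

/* Split an interleaved TDM receive buffer into a data stream (slot 0)
 * and audio streams (slots 1..SLOTS_PER_FRAME-1, still interleaved). */
void demux_tdm(const int32_t *rx, size_t frames,
               int32_t *data_out,   /* frames words */
               int32_t *audio_out)  /* frames * (SLOTS_PER_FRAME - 1) words */
{
    for (size_t f = 0; f < frames; f++) {
        const int32_t *frame = &rx[f * SLOTS_PER_FRAME];
        data_out[f] = frame[0];                     /* slot 0: control/data word */
        for (int s = 1; s < SLOTS_PER_FRAME; s++)   /* remaining slots: audio */
            audio_out[f * (SLOTS_PER_FRAME - 1) + (s - 1)] = frame[s];
    }
}
```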
Well, I guess this is a very late reply, but there’s nothing stopping you from sending audio and control signals via SPI (150 Mbits/s) or even via USART (12.5 Mbits/s). Both have 8x8 FIFO buffers (DMA and interrupts too, IIRC).
You could possibly even use IrDA.
I2S is a subset of SPI functionality (as well as being implementable as a subset of SAI), and is commonly used for audio because it’s both a very old protocol (1986) and easy to implement, so is a lowest commonly denominator that most ADC/DAC/etc support.
It lacks versatility, being only PCM-encoded stereo, has no error checking, and is not inherently suited for external connectivity. It is in many ways the .WAV of digital audio.
That said, it is well designed for doing what it does, and in the STM32-H7 has a 16x8 (8x16) FIFO buffer (8 words rather than 8 bytes).
8P8C connectors (CAT5) would probably be the easiest way to connect whatever protocol you use, but keep the cables as short as possible.
As for including control messages within any digital audio stream, there’s nothing stopping you. If you sacrifice 1 bit per sample you can use it to indicate a control message; check Wikipedia for how UTF-8 marks continuation bytes for a similar idea. Though you would be unable to send audio and control signals at the same time unless you did something super-tricky like how Dolby used to encode surround sound within a stereo stream. Perhaps you could use 32 bits (if I2S supports this) with 8 bits for control messages, or only use 16 bits for audio data.
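The "32 bits with 8 bits for control" idea can be sketched as simple bit packing: put a 24-bit audio sample in the top of each 32-bit word and a control byte in the bottom. This layout is an assumption; whether the device on the other end tolerates it is up to you.

```c
#include <stdint.h>
#include <assert.h>

/* One 32-bit slot: top 24 bits = signed audio sample, low 8 bits = control. */
uint32_t pack_slot(int32_t audio24, uint8_t ctrl)
{
    return ((uint32_t)(audio24 & 0xFFFFFF) << 8) | ctrl;
}

int32_t unpack_audio(uint32_t slot)
{
    int32_t a = (int32_t)(slot >> 8) & 0xFFFFFF;
    if (a & 0x800000)          /* sign-extend the 24-bit sample */
        a |= ~0xFFFFFF;
    return a;
}

uint8_t unpack_ctrl(uint32_t slot)
{
    return (uint8_t)(slot & 0xFF);
}
```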
Reference manual for the MCU: https://www.st.com/resource/en/reference_manual/rm0433-stm32h742-stm32h743753-and-stm32h750-value-line-advanced-armbased-32bit-mcus-stmicroelectronics.pdf