Am I correct in the following assumptions?
- In a 16-bit WAV file the sample range is -2^15 … 2^15-1 (i.e. -32768 … 32767)
- In Hothouse / Daisy Seed the normal range for floats is -1.0 … 1.0
- If I develop with WAV-file tests, I should use the conversion -2^14 → -1.0 and 2^14 → 1.0 (i.e. scale by 1/2^14)
- Then the levels will be approximately the same when I run it on the hardware, i.e. a fuzz box clipping at ±2^13 in int16_t will behave approximately the same as a fuzz box clipping at ±0.5 in float
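To make the question concrete, here is a minimal sketch of the conversion I have in mind. It assumes the 2^14-as-full-scale mapping proposed above (not the more common 2^15 convention), and the function names are just placeholders:

```cpp
#include <cstdint>
#include <cmath>

// Assumption from the question: int16 value 2^14 = 16384 maps to 1.0f.
// (The common convention is 2^15 = 32768; using 2^14 leaves headroom.)
constexpr float kScale = 16384.0f;

// WAV int16 sample -> normalized float, under the proposed mapping.
float Int16ToFloat(int16_t s)
{
    return static_cast<float>(s) / kScale;
}

// Normalized float -> WAV int16 sample, clamped to the int16 range
// (floats above 2.0 would otherwise overflow with this scale).
int16_t FloatToInt16(float x)
{
    float v = x * kScale;
    if (v > 32767.0f)  v = 32767.0f;
    if (v < -32768.0f) v = -32768.0f;
    return static_cast<int16_t>(lrintf(v));
}
```

Under this mapping a clipping threshold of ±2^13 = ±8192 in the int16 domain corresponds exactly to ±0.5 in the float domain, since 8192 / 16384 = 0.5.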