Considers the effects of compression channels in a hearing aid algorithm
UNL would like to test human subjects with binaural WDRC, comparing 4 channels vs. 16 channels (per side).
Questions to explore
Is the best starting place: WDRC_XBand_PrescripSave_wApp?
Can the app above save two settings and load one or the other? I believe it can, but wanted to confirm.
Can the Rev-E do binaural? How many channels would it be able to process (if AFC and mic directionality are not needed)?
Oh, they’re needing the high performance! I love it!
The collection of features proposed here pushes beyond the current example programs on several fronts simultaneously: more channels + stereo + App + prescription saving. So, I'm not quite sure where is best to begin.
Pushing any one of these features past the current example sketches is pretty straightforward, but getting all four at once will be a lot more work. Very exciting! But this needs a bit of consideration.
To answer the question of speed, one approach would be to run the example program that you pointed to. How much CPU does it use? (It is reported in the App, or in the Serial Monitor.) And how many channels does that example use? Scaling the number of channels by the CPU% should let one estimate how many channels are possible. You can probably go up to 85-90% of CPU.
@AMPLab Can you confirm how many channels you are hoping to implement (per ear)?
Do you currently have any Profiling Framework you use?
I built one based on this thread:
It gives an approximation to guesstimate HEADROOM before making changes.
It also points out places for code optimization.
Just a thought.
Because the Tympan audio library is built in the same way as the Teensy audio library, one can actually use the profiling that is built into the library.
As an example, try the sketch 01-Basic\PrintCPUandMemoryUsage. Every 3000 milliseconds (i.e., every 3 seconds), it writes the current CPU% (and memory usage) to the Serial Monitor. This gives the overall CPU usage summed across all of the audio classes.
It is also possible to ask each audio class what its CPU usage is for just that one signal-processing step. For example, in the example sketch linked above, you might want to know how much CPU is being consumed by just the gain1 module, separately from the overall total. In that case, you might write something like:
```cpp
int n_cycles = gain1.cpu_cycles;                               //ask for how many cpu cycles for this module
float cpu_percent = audio_settings.cpu_load_percent(n_cycles); //convert to percent
```
Note that, for a given algorithm element, both the sample rate and the audio block size will affect the CPU%.
- Audio Block Length: Every time you process a block of audio, there is some overhead that occurs regardless of the number of samples. So, longer blocks spread that overhead penalty across more audio samples, resulting in a lower CPU%.
- Sample Rate: The calculations themselves don't care what the sample rate is. But the amount of time available to do the calculations goes down when the samples are acquired more quickly. So, lower sample rates result in a lower CPU%, whereas higher sample rates result in a higher CPU%.
Now, to actually try to answer the question: How many channels of compression can the RevE support? …
Below are the results from my own benchmarking. I tested a Tympan RevE (600 MHz) using a pair of Tympan digital earpieces (mixing the microphones together), running at a sample rate of 24kHz and running with a short block size of 16 samples. Given these operating parameters:
- 1.6% of CPU is consumed mixing the earpiece’s audio streams and feeding data to the hardware
- 1.4% of CPU is consumed per WDRC channel (IIR filter+compressor)
- 7.9% of CPU is consumed if you run the adaptive feedback cancelation (AFC) on one ear
So, these results suggest that we could do two ears, 12 WDRC channels per ear, and feedback cancelation for each ear and still only consume about 52% of CPU. Great! Is it too good to be true? We’ll have to build it to find out!
Given that it looks like we have surplus CPU, we could increase the sample rate to 32 kHz. Keeping the 16-sample block size, I estimate that increasing the sample rate to 32 kHz would raise the CPU consumption to about 67%. So, still very doable.
(Note that feedback cancelation is very computationally intensive. So, if you don’t want the AFC, you’ve got tons of spare CPU.)
Overall, it looks like we’ve got enough horsepower to do a lot of WDRC compression channels. If this really proves to be true, this is gonna be fun!
As always it’s best to RTFM first.
There are many gems, such as the extant profiling, as you point out.
If there were a reasonably complete manual, then RTFM might apply. In this case, how were you supposed to know?!? You're supposed to go to the forum and ask.
…which is what you did. Perfect!
We’ve made good progress on our end to create a decent example that enables the higher channel count that you folks are looking for. My re-working of the code also makes it easier to go stereo, so that’s good.
Hopefully, we'll have a new release demonstrating these new capabilities by the end of the week.
You folks want to use the earpieces, too, right? If so, I’ll graft that into the example, too.
A big thank you for helping us get this project off the ground! Most of the lab working on this project has been out on vacation for the last week, so we are just now coming back and planning to get started soon.
@erk1313, to answer your question, I can confirm that we will be testing 4 and 16 channels.
@chipaudette Yes, we would like to use the earpieces. I believe Eric has already sent the earpiece shield.
Joslyn (AMPLab RA)
We’ve updated examples for the earpieces, which have been tested with Tympan Rev-E:
Make sure to download and replace the entire Tympan library.
We are also working on multichannel, stereo compression that can save prescriptions to the SD card. It allows real-time changes to the number of compression channels used. Please stay tuned!