A project using FFT/IFFT for directional (centre) filtering, and a question about Cortex-M7 CMSIS

Yep, loving the FFT/IFFT functions built into NoiseReduction_FD, which I’ve copied and extended to compare frequency buckets between the left and right ears. This simple first version keeps frequency buckets that are a reasonable match left vs right and that, using the base NoiseReduction, are not noise.
The aim is a filter that picks out sound from roughly a metre directly in front, using a pair of wearable mono mics worn shoulder to shoulder for left-right, mixed back down to a single earplug. With my initial lapel mics the filter is a bit sharp, with cutoffs over more like 20 cm.
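The idea in sketch form, as a pure-Python illustration of the concept rather than the actual .ino logic (the DFT helper, `left_right_mask` name, and the threshold/floor values are all made up for the sketch):

```python
import cmath
import math

def dft(x):
    # Naive DFT, fine for illustrating the idea on small blocks.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def left_right_mask(left, right, max_mismatch_db=6.0, floor_gain=0.1):
    # Buckets where the left and right magnitudes roughly match are assumed
    # to come from a source straight ahead and are passed through; mismatched
    # buckets are attenuated to floor_gain.
    L, R = dft(left), dft(right)
    gains = []
    for l, r in zip(L, R):
        ml, mr = abs(l) + 1e-12, abs(r) + 1e-12
        mismatch_db = abs(20.0 * math.log10(ml / mr))
        gains.append(1.0 if mismatch_db <= max_mismatch_db else floor_gain)
    return gains
```

The real version also folds in the base NoiseReduction decision per bucket; this sketch only shows the left-vs-right match.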
As a first project this is more about trying out the flows and adding menu items to the Bluetooth phone menu (e.g. LR_enable) to get a feel for the tech.
This is what it looks like (ino): https://github.com/stvhawes/Tympan_Library/blob/left_right_filter_freq/examples/04-FrequencyDomain/LeftRightFilter_FD/LeftRightFilter_FD.ino

So what’s the question?

I’d like to compile the same code on x86 to shorten my test loop and to create output expectations and auto-tests, so I’m starting to dig into the Cortex-M7 special functions, i.e. CMSIS-DSP. Is this something that’s doable, or does it cut against a purpose of CMSIS? A related alternative might be a Cortex-M7 (Teensy) simulator.

Anyway, I’m asking because it’s a lumpy topic that had me set my project aside for a few weeks. Regards.

This is super cool! Great work!

Sadly, no, I’ve never cross-compiled the Tympan library to run on PC-class devices. And I’m not aware of an existing Teensy 4 simulator, but that would be a great question over on the Teensy forums.

In my mind, there are two different challenges to making the Tympan_Library run on a PC: (1) elements of the library that are tied to the audio codec and other hardware; and (2) elements of the library that are tied to the ARM CMSIS DSP library.

I think that both challenges could be overcome, but it would be a lot of work to do it in a maintainable, sustainable way. Doing it as a one-off project would be easier, but keeping the code base going through future updates to the Tympan_Library seems like a really hard thing to do (unless there’s a Teensy simulator, like you said).

But maybe that is thinking too big. Maybe having a generic ability to compile and run Tympan firmware on a PC is more than you really need. Maybe you’re just interested in getting this one project to work. If so, you could consider a much more narrow approach…

If you already have a PC-based programming environment that you like (C++, Python, whatever), you could consider translating over just the couple of elements that you really need from the Tympan_Library. For example, if the overlapping FFT and the noise reduction framework are really the only two pieces that you need, you could consider translating just those pieces over to run on a PC. Forget about the rest of the Tympan_Library; just copy and translate the few parts that you need.

If you’re in C++ on your PC, you would literally copy-paste the code from the Tympan *.h and *.cpp files into *.h and *.cpp files in your PC project. Then, whenever you see a piece of code that is ARM-specific (like the down-deep FFT and IFFT routines), you’d have to replace it with something that works on a PC. There aren’t many of these ARM-specific pieces of code, so I think you’d find that it wouldn’t take long before you’re done!
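To give a flavour of what such a replacement could look like (a pure-Python sketch only, not the CMSIS routines and not the substitute I’m offering below; any real port would use a proper library or C++ equivalent):

```python
import cmath
import math

def fft(x):
    # Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two.
    N = len(x)
    if N == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    # Apply the twiddle factors to the odd half, then combine.
    tw = [cmath.exp(-2j * math.pi * k / N) * odd[k] for k in range(N // 2)]
    return ([even[k] + tw[k] for k in range(N // 2)] +
            [even[k] - tw[k] for k in range(N // 2)])

def ifft(X):
    # Inverse via the conjugate trick: ifft(X) = conj(fft(conj(X))) / N.
    N = len(X)
    y = fft([v.conjugate() for v in X])
    return [v.conjugate() / N for v in y]
```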

If you choose to pursue this route, I can help you with a substitute FFT routine.

Great work with the two-channel frequency domain algorithm for spatial processing. Very cool.

Chip

Thanks Chip, that’s very encouraging :slight_smile:

I have Python 3.12.4 and C++ (GCC 13.3.1); yes Audacity, but not MATLAB. I’m hoping I can use PyAudio (PortAudio) to make my code useful on Mac/Windows too. Installing PyAudio also brings in PortAudio, which is callback-based. I hope to copy Tympan_Library code into these callbacks, e.g.:

static int patestCallback( const void *inputBuffer, void *outputBuffer,
                           unsigned long framesPerBuffer,
                           const PaStreamCallbackTimeInfo* timeInfo,
                           PaStreamCallbackFlags statusFlags,
                           void *userData )
{
    const float *in  = (const float *) inputBuffer;
    float       *out = (float *) outputBuffer;
    /* Translated Tympan block processing would go here; for now, pass through. */
    for( unsigned long i = 0; i < framesPerBuffer; i++ )
        out[i] = in[i];
    return paContinue;
}
Yes please on the substitute FFT, but perhaps not yet!

Something else, though not quite yet: are there simple maths/algorithms for assessing a resulting audio stream automatically? Perhaps counting spectral peaks in the FFT arrays to estimate the number of speakers, or (again from the FFT) estimating white-noise content or the signal’s frequency dispersion … just as ideas.
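For the white-noise idea, one measure I’ve seen is spectral flatness (geometric mean over arithmetic mean of the power spectrum); a rough sketch, with the function name and the epsilon guard my own:

```python
import math

def spectral_flatness(mags):
    # Ratio of geometric mean to arithmetic mean of the power spectrum.
    # Close to 1 for white-noise-like spectra, near 0 for tonal content.
    powers = [m * m + 1e-12 for m in mags]  # epsilon avoids log(0)
    log_gm = sum(math.log(p) for p in powers) / len(powers)
    gm = math.exp(log_gm)
    am = sum(powers) / len(powers)
    return gm / am
```

Fed with FFT magnitude arrays, this could give a crude auto-test signal: flat spectra score near 1, a few dominant peaks score near 0.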

Anyway, thanks again Chip!

Best regards,
Stephen

P.S.
PortAudio works on Linux (ALSA) and works on Windows by design.
PyAudio then sits on top of PortAudio.