Learning acoustic transfer functions on the Tympan

I am trying to use the Tympan to estimate acoustic transfer functions. I am starting off by estimating the filter between the line out of the Tympan and the line in (which should be a scaled delay). However, I am having some difficulties getting this filter to converge using normalized least mean squares.

Has anyone successfully learned this filter on the Tympan?

Hi! Welcome!

I’ve never used NLMS for the specific purpose of finding an acoustic transfer function. Well, that’s a lie…we do use NLMS as part of our feedback cancellation algorithms (as shown in the Tympan examples), so I guess I have used it to estimate a transfer function. It’s just that I don’t really think of it that way.

When I think of a transfer function, I think of evaluating linear systems that exhibit minimum phase (i.e., no time lag). In that case, it’s super easy to evaluate the transfer function in the frequency domain instead of using time-domain methods like LMS.

In the frequency domain, I’d simply stimulate the system with white noise (for example), record the input and output, take an FFT of both, do the complex division of output / input, and I’d be done. I’d repeat several times to get a more statistically reliable estimate. Easy.
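To make that concrete, here’s a minimal numpy sketch (not Tympan code; the pure-gain “system” and all the parameter values are made up for illustration). Rather than dividing single noisy FFTs, it averages cross- and auto-spectra across the repeats before dividing, which is a slightly more robust way to do the same thing:

```python
import numpy as np

rng = np.random.default_rng(0)
n_fft = 1024
n_trials = 20       # repeat and average for a more reliable estimate

# A made-up "system" for illustration: a pure gain of 0.5 (no delay)
def system(x):
    return 0.5 * x

# Accumulate averaged cross-spectrum and input auto-spectrum
Sxy = np.zeros(n_fft, dtype=complex)
Sxx = np.zeros(n_fft)
for _ in range(n_trials):
    x = rng.standard_normal(n_fft)   # white-noise stimulus
    y = system(x)                    # "recorded" output
    X = np.fft.fft(x)
    Y = np.fft.fft(y)
    Sxy += Y * np.conj(X)
    Sxx += np.abs(X) ** 2

H = Sxy / Sxx                        # estimated transfer function
print(np.allclose(H, 0.5))          # gain 0.5 at every frequency, zero phase
```

The magnitude of `H` gives the frequency response and `np.angle(H)` gives the phase response.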

This approach works really well if you think of the transfer function as simply a magnitude response and a phase response.

Alternatively, if your system response is dominated by a delay between input and output, or if there’s the possibility of several echo pathways, then perhaps LMS is a better approach. Though, if the math is to be believed, they’re probably equivalent.
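As a sanity check on the LMS route, here’s a toy NLMS sketch in Python (again, not Tympan code; the 0.8 gain, the 5-sample delay, and the step size are made-up illustration values). It identifies a system that is just a scaled delay, like the line-out to line-in path in the original question:

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up "unknown" system: a scaled delay (gain 0.8, delay 5 samples)
n_taps = 16
true_h = np.zeros(n_taps)
true_h[5] = 0.8

x = rng.standard_normal(5000)            # white-noise excitation
d = np.convolve(x, true_h)[:len(x)]      # desired signal (system output)

w = np.zeros(n_taps)    # adaptive filter weights
mu = 0.5                # NLMS step size (stable for 0 < mu < 2)
eps = 1e-8              # regularization to avoid divide-by-zero

buf = np.zeros(n_taps)  # most-recent-first input buffer
for n in range(len(x)):
    buf = np.roll(buf, 1)
    buf[0] = x[n]
    y = w @ buf                                # filter output
    e = d[n] - y                               # error
    w += (mu / (eps + buf @ buf)) * e * buf    # normalized LMS update

print(np.argmax(np.abs(w)))   # -> 5, i.e., the learned delay
```

After adaptation, the largest tap sits at the delay and its value approaches the gain (about 0.8 here). The normalization term `buf @ buf` is what makes the step size insensitive to the input power.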

So, back to you and your goals…beyond your current case of measuring the delay between input and output, is measuring delay really your future goal? Will you expect there to be multiple echoes? If so, then I’d recommend sticking with LMS. You could start by looking at the Tympan examples for feedback cancellation.

Or, if you really just want frequency and phase response (and not much concern over multiple echoes), you might consider abandoning LMS and using FFT analysis instead.

Let me know what you think!


I would like to learn the filter from the output to the input adaptively for a few reasons:

  1. I am eventually going to learn the transfer function between a speaker and the onboard mic on the Tympan, which I plan on using to test out simple active noise cancellation algorithms.
  2. Since I plan on estimating the transfer function multiple times during the Tympan’s execution, learning adaptively saves space, which will let me use longer filters on the device.

I suppose I could use a frequency-domain adaptive filter, but I should be able to do this in the time domain as well.

I don’t know how to attach my code on this forum, but I am happy to provide it.

Thanks for the additional details.

If I were you, I’d start by doing your calculations on recorded data and only then, once you’ve got that right, would I try to do the calculations on-the-fly adaptively. Doing anything on-the-fly requires a lot of annoying debugging and you want to be confident that the core processing has already been validated.

So, if you wanted to do your transfer function assessment off-line first, you’d have the Tympan record your data to its SD card. Then you’d pull the data off the SD card and play around until you’re happy with the core algorithm. I’d do this on a PC/Mac where it’s easier to debug and make graphs, but you could do it on the Tympan if you needed to. Then, once you’re happy with your processing, you’d move it over to the Tympan.

Just a suggestion.

Also, as for code sharing, it’s probably easiest to simply post it to GitHub and share a link here. You can paste chunks of code in these forum posts, but full code is probably easiest via GitHub (if you use GitHub).


Oh yeah…

Years ago, I measured the delay between the input and the output of the Tympan. This is a simpler version of your goal of measuring the full transfer function, including any delay.

Perhaps you’d be interested: Open Audio: Measuring Audio Latency


I have been able to learn the relative transfer function between the onboard microphones, as well as record to the SD card and estimate the transfer function (as a delay). In fact, I found that using the SD card increases the delay of the FIR code used in the latency tests, at least compared to your earlier post regarding latency.

Let me know if you need me to do anything else. Attached is my code.


Your code posting on GitHub seems to work great. Nice work!

I’m glad to hear that you’ve had success with measuring the transfer function. Would it be possible for you to share any graphs or figures of your findings? I’m super interested!

As for the changing latency due to adding the SD card, that does not have to be the case. It is not inherent in doing SD recording. So, if you are seeing an extra delay due to adding SD recording, we should find out why!

How can we figure this out…

In the Tympan code, the overall delay does depend upon the order in which the audio processing classes are executed. If the system tries to execute them in the wrong order, the input data for a given class might not yet be available. The system will have to wait, adding delay. So, the key is to try to get all the processing blocks to get called in the right order so as to avoid any extra delays.

It may be helpful for you to know that the audio processing classes are executed in the order that you create them in your code. So, if you make an edit to your code where you change the order that you create them, you will change the order of their execution. This might change the overall delay.

For a simple (sequential) audio processing chain, where you simply have audio going in a straight line through each audio processing step, it’s easy to see how to create the classes in the right order. But the SD class is tricky because it has two inputs (for stereo audio recording). If you care about delay, you need to make sure that both audio paths are computed before the SD class is called. If you get the order wrong, you risk one of the channels being delayed relative to the other in the SD recording.

Because you saw the delay change when adding the SD recorder, I’m thinking that you are seeing this out-of-order issue. Perhaps, if we change the order in which you create your audio classes, we can make the extra delay go away.

Can you share your version of the code where you have included the SD writing?

Also, even if you get the order of the classes perfect, there is still a certain amount of delay that you will experience. That minimum delay is described by the blog post that I linked. Under ideal conditions, your delay will be at least: delay_seconds = (2*audio_block_samples + 38) / sample_rate_Hz

Per this equation, if you want to reduce the delay, the easiest two things to do are: (1) increase the sample rate, and (2) reduce the audio block size. You can do both at the top of your code where you set sample_rate_Hz and audio_block_samples.
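Plugging a couple of illustrative settings into that formula (these particular block sizes and sample rates are just examples, not recommendations):

```python
# Minimum latency per the formula above: (2*block + 38 samples) / sample rate
def min_delay_ms(audio_block_samples, sample_rate_hz):
    return 1000.0 * (2 * audio_block_samples + 38) / sample_rate_hz

print(round(min_delay_ms(128, 44100), 2))  # 128-sample blocks at 44.1 kHz -> 6.67 ms
print(round(min_delay_ms(16, 96000), 2))   # 16-sample blocks at 96 kHz   -> 0.73 ms
```

So shrinking the block size and raising the sample rate together can cut the minimum latency by roughly an order of magnitude, at the cost of more frequent interrupts and less time per block for your processing.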


I am traveling and I forgot to bring the SD card reader (I have the Tympan and an aux cord on me). However, my code for measuring latency is now available at the GitHub link I sent. It uses Bluetooth to start recording. If you need me to generate a graph, it will have to wait until later this month.

I didn’t consider messing with the order of the classes I set up; that’s something I will look into when I am back.