# Demystifying Digital Filters, Part 2

In the first tutorial in our series on digital filters, you learned that filters are systems that act on signals.

If the system is linear and time-invariant, you can use impulse responses to entirely characterize the system, too.

You saw that convolution and recursion are two algorithms that give us two varieties of filters: FIR (finite impulse response) and IIR (infinite impulse response).

This next part of our tutorial series will explore FIR filters in more depth. It will cover:

- The advantages and disadvantages of FIR filters
- Design and implementation of FIR filters
- The windowing method
- A brief look at the optimum-design method
- Resources for FIR filtering in Max

### FIR filter advantages and disadvantages

Before you dive into making an FIR filter, it’s worth knowing what they do well and what they do poorly.

**Advantages**:

- It’s easy to design an FIR filter for __linear phase response__
- The stability of an FIR filter is guaranteed (there are no feedback loops!)

**Disadvantages**:

- They’re computationally intensive (convolution is a computationally expensive operation)
- It’s harder to change the filter parameters in real time

### Designing and implementing FIR filters

In the last part of this series, you were introduced to the general process for designing an FIR filter:

1. Create a desired frequency response. That involves drawing out the frequency response curve, sampling it, and putting the result into a vector of values.
2. Compute the inverse FFT of the frequency response to get the impulse response.
3. Use the impulse response as the filter kernel and convolve the input with the impulse response you found in step 2.
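If it helps to see the procedure as code, here’s a rough sketch of the three steps in Python with NumPy. The sample rate, FFT size, and cutoff are assumed values chosen just for illustration:

```python
import numpy as np

N = 512        # number of frequency samples (FFT size), an assumed value
fs = 48000     # assumed sample rate in Hz
cutoff = 4000  # assumed cutoff frequency in Hz

# Step 1: draw the desired frequency response (here, a brick-wall
# lowpass) and sample it into a vector of values
freqs = np.fft.rfftfreq(N, d=1 / fs)
desired = np.where(freqs <= cutoff, 1.0, 0.0)

# Step 2: the inverse FFT of the frequency response gives the impulse response
h = np.fft.irfft(desired, n=N)
h = np.roll(h, N // 2)  # rotate so the response is centered rather than wrapped

# Step 3: use the impulse response as the filter kernel and
# convolve it with the input signal
x = np.random.randn(1000)  # some arbitrary input
y = np.convolve(x, h)
```

As the next section shows, step 2 is exactly where the trouble starts.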

The three-step procedure described above is theoretically sound, but when you start getting into the implementation, there’s a problem that arises — for nearly all situations, the impulse response you will compute in step 2 will be *infinite*, rather than *finite*.

Why does this happen, and what do we do about it?

### A brick-wall lowpass filter

To see this issue firsthand, let’s consider an example.

Suppose you want to create a brick-wall lowpass filter (in other words, a filter with a perfectly flat __passband__, infinitely narrow __transition band__, and perfectly flat __stopband__). The frequency response for this would be a rectangle, like this:

To get the impulse response, you’d use the inverse FFT. (Since the details of the FFT are not the focus of this series, let’s skip straight to the results):

This particular function appears so often in Fourier analysis that it has its own name: the __sinc function__. This function shows up all over in DSP texts, so it’s worth knowing. The equation for the (normalized) *sinc* function is *sinc(x) = sin(πx) / (πx)*.

There are two important characteristics of the function to consider.

First, it has an **infinite domain**. In other words, it outputs a value for every *x*.

You might notice that if *x* is zero, it appears that you’ll run into a division by zero. Mathematically, this is no issue, and it can be proved with a __little calculus__ that the result is actually 1, not undefined. (Note that a computer will not have the insight that *x* = 0 is a valid input, so if you’re coding up your own *sinc* function, you’ll need to handle this special case.)

Second, and related to the point above, the *sinc* function is “**non-causal**,” meaning it has non-zero values for negative values of *x*.

A non-causal impulse response indicates that the system’s output depends on input samples from the future, which no real-time system can produce directly. (In practice, you make such a filter realizable by shifting the impulse response to the right and accepting some delay.)

Knowing these two characteristics of *sinc* (and thus the characteristics of the impulse response of our brick-wall filter), you should now see how you might end up with a filter kernel that is unsuited for an FIR filter.
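If you want to code up your own *sinc* function, here’s a minimal Python sketch that handles the *x* = 0 special case mentioned above explicitly:

```python
import numpy as np

def sinc(x):
    """Normalized sinc: sin(pi*x) / (pi*x), with sinc(0) defined as 1."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.ones_like(x)   # sinc(0) = 1, so start with ones everywhere
    nz = x != 0             # only divide where x is non-zero
    out[nz] = np.sin(np.pi * x[nz]) / (np.pi * x[nz])
    return out
```

Note that NumPy already ships this normalized form as `np.sinc`, special case included, so in practice you can lean on the library.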

This is just one example, though, and it is healthy to remain skeptical of how likely you’ll end up in this situation! To quell those concerns, you can research more about the __duality property of the Fourier transform__. To summarize the property, finite signals in one domain (like the frequency response we designed above) become infinite in another (like the *sinc* function) and vice-versa.

Now that you have an idea of why the issue will arise, let’s explore one of the most common solutions: **windowing**.

### Windowing method

You can choose to “zero out” parts of the impulse response by multiplying the impulse response by a *window function*, a signal full of zeros except for a section of interest. By multiplying a signal with a window function, the result will look like a finite version of the original signal.

As an illustration, take a look at this arbitrary sinusoidal function (an infinite signal), a rectangular window, and the result of multiplying the two together:

You may wonder why you need to bother with the idea of a window function at all when you could just simply “crop” the infinite signal. The reasoning goes back to a property of linear time-invariant systems: *multiplication in the time domain is convolution in the frequency domain, and vice versa*. This means when you crop a *sinc* impulse response by multiplying with a rectangular window, you have just convolved the frequency response with the frequency response of that rectangular window! In other words, *by windowing the impulse response, you’re going to change the original frequency response, too*.
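You can check this property numerically. In the sketch below (Python/NumPy, with an arbitrary sinusoid and rectangular window), the FFT of the windowed signal equals the circular convolution of the two spectra, scaled by 1/N:

```python
import numpy as np

N = 256
n = np.arange(N)
x = np.sin(2 * np.pi * 10 * n / N)  # an arbitrary sinusoid
w = np.zeros(N)
w[96:160] = 1.0                     # a rectangular window

# Multiplication in the time domain...
windowed = x * w

# ...corresponds to circular convolution in the frequency domain (scaled by 1/N).
# The circular convolution of the two spectra is itself computed via the FFT trick:
X = np.fft.fft(x)
W = np.fft.fft(w)
spec_conv = np.fft.ifft(np.fft.fft(X) * np.fft.fft(W)) / N
```

Comparing `np.fft.fft(windowed)` with `spec_conv` shows they match to within floating-point precision.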

All is not lost, though! If you are careful about selecting a window function, you can minimize the effects on the new frequency response and keep your original design.

There are __many__ window functions to choose from, each with its own optimizations.

You should notice that a common characteristic of window functions is their tapered edges. Much of window design comes down to just how this taper occurs, and sometimes window functions are simply called *taper functions*.

*Pro-tip: If you’re interested in understanding why tapering is so crucial, you’ll want to read about the __Gibbs phenomenon__.*

To help pick from all of those window functions, you can use the same tools you’ve learned about already — impulse and frequency response analysis.

Let’s analyze the responses of two popular windows, Hamming and Blackman.

First, notice that the Blackman window has better *stopband attenuation*: at frequencies further from the “main lobe,” the gain drops off more quickly. The tradeoff is that a filter designed with the Blackman window will have a slower roll-off than one designed with the Hamming window.
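One way to quantify the stopband attenuation is to measure the peak sidelobe level of each window’s frequency response. Here’s a Python/NumPy sketch (the window length of 64 is an assumed value; the exact numbers vary slightly with length):

```python
import numpy as np

def peak_sidelobe_db(window, nfft=8192):
    """Peak sidelobe level of a window's frequency response,
    in dB relative to the main lobe."""
    H = np.abs(np.fft.rfft(window, nfft))
    H /= H.max()
    # Walk down from the main-lobe peak to the first null...
    i = 1
    while i < len(H) and H[i] <= H[i - 1]:
        i += 1
    # ...then the highest remaining peak is the worst sidelobe
    return 20 * np.log10(H[i:].max())

# Blackman trades a wider main lobe (slower roll-off) for
# lower sidelobes (better stopband attenuation)
hamming_sl = peak_sidelobe_db(np.hamming(64))    # roughly -43 dB
blackman_sl = peak_sidelobe_db(np.blackman(64))  # roughly -58 dB
```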

Another factor to keep in mind is the *window length*. The longer the window you use, the more of the original signal will remain (so you’ll be using more of that impulse response you worked for earlier!), which will give you more accuracy.

The tradeoff for this is computation speed and delay — the larger your window, the larger your filter kernel will be, the longer the expensive convolution computations will take, and the longer the delay line you’ll need.

Let’s say you decide that you’re comfortable with a slightly slower rolloff in exchange for better stopband attenuation, so you choose the Blackman window.

Now you’re ready to apply the window to the *sinc* impulse response from earlier.

With that, you now have a finite impulse response ready to act as a filter kernel!
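Putting the pieces together, a windowed-sinc lowpass kernel might be built like this (a Python/NumPy sketch; the cutoff and kernel length are assumed values, not anything prescribed above):

```python
import numpy as np

M = 101                                # kernel length (odd and symmetric, for linear phase)
fc = 0.1                               # assumed cutoff as a fraction of the sample rate
n = np.arange(M) - (M - 1) // 2        # sample indices centered on zero

kernel = 2 * fc * np.sinc(2 * fc * n)  # the truncated sinc impulse response
kernel *= np.blackman(M)               # taper it with the Blackman window
kernel /= kernel.sum()                 # normalize for unity gain at DC

# The kernel is now finite, so it can drive an FIR filter via convolution:
x = np.random.randn(4800)              # some input signal
y = np.convolve(x, kernel, mode="same")
```

Centering the kernel and keeping it symmetric is what buys you the linear phase response mentioned at the start of the article.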

### Optimum design methods

While the windowing method is straightforward, there are other ways to design FIR filter kernels. The main alternative to the windowing method is known as “optimum design.”

The method isn’t called “optimum” because it is necessarily better than other methods. Instead, the name comes from the mathematical problem of __optimization__.

So far, you’ve seen that designing FIR filters with the windowing method involves some trial and error. You must:

1. Design your filter response.
2. Find the impulse response.
3. Try out some windows.
4. Examine the results.

What if you wanted to specify a parameter like, say, the maximum amount of ripple in the passband and not go through these computations over and over?

The answer, you may have guessed, is with optimum design.

The idea is that if you can describe the *error* of your resulting frequency response (which is shown in blue in the above illustration) compared to the ideal case (which is shown in orange) as a function, you can *minimize* that function for some amount of error you are willing to accept and use the resulting parameters.

That amount of error is sometimes referred to as the *tolerance* because it represents how much you will tolerate the result deviating from the ideal case.
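As one concrete example, SciPy implements the Parks-McClellan algorithm, a widely used optimum-design method, as `scipy.signal.remez`. Here’s a sketch with a hypothetical spec (not from this article): a passband up to 4 kHz and a stopband starting at 6 kHz, at an assumed 48 kHz sample rate.

```python
import numpy as np
from scipy.signal import remez, freqz

fs = 48000  # assumed sample rate in Hz

# Hypothetical spec: passband 0-4 kHz, transition band 4-6 kHz, stopband 6 kHz and up
taps = remez(
    numtaps=101,
    bands=[0, 4000, 6000, fs / 2],
    desired=[1, 0],  # unity gain in the passband, zero in the stopband
    fs=fs,
)

# Inspect the resulting frequency response
w, H = freqz(taps, worN=2048, fs=fs)
```

You specify the band edges and desired gains up front, and the algorithm solves for the kernel directly, with no window-picking trial and error.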

A deeper discussion on the topic of optimum design is out of the scope of this series, but if you are interested in going further, check out the freely-available __optimal FIR filters__ section of *Spectral Audio Signal Processing* by Julius O. Smith III or Chapter 7 of the classic textbook *Digital Signal Processing* by Alan V. Oppenheim and Ronald W. Schafer.

### FIR resources in Max

To work with FIR filters in Max, there are a number of objects and examples at your disposal.

- The **buffir~** object takes a reference to a **buffer~** containing an impulse response and uses it as the filter kernel. For help, see *buffir~.maxhelp* and *buffir-eq.maxpat*.
- Patching in **gen~**, which gives you single-sample control, is a natural fit for designing impulse responses. For an example, check out *gen~.buffir.maxpat*.
- The HISSTools Impulse Response Toolbox package contains externals and abstractions that are helpful for building FIR filters, including:
  - **bufconvolve~** for offline (non-real time) convolution and deconvolution of buffers
  - **iruser~** to create IRs from a specified frequency response and output phase

### Up Next

For the third part of this series, get ready to dig into the other category of digital filters — IIR filters!

### References

- __Introduction to Digital Filters with Audio Applications__ and __Spectral Audio Signal Processing__ by Julius O. Smith III
- __The Scientist and Engineer's Guide to Digital Signal Processing__ by Steven W. Smith
- __Digital Signal Processing__ by Alan V. Oppenheim and Ronald W. Schafer

*Learn More: See all the articles in this series*

by Isabel Kaspriskie on March 2, 2021, 18:45