This is but one of many tricks I've used to improve the accuracy of noisy sensors and the like in data acquisition. It's a simple simulation of a first-order low-pass filter, but in digital land there are possibilities and "issues" that don't apply to analog filters, and these are worth looking into for a lot of applications. Often you can add some bits of accuracy even to a not-noisy signal (e.g. a limited-bit-resolution A/D converter, if the switching levels are more accurate than the number of bits implies - which is very often the case).

We do this by defining two filter coefficients, which must add to 1.000... For this writeup I'll call them I and A (input and accumulator), plus one accumulator register that holds the output.

The basic plan is simple. For each new input, multiply it by I, add the accumulator multiplied by A, and stuff the sum back into the accumulator - and that's the output.

It is NOT a perfect simulation of an analog low-pass with ideal components - the I coefficient acts as though there is some series R in your filter capacitor. This can be good or bad. It's generally negligible if I is tiny, not so much if it's big, like .5, in which case half the input change gets directly to the output in one go. And like all filters, when started "fresh" at zero, it takes a while to acquire the final value, even if the input is just a fixed non-zero number. But we can fix that easily in digital-land, where it's almost impossible in analog. We can also have separate attack and decay times (with a digital simulation of a perfect diode), and other neat stuff, as the application requires.

Simple pseudo-code for this (thinking in C, as most embedded stuff uses these days) would be:

// set coefficients
#define I 0.1F
#define A (1.0F - I) // parentheses matter: A is substituted as text, and an unparenthesized 1.0F-I would wreck accumulator * A

// for each sample:
accumulator = input * I + accumulator * A;

It's very important that the two coefficients add to exactly one in whatever number representation your target uses - which might not be the same as what the compiler uses. So it might be wiser to do it like this:

#define I 0.1F
float A;

// then, in some setup routine:
A = 1.0F - I; // computed on the target, so the pair sums to exactly one no matter the resolution/error of the target's floating point representation

After that, the filter itself is the same; we are just making sure the net gain is exactly 1.000x. If your compiler uses double during compilation but your platform uses float (or you're doing this in fixed-point integer, which takes a few other tricks), this can be really important. These days, for all but the most demanding requirements, the increased speed of microprocessors over what I learned on means: go ahead and use float rather than sweat fixed-point integers for speed.
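Since fixed point came up: here's a sketch of one common way to do it (the SHIFT value and the guard-bit count are illustrative picks, not prescriptions). The incremental form acc += (in - acc) >> k is equivalent to I = 1/2^k and A = 1 - 1/2^k, so the two coefficients add to exactly one by construction, and the extra fractional bits in the accumulator keep small inputs from being truncated away:

```c
#include <stdint.h>

// Fixed-point version of the filter.  The accumulator carries 16 extra
// fractional "guard" bits so tiny per-sample contributions aren't lost.
// I = 1/2^SHIFT and A = 1 - 1/2^SHIFT, exact by construction.
#define SHIFT 8  // I = 1/256 (illustrative choice)

static int32_t acc = 0; // accumulator: input scaled up by 2^16

int16_t filter_fixed(int16_t input)
{
    int32_t in_scaled = (int32_t)input << 16;
    // acc += (in - acc)/2^SHIFT  is the same as  acc = acc*A + in*I.
    // (Right shift of a negative value is arithmetic on essentially all
    // embedded targets, but strictly it's implementation-defined in C.)
    acc += (in_scaled - acc) >> SHIFT;
    return (int16_t)(acc >> 16); // drop the guard bits for the output
}
```

Note the truncation means the output can settle one count shy of the exact input for a constant signal - the price of the guard-bit trick being finite.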

Now for some fun with this that's hard to do in analog. Suppose we are just starting off, and our input is non-zero. If I is small, it'll take a long time for the filter output to ramp up to the correct value, since each sample only closes a fraction I of the remaining gap (assuming you did things right and cleared the accumulator in your setup routine in the first place). So we add a flag variable; let's call it FirstTime (bool, or whatever). We set this true in our setup routine (which is called setup() on Arduinos), and false thereafter.

Our filter then becomes:

if (FirstTime)

{

accumulator = input;

FirstTime = 0; // or whatever your compiler calls false

} else

{

accumulator = input * I + accumulator * A; // same as above

}

Assuming the first input is at least close to right, this saves a lot of time (samples) getting to the correct output, particularly if I is really small - like .001, for example.
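Putting the pieces so far together, a minimal self-contained version might look like this (the function names filter_setup and filter_sample are mine, just for illustration):

```c
#include <stdbool.h>

#define I 0.1F

static float A;           // computed at runtime so I + A is exactly one on the target
static float accumulator;
static bool  FirstTime;

void filter_setup(void)
{
    A = 1.0F - I;
    accumulator = 0.0F;
    FirstTime = true;
}

float filter_sample(float input)
{
    if (FirstTime)
    {
        accumulator = input; // jam-start: skip the slow initial ramp
        FirstTime = false;
    }
    else
    {
        accumulator = input * I + accumulator * A;
    }
    return accumulator;
}
```

The first call returns the input unchanged; after that it's the ordinary filter.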

Note that if your input is from an A/D converter whose bit switch points are more accurate than its resolution implies, this can add a couple of significant bits of accuracy to the result - in gross oversimplification, this is how an oversampling delta-sigma A/D works its way from one bit of comparison up to an N-bit output (they use other tricks and fancier filters...but this is valid too, just not as slick).
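Here's a little simulation of that effect - everything in it (the signal value, the noise source, the coefficient) is made up purely for the demo. A "true" value of 2.37 is quantized to whole steps, which alone would always read 2; with a bit of noise riding on it, the filter recovers the sub-LSB value:

```c
#include <stdint.h>

// A cheap LCG standing in for real sensor noise, uniform in [-0.5, 0.5).
static uint32_t seed = 12345u;
static float noise(void)
{
    seed = seed * 1664525u + 1013904223u;
    return (float)(seed >> 8) / 16777216.0F - 0.5F;
}

// Quantize (true_value + noise) to whole steps, filter, return the estimate.
float recover(float true_value, int samples)
{
    const float i = 0.01F, a = 1.0F - i;
    float acc = (float)(int)(true_value + 0.5F); // jam-start at the coarse reading
    for (int n = 0; n < samples; n++)
    {
        // the "A/D": round the noisy signal to the nearest whole step
        float quantized = (float)(int)(true_value + noise() + 0.5F);
        acc = quantized * i + acc * a;
    }
    return acc; // hovers near true_value, well below one quantization step
}
```

The quantizer outputs only 2s and 3s, but the proportion of 3s carries the fractional information, and the filter averages it back out.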

But wait, there's more!

Suppose you want fast attack, slow decay, as is often the case in peak-picking threshold generation and other common tasks. Simple as can be, and there's more than one way.

First, you can use a different set of coefficients when the input is greater than the output - ones with a bigger I and smaller A. Or you can even be "instant" and make I 1.0 and A 0.0 - in essence, just jam the input into the accumulator as we did above for the startup case, then revert to the old coefficients for the decay part of things. This digitally simulates an ideal diode across the series resistor in an RC low-pass filter - except that it really is perfect, unlike all real analog diodes.
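A sketch of the instant-attack, slow-decay flavor (the DECAY_I value is just an illustrative pick):

```c
#define DECAY_I 0.01F            // slow decay coefficient (illustrative)
#define DECAY_A (1.0F - DECAY_I)

static float envelope; // starts at zero

// Instant attack, slow decay: the digital "perfect diode" across the filter.
float envelope_sample(float input)
{
    if (input > envelope)
        envelope = input;                                // attack: I = 1.0, A = 0.0
    else
        envelope = input * DECAY_I + envelope * DECAY_A; // decay: the usual filter
    return envelope;
}
```

Feed it an impulse and the output snaps up immediately, then bleeds down at the decay rate.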

The addition of logic (basically, use of the if() construct) can do all sorts of fun things you might find useful. If your output is "going wild", you can "clip" it to some maximum or minimum value of your choice, trivially - something that would normally take "perfect" zener diodes to accomplish in analog. You could decide that if some input is way the heck off, just ignore it. (I will get into that with another technique - median smoothing - which has serious advantages in some cases.)
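The clipping case is about as trivial as logic gets - a sketch, with made-up limits:

```c
#define OUT_MIN -100.0F // illustrative limits
#define OUT_MAX  100.0F

// Clamp a value to the chosen range - a pair of "perfect zeners" in software.
float clip(float x)
{
    if (x > OUT_MAX) return OUT_MAX;
    if (x < OUT_MIN) return OUT_MIN;
    return x;
}
```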

Multiplies are nearly always cheaper in cycles than divides in computers, so this is a better construct than one that requires divides. if() or switch/case constructs vary in cost depending on whether your machine has a pipeline or speculative execution, but they're generally pretty quick too. I've seen one compiler that, given consecutive integer cases, simply created a jump table and added the case input (after multiplying by the word size of the address space, usually just a shift) to the program counter, so it was extremely fast. That level of optimization is rare, but the TI TMS320 C compiler did it.

For peak picking - say you're looking at some sort of tuned circuit that gets hit with impulses and "rings down" thereafter - you can simply use some fraction of the accumulator as a threshold, and if the signal is above it, that's the hit. We've used versions of this (after some slick preprocessing) to detect pitch pulses in human speech quite accurately and right on the pulse - avoiding the robotic sound most speech codecs used to have (I had a patent on the process, and it's still in use in your phone).
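A rough sketch of that kind of adaptive threshold, combining the perfect-diode envelope with the fraction test (the coefficient and threshold fraction here are invented for illustration - not the values from the speech work):

```c
#include <stdbool.h>

#define ENV_I  0.002F           // slow envelope decay (illustrative)
#define ENV_A  (1.0F - ENV_I)
#define THRESH 0.5F             // hit threshold: half the envelope (illustrative)

static float env; // fast-attack, slow-decay envelope of the signal

// Returns true when the current sample pokes above the adaptive threshold.
bool is_hit(float magnitude)
{
    bool hit = (magnitude > env * THRESH) && (magnitude > 0.0F);

    // Track the envelope: instant attack, slow decay.
    if (magnitude > env)
        env = magnitude;
    else
        env = magnitude * ENV_I + env * ENV_A;

    return hit;
}
```

The first big impulse registers as a hit and charges the envelope; the ring-down that follows stays under half the envelope and is ignored.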

So, a lot of what I'm attempting to communicate here is that the combination of "simulation of analog" and computer logic can do things neither alone can accomplish, and that these simple tricks and others can be quite useful. You can do things "not in the books" pretty easily with some of these "brown" heuristics. (brown meaning "I pulled it out of my a**")

This IIR filter is most often better than a simple moving average (an FIR with all coefficients equal), which has leakage at frequencies determined by the block size, and it's certainly a lot easier to compute, with less RAM involved. Unlike a moving average (which does have its uses too), this weights recent inputs more heavily than older ones (which keep getting reduced via multiplication by A), which is most often what's wanted anyway.

Warning:

With this or any other linear or non-linear selection technique, you run the risk of finding what you were looking for - whether it was there or not. There is no substitute for understanding your problem, edge cases and all!

This is particularly important in cases where there is so much data you have to use a "dumb" front-end to discard lots of it - ask the people at CERN about that one, and the risks that they will miss something important as a result.

Sometimes there is no easy answer - you can't keep up with it all so you have to do what you can.