Digital Signal Controllers Turn
The digital signal controller combines the best features of both microcontrollers and digital signal processors in a powerful, flexible platform on which to build robust measurement and control applications.
Creed Huddleston, Omnisys Corp.
Precise temperature measurement is a requirement in many process control and/or monitoring situations. According to Eurotherm/Barber-Colman’s Dennis Hablewitz, “It is hard to think of a process or, for that matter, area of industry in which temperature is not measured.” Whatever the application, designers usually want the greatest measurement accuracy at the lowest possible cost from both a financial and a computational perspective.
Thermocouples are a relatively inexpensive way to measure a wide range of temperatures, but their outputs are extremely low voltage—on the order of millivolts—and they are often installed in high-noise environments. Many designers would love to apply digital signal processing techniques to enhance the quality of their measurements, but microcontrollers frequently lack the horsepower required to implement those techniques and to perform the associated control and communication functions. Riding to the rescue is the digital signal controller, a new class of processor that combines the best features of both microcontrollers and digital signal processors (DSPs) in a powerful, flexible platform on which to build robust measurement and control applications.
Signal Processing Techniques for Maximum Accuracy
Figure 1 shows the basic information flow from the thermocouple to the microprocessor, which looks like nearly every other system that uses digital signal processing to extract or enhance measured data.
At a minimum, the thermocouple signal conditioning circuitry transforms the differential millivolt-level thermocouple signal into one that we can deal with more easily by performing cold-junction compensation and converting the differential signal to an amplified single-ended output. This output (stage 2) then passes through a low-pass anti-aliasing filter that band-limits the frequency content of the converted signal to meet the Nyquist criterion.
There are two important points to note about the anti-aliasing filter. First, the filter must be analog, not digital. This requirement stems from the spectral replication that occurs when a signal is sampled; the frequency response of a digital filter repeats every FS Hz, so it cannot remove components above FS/2, whereas an analog low-pass filter has no such replication and attenuates all frequency components above its cutoff frequency. The second, related point is that the cutoff frequency for the filter should be significantly lower than the sampling frequency, FS, so that the order of the analog filter does not have to be too great to achieve the desired attenuation at frequencies near FS and above. Not only does the reduced filter order simplify the filter design, but it also improves the filter’s stability, since a lower order filter requires fewer discrete components whose values may change over time. The band-limited signal at stage 3 is then digitized by the A/D converter and output as a numerical value at stage 4 for further processing.
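To see why the filtering must happen before the converter, consider what sampling does to a tone above FS/2. The short Python sketch below (the frequencies are chosen purely for illustration) shows that a 70 Hz cosine sampled at 100 Hz produces exactly the same sample values as a 30 Hz cosine; once the samples are taken, no digital filter can tell the two apart.

```python
import math

FS = 100.0  # sampling rate in Hz; a 70 Hz tone lies above FS/2 = 50 Hz

def sample(freq_hz, n_samples, fs=FS):
    """Sample a unit-amplitude cosine of the given frequency."""
    return [math.cos(2 * math.pi * freq_hz * n / fs) for n in range(n_samples)]

# cos(2*pi*70*n/100) equals cos(2*pi*30*n/100) for every integer n,
# because the two frequencies differ by exactly FS.
high = sample(70.0, 10)
alias = sample(30.0, 10)
assert all(abs(h - a) < 1e-9 for h, a in zip(high, alias))
```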
The techniques used for a temperature measurement application will depend in large measure on the characteristics of that specific system: the frequency content of the temperature signal and of the electronic noise in the thermocouple’s environment, the amount of signal delay that can be tolerated, and, of course, the amount of computational horsepower available. You will also need to know something about the spectral characteristics of both the temperature signal you’re trying to extract and the noise sources in the system, and to do some testing to confirm that your analysis is correct.
Improving Temperature Measurements
Oversampling. In many processes, the physical limitations of heat transfer prevent rapid fluctuations in temperature and thus severely limit the frequency content of the associated temperature signal. One way to contend with this limited frequency content is oversampling, or sampling the measured signal more often than the frequency content strictly requires. While Nyquist tells us that a signal with a bandwidth of BM Hz must be sampled at a frequency FS of at least 2 BM Hz, real-world systems need a faster rate (often 4–5 BM Hz) due to physical limitations of the sampling hardware.
When we refer to oversampling in this discussion, however, we actually mean sampling at rates well above the Nyquist rate, on the order of 10–20 BM. By doing so, we can spread the replicated spectra further apart (remember, their separation depends on FS and BM, and BM is fixed). This minimizes leakage between adjacent spectra and allows us to implement powerful algorithms without incurring significant signal delays. Digital signal controllers easily handle fast data rates that would choke slower microcontrollers.
Note that the sampling rate must be based on the measured signal bandwidth, BM, and not that of the “real” temperature signal, BT, to avoid folding noise components outside the desired bandwidth into the spectrum of interest.
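The payoff of oversampling can be sketched in a few lines of Python. The sensor model, noise level, and oversampling factor below are illustrative assumptions, not values from any particular system; the point is that averaging each block of 16 oversampled readings cuts uncorrelated noise by roughly the square root of 16.

```python
import random
import statistics

random.seed(1)  # deterministic run for the demonstration

TRUE_TEMP = 300.0   # steady temperature, in arbitrary units (assumption)
NOISE_RMS = 2.0     # white measurement noise (assumption)
OVERSAMPLE = 16     # oversampled readings per reported measurement

def read_sensor():
    """One noisy A/D reading from a hypothetical sensor."""
    return TRUE_TEMP + random.gauss(0.0, NOISE_RMS)

raw = [read_sensor() for _ in range(16000)]
averaged = [sum(raw[i:i + OVERSAMPLE]) / OVERSAMPLE
            for i in range(0, len(raw), OVERSAMPLE)]

# For uncorrelated noise, averaging 16 samples should reduce the
# noise RMS by about sqrt(16) = 4.
noise_drop = statistics.stdev(raw) / statistics.stdev(averaged)
assert 3.0 < noise_drop < 5.0
```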
Removing Power-Line Interference. In real-world applications, it is quite common to run thermocouple signal lines close to AC power lines. In plastic injection molding, for instance, a cartridge heater is often used to heat the mold (and indirectly the plastic) to the proper temperature. These heaters usually have four wires, two for the heater’s AC power and two for the embedded thermocouple that monitors the heater’s temperature, and these wires run in parallel nearly the length of the heater. Although the thermocouple’s signal is differential, the power-line noise is often sufficiently non-common-mode to present serious problems. Fortunately, we can use the power of digital signal processing to get around this problem.
Let’s start with the easy case first. For those processes in which BT is much less than BM, the temperature signal and the power-line noise occupy well-separated regions of the spectrum, as in the idealized spectrum of Figure 2A.
There are a couple of important things to note about this spectrum. First, although the figure is obviously an idealized case in which there is no broadband noise (i.e., noise that fills the entire spectrum of interest), it allows us to easily visualize the concept we’re exploring. Second, even though the power of the AC noise is much higher than that of the temperature signal itself, because the noise spectrum is separated from the signal of interest we can apply a unity-gain low-pass filter to kill the noise very effectively, as shown in Figure 2B.
Figure 2C shows the spectrum after application of the low-pass filter. The power-line noise has been eliminated, leaving a clean temperature signal. The implementation of the low-pass filter can be in the form of either an infinite impulse response (IIR) or a finite impulse response (FIR) filter, depending on the amount of memory and computational resources we have at our disposal. IIR filters usually consume fewer resources, but FIR filters allow the designer to tailor the filter characteristics more precisely. While an exploration of the characteristics of the different filter types is beyond the scope of this article, there are a number of excellent sources that cover the topic thoroughly.
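As a concrete, if simplified, illustration of an FIR low-pass at work, the Python sketch below uses a 16-tap moving average, the simplest possible FIR filter, to strip a 60 Hz component from a slowly varying signal. The sampling rate and signal values are assumptions chosen to make the arithmetic exact: at 960 samples/s, a 16-tap average has a null precisely at 60 Hz.

```python
import math

FS = 960.0   # sampling rate, chosen as a multiple of 60 Hz (assumption)
TAPS = 16    # a 16-tap moving average has nulls at multiples of 960/16 = 60 Hz

def fir_filter(coeffs, x):
    """Direct-form FIR: y[n] = sum over k of coeffs[k] * x[n - k]."""
    out = []
    for n in range(len(x)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += c * x[n - k]
        out.append(acc)
    return out

coeffs = [1.0 / TAPS] * TAPS  # moving average: the simplest FIR low-pass

# A steady 300-unit "temperature" plus 60 Hz pickup of amplitude 5.
signal = [300.0 + 5.0 * math.sin(2 * math.pi * 60.0 * n / FS)
          for n in range(200)]
clean = fir_filter(coeffs, signal)

# Once the filter fills, each 16-sample window spans exactly one 60 Hz
# cycle, so the interference sums to zero and only the 300 remains.
assert all(abs(y - 300.0) < 1e-6 for y in clean[TAPS:])
```

A real design would use a filter with a flatter passband and more stopband attenuation, but the moving average shows the mechanism with no extra machinery.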
If we’re clever, we’ll set the cutoff frequency of the anti-aliasing filter at a frequency that maintains unity gain over BT but rolls off soon after that. We will also use a digital low-pass filter to further attenuate any remaining noise outside the frequency region of interest. The digital signal controller gives us the computational bandwidth we need to implement high-order filters with nice, sharp transitions between the passband and the stopband to eliminate any frequency components outside the region of interest.
When the power-line frequency falls within the bandwidth of the temperature signal itself, however, a low-pass filter would remove signal along with noise. In that case we use “notch” filters, which pass everything above and below the filter frequency but sharply attenuate signals around it, as shown in Figure 3B. Here, the computational power of the digital signal controller really shines, allowing us to create considerably higher order filters with sharper frequency notches than we could implement in the analog domain.
The key to minimizing the distortion of the “good” signal is to have good, sharp notches that are only as wide as necessary to eliminate the power-line interference. Figure 3C shows the result of applying the notch filter of Figure 3B to the noisy signal in Figure 3A.
Unlike the low-bandwidth case, in which we can exclude the noise without affecting the temperature signal itself, a notch filter will cause some degradation of the temperature signal. Even so, we will get a much better measurement than we would with a low-order analog filter or no filtering at all.
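A second-order IIR section is one common way to realize such a notch in software. The Python sketch below is illustrative only: the sampling rate, pole radius, and signal values are assumptions, and a production design would choose them from the system’s actual specifications. The filter places zeros on the unit circle at the notch frequency and poles just inside it, so 60 Hz is nulled while frequencies away from the notch pass with near-unity gain.

```python
import math

FS = 960.0       # sampling rate (assumption; any rate well above 60 Hz works)
F_NOTCH = 60.0   # power-line frequency to remove
R = 0.95         # pole radius: closer to 1.0 gives a narrower notch

w0 = 2 * math.pi * F_NOTCH / FS
c = math.cos(w0)
b0 = (1 - 2 * R * c + R * R) / (2 - 2 * c)  # scale factor for unity gain at DC

def notch_filter(x):
    """Second-order IIR notch: zeros on the unit circle at +/- w0,
    poles just inside at radius R."""
    y = []
    x1 = x2 = y1 = y2 = 0.0
    for xn in x:
        yn = b0 * (xn - 2 * c * x1 + x2) + 2 * R * c * y1 - R * R * y2
        x2, x1 = x1, xn
        y2, y1 = y1, yn
        y.append(yn)
    return y

# A steady 300-unit "temperature" plus 60 Hz pickup of amplitude 5.
signal = [300.0 + 5.0 * math.sin(w0 * n) for n in range(800)]
clean = notch_filter(signal)

# After the filter's transient dies out, the 60 Hz component is gone.
assert all(abs(y - 300.0) < 1e-3 for y in clean[-100:])
```

Narrowing the notch (pushing R toward 1.0) reduces the collateral damage to nearby signal frequencies, at the cost of a longer transient response.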
Median Filtering for the Removal of Shot Noise. All of the noise that we have examined thus far has been present continuously, but in some cases this assumption doesn’t hold. Noise that occurs sporadically, also known as shot noise, may have a variety of causes, but its signature is an occasional sample whose value differs dramatically from those of its neighbors, like sample 4 in the signal of Figure 4A.
If we simply average the signal by applying either an FIR or an IIR filter, the outputs for sample 4 and beyond will shift up significantly for as long as sample 4 is included in the average, which is not what we really want. Most people would intuitively toss out sample 4 as an aberration and say that the actual temperature is about 300°F, and a technique known as median filtering does essentially that in a systematic way.
When we perform median filtering on a group of samples, we sort them in ascending or descending order and then pick the median value as our filter output. In effect, a median filter is another form of low-pass filter but tends to do a better job than an averaging filter in the presence of shot noise because it actually throws out nonrepresentative samples rather than merely attenuating their effect.
The key parameter for a median filter is its length; the longer the filter, the greater the number of noisy samples it can filter out—but at the expense of greater signal delay. For instance, if we have a median filter that is three samples in length, we can tolerate only one noisy sample. If we have two noisy samples, it is quite likely (though not absolutely guaranteed) that one of the two will be the median value and thus be output from the filter. With three noisy samples, the filter output is guaranteed to be a noisy value. In general, a median filter must be at least 2 N + 1 samples in length to guarantee suppression of up to N noisy samples within its window. Since the length of the filter determines the delay through the filter, this may be a significant limitation in some applications.
Another drawback is that the filter samples must be sorted for each output sample. DSPs that have been highly optimized for performing the pipelined multiply-accumulate operations that are the bread and butter of standard digital filters slow down significantly when forced to perform the compare-and-branch operations required for sorting in a median filter.
Figure 4B shows the effect of applying a length-3 median filter to the noisy signal of Figure 4A. Note that sample 4, which was nearly twice the value of the surrounding samples, is now much more reasonable. Additional filtering by a simple moving-average low-pass filter can clean the signal up further, but without the ill effects of including the obviously incorrect sample value.
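A length-3 median filter of the kind shown in Figure 4B takes only a few lines of Python. The readings below are hypothetical values loosely modeled on the sample-4 example, not data from the article’s figures.

```python
import statistics

def median_filter(x, length=3):
    """Sliding-window median; length should be odd for a unique middle value."""
    half = length // 2
    out = []
    for n in range(len(x)):
        # Clamp the window at the edges of the signal.
        window = x[max(0, n - half):n + half + 1]
        out.append(statistics.median(window))
    return out

# Temperature readings with a single shot-noise hit at index 4.
readings = [299.0, 300.0, 301.0, 300.0, 580.0, 301.0, 300.0, 299.0]
filtered = median_filter(readings)

assert filtered[4] == 301.0   # the 580-unit spike has been discarded
assert max(filtered) <= 301.0
```

Note that the spike is not merely attenuated, as an averaging filter would do; it never appears in the output at all.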
Multichannel Averaging. For a few lucky designers, multichannel averaging offers a way to remove noise that is induced in a particular thermocouple path. Although most applications cannot afford the cost of space or components to implement redundant thermocouples, those that can are able to use this technique to their advantage. Simply averaging sensor readings across multiple thermocouples measuring the same (or nearly the same) location significantly diminishes the effects of non-common-mode noise. Median filtering can also be applied across sensor readings (assuming that we’re measuring the same spot) to further reduce the effects of shot noise.
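Both cross-channel techniques can be sketched together; the three hypothetical channels below monitor (nearly) the same spot, with a single shot-noise hit on one of them.

```python
import statistics

# Three hypothetical thermocouples watching nearly the same location.
# Channel 1 takes a shot-noise hit on its third reading.
ch0 = [300.1, 300.2, 300.0, 300.3]
ch1 = [299.9, 300.1, 575.0, 300.2]
ch2 = [300.0, 300.3, 300.1, 300.1]

# Plain averaging dilutes the bad sample but still lets it through...
avg = [sum(s) / 3 for s in zip(ch0, ch1, ch2)]
# ...while a cross-channel median rejects it outright.
med = [statistics.median(s) for s in zip(ch0, ch1, ch2)]

assert avg[2] > 350.0               # the 575 reading drags the average up
assert abs(med[2] - 300.1) < 1e-9   # the median simply ignores it
```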
What’s Wrong with Using a Microcontroller?
The ability to perform single-cycle high-precision mathematical operations is essential to implementing digital signal processing algorithms. Not only do the mathematical operations themselves need to execute rapidly, but the accumulator (register) that holds the results must be able to store the results of many operations. Usually, we’re talking about accumulating 16-bit by 16-bit multiplications (32-bit results), which requires a 40-bit-or-better register that can quickly identify and recover from arithmetic overflows. Most microcontrollers lack this capability and therefore spend an inordinate amount of time manipulating individual bytes when performing high-precision mathematical operations. The wide, single-cycle accumulators found in digital signal controllers easily support the high-speed, high-precision data flow needed to implement digital filters and other signal processing algorithms.
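To make the width argument concrete, here is a Python model of the Q15 multiply-accumulate loop at the heart of an FIR filter. The tap count and data values are illustrative assumptions; Python’s unbounded integers stand in for the hardware accumulator, with the 40-bit limit checked explicitly.

```python
# Fixed-point multiply-accumulate (MAC) model: 16-bit Q15 operands,
# 32-bit products, 40-bit accumulator with explicit overflow checks.

def q15(value):
    """Convert a float in [-1, 1) to a 16-bit Q15 integer, with clamping."""
    return max(-32768, min(32767, int(round(value * 32768))))

def mac(coeffs_q15, samples_q15):
    acc = 0  # models a 40-bit signed hardware accumulator
    for c, x in zip(coeffs_q15, samples_q15):
        acc += c * x  # each Q15 x Q15 product fits in 32 bits
        assert -(1 << 39) <= acc < (1 << 39), "40-bit accumulator overflow"
    return acc / float(1 << 30)  # Q30 sum back to a real-valued result

# 256 taps of near-full-scale data still fit comfortably: the guard bits
# above the 32-bit product range conservatively cover 256 accumulations.
coeffs = [q15(1.0 / 256)] * 256   # unity-gain moving-average coefficients
samples = [q15(0.99)] * 256       # near-full-scale input
result = mac(coeffs, samples)
assert abs(result - 0.99) < 0.01  # unity gain times 0.99 input
```

An 8-bit microcontroller must synthesize this same loop from multi-byte adds, shifts, and carries, which is exactly the overhead the paragraph above describes.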
Another requirement for maximum signal processing throughput is the ability to read from two separate areas of memory in a single cycle (for example, to get the filter coefficient and the associated data sample). Very few microcontrollers support this capability, known as a Harvard or modified Harvard architecture. Without it, the time to retrieve sample data and filter coefficients is doubled, effectively halving the processing throughput. Digital signal controllers all support either a Harvard or modified Harvard architecture, maximizing their ability to deliver data to the mathematical engine without delay.
Finally, most microcontrollers do not support an internal bus wide enough to move 24- to 32-bit data efficiently. The overhead required to transfer high-precision data severely limits the amount of data the microcontroller can handle and unnecessarily complicates the associated code. With their wider internal data paths, digital signal controllers eliminate this overhead from both a processing and coding perspective.
What’s Wrong with Using a Digital Signal Processor?
DSPs are certainly optimized to support fast mathematical processing, but to achieve their high performance they make extensive use of data or instruction pipelines. Once filled, these pipelines deliver all the data and the associated instruction to the processing core in a steady stream with very little overhead. Mathematical operations usually take only a single cycle, so the throughput is tremendous until something disrupts the pipeline. If the instruction or data are not contained in the associated pipeline, the pipeline has to be flushed, the new information retrieved, and the pipeline loaded again. All of this takes time, sometimes a lot of time. Interrupts or program branches are sources of pipeline disruption, and both are an integral part of many embedded applications. Digital signal controllers tend to have much shorter pipelines or none at all, significantly reducing the disruptive effects of interrupts or program branches.
A second consideration is packaging. Many embedded measurement and control applications are severely space constrained, but DSP pin counts are high and growing. Some digital signal controllers, such as the dsPIC family, come with as few as 18 pins, making board layout easy and reducing manufacturing costs.
Finally, and for many designers most important, the leap from microcontroller to DSP is a daunting one, requiring them to learn new and very different architectures as well as new coding paradigms to get the most performance from the device. Digital signal controller manufacturers, on the other hand, have intentionally maintained a microcontroller look and feel while adding full-featured DSP performance. This allows a designer who is proficient in microcontroller applications to quickly add DSP functionality without having to learn an entirely new architecture.
While we focused on temperature measurement in this article, many of the techniques we examined are directly applicable to other process parameters such as flow and pressure. In the very near future, the use of digital signal controllers will explode as more designers become aware of their capabilities and are comfortable designing with them.
dsPIC is a trademark of Microchip Technology Inc.
2. If you aren’t familiar with these concepts, consult an introductory text on DSP to verify that they are not only true, but are actually used in real-world systems. One excellent and highly readable book is Understanding Digital Signal Processing by Richard G. Lyons (ISBN 0-201-63467-8). In particular, Lyons’s discussion of periodic sampling offers a clear, concise, and complete explanation of its effects.
3. In real-time signal processing, the number of samples used by a particular algorithm affects the delay between the initial sampling and the associated processed output signal. For example, in a simple 8-tap moving average filter, the effect of a sample takes eight sample periods to move entirely through the filter. Depending on the application, this processing delay may degrade the system performance to an unacceptable level.
Creed Huddleston is Executive Vice President, Omnisys Corp., Raleigh, NC; 919-981-0123, x-13, email@example.com.