January 2003

Digital Signal Controllers Turn
Thermocouples into Superstars

The digital signal controller combines the best features of both microcontrollers and digital signal processors in a powerful, flexible platform on which to build robust measurement and control applications.

Creed Huddleston, Omnisys Corp.

Precise temperature measurement is a requirement in many process control and/or monitoring situations. According to Eurotherm/Barber-Colman’s Dennis Hablewitz, “It is hard to think of a process or, for that matter, area of industry in which temperature is not measured” [1]. Whatever the application, designers usually want measurement accuracy at the lowest possible cost from both a financial and a computational perspective.

Thermocouples are a relatively inexpensive way to measure a wide range of temperatures, but their outputs are extremely low voltage—on the order of millivolts—and they are often installed in high-noise environments. Many designers would love to apply digital signal processing techniques to enhance the quality of their measurements, but microcontrollers frequently lack the horsepower required to implement those techniques and to perform the associated control and communication functions. Riding to the rescue is the digital signal controller, a new class of processor that combines the best features of both microcontrollers and digital signal processors (DSPs) in a powerful, flexible platform on which to build robust measurement and control applications.

Signal Processing Techniques for Maximum Accuracy
By applying advanced signal processing techniques, a designer can greatly improve both the accuracy and stability of thermocouple temperature measurements. Before delving into the details of the algorithms themselves, let’s agree that we understand the underlying concepts (see “Key Digital Signal Processing Principles”), or at least are willing to accept them on faith for this discussion [2]. Bear in mind that in our case we’re trying to extract the underlying “real” temperature signal from a measured signal that may include significant levels of noise. To ensure that we don’t confuse the two, in the following discussion all references to parameters associated with the temperature signal are subscripted with a T; those with noise, with an N; and those with the measured signal, with an M. For example, the voltage of the real temperature signal is written VT; that of the noise is written VN; and that of the measured voltage is written VM:

VM = VT + VN (1)

Figure 1 shows the basic information flow from the thermocouple to the microprocessor, which looks like nearly every other system that uses digital signal processing to extract or enhance measured data.

Figure 1. The basic information flow from the thermocouple to the microprocessor resembles that of most other systems using digital signal processing to extract or enhance measured data.

At a minimum, the thermocouple signal conditioning circuitry transforms the differential millivolt-level thermocouple signal into one that we can deal with more easily by performing cold-junction compensation and converting the differential signal to an amplified single-ended output. This output (stage 2) then passes through a low-pass anti-aliasing filter that band-limits the frequency content of the converted signal to meet the Nyquist criterion.

There are two important points to note about the anti-aliasing filter. First, the filter must be analog, not digital. This requirement stems from the spectral replication that occurs when a signal is sampled: the frequency response of a digital filter is itself replicated every FS Hz, but an analog low-pass filter has no such replication, attenuating all frequency components above its cutoff frequency. The second, related point is that the cutoff frequency for the filter should be significantly lower than the sampling frequency, FS, so that the order of the analog filter does not have to be too great to achieve the desired attenuation at frequencies near FS and above. Not only does the reduced filter order simplify the filter design, but it also improves the filter’s stability, since a lower order filter requires fewer discrete components whose values may change over time. The band-limited signal at stage 3 is then digitized by the A/D converter and output as a numerical value at stage 4 for further processing.

The techniques used for a temperature measurement application will depend in large measure on the characteristics of that specific system: the frequency content of the temperature signal and of the electronic noise in the thermocouple’s environment, the amount of signal delay that can be tolerated, and, of course, the amount of computational horsepower available. You will also need to know something about the spectral characteristics of both the temperature signal you’re trying to extract and the noise sources in the system, and to do some testing to confirm that your analysis is correct.

Improving Temperature Measurements
Using the dsPIC family of digital signal controllers from Microchip Technology Inc. as an example, let’s now look at five ways to improve temperature measurements.

Oversampling. In many processes, the physical limitations of heat transfer prevent rapid fluctuations in temperature and thus severely limit the frequency content of the associated temperature signal. One way to contend with this limited frequency content is oversampling, or sampling the measured signal more often than the frequency content strictly requires. While Nyquist tells us that a signal with a bandwidth of BM Hz must be sampled at a frequency FS of at least 2 BM Hz, real-world systems need a faster rate (often 4–5 BM Hz) due to physical limitations of the sampling hardware.

When we refer to oversampling in this discussion, however, we actually mean sampling at rates well above the Nyquist rate, on the order of 10–20 BM. By doing so, we can spread the replicated spectra further apart (remember, their separation depends on FS and BM, and BM is fixed). This minimizes leakage between adjacent spectra and allows us to implement powerful algorithms without incurring significant signal delays [3]. Digital signal controllers easily handle fast data rates that would choke slower microcontrollers.

Note that the sampling rate must be based on the measured signal bandwidth, BM, and not that of the “real” temperature signal, BT, to avoid folding noise components outside the desired bandwidth into the spectrum of interest.
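
To make the payoff concrete, here is a minimal sketch in Python (rather than dsPIC code) of the simplest benefit of oversampling: taking a burst of rapid readings and averaging them, which reduces uncorrelated noise by roughly the square root of the burst length. The read_adc() function and its signal and noise figures are hypothetical stand-ins for real hardware.

```python
import random
import statistics

def oversample_average(read_adc, n):
    """Take n rapid readings and average them: with uncorrelated
    noise, the RMS error drops by roughly the square root of n."""
    return sum(read_adc() for _ in range(n)) / n

# Hypothetical sensor: a steady 3.10 mV thermocouple voltage plus
# Gaussian noise with a 0.05 mV standard deviation.
random.seed(42)
def read_adc():
    return 3.10 + random.gauss(0.0, 0.05)

single = [read_adc() for _ in range(200)]
averaged = [oversample_average(read_adc, 16) for _ in range(200)]
# statistics.stdev(averaged) is roughly a quarter of
# statistics.stdev(single), consistent with 16x oversampling.
```

Sixteen-times oversampling halves the noise twice over, which is exactly the kind of brute-force data rate a digital signal controller absorbs without strain.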

Removing Power-Line Interference. In real-world applications, it is quite common to run thermocouple signal lines close to AC power lines. In plastic injection molding, for instance, a cartridge heater is often used to heat the mold (and indirectly the plastic) to the proper temperature. These heaters usually have four wires, two for the heater’s AC power and two for the embedded thermocouple that monitors the heater’s temperature, and these wires run in parallel nearly the length of the heater. Although the thermocouple’s signal is differential, the power-line noise is often sufficiently non-common-mode to present serious problems. Fortunately, we can use the power of digital signal processing to get around this problem.

Let’s start with the easy case first. For those processes in which BT is much less than BM, the application of a low-pass filter with a very low cutoff frequency can significantly improve system accuracy by removing much of the noise. This often applies when large thermal masses are monitored in the presence of AC power-line noise, as shown in Figure 2A.

Figure 2. (A) An example spectrum for a process in which the temperature signal's bandwidth BT is much less than BM, the bandwidth of the measured signal; this is an idealized case in which there is no broadband noise. (B) The spectrum of the unity-gain low-pass filter used to kill the power-line noise. (C) The spectrum obtained after applying the low-pass filter in (B) to the signal in (A): the power-line noise has been eliminated, leaving a clean temperature signal.

There are a couple of important things to note about this spectrum. First, although the figure is obviously an idealized case in which there is no broadband noise (i.e., noise that fills the entire spectrum of interest), it allows us to easily visualize the concept we’re exploring. Second, even though the power of the AC noise is much higher than that of the temperature signal itself, because the noise spectrum is separated from the signal of interest we can apply a unity-gain low-pass filter to kill the noise very effectively, as shown in Figure 2B.

Figure 2C shows the spectrum after application of the low-pass filter. The power-line noise has been eliminated, leaving a clean temperature signal. The implementation of the low-pass filter can be in the form of either an infinite impulse response (IIR) or a finite impulse response (FIR) filter, depending on the amount of memory and computational resources we have at our disposal. IIR filters usually consume fewer resources, but FIR filters allow the designer to tailor the filter characteristics more precisely. While an exploration of the characteristics of the different filter types is beyond the scope of this article, there are a number of excellent sources that cover the topic thoroughly [4].

If we’re clever, we’ll set the cutoff frequency of the anti-aliasing filter at a frequency that maintains unity gain over BT but rolls off soon after that. We will also use a digital low-pass filter to further attenuate any remaining noise outside the frequency region of interest. The digital signal controller gives us the computational bandwidth we need to implement high-order filters with nice, sharp transitions between the passband and the stopband to eliminate any frequency components outside the region of interest.
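
As a sketch of the digital half of that scheme, the plain-Python, direct-form FIR filter below applies a 50-tap moving average to a slow "temperature" signal corrupted by 60 Hz interference. The 1 kHz sampling rate and signal levels are assumptions for illustration; a convenient property of this particular design is that a 50-tap average at 1 kHz has a frequency-response null exactly at 60 Hz.

```python
import math

def fir_filter(coeffs, samples):
    """Direct-form FIR filter: each output sample is the dot product
    of the coefficients with the most recent len(coeffs) inputs."""
    out = []
    for i in range(len(samples)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            if i - k >= 0:
                acc += c * samples[i - k]
        out.append(acc)
    return out

fs = 1000.0            # assumed sampling rate, Hz
n = 50                 # number of taps
taps = [1.0 / n] * n   # moving average: unity gain at DC, and a
                       # null exactly at 60 Hz when fs = 1 kHz

# A slowly drifting "temperature" voltage plus 60 Hz interference.
t = [i / fs for i in range(1000)]
signal = [3.0 + 0.001 * ti for ti in t]
noisy = [s + 0.5 * math.sin(2.0 * math.pi * 60.0 * ti)
         for s, ti in zip(signal, t)]

clean = fir_filter(taps, noisy)
# Once the filter window has filled, the 60 Hz ripple is gone and
# only a tiny lag-induced offset remains.
err = max(abs(c - s) for c, s in zip(clean[n:], signal[n:]))
```

A production design would use purpose-designed coefficients rather than a bare moving average, but the inner multiply-accumulate loop is precisely the operation a digital signal controller executes in a single cycle per tap.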

Figure 3. (A) A signal spectrum in which the power-line noise falls within the temperature signal's frequency band. (B) The spectrum of a notch filter that passes everything above and below the "notch" frequency but sharply attenuates signals around that frequency; the key to minimizing the distortion of the "good" signal is sharp notches that are only as wide as necessary to eliminate the power-line interference. (C) The spectrum produced by applying the notch filter in (B) to the noisy signal in (A); in contrast to the low-bandwidth case, the notch filter causes some degradation of the temperature signal.
But what do we do when the power-line noise isn’t outside the temperature signal’s frequency band, as in Figure 3A? We have to be surgeons rather than butchers, snipping out only the bad frequency bands and leaving the “good” signal as intact as possible.

To accomplish this, we use “notch” filters that pass everything above and below the filter frequency but sharply attenuate signals around the frequency, as shown in Figure 3B. Here, the computational power of the digital signal controller really shines, allowing us to create considerably higher order filters with sharper frequency notches than we could implement in the analog domain.

The key to minimizing the distortion of the “good” signal is to have good, sharp notches that are only as wide as necessary to eliminate the power-line interference. Figure 3C shows the result of applying the notch filter of Figure 3B to the noisy signal in Figure 3A.

Unlike the low-bandwidth case, in which we can exclude the noise without affecting the temperature signal itself, a notch filter will cause some degradation of the temperature signal. Even so, we will get a much better measurement than we would with a low-order analog filter or no filtering at all.
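
A minimal notch filter sketch, again in plain Python with an assumed 1 kHz sampling rate: a second-order IIR section places zeros on the unit circle at the notch frequency and poles just inside it, so the 60 Hz tone is cancelled while nearby frequencies pass nearly untouched.

```python
import math

def notch_coeffs(f_notch, fs, r=0.95):
    """Second-order IIR notch: zeros on the unit circle at the notch
    frequency, poles just inside it at radius r.  A larger r gives a
    narrower notch at the cost of a longer settling time."""
    w0 = 2.0 * math.pi * f_notch / fs
    b = [1.0, -2.0 * math.cos(w0), 1.0]
    a = [1.0, -2.0 * r * math.cos(w0), r * r]
    g = sum(a) / sum(b)          # normalize for unity gain at DC
    return [bi * g for bi in b], a

def iir_filter(b, a, x):
    """Direct-form IIR filter (a[0] is assumed to be 1)."""
    y = []
    for i in range(len(x)):
        acc = sum(b[k] * x[i - k] for k in range(len(b)) if i - k >= 0)
        acc -= sum(a[k] * y[i - k] for k in range(1, len(a)) if i - k >= 0)
        y.append(acc)
    return y

# A 2.5 V "temperature" level corrupted by a 0.5 V, 60 Hz tone.
fs = 1000.0
b, a = notch_coeffs(60.0, fs)
x = [2.5 + 0.5 * math.sin(2.0 * math.pi * 60.0 * n / fs)
     for n in range(1500)]
y = iir_filter(b, a, x)

# After the filter settles, only the 2.5 V level remains.
residual = max(abs(yi - 2.5) for yi in y[500:])
```

The pole radius r is the width/settling trade-off the article describes: pulling the poles closer to the unit circle narrows the notch, at the price of a longer transient.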

Median Filtering for the Removal of Shot Noise. All of the noise that we have examined thus far has been present continuously, but in some cases this assumption doesn’t hold. Noise that occurs sporadically, also known as shot noise, may have a variety of electrical or mechanical sources. A look at Figure 4A shows us that we can do a pretty good job of picking out the underlying signal, even in the presence of significant shot noise.

Figure 4. (A) Data sampled in the presence of shot noise. (B) The effect of applying a length-3 median filter to the noisy signal of (A); note that sample 4 in (B) is much more reasonable.

If we simply average the signal by applying either an FIR or an IIR filter, the outputs for sample 4 and beyond will shift up significantly for as long as sample 4 is included in the average, which is not what we really want. Most people would intuitively toss out sample 4 as an aberration and say that the actual temperature is about 300°F; a technique known as median filtering does exactly that in a systematic way.

When we perform median filtering on a group of samples, we sort them in ascending or descending order and then pick the median value as our filter output. In effect, a median filter is another form of low-pass filter but tends to do a better job than an averaging filter in the presence of shot noise because it actually throws out nonrepresentative samples rather than merely attenuating their effect.

The key parameter for a median filter is its length; the longer the filter, the greater the number of noisy samples it can filter out—but at the expense of greater signal delay. For instance, if we have a median filter that is three samples in length, we can tolerate only one noisy sample. If we have two noisy samples, it is quite likely (though not absolutely guaranteed) that one of the two will be the median value and thus be output from the filter. With three noisy samples, the filter output is guaranteed to be a noisy value. In general, a median filter must be at least 2 N + 1 samples in length in order to suppress at most N noisy samples. Since the length of the filter determines the delay through the filter, this may be a significant limitation in some applications.

Another drawback is that the filter samples must be sorted for each output sample. DSPs that have been highly optimized for performing the pipelined multiply-accumulate operations that are the bread and butter of standard digital filters slow down significantly when forced to perform the compare-and-branch operations required for sorting in a median filter.

Figure 4B shows the effect of applying a length-3 median filter to the noisy signal of Figure 4A. Note that sample 4, which was nearly twice the value of the surrounding samples, is now much more reasonable. Additional filtering by a simple moving-average low-pass filter can clean the signal up further, but without the ill effects of including the obviously incorrect sample value.
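
A length-3 median filter is only a few lines of code. The sketch below (plain Python; the readings are hypothetical values in °F chosen to mimic Figure 4) shrinks the window at the ends of the data, which is one reasonable design choice; a real-time implementation would instead wait for the window to fill.

```python
def median_filter(samples, length=3):
    """Slide a window of the given length across the data, sort each
    window, and emit its middle value.  A filter of length 2N+1 can
    suppress up to N adjacent noisy samples."""
    half = length // 2
    out = []
    for i in range(len(samples)):
        # Near the ends the window simply shrinks.
        window = sorted(samples[max(0, i - half):i + half + 1])
        out.append(window[len(window) // 2])
    return out

# Hypothetical readings in °F; sample 4 is a shot-noise hit.
readings = [298, 301, 299, 300, 560, 301, 300, 299, 302]
filtered = median_filter(readings, length=3)
# The 560 °F spike is replaced by a neighboring value instead of
# being merely attenuated, as an averaging filter would do.
```

The sort inside the loop is the compare-and-branch workload that, as noted above, suits a digital signal controller better than a deeply pipelined DSP.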

Multichannel Averaging. For a few lucky designers, multichannel averaging offers a way to remove noise that is induced in a particular thermocouple path. Although most applications cannot afford the cost of space or components to implement redundant thermocouples, those that can are able to use this technique to their advantage. Simply averaging sensor readings across multiple thermocouples measuring the same (or nearly the same) location significantly diminishes the effects of non-common-mode noise. Median filtering can also be applied across sensor readings (assuming that we’re measuring the same spot) to further reduce the effects of shot noise.
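
Combining redundant channels can be sketched in a few lines. In this hypothetical example (Python, with made-up °F readings), averaging suppresses uncorrelated noise across three thermocouples, while a median across the same channels rejects one wild channel outright.

```python
import statistics

def combine_channels(channel_samples, use_median=False):
    """Combine simultaneous readings from redundant thermocouples
    monitoring the same spot.  Averaging suppresses uncorrelated,
    non-common-mode noise; a median also rejects one wild channel."""
    if use_median:
        return statistics.median(channel_samples)
    return statistics.fmean(channel_samples)

# Three hypothetical thermocouples; channel 2 takes a shot-noise hit.
reading = [300.2, 299.8, 452.0]
avg = combine_channels(reading)                      # skewed upward
robust = combine_channels(reading, use_median=True)  # 300.2
```

With three or more channels, taking the median first and then averaging successive median outputs combines both benefits.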

What’s Wrong with Using a Microcontroller?
So why wouldn’t you just go out and implement these ideas on your tried-and-true 8- or 16-bit microcontroller? There are three basic constraints on a microcontroller’s ability to use these algorithms: arithmetic hardware functions, memory-addressing modes, and data bus width.

The ability to perform single-cycle high-precision mathematical operations is essential to implementing digital signal processing algorithms. Not only do the mathematical operations themselves need to execute rapidly, but the accumulator (register) that holds the results must be able to store the results from many operations. Usually, we’re talking about accumulating 16-bit by 16-bit multiplications (32-bit results), which requires a 40-bit-or-better register that can quickly identify and recover from arithmetic overflows. Microcontrollers can’t do this, and therefore spend an inordinate amount of time manipulating individual bytes when performing high-precision mathematical operations. The wide, single-cycle accumulators found in digital signal controllers easily support high-speed, high-precision data flow for implementing digital filters and other signal processing algorithms.
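
The arithmetic being described can be simulated in a few lines. The sketch below models a 40-bit saturating accumulator fed with worst-case 16-bit Q15 products; the specific widths follow the figures above, and saturation behavior varies by device.

```python
def mac_q15(coeffs, samples):
    """Multiply-accumulate 16-bit Q15 values in a simulated 40-bit
    accumulator, saturating rather than wrapping on overflow, as a
    digital signal controller's hardware accumulator does."""
    ACC_MAX = (1 << 39) - 1
    ACC_MIN = -(1 << 39)
    acc = 0
    for c, x in zip(coeffs, samples):
        acc += c * x                              # 32-bit product
        acc = max(ACC_MIN, min(ACC_MAX, acc))     # saturate
    return acc

# Worst case: 256 products of -32768 * -32768 = 2^30 each.
# The 40-bit accumulator holds the full 2^38 sum without saturating;
# a plain 32-bit accumulator would overflow after only two products.
total = mac_q15([-32768] * 256, [-32768] * 256)
```

The point of the extra eight "guard" bits is precisely this headroom: 256 worst-case products accumulate without any per-sample overflow checks in software.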

Another requirement for maximum signal processing throughput is the ability to read from two separate areas of memory in a single cycle (for example, to get the filter coefficient and the associated data sample). Very few microcontrollers support this capability, known as a Harvard or modified Harvard architecture. Without it, the time to retrieve sample data and filter coefficients is doubled, effectively halving the processing throughput. Digital signal controllers all support either a Harvard or modified Harvard architecture, maximizing their ability to deliver data to the mathematical engine without delay.

Finally, most microcontrollers do not support an internal bus wide enough to move 24- to 32-bit data efficiently. The overhead required to transfer high-precision data severely limits the amount of data the microcontroller can handle and unnecessarily complicates the associated code. With their wider internal data paths, digital signal controllers eliminate this overhead from both a processing and coding perspective.

What’s Wrong with Using a Digital Signal Processor?
By now you have probably agreed that a standard microcontroller just does not have the horsepower to use these algorithms, but what about a DSP? These devices are tailored for high-speed throughput, so they should be perfect, right? Well, not so fast.

DSPs are certainly optimized to support fast mathematical processing, but to achieve their high performance they make extensive use of data or instruction pipelines. Once filled, these pipelines deliver all the data and the associated instruction to the processing core in a steady stream with very little overhead. Mathematical operations usually take only a single cycle, so the throughput is tremendous until something disrupts the pipeline. If the instruction or data are not contained in the associated pipeline, the pipeline has to be flushed, the new information retrieved, and the pipeline loaded again. All of this takes time, sometimes a lot of time. Interrupts or program branches are sources of pipeline disruption, and both are an integral part of many embedded applications. Digital signal controllers tend to have much shorter pipelines or none at all, significantly reducing the disruptive effects of interrupts or program branches.

A second consideration is packaging. Many embedded measurement and control applications are severely space constrained, but DSP pin counts are high and growing. Some digital signal controllers, such as the dsPIC family, come with as few as 18 pins, making board layout easy and reducing manufacturing costs.

Finally, and for many designers most important, the leap from microcontroller to DSP is fearsome, requiring them to learn new and very different architectures as well as new coding paradigms to get the most performance from the device. Digital signal controller manufacturers, on the other hand, have intentionally maintained a microcontroller look and feel while adding full-featured DSP performance. This allows a designer who is proficient in microcontroller applications to quickly add DSP functionality without having to learn an entirely new architecture.

Digital signal controllers are tremendously powerful devices that easily bridge the microcontroller and DSP worlds. By adding DSP functionality in familiar architectures, they allow designers to quickly implement new algorithms that produce significantly better results in the measurement and control of a variety of process parameters. Because digital signal controllers reduce the learning curve associated with using the new device, they also reduce the risk perceived by designers and their companies in adopting the technology.

Company Information
Microchip Technology Inc., Chandler, AZ

While we focused on temperature measurement in this article, many of the techniques we examined are directly applicable to other process parameters such as flow and pressure. In the very near future, the use of digital signal controllers will explode as more designers become aware of their capabilities and are comfortable designing with them.

dsPIC is a trademark of Microchip Technology Inc.

1. Dick Johnson. Feb. 2002. “Old and in the way?” Control Engineering, Vol. 46, No. 2:46.

2. If you aren’t familiar with these concepts, consult an introductory text on DSP to verify that they are not only true, but are actually used in real-world systems. One excellent and highly readable book is Understanding Digital Signal Processing by Richard G. Lyons (ISBN 0-201-63467-8). In particular, Lyons’s discussion of periodic sampling offers a clear, concise, and complete explanation of its effects.

3. In real-time signal processing, the number of samples used by a particular algorithm affects the delay between the initial sampling and the associated processed output signal. For example, in a simple 8-tap moving average filter, the effect of a sample takes eight sample periods to move entirely through the filter. Depending on the application, this processing delay may degrade the system performance to an unacceptable level.

4. Lyons [2] devotes an entire chapter to each of the two major filter types (IIR and FIR).

Key Digital Signal Processing Principles
This article assumes that the reader understands a few key digital signal processing principles, summarized below. If you don’t understand them, or not as well as you’d like, refer to the endnotes for a fuller explanation.

Band-Limited Signal. A band-limited signal is simply a signal with no significant frequency content outside a specific range that is usually (but not always) specified about 0 Hz. This range is the signal’s bandwidth, which we will call B Hz. For reasons that will shortly become apparent, the signals we process must be band limited. Figure 5A shows a band-limited signal’s frequency spectrum.

Figure 5. (A) A band-limited signal's spectrum has no significant frequency content outside a specified range, B. (B) Whenever a signal is sampled at a frequency of FS Hz, the signal spectrum is replicated every FS Hz in the frequency domain. (C) The spectrum that results when the Nyquist rate is violated: the replicated spectra overlap and add, distorting the measured signal.

Spectral Replication. Whenever a signal is sampled at a frequency of FS Hz, the signal spectrum is replicated every FS Hz in the frequency domain, and this replication occurs ad infinitum in both the positive and negative frequency directions (see Figure 5B).

Nyquist Criterion. Because of spectral replication, the sampling frequency FS must be at least 2 B Hz to prevent the spectra centered at NFS from overlapping. For sampled systems, the separation between replicated spectra depends upon both FS and B. Figure 5B shows a sampling rate that meets the Nyquist criterion for the given bandwidth. Figure 5C illustrates what happens when the Nyquist rate is violated. When the replicated spectra overlap, the spectra add, distorting the measured signal.
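
Aliasing is easy to demonstrate numerically. In the sketch below (Python, with an assumed 100 Hz sampling rate), a 60 Hz tone is sampled below its 120 Hz Nyquist rate and folds down to |FS − 60| = 40 Hz: the two sampled sequences come out identical, which is exactly why the analog anti-aliasing filter must act before the sampler.

```python
import math

fs = 100.0               # sampling rate, Hz: below the 120 Hz
f_hi, f_lo = 60.0, 40.0  # Nyquist rate for a 60 Hz tone, which
                         # therefore folds down to 40 Hz

hi = [math.cos(2.0 * math.pi * f_hi * n / fs) for n in range(200)]
lo = [math.cos(2.0 * math.pi * f_lo * n / fs) for n in range(200)]

# Once sampled, the two tones are numerically indistinguishable.
worst = max(abs(a - b) for a, b in zip(hi, lo))
```

No amount of digital processing can separate the two sequences after the fact; the information was lost at the sampling instant.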

Creed Huddleston is Executive Vice President, Omnisys Corp., Raleigh, NC; 919-981-0123, x-13, creedh@omnisyscorp.com.

For further reading on this and related topics, see these Sensors articles.

"Field Installation of Thermocouple and RTD Temperature Sensor Assemblies," August 2002
"Boost Your Sampling Rate with Time-Interleaved Data Converters," February 2001
