The usual method of bringing analog inputs into a microprocessor is to use an analog-to-digital converter (ADC). Here are some tips for selecting such a part and calibrating it to fit your needs.
An analog-to-digital converter (ADC) accepts an analog input (a voltage or a current) on its input pin and converts it to a digital value that can be read by a microprocessor.
The picture above shows a simple voltage-input ADC. This hypothetical part has two inputs: a reference and the signal to be measured. It has one output, an 8-bit digital word that represents the input value.
The reference voltage is the maximum value that the ADC can convert. Our example 8-bit ADC can convert values from 0V to the reference voltage. This voltage range is divided into 256 values, or steps. The size of each step is given by:

step size = Vref/256

where Vref is the reference voltage. The step size of the converter defines the converter's resolution. For a 5V reference, the step size is:
5V/256 = 0.0195V or 19.5mV
Our 8-bit converter represents the analog input as a digital word. The most significant bit of this word indicates whether the input voltage is greater than half the reference (2.5V, with a 5V reference). Each succeeding bit represents half the range of the previous bit.
Table 1: Example conversion on an 8-bit ADC (5V reference)

| Bit | Bit 7 | Bit 6 | Bit 5 | Bit 4 | Bit 3 | Bit 2 | Bit 1 | Bit 0 |
|---|---|---|---|---|---|---|---|---|
| Voltage if set | 2.5V | 1.25V | 0.625V | 0.3125V | 0.156V | 0.078V | 0.039V | 0.0195V |
| Example: 0010 1100 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 |
Table 1 illustrates this point. Adding the voltages corresponding to each set bit in 0010 1100, we get:
0.625 + 0.156 + 0.078 = 0.859 volts
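The same sum can be checked numerically: the 8-bit word is simply a count of 19.5mV steps. A minimal sketch in Python:

```python
# Reconstructing the Table 1 example: converting an 8-bit code back to volts
vref = 5.0
code = 0b00101100            # the example word from Table 1 (decimal 44)
voltage = code * vref / 256  # each count is worth one 19.5mV step
print(round(voltage, 3))     # 0.859
```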
The resolution of an ADC is determined by the reference input and by the word width. The resolution defines the smallest voltage change that can be measured by the ADC. As mentioned earlier, the resolution is the same as the smallest step size, and can be calculated by dividing the reference voltage by the number of possible conversion values.
For the example we've been using so far, the resolution is 19.5mV. This means that any input voltage below 19.5mV will result in an output of 0. Input voltages between 19.5mV and 39mV will result in an output of 1. Between 39mV and 58.6mV, the output will be 2.
Resolution can be improved by reducing the reference input. Changing that from 5V to 2.5V gives a resolution of 2.5/256, or 9.7mV. However, the maximum voltage that can be measured is now 2.5V instead of 5V.
The only way to increase resolution without reducing the range is to use an ADC with more bits. A 10-bit ADC has 2^10, or 1,024, possible output codes. So the resolution is 5V/1,024, or 4.88mV; a 12-bit ADC has a 1.22mV resolution for this same reference.
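The resolution arithmetic above can be sketched in a few lines of Python:

```python
def adc_resolution(vref, bits):
    """Smallest voltage step an ideal ADC can resolve: Vref / 2^bits."""
    return vref / (2 ** bits)

# 5V reference at various word widths (values match the text)
print(round(adc_resolution(5.0, 8) * 1000, 1))    # 19.5 mV
print(round(adc_resolution(5.0, 10) * 1000, 2))   # 4.88 mV
print(round(adc_resolution(5.0, 12) * 1000, 2))   # 1.22 mV
```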
Types of ADCs
ADCs come in various speeds, use different interfaces, and provide differing degrees of accuracy. The most common types of ADCs are flash, successive approximation, and sigma-delta.
The flash ADC is the fastest type available. A flash ADC uses one comparator per quantization level, driven by a string of resistors, so an n-bit flash ADC needs 2^n - 1 comparators: 15 for a 4-bit ADC, 255 for an 8-bit ADC. All of the comparator outputs connect to a block of logic that determines the output based on which comparators are low and which are high.
A block diagram of a flash ADC is shown in the picture below.
The conversion speed of the flash ADC is the sum of the comparator delays and the logic delay (the logic delay is usually negligible). Flash ADCs are very fast, but consume enormous amounts of IC real estate. Also, because of the number of comparators required, they tend to be power hogs, drawing significant current. A 10-bit flash ADC may consume half an amp.
A variation on the flash converter is the half-flash, which uses an internal digital-to-analog converter (DAC) and subtraction to reduce the number of internal comparators. Half-flash converters are slower than true flash converters but faster than other types of ADCs. We'll lump them into the flash converter category.
Successive approximation converter
A successive approximation converter uses a comparator and counting logic to perform a conversion. A block diagram is shown in the picture below:
The first step in the conversion is to see if the input is greater than half the reference voltage. If it is, the most significant bit (MSB) of the output is set. This value is then subtracted from the input, and the result is checked for one quarter of the reference voltage. This process continues until all the output bits have been set or reset.
The algorithm is easier to understand when displayed as a flowchart:
A successive approximation ADC takes as many clock cycles as there are output bits to perform a conversion.
Plotted over time, the operation of a successive-approximation ADC looks like this:
This type of ADC is commonly found in microcontrollers.
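The bit-by-bit search described above can be sketched in Python (an idealized model, not any particular part):

```python
def sar_convert(vin, vref, bits=8):
    """Successive approximation: one comparison per output bit,
    working from the MSB down, like the hardware loop described above."""
    result = 0
    for bit in range(bits - 1, -1, -1):
        trial = result | (1 << bit)            # tentatively set this bit
        if vin >= trial * vref / (2 ** bits):  # comparator: input vs. DAC
            result = trial                     # keep the bit set
    return result

print(sar_convert(2.5, 5.0))   # 128: only the MSB (input is half of Vref)
print(sar_convert(3.2, 5.0))   # 163
```

Note that the loop runs once per output bit, which is why a successive approximation conversion takes as many clock cycles as there are bits.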
A sigma-delta ADC uses a 1-bit DAC, filtering, and oversampling to achieve very accurate conversions. The conversion accuracy is controlled by the input reference and the input clock rate.
The primary advantage of a sigma-delta converter is high resolution. The flash and successive approximation ADCs use a resistor ladder or resistor string. The problem with these is that the accuracy of the resistors directly affects the accuracy of the conversion result. Although modern ADCs use very precise, laser-trimmed resistor networks, some inaccuracies still persist in the resistor ladders. The sigma-delta converter does not have a resistor ladder but instead takes a number of samples to converge on a result.
The primary disadvantage of the sigma-delta converter is speed. Because the converter works by oversampling the input, the conversion takes many clock cycles. For a given clock rate, the sigma-delta converter is slower than other converter types. Or, to put it another way, for a given conversion rate, the sigma-delta converter requires a faster clock.
Another disadvantage of the sigma-delta converter is the complexity of the digital filter that converts the duty cycle information to a digital output word. Sigma-delta converters have become more widely available as it has become practical to add a digital filter or DSP to the IC die. Here is a bit more detail on how a sigma-delta ADC works.
In the schematic above, the input voltage drives an integrator, whose output is compared with ground by a comparator. The comparator output is captured by a D latch, which controls a switch that turns the reference voltage on and off; together, the latch and switch form a 1-bit DAC. As the input voltage increases or decreases, the comparator switches the reference voltage in and out of the subtraction from the input signal, aiming to keep the output of the integrator at zero.
The counter C1 keeps track of clock periods, while counter C2 counts the number of pulses during which the switch is closed. Suppose counter C1 counts up to 1000. By the time it reaches the final count, the number in counter C2 is proportional to the average level of the input signal over those 1000 clock periods.
Now the name delta-sigma is making a little more sense: delta (the difference) refers to delta modulation, the principle of coding not the whole input value, but only the difference between the current signal sample and the feedback signal, corresponding to the previous sample. Obviously, fewer bits are required to code only the difference in the amplitudes.
Sigma (the sum) is because the sum of "deltas" is counted during the measured interval. In other words, the input to the quantizer is the integral of the differences between the input and the output signals.
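A first-order modulator like the one described can be simulated in a few lines (a sketch of the principle, not a model of any particular part):

```python
def sigma_delta_count(vin, vref, cycles):
    """First-order sigma-delta: the fraction of cycles in which the
    1-bit DAC feedback is switched in converges on vin/vref."""
    integrator = 0.0
    ones = 0                                 # this plays the role of counter C2
    for _ in range(cycles):                  # cycles are counted by C1
        bit = 1 if integrator >= 0 else 0    # comparator + D latch
        ones += bit
        integrator += vin - bit * vref       # delta: input minus feedback
    return ones

# 1.25V input, 5V reference: a quarter of the 1000 pulses are high
print(sigma_delta_count(1.25, 5.0, 1000))    # 250
```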
If you do not understand all the details of their operation, do not worry. All you need to remember is that sigma-delta converters are slower but offer higher resolution.
The picture below shows the range of resolutions available for sigma-delta, successive approximation, and flash converters.
The maximum conversion speed for each type is shown as well. As you can see, the speed of available sigma-delta ADCs reaches into the range of the successive approximation ADCs, but is not as fast as even the slowest flash ADCs. What the tables do not show is the tradeoff between speed and accuracy. For instance, while you can get successive approximation ADCs that range from 8 to 16 bits, you won't find the 16-bit version to be the fastest in a given family of parts. The fastest flash ADC won't be the 12-bit part, it will be a 6- or 8-bit part.
These charts are a snapshot of the current state of the technology. As CMOS processes have improved, successive approximation conversion times have moved from tens of microseconds to microseconds. Not all technology improvements affect all types of converters; CMOS process improvements speed up all families of converters, but the ability to put increasingly sophisticated DSP functionality on the ADC chip doesn't improve successive approximation converters. DSP functionality does improve sigma-delta types because it enables better, faster, and more complex filters to be added to the part.
Sample and hold
ADC operation is straightforward when a DC signal is being converted. But if the input signal varies by more than one least significant bit (LSB) during the conversion time, the ADC will produce an incorrect (or at least inaccurate) result. One way to reduce these errors is to place a low-pass filter ahead of the ADC. The filter parameters are selected to ensure that the ADC input does not change by more than one LSB within a conversion cycle.
Another way to handle changing inputs is to add a sample-and-hold (S/H) circuit ahead of the ADC. The picture below shows how a sample-and-hold circuit works. The S/H circuit has an analog (solid-state) switch with a control input.
When the switch is closed, the input signal is connected to the hold capacitor and the output of the buffer follows the input. When the switch is open, the input is disconnected from the capacitor.
The figure shows the waveform for S/H operation. A slowly rising signal is connected to the S/H input. While the control signal is low (sample), the output follows the input. When the control signal goes high (hold), disconnecting the hold capacitor from the input, the output stays at the value the input had when the S/H switched to hold mode. When the switch closes again, the capacitor charges quickly and the output again follows the input. Typically, the S/H will be switched to hold mode just before the ADC conversion starts, and switched back to sample mode after the conversion is complete.
In a perfect world, the hold capacitor would have no leakage and the buffer amplifier would have infinite input impedance, so the output would remain stable forever. In the real world, though, the hold capacitor will leak and the buffer amplifier input impedance is finite, so the output level will slowly drift down toward ground as the capacitor discharges.
The ability of an S/H circuit to maintain the output in hold mode is dependent on the quality of the hold capacitor, the characteristics of the buffer amplifier (primarily input impedance), and the quality of the sample/hold switch (real electronic switches have some leakage when open). The amount of drift exhibited by the output when in hold mode is called the droop rate, and is specified in millivolts per second, millivolts per microsecond, or microvolts per microsecond.
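As a rough sanity check, the droop during a conversion can be estimated from the total leakage current and the hold capacitance (the component values below are hypothetical, chosen only for illustration):

```python
def droop_volts(leakage_amps, hold_cap_farads, hold_time_s):
    """dV/dt = I/C: voltage the hold capacitor loses while in hold mode."""
    return leakage_amps / hold_cap_farads * hold_time_s

# Hypothetical values: 1nA total leakage, 100pF hold capacitor,
# and a 10us conversion time on an 8-bit, 5V-reference ADC
droop = droop_volts(1e-9, 100e-12, 10e-6)
one_lsb = 5.0 / 256
print(droop < one_lsb)    # True: 0.1mV of droop, well under one 19.5mV LSB
```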
A real S/H circuit also has finite input impedance, because the electronic switch isn't perfect. This means that in sample mode, the hold capacitor is charged through some resistance. This limits the speed with which the S/H can acquire an input. The time that the S/H must remain in sample mode in order to acquire a full-scale input is called the acquisition time, and is specified in nanoseconds or microseconds.
Since some impedance is in series with the hold capacitor when sampling, the effect is the same as a low-pass RC filter. This limits the maximum frequency that the S/H can acquire. This is called the full power bandwidth, and is specified in kilohertz or megahertz. As mentioned, the electronic switch is imperfect and some of the input signal appears at the output, even in hold mode. This is called feedthrough, and is typically specified in decibels. The output offset is the voltage difference between the input and the output. S/H circuit datasheets typically show a hold mode offset and sample mode offset in millivolts.
An ADC system that uses a S/H may have to accommodate the hardware quirks. In some systems, the software directly controls the S/H control input with a port or register output bit. Typically, the S/H is placed into sample mode, and the software must ensure that the acquisition time requirement is met. In some systems, this can be accomplished simply by leaving the S/H in sample mode until a conversion is needed.
After the S/H is placed into hold mode, another bit (or a write to an address or some other operation) starts the ADC. After the conversion is complete, the software reads the result. However, a problem may occur if any one interrupt (or a worst-case stackup of interrupts) causes the output of the S/H circuit to droop by more than one LSB. If this could happen, the software may need to disable interrupts before switching the S/H to hold mode and re-enable them after starting the conversion. This ensures that the ADC will complete the conversion before the S/H droop occurs.
Software must also accommodate the charge time of the S/H. When the electronic switch closes and connects the input signal to the S/H capacitor, it takes a finite amount of time for the capacitor to charge because the switch and whatever source is driving the input both have nonzero impedances. If the sum of these impedances is large enough, the software may need to add a delay so the hold capacitor has time to charge to within one LSB of the final value before starting the conversion.
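The required delay can be estimated from the RC time constant: to settle to within one LSB, the capacitor must charge for t = RC*ln(2^N). A sketch with hypothetical impedance and capacitance values:

```python
import math

def settle_time_s(r_ohms, c_farads, bits):
    """Time for an RC-charged hold capacitor to settle to within
    one LSB of its final value: t = R*C*ln(2^bits)."""
    return r_ohms * c_farads * math.log(2 ** bits)

# Hypothetical: 10k total source + switch impedance, 20pF capacitor, 10 bits
t = settle_time_s(10e3, 20e-12, 10)
print(round(t * 1e6, 2))    # 1.39 (microseconds)
```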
Internal microcontroller ADCs
Many microcontrollers contain on-chip ADCs. Typical devices include the Microchip 16F family and the Atmel AVR. Most microcontroller ADCs are successive approximation because this gives the best tradeoff between speed and the cost of real estate on the microcontroller die.
Most microcontrollers contain a 10-bit successive approximation ADC with an analog input multiplexer. The microcontrollers in the 16F family, for example, have from four to eight channels. Internal registers control which channel is selected, the start of conversion, and so on. Once an input is selected, a settling time must elapse to allow the S/H capacitor to charge before the A/D conversion can start. The software must ensure that this delay takes place.
Some microcontrollers, such as the Microchip family, allow you to use one input pin as a reference voltage. This is normally tied to some kind of precision reference. The value read from an 8-bit A/D converter after a conversion is:
(Vin/Vref) x 256
Some microcontrollers use the supply voltage as a reference. In a 5V system, this means that Vref is always 5V. So measuring a 3.2V signal with an 8-bit ADC would produce the following result:
(Vin x 256)/Vref = (3.2V x 256)/5V = 163
However, the result is dependent on the value of the 5V supply. If the supply voltage is high by 1%, it has a value of 5.05V. Now the value of the A/D conversion will be:
(3.2V x 256)/5.05V = 162
So a 1% change in the supply voltage causes the conversion result to change by one count. Typical power supplies can vary by 2% or 3%, so power supply variations can have a significant effect on the results. Power supply outputs can frequently vary with loading, temperature, AC input variations, and from one supply to the next.
This brings up an issue that affects all ADC designs: the accuracy of the reference. A typical ADC reference might be nominally 2.5V, but can vary between 2.47V and 2.53V (these values are from the data sheet for a real part). If this is a 10-bit ADC, converting a 2V input at the extremes of the reference ranges gives the following results:
At Vref = 2.47V,
Result = (2V x 1,024)/2.47 = 829
At Vref = 2.53V,
Result = (2V x 1,024)/2.53 = 809
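Both worked examples, and the earlier supply-variation case, follow the same ideal transfer function, sketched here:

```python
def adc_code(vin, vref, bits):
    """Ideal ADC transfer function: truncate vin/vref to an integer code."""
    return int(vin / vref * (2 ** bits))

# Supply used as the reference: a 1%-high 5V rail shifts the 8-bit result
print(adc_code(3.2, 5.0, 8), adc_code(3.2, 5.05, 8))      # 163 162
# 10-bit part converting 2V at the extremes of a 2.5V reference's tolerance
print(adc_code(2.0, 2.47, 10), adc_code(2.0, 2.53, 10))   # 829 809
```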
The variation in the reference voltage from part to part can result in an output variation of 20 counts. The picture below shows the effect a reference variation has on the ADC result.
Although the percentage error stays the same throughout the range, the absolute error is of course greater for larger ADC values. If high ADC accuracy is important, consider using an external voltage reference such as the MCP1541-I/TO (TO-92), a 4.096V reference. Using this reference with a 10-bit ADC gives exactly 4mV per ADC count, which makes calculations easier. For examples that use the ADC integrated into a microcontroller, visit the AVR example page.