Much confusion exists about how to characterize the performance of a high-speed digitizer device. Nominal vertical resolution is routinely presented as an indicator of a digitizer’s performance, but the relevance of this parameter is dubious, since a digitizer’s true performance is characterized by the Dynamic Parameters*.

The Digital Storage Oscilloscope (DSO), a widely used digitizer-like device, is optimized for the visualization of unknown signals^{1}. The relatively low 8-bit nominal vertical resolution of the DSO (and ENOB of 6 to 7) is sufficient for signal visualization and may be offered at the highest sampling rates (~80 Gigasamples/second). As a result, DSO product specifications typically emphasize their high input bandwidth and rarely list their vertical performance parameters. By contrast, digitizers are usually optimized for the rapid acquisition and analysis of small changes in broadly known signals. While providing lower maximum sampling rates, digitizers typically offer higher vertical resolutions of 12, 14, and 16 bits.

It is important to distinguish between the absolute accuracy and relative accuracy of a digitizer device. The absolute accuracy of a digitizer describes how close its measured voltage values are to the true MKS voltage reference values. By contrast, its relative accuracy specifies the fidelity of the shape of the acquired waveform with no reference to absolute voltage values. Using on-board calibration techniques, high-speed digitizers may achieve absolute accuracies on the order of 0.1% of the full-scale input voltage range. In the majority of digitizer applications, users are concerned not with the absolute accuracy but rather with the relative accuracy, which in turn is specified by the Dynamic Parameters.

Generally speaking, the fidelity of the signal acquired by a digitizer device may be compromised by three distinct factors:

- Addition of random noise by the digitizer;
- Distortion of the signal by the digitizer itself;
- Irregularities in the time intervals at which samples are converted.
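The three corruption mechanisms above can be illustrated with a minimal simulation. All values below (sampling rate, tone frequency, noise level, distortion coefficient, jitter magnitude) are illustrative assumptions, not values for any particular device:

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 1e6          # sampling rate, Hz (illustrative)
f0 = 10e3         # sine frequency, Hz (illustrative)
n = 4096
t = np.arange(n) / fs

ideal = np.sin(2 * np.pi * f0 * t)

# 1. Addition of random noise by the digitizer
noisy = ideal + rng.normal(scale=0.01, size=n)

# 2. Distortion by the digitizer itself: a weak cubic non-linearity
#    that compresses the signal near the input range limits
distorted = ideal - 0.05 * ideal**3

# 3. Irregular sampling instants: each sample time is perturbed (jitter)
t_jittered = t + rng.normal(scale=0.2 / fs, size=n)
jittered = np.sin(2 * np.pi * f0 * t_jittered)
```

Note that the third mechanism corrupts the recorded values even though the amplitude path is perfect, because the samples are taken at slightly wrong times.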

The distinction between signal noise and signal distortion is illustrated in Figure 1. The figure shows a pure sine wave, together with a sine wave that has been compromised by the addition of broadband signal noise and by signal distortion. Distortion is shown as attenuation near the input range limits, which is the typical precursor to signal clipping.

Strictly speaking, signal artefacts cannot be classified as either noise or distortion based on a single waveform acquisition. This distinction requires a comparison of multiple acquisitions in order to determine whether an artefact is correlated or uncorrelated with the signal, which then classifies it respectively as distortion or noise. For example, while it is certainly not random, pick-up of spurious 60 Hz line frequency is considered to be noise unless the underlying signal is correlated with the line frequency, in which case it is considered to be distortion.**
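The multi-acquisition comparison described above can be sketched as a simulation: averaging many repeated acquisitions suppresses uncorrelated noise by roughly the square root of the number of acquisitions, while signal-correlated distortion survives unchanged. All values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, f0, n = 1e6, 10e3, 4096
t = np.arange(n) / fs
ideal = np.sin(2 * np.pi * f0 * t)

def acquire():
    # correlated distortion (identical every shot) plus
    # uncorrelated noise (a fresh realization every shot)
    return ideal - 0.05 * ideal**3 + rng.normal(scale=0.05, size=n)

avg = np.mean([acquire() for _ in range(1000)], axis=0)

# The residual of the average converges to the pure distortion term;
# a single-shot residual would be dominated by the noise instead.
residual = avg - ideal
```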

As a rule, low added noise and minimal imposed distortion are incompatible design goals for an amplifier stage. Consider a generalized amplifier circuit with the transfer curve shown in Figure 2, and imagine that a small amount of random noise is picked up at the output of this amplifier.

When designing a circuit using the amplifier of Figure 2, an engineer who elects to work with signal amplitudes within the red region will suffer high signal distortion, as is evident from the visible local non-linearity. Alternatively, the engineer can decide to reduce distortion by working with signal amplitudes within the green region of Figure 2. While the signal distortion will then be significantly reduced, the reduced output signal amplitude will result in noise pickup having a proportionately higher effect.
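This tradeoff can be made quantitative with a toy model: a hypothetical tanh transfer curve driven at two input amplitudes, with a fixed amount of output-referred noise. The transfer curve, tone, and noise level are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 4096, endpoint=False)
s = np.sin(2 * np.pi * 5 * t)                  # test tone, 5 cycles per record
noise = rng.normal(scale=0.005, size=t.size)   # fixed output-referred noise

def relative_error(a):
    # hypothetical transfer curve: linear near zero, compressive at the extremes
    y = np.tanh(a * s) + noise
    c = 2 * np.mean(y * s)        # fundamental amplitude via projection onto s
    return np.std(y - c * s) / c  # combined noise + distortion, relative to signal

small = relative_error(0.2)   # "green region": low distortion, noise dominates
large = relative_error(1.0)   # "red region": distortion dominates
```

In this toy model the large-amplitude error is dominated by the compressive non-linearity, while the small-amplitude error is dominated by the fixed noise, mirroring the tradeoff sketched around Figure 2.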

Imperfections in a digitizer’s analog-to-digital converter (ADC) timing present an additional source of signal corruption. The instants at which the ADC samples the signal are determined by the clocking signal, which is usually a continuous wave with a single oscillation frequency. The clocking signal is typically not recorded by the digitizer, and its properties are assumed to be perfect in subsequent data analysis. In reality, the period of the clocking signal is not strictly fixed over time. Two distinct types of clocking signal imperfections can be identified:

Phase Jitter: y(t) = sin[2π(f·t + ε(t))], where |ε(t)| ≪ 1

Frequency Drift: y(t) = sin[2π(f + ε(t))·t], where |ε(t)| ≪ f

With Phase Jitter, the clock signal edges vary about positions that are spaced exactly uniformly in time. With Frequency Drift, however, the instantaneous clocking frequency changes over time. Moreover, since the signal frequency may only be determined relative to the clocking signal, frequency drift is indistinguishable from modulation of the input signal frequency.

There are two different measurement methods for characterizing digitizer performance: one performed in the time domain and one performed in the frequency domain. Both methods involve acquisition of a high-purity sine wave signal by the digitizer under test***.

In the time-domain method specified in IEEE 1057-1994, a sine wave function is fitted to the waveform acquired by the digitizer. The error function is then normalized to obtain the Signal-to-Noise And Distortion ratio (SINAD). From the SINAD (expressed in dB), the Effective Number Of Bits (ENOB) is calculated as:

ENOB = (SINAD − 1.76 dB) / 6.02 dB

The ENOB is the single most important overall indicator of digitizer performance. The ENOB allows comparison of the given digitizer to an ideal one with the specified resolution. The ENOB depends on the signal frequency and, in principle, on all adjustable digitizer input settings, notably its input range. The main advantage of the time-domain method is that it produces ENOB values with no adjustable parameters.
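With the SINAD expressed in dB, the conversion to ENOB follows from the 6.02·N + 1.76 dB signal-to-noise ratio of an ideal N-bit quantizer:

```python
def enob(sinad_db: float) -> float:
    """Effective Number Of Bits from a SINAD value given in dB."""
    return (sinad_db - 1.76) / 6.02

# An ideal 8-bit quantizer has SINAD = 6.02 * 8 + 1.76 = 49.92 dB,
# so enob(49.92) recovers 8 bits (up to floating-point rounding).
print(enob(49.92))
```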

The primary disadvantage is that it does not allow clear separation and characterization of digitizer noise and distortion. The sine wave fitting method is iterative and it is not guaranteed to converge, especially in the case of significant frequency drift. It is possible to add harmonics to the sine wave fit in an attempt to separately characterize noise and distortion, but this approach further complicates the already non-linear sine function fit and makes convergence less probable.
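A minimal sketch of such a fit, assuming NumPy and SciPy and using an FFT peak for the initial frequency guess. This is a plain four-parameter least-squares sine fit for illustration, not the IEEE 1057 algorithm itself, and the signal parameters are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

# simulated "acquired" waveform: a sine tone plus noise (all values invented)
rng = np.random.default_rng(3)
fs, f0, n = 1e6, 9.7e3, 1024
t = np.arange(n) / fs
data = 0.8 * np.sin(2 * np.pi * f0 * t + 0.3) + rng.normal(scale=0.01, size=n)

def model(t, a, f, phi, c):
    return a * np.sin(2 * np.pi * f * t + phi) + c

# crude initial frequency guess from the FFT peak; the fit is iterative
# and may fail to converge if this guess is poor
f_guess = np.argmax(np.abs(np.fft.rfft(data))) * fs / n
p, _ = curve_fit(model, t, data, p0=[1.0, f_guess, 0.0, 0.0])

resid = data - model(t, *p)         # fit error: lumps noise and distortion together
sinad_db = 10 * np.log10(np.mean((model(t, *p) - p[3]) ** 2) / np.mean(resid ** 2))
enob = (sinad_db - 1.76) / 6.02
```

Note that the residual lumps noise and distortion into a single error term, which is exactly the limitation described above.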

The second method of characterizing digitizer performance requires analysis in the frequency domain. The acquired high-purity sine wave is subjected to Fourier analysis and a Power Spectrum is obtained (Figure 3). Usually, the waveform data are pre-multiplied by a time-domain Windowing function, which reduces spectral leakage.
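As a sketch of this step with NumPy, assuming a Hann window (one common choice of Windowing function) and illustrative record parameters:

```python
import numpy as np

fs, f0, n = 1e6, 10e3, 4096
t = np.arange(n) / fs
data = np.sin(2 * np.pi * f0 * t)       # stand-in for the acquired sine wave

window = np.hanning(n)                  # time-domain window to reduce leakage
spectrum = np.fft.rfft(data * window)
power = np.abs(spectrum) ** 2           # one-sided power spectrum
freqs = np.fft.rfftfreq(n, d=1/fs)

peak = freqs[np.argmax(power)]          # lands within one bin of f0
```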

Once the Fourier spectrum has been obtained, three different types of frequency bins are identified:

- Fundamental bins: those within a specified range of the known input sine wave frequency, illustrated as green in Figure 3.
- Harmonic Bins: those within a specified range of harmonics of the known sine wave frequency, illustrated as blue in Figure 3.
- Noise Bins: all remaining frequency bins (those that are neither Fundamental nor Harmonic), illustrated as black in Figure 3.
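The classification can be sketched as boolean masks over the spectrum bins; the bin numbers, line width, and number of harmonics below are illustrative choices:

```python
import numpy as np

n_bins = 2049               # e.g. one-sided spectrum of a 4096-sample record
f0_bin = 41                 # bin nearest the known input frequency (illustrative)
width = 3                   # bins kept on each side of a spectral line
bins = np.arange(n_bins)

# Fundamental bins: within the specified range of the input frequency
fundamental = np.abs(bins - f0_bin) <= width

# Harmonic bins: within the specified range of harmonics 2*f0 .. 5*f0
harmonic = np.zeros(n_bins, dtype=bool)
for h in range(2, 6):
    harmonic |= np.abs(bins - h * f0_bin) <= width
harmonic &= ~fundamental    # a bin belongs to at most one class

# Noise bins: everything that remains
noise = ~(fundamental | harmonic)
```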