**This post covers the topic “AD and DA conversion explained”. Digital-to-analogue (DAC) conversion turns binary words into a voltage or current signal, whose discrete levels correspond to the 0s and 1s of the binary code. The step size is determined by the number of bits to be converted to the analogue signal.**

Here ${b}_{i}$ is the value of bit $i$, and the binary-weighted input resistors are ${R}_{i}=\frac{{R}_{0}}{{2}^{i}}$.

So the resulting output voltage is ${v}_{a}=-\frac{{R}_{F}}{{R}_{0}}\left({2}^{n-1}{b}_{n-1}+\cdots +{2}^{0}{b}_{0}\right){V}_{in}$. The output characteristic *${v}_{a}$* is step-shaped because of the binary nature of the input signal. DAC converters are usually fabricated as monolithic ICs to avoid component-matching problems. The main characteristics of an IC DAC are:

• resolution

• full-scale accuracy

• output range

• output settling time

• power supply requirements

• power dissipation
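As a quick numerical check, the binary-weighted output relation for ${v}_{a}$ above can be sketched in Python (the resistor values and $V_{in}$ below are illustrative assumptions, not values taken from the figures):

```python
def dac_output(bits, v_in=1.0, r_f=1.0, r_0=1.0):
    """Ideal binary-weighted DAC: v_a = -(R_F/R_0) * sum(2^i * b_i) * V_in.

    bits is given LSB-first, so bits[0] is b_0 and bits[-1] is b_{n-1}.
    Resistor values and V_in are illustrative placeholders.
    """
    weighted_sum = sum((2 ** i) * b for i, b in enumerate(bits))
    return -(r_f / r_0) * weighted_sum * v_in

# The 4-bit word b3 b2 b1 b0 = 1011 is [1, 1, 0, 1] LSB-first:
print(dac_output([1, 1, 0, 1]))  # -> -11.0, i.e. -(1/1) * 11 * 1 V
```

Each set bit contributes its binary weight ${2}^{i}$ to the sum, which is why adjacent input codes differ by exactly one step at the output.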

An *analogue-to-digital converter (ADC)* is a device that converts a signal from analogue to digital form, and it is also manufactured as a monolithic IC. To perform the conversion, the signal must be *quantised*, i.e. represented in binary form. The quantisation process subdivides the analogue signal range into ${2}^{n}-1$ equal intervals, where $n$ is the number of bits in the digital signal. The quantisation error is inherent in the conversion process and is unavoidable. The smaller the quantisation intervals, the more accurate the quantisation process: the larger the number of bits, the closer the digital signal is to its original analogue form. Let’s consider the different types of converters.
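The quantisation step and its error can be sketched minimally as follows, assuming a unipolar 0 to $V_{max}$ input range (the range and bit count here are illustrative):

```python
def quantise(v, v_max, n_bits):
    """Map an analogue voltage v in [0, v_max] to the nearest quantisation level.

    The range is divided into 2^n - 1 equal intervals, as in the text.
    Returns the integer code and the voltage that code represents.
    """
    intervals = 2 ** n_bits - 1            # number of quantisation intervals
    code = round(v / v_max * intervals)    # nearest level, 0 .. intervals
    return code, code * v_max / intervals  # (code, reconstructed voltage)

code, v_q = quantise(3.2, 10.0, 3)         # 3-bit converter, 0-10 V range
print(code, v_q, abs(3.2 - v_q))           # error is at most half a step
```

Rerunning with more bits shrinks the step $\frac{{V}_{max}}{{2}^{n}-1}$, which is exactly the “more bits, closer to the analogue original” point above.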

### AD and DA conversion explained: tracking ADC

A *tracking ADC* continuously compares its digital output with the analogue input: a comparator determines whether the signal reconstructed from the output code is smaller or larger than the input analogue signal. A tracking ADC is depicted in Figure 2. The rate at which this ADC is incremented is set by an external clock.
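The tracking behaviour can be sketched as a counter stepped once per clock tick against an internal DAC; the input range and resolution below are illustrative assumptions:

```python
def tracking_adc_step(code, v_in, v_max=10.0, n_bits=8):
    """One clock tick of a tracking ADC (sketch, assumed 8-bit, 0-10 V).

    Compare the internal DAC output for the current code with v_in,
    then step the counter up or down by one LSB.
    """
    v_dac = code * v_max / (2 ** n_bits - 1)   # reconstructed analogue value
    if v_dac < v_in:
        return min(code + 1, 2 ** n_bits - 1)  # count up, clamp at full scale
    return max(code - 1, 0)                    # count down, clamp at zero

code = 0
for _ in range(300):                           # clock the converter
    code = tracking_adc_step(code, v_in=6.3)
print(code)                                    # settles near 6.3/10 * 255
```

Once locked on, the code toggles between the two nearest levels on every clock, which is the characteristic idling behaviour of this converter type.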

### AD and DA conversion explained: integrating ADC

An *integrating ADC* is based on the principle of charging a capacitor. The integrating ADC is depicted in Figure 3. If the capacitor charges and discharges linearly, the time needed to discharge the capacitor is linearly related to the amplitude of the charging voltage.

The capacitor is connected to a comparator and a reference voltage. The reference voltage ensures that the charging-voltage function is linear. The comparator detects whether the capacitor is charged or discharged, and a counter measures the capacitor’s discharge time.
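Assuming the common dual-slope variant of this principle (integrate the input for a fixed time, then time the linear discharge against the reference), the count relation can be sketched as:

```python
def dual_slope_counts(v_in, v_ref=10.0, t_charge=1e-3, f_clk=1e6):
    """Dual-slope integrating ADC sketch (illustrative parameter values).

    The capacitor integrates v_in for a fixed time t_charge, then
    discharges at a rate set by v_ref; the discharge time
    t_discharge = t_charge * v_in / v_ref is measured in clock counts.
    """
    t_discharge = t_charge * v_in / v_ref
    return round(t_discharge * f_clk)

print(dual_slope_counts(2.5))  # 2.5 V on a 10 V reference -> 250 counts
```

Because the same capacitor and clock are used for both slopes, their tolerances cancel out of the ratio, which is why this architecture is valued for accuracy rather than speed.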

### The flash ADC

In a *flash ADC*, a parallel string of resistors sets a ladder of reference levels, which is what makes the conversion fast. If the input voltage is above a comparator’s reference level, that comparator is on; otherwise it is off. The flash ADC is depicted in Figure 4.
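The comparator bank can be sketched as below, assuming an evenly divided resistor ladder (the reference voltage and bit count are illustrative):

```python
def flash_adc(v_in, v_ref=8.0, n_bits=3):
    """Flash ADC sketch: 2^n - 1 comparators against a resistor ladder.

    Each comparator is 'on' if v_in exceeds its tap voltage; the number
    of comparators that are on (the thermometer code) is the output code.
    """
    n_comparators = 2 ** n_bits - 1
    thresholds = [(i + 1) * v_ref / (2 ** n_bits) for i in range(n_comparators)]
    comparator_states = [v_in > t for t in thresholds]  # on/off pattern
    return sum(comparator_states)                       # thermometer -> binary

print(flash_adc(3.6))  # thresholds at 1, 2, ..., 7 V -> code 3
```

All comparators switch simultaneously, so the conversion takes a single step; the cost is that the comparator count doubles with every added bit.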

All ADCs are characterised by their A/D conversion time. Stable, well-performing converters use a sample-and-hold amplifier in their scheme. An example of a sample-and-hold amplifier is depicted in Figure 5.

The purpose of the MOSFET switch here is to sample the analogue voltage waveform; while the switch is on, the input voltage charges the hold capacitor. When the MOSFET is in the off state, the capacitor is already charged and simply holds the sampled analogue voltage.

So at the output we have ${v}_{S-H}$, the sampled-and-held voltage. This process fixes the analogue voltage at a given instant. The sampling time should be at least as long as the conversion time. The sampling interval is the time between two successive samples. Figure 6 depicts the analogue voltage waveform and the sampled waveform. To choose a suitable sampling rate, it is useful to apply *the Nyquist criterion*.
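A quick illustration of the criterion (sample at more than twice the signal frequency): sampling a 1 kHz sine at its own frequency hits the same phase on every sample, so the waveform is lost entirely. The frequencies below are illustrative.

```python
import math

def sample_sine(f_signal, f_sample, n_samples):
    """Sampled values of a unit-amplitude sine, as a sample-and-hold
    stage would capture them at rate f_sample."""
    return [math.sin(2 * math.pi * f_signal * k / f_sample)
            for k in range(n_samples)]

resolved = sample_sine(1_000, 8_000, 8)  # 8 kS/s > 2 * 1 kHz: shape preserved
aliased = sample_sine(1_000, 1_000, 8)   # rate equals f_signal: every sample ~0
print(max(abs(v) for v in aliased))      # the sine looks like a flat DC line
```

Below twice the signal frequency, the samples are indistinguishable from those of a lower-frequency alias, which is exactly the failure the Nyquist criterion guards against.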

More educational tutorials can be accessed via Reddit community **r/ElectronicsEasy.**