ADC: is as fast as possible always the best approach?

Is as fast as possible always the best way to do things with an STM32?

Recently, I stumbled upon an issue regarding ADC scan conversion on STM32 microcontrollers. I wondered why the scan conversion was not taking place. As it turned out, it did take place, but it was not handled fast enough. In this blog post I will discuss the need to return from an interrupt routine as fast as possible. However, that will not be the only topic. Interested? Keep reading.

The ADC in STM32 microcontrollers is a very advanced piece of hardware. It supports many modes of operation, including scanning, continuous measurement, conversions triggered from different sources, DMA transfers, and more. I would like to focus on one of these: the scan conversion mode.

Scanning mode

This mode of operation supports automatic conversion of pre-selected channels, organized into so-called ranks. The ADC supports up to 16 ranks, where each rank specifies:

  • channel,
  • sampling time.

A sample configuration is shown below, where you can see a pre-configured number of ranks. As you can see, each rank can be configured independently. What is more, several ranks (even all of them) can be configured to measure the same channel.
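As an illustration, a two-rank configuration written against the HAL could look like the sketch below. The handle name hadc1, the channel numbers, and the sampling times are assumptions for the example, not taken from any particular project:

```c
/* Sketch of a two-rank scan configuration, assuming a CubeMX-style
 * handle named hadc1; channel and sampling-time choices are illustrative. */
ADC_ChannelConfTypeDef sConfig = {0};

sConfig.Channel      = ADC_CHANNEL_0;
sConfig.Rank         = ADC_REGULAR_RANK_1;
sConfig.SamplingTime = ADC_SAMPLETIME_2CYCLES_5;
HAL_ADC_ConfigChannel(&hadc1, &sConfig);

/* The second rank can point at a different channel, or even the same one */
sConfig.Channel      = ADC_CHANNEL_1;
sConfig.Rank         = ADC_REGULAR_RANK_2;
sConfig.SamplingTime = ADC_SAMPLETIME_640CYCLES_5;
HAL_ADC_ConfigChannel(&hadc1, &sConfig);
```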

Frequency of ADC

Let’s get back to the point where speed (or frequency) is discussed. Below you can see a diagram of a sample clock tree, from which the ADC frequency can be deduced. This frequency should be treated as the base frequency of the ADC peripheral. Each ADC also has a configurable pre-scaler which reduces the frequency at which the ADC works. If this value is set to 1, no frequency division takes place; in other words, the ADC works at the maximum frequency coming from the clock tree. It might seem that this is all, but it is not. There is one additional factor which defines how quickly the measurements will be available: the sampling time.

Sampling time

This is the ultimate factor defining how quickly the data will be available. There are a few predefined values, starting from 2.5 cycles and ending at 640.5 cycles. To be precise, additional ADC cycles are needed for processing the conversion itself; this overhead depends on the desired resolution of the measurement and equals 12.5 cycles at 12-bit resolution. With this in mind, the minimum conversion time is 15 cycles (for 12-bit resolution).

As can be seen, the measurement frequency depends on multiple factors:

  • the ADC clock frequency,
  • the ADC pre-scaler,
  • the measurement resolution (a constant overhead counted in cycles),
  • the number of sampling cycles per conversion.

The longer the sampling time, the more accurate the measurement. However, for high resolutions like 12 bits it will not make much of a difference due to noise. To see why, let’s assume that the reference voltage for the ADC is 3.3 V. 12 bits of resolution gives 2^12 = 4096 levels. In turn, this gives 1 LSB = 3.3 V / 4096 ≈ 0.0008 V, which is less than 1 mV. This is a small value. When you touch the measurement point (you should not do that), the reading will change by more than a few millivolts. Thus, measuring with such a high resolution is, in some cases, pointless. Using a lower resolution reduces the cycle overhead and makes the measurement process even faster.

As can be seen, a very high measurement frequency can be achieved when the ADC clock is set to its maximum value, the pre-scaler is set to one, the resolution is reduced, and the sampling time is set to its minimum. This configuration will deliver measurements lightning fast, but can they be processed, or even read from the ADC data register, in time?

If a conversion is started in interrupt mode with HAL_ADC_Start_IT(), the HAL will fire an interrupt for each conversion event.

In addition, if the end-of-conversion selection is set to End of single conversion, the ADC conversion complete callback will be called as many times as there are ranks. Each measurement then has to be retrieved inside the callback, one sample at a time.
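A minimal sketch of such a callback, using the standard HAL_ADC_ConvCpltCallback / HAL_ADC_GetValue pair; NUM_RANKS, measurements, and current_rank are illustrative names, not part of the HAL:

```c
#define NUM_RANKS 10

volatile uint16_t measurements[NUM_RANKS];
volatile uint32_t current_rank = 0;

void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc)
{
    /* Called once per rank: the data register must be read before the
     * next conversion overwrites it. */
    measurements[current_rank] = (uint16_t)HAL_ADC_GetValue(hadc);
    current_rank = (current_rank + 1) % NUM_RANKS;
}
```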

The order of the samples in the measurement array will be consistent with the order of the STM32 ADC ranks. But will it really?

Fast, faster, no data

This particular ADC configuration is meant to show two things. The first is that before settling on a particular configuration, it is worth performing some tests in a real environment, or at least simulating a realistic load on the system. The second is about the HAL library. But before discussing that point, it is worth setting up the whole environment.

Let’s assume that we are processing 10 channels in 10 ranks. Also, the ADC frequency was set as high as possible, why not ;). To avoid reducing the frequency, the pre-scaler was set to 1. Now it is fast, right? Just a teaser: I have a feeling that someone would set the ADC to an even higher frequency if it were possible. Now an issue can appear: something is not right, and the measurements do not resemble the real state. This is because of how, or to be more precise, how fast the interrupt routine is handled. There is a golden rule to follow when it comes to processing interrupt requests: they should be handled as fast as possible!

In a common implementation of the HAL library using interrupts to handle ADC conversion, the library is simply too slow. The HAL routine takes over 500 cycles to handle the interrupt request and call the callback function. If handling takes that long and another request comes in, it will not be handled properly, and the sample in the ADC data register will be lost.

There are a few ways to solve this problem:

  1. Decrease ADC frequency or increase sampling time.
  2. Use polling instead of interrupts.
  3. Use DMA to transfer the data.

Decreasing the ADC frequency or prolonging the conversion time in one way or another is not always the best option.

Polling is the least effective way of checking whether a conversion has completed. It consumes resources and is a synchronous operation. For simple applications it will be a good choice, but if we cannot guarantee that the newly measured data will be read in time, it is better to drop this solution. That leaves us with DMA, which stands for Direct Memory Access. It transfers data independently of the CPU, leaving us free to handle the data whenever it is needed. There are two modes of DMA operation: normal and circular. Normal mode transfers as much data as was requested; when it is done, it is done. Circular mode allows for a circular buffer implementation: when the DMA reaches the end of the buffer, it simply starts again from the beginning and overwrites the content.
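With the DMA channel configured in circular mode, starting the whole scan can be as simple as the sketch below; hadc1, NUM_RANKS, and adc_buffer are illustrative names, and the DMA channel is assumed to have been set to circular mode beforehand:

```c
#define NUM_RANKS 10

uint16_t adc_buffer[NUM_RANKS];

/* One call: the DMA now refills adc_buffer in the background, wrapping
 * back to the start of the buffer after every complete scan. */
HAL_ADC_Start_DMA(&hadc1, (uint32_t *)adc_buffer, NUM_RANKS);
```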

Conclusion

I think the most important point is that running the ADC (or peripherals in general) as fast as possible is not always the best course of action. The needs of the application should shape the configuration of the peripherals, not the other way around. The second point concerns DMA. The DMA controller is a good solution to most of these problems, but one should never forget that it always comes at a price; it is not recommended to start a DMA transfer and simply leave it to do its job unattended. Once again, maybe a fast conversion is not even required, which brings us back to the first point. Lastly, the HAL library is a convenient tool, but ease of use comes at an additional cost. It is always good to know the limitations of the tools we are using.
