On-screen Display with Raspberry Pi Pico

For quite some time, I was curious about on-screen displays (OSDs). An OSD is a piece of equipment that lets you put text or graphics directly on a video stream. I am going to present my take on this device and, most importantly, explain why it is useful. The project is based on the RP2040 microcontroller, which can be found on a very popular platform, the Raspberry Pi Pico.

Why OSD?

As you probably know, I build RC aeroplane models. Quite recently, maybe a year or two ago, I became interested in FPV (First Person View) models. An FPV model is nothing more than a simple aeroplane with a camera attached to it. It gives you a live video stream (and audio as well) during flight. It is really nice to observe the Earth from a couple of hundred metres up. The first FPV equipment was fully analogue: the video stream was an analogue signal sent through the video transmitter down to the base station, where it was picked up by the video receiver and displayed on a screen. Nowadays, there are digital systems where the video stream is encoded and has much better quality. However, when the signal is weak or distorted, a digital link gives you either a video feed of really good quality or nothing at all. That is why analogue systems are still around: even with some interference, you are still able to make out what is on the screen. An example can be seen below, where the video quality is low, but it is still possible to tell what is what.

Low-quality video feedback

I am using an analogue video system; thus, adding text to the video is a bit more complicated. It requires a special device that can decode the signal and tell you where you currently are on the screen. Once you know your position on the screen, you can start drawing pixels. But this does not answer the question of why to bother with an OSD at all, since the control channel (commands from the RC transmitter to the plane) usually comes with a telemetry channel, so you can get useful information back from the RC model to the transmitter; the transmission is, in fact, bidirectional. However, glancing down at the transmitter (or removing your goggles) is problematic during flight and can even cause a crash, so it is better to have the parameters displayed directly on the screen. That is the first reason. The second one is related to the transmission power of the telemetry link. You can have very good signal strength from the RC transmitter towards the RC receiver on the plane, yet the telemetry downlink can have lower power, or you might simply want to limit it. Sending the data via the video feed then gives you a bit more headroom: if the telemetry signal is too weak, you can still receive the data via the video channel.

Decoding Video Signal

There are two very popular standards for sending encoded video data, called PAL and NTSC. You may have heard of a third one, SECAM. All three are used depending on the geographical location: NTSC is used in North America, while PAL is used in Europe. Most modern analogue video devices allow one to switch between the first two.

For the time being, let us focus on the PAL standard. It sends 625 lines at a field frequency of 50 Hz, but only 25 frames per second. (The number of lines and frames can vary with the particular version of the standard, but that is not relevant here.) The difference between the field frequency and the number of frames per second is important, and it comes from how the picture is encoded. Each frame is interlaced: it consists of alternating odd and even lines, and you first receive all the odd lines, then all the even lines. In other words, you actually see two half-pictures (fields): first the odd lines of one image, then the even lines of the next. Since the field rate is 50 Hz, each line lasts 64 µs (1 s / (625 × 25)), and the scene usually does not change very fast, the image appears complete and undistorted.

Knowing this, we can make the following plan: we need to determine, precisely enough, when a new line starts and when a new frame starts. This information, combined with some timing, will tell us where we are in the image. My first attempt was based on a simple diagram from an RC forum where an Arduino-based solution was presented. The very first version of this device used a couple of discrete elements to detect the start of a frame and the start of a new line.

Video line sync discovery diagram

The video signal is connected to the pin labelled VIDEO in the diagram. The principle of putting something into the video signal is fairly simple: when you pull the signal down, you get black; when you pull its voltage up, you get white. Now look at the DIM and LATCH labels. Together, they form a voltage divider that allows one to achieve a greyish colour. So how does it work? Let us focus on drawing white dots first. All that is required is to drive the LATCH pin high while DIM is left floating. This pulls VIDEO high and thus produces white. To produce a black dot, DIM should be driven low while LATCH is left floating. However, if grey is required, DIM should be low and LATCH should be high; the voltage divider then produces a voltage corresponding to grey. By adjusting the divider’s ratio, we can change the “intensity” of the grey.

Now we know how to put some colourful dots on the video signal. But, as mentioned earlier, we need to know when to put them. This can be worked out by analysing the video signal timing diagrams. Let us then have a look.

Video signal timing diagram, Source: Texas Instruments, LM1881 Video Sync Separator, 2015

Before each video line starts, the signal is pulled low. So, we need to detect this low level. It is a reliable way to determine the start of a new line, because black is not actually 0 V but slightly above 0.2 V, so the sync pulse sits below anything in the picture content. To trigger line detection, a couple of discrete elements such as resistors and diodes are enough; this can be seen in the diagram at the top of this page. We can even adjust the level at which the sync pulse of a new line is detected. Detecting the start of a new frame is also quite simple: a new frame (simplifying a bit) can be detected the same way as a new line, because the sync pulse (the low level) lasts longer for frames than it does for lines. Therefore, you can simply measure the duration of the pulse and know whether another line or a new frame has begun.

If only it were that easy. What I found during my laboratory tests of this simple line-detection circuit is that it works, but only under specific conditions. First, it does not work reliably when the entire screen, or some part of it, is black. This is because the voltage level corresponding to the sync pulse (the low-level signal) floats, which means the detection threshold would need to be constantly monitored and adjusted. This does not necessarily overcomplicate the circuit, but it does complicate the microcontroller implementation: it is still feasible, but it requires the developer to implement adaptive threshold detection and some additional calculations. The second issue is again related to the simplicity of the circuit. It works great, but only under more or less constant lighting conditions and with the same equipment. When you swap the camera for a different one, you once again have to adjust the threshold.

Detecting the line synchronisation signal is therefore not as trivial as it might seem. However, there is a good solution, which I decided to use. There are various integrated circuits that can detect the line sync (and much more) on their own. One such device is the LM1881.

LM1881 outline diagram, Source: Texas Instruments, LM1881 Video Sync Separator, 2015

The LM1881 is a nice chip that works with different standards and adjusts the detection level by itself, so it can be used with different cameras and under different lighting conditions without any problems. Additionally, it detects the start of a frame and distinguishes even/odd fields. Below you can see a schematic utilising this device.

OSD video sync separator

It contains two LM1881 devices, but only one is meant to be mounted; the PCB simply provides two alternative footprints. U1 is the SMD package, while U2 is the DIP-8 package.

LM1881 Separator Realisations

The part concerning dimming (black dots) and latching (white dots) is identical to the previous diagram. Here, the SYNC output signals new lines, and the FRAME output is used to detect a new frame and reset the line counter in the microcontroller.

RP2040-based OSD device

The complete OSD device is presented above. It is a sandwich 😀 with some pin headers bent to fit the video separator board. Initially, the LM1881 board was only intended for testing. Then another test was done where it was connected to an RP2040 development board, a Waveshare RP2040-Zero, and it turned out to be fully functional. Some RP2040 library or silicon bug was discovered along the way ;) however, I did not have time to figure out which one it was, because the moment the final device was assembled and tested, it went airborne.

Drawing Text

Now, when we want to draw a picture or a font character, we need to take a few factors into account.

First of all, this is an analogue signal, so we need to handle it accordingly; this part was already covered at the beginning. We will start with white characters, since they make the principle easiest to understand.

Second, the video lines are counted from top to bottom. This notation is natural for humans, and we will start drawing from the top of the image, including font characters.

Third, since the video signal is divided into lines, we need to draw line by line. This means that we need to introduce the concept of a font.

Let us assume that we would like to print out a single character. Just to focus our attention, let it be ‘A’. First, we create a font that includes this character. For example,

uint8_t font[][8] = {
    // 'A' 0x41
    {
        0b00000000,
        0b00111100,
        0b01000010,
        0b01000010,
        0b01111110,
        0b01000010,
        0b01000010,
        0b01000010,
    },
};

As you can see from the snippet above, the outline of the character is filled with ones ‘1’ (foreground), while the background is filled with zeros ‘0’. What we can do, and what was in fact done, is put all characters into an array so that the outer index of the font array corresponds to the ASCII code of a character. As you can see, the concept of a font is pretty easy to grasp. Note also the height of a font character: in the example above, it stretches over 8 lines.
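
For illustration, with an ASCII-indexed table the lookup becomes a plain array access. A one-line sketch, assuming the table covers the full ASCII range (the real font table may start at a different code):

const uint8_t *glyph = font['A']; // 8 bytes, one per horizontal line of the glyph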

Drawing lines

Since we know how to encode characters into font characters, let us think about how to print a line of text. Since multiple video lines make up an entire line of text, it is obvious that we need to draw it line by line. How, then, can a single line be drawn on the screen? The answer might seem complex, but actually it is not. Let us think about it step by step. We have already noticed that a character consists of zeros and ones, and, as mentioned before, we want to put white text on the video. How do you do that? Ones in the font characters will correspond to white dots, while zeros will correspond to transparency or, more precisely, no action at all. With that clear, how does this translate to the LATCH and DIM outputs? Let us take a look at the truth table below.

LATCH | DIM | Result on the screen
------+-----+---------------------
  1   |  Z  | White dot
  0   |  Z  | No reaction

Truth table for OSD

For simplicity, the table was reduced to only two entries, while it could have nine in total (each of the two pins can be driven low, driven high, or left floating). It is clear from the table that we can simply set DIM as an input (Z means high impedance) and let LATCH alone control the drawing.
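
In pico-sdk terms, this pin setup could look roughly like the sketch below; the pin numbers are made up for illustration and are not the ones used in the actual design.

#include "pico/stdlib.h"

#define LATCH_PIN 2 // hypothetical pin assignment
#define DIM_PIN   3 // hypothetical pin assignment

static void osd_pins_init(void)
{
    // DIM as input = high impedance (Z): it does not affect the video line.
    gpio_init(DIM_PIN);
    gpio_set_dir(DIM_PIN, GPIO_IN);

    // LATCH drives the dots: 1 = white dot, 0 = no reaction (transparent).
    gpio_init(LATCH_PIN);
    gpio_set_dir(LATCH_PIN, GPIO_OUT);
    gpio_put(LATCH_PIN, 0);
}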

Now we can alternate between zeros and ones on LATCH to actually put characters on the screen. Recall that we are drawing line by line (horizontally). So, to print a line of text, we need to create a buffer. Preferably, the buffer should be a two-dimensional array, because it is easier to move around in. The line buffer can be declared as

uint8_t buffer[8][120];

The first dimension, with eight elements, corresponds to the eight horizontal lines of a font character. The second dimension, 120, lets us fit 120/8 = 15 characters: we divide the total number of dots (120) by 8 because each character is 8 dots wide. This choice simplifies the calculations, since 8 bits (the width of a font character) is exactly one byte.
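
A minimal sketch of how such a buffer could be filled is shown below. The render_text_line helper and the packing are my assumptions for illustration, not necessarily the exact code from the repository: here the 120 dots are packed 8 per byte, so one row of 15 characters takes 15 bytes.

#include <stdint.h>
#include <string.h>

#define FONT_HEIGHT 8
#define LINE_CHARS  15 // 120 dots / 8 dots per character

extern const uint8_t font[][FONT_HEIGHT]; // ASCII-indexed glyphs, as above

// Copy the glyph rows of each character into the line buffer,
// one byte (8 horizontal dots) per character per font row.
static void render_text_line(const char *text,
        uint8_t buffer[FONT_HEIGHT][LINE_CHARS])
{
    memset(buffer, 0, FONT_HEIGHT * LINE_CHARS); // 0 = transparent
    for (int col = 0; text[col] != '\0' && col < LINE_CHARS; ++col) {
        const uint8_t *glyph = font[(uint8_t)text[col]];
        for (int row = 0; row < FONT_HEIGHT; ++row) {
            buffer[row][col] = glyph[row];
        }
    }
}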

Perfect! So, as you might already suspect, we need to toggle between bits quickly in order to put text on the screen. There are two ways to achieve this. The obvious one is to write a piece of software that reads bits from the line buffer and changes the GPIO pin state accordingly. However, this process, called bit banging, is inefficient: it fully engages the CPU, leaving very little room for other operations. The other method is to use a serial interface such as I2C, SPI, UART, or even timers with some DMA. However, some serial interfaces suit this specific task better than others. For this particular case, the best candidate is SPI. It is a synchronous interface (which does not matter for this task) that outputs data exactly as it is fed in: no gaps, no additional bits, etc.
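
With the pico-sdk, pointing the SPI TX line at the LATCH circuit could look roughly like this; the pin number and frequency are illustrative assumptions, not the project’s actual values.

#include "hardware/spi.h"
#include "hardware/gpio.h"

#define OSD_SPI    spi0
#define OSD_TX_PIN 19      // hypothetical: SPI TX output drives LATCH
#define DOT_RATE   8000000 // hypothetical: 8 Mbit/s, i.e. 8 million dots/s

static void osd_spi_init(void)
{
    spi_init(OSD_SPI, DOT_RATE); // baud rate = horizontal dot rate
    gpio_set_function(OSD_TX_PIN, GPIO_FUNC_SPI);
    // Only TX matters here; the clock and chip-select lines are left unused.
}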

Raspberry Pi Pico

As I mentioned in the Introduction, the project is based on the Raspberry Pi Pico. You already know the hardware part that uses the LM1881. Now it is time to show how to use the Pico’s features and how to print some text.

Software structure

The application for RP2040 was written in C and consists of three main parts.

  1. Discovering and Processing Synchronisation Pulses from LM1881.
  2. Line Drawing.
  3. Controlling the operation of the device.

Discovering and Processing Synchronisation Pulses from LM1881

Two signals from the LM1881 are used. The first one is related to frame detection; it comes directly from the LM1881 IC and is labelled VERTICAL SYNC OUTPUT. This signal allows the software to reset the line counter at the start of each frame. If this signal is missed, the line counter keeps increasing and, depending on how the lines are drawn, either some artefacts are doomed to appear or an entire frame can be skipped.

The second signal coming from the video sync separator is the COMPOSITE SYNC OUTPUT. Each event on this output increases the line counter, so we know when a new line has begun. If for some reason a pulse on this output is lost, lines become desynchronised: text lines stretch over other lines and the text appears chopped.
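
Assuming both LM1881 outputs are wired to GPIOs, the counting can be sketched with a shared edge interrupt, as below. The pin numbers are invented, and the falling edge is chosen because the LM1881 sync outputs pulse low.

#include "pico/stdlib.h"
#include "hardware/gpio.h"

#define CSYNC_PIN 4 // hypothetical: COMPOSITE SYNC OUTPUT
#define VSYNC_PIN 5 // hypothetical: VERTICAL SYNC OUTPUT

static volatile uint32_t line_counter;

static void sync_irq(uint gpio, uint32_t events)
{
    if (gpio == VSYNC_PIN) {
        line_counter = 0; // new frame: restart line counting
    } else {
        line_counter++;   // new line within the current frame
    }
}

static void sync_init(void)
{
    gpio_init(CSYNC_PIN);
    gpio_init(VSYNC_PIN);
    gpio_set_irq_enabled_with_callback(CSYNC_PIN, GPIO_IRQ_EDGE_FALL,
            true, &sync_irq);
    gpio_set_irq_enabled(VSYNC_PIN, GPIO_IRQ_EDGE_FALL, true);
}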

Line Drawing

For this particular implementation, I decided to use the SPI interface. Coupling it with DMA makes the implementation efficient: no direct involvement of the CPU is needed. The implementation draws one line per DMA transfer; each time a new line needs to be printed, a new DMA transfer is set up. Using SPI gives an (un)expected ability to stretch or shrink the text horizontally: increasing the SPI frequency narrows the text, while decreasing it widens the text. This follows directly from the constant timing of the PAL/NTSC video encoding: the line period is fixed, so the faster the bits are shifted out, the less screen width each dot occupies.
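
A per-line transfer with the pico-sdk DMA API can be sketched as follows, reusing OSD_SPI and LINE_CHARS from the sketches above; the channel handling is illustrative and error handling is omitted.

#include "hardware/dma.h"
#include "hardware/spi.h"

static int osd_dma_chan;

static void osd_dma_init(void)
{
    osd_dma_chan = dma_claim_unused_channel(true);
    dma_channel_config c = dma_channel_get_default_config(osd_dma_chan);
    channel_config_set_transfer_data_size(&c, DMA_SIZE_8);    // a byte = 8 dots
    channel_config_set_dreq(&c, spi_get_dreq(OSD_SPI, true)); // pace by SPI TX
    dma_channel_configure(osd_dma_chan, &c,
            &spi_get_hw(OSD_SPI)->dr, // destination: SPI data register
            NULL,                     // source is set per line
            LINE_CHARS,               // bytes in one row of a text line
            false);                   // configure only, do not start yet
}

// Called from the line interrupt when the current video line crosses text.
static void osd_draw_line(const uint8_t *row)
{
    dma_channel_set_read_addr(osd_dma_chan, row, true); // set source and start
}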

The application can print four lines of text: two at the top of the screen and two at the bottom. This is sufficient to put the important data on the screen while preserving a clear view of its centre. However, the implementation is straightforward and could be extended to cover the entire screen. That requires more RAM, but with the available resources it is more than doable. A keen eye might notice that each text line is shifted slightly; this is due to the delay introduced by looping over the text lines. Introducing a whole-screen buffer would eliminate this problem. Another way to eliminate the text shift could be a look-up table (LUT), which would replace the loop with a small, constant delay.

In addition to printing content on the screen, it is possible to react to the DMA transfer-complete interrupt. This could be used to dim the image. However, with the current hardware, dimming affects the white text colour as well. With a few more components and slightly different print-out handling, it would be possible to grey out only the text background. This matters for readability: increasing the contrast makes the text stand out.
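
Building on the DMA sketch above, reacting to the transfer-complete event could look like this with the pico-sdk interrupt API:

#include "hardware/dma.h"
#include "hardware/irq.h"

static void osd_dma_done(void)
{
    dma_hw->ints0 = 1u << osd_dma_chan; // acknowledge the interrupt
    // End of the drawn span: e.g. release background dimming here.
}

static void osd_dma_irq_init(void)
{
    dma_channel_set_irq0_enabled(osd_dma_chan, true);
    irq_set_exclusive_handler(DMA_IRQ_0, osd_dma_done);
    irq_set_enabled(DMA_IRQ_0, true);
}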

Controlling the operation of the device

The only purpose of this little project was to put some useful flight data on the screen: battery voltage, current GPS position, distance, or return angle. The data comes from the flight controller. The two devices communicate over the I2C bus, with the OSD acting as an I2C slave. This is an important detail, since the entire OSD device is treated as a feature, and my belief is that every good feature should allow one to disable it. Thus, the flight controller can send a command to turn the OSD functionality off, which is nice since it lets you admire the whole screen without any additional information on it.

The I2C interface maps registers to variables that are read directly and put on the screen. From this perspective, the OSD device can be treated like any memory device: data written to it takes effect immediately. The refresh rate was set to roughly 2 Hz; a higher refresh rate is not necessary for on-screen telemetry.
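
Such a register file can be sketched with the pico-sdk’s pico_i2c_slave library; the slave address, register layout, and pointer convention below are invented for illustration.

#include <stdbool.h>
#include <pico/i2c_slave.h>
#include <hardware/i2c.h>

#define OSD_I2C_ADDR 0x17 // hypothetical slave address

// Hypothetical register file: the flight controller first writes a register
// offset, then payload bytes; the drawing code reads regs[] when refreshing.
static volatile uint8_t regs[32];
static uint8_t reg_ptr;
static bool have_ptr;

static void osd_i2c_handler(i2c_inst_t *i2c, i2c_slave_event_t event)
{
    switch (event) {
    case I2C_SLAVE_RECEIVE: // master is writing
        if (!have_ptr) {
            reg_ptr = i2c_read_byte_raw(i2c) % sizeof regs;
            have_ptr = true;
        } else {
            regs[reg_ptr] = i2c_read_byte_raw(i2c);
            reg_ptr = (reg_ptr + 1) % sizeof regs;
        }
        break;
    case I2C_SLAVE_REQUEST: // master is reading
        i2c_write_byte_raw(i2c, regs[reg_ptr]);
        reg_ptr = (reg_ptr + 1) % sizeof regs;
        break;
    case I2C_SLAVE_FINISH: // stop or restart condition
        have_ptr = false;
        break;
    }
}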

Future of the project

I have highlighted a few possible new features and issues of the project. Introducing a buffer storing the entire screen content would not only eliminate the shift issue, but would also allow us to draw images, pictures, etc. on the screen. Wouldn’t it be fun?

Extending the hardware a bit would allow one to add better dimming. In turn, this would increase contrast and readability.

Resources

Nice image taken from the video camera mounted on my Multiplex Twinstar during take-off

The best part, at least for some, I usually leave for the end: the entire microcontroller code can be found in my GitHub repository, as well as the hardware design created in KiCad.

If you like this post, please share it 🙂 Also, if you find the published code/design useful, leave a star 😉 on GitHub. Here are the promised repositories:

3 thoughts on “On-screen Display with Raspberry Pi Pico”

  1. Dominik H.

    Hello Wojciech,
    I wrote about your project in the MagPi magazine. Well done!

    One question. Can you provide a list of technical components (for example sensors, cables, etc.) you used for the project, and maybe a diagram of how everything is connected together? Is the Pi Pico located within the airplane or underneath the screen? I’m also very interested in how you managed to print the battery voltage onto that screen, because I’m stuck in another Pi Pico project and I don’t have a good solution for getting the battery voltage out easily.

    Best regards,
    Dominik

    1. Wojciech Domski (Post author)

      Hi Dominik!
      Thank you! I am glad you like it.

      With the list of elements, it is pretty standard. What I have on the RC plane featured in the MagPi is:
      – controller, based on an STM32F303, the same as for my quadrocopter but adjusted for a fixed wing; this one has the voltage measurement capability for the battery,
      – receiver,
      – motor with ESC,
      – some servos,
      – camera with video transmitter,
      – battery,
      – Pico-based OSD.
      The OSD is directly responsible for overlaying text on the video signal. It communicates with the flight controller to get useful information such as GPS position, altitude, time and, as you mentioned, battery voltage. The OSD generates precise timing signals which latch “pixels” onto the video signal, rendering them white. This altered signal is sent out via the video transmitter.

      On the ground I have a video receiver and a small display that shows the feedback transmitted from the camera on the plane.

      Also, within the blog you will find a link to my GitHub repository where the code for the Pico-based OSD was released. Feel free to use it as your base.
