If you’ve ever had occasion to watch video on a computer, you might have noticed that it doesn’t look like it does on your TV. This is especially true when watching interlaced video. This post briefly describes some issues associated with using a computer as a video display device.
Let’s start with interlaced video. What is interlace anyway? Think of video as a sequence of pictures acquired at a constant frame rate, for example, 30 or 60 pictures per second. We call each picture a frame, and say that the system operates at 30 or 60 frames per second (fps). In progressive scan video, all the lines of a frame are acquired at the same time; so in a 30 fps system, a complete new frame is acquired every 30th of a second. In interlaced video, each frame is acquired as two distinct sub-frames called fields. For example, in legacy NTSC systems, the frame rate is 30 fps and a new field is acquired every 60th of a second (NTSC is the analog TV standard we had prior to HD; it is still used for many applications, including security cameras). The key point about interlace is that each field captures only every other line. So if the “top” field captures lines 0, 2, 4, etc., of a picture, then the “bottom” field captures lines 1, 3, 5, and so forth. The lines from the top and bottom (or equivalently the even and odd) fields are then interlaced with each other by the display to create a full-sized frame. You can see how this works by looking at the frame below, which is interlaced:
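The even/odd line split is easy to sketch in code. Here is a minimal Python illustration (the names `split_fields` and `weave` are my own, not from any video library) that separates a frame, represented as a simple list of lines, into its two fields and weaves them back together:

```python
def split_fields(frame):
    """Split a frame (a list of lines) into its top and bottom fields."""
    top = frame[0::2]     # even-numbered lines: 0, 2, 4, ...
    bottom = frame[1::2]  # odd-numbered lines: 1, 3, 5, ...
    return top, bottom

def weave(top, bottom):
    """Interlace the two fields back into a full-height frame."""
    frame = []
    for t, b in zip(top, bottom):
        frame.append(t)
        frame.append(b)
    return frame

frame = ["line0", "line1", "line2", "line3"]
top, bottom = split_fields(frame)
# top == ["line0", "line2"], bottom == ["line1", "line3"]
assert weave(top, bottom) == frame
```

Weaving exactly reverses the split, which is all an interlaced display has to do when nothing in the scene moves.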
Figure 1: An interlaced frame
This frame is from a test video sequence showing a train pulling into a station. When the entire video sequence is viewed, it turns out that the camera is panning to the right to follow the train as it comes to a stop. Below are the top and bottom fields associated with this frame:
Figure 2: Top and bottom fields of a video frame
Each field has the full width of the complete frame but only half its height, which is why the fields look vertically squashed. Setting that vertical distortion aside, notice that the vertical lines, such as the shadows along the train station, are much straighter in the individual fields.
Let’s zoom in on one of these areas of Figure 1. We see the following:
What’s going on? Why isn’t the vertical shadow line smooth? The line is jagged because the camera has panned to the right between the acquisition of the top and bottom fields of the frame. In other words, the location of the vertical line in the top and bottom fields is slightly different. When the two fields are interlaced with each other and displayed as a frame, this causes the vertical edge to appear jagged. The faster the apparent motion of an object, the more jagged its edges will appear. Of course this interlace effect is only noticeable on objects that experience an apparent motion between frames and will not be noticed if the scene is stationary.
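To make the effect concrete, here is a small Python sketch (a toy example with made-up numbers, not real video data) in which a vertical edge moves two pixels to the right between the capture of the top and bottom fields; weaving the fields back together reproduces the jagged "combing" on the edge:

```python
def edge_row(width, edge_col):
    """One line of picture: 1 = bright, 0 = dark, edge at edge_col."""
    return [1 if c < edge_col else 0 for c in range(width)]

height, width = 6, 8
# The edge is at column 3 when the top field is captured; the camera pans,
# so it is at column 5 one field-time later when the bottom field is captured.
top = [edge_row(width, 3) for _ in range(height // 2)]
bottom = [edge_row(width, 5) for _ in range(height // 2)]

# Weave the fields into a full frame, as the display would.
woven = []
for t, b in zip(top, bottom):
    woven += [t, b]

for row in woven:
    print("".join("#" if p else "." for p in row))
# The bright region ends at column 3 on even lines and column 5 on odd
# lines: the straight edge has become a comb.
```

The faster the pan, the farther apart the two edge positions, and the more pronounced the comb.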
Now, an interlaced display device shows each field one after the other; the top field will have dimmed significantly by the time the bottom field reaches full screen brightness. This causes the jagged nature of the vertical line in our example to be invisible to an observer. You can see that the overall design of an interlaced video system is non-trivial and depends on the human visual system to make things work. The eye must integrate the average brightness enough to keep the video from appearing to flicker as the individual fields first grow in intensity and then fade out. Furthermore, the fade-out rate of one field relative to the fade-in rate of the next must be such that motion blurring is not objectionable. It turns out that this integration by the eye makes the vertical resolution of an interlaced frame appear higher to a viewer than it really is, relative to the bandwidth required to display it.
However, interlaced displays are not used for computer monitors. One reason is that computer monitors usually display very high contrast material—for instance, black text on a white background. Such computer graphics material often has sharply defined horizontal lines like the top of the capital letter “T”. When such horizontal lines are viewed on an interlaced monitor, the lines appear to flicker. Trust me, if you’ve never seen this, it is very distracting. You can see this effect on NTSC systems when you see scenes such as bleachers or window blinds that have lots of parallel horizontal lines—the lines often appear to flicker. For this reason computer monitors are designed to be progressive display devices, meaning they display an entire frame at once.
What happens if interlaced video is shown on a progressive display, one frame after another? Remember that entire frames are displayed on the monitor simultaneously. In the case of motion scenes like the train pulling into the station, each frame will have jagged edges in it, and the monitor will faithfully reproduce them—without fading in and out between fields, as is the case with an interlaced display. So the jagged edges that appear in each successive frame on the edges of moving objects become perceptible! If the video being observed originates from a compressed format such as H.264 or MPEG-2, the jagged edges should not be confused with compression artifacts. The jagged edges are purely the result of displaying interlaced video on a progressive monitor.
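Video players therefore usually deinterlace before showing interlaced material on a progressive monitor. A crude approach, sketched below in Python (a toy "bob"-style deinterlacer of my own, not taken from any real player), is to discard one field and line-double the other, which removes the combing at the cost of half the vertical resolution:

```python
def bob_deinterlace(frame):
    """Keep the top field (even lines) and repeat each line to restore
    full frame height; the duplicated line stands in for the discarded
    bottom field."""
    top_field = frame[0::2]
    out = []
    for line in top_field:
        out.append(line)
        out.append(line[:])
    return out

# A combed edge: the bright region ends one pixel later on odd lines.
jagged = [[1, 1, 1, 0], [1, 1, 1, 1], [1, 1, 1, 0], [1, 1, 1, 1]]
smooth = bob_deinterlace(jagged)
# Every pair of output lines is now identical, so the comb is gone.
```

Real deinterlacers are far more sophisticated (interpolating between lines, or adaptively weaving static regions and bobbing moving ones), but the trade-off between resolution and combing is the same.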
Finally, at least two other issues can make the display of either progressive or interlaced video on a computer monitor problematic:
- The monitor may be set to refresh (i.e. draw a new frame) at a rate such as 70 Hz, while the video originates at a rate such as 30 fps. Even when everything is working perfectly, there is still the need to map 30 fps onto 70 Hz, which means that some frames will have to be displayed slightly longer than others. This effect can be visible under some circumstances (e.g., when watching 24 fps film movies that have gone through a telecine, or 3:2 pulldown, process to make them usable for TV viewing).
- The computer is carrying out many tasks simultaneously and may not always get the next frame ready for display in time. This can lead to visible stutters during playback.
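The rate mismatch in the first bullet can be illustrated in a few lines of Python (30 fps and 70 Hz are just the example rates from above): at each display refresh we show the most recently acquired source frame, and counting how many refreshes each frame stays on screen reveals the uneven cadence:

```python
from collections import Counter

# Map a 30 fps source onto a 70 Hz display: at each refresh, show the
# most recent source frame.
source_fps, refresh_hz = 30, 70

# Source frame index shown at each of one second's worth of refreshes.
shown = [int(r * source_fps / refresh_hz) for r in range(refresh_hz)]

# How many refreshes each source frame stays on screen.
durations = Counter(shown)
print(sorted(set(durations.values())))  # prints [2, 3]: uneven frame times
```

Some frames are held for 2 refreshes and others for 3, which is the same mechanism that gives 3:2 pulldown its name: 24 fps film frames are alternately held for 3 and 2 of the 60 NTSC fields.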