Common Video Compression Artifacts to Watch Out For

Key Takeaways

Video compression can sometimes result in visual abnormalities known as 'artifacts,' which can be avoided with properly set parameters in the encoding pipeline.

All visual media is compressed; the purpose of an electronic medium is to store information in a format that can be packaged, transmitted and reproduced. Digital video’s quality, clarity and fidelity depend on a number of factors that come about as a result of compression: transmission rate, file size, source quality and source complexity all play vital roles, as do the hardware devices used to capture, store and display audio-visual data. Video artifacts are aberrations in a signal-processed output, and in digital video they can be distracting – and, in extreme cases, can ruin an entire broadcast. Nonetheless, they exist for a reason, and understanding each artifact’s distinctive features helps video technicians and engineers identify weaknesses in the encoding chain. Here are a few of the most common artifacts in modern digital video. (For more on video quality, see Twilight of the Pixels – Shifting the Focus to Vector Graphics.)

Macroblocking

A macroblock is a unit of image processing in widely used video formats such as H.264 and MPEG-2 – typically a 16×16 block of pixels. The encoder takes the color-subsampled image and, through a series of transforms, quantizes each block into encoded data. The scheme exists for the sake of encoding efficiency, but when quantization is too aggressive it produces video artifacts known as macroblocking errors. The visual characteristics of macroblocking artifacts are similar to those of highly pixelated images, but with more clearly defined, box-like pixel groups that resemble misplaced puzzle pieces in the frame.
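
To make that efficiency-versus-artifact trade-off concrete, here is a minimal Python sketch of block-based transform coding. It is an illustration only, not the actual H.264 or MPEG-2 algorithm: the 8×8 block size, the floating-point DCT and the single quantization step (qstep) are simplifying assumptions, whereas real codecs use integer transforms and standardized quantization tables. The larger the quantization step, the more detail is discarded and the more visible the block boundaries become.

import numpy as np

N = 8  # illustrative block size (real macroblocks cover 16x16 pixels)

# Orthonormal DCT-II basis matrix for an N-point transform
n = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
C[0, :] = np.sqrt(1.0 / N)

def encode_block(block, qstep):
    # Forward 2-D DCT of the centered block, then coarse quantization
    coeffs = C @ (block - 128.0) @ C.T
    return np.round(coeffs / qstep)

def decode_block(qcoeffs, qstep):
    # Dequantize and apply the inverse 2-D DCT
    coeffs = qcoeffs * qstep
    return np.clip(C.T @ coeffs @ C + 128.0, 0, 255)

# A smooth gradient survives coarse quantization far better than fine texture,
# which is why blockiness shows up first in detailed or noisy regions.
smooth = np.tile(np.linspace(40, 200, N), (N, 1))
noisy = np.random.default_rng(0).uniform(40, 200, (N, N))
for name, block in (("smooth", smooth), ("noisy", noisy)):
    recon = decode_block(encode_block(block, qstep=40), qstep=40)
    print(name, "max error:", round(float(np.abs(block - recon).max()), 1))

Because each block is decoded independently, the quantization errors line up along block edges, which is what gives macroblocking its puzzle-piece look.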

Typically, macroblocking can be attributed to any or all of the following factors: data transfer speed, signal interruption and video processing performance. Cable, satellite and internet streaming services are especially vulnerable to macroblocking, as their multi-channel transmission infrastructure often requires heavy video compression. The artifacts can, however, occur in less congested signal chains as well, though it is not as common. And although macroblocking remains a common video artifact, it is gradually diminishing with the adoption of High Efficiency Video Coding (HEVC), which replaces fixed-size macroblocks with more flexible coding tree units.

Aliasing

Aliasing describes what happens when sampled data is reconstructed into a compromised output: the reconstruction becomes a distorted stand-in – an "alias" – for the original signal. It mostly affects spatial and temporal content that includes intricate, repetitive patterns, and can usually be attributed to insufficient sampling rates. If a source is not sampled at a high enough rate and aliasing occurs, it can produce a strange crawling or rippling effect on patterns within the frame. The visual appearance of aliasing depends on the nature of the source, but one of its most common manifestations is what is commonly referred to as a moiré pattern.

To picture this phenomenon, imagine two identical grates stacked on top of one another. If aligned properly, you will barely notice that there are two of them and not just one. But if you rotate the top grate, even just a little, the grates no longer line up. The misaligned rows and columns now create distortion where before there was a simple, uniform pattern, producing offset patterns that tend to ripple outward. Another analogy for aliasing is the spokes of a spinning bicycle wheel. When filmed, and when turning fast enough, the spokes sometimes appear to rotate in the direction opposite their actual turn. This is because the capture device is not sampling quickly enough to accurately portray the speed of the wheel’s rotation, so a different visual pattern (or alias) appears in its place.
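
A small numerical sketch makes the wagon-wheel analogy concrete. The figures below are made up for illustration – a wheel spinning forward at 23 revolutions per second filmed at 24 frames per second – but the arithmetic is the same undersampling effect described above: between frames the wheel turns 345 degrees, which the camera (and the eye) reads as a 15-degree step backward.

# Temporal aliasing (the "wagon wheel" effect): a wheel spinning forward
# at 23 rev/s, sampled at 24 frames/s, appears to rotate backward.
# The specific rates here are illustrative assumptions.
spin_hz = 23.0    # true rotation rate of the wheel
frame_hz = 24.0   # camera frame rate (well below 2 * 23 Hz, the Nyquist rate)

prev_angle = 0.0
for frame in range(1, 6):
    t = frame / frame_hz
    angle = (360.0 * spin_hz * t) % 360.0                  # true spoke angle
    step = ((angle - prev_angle + 180.0) % 360.0) - 180.0  # apparent per-frame motion
    print(f"frame {frame}: apparent step = {step:+.1f} degrees")
    prev_angle = angle
# Every frame prints an apparent step of -15.0 degrees: the captured wheel
# seems to creep backward one full revolution per second.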

Combing/Interlace Artifacts

Before modern progressive video was developed, the dominant broadcast scanning mode was interlaced, which is still in limited use today. For NTSC video, that initially meant 525 alternately scanned lines per frame at about 30 frames per second. With the odd lines scanned first and the even lines second, each group (called a "field") made up half of a frame. Since the fields interlace with one another, each field on its own has a comb-like appearance. And when the timing or pattern of the field scanning is disrupted (usually by way of frame rate conversion), combing artifacts appear in the picture, ranging from the barely noticeable to the highly distracting.
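
As a rough illustration of how the two fields weave together, the Python snippet below builds a tiny six-line frame from an odd field and an even field (the frame height and line labels are invented for the example). If the scene moves in the roughly 1/60 of a second between the two field captures, adjacent rows no longer match, and that mismatch is what reads as combing.

height = 6  # toy frame height; real NTSC frames carry 525 scan lines

# The odd lines are captured first, the even lines about 1/60 s later.
odd_field = [f"odd line {y}" for y in range(1, height + 1, 2)]
even_field = [f"even line {y}" for y in range(2, height + 1, 2)]

# "Weaving" the two fields back together reconstructs the full frame.
frame = [None] * height
frame[0::2] = odd_field    # rows 1, 3, 5
frame[1::2] = even_field   # rows 2, 4, 6
for row in frame:
    print(row)
# If anything moved between the two captures, neighboring rows disagree,
# and the mismatch appears as the comb-like tearing described above.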

The two prominent formats in motion picture technology’s early history were film and video – both of which had standard frame rates that differed from one another. As stated above, 30 frames per second was more or less the standard for video and television in the regions that used NTSC, while film was generally shot and projected at 24 frames per second. That raised the question of what to do with the six-frame-per-second difference when one format was transferred to the other – a conversion known as "telecine" when going from film to video, and "inverse telecine" when undoing it. To deal with this, complex timing adjustments (called "pulldown patterns") were standardized to adjust frame rates with as little noticeable quality loss as possible. (For more on frame rates, see Video Tech: Shifting Focus From High Resolution to High Frame Rate.)

These patterns either repeat or skip fields to compensate for the difference in frequency between the input and output media, which results in comb-like artifacts wherever a video frame ends up pairing fields from two different source frames. These artifacts are most noticeable in portions of the frame that depict motion, and often look like horizontal lines trailing whatever moves. De-combing and deinterlacing filters can remedy interlace artifacts to a certain extent.
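
As an illustration of how such a cadence works, here is a sketch of 2:3 pulldown, the common pattern for fitting 24 fps film into roughly 30 fps (60 fields per second) interlaced video. The frame labels are arbitrary; the point is that two of every five video frames pair fields from different film frames, and those mixed frames are where combing shows up whenever there is motion.

# 2:3 pulldown: four film frames (A-D) become ten video fields, i.e.
# five interlaced video frames, stretching 24 fps film to ~30 fps video.
film_frames = ["A", "B", "C", "D"]
cadence = [2, 3, 2, 3]  # number of fields drawn from each film frame

fields = []
for source, count in zip(film_frames, cadence):
    fields.extend([source] * count)   # A A B B B C C D D D

# Pair consecutive fields (top + bottom) into interlaced video frames.
video_frames = [(fields[i], fields[i + 1]) for i in range(0, len(fields), 2)]
for top, bottom in video_frames:
    status = "clean" if top == bottom else "mixed fields -> combing on motion"
    print(f"top={top} bottom={bottom}  ({status})")
# Output pairs: (A,A) (B,B) (B,C) (C,D) (D,D) - the two mixed frames (B/C
# and C/D) are the ones that show horizontal tearing when the picture moves.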

Conclusion

The science of video compression evolves every day and is becoming increasingly efficient. But as long as there remains a diverse range of codecs, compression schemes and video formats, there will also be artifacts that arise in conversion between them. New video technology will beget new forms of quality loss in transcode processes, as well as new solutions to address them.

Colyn Emery
Editor

Colyn is a writer and digital artist from Southern California. He writes about topics like AI, UX/UI, big data and blockchain technology. He has written articles, blogs, web copy and whitepapers for many different tech companies and organizations, and has worked in digital media professionally since 2007. He is a graduate of Chapman University and Art Center College of Design.