Hamming Code

What Does Hamming Code Mean?

A Hamming code is a linear error-correcting code that can detect up to two simultaneous bit errors and correct single-bit errors. Reliable communication is assured when the Hamming distance between the transmitted and received bit patterns is less than or equal to one.
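The Hamming distance between two equal-length bit patterns is simply the number of positions in which they differ. A minimal sketch in Python, assuming the patterns are represented as lists of 0s and 1s:

```python
def hamming_distance(a, b):
    """Number of bit positions in which two equal-length bit patterns differ."""
    if len(a) != len(b):
        raise ValueError("patterns must be the same length")
    return sum(x != y for x, y in zip(a, b))

# A single-bit error leaves the received pattern at distance 1 from what was sent.
sent     = [1, 0, 1, 1, 0, 0, 1]
received = [1, 0, 1, 0, 0, 0, 1]   # bit 4 flipped in transit
assert hamming_distance(sent, received) == 1
```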

Techopedia Explains Hamming Code

Hamming code was invented by Richard Hamming in 1950. The method targets single-bit errors, which are far more probable than two or more bits changing at once.

The simplicity of Hamming codes makes them well suited to computer memory, where they are typically used in a single-error-correcting, double-error-detecting variant known as SECDED. A basic Hamming code has a minimum Hamming distance of three: it can detect and correct single errors, but double-bit errors can be detected only if no correction is attempted, because a double error is indistinguishable from a single error at a different position. Adding an extra parity bit increases the minimum distance to four, which allows the code to correct single errors while also detecting double errors.
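The resulting SECDED decision rule can be sketched as follows; the function and argument names are illustrative, and the sketch assumes the regular parity checks have already been combined into a syndrome value (0 when every check passes) while the extra overall parity bit has been checked separately:

```python
def secded_classify(syndrome, overall_parity_ok):
    """Illustrative decision rule for an extended (minimum-distance-4) Hamming code."""
    if syndrome == 0 and overall_parity_ok:
        return "no error"
    if syndrome == 0 and not overall_parity_ok:
        return "single error in the overall parity bit -- correctable"
    if syndrome != 0 and not overall_parity_ok:
        return f"single error at bit position {syndrome} -- correctable"
    return "double error detected -- not correctable"
```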

Hamming initially introduced a code that encoded four data bits into seven bits by adding three parity bits, the Hamming(7,4) code. It can easily be extended to an (8,4) code by adding an extra parity bit on top of the encoded word.
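A rough sketch of how such a (7,4) scheme can work, assuming the common layout with parity bits at positions 1, 2 and 4 and data bits at positions 3, 5, 6 and 7 (the function names below are illustrative):

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit codeword (parity at positions 1, 2, 4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # checks positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # checks positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4          # checks positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_decode(c):
    """Correct a single-bit error, if present, and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = 4 * s4 + 2 * s2 + s1    # 1-based position of the flipped bit, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

# Flip one bit in transit and recover the original data.
data = [1, 0, 1, 1]
received = hamming74_encode(data)
received[5] ^= 1                       # single-bit error at position 6
assert hamming74_decode(received) == data
```

Adding an eighth, overall parity bit across all seven positions turns this into the (8,4) SECDED form described above.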
