
Cyclic Redundancy Check (CRC)

Definition - What does Cyclic Redundancy Check (CRC) mean?

A cyclic redundancy check (CRC) is a technique used to detect errors in digital data. A CRC is a hash function that detects accidental changes to raw data and is commonly used in digital telecommunications networks and in storage devices such as hard disk drives. The technique was invented by W. Wesley Peterson in 1961 and further developed by the CCITT (Comité Consultatif International Télégraphique et Téléphonique). Cyclic redundancy checks are simple to implement in hardware and easy to analyze mathematically, which makes CRC one of the more reliable techniques for detecting common transmission errors.

CRC is based on binary division and is also referred to as a polynomial code checksum.
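The binary (polynomial) division behind a CRC can be sketched in a few lines. The example below is an illustrative bitwise CRC-8 with the generator polynomial 0x07 (x^8 + x^2 + x + 1); the function name and parameter defaults are choices made for this sketch, not anything prescribed by the article.

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """Bitwise CRC-8: the check value is the remainder left after
    dividing the message by the generator polynomial in GF(2)."""
    crc = 0
    for byte in data:
        crc ^= byte                      # bring the next byte into the remainder
        for _ in range(8):
            if crc & 0x80:               # high bit set: divisor "goes into" remainder
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc
```

Subtraction in this division is XOR, which is why CRC hardware reduces to a shift register and a few XOR gates.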

Techopedia explains Cyclic Redundancy Check (CRC)

In a cyclic redundancy check, a fixed number of check bits, often called a checksum, is appended to the message being transmitted. The receiver inspects the check bits for errors. Mathematically, the receiver verifies the attached check value by computing the remainder of the polynomial division of the transmitted contents; a nonzero remainder indicates an error. If an error appears to have occurred, a negative acknowledgement is transmitted, requesting retransmission of the data.
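The append-and-verify exchange above can be sketched with Python's standard-library CRC-32. The `send`/`receive` names and the 4-byte big-endian framing are assumptions made for this sketch; real protocols define their own frame layout.

```python
import zlib

def send(message: bytes) -> bytes:
    # Sender computes CRC-32 over the payload and appends the 4 check bytes.
    crc = zlib.crc32(message)
    return message + crc.to_bytes(4, "big")

def receive(frame: bytes) -> bytes:
    # Receiver recomputes the CRC over the payload and compares it with
    # the attached check value; a mismatch would trigger a NAK in practice.
    payload, check = frame[:-4], int.from_bytes(frame[-4:], "big")
    if zlib.crc32(payload) != check:
        raise ValueError("CRC mismatch: request retransmission")
    return payload
```

Flipping any bit of the frame, payload or checksum, makes `receive` raise, which is the point at which a real receiver would send its negative acknowledgement.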

A cyclic redundancy check is also applied to storage devices such as hard disks, where check bits are allocated to each block on the disk. When the computer reads a corrupt or incomplete file, a cyclic redundancy error is reported. The error can also originate from another storage device or from CDs/DVDs. Common causes include system crashes and incomplete or corrupted files.

The design of a CRC polynomial depends on the length of the block to be protected, the desired error-detection properties, the resources available for implementing the CRC, and the required performance.
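As a concrete illustration of these design trade-offs, the sketch below lists a few widely used generator polynomials (the application notes are common associations, not an exhaustive catalogue) and demonstrates one guaranteed property: because every standard generator polynomial has more than one term, any single-bit error changes the check value.

```python
import zlib

# Common generator polynomials (normal representation) and typical uses:
#   CRC-8         0x07         short sensor/peripheral frames
#   CRC-16/CCITT  0x1021       HDLC, XMODEM
#   CRC-32        0x04C11DB7   Ethernet, ZIP, PNG

# Any single-bit error in the message changes the CRC-32 check value.
message = bytearray(b"cyclic redundancy check")
original = zlib.crc32(message)
for i in range(len(message) * 8):
    corrupted = bytearray(message)
    corrupted[i // 8] ^= 1 << (i % 8)    # flip exactly one bit
    assert zlib.crc32(corrupted) != original
```

Wider polynomials buy stronger guarantees (longer detectable burst errors, larger protected block lengths) at the cost of more check bits per block.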
