Definition - What does 8b/10b Encoding mean?
8b/10b encoding is a telecommunications line code in which each eight-bit data byte is converted to a 10-bit transmission character. It was invented at IBM and is used for transmitting data over Enterprise Systems Connection (ESCON), Gigabit Ethernet and Fibre Channel links. The encoding supports continuous transmission with a balanced number of zeros and ones in the code stream, and it can also detect many single-bit transmission errors.
Techopedia explains 8b/10b Encoding
The 8b/10b code was defined by Widmer and Franaszek in a 1983 paper in the IBM Journal of Research and Development. It maps each 8-bit byte to a 10-bit symbol to achieve DC balance, that is, an equal number of ones and zeros over time. The code also guarantees frequent state changes, which allows reliable clock recovery at the receiver.
The encoding is performed in link-layer hardware and is hidden from the upper layers of the software stack. Eight bits of data are transmitted as 10-bit entities called symbols, or characters. The five low-order bits of each byte are encoded into a 6-bit group (the 5b/6b code) and the three high-order bits into a 4-bit group (the 3b/4b code). The two groups are concatenated to form the 10-bit symbol transmitted on the wire.
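The split described above can be sketched in a few lines of Python (the sub-block boundaries and the D.x.y naming come from the published code; the helper name is mine):

```python
def split_byte(byte):
    """Split an 8-bit value into the two sub-blocks that feed
    the 5b/6b and 3b/4b encoders of an 8b/10b coder."""
    low5 = byte & 0x1F           # low five bits (EDCBA) -> 5b/6b encoder
    high3 = (byte >> 5) & 0x07   # high three bits (HGF) -> 3b/4b encoder
    return low5, high3

# In 8b/10b notation a data byte is written D<low5>.<high3>:
low5, high3 = split_byte(0xBC)   # 0xBC = 101_11100 in binary
print(f"D{low5}.{high3}")        # -> D28.5
```

The byte 0xBC is a deliberate example: its control-character counterpart, K28.5, is the well-known "comma" symbol used for alignment on many 8b/10b links.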
Because invalid symbols and disparity violations reveal corruption on the line, the encoding complements higher-layer checksums and helps reduce retransmissions. The price is a fixed overhead of 25 percent: two extra bits for every eight bits of data. Serial interfaces such as Fibre Channel carry no separate clock signal to mark bit boundaries, so clock information is recovered from the transitions embedded in the encoded data stream.
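The overhead follows directly from the symbol size, and it explains why, for example, Gigabit Ethernet's 1 Gb/s payload requires a 1.25 GBd signaling rate on the wire:

```python
data_bits, line_bits = 8, 10

# Two extra bits per eight bits of data: a fixed 25% overhead.
overhead = (line_bits - data_bits) / data_bits
print(f"overhead = {overhead:.0%}")         # -> 25%

# Line rate needed to carry 1 Gb/s of user data through 8b/10b.
payload_rate = 1.0                          # Gb/s
line_rate = payload_rate * line_bits / data_bits
print(f"line rate = {line_rate} GBd")       # -> 1.25 GBd
```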
The encoding process yields more valid 10-bit characters than the 256 needed for data. Some of these extra characters, which still conform to the coding rules, are defined as special (control, or K) characters that mark management and control functions such as frame delimiters. During transmission the encoder also maintains a one-bit state called the running disparity, which records whether more ones or more zeros have been sent so far. Most characters have two complementary encodings, and the encoder selects the variant that keeps the running disparity bounded, ensuring that the number of "1" bits transmitted stays almost equal to the number of "0" bits transmitted.
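A minimal sketch of disparity-controlled symbol selection follows. The two D0.0 codewords are the commonly cited values from the published tables; everything else is simplified for illustration. A real encoder covers all 256 data bytes plus the twelve K characters and updates disparity per 6-bit and 4-bit sub-block, not per whole symbol:

```python
# Each entry maps a byte to its (RD-, RD+) 10-bit codewords.
# Only D0.0 is shown here; a full table has 256 data entries
# plus the special (K) characters.
TABLE = {
    0x00: ("1001110100", "0110001011"),  # D0.0
}

def encode(stream, rd=-1):
    """Encode bytes, choosing each symbol's variant so the running
    disparity (ones minus zeros sent so far) stays bounded."""
    out = []
    for byte in stream:
        neg, pos = TABLE[byte]
        sym = neg if rd < 0 else pos       # pick variant by current RD
        out.append(sym)
        rd += sym.count("1") - sym.count("0")  # update disparity
    return out, rd

symbols, rd = encode([0x00, 0x00])
print(symbols, rd)   # both D0.0 codewords are balanced, so rd stays -1
```

Because both D0.0 variants contain five ones and five zeros, this particular stream leaves the disparity untouched; unbalanced codewords are the ones that flip the running disparity and force the encoder to alternate variants.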