
The actual reason why communication standards measure in bits per second, probably

Mira Yurizaki



When you look at the bandwidth of a communication bus or interface such as USB or SATA, or the speed your ISP advertises, you'll notice that it's often measured in bits per second rather than the bytes per second we're more used to. The common assumption is that companies advertise bits per second because it's the larger number, and bigger obviously looks better to the average consumer. It doesn't help that the two abbreviations (e.g., Mb/s and MB/s) look confusingly similar.


Except there's a more likely reason you see bits per second: at the physical layer of communication, data isn't always 8 bits.


Let's take, for instance, every embedded system's favorite communication interface: the humble UART (universal asynchronous receiver/transmitter). The physical interface itself is super simple: at minimum all you need is two wires (data and ground), though a system may have three (transmit, receive, ground). However, there are three issues:

  • How do you know when a data frame (a byte in this case) has started? What if you were sending a binary 0000 0000? If 0V represents binary 0, the line would look flat the entire time, so how would you know whether you're actually receiving data?
  • How do you know when to stop receiving data? A UART can be set up to accept a certain number of data bits per "character," so it needs to know when a character ends.
  • Do you want some sort of error detection mechanism?

To resolve these:

  • A start bit signals the beginning of a transmission by being the opposite of the line's idle ("resting") level. So if the UART idles at 0, the start bit is whatever the value of 1 is.
  • One or more stop bits signal the end of a transmission. A stop bit is often the opposite value of the start bit, which guarantees at least one voltage transition per frame.
  • A parity bit can be used for error detection: it's set to 0 or 1 depending on whether the number of 1s among the data bits is even or odd. Note that error detection mechanisms are optional.
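The parity calculation is simple enough to show directly. Here's a minimal sketch in Python (the function name is mine, and I'm assuming even parity, where the parity bit makes the total count of 1s even):

```python
def even_parity_bit(data: int) -> int:
    """Return the parity bit for even parity: 1 if `data` has an
    odd number of 1-bits, so the total including the parity bit
    comes out even."""
    return bin(data).count("1") % 2

# 0b01100001 ('a') has three 1-bits, so even parity adds a 1;
# 0b01100011 has four, so the parity bit is 0.
print(even_parity_bit(0b01100001))  # 1
print(even_parity_bit(0b01100011))  # 0
```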

A common UART setting is 8-N-1: 8 data bits, no parity, 1 stop bit. This means at minimum there are 10 bits on the wire per 8 data bits (the start bit is implied). It can be as high as 13 bits per 9 data bits, as in 9-Y-2 (9 data bits, with parity, 2 stop bits). So if we had a UART in an 8-N-1 configuration transmitting at 1,000 bits per second, the system is only capable of transferring 800 data bits per second, an 80% efficiency rating.
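The frame-overhead arithmetic above can be sketched as a small Python function (the function name and defaults are mine; the framing is 1 start bit + data bits + optional parity + stop bits, as described):

```python
def uart_throughput(bit_rate: float, data_bits: int = 8,
                    parity: bool = False, stop_bits: int = 1) -> float:
    """Usable data bits per second for a UART, given its line bit
    rate. Every frame costs 1 start bit plus the data bits, an
    optional parity bit, and the stop bits."""
    frame_bits = 1 + data_bits + (1 if parity else 0) + stop_bits
    return bit_rate * data_bits / frame_bits

# 8-N-1 at 1,000 bits/s: 10 bits per frame carry 8 data bits.
print(uart_throughput(1000))  # 800.0
```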


Note: Technically it's not proper to express the transmission rate of a UART in "bits per second" but in "baud," which is how many times per second the line can shift its voltage level. In some cases you may want more than one voltage level shift per bit, such as when embedding a clock signal; Manchester code is one encoding that does this. But often, baud = bits per second.
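To see why baud and bit rate can differ, here's a minimal sketch of Manchester encoding in Python (the function name is mine, and I'm assuming the IEEE 802.3 convention, where a 0 is high-then-low and a 1 is low-then-high):

```python
def manchester_encode(bits):
    """Manchester-encode a bit sequence (IEEE 802.3 convention:
    0 -> high-then-low, 1 -> low-then-high). Each data bit takes
    two level periods, so the baud rate is twice the bit rate."""
    out = []
    for b in bits:
        out.extend((1, 0) if b == 0 else (0, 1))
    return out

# Every mid-bit transition doubles as a clock edge the receiver
# can recover, which is the point of the scheme.
print(manchester_encode([1, 0, 1, 1]))  # [0, 1, 1, 0, 0, 1, 0, 1]
```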


Another example is PCIe (before 3.0) and SATA. These use an encoding method known as 8b/10b, in which every 8 data bits are encoded as a 10-bit symbol. The main reason for doing this is to achieve something called DC balance: over time, the average voltage of the signal is 0V. This is important because communication lines often have a capacitor acting as a filter. If the average voltage stays above 0V over time, it can charge this capacitor to the point where the line reaches a voltage that causes issues, such as a 0 bit looking like a 1 bit.


In any case, like a UART at 8-N-1, 8b/10b encoding is 80% efficient.
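That 80% figure is why a link's advertised line rate and its usable byte rate differ. A quick sketch in Python (the function name is mine; the 8-in-10 ratio is the 8b/10b scheme described above):

```python
def effective_bytes_per_sec(line_rate_bits: float,
                            data_bits: int = 8,
                            symbol_bits: int = 10) -> float:
    """Payload bytes per second on an 8b/10b link: every 10 line
    bits carry 8 data bits (80% efficiency), and 8 data bits make
    a byte."""
    return line_rate_bits * data_bits / symbol_bits / 8

# SATA III signals at 6 Gbit/s on the wire; with 8b/10b overhead
# that works out to the commonly quoted 600 MB/s of data.
print(effective_bytes_per_sec(6e9))  # 600000000.0
```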


This is all a long way of saying that communication lines are rated in bits per second rather than bytes per second because bits per second is almost always technically correct, whereas bytes per second often is not.


