Calculating Checksum: Frame Length vs Generator Polynomial

Discussion Overview

The discussion revolves around the conditions for computing a checksum for a frame in the context of polynomial representations, specifically focusing on the relationship between the frame length and the generator polynomial. The scope includes theoretical aspects of encoding and redundancy in data transmission.

Discussion Character

  • Technical explanation
  • Debate/contested

Main Points Raised

  • One participant asserts that the frame must be longer than the generator polynomial for checksum computation.
  • Another participant explains that the encoded frame will be longer due to the addition of redundancy bits, clarifying that the encoded frame consists of data bits plus a remainder from polynomial division.
  • A different participant questions the initial assertion, suggesting that the data bits (M(x)) do not necessarily need to exceed the generator polynomial (G(x)) before encoding.
  • Further elaboration is provided regarding scenarios where redundancy bits may outnumber data bits, particularly in long-distance communications, indicating that this may not be a strict requirement.
  • Another participant mentions trade-offs in data density and redundancy in magnetic media, referencing the use of different error correction codes.

Areas of Agreement / Disagreement

Participants express differing views on whether the number of data bits must exceed the generator polynomial length before encoding. The discussion remains unresolved regarding the implications of this requirement.

Contextual Notes

There are assumptions about the definitions of M(x) and G(x) that are not fully clarified, particularly whether M(x) refers to the initial data bits or the encoded message. Additionally, the discussion touches on practical considerations in data transmission that may influence the relationship between data and redundancy bits.

prashantgolu
To compute the checksum for some frame with m bits, corresponding to the polynomial M(x), the frame must be longer than the generator polynomial.
Why...?
 
The encoded frame will be longer. If G(x) is an (r+1)-bit polynomial, then reducing a polynomial modulo G(x) produces an r-bit remainder. An encoded frame consists of d + r bits, where d is the number of data bits to be encoded. The encoding process appends r zeroes to the data to form a polynomial, then divides that polynomial by G(x) to produce an r-bit remainder. The encoded frame is then the d data bits followed by the r-bit remainder. Note that the number of data bits, d, can be as small as 1 bit.
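The divide-and-append process described above can be sketched in a few lines. This is a minimal illustration of polynomial division over GF(2), not any standard library's CRC routine; the function names, the bit-list representation, and the example generator x^3 + x + 1 are assumptions made for demonstration only.

```python
# Minimal CRC encoding sketch. Bits are lists of 0/1, most significant first.
# A generator of length r+1 (degree r) yields an r-bit remainder.

def crc_remainder(bits, generator):
    """Divide `bits` by `generator` modulo 2 and return the r-bit remainder."""
    r = len(generator) - 1
    work = list(bits)
    for i in range(len(work) - r):
        if work[i]:  # leading bit set: XOR (subtract mod 2) the generator
            for j, g in enumerate(generator):
                work[i + j] ^= g
    return work[-r:]

def encode_frame(data, generator):
    """Append r zeroes, divide by G(x), and attach the remainder: d + r bits total."""
    r = len(generator) - 1
    return data + crc_remainder(data + [0] * r, generator)

# Example generator: x^3 + x + 1 -> 1011, so r = 3 check bits.
G = [1, 0, 1, 1]
print(encode_frame([1, 1, 0, 1], G))  # 4 data bits + 3-bit remainder
print(encode_frame([1], G))           # works even with a single data bit
```

Note that the second call encodes a message of just d = 1 bit, shorter than the generator itself; the encoded frame is still valid (dividing it by G(x) leaves a zero remainder), which bears on the question of whether M(x) must be longer than G(x).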
 
I get it...but it says that M(x) should be longer than G(x),
i.e. before encoding, the data bits should outnumber the generator polynomial's bits...
(I think they are talking about the initial data bits, not the frame after adding the check bits, which will obviously be longer than the generator polynomial since we are adding r bits to it in any case)
 
I updated my previous post to use your terminology with G(x).

prashantgolu said:
I get it...but it says that M(x) should be longer than G(x)
There's no rule that the number of data bits needs to be greater than the number of redundancy bits. You might want to check and make sure that M(x) doesn't mean an encoded message as opposed to the data portion of an encoded message.

It might waste space or bandwidth, but in some cases there are more redundancy bits than data bits for each message. For communications with distant satellites, for example, the time it takes a message to travel between the Earth and the satellite makes it impractical to rely on status responses and re-transmission of data, so each message may carry 2 or 3 times as many redundancy bits as data bits (a forward error-correcting code is used instead of a CRC).

For magnetic media, if the goal is to maximize effective user data density, there is a trade-off between increasing bit density and increasing the redundancy required to support that bit density while maintaining some unrecoverable read error rate (usually 1 in 10^14 for most computer peripherals). Usually some form of Reed-Solomon ECC is used rather than a CRC.
 
thanks a lot...it makes sense to have it that way :)
 