# Extended or bit-based Reed-Solomon error correction

1. Mar 1, 2017

### Povilas M

I am trying to find or develop an algorithm that can take a 1024-bit array (let's assume random bits, so 128 bytes) and, by adding some redundancy (making the final code longer than 1024 bits), make it possible to decode the data with up to n random bits flipped. The important point is that it has to work on all the bits without splitting them up, so that regardless of how the erroneous bits are distributed, the number of errors it can tolerate stays consistent.

Reed-Solomon codes are the closest thing I've found to a suitable solution: they let you specify the number of parity bytes n and can correct up to n/2 incorrect bytes, effectively letting you control the decoding success threshold. However, they have two issues that prevent me from using them:
1) They work at the byte level, not the bit level. x flipped bits can affect anywhere from x/8 to x bytes, introducing a lot of inconsistency.
2) I could transform the bits into bytes with value 0 or 1, but that creates a 1024-byte array, while Reed-Solomon over GF(256) only supports codewords of up to 255 bytes (correction codes included).
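To make the first issue concrete, here is a small Python sketch (my own illustration, not from the thread) that counts how many bytes are touched by a given number of random bit flips:

```python
import random

def byte_errors_from_bit_flips(nbytes: int, nflips: int, rng: random.Random) -> int:
    """Flip `nflips` distinct random bits in an nbytes-long message
    and count how many bytes end up containing at least one flip."""
    positions = rng.sample(range(nbytes * 8), nflips)
    return len({p // 8 for p in positions})
```

With 16 flips in a 128-byte message, the number of corrupted bytes can land anywhere between 2 (all flips packed into two bytes) and 16 (each flip in its own byte), which is exactly the inconsistency described above.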

If anyone knows of an algorithm that can achieve this, or can direct me to some resources to help me develop one, that'd be greatly appreciated!

2. Mar 2, 2017

### rcgldr

If you are considering transforming bits into bytes, you could use the values 0x00 and 0xff instead, which lets each byte tolerate up to 3 bit flips.
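A minimal sketch of that idea in Python (function names are my own): each bit expands to 0x00 or 0xff, and decoding takes a majority vote over the 8 bits of each coded byte, so up to 3 flips per byte still decode correctly:

```python
def expand_bits(data: bytes) -> bytes:
    """Map each bit of the input to a full byte: 0 -> 0x00, 1 -> 0xff."""
    out = bytearray()
    for byte in data:
        for i in range(7, -1, -1):  # most significant bit first
            out.append(0xff if (byte >> i) & 1 else 0x00)
    return bytes(out)

def collapse_bits(coded: bytes) -> bytes:
    """Majority-vote each coded byte back to a bit.
    Up to 3 flipped bits per byte are guaranteed to decode correctly."""
    out = bytearray()
    for start in range(0, len(coded), 8):
        value = 0
        for b in coded[start:start + 8]:
            value = (value << 1) | (1 if bin(b).count("1") >= 4 else 0)
        out.append(value)
    return bytes(out)
```

The cost is an 8x expansion before any parity is even added, which is why the denser options below are worth considering.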

You could also use 4 data bits per byte, which would allow a (7,4) Hamming code with an additional parity bit in each byte, combined with Reed-Solomon over a larger field like GF(512) (a 9-bit field, so the codeword can exceed 255 symbols). See the Wikipedia article on Hamming(7,4).
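As a sketch of the per-byte inner code in that scheme (the function names are hypothetical, and the outer Reed-Solomon stage is not shown), here is an extended Hamming(7,4) code: 4 data bits, 3 Hamming parity bits, and one overall parity bit per byte, correcting any single flipped bit in the byte:

```python
def hamming84_encode(nibble: int) -> int:
    """Encode 4 data bits (0..15) into an 8-bit extended Hamming codeword."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # Codeword positions 1..7 (standard Hamming layout): p1 p2 d0 p3 d1 d2 d3
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]
    overall = 0
    for b in bits:
        overall ^= b
    bits.append(overall)  # overall parity bit (double-error detection)
    out = 0
    for i, b in enumerate(bits):
        out |= b << i
    return out

def hamming84_decode(byte: int) -> int:
    """Correct up to one flipped bit in the codeword; return the 4 data bits."""
    bits = [(byte >> i) & 1 for i in range(8)]
    syndrome = 0
    for pos in range(1, 8):  # 1-indexed Hamming positions
        if bits[pos - 1]:
            syndrome ^= pos
    if syndrome:  # nonzero syndrome points at the flipped position
        bits[syndrome - 1] ^= 1
    return bits[2] | (bits[4] << 1) | (bits[5] << 2) | (bits[6] << 3)
```

This packs 1024 data bits into 256 coded bytes, each self-correcting one bit flip; the outer Reed-Solomon code then cleans up the bytes whose flips exceeded that.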

If you're willing to add that much redundancy, you could use a (255,128) Reed Solomon code, which could correct up to 63 errors out of 255 bytes.

3. Mar 2, 2017

### Povilas M

Thanks a ton! I'll try the things you have pointed out, I'm sure at least one of them will work! The larger Reed Solomon field sounds exactly like what I need. Thanks again :)