I am trying to find or develop an algorithm that takes a 1024-bit array (let's assume random bits, so 128 bytes) and, by adding some redundancy (making the final code longer than 1024 bits), makes it possible to decode the data even with up to n random bits flipped. The important point is that it has to work on all the bits without splitting them up, so no matter how the erroneous bits are distributed, the number of errors it can tolerate stays consistent.

Reed-Solomon codes are the closest thing I've found to a suitable solution: they let you choose the number of parity bytes and can correct up to half that many incorrect bytes, effectively letting you control the decoding success threshold. However, two issues prevent me from using them:

1) They work at the byte level, not the bit level. x flipped bits can affect anywhere from x/8 (rounded up) to x bytes, which introduces a lot of inconsistency (see the sketch at the end of this post).

2) I could transform the bits into bytes with value 0 or 1, but that creates a 1024-byte array, while a Reed-Solomon codeword over the usual 8-bit symbols can be at most 255 bytes long, parity included.

If anyone knows of an algorithm that can achieve this, or can point me to some resources to help me develop one, that would be greatly appreciated!
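To illustrate point 1, here is a minimal Python sketch (the function and parameter names are just mine, for illustration) showing how the same number of bit flips can corrupt a widely varying number of bytes, which is exactly the inconsistency a byte-level code has to absorb:

```python
import random

def corrupted_byte_count(data: bytes, num_bit_flips: int) -> int:
    """Flip num_bit_flips distinct random bits in data and count
    how many bytes differ from the original afterwards."""
    positions = random.sample(range(len(data) * 8), num_bit_flips)
    corrupted = bytearray(data)
    for p in positions:
        corrupted[p // 8] ^= 1 << (p % 8)  # flip one bit
    return sum(1 for a, b in zip(data, corrupted) if a != b)

# 1024-bit (128-byte) message, 16 random bit flips per trial
data = random.randbytes(128)  # Python 3.9+
for _ in range(5):
    # a byte-level code sees anywhere from 2 to 16 symbol errors here
    print(corrupted_byte_count(data, 16))
```

So with 16 flipped bits a Reed-Solomon decoder might face anywhere between 2 and 16 byte errors, and I would have to provision parity for the worst case.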