Efficiently computing the parity of the Hamming weight of a binary sequence

In summary, uart suggests using a lookup table to quickly determine the parity of a binary number.
  • #1
burritoloco
Hello,

I'm wondering if there are any tricks to quickly determine the parity of the weight of a binary number, i.e., whether it's even or odd. I'm not interested at all in the actual weight of the number, only in its parity. I know that if, say, x = a XOR b, then x has odd weight iff exactly one of a and b has odd weight. But have you come across a good algorithm for this problem before? The web seems full of material on the actual Hamming weight problem, but I haven't noticed anything on this variant. Thanks a lot guys.
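
[Editor's note: one classic trick builds directly on the XOR property mentioned above: fold the word onto itself so each XOR halves the number of bits whose parity still matters, and the last remaining bit is the answer. A minimal C++ sketch, my illustration rather than code from the thread:]

Code:
#include <cstdint>

// Parity of a 32-bit word by repeated XOR folding.
// Each fold preserves overall parity, since parity(a ^ b) = parity(a) XOR parity(b).
unsigned parity32(uint32_t x)
{
    x ^= x >> 16;
    x ^= x >> 8;
    x ^= x >> 4;
    x ^= x >> 2;
    x ^= x >> 1;
    return x & 1;   // 1 iff the Hamming weight is odd
}

This costs five shifts and five XORs for 32 bits, independent of the weight.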
 
  • #2
Hey burritoloco.

Have you come across error correcting codes? Hamming actually wrote a book on these; it deals with the construction of error correcting codes where you can not only detect errors but even correct some of them, depending on how the code is constructed.

You can construct these with matrices where sums are considered sums modulo 2 instead of just normal sums.

The different kinds of codes are constructed based on a lot of Venn diagrams. You can construct different codes to correct specific kinds of errors, but the general rule is that you need to add more bits to correct more errors. Note that you will end up representing quite a bit more than your minimum data.
 
  • #3
burritoloco said:
Hello,

I'm wondering if there are any tricks to quickly determine the parity of the weight of a binary number, i.e., whether it's even or odd. I'm not interested at all in the actual weight of the number, only in its parity. I know that if, say, x = a XOR b, then x has odd weight iff exactly one of a and b has odd weight. But have you come across a good algorithm for this problem before? The web seems full of material on the actual Hamming weight problem, but I haven't noticed anything on this variant. Thanks a lot guys.

When I've had to do this efficiently in the past, I've just used a lookup table for a very fast implementation on say a byte (or actually a nibble, because I'm too lazy to type out a 256-entry lookup) and then used the property you mentioned to extend that efficiently to words or dwords etc. as required.

For example, my (Pascal) code for "word" parity would be something like:
Code:
var a      : word;
    ah,al  : byte;
    parity : byte;
    p_look : array[0..15] of byte = (0,1,1,0,1,0,0,1,1,0,0,1,0,1,1,0);

begin
  a := $E234;                   {word to test}
  ah := hi(a); al := lo(a);
  parity := p_look[ah shr 4] xor p_look[ah and $0f] xor 
            p_look[al shr 4] xor p_look[al and $0f];

  writeln(parity);              {parity is "1" for the given test word}
end.
 
  • #4
If your PC is relatively new, its processor may have the "popcnt" instruction, which returns the number of 1 bits in a value. For Visual Studio 2010 C/C++, this can be accessed using the intrinsic function __popcnt().

http://msdn.microsoft.com/en-us/library/bb385231.aspx

Odd parity would just be an odd count of bits. To do this in Visual Studio 2010 C:

Code:
    #include <intrin.h>
//  ...
    parity = __popcnt(a ^ b) & 0x01;
//  ...

gcc implements this intrinsic function under the name __builtin_popcount().

Otherwise, just use a table as suggested by uart.
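
[Editor's note: as a hedged aside, not from the thread: with gcc on x86 you can also ask at run time whether the CPU really has the instruction, via __builtin_cpu_supports, before choosing a code path. A small sketch:]

Code:
#include <cstdint>
#include <cstdio>

// __builtin_popcount is always correct; gcc substitutes a software
// routine when the target lacks a native popcnt instruction.
static unsigned parity_of(uint32_t x)
{
    return __builtin_popcount(x) & 1u;
}

int main()
{
    if (__builtin_cpu_supports("popcnt"))
        std::printf("native popcnt available\n");
    std::printf("parity of 0xE234 = %u\n", parity_of(0xE234)); // prints 1 (weight 7)
}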
 
  • #5
Thanks guys.

Chiro, I took a class on Finite Fields and Coding theory over a year ago. I'm currently doing some work over GF(2) and bits are all over ;).

Thanks uart, but I think that maybe your method might be exponential in time. Given an N-bit word and a look-up table of weight parities of n-bit words, you seem to be XORing 2^{N-n} words to get the result.

rcgldr, thanks for the comment; you already informed me about popcnt in another post of mine ;). It would be really unfortunate if one had to count all the '1's in a sequence just to know its parity.

Just another quick question: what are the time complexities of the bitwise arithmetic operations OR, XOR, AND, and shifts?

Thanks again everyone.
 
  • #6
uart, you are also using 2^(N-n) arithmetical operations to slice the N-bit word into 2^(N-n) n-bit words...
 
  • #7
Don't know about you guys, but something in my gut tells me that there should be a quicker way to get the parity of the weight that doesn't involve computing its actual weight.
 
  • #8
burritoloco said:
I took a class on Finite Fields and Coding theory over a year ago. I'm currently doing some work over GF(2) and bits are all over.
I've worked with this type of math for error correction code, but haven't done much with Hamming codes. Normally a received code word is re-encoded or decoded to check for errors; I'm not sure why you just want to know the even / odd bit parity.

burritoloco said:
Thanks uart, but I think that maybe your method might be exponential in time. Given an N-bit word and a look-up table of weight parities of n-bit words, you seem to be XORing 2^{N-n} words to get the result.
The time would be linear, not exponential; the number of operations is proportional to N/n.

burritoloco said:
Don't know about you guys, but something in my gut tells me that there should be a quicker way to get the parity of the weight that doesn't involve computing its actual weight.
I'm not sure how, since the Hamming weight is just the number of 1 bits in a codeword, and popcnt should be a fast way to do this.

http://en.wikipedia.org/wiki/Hamming_weight
 
  • #9
burritoloco said:
Thanks uart, but I think that maybe your method might be exponential in time.

I'm not quite sure how you worked that out? It's non-iterative, with one lookup plus one XOR per 4 bits, which sounds linear to me. :smile:

Of course you can make it one lookup plus one xor per 8 bits if you don't mind making the lookup table bigger.
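
[Editor's note: for what it's worth, the bigger table doesn't need to be typed out by hand; a sketch (in C++ rather than the thread's Pascal) that builds the 256-entry byte table from the 16-entry nibble table at startup:]

Code:
#include <cstdint>

// Nibble parities, same values as the Pascal p_look table above.
static const uint8_t p4[16] = {0,1,1,0,1,0,0,1,1,0,0,1,0,1,1,0};
static uint8_t p8[256];  // byte parities, filled in once at startup

void build_byte_table()
{
    // Parity of a byte is the XOR of the parities of its two nibbles.
    for (int i = 0; i < 256; ++i)
        p8[i] = p4[i >> 4] ^ p4[i & 0x0F];
}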
 
  • #10
Given an N-bit word and a look-up table of weight parities of n-bit words, you seem to be XORing 2^{N-n} words to get the result
I take this back! You're doing N/n XORs on words smaller than the original one. As rcgldr pointed out, it takes time proportional to N/n to slice the original sequence, and I stand corrected!

Your method is great when N is bounded above (usually by 64 these days). Unfortunately, I'm dealing with arbitrarily large integers; maybe I should have said that earlier ;).
rcgldr said:
I've worked with this type of math for error correction code, but haven't done much with Hamming codes. Normally a received code word is re-encoded or decoded to check for errors; I'm not sure why you just want to know the even / odd bit parity.
I never said that I'm doing error codes :). But I'll have to agree with you that computing the weight is the way to go if you want to know the parity: there's just no other alternative that I've noticed online! However, I do know that mathematicians are very tricky people :) (a compliment!). Thanks guys.

p.s. popcnt and __builtin_popcount are so fast it's unbelievable. On my computer, these two give the weights of all integers from 1 to 2^61 - 1 in about 0 seconds! BTW, GCC also includes a __builtin_parity function; Visual Studio probably has one as well.
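
[Editor's note: since the integers here can be arbitrarily large, one natural approach (a sketch under the assumption that the number is stored as an array of 64-bit limbs, my illustration rather than anything posted in the thread): XOR all the limbs together first, then take the parity of the single remaining word. That is N/64 XORs plus one word-parity step.]

Code:
#include <cstdint>
#include <cstddef>

// Parity of the Hamming weight of a big integer stored as n 64-bit limbs.
// XORing limbs preserves overall parity (parity(a ^ b) = parity(a) XOR parity(b)),
// so only one word-parity computation is needed at the very end.
unsigned bigint_parity(const uint64_t *limbs, std::size_t n)
{
    uint64_t acc = 0;
    for (std::size_t i = 0; i < n; ++i)
        acc ^= limbs[i];
    return __builtin_parityll(acc);  // gcc builtin; any word-parity trick works here
}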
 
  • #11
burritoloco said:
GCC also has the __builtin_parity function included; visual studio probably has one as well.
I didn't find one. Based on a web search, it appears that gcc's __builtin_parity() uses the PF (parity flag) bit in the status register, which is set if the low-order byte of a result (other than a move) has even parity. The advantage is that all x86 processors have the PF status bit. The disadvantage is that the builtin needs more operations than the two-instruction sequence {"popcnt", "and"}, since it has to shift and XOR the operand down to a single byte, then do a LAHF, then shift and mask ("and") to move the PF bit into the low-order bit.

burritoloco said:
I never said that I'm doing error codes.
I'm curious as to why you want to calculate the parity of an arbitrarily large integer.
 
  • #12
burritoloco said:
Don't know about you guys, but something in my gut tells me that there should be a quicker way to get the parity of the weight that doesn't involve computing its actual weight.
Well, the XOR method doesn't require calculating the Hamming weight. It just requires some way of calculating the parity of each chunk of some given size; the overall parity is found by XOR-ing the parities of the separate chunks. The weight is never calculated.

burritoloco said:
p.s. popcnt and __builtin_popcount are so fast it's unbelievable.
Yes, since popcnt is implemented in hardware it is of course going to be the fastest way, if your processor supports it (SSE4), even though in general computing the complete Hamming weight would not necessarily be the fastest software approach.

burritoloco said:
On my computer, these two give the weights of all integers from 1 to 2^61 - 1 in about 0 seconds! BTW, GCC also includes a __builtin_parity function; Visual Studio probably has one as well.
Ok, that is actually a bit hard to believe, even if we round that zero seconds up to, say, one complete second. You're presumably generating the numbers 1 .. 2^61 in a loop, and there has to be a minimum (at machine level) of at least one increment, one popcnt, one xor and one compare+branch per integer tested. That's 4 × 2^61, or roughly 9.2 × 10^18, instructions; to do that in even 1 second is, by my calculations, about 10^13 MIPS.

I wonder whether, in the case that this is just a benchmark and the results aren't being stored or used anywhere, some optimizing compiler is optimizing your code right out of existence?
 
  • #13
rcgldr, interesting. Are you sure they aren't using look-up tables?

rcgldr said:
I'm curious as to why you want to calculate the parity of an arbitrarily large integer.
I'm doing a bit of research on primitive polynomials over GF(2). I needed to compute the parity of certain blocks in a binary sequence and wanted to know the time complexity for doing that.

uart said:
I wonder whether, in the case that this is just a benchmark and the results aren't being stored or used anywhere, some optimizing compiler is optimizing your code right out of existence?
Yes, I was using the -O2 option :). I did have a hard time believing it as well and thought there was some kind of error, but nope! Perhaps they are using look-up tables as well?

uart said:
Well, the XOR method doesn't require calculating the Hamming weight. It just requires some way of calculating the parity of each chunk of some given size; the overall parity is found by XOR-ing the parities of the separate chunks. The weight is never calculated.
Look-up tables aside, isn't the time complexity of this method linearly proportional to the actual weight of the sequence? If so, then computing the actual weight is equivalent in time.
 
  • #14
burritoloco said:
Yes, I was using the -O2 option :). I did have a hard time believing it as well and thought there was some kind of error, but nope! Perhaps they are using look-up tables as well?
No, if you use popcnt then it doesn't need a lookup table; it's done in hardware, which is even faster.

No matter which way you look at it you're going to need at least 4 machine language instructions per iteration. Now let's say your computer is really good, say it runs at 10GHz and can unroll the loop and do 100 instructions in parallel per clock cycle! It would still take over 100 days to complete your code!

Do the maths,

(4*2^61) / (100*10^10) / (3600 * 24) = 106.75 days (not years, as I first wrote). :blushing:
 
  • #15
burritoloco said:
Are you sure they aren't using look-up tables?
Assuming your compiler can output assembly code, output the assembly code and look at the generated code.

uart said:
Do the math
(4*2^61) / (100*10^10) / (3600 * 24) = 106.75 years.
That would be 106.75 days.

burritoloco said:
I'm doing a bit of research on primitive polynomials over GF(2). I needed to compute the parity of certain blocks in a binary sequence and wanted to know the time complexity for doing that.
Assuming that each block is an encoded codeword, "parity" might mean the remainder produced by dividing a block by the primitive polynomial, as opposed to the even or odd parity of a block. I'm not sure why a person would want to know even versus odd parity of a block when using primitive polynomials.
 
  • #16
rcgldr said:
That would be 106.75 days.

LOL sorry. :blushing: It's late at night here.

Anyway, still a very long time compared with "0 seconds". :smile:
 
  • #17
uart said:
No matter which way you look at it you're going to need at least 4 machine language instructions per iteration. Now let's say your computer is really good, say it runs at 10GHz and can unroll the loop and do 100 instructions in parallel per clock cycle! It would still take over 100 days to complete your code!

Do the maths,

(4*2^61) / (100*10^10) / (3600 * 24) = 106.75 days.

Well, I decided to repeat the experiment. I first printed (cout) the weights for a smaller loop to make sure everything was in order. The following loop, using the gcc function int __builtin_popcountl(long n) and the -O2 compiler option, took 0 seconds to compute.

Code:
long m = (1L << 63) - 1;
for(long i=0; i < m; i++)
{
    __builtin_popcountl(i);
}

Even worse, I then measured in milliseconds, and... again, 0 ms. I then removed the -O2 option from the build; same time. My laptop is an Acer 5736Z, Pentium(R) Dual-Core CPU T4500 @ 2.30GHz × 2, x64, not a supercomputer. Have you tried a similar experiment yourself? I would love to output the assembly code if anyone could show me how.

rcgldr said:
Assuming that each block is an encoded codeword, "parity" might mean the remainder produced by dividing a block by the primitive polynomial, as opposed to the even or odd parity of a block. I'm not sure why a person would want to know even versus odd parity of a block when using primitive polynomials.
From your comment I take it that perhaps "block" is a term already in use in relation to codewords? I merely meant chunks of a binary sequence. Again, no codewords involved in what I'm doing.
 
  • #18
burritoloco said:
Code:
long m = (1L << 63) - 1;
for(long i=0; i < m; i++)
{
    __builtin_popcountl(i);
}
Try changing this to:

Code:
long j = 0;
long m = (1L << 31) - 1;
for(long i=0; i < m; i++)
{
    j += __builtin_popcountl(i);
}

cout << j << endl;
 
  • #19
Cool, computing time was 29197 milliseconds, j = 33285996513. Still, addition time is proportional to the number of digits, no?
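
[Editor's note: that j is a good sanity check that the loop really ran: over 0 ≤ i < 2^31 each of the 31 bit positions is set in exactly half of the values, so the total popcount is 31 × 2^30 = 33285996544; the loop stops just short of i = 2^31 - 1, which has popcount 31, and 33285996544 - 31 = 33285996513, exactly the value printed.]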
 
  • #20
burritoloco said:
I would love to output the assembly code if anyone would show me how?

Use -S (upper case) to generate assembly source:

gcc -S -O2 example.cpp

Note that -S stops the compiler before the assembly stage, so no object file is produced and nothing is linked.

This will produce an assembler file called example.s.
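
[Editor's note: you can then search that file for the instruction of interest, for example with grep -i popcnt example.s, to see whether a native popcnt was emitted or a table / library routine was substituted.]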

burritoloco said:
From your comment I take it that perhaps "block" is a term already in use in relation to codewords?
No, there's no standard for the term "block" when working with primitive polynomials. I wasn't sure if in your case a block would be a codeword or not.

burritoloco said:
Cool, computing time was 29197 milliseconds, j = 33285996513. Still, addition time is proportional to the number of digits, no?
You can try this and hope the compiler doesn't optimize it down to just the last iteration:

Code:
register long j = 0;
long m = (1L << 31) - 1;
for(long i=0; i < m; i++)
{
    j = __builtin_popcountl(i);
}

cout << j << endl;
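
[Editor's note: another common gcc idiom for keeping a benchmark loop alive (a sketch of my own, not something suggested in the thread) is an empty inline-asm statement that tells the optimizer the value is consumed:]

Code:
long m = (1L << 31) - 1;
for (long i = 0; i < m; i++)
{
    long p = __builtin_popcountl(i);
    // Empty asm with p as an input: generates no code, but the compiler
    // must assume p is used, so the popcount above cannot be eliminated.
    asm volatile("" : : "r"(p));
}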
 
  • #21
Ooh, this is what I get after adding the -S option for g++:

Code:
./example: line 1: .file: command not found
./example: line 2: .globl: command not found
./example: line 3: .section: command not found
./example: line 4: .LC0:: command not found
./example: line 5: .string: command not found
./example: line 6: .LC1:: command not found
./example: line 7: .string: command not found
./example: line 8: .LC4:: command not found
./example: line 9: .string: command not found
./example: line 10: .section: command not found
./example: line 11: .p2align: command not found
./example: line 12: .globl: command not found
./example: line 13: .type: command not found
./example: line 14: main:: command not found
./example: line 15: .LFB1025:: command not found
./example: line 16: .cfi_startproc: command not found
./example: line 17: pushq: command not found
./example: line 18: .cfi_def_cfa_offset: command not found
./example: line 19: .cfi_offset: command not found
./example: line 20: xorl: command not found
./example: line 21: xorl: command not found
./example: line 22: pushq: command not found
./example: line 23: .cfi_def_cfa_offset: command not found
./example: line 24: .cfi_offset: command not found
./example: line 25: xorl: command not found
./example: line 26: subq: command not found
./example: line 27: .cfi_def_cfa_offset: command not found
./example: line 28: movq: command not found
./example: line 29: call: command not found
./example: line 30: xorl: command not found
./example: line 31: jmp: command not found
./example: line 32: .p2align: command not found
./example: line 33: .p2align: command not found
./example: line 34: .L6:: command not found
./example: line 35: movq: command not found
./example: line 36: call: command not found
./example: line 37: .L3:: command not found
./example: line 38: cltq: command not found
./example: line 39: addq: command not found
./example: line 40: addq: command not found
./example: line 41: cmpq: command not found
./example: line 42: jne: command not found
./example: line 43: syntax error near unexpected token `('
./example: line 43: ` leaq 16(%rsp), %rdi'

Without the -S it works fine though.
 
  • #22
burritoloco said:
This is what I get after adding the -S option for g++ ...
Try with the -c option (means don't link):

gcc -O2 -S -c example.cpp

In spite of the errors you got, gcc may have produced an assembly source file; look for a file with a .s suffix.
 
  • #23
Same thing happened with -c. I tried:

Code:
register long j = 0;
long m = (1L << 31) - 1;
for(long i=0; i < m; i++)
{
    j = __builtin_popcountl(i);
}

cout << j << endl;

Computing time was now 29107 milliseconds.
 
  • #24
The .s file is not there...
 
  • #25
burritoloco said:
j = __builtin_popcountl(i);

Computing time was now 29107 milliseconds.
That's just under 30 seconds, so it appears the popcount and the rest of the loop instructions are being performed (2^31) - 1 times. I tried an assembly code version with a two-instruction loop in 32-bit mode:

Code:
        mov     ecx,0ffffffffh
test0:  popcnt  eax,ecx
        loop    test0

and it takes about 5.7 seconds on my system (Intel 2600K). I also tried popcnt eax,ebx with various values in ebx, and the timing remained the same.

burritoloco said:
The .s file is not there...
I'm not sure why this is happening, and I don't have a copy of gcc here. Try

gcc --help

to see the options. Also make sure you're using an upper case S:

gcc -S example.cpp

I still don't understand why you want to calculate the even / odd parity when you're learning about GF(2) primitive polynomials.
 
  • #26
rcgldr said:
I still don't understand why you want to calculate the even / odd parity when you're learning about GF(2) primitive polynomials.
:) OK, to satisfy the curiosity: I'm looking at the factorization of cyclotomic polynomials of order 2^n - 1; all their irreducible factors are the primitive polynomials of degree n. There are several methods to compute primitive polynomials, but this one seems at least unusual. In the method I'm looking at, the coefficients of these polynomials are obtained as the parities of the weights of certain chunks of some types of "primitive" binary sequences, whose size is given by the (large) number of irreducible factors of the corresponding cyclotomic polynomial. I was wondering about the time complexity of computing the coefficients with this method, but unfortunately, with what I know, the overall method seems to be exponential (the tests I've done suggest this as well). However, there are several other known methods that compute these primitive polynomials in polynomial time...

Thanks rcgldr.
 
  • #27
burritoloco said:
I'm looking at the factorization of cyclotomic polynomials of order 2^n - 1; all their irreducible factors are the primitive polynomials of degree n.
Thanks for the explanation. My experience with this stuff is with Reed-Solomon error correction code, where the primitive polynomials are usually given and relatively small (9 bits, 13 bits, 17 bits, 33 bits), and where crude methods are good enough to find them if needed. Sometimes irreducible but non-primitive polynomials are used, as in the case of AES encryption (not sure why this was done).
 
  • #28
burritoloco said:
Still, addition time is proportional to the number of digits, no?

Yep. But the overhead of an integer add is similar to that of the logical instructions, so it would be roughly the same run time if you just chained them together with XOR (to give the overall parity of the string) and then used a single "and 01" at the end to isolate the parity.

burritoloco said:
Cool, computing time was 29197 milliseconds, j = 33285996513.
Ok, thanks for the results. So if you multiply that time by 2^30, it extrapolates to about 1000 years for the originally proposed test. Now you see why I was so skeptical of those previous results. :smile:
 
  • #29
burritoloco said:
My laptop is an Acer 5736Z, Pentium(R) Dual-Core CPU T4500 @ 2.30GHz × 2, x64, not a super-computer.

Looks like nobody noticed that this CPU does not support SSE4 and (I'm pretty sure) doesn't support ABM (advanced bit manipulation) either, so there is no native popcnt in this case. I guess gcc must substitute an x86 software equivalent on targets without the ABM extensions.
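
[Editor's note: whatever gcc actually substitutes (this is not claimed to be its exact routine), a classic software population count that such a fallback could resemble is the well-known SWAR method, which needs no table and no popcnt:]

Code:
#include <cstdint>

// SWAR (SIMD-within-a-register) population count for 32 bits:
// sum adjacent 1-bit fields into 2-bit fields, then 2 -> 4 and 4 -> 8,
// and finally add the four byte counts together with one multiply.
int popcount32(uint32_t v)
{
    v = v - ((v >> 1) & 0x55555555u);                  // 2-bit sums
    v = (v & 0x33333333u) + ((v >> 2) & 0x33333333u);  // 4-bit sums
    v = (v + (v >> 4)) & 0x0F0F0F0Fu;                  // byte sums
    return (v * 0x01010101u) >> 24;                    // total in the top byte
}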
 
  • #30
rcgldr said:
Code:
        mov     ecx,0ffffffffh
test0:  popcnt  eax,ecx
        loop    test0
It takes about 5.7 seconds on my system (Intel 2600K). I also tried popcnt eax,ebx with various values in ebx, and the timing remained the same.

Hi rcgldr. My PC is too old to support native popcnt, so just for fun I thought I'd try an x86 parity-flag implementation. Surprisingly, the following calculates the overall parity of the string [1, 2, 3 ... 7FFFFFFF] in about 14.8 seconds on my aging single-core Athlon64 (@2.4 GHz).

Code:
var myGlobalResult : longword;

procedure testparity;
label Strt,L1,L2,L3,L4;
begin
 asm
         mov ecx,$7FFFFFFF       {loop counter, also the current value}
         mov edx,0               {running parity, toggled with not}
   Strt:
         mov eax,ecx
         or  eax,eax             {sets PF from the low byte of eax}
         jpe L1                  {skip the toggle if that byte has even parity}
         not edx
   L1:
         shr eax,08              {next byte; shr also sets PF}
         jpe L2
         not edx
   L2:
         shr eax,08
         jpe L3
         not edx
   L3:
         shr eax,08
         jpe L4
         not edx
   L4:
         loop Strt               {dec ecx, jump while ecx <> 0}
         and edx,01              {reduce the toggled register to 0 or 1}
         mov myGlobalResult,edx
 end; {asm}
end;  {testparity}
Just in case anyone is interested, the overall parity of that string was 0 (even). :smile:
 
  • #31
uart said:
X86 parity flag implementation.
Based on some comments in some old threads at other forums, I think this is similar to what gcc probably does. It takes 4.4 seconds on my system (note it's 1/2 the number of iterations I used for my popcnt test).

Code:
        mov     ecx,07fffffffh
        xor     edx,edx                 ;set edx = parity
par0:   mov     eax,ecx
        xor     al,ah                   ;PF = parity of the low 16 bits
        jpe     par1
        not     edx
par1:   shr     eax,16                  ;bring the high 16 bits down
        xor     al,ah                   ;PF = parity of the high 16 bits
        jpe     par2
        not     edx
par2:   loop    par0
        neg     edx                     ;0 -> 0 (even), -1 -> 1 (odd)
 
  • #32
Hey, good idea to add the "xor al,ah" to check 16 bits at once; that nearly doubles the speed. It reduced my time from 14.7 sec down to about 8.6 seconds. :smile:

rcgldr said:
note it's 1/2 the number of iterations I used for my popcnt test
Yeah, I noticed that you had 2^32 - 1 for that previous test. At first I didn't see it, and I was seriously puzzled as to why my much slower computer seemed to be doing the tight loop at the same speed as yours, or even slightly faster (about 5.6 sec). Then I noticed you were doing twice as many iterations.
 
  • #33
I wasn't sure what the return value is supposed to be. At the end of the code, the "neg edx" returns 0 for even parity and 1 for odd parity; to reverse this, use "inc edx" instead. (Also, the return value would normally be in eax, not edx.)
 

1. What is the Hamming weight of a binary sequence?

The Hamming weight of a binary sequence is the number of 1s in the sequence. It is also known as the population count or the number of set bits.

2. Why is computing the parity of the Hamming weight important?

The parity of the Hamming weight is important in fields such as error detection and correction, cryptography, and coding theory. A single parity bit is the simplest error-detecting code: it detects any odd number of flipped bits in a transmitted word.

3. What does it mean to efficiently compute the parity of the Hamming weight?

Efficiently computing the parity of the Hamming weight means finding a method or algorithm that can calculate the parity with the least amount of time and resources. This is important in applications where large amounts of data need to be processed quickly.

4. What are some methods for efficiently computing the parity of the Hamming weight?

Some methods for efficiently computing the parity of the Hamming weight include XOR folding, lookup tables, and hardware instructions such as popcnt. These methods can significantly reduce the time required compared with examining bits one at a time.

5. What are the potential challenges in efficiently computing the parity of the Hamming weight?

One potential challenge is portability: the fastest methods rely on hardware support (such as the popcnt instruction) that not every processor provides, so a software fallback is needed. Another is scale: for arbitrarily large integers the work necessarily grows linearly with the length of the sequence.
