# Prove this inequality about the weights of code words (Coding Theory)

imDooiNGMATH
Homework Statement
Prove ##wt(x+y) \leq wt(x) + wt(y)##. where "wt(x)" is referring to the weight of a specific code word
Relevant Equations
##0 \leq wt(x)##, ##0 \leq wt(y)##
I'm trying to prove the following:

##wt(x+y) \leq wt(x) + wt(y)##, where "wt(x)" is referring to the weight of a specific code word.

Proof:

For two code words ##x, y \in F^{n}_2##, we have the inequalities ##0 \leq wt(x)## and ##0 \leq wt(y)##. Adding these together, we have ##0 \leq wt(x) + wt(y)##. From here, adding ##wt(x+y)## to both sides, we have ##wt(x+y) \leq wt(x) + wt(y) + wt(x+y)##. From this, it is clear that ##wt(x) + wt(y) \leq wt(x) + wt(y) + wt(x+y)##. Thus, ##wt(x+y) \leq wt(x) + wt(y) \leq wt(x) + wt(y) + wt(x+y)##.

At first, I was thinking I should do cases where ##x = y## and ##x \neq y##, but reading over it, this seems sufficient.

Office_Shredder
It's not obvious to me how the last two lines follow from the lines before them.

Office_Shredder said:
It's not obvious to me how the last two lines follow from the lines before them.
I've been thinking about this since I posted and I think what you pointed out is what's bothering me. With the cases idea, it's easier to make use of the digits in the words themselves to more directly make the point.

For example:
A case where ##x = y##. If ##x = y##, then ##x + y = 0##, and as such ##wt(x + y) = 0##. As ##0 \leq wt(x)## and ##0 \leq wt(y)##, we have ## 0 \leq wt(x) + wt(y)##. And thus ##wt(x+y) \leq wt(x) + wt(y)##.

Then I just need to make a similarly detailed argument for the case ##x \neq y##. I second guessed this idea, but I think this is a more proper way to go about it because the above makes a leap in logic at the end. What do you think?

imDooiNGMATH said:
For example:
A case where ##x = y##. If ##x = y##, then ##x + y = 0##, and as such ##wt(x + y) = 0##. As ##0 \leq wt(x)## and ##0 \leq wt(y)##, we have ## 0 \leq wt(x) + wt(y)##. And thus ##wt(x+y) \leq wt(x) + wt(y)##.
With your expression x + y, it looks like you are doing bitwise binary addition of 1-bit quantities. So with x = y, you have 0 + 0 = 1 and 1 + 1 = 0. Is that what's happening here? How is wt(x) being defined? Is it the number of nonzero bits in a word? For example, would wt(100110) be 3?

I don't see how you can do a proof without some idea of how wt() is defined.

Mark44 said:
With your expression x + y, it looks like you are doing bitwise binary addition of 1-bit quantities. So with x = y, you have 0 + 0 = 1 and 1 + 1 = 0. Is that what's happening here? How is wt(x) being defined? Is it the number of nonzero bits in a word? For example, would wt(100110) be 3?

I don't see how you can do a proof without some idea of how wt() is defined.
Yep! Everything you said is exactly how everything is defined.

Edit: Sorry, actually in this case 0+0 = 0, not 1.

imDooiNGMATH said:
Everything you said is exactly how everything is defined.
Which is information you should have included in your post. Also, when you add two multibit numbers, are there carries?

What I would do is play around with some specific examples, and see if you can generalize things for a proof. For example, suppose that ##x, y \in F_2^4##; i.e., bit strings of length 4.

Some examples:
1. x = 1001, y = 1001 -- wt(x) = 2, wt(y) = 2, wt(x + y) = 1 (assuming the 1-bit additions carry)
2. x = 1100, y = 0011 -- wt(x) = 2, wt(y) = 2, wt(x + y) = 4
3. x = 1010, y = 1110 -- wt(x) = 2, wt(y) = 3, wt(x + y) = 0 (again assuming the 1-bit additions carry)

Mark44 said:
Which is information you should have included in your post. Also, when you add two multibit numbers, are there carries?

What I would do is play around with some specific examples, and see if you can generalize things for a proof. For example, suppose that ##x, y \in F_2^4##; i.e., bit strings of length 4.

Some examples:
1. x = 1001, y = 1001 -- wt(x) = 2, wt(y) = 2, wt(x + y) = 1 (assuming the 1-bit additions carry)
2. x = 1100, y = 0011 -- wt(x) = 2, wt(y) = 2, wt(x + y) = 4
3. x = 1010, y = 1110 -- wt(x) = 2, wt(y) = 3, wt(x + y) = 0 (again assuming the 1-bit additions carry)
What do you mean by "the 1-bit additions carry"? I'm assuming they don't in this context, because for any codewords x, y with x = y we have x + y = 0, so any instance of 1 + 1 gives 0. Sorry if I'm misunderstanding. I naively assumed the conventions of the field would be more uniform, like in some other math courses I've taken.

Regarding what you said about examples, I do have the following worked out. Still regarding cases:

If ##x \neq y##, then the number of nonzero digits in ##x + y## can't be zero, so we have ##1 \leq wt(x+y)##. It is also the case that, because ##x \neq y##, at least one of them must have weight at least one, since they can't both be the zero word (the code word with all entries equal to zero). So without loss of generality, we have ##1 \leq wt(x)## and ##0 \leq wt(y)##.

I don't know if any of this will be particularly useful yet.

Edit: I think I got it. There's an identity: ##wt(x+y) = wt(x) + wt(y) - 2wt(x \cap y)##. (##x \cap y## is pair-wise multiplication of the code words.) Rearranging, ##wt(x) + wt(y) = wt(x+y) + 2wt(x \cap y)##. Since ##0 \leq 2wt(x \cap y)##, it is immediately clear that ##wt(x+y) \leq wt(x+y) + 2wt(x \cap y) = wt(x) + wt(y)##. What a great way to spend several hours (not).
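That identity, and the target inequality, can be checked exhaustively for small word lengths. A quick sketch in Python (the helper names `wt`, `add`, and `meet` are my own, not from the thread):

```python
from itertools import product

def wt(v):
    """Hamming weight: the number of nonzero entries."""
    return sum(v)

def add(x, y):
    """Componentwise addition mod 2 (XOR)."""
    return tuple((a + b) % 2 for a, b in zip(x, y))

def meet(x, y):
    """Componentwise product; the x ∩ y used above."""
    return tuple(a * b for a, b in zip(x, y))

# Check wt(x+y) = wt(x) + wt(y) - 2*wt(x ∩ y) for every pair in F_2^4.
for x in product((0, 1), repeat=4):
    for y in product((0, 1), repeat=4):
        assert wt(add(x, y)) == wt(x) + wt(y) - 2 * wt(meet(x, y))
        assert wt(add(x, y)) <= wt(x) + wt(y)
print("identity holds for all pairs in F_2^4")
```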

Last edited:
You wrote this in post #1:
imDooiNGMATH said:
For two code words ##x, y \in F^{n}_2##
I assume this notation means vectors of length n whose components come from the set {0, 1}. From the lack of information that you provided, I assumed that maybe you were talking about strings of bits, with addition defined as binary addition on n-bit words. If that's not the case, you need to clarify exactly what x + y means.
imDooiNGMATH said:
What do you mean by "the 1-bit additions carry"? I'm assuming they don't in this context, because for any codewords x, y with x = y we have x + y = 0, so any instance of 1 + 1 gives 0. Sorry if I'm misunderstanding. I naively assumed the conventions of the field would be more uniform, like in some other math courses I've taken.
This is all fine if x and y happen to be in ##F_2^1##; i.e., single bits. But what if x and y aren't just single bits? How is addition defined?

Mark44 said:
You wrote this in post #1:
I assume this notation means vectors of length n whose components come from the set {0, 1}. From the lack of information that you provided, I assumed that maybe you were talking about strings of bits, with addition defined as binary addition on n-bit words. If that's not the case, you need to clarify exactly what x + y means.
This is all fine if x and y happen to be in ##F_2^1##; i.e., single bits. But what if x and y aren't just single bits? How is addition defined?

Ok, so your first paragraph holds for how addition is defined in this context:

It can essentially be thought of as vector addition. It should also be noted that we are working in ##\mathbb Z_{2}## (this is how the addition is defined, and these are binary block codes), hence the set ##\{0,1\}##.

##x + y = (x_1 + y_1)(x_2 + y_2)...(x_n + y_n) ##
A similar approach is used for multiplication, denoted ##x \cap y##: just multiply corresponding entries of the strings, working mod 2.
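These two componentwise operations can be sketched on bit strings directly; a minimal version in Python, assuming words are given as equal-length strings of '0' and '1' (the names `add` and `intersect` are my own):

```python
def add(x, y):
    """Componentwise sum mod 2 of two equal-length bit strings."""
    return "".join(str((int(a) + int(b)) % 2) for a, b in zip(x, y))

def intersect(x, y):
    """Componentwise product mod 2 (the x ∩ y notation above)."""
    return "".join(str(int(a) * int(b)) for a, b in zip(x, y))

print(add("1010", "1110"))        # 0100
print(intersect("1010", "1110"))  # 1010
```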

Really sorry for the missing context.

Mark44 said:
With your expression x + y, it looks like you are doing bitwise binary addition of 1-bit quantities. So with x = y, you have 0 + 0 = 1 and 1 + 1 = 0. Is that what's happening here? How is wt(x) being defined? Is it the number of nonzero bits in a word? For example, would wt(100110) be 3?

I don't see how you can do a proof without some idea of how wt() is defined.
From https://en.wikipedia.org/wiki/Linear_code:

==Definition and parameters==
A linear code of length ##n## and rank ##k## is a linear subspace ##C## with dimension ##k## of the vector space ##\mathbb{F}_q^n## where ##\mathbb{F}_q## is the finite field with ##q## elements. Such a code is called a ##q##-ary code. If ##q=2## or ##q=3##, the code is described as a binary code, or a ternary code respectively. The vectors in ##C## are called codewords. The size of a code is the number of codewords and equals ##q^k##.
The weight of a codeword is the number of its elements that are nonzero and the distance between two codewords is the Hamming distance between them, that is, the number of elements in which they differ. The distance ##d## of the linear code is the minimum weight of its nonzero codewords, or equivalently, the minimum distance between distinct codewords. A linear code of length ##n##, dimension ##k##, and distance ##d## is called an [##n,k,d##] code.
We want to give ##\mathbb{F}_q^n## the standard basis because each coordinate represents a "bit" that is transmitted across a "noisy channel" with some small probability of transmission error (a binary symmetric channel). If some other basis is used then this model cannot be used and the Hamming metric does not measure the number of errors in transmission, as we want it to.


## 1. What is Coding Theory?

Coding Theory is a branch of mathematics and computer science that deals with the design and analysis of error-correcting codes. These codes are used to transmit information reliably over noisy channels, such as wireless communication or digital storage systems.

## 2. What is an inequality in the context of Coding Theory?

In Coding Theory, an inequality is a mathematical statement that compares the weights of different code words. The weight of a code word is the number of non-zero symbols it contains. Inequalities in Coding Theory are used to determine the efficiency and error-correcting capabilities of a code.
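The weight count described above can be sketched in one line of Python (the `weight` helper is my own name for the wt function):

```python
def weight(word):
    """Number of nonzero symbols in a code word given as a string."""
    return sum(1 for symbol in word if symbol != "0")

print(weight("100110"))  # 3
print(weight("0000"))    # 0
```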

## 3. How do you prove an inequality about the weights of code words?

To prove an inequality about the weights of code words, you must use mathematical techniques such as algebraic manipulation, induction, or combinatorial arguments. These proofs often involve analyzing the structure of the code and its properties, as well as using known theorems and lemmas.

## 4. Why are inequalities important in Coding Theory?

Inequalities play a crucial role in Coding Theory as they provide valuable insights into the performance of a code. By analyzing the weights of code words, we can determine the minimum distance of a code, which is a measure of its error-correcting capabilities. Inequalities also help in the design and optimization of codes for specific applications.

## 5. Can you give an example of an inequality about the weights of code words?

One example of an inequality in Coding Theory is the Singleton bound, which states that an ##[n, k, d]## code satisfies ##d \leq n - k + 1##. Another basic fact is that, for a linear code, the minimum distance equals the minimum weight of a nonzero code word. These results help determine how many errors a code can correct, and they guide the design of many error-correcting codes, such as Hamming codes and Reed-Solomon codes.
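For a linear code, the minimum distance can be computed either from pairwise Hamming distances or from the minimum nonzero weight, and the two agree. A small sketch (the four-word toy code below is my own example, not from the thread):

```python
from itertools import combinations

code = ["0000", "1001", "0110", "1111"]  # a toy binary linear code

def hamming(x, y):
    """Hamming distance: number of positions where the words differ."""
    return sum(a != b for a, b in zip(x, y))

def weight(w):
    """Number of nonzero symbols."""
    return sum(c != "0" for c in w)

min_distance = min(hamming(x, y) for x, y in combinations(code, 2))
min_weight = min(weight(w) for w in code if weight(w) > 0)
print(min_distance, min_weight)  # 2 2 -- equal, as expected for a linear code
```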
