A+B=C: Understanding Real Number Equality and Functionality

mahmoud2011
I don't know why I started thinking this way, but here is my question: if a, b, and c are real numbers and a = b, why is a + c = b + c, and why is ac = bc? Questions like this seem stupid, but what exactly does = mean? I know that the numbers themselves are sets and that + can be considered a function, and from there I was able to justify it logically. My question is: is there a formal way to justify this?
 
The question isn't stupid.

The Peano axioms are directly related to your question, and the history section of the mathematical logic article shows that, in Western thought, mathematical logic wasn't formalized until fairly recently.

Here you go: read the history section of the Wikipedia article on mathematical logic: http://en.wikipedia.org/wiki/Mathematical_logic#History
 
The proof that ac=bc and c≠0 implies a=b uses the fact that if 2 real numbers multiply to 0, then one of them must be 0. Think of what c(a-b)=0 means and how you could transform ac=bc to get to that.

One can prove that ab=0 and a≠0 implies b=0 using the field axioms; see Citan Uzuki's post below for a proof.
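
A minimal worked version of that hint, assuming the field axioms for the reals and the zero-product fact above:

\begin{align*}
ac = bc &\implies ac - bc = 0 && \text{add } -bc \text{ to both sides} \\
&\implies c(a - b) = 0 && \text{distributivity} \\
&\implies a - b = 0 && \text{since } c \neq 0 \text{ and a product is } 0 \text{ only if a factor is} \\
&\implies a = b.
\end{align*}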
 
TylerH said:
the fact that if 2 real numbers multiply to 0, then one of them must be 0.

You need to prove that, then...
 
micromass said:
You need to prove that, then...

I can't. I was allowed to assume the fundamental theorem of algebra in my proof class, because the proof is far from elementary. I don't believe any non-elementary proofs will be beneficial to the OP, but here are some: http://en.wikipedia.org/wiki/Fundamental_theorem_of_algebra#Proofs

I should have said "given the fundamental theorem of algebra," to eliminate any ambiguity.

EDIT: Ignore everything in this post. I'm suffering from (hopefully) momentary stupidity.
 
TylerH said:
2 real numbers multiply to 0, then one of them must be 0 (which is the result of FTA)

I'm not sure where you got that from. This is the result of the fact that the real numbers are a field. So if ab = 0 and a is not 0, then you can divide both sides by a (or, to be more pedantic, multiply both sides by a^{-1}) to obtain b = 0. It has nothing to do with the FTA.
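
Spelled out, a sketch of that field-axiom argument:

\[
b = 1 \cdot b = (a^{-1}a)\,b = a^{-1}(ab) = a^{-1} \cdot 0 = 0,
\]

where the last step uses the lemma x \cdot 0 = 0 for any x, which itself follows from distributivity: x \cdot 0 = x(0 + 0) = x \cdot 0 + x \cdot 0, and adding -(x \cdot 0) to both sides leaves x \cdot 0 = 0.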
 
Citan Uzuki said:
I'm not sure where you got that from. This is the result of the fact that the real numbers are a field. So if ab = 0 and a is not 0, then you can divide both sides by a (or, to be more pedantic, multiply both sides by a^{-1}) to obtain b = 0. It has nothing to do with the FTA.
Okay, I'll correct my post. I'm glad my proof class professor will likely never see this. :)
 
To answer the OP's question, it sounds like you're looking for the substitution property of equality: http://en.wikipedia.org/wiki/Equality_(mathematics)#Some_basic_logical_properties_of_equality. This is included as either an axiom schema or an inference rule (depending on which deductive system you're working with) in first-order logic itself, and is therefore considered more "basic" than the axioms of the real numbers -- which is why you're allowed to assume it even though it's not explicitly listed among them.

I've seen this confuse a lot of students -- most students take a real analysis course before they take a course in formal logic, and the sudden emphasis on proving things that had previously been taken for granted can lead to a lot of hesitation about using anything, no matter how basic, that isn't explicitly listed as an axiom. Yet real analysis textbooks don't usually list the axioms of logic itself among the things you're allowed to assume -- even though you wouldn't be able to get very far without being able to infer that a=b ⇒ a+c=b+c. A bit of an oversight, in my opinion.
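
Concretely (my formulation, not the article's exact statement), the substitution property for functions says that for any function f,

\[
a = b \;\Longrightarrow\; f(a) = f(b),
\]

and taking f(x) = x + c or f(x) = xc gives exactly the OP's two questions:

\[
a = b \;\Longrightarrow\; a + c = b + c \qquad\text{and}\qquad a = b \;\Longrightarrow\; ac = bc.
\]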
 
Citan Uzuki said:
To answer the OP's question, it sounds like you're looking for the substitution property of equality: http://en.wikipedia.org/wiki/Equality_(mathematics)#Some_basic_logical_properties_of_equality. [...]

So can you recommend a book for learning logic (where I will teach myself), and books where I can learn all of this more formally?
 
Well, I know the book "forall x" (sic) isn't too bad, and has the advantage of being free. You can download it at http://www.fecundity.com/logic/. You may also want to start a thread in the academic guidance section asking for other recommendations.
 