Proving a Logical Statement: Puzzling Over Proving Pb and Not P1...

  • Thread starter: danne89
AI Thread Summary
The discussion centers on proving a logical statement involving propositional logic, specifically the expression (P_b and not P_1) and its relation to an infinite set of propositions. The participant expresses confusion about the proper formulation and the implications of infinite formulas. A detailed explanation of the formal language and truth valuations is provided, emphasizing the associative and commutative properties of logical conjunction. The equivalence of certain formulas is clarified, and the limitations of negation in relation to conjunction are noted. Recommendations for further reading on mathematical logic are also shared to aid understanding.
danne89
Hi! I have a problem proving a logical statement. I really know nothing about logic; I was just messing around with some of it in an attempt to prove another theorem.

Anyway, it would be useful to be able to prove that:
(P_b and not P_1) and (P_b and not P_2) and ...
for P_1, P_2, P_3, ...
equals P_b and not (P_1, P_2, P_3, ...)

I don't know if this is the proper way to express this, but I hope you will get my point. If not, ask. And please correct me.
 
"Pb and not (P1, P2, P3 ...)" would be a formula of infinite length, and I don't know of any language that allows formulas of infinite length. However, you can have an infinite set of formulas of finite length- which I suppose would work just as well.
For your proof, the setup isn't fun, but here goes.
The primitive symbols of formal language L fall into two disjoint sets: a countably infinite set of propositional symbols and a set of two distinct connective symbols, "NOT" denoted by ~ and "AND" denoted by &.
Formulas are defined as follows:
1) a propositional symbol is a formula.
2) If P is a formula, then ~P is a formula.
3) If P and Q are formulas, then &PQ is a formula.
A (truth-)valuation V on L is a mapping from the set of formulas to the set {T, F} of (truth-)values, defined as follows:
1) If P is a propositional symbol, Pv denotes the value assigned to P by V (Pv = T or Pv = F).
2) If P is a formula, (~P)v = {T if Pv = F, F if Pv = T}.
3) If P and Q are formulas, (&PQ)v = {T if Pv = T and Qv = T, F otherwise}.
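To make clauses 1)-3) concrete, here is a minimal Python sketch (my own encoding, not something from the thread): a formula is a propositional symbol (a string), a negation ("~", P), or a conjunction ("&", P, Q), and `evaluate` plays the role of a valuation V.

```python
# Hypothetical encoding, for illustration only: a formula is a symbol
# (a Python string), a negation ("~", P), or a conjunction ("&", P, Q).

def evaluate(formula, valuation):
    """Truth value of a formula under a valuation (a dict symbol -> bool),
    mirroring clauses 1)-3): symbols are looked up, ~ flips, & needs both."""
    if isinstance(formula, str):          # 1) propositional symbol: Pv
        return valuation[formula]
    if formula[0] == "~":                 # 2) (~P)v
        return not evaluate(formula[1], valuation)
    if formula[0] == "&":                 # 3) (&PQ)v
        return evaluate(formula[1], valuation) and evaluate(formula[2], valuation)
    raise ValueError("not a formula")


# &P(~Q) is T exactly when P is T and Q is F:
print(evaluate(("&", "P", ("~", "Q")), {"P": True, "Q": False}))  # True
```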
You want to prove that & has the associative and commutative properties. Informally, this is simple. Say that two formulas P and Q are equivalent iff Pv = Qv for every V. So you want to prove that if R, S, and T are formulas, then (&R&ST)v = (&&RST)v and (&RS)v = (&SR)v for every V. This follows immediately from the definitions.
You also want to prove that (&PP)v = Pv (so you can get rid of all those extra Pbs). If Pv = T, then (&PP)v = T, and vice versa. If Pv = F, then (&PP)v = F, and vice versa. So they're equivalent.
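Since every formula mentions only finitely many symbols, "Pv = Qv for every V" can be checked by brute force over all valuations of those symbols. A Python sketch (my own helper names, assuming a tuple encoding where a formula is a symbol string, ("~", P), or ("&", P, Q)):

```python
from itertools import product

def evaluate(f, v):
    """Truth value of formula f under valuation v (a dict symbol -> bool)."""
    if isinstance(f, str):
        return v[f]
    if f[0] == "~":
        return not evaluate(f[1], v)
    return evaluate(f[1], v) and evaluate(f[2], v)

def symbols(f):
    """The set of propositional symbols occurring in f."""
    if isinstance(f, str):
        return {f}
    return set().union(*(symbols(sub) for sub in f[1:]))

def equivalent(p, q):
    """P and Q are equivalent iff Pv = Qv for every valuation V."""
    syms = sorted(symbols(p) | symbols(q))
    return all(
        evaluate(p, dict(zip(syms, vals))) == evaluate(q, dict(zip(syms, vals)))
        for vals in product([False, True], repeat=len(syms))
    )

# Associativity, commutativity, and idempotence of &:
assert equivalent(("&", "R", ("&", "S", "T")), ("&", ("&", "R", "S"), "T"))
assert equivalent(("&", "R", "S"), ("&", "S", "R"))
assert equivalent(("&", "P", "P"), "P")
```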
We can write "&PQ" as "P & Q" and introduce parentheses and subscripts for convenience. I think the meaning of your original statement is clear, if stated as follows:
For any n in N, [(P1 & ~P2) & (P1 & ~P3) & ... & (P1 & ~Pn)] is equivalent to [P1 & (~P2 & ~P3 & ... & ~Pn)].

Note that you cannot write ~(P2, P3, ..., Pn), unless you say what the commas mean. Note also that ~(P2 & P3 & ... & Pn) is not equivalent to (~P2 & ~P3 & ... & ~Pn)- "~" doesn't distribute that way.
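For any fixed n, both the stated equivalence and the caveat about "~" can be verified mechanically by checking every valuation. A brute-force Python sketch (my own hypothetical encoding, for illustration: a formula is a symbol string, ("~", P), or ("&", P, Q)):

```python
from itertools import product

def evaluate(f, v):
    """Truth value of formula f under valuation v (a dict symbol -> bool)."""
    if isinstance(f, str):
        return v[f]
    if f[0] == "~":
        return not evaluate(f[1], v)
    return evaluate(f[1], v) and evaluate(f[2], v)

def symbols(f):
    """The set of propositional symbols occurring in f."""
    if isinstance(f, str):
        return {f}
    return set().union(*(symbols(sub) for sub in f[1:]))

def equivalent(p, q):
    """P and Q are equivalent iff they agree under every valuation."""
    syms = sorted(symbols(p) | symbols(q))
    return all(
        evaluate(p, dict(zip(syms, vals))) == evaluate(q, dict(zip(syms, vals)))
        for vals in product([False, True], repeat=len(syms))
    )

def conj(fs):
    """Fold a nonempty list into a conjunction; by associativity the
    bracketing does not matter."""
    acc = fs[0]
    for f in fs[1:]:
        acc = ("&", acc, f)
    return acc

n = 5
ps = ["P%d" % i for i in range(2, n + 1)]          # P2 .. Pn
lhs = conj([("&", "P1", ("~", p)) for p in ps])    # (P1 & ~P2) & ... & (P1 & ~Pn)
rhs = ("&", "P1", conj([("~", p) for p in ps]))    # P1 & (~P2 & ... & ~Pn)
assert equivalent(lhs, rhs)

# But ~ does not distribute over &:
assert not equivalent(("~", ("&", "P2", "P3")),
                      ("&", ("~", "P2"), ("~", "P3")))
```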
If that didn't help, just say so. Is that what you wanted to say?
 
Thanks! You cleared up a few question marks, but created even more. I'll read a book on mathematical logic in the future, I think.
 
Try googling "infinitary logic."
 
danne89 said:
Thanks! You cleared up a few question marks, but created even more.
If you have questions, just ask. :smile:
I'll read a book on mathematical logic in the future, I think.
The best book on logic I've ever read is "Logic" by Wilfrid Hodges. If you read no other book on logic, read this one- Hodges is hysterical and seriously knows his stuff. You should also read this before any mathematical logic book. After that, "Set theory, logic, and their limitations" by Moshé Machover is great. "Mathematical Logic" by Joseph Shoenfield is also good.
 