
A case of English vs. classical logic

  1. Feb 16, 2006 #1


    Gold Member

    If P, Q, and R are propositions,

    1) ((P -> Q) & (P -> R)) <=> (P -> (Q & R)).

    But let

    P = I have exactly one dollar.
    Q = I have enough money to buy a coffee.
    R = I have enough money to buy a pickle.

    and (1) fails (though only in one direction) under the English interpretation where, if, say, a coffee and a pickle cost a dollar each, ((P -> Q) & (P -> R)) is true but (P -> (Q & R)) is false, since I would need at least two dollars for (Q & R) to be true. What else is going on in this interpretation that makes (1) fail? I'm almost certain 'and' is being used differently, but I think some other things might be involved as well, like 'I' referring to the same person, and some assumptions about time, simultaneity, or such.
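    As a sanity check, (1) really is a tautology of propositional logic; here is a minimal brute-force sketch in Python, enumerating all eight truth assignments:

    Code:
        from itertools import product

        def implies(a, b):
            # material conditional: P -> Q is false only when P is true and Q is false
            return (not a) or b

        # check ((P -> Q) & (P -> R)) <=> (P -> (Q & R)) on every assignment
        for p, q, r in product([False, True], repeat=3):
            lhs = implies(p, q) and implies(p, r)
            rhs = implies(p, q and r)
            assert lhs == rhs, (p, q, r)
        print("(1) holds under all 8 assignments")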

    Just thought I'd share. I think it's interesting but don't have time to think about it just now. Maybe someone else has an explanation? Can anyone make everything relevant explicit?
  3. Feb 16, 2006 #2
    One way to resolve the tension between the formal and informal reasoning is to say that you haven't properly translated Q and R.

    These two sentences have equivalent meanings:
    1)I have enough money to buy coffee.
    2)Possibly, I could buy coffee.

    Since they have the same meaning, they should both be translated the same way. The second sentence makes it clear that we are dealing with a modal proposition. I'll use * to mean 'possibly'. Thus you have

    1) ((P -> *Q) & (P -> *R)) <=> (P -> (*Q & *R)).

    Note that (*Q & *R) does not imply *(Q & R). I think this better captures the way we use P, Q, and R.
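    To see concretely that (*Q & *R) does not imply *(Q & R), a two-world toy model is enough; a minimal Python sketch (the worlds and their valuation are invented for illustration):

    Code:
        # * ("possibly") = true in at least one accessible world
        worlds = {
            "spend_on_coffee": {"Q": True,  "R": False},
            "spend_on_pickle": {"Q": False, "R": True},
        }

        def possibly(p):                        # *p
            return any(w[p] for w in worlds.values())

        def possibly_both(p1, p2):              # *(p1 & p2)
            return any(w[p1] and w[p2] for w in worlds.values())

        print(possibly("Q") and possibly("R"))  # True:  *Q & *R
        print(possibly_both("Q", "R"))          # False: *(Q & R)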
  4. Feb 16, 2006 #3


    Gold Member

    Ah, thanks, I hadn't thought of that. But I don't think that's the problem I'm thinking of. I think it's that this 'and' has an added 'together' requirement. Compare

    3) Coffee tastes yummy and pickles taste yummy.
    3a) Separately, coffee and pickles taste yummy.
    3b) Together, coffee and pickles taste yummy.

    (3) implies (3a) but doesn't imply (3b). Suppose you let coffee c and pickles p be individuals and Yx mean 'x tastes yummy'. In (3) and (3a), Y is a unary predicate, but (3b) seems to want either to make Y a binary predicate (Edit: eh, and I don't know what that would be or how it would make sense. Although 'together' and 'separately' as adverbs might be considered to modify the predicate, I'm not sure whether those classes and rules are valid or relevant. Hmm...) or to name a new individual, say g, 'coffee and pickles' (Ack, though if 'coffee and pickles' were being treated as a single individual, 'taste' should inflect for subject agreement, becoming 'tastes'. So maybe it's somewhere in between. Oh, right, I think it's just a compound subject. Hah.). This latter 'and' seems to function like an operation on your domain. (3a) seems like just an abbreviation of (3), having the same underlying structure. But (3b) seems to be an entirely different structure.
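    To make the compound-subject reading concrete, here is a minimal Python sketch: Y stays unary, and the domain-level 'and' just names a new sum individual whose yumminess is stipulated independently (the particular truth values are only for illustration):

    Code:
        # unary predicate Y over a domain that includes a sum individual g = c*p
        yummy = {"coffee": True, "pickle": True, "coffee*pickle": False}

        def Y(x):                            # Yx: "x tastes yummy"
            return yummy[x]

        print(Y("coffee") and Y("pickle"))   # (3), (3a): True
        print(Y("coffee*pickle"))            # (3b): False -- not entailed by (3)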

    I think the problematic interpretation depends on at least (a) the function of 'and' and (b) the agent being the same in P, Q, and R.
    For (a), consider that ((P -> Q) v (P -> R)) <=> (P -> (Q v R)) has no such problem. Indeed, ((P -> Q) & (P -> R)) => (P -> (Q v R)) removes the problem.
    For (b), consider that changing any one of the I's in P, Q, or R to a non-first-person noun removes the problem. Or, hm, maybe it doesn't. What do you think?
    Last edited: Feb 16, 2006
  5. Feb 16, 2006 #4
    It is less misleading if you repeat the full statements without combining the verbs: I have exactly one dollar, so I have enough money to buy a coffee, and I have enough money to buy a pickle. BUT: I may or may not have enough money to buy a coffee and a pickle.

    Ambiguities in natural languages are why they are not used in formal logic and why they don't translate directly.
  6. Feb 16, 2006 #5


    Staff Emeritus
    Science Advisor
    Gold Member

    Yay, I'm not the only one who thought that the implication still went both ways in the original question!

    I just don't like natural language. That's all there is to it. :smile:
  7. Feb 16, 2006 #6


    Gold Member

    Yeppers, I am studying linguistics and logic. What do you think are the necessary differences between a "formal logic" and a formal model of a natural language?
    Last edited: Feb 16, 2006
  8. Feb 16, 2006 #7


    Gold Member

    Yeah, I'm pretty sure treating it as a compound subject/object would work out. So one way of looking at it is that that instance of 'and' is functioning as an operation on the domain instead of as a connective. I wonder if looking at it as changing the predicate somehow would work too.

    Does anyone else recognize the interpretation I'm talking about? I mean, do you guys naturally read it both ways?
  9. Feb 16, 2006 #8
    Each word needs to have one and only one meaning, a single precise definition. This is not the case in any natural language. Natural languages are dynamic and words constantly change meaning (e.g., gay). You cannot control this, and you don't want to control it either. If you had a single meaning per word, the vocabulary would expand beyond reason.
  10. Feb 16, 2006 #9


    Gold Member

    You mean each word form needs to be associated with only one meaning? Why? If you can use other information to disambiguate the word forms, e.g., their functions in a sentence, you still can end up with one meaning per word; you would just have to broaden, or unnarrow, your concept of a word. The different meanings are themselves differences -- just represent those differences in the model. I don't really understand the reason for this requirement anyway. Formal logics don't need to be consistent or sound or anything else that I can think of that might tie in with this requirement.

    Do you think there is some linguistic phenomenon that cannot be formally modelled?
    You know every natural language?
    Sorry, I was thinking of a synchronic model, a snapshot of the language at a single moment in time. Still, some aspects of language do change regularly, and rules of language change have been discovered and used, so diachronic models aren't out either. Do you really think scientists can't model dynamic systems? If we were talking about physics instead of language would you have a different opinion?
    Last edited: Feb 16, 2006
  11. Feb 16, 2006 #10


    Science Advisor
    Homework Helper

    The problem is that you're equivocating between

    "I have enough money for a coffee and I have enough money for a pickle"

    and

    "I have enough money for a coffee and a pickle".
  12. Feb 16, 2006 #11
    I probably misunderstand what you are asking. Based on your OP I assumed you were interested in the differences between the language of formal logic and natural languages like English and Chinese. But I reread the question to which I was replying and saw my mistake: you actually asked for the difference between formal logic and a formal model (of a natural language). Of course, formal logic is a method and a formal model is a representation. But maybe you misspoke, so I won't venture too far. I can still reply to a couple of things.

    Regarding disambiguation you are correct: one meaning per word is not essential when a grammar retains context. A single word can have different meanings depending on context, but then the context itself must also be unambiguous. Resolution problems arise when more than one context co-exists. If a context exists (e.g., crime) and then a second context is added (e.g., poverty), then words defined differently in both contexts need to be explicitly qualified. Practically, a simpler rule such as one meaning per symbol is more reliable than a more complex disambiguation method.
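    That failure mode is easy to sketch; a minimal Python illustration (the word, contexts, and glosses are invented for the example):

    Code:
        # one word form, several meanings, resolved by an explicit context tag
        lexicon = {
            ("charge", "crime"):    "formal accusation",
            ("charge", "commerce"): "price demanded",
        }

        def meaning(word, live_contexts):
            senses = {lexicon[(word, c)] for c in live_contexts if (word, c) in lexicon}
            if len(senses) != 1:
                # more than one live context defines the word: must qualify explicitly
                raise ValueError("'%s' is ambiguous in %s" % (word, live_contexts))
            return senses.pop()

        print(meaning("charge", ["crime"]))          # one live context: resolves fine
        try:
            meaning("charge", ["crime", "commerce"])
        except ValueError as e:
            print(e)                                 # two live contexts: needs qualification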

    You asked if I know every natural language. Of course not, but a "natural" language evolves naturally, and natural evolution implies changes. With change comes ambiguity, at least between old and new meaning during the transition period.
  13. Feb 17, 2006 #12


    Gold Member

    Okay, maybe I picked a bad title. I know that English and propositional or predicate logic are different languages. I was actually interested in analyzing the two different English propositions represented by (4). In the version of English that I speak, (4) is ambiguous and one meaning of (4) is equivalent to one meaning of (5). I was wondering what exactly about (4) allows the interpretation that is equivalent to (5). I figured out one solution already. Two underlying structures of

    6) I have enough money to buy a coffee and I have enough money to buy a pickle.

    are

    6a) Pab & Pac

    and

    6bi) Pa(b*c)

    or, more clearly,

    6bii) Pad

    where d = b*c, and & and * are the two different operations represented by the word form 'and'; & is an operation on your set of formulas and * is an operation on your set of individuals. So I guess I was right about the 'and' being different (same form, different function) and about the agent, the individual a in the example, having to be the same. See what I mean now?
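    A minimal Python sketch of the two structures, assuming only that the price of a compound good is the sum of its parts:

    Code:
        prices = {"b": 1.00, "c": 1.00}   # b = a coffee, c = a pickle

        def star(y1, y2):                 # * : the domain-level 'and', building b*c
            name = y1 + "*" + y2
            prices[name] = prices[y1] + prices[y2]   # assumption: prices add
            return name

        def P(x, y):                      # Pxy: an agent with budget x can afford y
            return x >= prices[y]

        a = 1.00                          # "I have exactly one dollar"
        d = star("b", "c")
        print(P(a, "b") and P(a, "c"))    # 6a)   Pab & Pac : True
        print(P(a, d))                    # 6bii) Pad       : False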
    Last edited: Feb 17, 2006
  14. Feb 17, 2006 #13


    Gold Member

    "formal logic" was your choice. :wink: I wasn't sure what exactly you meant by it. To me, a formal logic is a kind of (formal) language. What do you mean by method?
    I'm actually back one step. I wasn't asking about how one meaning per word can be achieved; I was questioning whether the one-meaning-per-word property is even a necessary property of formal logics. I don't think it is or needs to be. I don't think anyone has intentionally designed a formal logic that doesn't have that property, but the only consequence I can think of for not having that property is that it would probably introduce inconsistencies. But logics can already be inconsistent for other reasons, so big whoop. I'm just looking for a reason to make that a necessary property of formal logics. Of course, I obviously don't know exactly what all you're counting as a formal logic -- I'm not sure what all I'd count as a formal logic either -- but I'm just curious about your ideas. :smile:
  15. Feb 17, 2006 #14
    The application of some rules. We may have similar ideas in mind. Logic can be seen as a language, a set of rules and symbols, or it can be seen as its application. This is similar to math: is it a language or the application of this language? When you apply logic to a problem then you follow a method dictated by the rules in question, by the grammar of the language you use. So I guess this is another example where natural language is imprecise since we seem to be talking about the same thing using different terms, or slightly different things using the same terms.

    Ah, well, this is an interesting idea. I never even questioned that anything formal must be unambiguous; it seemed self-evident to me (since I did not question it). Why bother to create something formal unless it is also clear... But I guess it is quite possible to have a formally ambiguous grammar where, for example, A & B means either A and B or A and not B. This would reduce to just A using "common" logic. Ambiguous grammars are not necessarily useful, though I suppose they are not impossible.
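    That reduction checks out mechanically; a minimal Python sketch:

    Code:
        from itertools import product

        # if "A & B" ambiguously means (A and B) or (A and not B), then the
        # disjunction of its two readings collapses to just A
        for a, b in product([False, True], repeat=2):
            either_reading = (a and b) or (a and not b)
            assert either_reading == a
        print("either reading of 'A & B' reduces to A on all assignments")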

    What I personally talk about when I speak of formal logic is the application of a set of operations on boolean values. These are classified with mathematical precision. I don't speak of fuzzy logic without using the adjective.
  16. Feb 17, 2006 #15


    Gold Member

    Not useful? Secret codes, anyone? :tongue2: Yeah, perhaps as self-contained systems, they aren't very useful. But when information outside of the language can be used to disambiguate that within, ambiguity can make the language more efficient (I think that's exactly why natural languages are ambiguous in places). For example, as you noted, ambiguity cuts down on the size of our vocabulary. You can also use it to effectively keep your communications secret (by keeping which interpretation you intended unknown).
  17. Feb 18, 2006 #16
    But a language is a means of communication. If it fails to permit communication then it is not a language but a noise generator.

    The generation of secret messages can be independent of the language used. You could send just half a sentence now and the other half later in secret. In general, you send out part of a message but its meaning is not clear until you receive the rest, which may be the other half of the sentence, a key that unlocks the first part, or a context indicator as in your example. The parts must still be usable within the language in order to extract the meaning.

    Of course another way to have secret messages is to keep the language itself secret or partly secret. Then messages can be exchanged in full sight of everyone who doesn't know the complete rules of the language. But this is a different thing.
  18. Feb 20, 2006 #17


    Staff Emeritus
    Science Advisor
    Gold Member

    I think the tension you have here between the English interpretation and the formalism may be occurring simply because English has a deeper structure than propositional logic. If you take a strict interpretation of the sentences involved, there is no contradiction between the English interpretation and the formal interpretation above: P -> (Q&R) just reads "If I have a dollar, then I have enough money to buy a coffee, and I have enough money to buy a pickle," which is true so long as you don't interpret it to mean "I have enough money to buy BOTH a coffee AND a pickle."

    In reading out that sentence in English, though, perhaps we do naturally want to read it as "I have enough money to buy both a coffee and a pickle," which in turn might imply that a first-order logic formalism is closer to how we naturally interpret the sentence. For instance, let

    D(x): "I have x dollars"
    B(y1,y2,...,yn): "I have enough money to buy y1 and y2 and ... and yn"

    then strictly speaking we have, as above,

    [(D(1) -> B(coffee)) & (D(1) -> B(pickle))] <=> [D(1) -> (B(coffee) & B(pickle))]

    but perhaps on reading the English sentence, we naturally assume the "and" function is working inside B rather than outside, and thus interpret the latter clause as

    D(1) -> B(coffee, pickle)

    This causes a tension because, given the way we've defined our first-order sentences and given the way commerce works, it is not always the case that (B(x) & B(y)) -> B(x, y).
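    A minimal Python sketch of that tension, reading the variadic B as "affording all of its arguments at once" and assuming prices simply add:

    Code:
        prices = {"coffee": 1.00, "pickle": 1.00}
        budget = 1.00                        # D(1): "I have 1 dollar"

        def B(*goods):                       # "I have enough money to buy all of goods"
            return budget >= sum(prices[g] for g in goods)

        print(B("coffee") and B("pickle"))   # B(coffee) & B(pickle): True
        print(B("coffee", "pickle"))         # B(coffee, pickle):     False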

    Gotta run for now, but hopefully that's some food for thought. I also have an inkling that the "xor" function could come in handy here, but I'll have to flesh out that intuition later.
  19. Feb 20, 2006 #18


    Gold Member

    I came up with an answer similar to yours (see posts 3, 12). Only the arity of the predicate doesn't change; one 'and' is operating on formulas, the other on individuals. It's just two different words with the same form, like a verb form and a noun form that happen to coincide.

    One theory: Verbs refer to events or situations that involve characteristic roles called thematic roles, or θ-roles. For example, a killing typically involves a party that performs the killing (the agent) and a party that is killed (the patient). The parties filling these θ-roles are called the arguments of the verb (and I think are typically represented in English by Determiner Phrases). The representation in the mental lexicon (the dictionary in your brain) of the meaning of a verb includes its number of arguments and its θ-roles. This is all semantic stuff, by the bye. In a sentence, syntactic rules assign the θ-roles their syntactic positions. This interaction between semantic and syntactic structures explains why

    1a) Romeo killed Tybalt.

    describes an event that is different than

    1b) Tybalt killed Romeo.

    but equivalent to

    1c) Tybalt was killed by Romeo.

    -- in (1a) and (1b), the agent role is assigned the subject position, while in (1c), the patient role is assigned the subject position. It also explains why

    2) Romeo bought.

    is ungrammatical -- bought needs another argument, the thing that Romeo bought, poison. Changing the arity of the predicate is the same thing as changing the number of arguments of the verb, which is changing the verb's θ-roles (part of the meaning of the verb). This is what 'from' does in

    2a) Romeo bought poison from an apothecary.

    where bought has 3 arguments, (Romeo, poison, an apothecary), but and fails to do in

    2c) Romeo bought poison and an apothecary.

    where bought has 2 arguments, (Romeo, poison and an apothecary).
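    A minimal Python sketch of the arity check, treating 'bought ... from' as a separate three-place lexical entry (that split is an assumption made just for the illustration):

    Code:
        # toy lexicon: each verb lists its theta-roles (argument slots);
        # a clause is well-formed only if every slot is filled
        lexicon = {
            "bought":      ["agent", "theme"],
            "bought-from": ["agent", "theme", "source"],
        }

        def grammatical(verb, args):
            return len(args) == len(lexicon[verb])

        print(grammatical("bought", ["Romeo"]))                                  # 2)  False: missing theme
        print(grammatical("bought-from", ["Romeo", "poison", "an apothecary"]))  # 2a) True
        print(grammatical("bought", ["Romeo", "poison and an apothecary"]))      # 2c) True: one compound theme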

    I think the second interpretation of the original sentence, with and operating on individuals, might result from a speaker's zeal to remove (perceived) redundancy. It's actually the first way that I read it. Well, it was first by a fraction of a second, but still.
  20. Feb 21, 2006 #19

    Can you recommend an intro book on semantics?
  21. Feb 21, 2006 #20


    Gold Member

    Sorry, I haven't studied much semantics yet. I might still be able to help a bit though. Do you (know whether you) want a formalist or a functionalist approach? Have you already been introduced to semantics in a survey course (e.g., you're familiar with compositionality, entailment, presuppositions, assertions, how truth conditions relate to meaning, what a theory of meaning is supposed to be, and such)? How much logic do you know (up to predicate/first-order)? From what I've seen, semantics is basically just logic, but my book's coverage of semantics is paltry.

    Books that I see recommended and used most often:

    For logic for linguists, these are good so far and came highly recommended:

    Personally, I think this one looks good:

    You might want to search the descriptions and reviews in LinguistList's Publications Area if you haven't already.