## Math from the Ground Up

I'm new to this site and I'd like to begin my first thread by asking about the foundations of mathematics. The more I learn math, the less I feel I know, and it is getting a little depressing. I have two semesters of college-level calculus under my belt (ending the year with Taylor and Maclaurin series), but now I want to learn about math from the ground up. I want to learn the axioms of mathematics, learn about number systems, and learn how to create and use math proofs. Can anyone suggest some books that could really help me learn more about math, starting from the beginning (the axioms)?

 Why do you want to learn it? "Theory of Computation", H. Lewis (the one with the picture of Turing on the front); "Computational Beauty of Nature", Gary Flake; "Problem Solving", Larson; www.mathworld.com (Wolfram's site). Most any set theory/logic book will give you what you're looking for on axioms...
 A good place to start is Peano's Axioms - they underpin the whole of modern mathematics - from those 4 axioms more or less everything else can be derived! -NewScientist
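For readers who haven't seen them written out, here is one common statement of the Peano axioms for the naturals with a successor function $s$ (the exact count varies by presentation, and the debate below turns partly on what "derived from them" means):

```latex
\begin{align*}
&\text{(P1)} && 0 \in \mathbb{N}\\
&\text{(P2)} && n \in \mathbb{N} \implies s(n) \in \mathbb{N}\\
&\text{(P3)} && s(n) \neq 0 \ \text{for all}\ n \in \mathbb{N}\\
&\text{(P4)} && s(m) = s(n) \implies m = n\\
&\text{(P5)} && \bigl(0 \in A \ \wedge\ \forall n\,(n \in A \implies s(n) \in A)\bigr) \implies \mathbb{N} \subseteq A \quad \text{(induction)}
\end{align*}
```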



From Peano's axioms almost nothing can be derived.

What do you mean by foundations?

The mathematical interpretation would be things like "what is 1" and so on, and Peano would be a good example (an axiomatization of the naturals), or Gödel. I can think of almost no examples where familiarity with such things as Zermelo-Fraenkel would be beneficial to anyone not doing set theory or high-powered research. The point of that foundational stuff is to put on a firm footing things that we intuitively wish to be true anyway.

However, in terms of modern mathematics such things are incredibly uninteresting. Instead I'd rather take it to mean "what things should every mathematician know".

1. Number theory - Euclid's algorithm, Euler's totient function, primes, unique factorization in the integers, Fermat's little theorem, Wilson's theorem.

2. Rings, groups and fields - just the basics, say the orbit-stabilizer theorem, that the integers are a principal ideal domain, polynomial rings.

let's not forget analysis

3. Some analysis - sequences, limits, Cauchy sequences.

LeVeque's Fundamentals of Number Theory would be a good place to start, thence Baker, and Jacobson for algebra, though that's an expensive book.

but better yet

4. Complex analysis - Cauchy-Riemann equations, Morera's theorem, etc.

5. Combinatorics - generating functions, recurrence relations, graph theory.

6. Probability - random variables, conditional probability, etc.

this is not exhaustive.
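A few of the number-theory items in the list above can be checked directly. A minimal sketch (the helper name `totient` and the sample values are my own choices):

```python
# Illustrating Euclid's algorithm, Euler's totient, Fermat's and Wilson's theorems.
from math import gcd, factorial  # math.gcd implements Euclid's algorithm

def totient(n):
    """Euler's totient: count of 1 <= k <= n with gcd(k, n) == 1."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

p, a = 13, 2

# Fermat's little theorem: a^(p-1) = 1 (mod p) for prime p with gcd(a, p) = 1
print(pow(a, p - 1, p))            # 1

# Wilson's theorem: (p-1)! = -1 (mod p) exactly when p is prime
print(factorial(p - 1) % p == p - 1)   # True

print(totient(12))                 # 4: the units mod 12 are 1, 5, 7, 11
```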

 Quote by Black Orpheus The more I learn math, the less I feel I know and it is getting a little depressing.
Mathematicians learn mathematics so that they can learn more mathematics. I find comfort in knowing that no matter how many equations I or anyone else solves, there will always remain equations yet unsolved. The world, in my opinion, is dense: endless regression, plans within plans. But that's why mathematics works so well at explaining it: much of math is based on the Real Number system. And what is it about this system? Between any two real numbers . . . That is, the Real Numbers are dense too!
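The density property being alluded to is easy to make concrete: between any two distinct numbers sits their midpoint. A tiny sketch (using exact rationals rather than floats, and sample values of my own choosing):

```python
# Density: between any two distinct numbers a < b lies another, e.g. (a + b) / 2.
from fractions import Fraction

a, b = Fraction(1, 3), Fraction(1, 2)
mid = (a + b) / 2
print(a < mid < b)   # True
print(mid)           # 5/12
```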

My point is that there is no end to the infinite detail of the world and I suspect no end to Mathematics, so in some sense we all know only a very small amount.

 Quote by matt grime From peano's axioms almost nothing can be derived.
I forget that induction, number theory and all of that mean nothing. Thank you for reminding me.

Category theory, binary operations, and the idea of existence and uniqueness are of course small fry in the mathematical world. Let us not be concerned with the ideas of creating a ring.

Addition and multiplication are also examples of the nothing that can be formed from PA. How frivolous they are! Thank you once more Matt; I will now forget about Peano's Axioms, and whilst filling in my tax return I will write that you told me nothing can be proved using them, so my tax return cannot be completed by anyone who has been taught using them.

Somehow I don't think the IRS will buy it :P

Oh, and before you start having a go at me - ask some smart people (lecturers, professors, maths people) how important Peano's axioms are.

-NewScientist

 Matt, could you recommend some books about the topics you've listed? I think it would help people. I know some basic number theory etc. but would like to deepen my knowledge... Thanks in advance
Very little (just think of all the maths that is out there) can be *directly* deduced from Peano's axioms. No geometry is implied by them, and even some results in number theory cannot be deduced from them. Without other axioms thrown in (e.g. the existence of a power set, products of sets) we can do very little. What can be deduced is Peano arithmetic, which isn't even all of number theory (Paris and Harrington gave an example in 1977 of a natural statement that is true for the naturals but independent of PA). Of course all number theory can be done without reference to Peano, though some find the idea of having a set-theoretic framework for the natural numbers reassuring.

In my defence I would point to Tim Gowers (Fields Medal; is that high enough? I am merely a PhD) who (though I can't find it right now) said that he was happy to accept the naturals as something we all understand without recourse to PA. The existence of inverses is not derivable from the axioms, at least not in my interpretation of the word "derived". Lots of other things are consistent with PA, but that is not the same thing.

Sorry if I offended you by saying that very little can be derived from PA, but in my opinion that is true: little of "all mathematics" is deducible from the axioms, and certainly almost anything about uniqueness is not a consequence of PA. I find the idea that "almost everything else can be derived" from them very misleading. Of course, you may have meant "derived" in a different sense entirely from how I read it. I very much doubt that many mathematicians care one jot for Peano's axioms; they're nice to have around for people who need a structural model for arithmetic, but as an abstract algebraist myself, how strange it must be that I study categories associated to rings etc. without ever mentioning Peano?
I would like to see a derivation of the existence of left adjoints for inclusions of compactly generated thick subcategories of triangulated categories starting from PA. - Matt
 Thanks for the advice so far. In answer to nerocomp2003's question, I want to learn this because I love math. It's one of the most beautiful things I've ever seen and we're using it to (more or less) effectively describe our universe. It's beautiful because it works; it is truth (albeit relative truth), and there are very few other things that you can find truth like this in. Anyway, if I do have a career in mathematics in the future, I would probably want to do research involved in furthering physics, rather than just working with pure mathematics for the joy of it. But right now I'm torn between music, math and physics...
 He won a Medal for research on functional analysis and combinatorics - a slightly different field, wouldn't you say? I am also intrigued which axioms a true master like yourself would use to underpin mathematics. As for proving the existence of left adjoints - surely you would have no problem doing this yourself, as someone with your brilliance should easily form a proof based upon the theory of sketches - or not? Oh, and here is some work from my colleague in Helsinki, Johan Lilius. http://www.tcs.hut.fi/Publications/b...UT-TCS-A33.ps/ That should help you see the proof if you are having difficulty. -NewScientist
 Start with Tim Gowers' Mathematics: A Very Short Introduction (cheap, 10 bucks or so I'd imagine), to see the culture of mathematics explained. Then LeVeque's Fundamentals of Number Theory (Dover). After that you're on your own, since I never had a book as an undergraduate (except Arfken for mathematical physics). I seem to recall occasionally dipping into Jacobson's algebra. Sorry, I'm not best placed to advise on these books; call me when you get to graduate texts and maybe I can help.

 Quote by NewScientist Oh, and here is some work from my colleague in Helsinki, Johan Lilius. http://www.tcs.hut.fi/Publications/b...UT-TCS-A33.ps/ That should help you see the proof if you are having difficulty. -NewScientist

That link leads me nowhere. Why the ad hominem attacks and implications about my intelligence? Because I only gave a one-line rebuttal of an assertion initially?

A quick check of Prof Lilius's publications indicates they are in such things as software modelling. Was that supposed to be aimed at my question about Bousfield localization as derived from PA? If there are such applications then I would be very keen to speak with him.

Mathematics cannot, almost in its entirety, be simply axiomatized from 4 axioms - axioms that do not even necessarily force the acceptance of sets and all the operations on sets. How can we get binary operations if we do not know that products of sets are sets? Throw in ZF at least.

Welcome to PF, Black Orpheus! Does this excerpt sound like what you're looking for? If you aren't really interested in logic but just want an overview of its connection to math, here is one general picture:

You start with a formal language L, which you create "from scratch". Everything that you do will be in connection with L. L will have a set of symbols; in English, for example, the symbols would be the alphabet, numbers, punctuation symbols, etc. The symbols of L may fall into different categories, just as in English:

- propositional symbols, e.g. a, b, c, ...
- variable symbols, e.g. x, y, z, ...
- connective symbols, e.g. ~, &, v, ->
- predicate or relation symbols, e.g. =, <, >, $\in$
- function or operation symbols, e.g. +, -, *, /
- a special type of function symbol, constants, e.g. 0, 1, 2, 3, ...
- quantifier symbols, e.g. $\forall , \exists$
- punctuation symbols, e.g. (, ), {, }, .

and so on, depending on how you intend to use your language. (BTW, the symbols of L don't need to have a written form, but in order to talk about L, you represent the symbols of L with written symbols.)

You then string the symbols of L together to get expressions of L. For example, a->~b, =a5+2h64>, 1=5, and 1+1=2 would all be expressions. But just as in English, many of the expressions are nonsense: the English expression "fhdea phradji qu4032rafdjkasjf" doesn't "mean" anything in English. So we single out some expressions as meaningful - the words, sentences, etc. The same is done in L: you define the formulas of L, the "meaningful" expressions. Your symbol categories play a large role in defining your formulas, since certain symbols are intended to be used in a specific way (in English, the period symbol, ., is intended to come at the end of a sentence). The definition of formulas of L might say:

1) A propositional symbol is a formula.
2) If P is a formula, then ~P is a formula.
3) If P and Q are formulas, then ->PQ is a formula.

This is a definition of formulas for a language for propositional logic; it is more complicated for more complicated languages and logics. The set of formulas is what we're really interested in: the formulas are all of the meaningful things that can be said in L.

Once you have the formulas of L, a division happens: you define a) the basic semantics and b) the calculus (a general term, not the field of math).

You're probably most familiar with the calculus. This is the (possibly empty or infinite) set of axioms and the set of inference rules. The axioms, if you have any, are formulas of L. You just choose some formulas! Well, you probably choose them carefully. :) An inference rule has two parts: a set of hypotheses and a conclusion, both of which are formulas. A rule states that some conclusion can be inferred from the hypotheses. This is how you get theorems from axioms, i.e., how you prove things. Initially, the axioms act as hypotheses; you apply a rule to an axiom, or set of axioms, and you get another formula: a theorem. You can then take these new theorems, apply rules to them, and get more theorems. Pretty simple, ay? A proof is just a sequence of formulas that obey your inference rules. Of course, figuring out which rules to apply to which theorems (the axioms are also considered theorems) in order to get a specific conclusion is where the intelligence and creativity of the mathematician come into play.

The semantics basically tells you what the formulas mean, which are true, and which are false. Remember, the formulas are just meaningless symbols so far. For instance, you may have an equality symbol, =; you need to define the circumstances that make a formula containing the equality symbol true. It's kind of difficult to give an example without more details, so I'll use the definition of formulas for propositional logic given above.

A valuation V on L (part of the semantics) is a function from the set of formulas of L to the set {T, F} of truth-values. For instance, if P and Q are formulas and V is a valuation:

1) PV denotes the truth-value assigned to P by V (i.e. the value of V at P).
2) (~P)V = F iff PV = T, and T otherwise.
3) (->PQ)V = F iff PV = T and QV = F, and T otherwise.

Pretty simple, right? So you will want to choose your inference rules so that truth is preserved: rules such that if the hypotheses are all true, then the conclusion is also true. Make sense? That way, if you choose axioms that are true, and have truth-preserving inference rules, all of your theorems will be true! That meaningless, mechanical manipulation of symbols that happens with the calculus results in a collection of true statements. Well, again, it's more complicated for more complicated systems; the valuations are not so simple, and things that worked for propositional logic, like truth tables, don't work anymore. I can go into more detail about something if you're interested. You can see a calculus for propositional logic (this one has no axioms) here, to get the idea. Are you starting to see how it's all built up?

Another part of the semantics for more complicated languages is a structure (or interpretation). A mathematical structure S consists of:

1) a non-empty set U, called the universe or domain of S; the members of U are called the individuals of S;
2) a set of basic operations on U;
3) a non-empty set of basic relations on U.

An example:

1) the individuals are natural numbers, U = {0, 1, 2, 3, ...};
2) the four basic operations are the designated individual 0, the unary operation s, which assigns to each number n its immediate successor, and two binary operations + and *, which assign to each pair of numbers their sum and product, respectively;
3) the only basic relation is the identity relation {(n, n) : n is in U}.

The connection between the language L and the operations and relations can be seen in the categories listed above. You have predicate and function symbols in your language; in defining a structure on L, i.e., what the language "means", the predicate symbols become relations and the function symbols become operations and constants.
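The valuation rules above are mechanical enough to run. A minimal sketch (the tuple encoding and the function name `value` are my own choices, not anything standard):

```python
# Formulas in the prefix style used above, as nested tuples:
#   'a'           - a propositional symbol
#   ('~', P)      - negation
#   ('->', P, Q)  - implication, written ->PQ above

def value(formula, V):
    """Truth-value of a formula under valuation V (a dict: symbol -> bool)."""
    if isinstance(formula, str):   # a propositional symbol: look up its value
        return V[formula]
    op = formula[0]
    if op == '~':                  # (~P)V = F iff PV = T
        return not value(formula[1], V)
    if op == '->':                 # (->PQ)V = F iff PV = T and QV = F
        P, Q = formula[1], formula[2]
        return (not value(P, V)) or value(Q, V)
    raise ValueError(f"unknown connective: {op}")

V = {'a': True, 'b': False}
print(value(('->', 'a', ('~', 'b')), V))   # ->a~b: True under this valuation
```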
 Oh, yes, if you want to learn more about logic, I've listed several sites in the links section here at PF: http://www.physicsforums.com/local_l...links&catid=45

I don't know of a single book that really delivers everything, but the very best introduction to logic I've found is Wilfrid Hodges' Logic. It's short, and he's seriously laugh-out-loud funny. He also combines informal conversation with formality and precision like a pro (something very hard to find). Another solid introduction is Copi & Cohen's Introduction to Logic. Your library should have at least one edition of this book (I think it's in its 12th edition - a real "classic"). You can just skip over whatever doesn't interest you.

For mathematical logic, Mendelson's and Shoenfield's, both titled Introduction to Mathematical Logic, and Enderton's A Mathematical Introduction to Logic are supposed to be good. I didn't really like the first two and haven't read the last (though it does look good). I use Machover's Set Theory, Logic and Their Limitations. It's nice, but unless you have a professor, you will probably need a more verbose supplement. There are also many resources on the internet.

It doesn't sound like you really want to study mathematical logic much. An introduction to logic, a little symbolic logic, and a bit of mathematical logic should do the trick. You can accomplish that over the summer (you have two months left?) with Hodges, parts of Copi & Cohen, and those online resources.
 If you're torn between music, physics, and math, study the mathematical modelling of the biophysics of how the brain processes and understands music. You'd need to know about FFTs and the physics of sound; then the neuropsychology of sound in the brain; then how the brain stores the time-based structure of music - all the while taking a mathematical-modelling approach to the biophysics of it.
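The FFT idea mentioned here can be sketched in a few lines: a discrete Fourier transform projects a sampled signal onto sinusoids, so a pure A440 tone shows a large magnitude in the 440 Hz bin and nearly none elsewhere. Everything below (sample rate, frequencies, the helper name) is an illustrative choice of mine, computing just one DFT bin rather than a full FFT:

```python
# Single-bin DFT: the core of FFT-based pitch detection, in pure Python.
import math

def dft_bin_magnitude(samples, freq, rate):
    """Normalized magnitude of the DFT of `samples` at frequency `freq` (Hz)."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * freq * k / rate) for k, s in enumerate(samples))
    im = sum(-s * math.sin(2 * math.pi * freq * k / rate) for k, s in enumerate(samples))
    return math.hypot(re, im) / n

rate = 8000   # samples per second
# one second of a pure 440 Hz sine wave (concert A)
tone = [math.sin(2 * math.pi * 440 * k / rate) for k in range(rate)]

print(dft_bin_magnitude(tone, 440, rate) > 0.4)   # strong response at 440 Hz
print(dft_bin_magnitude(tone, 600, rate) < 0.01)  # negligible response elsewhere
```

A real analysis pipeline would use a library FFT to get all bins at once; this is only the underlying projection spelled out.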