# How math is built up from sets


#### Malna

Hi there!
I would like to understand the basic concepts of how mathematics is built up from sets of elements into everything else (e.g. an equation).

For example, here is a formula: 2 - 1.
So are there different sets of elements like

A = (2), B = (-), C = (1)?

And maybe there is an operation on sets that puts the two, the minus, and the one together?

Recently I thought a function orders two elements together, but I've just realized that what a function does is transform a set of elements into another one, like making 2 - 1 from 2:
2 -> 2 - 1, if that's right.

So I don't understand what puts the elements together.

I would be really grateful if anybody could help me with this.
Malna

Malna said:
So are there different sets of elements like A = (2), B = (-), C = (1)?
No, I don't think so. Ignoring any set theoretic notions for the time being, you have sets of numbers, and you have four arithmetic operations on them: addition, subtraction, multiplication, and division. This is the basis of arithmetic, which is taught in the early grades of school.

More advanced mathematics, such as abstract algebra, starts with the notion of a group, which is a set of elements of some kind, together with one operation, and a few axioms about the operation.
Malna said:
So I don't understand what puts the elements together.

The most common definition of ##\mathbb R## (the field of real numbers) involves a function from ##\mathbb R## into ##\mathbb R## and a function from ##\mathbb R\times\mathbb R## into ##\mathbb R## that are required to satisfy a bunch of conditions. I will denote the first function by - and the second one by +.

-x is just the conventional notation for -(x), i.e. the output produced by - when the input is x.

x+y is just the conventional notation for +(x,y), i.e. the output produced by + when you give it (x,y) as input.

This definition of ##\mathbb R## doesn't mention subtraction directly, but there's an almost obvious way to define it, using the functions mentioned above. It's the function ##S:\mathbb R\times\mathbb R\to\mathbb R## defined by S(x,y)=x+(-y) for all real numbers x,y. Then we can introduce the notational convention to write x-y instead of S(x,y).

x-y=x+(-y).

In this approach to real numbers, there are only two binary operations: addition and multiplication. There are also two unary operations (a function ##x\mapsto -x## from ##\mathbb R## into ##\mathbb R## and a function ##x\mapsto\frac 1 x## from ##\mathbb R-\{0\}## to ##\mathbb R-\{0\}##). Subtraction is defined as described above. Division is defined similarly.

It's also possible to start with four binary operations (addition, subtraction, multiplication and division), and use them to define the two unary operations.
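The scheme above can be sketched in a few lines of code. This is a minimal illustration, not standard library code: the function names (`neg`, `recip`, `add`, `mul`, `sub`, `div`) are mine, chosen to mirror the two binary and two unary operations described in the text, with subtraction and division defined from them exactly as ##S(x,y)=x+(-y)## and its multiplicative analogue.

```python
def neg(x):
    """Unary minus: the function x -> -x."""
    return -x

def recip(x):
    """Unary reciprocal: the function x -> 1/x, undefined at 0."""
    if x == 0:
        raise ZeroDivisionError("0 has no reciprocal")
    return 1 / x

def add(x, y):
    """One of the two primitive binary operations."""
    return x + y

def mul(x, y):
    """The other primitive binary operation."""
    return x * y

def sub(x, y):
    """S(x, y) = x + (-y), as in the text: subtraction is defined, not primitive."""
    return add(x, neg(y))

def div(x, y):
    """Division defined the same way: x * (1/y)."""
    return mul(x, recip(y))

print(sub(2, 1))   # 1, i.e. 2 - 1 = 2 + (-1)
print(div(3, 4))   # 0.75
```

The point of the sketch is only that nothing beyond addition, multiplication, and the two unary maps is ever assumed.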


It makes more sense to me that you should start with something that can do everything, and then you find components of it that explain specific things, including sets, lines, circles, algebra, and calculus. Essentially, start with a general case, then make it specific. Unfortunately, schools never function this way. They always teach you special cases, forcing you to memorize numerous concepts that you haven't yet unified, and preventing you from really seeing the big picture.

The attempts to go from set theory to all math have failed. Bertrand Russell tried about 100 years ago, and then about 50 years ago they altered set theory so they could (using Zermelo-Fraenkel set theory). A bit more recently, they created category theory, which was kind of a compromise to the previous rigidity. It seems like they said: 'there's no way to do this, so let's just get rid of our rules or make them so flexible that we can force this to work'. That thought process, in my opinion, is the prevailing reason behind string theory and quantum theory (not discounting their uses, of course), but that's a whole separate topic.

It makes more sense to me that you should start with something that can do everything, and then you find components of it that explain specific things... Essentially, start with a general case, then make it specific.

Disagree. You can't see the point of the general case unless you have lots of examples at your disposal.

"Too many people write papers that are very abstract, and at the end they may give some examples. It should be the other way around. You should start with understanding the interesting examples and build up to explain what the general phenomena are. This way you progress from initial understanding to more understanding. This is both for expository purposes and for mathematical purposes. It should always guide you."

--Michael Atiyah

I think my overarching concern here is that presenting things in complete generality is sort of dogmatic. I always think in terms of how I could have come up with the theory myself, in a perfect world. The inspiration usually comes from starting with the special cases. No one would think to do ring theory all of a sudden, without anything coming before it. It's only because you have examples, like matrix rings or the quaternions, that you start caring about rings. Until you have a bunch of examples that you need to unify, so that you don't have to prove things separately for each of them, there's no reason to unify. Doing so would be dogmatic. Besides, proving things separately for different cases is not always bad. Sometimes it's easier to see how to do it in the special case, so it's more motivated, and it adds extra repetition, if you are aware of it, which can also be a good thing for learning.

In this context, though, it's true that, pedagogically, it doesn't make sense to start with the foundations. You just work a little more informally, rather than trying to build everything from the ground up. But when you're ready, you can learn something about the nature of math by studying the foundations.

The attempts to go from set theory to all math have failed. Bertrand Russell tried about 100 years ago, and then about 50 years ago they altered set theory so they could (using Zermelo-Fraenkel set theory).

It wasn't exactly a failure. Mathematics is expressible in terms of the Zermelo-Fraenkel axioms, at least in principle. What failed, per Gödel, was Hilbert's program of proving consistency and coming up with an effective procedure to find and prove all theorems.

A bit more recently, they created category theory, which was kind of a compromise to the previous rigidity.

Actually, category theory is motivated by algebraic topology (fundamental group, homology, etc.) and wasn't originally intended for foundational purposes. I don't really know much about the attempts to use it for that, though.

As to the original question, the way math reduces to set theory is roughly something like this. If you want to build the real numbers, you make them out of rational numbers, but you sort of fill in the holes, like pi or e or the square root of 2, that need to be there (for example, the diagonal of a unit square has length equal to the square root of 2). I don't want to get into the details of that construction (actually, there are two equivalent ways: Dedekind cuts or completion via Cauchy sequences), so let's just take for granted that if you give me the rational numbers, I can build the real numbers out of them.

Rational numbers can be reduced to pairs of integers. How? Just think of 4/5 as the ordered pair (4,5). When you do this, you have to keep in mind that there's some redundancy, because 1/2 = 2/4. So the pair (1,2) should be identified with (2,4), and so on. And you have to define addition and all that and make sure that it's well-defined, so there are some more details. So, the upshot is that if you give me the integers, I can build you the rational numbers.
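The "pairs with redundancy" idea above can be made concrete in a short sketch. The function names here are illustrative, not standard: `same_rational` is the cross-multiplication test for when two pairs name the same rational, and `add_rational` is the usual formula for adding fractions, stated purely on pairs of integers.

```python
def same_rational(p, q):
    """(a, b) and (c, d) name the same rational exactly when a*d == c*b."""
    (a, b), (c, d) = p, q
    return a * d == c * b

def add_rational(p, q):
    """a/b + c/d = (a*d + c*b) / (b*d), expressed purely on integer pairs."""
    (a, b), (c, d) = p, q
    return (a * d + c * b, b * d)

print(same_rational((1, 2), (2, 4)))   # True: 1/2 = 2/4
s = add_rational((1, 2), (1, 3))
print(s)                               # (5, 6), i.e. 1/2 + 1/3 = 5/6

# Well-definedness check: replacing (1,2) by the equivalent (2,4)
# gives a different pair, but one naming the same rational.
t = add_rational((2, 4), (1, 3))
print(same_rational(s, t))             # True
```

The last check is the "make sure it's well-defined" step the text mentions: the answer must not depend on which pair you pick to represent each rational.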

Similarly, if you give me the natural numbers, I can build you the integers. This time, again, you look at pairs of natural numbers like (9,4) and (12,18). Each pair represents the difference of its two numbers, second minus first: (9,4) represents 4-9 = -5, and (12,18) represents 18-12 = 6. Again, you have some redundancy, like (10,5) also representing -5, and the details of defining addition and multiplication. But if you give me the natural numbers, I can build you the integers out of them.
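The same trick, one level down, can also be sketched. The names are again illustrative: a pair (a, b) of naturals stands for the difference b - a (matching the convention above, where (9, 4) represents -5), and the equivalence test uses only natural-number addition, since b - a = d - c exactly when b + c = d + a.

```python
def same_integer(p, q):
    """(a, b) and (c, d) name the same integer when b + c == d + a."""
    (a, b), (c, d) = p, q
    return b + c == d + a

def add_integer(p, q):
    """(b - a) + (d - c) = (b + d) - (a + c), using only addition of naturals."""
    (a, b), (c, d) = p, q
    return (a + c, b + d)

print(same_integer((9, 4), (10, 5)))   # True: both represent -5
print(add_integer((9, 4), (12, 18)))   # (21, 22), which represents -5 + 6 = 1
```

Notice that no subtraction of naturals is ever performed; negative numbers exist only as pairs, which is exactly the point of the construction.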

And it's the natural numbers that are built directly out of sets. Basically, it's just what you would think it would be. The number n is represented by a set with n elements in it. And what the Zermelo-Fraenkel axioms do is give you enough starting assumptions to be able to carry that out. You have to start with something. So you take the empty set, which has 0 elements; that's zero. Then you take the set containing 0, which has 1 thing in it; that's one. Then you can form {0, 1}, which has 2 things in it, and so on. And the axioms just give you the tools to make this actually work. They may be unsatisfactory in some ways. I'm not a logician, so I don't know. But on some level, it does seem to work.
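The bottom of the tower, the von Neumann encoding just described, can also be sketched. This is a toy model, with Python's `frozenset` standing in for "set": 0 is the empty set, and each successor n+1 is n together with n itself as an element, so the set representing n always has exactly n elements.

```python
def zero():
    """0 is the empty set."""
    return frozenset()

def successor(n):
    """n + 1 = n ∪ {n}; each number is the set of all smaller numbers."""
    return n | frozenset({n})

one = successor(zero())        # {0}, i.e. {∅}
two = successor(one)           # {0, 1}, i.e. {∅, {∅}}

print(len(zero()))  # 0 elements
print(len(one))     # 1 element
print(len(two))     # 2 elements
```

The pleasant property, visible in the `len` calls, is that cardinality and the number encoded always agree, which is what "the number n is represented by a set with n elements" means here.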

The attempts to go from set theory to all math have failed.
No, they haven't. Where do you get this from?

You have some good responses here. I can't address everything right now, so here's one nugget:

I have a simpler way of looking at addition. Imagine you have a unit circle with radius = 1. Place as many duplicate circles as you want in a row next to this circle, with all of the circles barely touching each other (technically, each circle intersects each neighboring circle at exactly one point, and those intersection points, if connected, make a straight line). Now, draw a circle around this row of circles, touching only the two outermost circles of the row, at one point on each. The radius of that outer circle equals the sum of the radii of the circles inside it.

This also works if you have circles of various radii inside of the larger circle. They don't have to be duplicates.

Now, how about multiplication? All you do is raise this concept one order. For a x b, you place a circles in a row, each of radius b. Draw a larger circle around this, and you have a circle with radius a x b.

Same goes for exponents. You just raise multiplication as many orders as necessary to satisfy the exponent.

Subtraction, division, and logarithms are merely the inverse functions of the concepts above. You just need to understand those first three concepts, and then you can predict the inverses. We don't think of subtraction that way because it's so ingrained, but it's logically the same thing as practicing indefinite integrals by imagining the derivative of your solution.

There are only a couple of main types of functions left: trig/inverse trig and derivative/antiderivative. I haven't 100% satisfied these two concepts with only circles yet, but I have ideas. I think trig is just a ratio of two lengths to an angle. Therefore, in that model you have circles that can combine at various angles, as opposed to my previous example of arithmetic, where they have to be lined up in a row.

With all due respect to the people that don't like new theories on here, I'm not advancing anything new. This is geometry. I'm simply pointing out connections between geometry and other parts of math that people think are separate and less intuitive (but are not necessarily so, according to what I am saying).

I use circles rather than the elementary school version of this concept, which are squares, because circles have many more properties to use. Circles should be able to explain trig and derivatives, but squares fail my "intuition test" for absolute correctness in explaining those functions.


You are missing the point. Your definitions are not satisfying because they don't explain:
1) what a circle is, exactly
2) how many circles you have to place for addition or multiplication

Now, the ancient Greeks would likely have accepted your arguments, since circles were seen as objects that exist in nature. But we have found out that they don't. We can never construct a perfect circle, and the geometry of the universe is not Euclidean. So we cannot use any concepts of Euclidean geometry. Euclidean geometry is essentially an invention of mankind, and thus we cannot be sure that it gives the correct concepts.

The entire crisis in set theory began with the recognition that Euclidean geometry does not exist in nature, and thus we cannot know that it is consistent. This is why mathematicians like Dedekind and Peano based Euclidean geometry and arithmetic on the natural numbers. The natural numbers are then based on sets. So we reach the conclusion that if sets are consistent, then so are basic arithmetic and Euclidean geometry. We cannot prove, however, that sets are consistent. But we did manage to base all (or more than 99.99%) of math on nothing but sets. No other concepts are needed (such as the geometrical concept of a circle).


A couple of points:

1) The goal of set theory is to formalize things, not make them intuitive. It's sort of a reductionist project in a way. You want to boil everything down to a few simple principles, out of which you can build everything else in a precise, formal way. As such, you are working at cross purposes with set theory, so the fact that you might have a simpler way of looking at addition and everything else is beside the point.

2) I don't see the point of using circles or squares. If I think geometrically about addition, I think of just sticking two line segments together. And multiplication can either be viewed in terms of the area of a rectangle or, for natural numbers, you make one copy of the second number for every unit of length in the first and string it all together. If you read Book 7 of Euclid's Elements, he presents a treatment of ancient Greek number theory along these lines (and Book 5 deals with the ancient Greek concept of real numbers, which is very similar to Dedekind cuts--Dedekind credits it as his inspiration). But again, that's just an informal way of looking at it, which doesn't have too much to do with the purposes of set theory and the foundations of mathematics, because the goal is totally different.

3) All the geometry you are talking about occurs within the Euclidean plane. Guess what the Euclidean plane is? It's R^2. So, the Euclidean plane can also be built out of real numbers, hence in terms of rationals...all the way down until you get to sets.