Kuratowski's Definition of Ordered Pairs

  • Thread starter: gatztopher
  • Tags: Definition
In summary: Kuratowski's definition of ordered pairs is not clicking for me. Part of the problem is that I haven't had a serious look at naive set theory since high school, but after reading the web for a couple of hours, things are good for me except for this one piece. My points of confusion are:
1. Why is an ordered pair defined in terms of unordered pairs?
2. How was this definition arrived at?
3. Where I've looked, it's usually just stated without any context for why or how it emerged.
  • #36
Got it. So yes, you really don't want to be expressing things in first-order logic directly -- you want efficient data structures resembling the set-theoretic objects.

An important consideration here is how you want to use things. For example, here are two ways to code a multiset:
* As an expandable array, containing multiple copies of an object if it's in the multiset more than once.
* As a fixed-length unsigned integer array, with each uint representing the number of times the element is in the multiset.

The second is good when you have many copies of each element and few distinct elements. The first is good when you have few copies of each element or many distinct objects.
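For instance, here is a rough Python sketch of the two encodings (the element names and counts are just made up for illustration):

Code:
# Multiset {"a", "a", "a", "b", "c"} in the two encodings described above.

# 1) Expandable array: one entry per copy; grows with the total number of copies.
bag_as_list = ["a", "a", "a", "b", "c"]

# 2) Fixed-length count array: one unsigned counter per possible element;
#    assumes the universe of distinct elements is known in advance.
universe = ["a", "b", "c"]
counts = [0] * len(universe)
for x in bag_as_list:
    counts[universe.index(x)] += 1   # counts == [3, 1, 1]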
 
  • #37
CRGreathouse said:
Got it. So yes, you really don't want to be expressing things in first-order logic directly -- you want efficient data structures resembling the set-theoretic objects.

An important consideration here is how you want to use things. For example, here are two ways to code a multiset:
* As an expandable array, containing multiple copies of an object if it's in the multiset more than once.
* As a fixed-length unsigned integer array, with each uint representing the number of times the element is in the multiset.

The second is good when you have many copies of each element and few distinct elements. The first is good when you have few copies of each element or many distinct objects.

Thanks GR. The first item is what I'm doing now. I'm not using actual data at the moment but working on a general model for organizing data for subsequent analysis. My background is in medical epidemiology and I've worked with large data sets. I'm now retired (not too old yet though) and I'm free to think about the general principles of organizing data prior to statistical analysis. The latter is straightforward once the objects and methods are defined.
 
  • #38
Yes, but you should have an idea of what the data would look like. So if you're measuring, say, the expected spread of a disease, you have billions of nodes, each of which will connect with very few others; you'd want to use a sparse representation of a matrix rather than a dense one.
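As a rough Python sketch of that difference (the node count and random edges are purely illustrative, nothing epidemiological):

Code:
import random

n = 1_000  # stand-in for the very large number of nodes in a real contact network

# Dense representation: an n-by-n 0/1 matrix; memory grows as n**2 even
# though almost every entry is zero.
dense = [[0] * n for _ in range(n)]

# Sparse representation: store only the edges that exist, e.g. an adjacency
# dict mapping each node to the set of nodes it actually contacts.
sparse = {}
for _ in range(3 * n):                       # each node touches only a few others
    i, j = random.randrange(n), random.randrange(n)
    sparse.setdefault(i, set()).add(j)
    sparse.setdefault(j, set()).add(i)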

I don't know what epidemiological application you have for tuples (so many possibilities!) so I won't hazard a guess as to what sort of representation would be best there.
 
  • #39
CRGreathouse said:
Yes, but you should have an idea of what the data would look like. So if you're measuring, say, the expected spread of a disease, you have billions of nodes, each of which will connect with very few others; you'd want to use a sparse representation of a matrix rather than a dense one.

I don't know what epidemiological application you have for tuples (so many possibilities!) so I won't hazard a guess as to what sort of representation would be best there.

Epidemiology has become much more mathematical in the last 30-40 years, and it isn't just concerned with "epidemics". It's really concerned with outcome likelihoods given the relevant data. It can involve very large numbers of data points with many variables. Vectors in vector spaces with a large number of dimensions require large tuples.
 
  • #40
SW VandeCarr said:
Epidemiology has become much more mathematical in the last 30-40 years, and it isn't just concerned with "epidemics".

I have a friend who recently got his Master's in the field. I'm not terribly familiar with it, but I know at least that much. :)

SW VandeCarr said:
It's really concerned with outcome likelihoods given the relevant data. It can involve very large numbers of data points with many variables. Vectors in vector spaces with a large number of dimensions require large tuples.

This still doesn't tell me what kind of data structure you need.
 
  • #41
CRGreathouse said:
This still doesn't tell me what kind of data structure you need.

There are several structures that could be used, but I think I indicated that an expandable array (by row and/or column) is the one I'm investigating. These arrays would be stacked in temporal sequence. My interest is in defining meaningful ways to construct sets in terms of the hypothesis being evaluated, using a fixed but expanding database. That is, data already stored doesn't change or repeat. Because time series analysis can be done, simulations are also possible.

The basic structure is relational, but the stacking provides a number of table profiles: cuts along columns, cuts along rows, and "horizontal" point-in-time cuts. SQL (I'm told) can be used, probably in conjunction with an object query language. Column order is considered fixed (hence ordered tuples), while row order need not be fixed, but in practice probably would be.
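As a rough Python sketch of that stacking idea (column names, subjects, and values are hypothetical placeholders; a real implementation would sit on a relational database, as the essay linked below discusses):

Code:
# One fixed-width table (a list of row tuples) per time point, appended in order.
COLUMNS = ("id", "age", "exposure", "outcome")   # fixed column order -> ordered tuples
snapshots = []                                   # index = time point; stored tables never change

def add_snapshot(rows):
    """Append a new table; previously stored data is never modified."""
    snapshots.append([tuple(r) for r in rows])

add_snapshot([(1, 54, 0.2, 0), (2, 61, 0.7, 1)])
add_snapshot([(1, 54, 0.3, 0), (2, 61, 0.8, 1), (3, 47, 0.1, 0)])

# The three kinds of "cuts" through the stack:
col = COLUMNS.index("exposure")
column_cut = [[row[col] for row in t] for t in snapshots]             # one variable across time
row_cut    = [[row for row in t if row[0] == 1] for t in snapshots]   # one subject across time
time_cut   = snapshots[-1]                                            # one point in time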

http://www.agiledata.org/essays/mappingObjects.html
 
  • #42
gatztopher said:
1. Why is an ordered pair defined in terms of unordered pairs? Doesn't {{a}, {a, b}} = {{a, b}, {a}} = {{b, a}, {a}}, and if so, how does this in any way become ordered?

There are different order relations. Inclusion between sets is one such order, so we can define R < S ⇔ R ⊂ S.

Therefore it is easy to distinguish, and order, the two sets contained in {{a}, {a, b}}:
{a} < {a,b} = {b,a} ⇔ {a} ⊂ {a,b}, which is true.

We can define L(X) as the e where {e} = y such that y ∈ X and ∀z ∈ X, y ⊆ z.
Similarly, R(X) is the e where {e} = z∖y such that y ∈ X, z ∈ X and y ⊂ z.

(However, these definitions run into difficulty with (a,a), which is represented by {{a},{a,a}} = {{a}}; we would need a more complex definition of R for this case.)
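The same projections can be sketched in Python with nested frozensets; left() follows the definition of L above, and right() patches the degenerate (a, a) case by falling back to the first coordinate (that workaround is mine, not part of the definitions above):

Code:
def kpair(a, b):
    # Kuratowski encoding: (a, b) = {{a}, {a, b}}
    return frozenset([frozenset([a]), frozenset([a, b])])

def left(p):
    # L(p): the unique element common to every member of p.
    (a,) = frozenset.intersection(*p)
    return a

def right(p):
    # R(p): the element left over after removing the first coordinate from
    # the union of p's members; for (a, a), p = {{a}} and nothing is left,
    # so fall back to the first coordinate.
    a = left(p)
    rest = frozenset.union(*p) - {a}
    return next(iter(rest)) if rest else a

assert (left(kpair(1, 2)), right(kpair(1, 2))) == (1, 2)
assert (left(kpair(1, 1)), right(kpair(1, 1))) == (1, 1)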

2. How was this definition arrived at? Where I've looked, it's usually just stated without any context for why or how it emerged,

I guess it occurred during the axiomatization of mathematics and set theory. Once set theory had been axiomatized, logicians wanted to define the rest of mathematics in terms of sets.
 
