Complex Numbers Not Necessary in QM: Explained

  • #51
PeroK said:
Let ##S## be a set of rational numbers. All rational numbers are definable. Therefore ##\sup S## is definable.

Are all sets of rational numbers definable? The set of all sets of rational numbers is uncountable.
 
  • #52
A. Neumaier said:
if ##S## is a set of definable real numbers then ##\sup S## gives a definition of its supremum, hence the latter is also definable.

In light of the response I just gave to @PeroK, there is a missing step in this argument, which is in fact invalid. The supremum of a definable set of definable numbers is definable; but not all sets of definable numbers are definable sets; they can't all be, because the set of all sets of definable numbers is uncountable.
 
  • Like
Likes PeroK
  • #53
PeterDonis said:
Are all sets of rational numbers definable? The set of all sets of rational numbers is uncountable.

I think the problem is that you need a new definition of a set. For example, the set ##\{1/2, 3/4, 7/8 \dots \}## is a specific set of definable numbers. But the set ##\{a_1, a_2, \dots \}## where the ##a_n## are arbitrary, unspecified definable numbers won't have the property being claimed.

There are too many of these sets, so something would need to be done about that.
 
  • #54
PeroK said:
the set ##\{a_1, a_2, \dots \}## where the ##a_n## are arbitrary, unspecified definable numbers won't have the property being claimed

Yes, which is why I think you have found a missing premise in the argument @A. Neumaier was making (which, per my post #52, is actually invalid); so I'm not convinced that the definable real numbers have all of the necessary properties of the full set of real numbers to allow real analysis to be "built" on just the definable real numbers.
 
  • #55
PeterDonis said:
That's what I thought, but then I started reading about things like this...

https://en.wikipedia.org/wiki/Specker_sequence

...which is a sequence of computable real numbers whose supremum is not computable. But "computable" would seem to be a stricter notion than "definable", since the latter does not require that you actually know how to compute the supremum (for instance), only that you know how to define it using a countable set of axioms. So I guess that's the difference.
I am hardly an expert on this, but based on what I have read from various sources, here are a few things that might be helpful:
(1) I think examples like the Specker sequence might just be an artifact of requiring decimal expansions to be computable. When one changes the definitions, even computable analysis goes quite far. (A toy sketch of such a sequence is given at the end of this post.)

(2) Outside of purely computable mathematics, the vast majority of mathematics can be done in very weak systems. One often-cited system is ACA0 (one of the systems of reverse mathematics) ... but I am not really familiar with the details.
For example, things like the Specker sequence will hardly be a problem when we use arithmetic sets (even with the decimal-expansion definition) ... a very small but natural collection of subsets of the natural numbers.

But even outside of reverse mathematics, there are a good number of revisionist approaches that a number of people have applied successfully.
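For what it's worth, here is a toy Python sketch of the shape of a Specker-style sequence: each term is a computable rational, the sequence is nondecreasing and bounded, yet its supremum encodes the halting problem. The dictionary of "halting times" below is made-up stand-in data; a genuine construction needs an actual enumeration of Turing machines.

```python
from fractions import Fraction

# Toy stand-in for "machine i halts on empty input within n steps".
# A real Specker sequence needs a genuine enumeration of Turing machines;
# the data below is fake, just to show the shape of the construction.
TOY_HALTING_TIME = {0: 3, 2: 7, 5: 1}   # machines 1, 3, 4, ... never halt (toy assumption)

def halts_within(i, n):
    t = TOY_HALTING_TIME.get(i)
    return t is not None and t <= n

def specker_term(n):
    """a_n = sum of 2^-(i+1) over machines i <= n that have halted within n steps.
    Every a_n is a computable rational; in the genuine construction the supremum
    of {a_n} is not computable, since it encodes the halting problem."""
    return sum(Fraction(1, 2 ** (i + 1))
               for i in range(n + 1) if halts_within(i, n))

print([str(specker_term(n)) for n in range(9)])   # nondecreasing rationals
```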
 
  • #56
A. Neumaier said:
The stationary states whose energies gave the connection to the older quantum theory from spectroscopy have complex phases. With real wave functions one can handle static issues only.

Do you have a reference for that? I've read a few QM books and find it odd that this question is barely treated, if at all :)
 
  • #57
DarMM said:
It has global degrees of freedom inconsistent with special relativity.
Do you have a reference or can you elaborate?
 
  • #58
haushofer said:
Do you have a reference or can you elaborate?
Here are two I like:
https://arxiv.org/abs/1611.09029
http://www.dima.unige.it/microlocal/wp-content/uploads/2016/12/OPPIO.pdf

You can see that only the complex case doesn't contradict Poincaré symmetry.

It's also the only case that allows local tomography, i.e. the statistics of the state is recoverable from local measurements. This means for example that the state of a two particle system can be recovered from measurements on both particles individually:
https://arxiv.org/abs/1202.4513
 
  • Like
Likes maline
  • #59
haushofer said:
Historically, when did people realize that the wavefunction needs to be complex (i.e. have two real degrees of freedom) and that one real degree of freedom (i.e. a real scalar function) does not suffice? Was it with the introduction of Heisenberg's commutation relations?
The commutation relations are due to Born. Born is always forgotten, although without him Heisenberg didn't even know what he was doing at Helgoland. The formulation of matrix mechanics was mostly achieved by Born and Jordan (the latter even quantized the electromagnetic field already in the famous second paper by Born, Jordan, and Heisenberg, quite a while before Dirac).
 
  • Like
Likes Auto-Didact, Demystifier, haushofer and 1 other person
  • #60
Summing this up (@atyy this is a bit more accurate than my previous post):

QM is a probability theory with multiple sample spaces; it then has angles ##\theta_j## that represent relations between these sample spaces. The probabilities and interference angles can (at first glance) be combined into vectors that are either real, complex or quaternionic, which results in a much simpler Hilbert space formalism, whereas dealing with probabilities and interference angles directly is cumbersome.

However on closer inspection the quaternionic case has "too many" angles resulting in the possibility of one sample space interfering with another in such a way as to force its probabilities to exceed ##1##. Or another way of looking at it, the quaternionic uncertainty relations can imply uncertainties that break unitarity.

The real case then turns out to break Poincaré symmetry.

Thus the only consistent multiple-sample-space probability theory is one whose angles can be encoded in complex vectors, i.e. QM.
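As a minimal numerical illustration of how the interference angles enter in the complex case (the numbers below are my own toy values, not anything from the references): for two alternatives with probabilities ##p_1, p_2## and one relative angle ##\theta##, the complex-vector encoding reproduces ##p_1+p_2+2\sqrt{p_1 p_2}\cos\theta##.

```python
import numpy as np

# Two-path toy example: probabilities p1, p2 plus one relative interference
# angle theta (made-up illustration values).
p1, p2, theta = 0.25, 0.25, 2 * np.pi / 3

# Complex-amplitude encoding: p = |sqrt(p1) + sqrt(p2) * e^{i*theta}|^2
p_from_amplitudes = abs(np.sqrt(p1) + np.sqrt(p2) * np.exp(1j * theta)) ** 2

# The same thing written directly as "probabilities plus an interference term"
p_direct = p1 + p2 + 2 * np.sqrt(p1 * p2) * np.cos(theta)

print(p_from_amplitudes, p_direct)   # agree; the interference term vanishes at theta = pi/2
```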
 
  • Like
Likes Auto-Didact and Fra
  • #61
haushofer said:
Do you have a reference for that? I've read a few QM books and find it odd that this question is barely treated, if at all :)
Trivialities need no references and are rarely treated explicitly. If ##H\psi_0=E\psi_0## at ##t=0## with real ##\psi_0## then ##\psi(t)=e^{-itE/\hbar}\psi_0## is complex for most times ##t##.
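A quick numerical check of exactly this statement (units with ##\hbar=1## and the illustrative real profile for ##\psi_0## are my own choices for the sketch):

```python
import numpy as np

E = 2.0                          # some real eigenvalue (made-up number), hbar = 1
x = np.linspace(-5.0, 5.0, 11)
psi0 = np.exp(-x**2 / 2)         # a real profile standing in for a real eigenfunction

t = 0.7
psi_t = np.exp(-1j * E * t) * psi0   # psi(t) = e^{-iEt/hbar} psi_0

print(np.allclose(psi_t.imag, 0.0))        # False: genuinely complex for generic t
print(np.allclose(np.abs(psi_t), psi0))    # True: only a global phase was acquired
```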
 
  • Like
Likes Auto-Didact
  • #62
DarMM said:
[...] do with the definables? Can all of analysis be built atop them?
Yes, but it doesn't change anything. Any theory represented in first order logic is independent of the model used to represent it.
PeroK said:
If it can I'll eat my real analysis book.
Guten Appetit!
PeroK said:
Let ##S## be a set of rational numbers. All rational numbers are definable. Therefore ##\sup S## is definable. But, every real number is the supremum of a set of rational numbers. Hence, every real number is definable.
No. Most sets of rationals are not definable. (There are uncountably many sets of rationals, but only countably many of them can be defined.)
PeterDonis said:
not all sets of definable numbers are definable sets; they can't all be, because the set of all sets of definable numbers is uncountable.
Yes.
PeterDonis said:
This would seem to imply that there are more computable reals than definable reals.
No. Their number is countable but they do not form a model for the reals since the supremum axiom fails for them.
PeroK said:
But the set ##\{a_1, a_2, \dots \}## where the ##a_n## are arbitrary, unspecified definable numbers won't have the property being claimed.
This is not a well-defined set, as you specify neither the meaning of the ##a_i## nor the meaning of ##\dots##.
PeterDonis said:
I'm not convinced that the definable real numbers have all of the necessary properties of the full set of real numbers to allow real analysis to be "built" on just the definable real numbers.
Please acquaint yourself with Skolem's paradox, which - although there are more than uncountably many sets - gives countable models for ZFC, essentially by taking the definable sets as the sets in the model. The point is that there is no absolute notion of countability (as mentioned towards the end of the cited Wikipedia article). The notion of countability in the language describing the model is different from the notion of countability in the model itself!
 
  • Like
Likes Auto-Didact, eloheim and dextercioby
  • #63
A. Neumaier said:
The point is that there is no absolute notion of countability (as mentioned towards the end of the cited Wikipedia article). The notion of countability in the language describing the model is different from the notion of countability in the model itself!
Does the constructible universe represent a similar kind of phenomenon, or is it a bit different from this? I am completely unfamiliar with it (I have only a very vague intuition about it), but I am trying to understand whether this lies along the lines of what you are saying here.
My vague, pop-science understanding of it is as the "thinnest" class of sets that could serve as a model of ZFC, with any other model being "thicker" in the sense that it has more sets at the same level (is this anywhere close to being accurate?). But does this relate to uncountability in some concrete way?

P.S. It isn't a well-thought out question (and also probably too naive/faulty) so you might skip it. I am mostly asking to get a bit better intuition (for myself).
 
Last edited:
  • #64
SSequence said:
Does the constructible universe represent a similar kind of phenomenon?
The constructible universe serves a related purpose. It shows that within any model of ZF (the Zermelo-Fraenkel axioms) one can find another model for ZFC (i.e., one in which the axiom of choice also holds), and in which the generalized continuum hypothesis is valid.

All that stuff on model dependent issues is studied very thoroughly and in many ramifications by people working in mathematical logic.
 
Last edited:
  • Like
Likes Auto-Didact and SSequence
  • #65
PeroK said:
There are too many of these sets, so something would need to be done about that.
A. Neumaier said:
The notion of countability in the language describing the model is different from the notion of countability in the model itself!
The first notion of countability is a notion on the metalevel, the second one on the object level. If one confuses the two by mixing the levels, one gets logical nonsense of the same kind as in Russell's paradox - even with finite natural numbers:

Let ##n## be the smallest natural number that cannot be defined using less than 100 characters. This seems to define a natural number using less than 100 characters that by its very definition cannot be defined with less than 100 characters.
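The counting that drives the paradox is easy to make explicit: over any fixed finite alphabet there are only finitely many strings of fewer than 100 characters, so only finitely many natural numbers can be singled out by them. A small sketch (the 95-character printable-ASCII alphabet is my own assumption):

```python
# Number of strings of length 0..99 over a 95-character alphabet: a finite bound
# on how many natural numbers can be "defined using less than 100 characters".
alphabet_size = 95
descriptions = sum(alphabet_size ** k for k in range(100))

print(descriptions)                # finite, roughly 10**196
print(descriptions < 10 ** 200)    # True
```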
 
Last edited:
  • Like
Likes Auto-Didact and Demystifier
  • #66
I see two questions discussed here.

One is the origin of, or reason for, the seemingly natural use of complex numbers in QM; the other is the issue/problem(?) of uncountable or infinite sets.

Superficially they are independent, and I think DarMM put it well in that QM can be seen as a natural or efficient way to represent the information in a generalised probability theory, but one where you have different but dependent conjugate sample spaces.

So the question left is not "why complex numbers?" but "why does nature seem to prefer non-commutative probability spaces?" Is there an answer to this within physics?

The other question, of the physical correspondence to infinite (or worse, uncountable) amounts of information, gets less philosophical and more physical if you try to understand quantum mechanics as a form of information mechanics between interacting agents. Here the problem becomes: how can an agent (not a human scientist) encode and process infinite amounts of information in finite time? So I think any countable mimic of the reals will not solve the problem here; the problem is still infinite sets, and countable infinity is bad enough. It's just that if it's uncountably infinite, you are sort of permanently LOST - you lose track of all orderings. So things are more under control as long as they are countable, but we still have the ordering problem when allowing these logical systems to interact in time.

Incidentally, I think these two problems are related, because when you try to see physical interactions as computations in competition, wise resource handling becomes a survival trait. And it's my firm understanding that this is the best "explanation" of WHY nature prefers non-commutative structures: it's simply the most efficient way to structure yourself in an environment of hostile fellow agents trying to decode you.

This puts the coding into evolutionary context.

/Fredrik
 
  • #67
Fra said:
any countable mimic of the reals, will not solve the problem here, the problem is still infinite sets, and countable infinity is bad enough.
The subset of defined objects used by humanity is very finite, certainly of size less than ##10^{30}## [##=10^{12}## words or formulas produced per person ##\times 10^{10}## persons per generation ##\times 10^8## estimated generations humanity might exist], and hence less than the number of atoms in 20 tons of carbon.
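A quick sanity check of these order-of-magnitude numbers (using 12 g/mol for carbon and Avogadro's number; all inputs are rough):

```python
# Rough estimate from the post: words/formulas per person x persons per
# generation x number of generations.
defined_objects = 1e12 * 1e10 * 1e8          # = 1e30

# Atoms in 20 metric tons of carbon.
avogadro = 6.022e23                          # atoms per mole
atoms = (20 * 1e6 / 12.0) * avogadro         # grams / (g/mol) * atoms/mol, ~1.0e30

print(f"{defined_objects:.1e} defined objects vs {atoms:.1e} carbon atoms")
```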

This is more than enough for doing physics. The remaining countably many definable things are just the reservoir for creative work!
 
Last edited:
  • Like
Likes dextercioby
  • #68
DarMM said:
Here are two I like:
https://arxiv.org/abs/1611.09029
http://www.dima.unige.it/microlocal/wp-content/uploads/2016/12/OPPIO.pdf

You can see that only the complex case doesn't contradict Poincaré symmetry.

It's also the only case that allows local tomography, i.e. the statistics of the state is recoverable from local measurements. This means for example that the state of a two particle system can be recovered from measurements on both particles individually:
https://arxiv.org/abs/1202.4513
Thanks. It seems rather technical, but that's my problem ;) Is there an intuitive way of understanding this theorem? And is something similar true for the non-relativistic case, i.e. for the Bargmann algebra?
 
  • #69
A. Neumaier said:
Trivialities need no references and are rarely treated explicitly. If ##H\psi_0=E\psi_0## at ##t=0## with real ##\psi_0## then ##\psi(t)=e^{-itE/\hbar}\psi_0## is complex for most times ##t##.
Well, yes, but then you assume a certain operator form for the Hamiltonian. Maybe I'm stupid or missing something simple, so let me rephrase my question. Imagine I try to construct QM from first principles, similarly to how Schrödinger did; however, I want my wave functions and operators to be strictly real. What, according to you, is the first inconsistency that then blows up in my face?
 
  • #70
haushofer said:
Well, yes, but then you assume a certain operator form for the Hamiltonian. Maybe I'm stupid or missing something simple, so let me rephrase my question. Imagine I try to construct QM from first principles, similarly to how Schrödinger did; however, I want my wave functions and operators to be strictly real. What, according to you, is the first inconsistency that then blows up in my face?
That you cannot even begin. Which dynamics does your attempt assume? And how does it account for the spectral features that had to be explained (and were explained) by Schrödinger?
 
  • Like
Likes DanielMB and dextercioby
  • #71
A. Neumaier said:
Yes, but it doesn't change anything. Any theory represented in first order logic is independent of the model used to represent it.

Guten Appetit!

No. Most sets of rationals are not definable. (There are uncountably many sets of rationals, but only countably many of them can be defined.)

Yes.

No. Their number is countable but they do not form a model for the reals since the supremum axiom fails for them.

This is not a well-defined set, as you specify neither the meaning of the ##a_i## nor the meaning of ##\dots##.

It is a well-defined set in the usual mathematical framework. All of mathematics generally deals with objects being defined only by their properties, e.g. "let ##f## be a continuous function", of which there are uncountably many.

You have introduced a non-standard approach where numbers, sets and functions (presumably) are restricted to ones that can be specified by some further criteria, leaving the remaining numbers, sets or functions "anonymous". Presumably, however, these objects still exist in the new mathematical framework. For example, you don't have any uncountable sets of numbers with non-zero measure over which to integrate, unless you include all the real numbers.

In particular, you are now confusing your new definition of a definable set with the concept of a well-defined set in standard analysis.

Finally, there is no paradox in standard real analysis with the set of all real numbers. It's not a definable set in your terminology but that doesn't make it paradoxical.
 
  • #72
PeroK said:
It is a well-defined set in the usual mathematical framework.
No, it isn't. ##\dots## is not a well-defined piece of notation, unless you know the law of formation of the ##a_i##.
Moreover, you mix the metalevel and the object level by treating anonymous defined numbers as well-defined things on the object level.
Mixing levels may lead to contradictions, as in my example in post #65.

PeroK said:
Finally, there is no paradox in standard real analysis with the set of all real numbers. It's not a definable set in your terminology
You thoroughly misunderstand what is being done in mathematical logic. The set of all real numbers is a well-defined object (e.g., in ZFC with Dedekind cuts).
 
  • Like
Likes Auto-Didact
  • #73
A. Neumaier said:
No, it isn't. ##\dots## is not a well-defined piece of notation, unless you know the law of formation of the ##a_i##.
Moreover, you mix the metalevel and the object level by treating anonymous defined numbers as well-defined things on the object level.
Mixing levels may lead to contradictions, as in my example in post #65. You thoroughly misunderstand what is being done in mathematical logic. The set of all real numbers is a well-defined object (e.g., in ZFC with Dedekind cuts).

The usual definition of a limit has, for example, ##\forall \ \epsilon > 0##.

What you are saying is that that is wrong and it should be:

##\forall \ ## definable ##\epsilon > 0##.

Using definable reals may be an alternative, but you cannot argue that mainstream real analysis is based on a countable subset of the real numbers.
 
  • #74
PeroK said:
Using definable reals may be an alternative, but you cannot argue that mainstream real analysis is based on a countable subset of the real numbers.
I didn't claim that. Mainstream real analysis applies equally to any model of the real numbers, no matter whether or not that model is countable. I only claimed that the definable reals form a model of the real numbers when interpreted in the model of ZFC consisting of definable sets.

PeroK said:
The usual definition of a limit has, for example, ##\forall \ \epsilon > 0##.

What you are saying is that that is wrong and it should be:

##\forall \ ## definable ##\epsilon > 0##.
No.

Anything said in group theory about a group applies in different ways to the different models of a group.

Anything said about Peano arithmetic that applies to the standard model ##N## of the natural numbers also applies to a nonstandard model ##N'=2N## defined inside of ##N## by redefining the successor notion to mean adding 2.
The theory remains completely unaltered.
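A small sketch of that reinterpretation. The post only says that the successor is redefined to mean adding 2; the concrete choices below for addition and multiplication are my own, chosen so that ##n\mapsto 2n## becomes an isomorphism:

```python
# Interpret the natural number n as 2n and redefine the operations accordingly.
def encode(n):   return 2 * n
def succ_(a):    return a + 2          # successor in N' = 2N
def add_(a, b):  return a + b          # addition in N'
def mul_(a, b):  return (a * b) // 2   # multiplication in N'

# The encoding transports the usual operations to the primed ones, so the same
# first-order arithmetic statements hold in N and in N'.
ok = all(
    encode(m + 1) == succ_(encode(m)) and
    encode(m + n) == add_(encode(m), encode(n)) and
    encode(m * n) == mul_(encode(m), encode(n))
    for m in range(50) for n in range(50)
)
print(ok)   # True
```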

The same happens for the real numbers. From a standard model ##R## of the reals inside ZFC we may construct a second model ##R'## of the real numbers inside ##R##, consisting only of the definable reals in ##R## (interpreted in the nonstandard set theory ZFC' of all definable sets inside ZFC). Then the same abstract notions mean in ##R## what they are defined to mean in ##R##, and in ##R'## what they are defined to mean in ##R'##, though on the level of the theory there is no difference.

The point is that the notion of bijection also changes, so that only some of the bijections in ZFC remain bijections in ZFC'. This means that a set that is countable in ZFC (but only by means of undefinable bijections to the natural numbers) is no longer countable in ZFC'. The same happens with ##\forall##: it means "for all elements in the model", and hence in ##R'## "for all definable reals". In contrast, adding 'definable' after ##\forall## in the theory, as you did, would alter the theory!
 
  • #75
A. Neumaier said:
Anything said about Peano arithmetic that applies to the standard model ##N## of the natural numbers also applies to a nonstandard model ##N'=2N## defined inside of ##N## by redefining the successor notion to mean adding 2.
The theory remains completely unaltered.
While this is correct, I think there is a huge difference between the real numbers in models (of set theory) and natural numbers in models of PA. With PA I know what structure I "really" have in mind. I don't really care whether any other structure satisfies the axioms or not (meaning it is not mandatory to look at the other models at all to know what you are talking about).
Sure there is a problem with LEM (in a limited sense), but that is also circumvented by using its counterpart in the form of HA (as far as I can understand).

But with ZFC, I have not the slightest idea (in any real sense), even when giving absolutely zero thought to LEM. As per my limited understanding, as soon as the subsets of ##\omega## we use are anything smaller than constructible reals, the resulting collection of reals will be "too small" to serve as a model (is this correct?).
It is still good to know that much is recoverable without definitions that are simply highly impredicative to me (even in smaller collections of real numbers). Sure, it is nice to know that we have these constructions and can use them, but beyond that I am not sure all of it "means" anything [well, if set theory is sound for number-theoretic statements, then I suppose it does in a way ... so far I don't think anyone knows of such a statement that is intuitively "very clearly" true or false but which ZFC proves the other way ... meaning people do have some sort of (even if vague) soundness belief regarding it, even when they don't say it].
 
  • #76
SSequence said:
With PA I know what structure I "really" have in mind. [...]
But with ZFC, I have not the slightest idea (in any real sense)
I know what I have in mind for Peano arithmetic, for the reals, and for ZFC - namely the models obtained by the definable natural numbers, reals, and ZFC sets.

This is what mathematicians actually work with - always with finite formulas that at worst involve anonymous numbers or sets implicitly or explicitly quantified over. (Unless they are logicians and then get interested in ramifications about what sort of models are possible.) Though we cannot answer easily most of the (countably many) questions we might pose. But we restrict to questions that we find useful, and where we expect to be able to make progress.

You might be interested in reading my paper The FMathL mathematical framework, which addresses such things from my personal perspective.

SSequence said:
As per my limited understanding, as soon as the subsets of ##\omega## we use are anything smaller than constructible reals, the resulting collection of reals will be "too small" to serve as a model (is this correct?).
No. For example, calling ZFC' the model of ZFC-constructible sets we may consider the model ZFC'' of ZFC'-definable sets, etc., and in this way get an infinite nested sequence of smaller and smaller models ZFC##{}^k,~k=0,1,2,\ldots##.
 
  • Like
Likes Auto-Didact and SSequence
  • #77
A. Neumaier said:
I know what I have in mind for Peano arithmetic, for the reals, and for ZFC - namely the models obtained by the definable natural numbers, reals, and ZFC sets.

This is what mathematicians actually work with - always with finite formulas that at worst involve anonymous numbers or sets implicitly or explicitly quantified over. (Unless they are logicians and then get interested in ramifications about what sort of models are possible.) Though we cannot answer easily most of the (countably many) questions we might pose. But we restrict to questions that we find useful, and where we expect to be able to make progress.

You might be interested in reading my paper The FMathL mathematical framework, which addresses such things from my personal perspective.
Yes, I think that usually one of the purposes of setting up a background framework or theory is to push philosophy into the background (since, at least in some ways, it comes "before" we set everything up) and to start getting things done.

In my own personal view, the strongest theory whose soundness I "think" I could convince myself of (beyond all reasonable doubt) is HA (of course I could choose something really weak ... but note that I used "strongest"). Beyond that, I can't say with complete certainty one way or the other. Of course that doesn't mean at all that I believe this corresponds to all the number-theoretic statements that could be proven (not at all, of course!).

Maybe someday my view will change ... or maybe not.

A. Neumaier said:
No. For example, calling ZFC' the model of ZFC-constructible sets we may consider the model ZFC'' of ZFC'-definable sets, etc., and in this way get an infinite nested sequence of smaller and smaller models ZFC##{}^k,~k=0,1,2,\ldots##.
Hmmm, I find this genuinely interesting (even if I don't understand it). OK, a small question ... a very naive but honest question (sorry if it's really off): what about ##\omega_1## in all these "smaller" models, if they also contain fewer reals? Since there can be no infinite backward chain, will it remain the "same" in infinitely many of these models?
 
Last edited:
  • #78
haushofer said:
Well, yes, but then you assume a certain operator form for the Hamiltonian. Maybe I'm stupid or missing something simple, so let me rephrase my question. Imagine I try to construct QM from first principles, similarly to how Schrödinger did; however, I want my wave functions and operators to be strictly real. What, according to you, is the first inconsistency that then blows up in my face?

A. Neumaier said:
That you cannot even begin. Which dynamics does your attempt assume? And how does it account for the spectral features that had to be explained (and were explained) by Schrödinger?

I don't know if haushofer can or "cannot even begin", but apparently Schrödinger could :-) I have mentioned his work (Nature, v. 169, 538 (1952)) several times. He noted that one can make the wave function real by a gauge transform (he considered the Klein-Gordon equation in an electromagnetic field). Schrödinger's conclusion: "That the wave function of [the Klein-Gordon equation] can be made real by a change of gauge is but a truism, though it contradicts the widespread belief about 'charged' fields requiring complex representation".

Let me emphasize that this is not about a replacement of complex numbers by pairs of real numbers. I am sure you don't need any help to understand how Schrödinger's approach of 1952 can "account for the spectral features that had to be explained (and were explained) by Schrödinger" in 1926.

I cannot be sure that real functions are enough for everything in quantum theory, but they are sufficient for much more than "widespread beliefs" would suggest (please see references in https://www.physicsforums.com/threa...d-of-spinor-field-in-yang-mills-field.960244/).
 
  • #79
A. Neumaier said:
The subset of defined objects used by humanity is very finite, certainly of size less than ##10^{30}## [##=10^{12}## words or formulas produced per person ##\times 10^{10}## persons per generation ##\times 10^8## estimated generations humanity might exist], and hence less than the number of atoms in 20 tons of carbon.

This is more than enough for doing physics. The remaining countably many definable things are just the reservoir for creative work!
I agree with this.

Maybe I was fuzzy: my argument was not "against countable sets" in favour of the reals, it was the opposite - against infinite sets, and against real analysis as the ideal language for an inference framework, because the infinite embeddings we humans use for "creative work" make things confusing. The embedding is non-physical. And even if everyone agrees, we are still lost in this embedding due to the way the current paradigms work.

I mean, if we see QM as a generalized probability theory, which in turn is based on real numbers, we have already stepped over how this corresponds to reality. The problem is not to assign a real number to a degree of belief; the problem comes when one tries to consider the probability of this probability a priori, as there are infinitely many options. And how can we claim to understand how this is to be normalized if the embedding is non-physical? This is what I want to reconstruct and cure.

So my argument is that the number of possible distinguishable states, from the perspective of an agent (say a subatomic structure), is probably also finite at any instant of time. If you start to think in these terms, we are led to researching new ways of "computing" and "representing things", not from the human perspective, but from the inside perspective, which has a better physical correspondence than the continuum embedding that views everything from an infinite boundary (i.e. the scattering perspective), where in practice one always has an infinite amount of memory and processing power relative to the subsystem in the middle -> here the continuum approximation is fine! And this is the perspective that is also the basis for QFT etc., as I understand it. But this is not satisfactory in QG and unification approaches where you also want to address the measurement problem.

/Fredrik
 
  • #80
PeroK said:
It is a well-defined set in the usual mathematical framework. All of mathematics generally deals with objects being defined only by their properties, e.g. "let ##f## be a continuous function", of which there are uncountably many.

You have introduced a non-standard approach where numbers, sets and functions (presumably) are restricted to ones that can be specified by some further criteria, leaving the remaining numbers, sets or functions "anonymous". Presumably, however, these objects still exist in the new mathematical framework. For example, you don't have any uncountable sets of numbers with non-zero measure over which to integrate, unless you include all the real numbers.

In particular, you are now confusing your new definition of a definable set with the concept of a well-defined set in standard analysis.

Finally, there is no paradox in standard real analysis with the set of all real numbers. It's not a definable set in your terminology but that doesn't make it paradoxical.

As far as I know, nobody tries to do mathematics using only definable objects, because the usual mathematical axioms don't hold when restricted to definable objects. However, the set of reals is certainly definable.

The definition of "definable" is this: An object ##O## is definable (relative to a language, and relative to an intended model of that language) if there is a formula ##\phi(x)## such that ##O## is the only object satisfying that formula. In the particular case of sets, people often say that a collection ##S## is definable if there is a formula ##\phi(x)## and ##S## consists of all the things satisfying formula ##\phi##.

In the particular case of the reals, you have to work your way up to it:
  • An ordinal is a set that is well-ordered by set membership.
  • A natural number is a finite ordinal
  • An integer is an equivalence class of pairs of naturals, where ##(x,y) \equiv (x',y')## iff ##x+y' = x' + y## (##(x,y)## is to be interpreted as ##x - y##).
  • A rational is an equivalence class of pairs of integers ##(x,y)## with ##y \neq 0##, and where ##(x,y) \equiv (x',y')## iff ##x \cdot y' = x' \cdot y##.
  • A real number is a set ##r## of rationals such that if ##x \in r## and ##x \lt y##, then ##y \in r##.
Then you have a formula ##real(r)## saying that ##r## is a real, and voila, the set of reals is a definable collection.

(This way of defining the basic objects of mathematics is a pain, because at every level of complexity, you have different objects. The zero for naturals is not the zero for integers, which is not the zero for rationals, which is not the zero for reals, which is not the zero for complex numbers. But each level contains a "copy" of the objects in the previous level.)
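As a toy illustration of the equivalence relations in the list above (plain machine integers stand in for the set-theoretic naturals and integers; this is only meant to show why the "zero" at each level is a different object):

```python
# An "integer" is a pair (x, y) of naturals standing for x - y;
# a "rational" is a pair (x, y) of integers, y != 0, standing for x / y.
def int_eq(p, q):      # (x, y) ~ (x', y')  iff  x + y' == x' + y
    (x, y), (xp, yp) = p, q
    return x + yp == xp + y

def rat_eq(p, q):      # (x, y) ~ (x', y')  iff  x * y' == x' * y
    (x, y), (xp, yp) = p, q
    return x * yp == xp * y

print(int_eq((3, 3), (0, 0)))   # True: both pairs represent the integer zero
print(rat_eq((1, 2), (2, 4)))   # True: 1/2 and 2/4 are the same rational
print((0, 0) == 0)              # False: the integer zero is not the natural zero
```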

The set of all reals is definable. But there is a distinction between the set of all reals and the set of all definable reals. Weirdly, the set of all reals is definable, but the set of all definable reals is not, because "definable" is not definable. This is where you have to be careful about the distinction between language and metalanguage. Given a language ##L##, you can, in the meta language, define what it means to be definable in language ##L##. But not in ##L## itself.
 
  • Like
Likes Auto-Didact, eloheim, Demystifier and 2 others
  • #81
stevendaryl said:
The definition of "definable" is this: An object ##O## is definable (relative to a language, and relative to an intended model of that language) if there is a formula ##\phi(x)## such that ##O## is the only object satisfying that formula. In the particular case of sets, people often say that a collection ##S## is definable if there is a formula ##\phi(x)## and ##S## consists of all the things satisfying formula ##\phi##.

In the particular case of the reals, you have to work your way up to it:
  • An ordinal is a set that is well-ordered by set membership.
  • A natural number is a finite ordinal
  • An integer is an equivalence class of pairs of naturals, where ##(x,y) \equiv (x',y')## iff ##x+y' = x' + y## (##(x,y)## is to be interpreted as ##x - y##).
  • A rational is an equivalence class of pairs of integers ##(x,y)## with ##y \neq 0##, and where ##(x,y) \equiv (x',y')## iff ##x \cdot y' = x' \cdot y##.
  • A real number is a set ##r## of rationals such that if ##x \in r## and ##x \lt y##, then ##y \in r##.
Then you have a formula ##real(r)## saying that ##r## is a real, and voila, the set of reals is a definable collection.

(This way of defining the basic objects of mathematics is a pain, because at every level of complexity, you have different objects. The zero for naturals is not the zero for integers, which is not the zero for rationals, which is not the zero for reals, which is not the zero for complex numbers. But each level contains a "copy" of the objects in the previous level.)

The set of all reals is definable. But there is a distinction between the set of all reals and the set of all definable reals. Weirdly, the set of all reals is definable, but the set of all definable reals is not, because "definable" is not definable. This is where you have to be careful about the distinction between language and metalanguage. Given a language ##L##, you can, in the meta language, define what it means to be definable in language ##L##. But not in ##L## itself.

Thanks for that. Although I've never formally studied the set-theoretic foundations of mathematics, nothing you say surprises me. That's what I understood to be the case.

However:

stevendaryl said:
As far as I know, nobody tries to do mathematics using only definable objects, because the usual mathematical axioms don't hold when restricted to definable objects.

The whole argument presented on this thread, certainly as far as I can follow it, is that you can do mathematics using only definable reals and all of analysis and calculus survives intact:

DarMM said:
This is very interesting! What can one not do with the definables? Can all of analysis be built atop them?

PeroK said:
If it can I'll eat my real analysis book.

A. Neumaier said:
Guten Appetit!

So, what do you think? Do I have to eat my analysis book or not?
 
  • #82
Go for Abbott, Rudin is a bit bitter.
 
  • #83
PeroK said:
The whole argument presented on this thread, certainly as far as I can follow it, is that you can do mathematics using only definable reals and all of analysis and calculus survives intact:

I don't think that's true. Or it depends on exactly what you mean.

If you have a fixed language, ##L##, then you can prove that there are only countably many reals that are definable in that language. So ordinary measure theory would say that you can't have a set of definable reals with a nonzero Lebesgue measure.

However, if you leave it vague exactly what "definable" means, maybe you don't run into problems. In intuitionistic mathematics, you can't prove the existence of a noncomputable real, but the statement "all reals are computable" is not provable (although it might be true in some sense).
 
  • #84
I have done some Googling, and I did not find the argument, but I saw an argument once that you need complex numbers for quantum amplitudes if you want there to be continuous transformations relating any two quantum states. This sounds like something that @bhobba would know about.
 
  • #85
How about the following statement? For any real number ##x## there is a language ##L## in which ##x## is definable. However, there is no language ##L## such that any real number ##x## is definable in ##L##.
 
  • #86
stevendaryl said:
the statement "all reals are computable" is not provable (although it might be true in some sense).
It is false in any meaningful sense, because in any model, the number of reals is uncountable relative to this model but the number of computable reals is countable. The same holds for definable in place of computable.

But given any model ##M## of ZFC you can construct a countable model ##C## of ZFC (consisting of the set of meaningful formulas, factored by the equivalence relation of being equal in ##M##). The reals in ##C## are by construction countable in terms of the notion of countable defined in ##M##, but uncountable in terms of the notion of countable defined in ##C##.

Demystifier said:
How about the following statement? For any real number ##x## there is a language ##L## in which ##x## is definable. However, there is no language ##L## such that any real number ##x## is definable in ##L##.
I don't think this is true.
 
  • #87
A. Neumaier said:
It is false in any meaningful sense, because in any model, the number of reals is uncountable relative to this model but the number of computable reals is countable. The same holds for definable in place of computable.

My quote was from the standpoint of intuitionistic mathematics. There, they don't assume the existence of any noncomputable reals. Or rather, there is no proof that there exists a noncomputable real.
 
  • #88
stevendaryl said:
My quote was from the standpoint of intuitionistic mathematics. There, they don't assume the existence of any noncomputable reals. Or rather, there is no proof that there exists a noncomputable real.
The intuitionistic reals behave mathematically very different from the reals taught in any analysis course.

In intuitionistic math, most concepts from ZFC ramify into several meaningful nonequivalent ones, depending on which intuitionistic version of the axioms one starts with (all of which would become equivalent if the axiom of choice were assumed in addition). Thus one has to be very careful to know which version of the reals one is talking about.
 
  • #89
stevendaryl said:
I have done some Googling, and I did not find the argument, but I saw an argument once that you need complex numbers for quantum amplitudes if you want there to be continuous transformations relating any two quantum states. This sounds like something that @bhobba would know about.

I think Hardy's axiom 5 from this paper (Quantum Theory from Five Reasonable Axioms) mentions something like this.

https://arxiv.org/abs/quant-ph/0101012

Cheers
 
  • Like
Likes bhobba
  • #90
cosmik debris said:
I think Hardy's axiom 5 from this paper (Quantum Theory from Five Reasonable Axioms) mentions something like this.

It's tied up with entanglement:
https://arxiv.org/abs/0911.0695

Thanks
Bill
 
  • #91
A. Neumaier said:
I don't think this is true.
Why?
 
  • #92
Demystifier said:
How about the following statement? For any real number ##x## there is a language ##L## in which ##x## is definable. However, there is no language ##L## such that any real number ##x## is definable in ##L##.

Well, I think that's trivially true. Given any real ##r## between 0 and 1, you can add a function symbol ##f## and add infinitely many axioms saying ##f(n) = r_n##. Then within this theory, the number ##r## is definable.
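A sketch of what such an axiom scheme looks like. An arbitrary real cannot be handed to a program, so the digits of ##\sqrt{2}-1## serve as a stand-in here; for an undefinable ##r## the same scheme still exists as an abstract (noncomputable) set of axioms, it just cannot be written down:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60
r = Decimal(2).sqrt() - 1          # stand-in real in (0, 1); think "arbitrary r"
digits = str(r)[2:]                # its decimal digits r_1 r_2 r_3 ...

def axiom(n):
    """The n-th axiom of the (infinite) theory: 'f(n) = r_n'."""
    return f"f({n}) = {digits[n - 1]}"

print([axiom(n) for n in range(1, 8)])
```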
 
  • #93
Demystifier said:
Why?

stevendaryl said:
Well, I think that's trivially true. Given any real ##r## between 0 and 1, you can add a function symbol ##f## and add infinitely many axioms saying ##f(n) = r_n##. Then within this theory, the number ##r## is definable.
No. The problem is that you cannot ''give'' undefinable reals!

''Given any real ##r##'' makes ##r## an anonymous real, never a particular one. It is just the conventional way of expressing that what follows has a formal variable ##r## quantified over with a universal quantifier. Thus nothing is actually defined.
 
  • #94
A. Neumaier said:
No. The problem is that you cannot ''give'' undefinable reals!

In mathematical logic, one is allowed to consider theories with a non-computable collection of axioms. For example, the true theory of arithmetic. We can't actually write down such a collection, but it exists (in the same sense that any abstract mathematical objects exist). So for every real ##r##, there exists (as a mathematical object) a theory that defines ##r## uniquely. We can't write it down, but that's a different matter.
 
  • #95
stevendaryl said:
In mathematical logic, one is allowed to consider theories with a non-computable collection of axioms. For example, the true theory of arithmetic. We can't actually write down such a collection, but it exists (in the same sense that any abstract mathematical objects exist). So for every real ##r##, there exists (as a mathematical object) a theory that defines ##r## uniquely. We can't write it down, but that's a different matter.
Where is one allowed to do that? Not in first order logic, which we discuss here. In strange logics, strange things may of course happen.
 
  • #96
A. Neumaier said:
Where is one allowed to do that? Not in first order logic, which we discuss here. In strange logics, strange things may of course happen.

I am talking about first-order logic. In mathematical logic, one can study theories where the set of axioms is noncomputable.
 
  • #97
stevendaryl said:
I am talking about first-order logic. In mathematical logic, one can study theories where the set of axioms is noncomputable.
Please give a reference where this is done and leads to significant results. In this case one doesn't even know what the axioms are...
 
  • #98
A. Neumaier said:
Please give a reference where this is done and leads to significant results. In this case one doesn't even know what the axioms are...

Well, the most important non-axiomatizable theory is the theory of true arithmetic. You define the language of arithmetic, which is typically:
  • constant symbol ##0##
  • unary function symbol ##S(x)##
  • two binary function symbols ##+## and ##\times##
  • one relation symbol ##=##
You can, in set theory, define an interpretation of these symbols in terms of the finite ordinals, and then you can define the theory of true arithmetic as the set of formulas in this language that are true under this interpretation.

It's a noncomputable set of formulas, but it's definable in ZFC (and in much weaker theories).
 
  • #99
stevendaryl said:
Well, the most important non-axiomatizable theory is the theory of true arithmetic. You define the language of arithmetic, which is typically:
  • constant symbol ##0##
  • unary function symbol ##S(x)##
  • two binary function symbols ##+## and ##\times##
  • one relation symbol ##=##
You can, in set theory, define an interpretation of these symbols in terms of the finite ordinals, and then you can define the theory of true arithmetic as the set of formulas in this language that are true under this interpretation.

It's a noncomputable set of formulas, but it's definable in ZFC (and in much weaker theories).
''true arithmetic'' is not a theory in first order logic, but ''the set of all sentences in the language of first-order arithmetic that are true'' in the standard model of the natural numbers (itself not a first order logic notion).

Yes, it is a noncomputable set of formulas, but not a set of axioms of some first order theory. It is a nonaxiomatizable theory (i.e., not a first order logic theory), as you correctly said.
 
  • #100
A. Neumaier said:
''true arithmetic'' is not a theory in first order logic,

Yes, it is. In the study of mathematical logic, a "theory" is a set of formulas closed under logical implication.
 