A Philosophy of quantum field theory

Demystifier
I usually don't read papers on philosophy of quantum field theory, but this one is really good: http://philsci-archive.pitt.edu/8890/

In particular, the prelude which I quote here is a true gem:
"Once upon a time there was a community of physicists. This community be-
lieved, and had good reason to believe, that quantum mechanics, not classical
mechanics, was the right framework in which to do physics. They also had
a good understanding, at the classical level, of the dynamics of solid bodies
(vibrations in crystals, for instance): they knew, for example, that some such
bodies could be analysed using Lagrangians like

L =φ^2 − (∇φ)^2 + (higher terms) (1)

where φ(x) is the displacement of the part of the crystal which is at position x
at equilibrium.
But the physicists were sad, because they knew nothing at all about the
microscopic structure of matter, and so they did not have a good quantum
theory of vibrations in crystals or of other solid-matter dynamics.
So one day, they set out to quantize their classical theories of solid mat-
ter. At first, they tried to do it naively, by putting the classical theory into
Hamiltonian form and replacing classical observables with self-adjoint opera-
tors. This worked quite well until the higher-order terms in (1) were included.
But when the physicists tried to include those higher order terms, the theory
became mathematically very badly behaved — all the calculations contained
integrals that diverged to infinity.
Soon the physicists discovered that they could extract working calculational
results if they just assumed that displacements couldn’t vary on arbitrarily
short lengthscales. This amounted to “cutting off” the range of integration in
the divergent integrals, so that they got a finite result. When they did their
calculations this way, the answers agreed very well with experiment.
But the physicists were still sad. “It’s ad hoc”, they said. “It’s inelegant”,
they lamented. “It conflicts with the Euclidean symmetries of solid matter”,
they cried.
So they went back to basics, and looked for an axiomatised, fully rigorous
quantum theory, with displacements definable on arbitrarily short lengthscales
and with exact Euclidean symmetries.
And to this day, they are still looking."
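The "cutting off" move in the story can be made concrete with a toy example (my own illustration, not one from the paper): the integral ##\int_0^\Lambda k^3\,dk/(k^2+m^2)^2## diverges logarithmically as ##\Lambda\to\infty##, but every finite cutoff gives a finite answer, and each decade of cutoff adds only about ##\ln 10 \approx 2.30##:

```python
import numpy as np

def regulated_integral(cutoff, m=1.0, n=400_000):
    """Trapezoid approximation of the integral of k^3/(k^2+m^2)^2
    over k from 0 to `cutoff` (a toy logarithmically divergent integral)."""
    k = np.linspace(0.0, cutoff, n)
    f = k**3 / (k**2 + m**2) ** 2
    return float(np.sum(f[1:] + f[:-1]) * 0.5 * (k[1] - k[0]))

# The exact regulated value is (1/2)*(log(1 + L^2/m^2) + m^2/(L^2 + m^2) - 1),
# which grows like log(L): each decade of cutoff adds roughly ln(10) ~ 2.30.
for cutoff in (10.0, 100.0, 1000.0):
    print(cutoff, regulated_integral(cutoff))
```

Nothing here decides whether the cutoff is "real"; it only shows why the regulated theory is perfectly usable at every finite ##\Lambda##.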
 
He gets points for mentioning Jon Kogut. He loses points for mixing up regularization and renormalization.
 
His basic point is that our working assumption about "reality" should be that there is some UV cutoff, with QFT as an effective theory per Wilson. This is because the long effort to construct a mathematically sensible, UV-complete QFT (in 4D, with interactions) has failed, so perhaps such a theory cannot even exist.

Well, how is the situation with a cutoff? Do we have a mathematically rigorous account of what fields-with-cutoff are, without committing to a specific cutoff scheme such as a lattice?
 
maline said:
the long effort to construct a mathematically sensible, UV-complete QFT (in 4D, with interactions) has failed,
It has not failed, it just remained inconclusive. Not a single argument is known that would indicate that such QFTs do not exist.
 
Demystifier said:
when the physicists tried to include those higher order terms, the theory
became mathematically very badly behaved — all the calculations contained
integrals that diverged to infinity.
The same happens already for the ordinary quantum mechanics of a single particle in a delta function potential, which is perfectly well-defined if one takes the right limit. No cutoff is needed.

Thus the bad behavior just means that the naive approach was wrong, not that cutoff is the natural remedy.
 
A. Neumaier said:
The same happens already for the ordinary quantum mechanics of a single particle in a delta function potential, which is perfectly well-defined if one takes the right limit. No cutoff is needed.

Thus the bad behavior just means that the naive approach was wrong, not that cutoff is the natural remedy.
I disagree. Even if there is a way to treat the delta function rigorously, from a physical point of view a cutoff is a natural remedy. In nature we don't have potentials that look exactly like a delta function, but we do have potentials that look like very narrow Gaussians.
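For what it's worth, this can be checked numerically (my own sketch, in units ##\hbar=m=1##): approximate ##-\alpha\,\delta(x)## by normalized Gaussian wells ##V_\sigma(x) = -\alpha\,e^{-x^2/2\sigma^2}/(\sigma\sqrt{2\pi})## and watch the ground-state energy approach the exact delta-well value ##-\alpha^2/2## as ##\sigma\to 0##:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def ground_energy(sigma, alpha=1.0, half_width=8.0, dx=0.004):
    """Ground-state energy of H = -(1/2) d^2/dx^2 + V_sigma(x), discretized
    by finite differences in a box [-half_width, half_width] (Dirichlet walls)."""
    x = np.arange(-half_width, half_width + dx, dx)
    V = -alpha * np.exp(-x**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
    diag = 1.0 / dx**2 + V
    offdiag = np.full(len(x) - 1, -0.5 / dx**2)
    # Lowest eigenvalue (index 0) of the tridiagonal Hamiltonian.
    return eigh_tridiagonal(diag, offdiag, eigvals_only=True,
                            select='i', select_range=(0, 0))[0]

# Exact delta-well ground-state energy: E = -alpha^2/2 = -0.5.
for sigma in (0.4, 0.1, 0.025):
    print(sigma, ground_energy(sigma))
```

The convergence is slow (the error shrinks roughly linearly in ##\sigma##), but no delta-function trickery is needed: every finite-width well is an ordinary, well-defined quantum problem.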
 
Demystifier said:
I disagree. Even if there is a way to treat the delta function rigorously, from a physical point of view a cutoff is a natural remedy. In nature we don't have potentials that look exactly like a delta function, but we do have potentials that look like very narrow Gaussians.
This is not a good argument. In nature we also don't have real or complex numbers, but without them physics - including your favorite, Bohmian mechanics! - would be impossible.
 
A. Neumaier said:
This is not a good argument. In nature we also don't have real or complex numbers, but without them physics - including your favorite, Bohmian mechanics! - would be impossible.
Bohmian mechanics (and quantum mechanics) can be written down in terms of real numbers only, without using complex numbers. And more to the point, it's possible to approximate (arbitrarily well) quantum/Bohmian mechanics by using a (sufficiently large) discrete set of numbers instead of reals.
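The first claim is easy to illustrate (a toy two-level sketch of my own, with an assumed Hamiltonian): writing ##\psi = u + iv## turns ##i\dot\psi = H\psi## into the purely real pair ##\dot u = Hv##, ##\dot v = -Hu##:

```python
import numpy as np

H = np.array([[1.0, 0.3],
              [0.3, -1.0]])      # real symmetric 2-level Hamiltonian, hbar = 1
t, steps = 1.0, 100_000
dt = t / steps

# Real-pair evolution: du/dt = H v, dv/dt = -H u (forward Euler, small steps).
u, v = np.array([1.0, 0.0]), np.array([0.0, 0.0])   # psi(0) = (1, 0)
for _ in range(steps):
    u, v = u + dt * (H @ v), v - dt * (H @ u)

# Reference: the usual complex evolution psi(t) = exp(-iHt) psi(0).
w, P = np.linalg.eigh(H)
psi = P @ (np.exp(-1j * w * t) * (P.T @ np.array([1.0, 0.0])))
print(np.max(np.abs(u - psi.real)), np.max(np.abs(v - psi.imag)))
```

The discretization claim is of course a separate (and stronger) statement; this only shows that complex numbers are eliminable.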
 
What exactly is the point of this paper?
 
  • #10
martinbn said:
What exactly is the point of this paper?
It is explicitly a philosophy paper. The point is about which concept of QFT should be thought of as "true".
 
  • #11
martinbn said:
What exactly is the point of this paper?
The paper is about how to think philosophically about UV divergences in QFT. In my own words, I would say that it argues that a "naive" cutoff way of thinking is in fact better than attempts to retain continuum with a fancy-schmancy functional analysis way of thinking.
 
  • #12
A. Neumaier said:
It has not failed, it just remained inconclusive. Not a single argument is known that would indicate that such QFTs do not exist.
How long must a research program run without results, before we are allowed to say it has "failed"?

The basic problem is by now well-known: if the fields are operator-valued distributions, then defining interactions as products of fields at a point doesn't make sense, because distributions cannot generally be multiplied. Some distributions can be multiplied using Hörmander's method, but the time-ordered product fails Hörmander's criterion.

Is there any good reason to believe this issue can be overcome?
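For readers who want the one-line version of why pointwise products of distributions fail (a standard textbook illustration, not something from the paper): take a delta sequence ##\delta_\epsilon(x) = \epsilon^{-1}\eta(x/\epsilon)## with a smooth bump ##\eta \ge 0##, ##\int\eta = 1##. Then for any test function ##f##,

$$\int \delta_\epsilon(x)^2 f(x)\,dx = \frac{1}{\epsilon}\int \eta(y)^2 f(\epsilon y)\,dy \approx \frac{f(0)}{\epsilon}\int \eta(y)^2\,dy \to \infty \quad (\epsilon \to 0),$$

so ##\delta^2## has no distributional meaning; the divergences from multiplying fields at a point arise by the same mechanism.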
 
  • #13
maline said:
Well, how is the situation with a cutoff? Do we have a mathematically rigorous account of what fields-with-cutoff are, without committing to a specific cutoff scheme such as a lattice?
Is there a necessity for this? If the cutoff is something physical, that means there is some more fundamental theory, and this theory has an effective cutoff. So the real world is, metaphorically speaking, committed to a particular cutoff. We have to guess the correct theory, and that means we also have to commit to the particular cutoff scheme that is equivalent to the true fundamental theory.
 
  • #14
maline said:
but the time-ordered product fails Hörmander's criterion.

Is there any good reason to believe this issue can be overcome?
Epstein-Glaser?
 
  • #15
maline said:
How long must a research program run without results, before we are allowed to say it has "failed"?
In mathematics and mathematical physics, something has failed only when its goals are proved to be unattainable. Examples are the quadrature of the circle and the trisection of angles with straightedge and compass.

Kepler's conjecture (that the face-centered cubic lattice gives the densest sphere packing in 3 dimensions) was positively settled after having been open for more than 300 years. The Riemann hypothesis is over 160 years old and still considered to be one of the most important open problems in mathematics. Quantum field theory is much younger.

Note that good problems yield many interesting results even when the main problem remains unsolved for a long time. This was not only the case for Kepler's conjecture and the Riemann hypothesis but also for algebraic quantum field theory. In 2 and 3 dimensions the existence of interacting Wightman fields has been constructively proved. The 4-dimensional case is simply significantly harder.
maline said:
The basic problem is by now well-known: if the fields are operator-valued distributions, then defining interactions as products of fields at a point doesn't make sense, because distributions cannot generally be multiplied. Some distributions can be multiplied using Hörmander's method, but the time-ordered product fails Hörmander's criterion.

Is there any good reason to believe this issue can be overcome?
Yes. Causal perturbation theory addresses precisely this point. See my insight article on [URL='https://www.physicsforums.com/insights/causal-perturbation-theory/']causal perturbation theory[/URL].
 
  • #16
Demystifier said:
Bohmian mechanics (and quantum mechanics) can be written down in terms of real numbers only, without using complex numbers. And more to the point, it's possible to approximate (arbitrarily well) quantum/Bohmian mechanics by using a (sufficiently large) discrete set of numbers instead of reals.
But the approximate theory loses all properties that make the theory physically interesting. For example, the continuity equation goes down the drain.
 
  • #17
A. Neumaier said:
the continuity equation goes down the drain.
What do you mean by that?
 
  • #18
Demystifier said:
What do you mean by that?
Discrete approximations do not satisfy a continuity equation. Hence the basic mechanism of Bohmian mechanics is no longer justified.
 
  • #19
Demystifier said:
I disagree. Even if there is a way to treat the delta function rigorously, from a physical point of view a cutoff is a natural remedy. In nature we don't have potentials that look exactly like a delta function, but we do have potentials that look like very narrow Gaussians.
This flashed a light bulb in my head. We become accustomed to real numbers and limits from very early on, and I guess we don't really think about how weird all these things are. The delta function is a good example of this. It is easy to forget that your approximation is not more fundamental than nature.

Though I also understand what Neumaier is saying. One has to be careful not to throw the baby out with the bath water.
 
  • #20
A. Neumaier said:
Causal perturbation theory does not work with Lagrangians!
In your insights article you write: "
To define particular interacting local quantum field theories such as QED, one just has to require a particular form for the first order approximation of S(g). In cases where no bound states exist, which includes QED, this form is that of the traditional nonquadratic term in the action, but it has a different meaning."
Exactly in what way is the meaning different from the one in the traditional action? Does the approximation follow a local action principle or not?
It looks to me like it just uses a renormalized Lagrangian instead of the usual bare one, since it moves the renormalization to an earlier step instead of the usual later one. But the local action is still there in the background, just more rigorously renormalized from the start.
 
  • #21
A. Neumaier said:
Discrete approximations do not satisfy a continuity equation. Hence the basic mechanism of Bohmian mechanics is no longer justified.
Discrete entities like atoms, which are conserved and can be counted, and which either follow continuous trajectories or, if space is discrete as well, jump only to neighboring cells in each time step, are the straightforward microscopic models for things described by a continuity equation. Justifying the continuity equation in the large-distance limit is, in this case, quite trivial. Thus, no basis is lost.
 
  • #22
Sunil said:
Is there a necessity for this? If the cutoff is something physical, that means there is some more fundamental theory, and this theory has an effective cutoff. So the real world is, metaphorically speaking, committed to a particular cutoff. We have to guess the correct theory, and that means we also have to commit to the particular cutoff scheme that is equivalent to the true fundamental theory.
What I would like from a theory of QFT-with-cutoff is a rigorous formulation and proof of the idea that the details of the cutoff do not matter. For that, I think we need a cutoff-independent formulation of the concept of a cutoff itself, and of the fields subject to it.
 
  • #23
Sunil said:
Discrete entities like atoms, which are conserved and can be counted, and which either follow continuous trajectories or, if space is discrete as well, jump only to neighboring cells in each time step, are the straightforward microscopic models for things described by a continuity equation. Justifying the continuity equation in the large-distance limit is, in this case, quite trivial. Thus, no basis is lost.
Remember the context of my statement! Continuous trajectories are impossible without using real numbers.

Discrete trajectories may conserve the particle number but not via a continuity equation. Moreover, the conservation of particles is known to be violated in physics, while the conservation of various currents follows from symmetry considerations. All this is lost when avoiding real numbers.
 
  • #24
Demystifier said:
Soon the physicists discovered that they could extract working calculational
results if they just assumed that displacements couldn’t vary on arbitrarily
short lengthscales. This amounted to “cutting off” the range of integration in
the divergent integrals, so that they got a finite result. When they did their
calculations this way, the answers agreed very well with experiment.

As if this is what anyone actually does in practice. More like the merry band of physicists made it all work by working in 4 + *whatever* dimensions and arbitrarily tuning some constants so that the Green's functions would fit certain conditions, while leaving the rest of the theory in a complete divergent mess.
 
  • #25
Demystifier said:
The paper is about how to think philosophically about UV divergences in QFT. In my own words, I would say that it argues that a "naive" cutoff way of thinking is in fact better than attempts to retain continuum with a fancy-schmancy functional analysis way of thinking.
In other words no value at all.
 
  • #26
maline said:
What I would like from a theory of QFT-with-cutoff is a rigorous formulation and proof of the idea that the details of the cutoff do not matter. For that, I think we need a cutoff-independent formulation of the concept of a cutoff itself, and of the fields subject to it.
This would first require a proof that the standard mathematical-continuum approach, as commented on by Neumaier, is unfeasible; and even if that happened, it is not clear what kind of theory such a situation would allow one to construct, at least with the current concept of the mathematical continuum of the reals.
 
  • #27
martinbn said:
In other words no value at all.
Unless you don't mind the same judgement being applied to your post, could you elaborate a bit on why it has no value at all?
 
  • #28
martinbn said:
In other words no value at all.
Again, this is explicitly a philosophy paper. If you do not appreciate philosophy, then you are not meant to find value in it.
 
  • #30
Because some comments have been moved, I will reproduce the relevant part of the conversation before responding:
maline said:
His basic point is that our working assumption about "reality" should be that there is some UV cutoff, with QFT as an effective theory per Wilson. This is because the long effort to construct a mathematically sensible, UV-complete QFT (in 4D, with interactions) has failed, so perhaps such a theory cannot even exist.
A. Neumaier said:
It has not failed, it just remained inconclusive. Not a single argument is known that would indicate that such QFTs do not exist.
maline said:
How long must a research program run without results, before we are allowed to say it has "failed"?

The basic problem is by now well-known: if the fields are operator-valued distributions, then defining interactions as products of fields at a point doesn't make sense, because distributions cannot generally be multiplied. Some distributions can be multiplied using Hörmander's method, but the time-ordered product fails Hörmander's criterion.

Is there any good reason to believe this issue can be overcome?
Demystifier said:
Epstein-Glaser?
maline said:
I don't know very much on these topics, so please enlighten me if I'm wrong. But if I understand correctly, Epstein-Glaser and the like merely describe in precise terms how renormalization is to be done, without making any attempt to justify the procedure "from first principles". They don't describe what it is that we are trying to calculate using these prescriptions. They provide an answer, but not the question!

Indeed, this will be true of any formalism that begins with a Lagrangian. Renormalization does not allow us to calculate the observable values of masses and coupling constants from the Lagrangian, meaning that predicting physical values requires some additional input. A properly specified theory would include "fundamental" parameters that eventually fix the measured values. If all we have is a Lagrangian, we simply do not know what we are calculating.

Another aspect of the same thing (I believe) is that Epstein-Glaser is fundamentally perturbative - that is, the power series is the only output; there is no function that the series is intended to approximate!

So I'm not sure these methods bring us any closer to explaining what we actually mean by QFT interaction terms...
A. Neumaier said:
Causal perturbation theory does not work with Lagrangians!

In the causal approach, the meaning is precisely given by the axioms for the parameterized S-matrix. The construction is at present perturbative only. Missing are only suitable summation schemes for which one can prove that their result satisfies the axioms nonperturbatively. This is a nontrivial and unsolved step but not something that looks completely hopeless.
Okay, I admit that this axiom-based approach is more successful than I gave it credit for. It's not just "here is a well-defined algorithm that reproduces what physicists do", but rather "if a full theory exists at all, with the desired first-order term, then we can prove that this algorithm gives the correct power series expansion for the full theory".

The Hilbert space is the asymptotic Fock space with the empirically measured masses, and the measurable coupling constants are fixed by the finite coefficients given in the first-order term (is this correct, or is there still an arbitrary choice of renormalization scheme that changes the predicted values?)

Nevertheless, this approach provides no evidence at all that the full theory does in fact exist. Nor do we have any other reason to think it does! We know that the standard formulation, in terms of integrating a Hamiltonian, is meaningless because of the improperly multiplied distributions. Having a procedure to deal with them does not change that fact.

The experimental success of QFT cannot provide evidence that it exists mathematically, because that success occurs in a universe which, at high energy, is dominated by quantum gravity. Whatever the true quantum gravity looks like, we have little reason to think it is a QFT on a background manifold. So our experiments are very unlikely to be the outcome of any UV-complete QFT!

Thus we have little or no reason to think any (interacting, non-super-renormalizable) QFT exists, and the ongoing lack of success in finding one provides at least some evidence that it does not.
 
  • #31
A. Neumaier said:
Discrete approximations do not satisfy a continuity equation. Hence the basic mechanism of Bohmian mechanics is no longer justified.
The point of the continuity equation is not non-discreteness but conservation. There is a discrete version of the conservation equation.
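A minimal sketch of such a discrete conservation law (my own toy example, not anything from the thread): on a periodic 1D lattice, let the density change only by a current flowing across the links between neighboring cells; the total is then conserved, with no real-analysis limit involved.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = rng.random(50)            # density on a periodic 1D lattice of 50 cells
total0 = rho.sum()

for _ in range(1000):
    # Current j[i] lives on the link between cell i and cell i+1
    # (here a simple diffusive current; any link current works).
    j = 0.1 * (rho - np.roll(rho, -1))
    # Discrete continuity equation: rho_i changes by (inflow - outflow).
    rho = rho - (j - np.roll(j, 1))

print(abs(rho.sum() - total0))  # conserved up to floating-point rounding
```

The update touches only nearest neighbors, so any change of a cell is balanced by the opposite change of a neighbor - the discrete analogue of ##\partial_t\rho + \partial_x j = 0##.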
 
  • #32
martinbn said:
In other words no value at all.
Value is always subjective.
 
  • #33
maline said:
Because some comments have been moved, I will reproduce the relevant part of the conversation before responding:
Please post this in the thread where these posts were moved and your contribution is on topic; then I'll reply.
maline said:
we have little reason to think it is a QFT on a background manifold. So our experiments are very unlikely to be the outcome of any UV-complete QFT!

Thus we have little or no reason to think any (interacting, non-super-renormalizable) QFT exists,
While you may have little reason to think that any interacting, non-super-renormalizable QFT exists, I on the contrary have strong reasons to think that interacting, non-super-renormalizable QFTs exist. But I don't want to continue discussing technical matters of QFT in a philosophical thread.
 
  • #34
A. Neumaier said:
Please post this in the thread where these posts were moved and your contribution is on topic; then I'll reply.
I think my post does belong on this thread. The original post was a paper saying that we should consider "the real QFT" to mean QFT-with-cutoff, rather than a supposed continuum theory. That is the point I am trying to argue here: that it is likely that QFT does not have a rigorous UV limit at all.

Causal Perturbation Theory was brought in as an example of what we do know about mathematical QFT, and I did put in some side points that indeed belonged in that thread. I suppose I will move the bracketed comment on summation to there as well. But here I want to discuss the evidence for and against the existence of QFT in the UV.
 
  • #35
maline said:
the point I am trying to argue here: that it is likely that QFT does not have a rigorous UV limit at all.
But properly arguing about rigorous matters of any kind requires technical details (here about rigorous quantum field theory, which means AQFT) rather than foundational issues or philosophical principles. The only nontechnical argument you gave is that you think there is little reason for a rigorous QFT involving gravity and hence for a rigorous UV limit of physical relevance. But on the nontechnical level there is as little reason against a rigorous QFT involving gravity; so a discussion on this level is moot.

Once one does not require rigor, there is at least overwhelming evidence that QCD has a valid UV limit, independent of the unsolved questions about quantum gravity. Thus without requiring rigor your arguments are worthless.
 
  • #36
A. Neumaier said:
But properly arguing about rigorous matters of any kind requires technical details (here about rigorous quantum field theory, which means AQFT) rather than foundational issues or philosophical principles. The only nontechnical argument you gave is that you think there is little reason for a rigorous QFT involving gravity and hence for a rigorous UV limit of physical relevance. But on the nontechnical level there is as little reason against a rigorous QFT involving gravity; so a discussion on this level is moot.
Sorry, but if the question is about the possible existence of some technical construction, in a situation where it is not known to exist, proper arguments do not necessarily require technical details. The classical argument that, if such a thing existed, it would with high probability already have been found is non-technical. The clarification of where the burden of proof lies - namely on the side of those who argue for existence - and the consequence that the "null hypothesis" in this case is that it does not exist, is also completely non-technical but nonetheless valid.

Instead, technical arguments for non-existence already have - justly - a bad reputation. These would be impossibility theorems, and the list of misguided impossibility theorems is long. The most well-known is von Neumann's impossibility proof for hidden variables. Actually, history repeats itself with the large number of ##\psi##-ontology theorems, starting with the PBR theorem, in a situation where Caticha has already given a nice realist ##\psi##-epistemic interpretation with his "entropic dynamics".
A. Neumaier said:
Once one does not require rigor, there is at least overwhelming evidence that QCD has a valid UV limit, independent of the unsolved questions about quantum gravity.
A simple question to which I have never seen an answer: what would be the difference between a theory with free particles from the start and the UV limit of an asymptotically free theory? If the interaction goes to zero in the limit, the limit itself would be useless.

Literature:
Leifer, M.S. (2014). Is the Quantum State Real? An Extended Review of ##\psi##-ontology Theorems. Quanta 3(1), 67-155, arXiv:1409.1570.
Caticha, A. (2011). Entropic Dynamics, Time and Quantum Theory. J. Phys. A 44, 225303, arXiv:1005.2357.
 
  • #37
Sunil said:
The classical argument that, if such a thing existed, it would with high probability already have been found is non-technical.
It is non-technical but logically faulty, hence irrelevant.

Everything that is difficult to find but has not yet been found would not exist with high probability, according to this argument. Moreover, the probabilities involved are extremely subjective, and do not mean anything except for the subject uttering them.

I gave the example of Kepler's conjecture, where it took several hundred years to settle it positively. This doesn't mean that the conjecture was almost certainly false before it was proved. Instead, those informed always considered it most likely to be true (for technical reasons)! The Riemann hypothesis is also regarded as true by many, even though no proof has been found in the last 150 years, in spite of the attempts of a number of the best mathematicians.
 
  • #38
Sunil said:
What would be the difference between a theory with free particles from the start and the UV limit of an asymptotically free theory? If the interaction goes to zero in the limit, the limit itself would be useless.
You mix up two different limits.

An asymptotically free quantum field theory is not free except in the limit of infinitely tiny distances. But only the theory at finite distances has physical relevance. Particles are not free at all in an asymptotically free theory! In QCD, the fundamental particles, the quarks, are even confined to short distances.

The observable particles are those that are asymptotically free in a very different asymptotic limit - the one where time goes to plus or minus infinity.
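The distinction can be seen in the standard one-loop running-coupling formula (a textbook formula; the numbers are illustrative, with ##n_f = 5## and the coupling normalized at the Z mass): ##\alpha_s(Q)## shrinks only logarithmically, so it vanishes in the ##Q\to\infty## limit while remaining nonzero - and the theory remains interacting - at every finite scale:

```python
import math

def alpha_s(Q, mu=91.19, alpha_mu=0.118, nf=5):
    """One-loop running QCD coupling alpha_s(Q),
    normalized to alpha_mu at the reference scale mu (in GeV)."""
    b0 = (33 - 2 * nf) / (12 * math.pi)
    return alpha_mu / (1 + alpha_mu * b0 * math.log(Q**2 / mu**2))

for Q in (91.19, 1e3, 1e6, 1e12):    # GeV
    print(Q, alpha_s(Q))             # decreases, but only logarithmically
```

The free theory sits only at the endpoint ##Q = \infty##; at any finite ##Q## the coupling, though small, is nonzero, which is why the UV limit of an asymptotically free theory is not the same thing as a free theory.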
 
  • #39
A. Neumaier said:
It is non-technical but logically faulty, hence irrelevant.

Everything that is difficult to find but has not yet been found would not exist with high probability, according to this argument. Moreover, the probabilities involved are extremely subjective, and do not mean anything except for the subject uttering them.
"Logically faulty" is itself faulty once applied to an argument which does not even claim that it is decisive, and which was, moreover, reduced here to a short phrase, not more than a reference to this argument, instead of the argument itself.

In fact, most of the arguments people like to call "logically faulty" are completely adequate if understood not as certain logical conclusions but as parts of plausible reasoning. So if B follows from A, it is "logically faulty" to conclude A from B, but having B nonetheless makes the hypothesis A more plausible. Whether this increase in plausibility is a serious one or not depends on the circumstances, that is, on the details. To summarize, in domains where one cannot expect certain proofs, but only plausible reasoning, one can misrepresent almost everything as "logically faulty".

What would be the details in this case? First of all, the amount of intellectual energy which has already been spent trying to solve the problem. In this case, we have a whole subdomain of mathematical physics, AQFT, working essentially only on this problem, and a large prize for those who solve it. Then, an assessment of what has been reached, and of why the methods used have not been sufficient yet. This requires some knowledge of what has been reached, which I do not have. But what I have heard is that the successful examples are all superrenormalizable. In this case, I would see no basis for hope.
A. Neumaier said:
This doesn"t mean that the conjecture was almost certainly false before it was proved. Instead, those informed always considered it most likely to be true (for technical reasons)! The Riemann hypothesis is also regarded as true by many even though in spite of the attempt of a number of the best mathematicians, no proof has been found in the last 150 years.
The question of what is most plausibly true when the problem itself is not solved depends, of course, on the problem. Last but not least, the question whether A is true is nothing but a reformulation of the question whether not-A is true; solving one automatically solves the other, but the answers are opposite. So one needs at least some information to identify what plays the role of the null hypothesis.

But in this case it is quite simple. We look for a theory of physics with certain properties. In fact, we do not look for a completely arbitrary one but, for other reasons, for a simple one. If a research program whose final aim is to construct a simple theory has so far failed to construct even a horribly complex one, and simple theories are in general easier to find and construct (if one is not restricted by fitting observations, as in this case), then the answer is the natural one: to give up the research program, because it has no chance to reach the final aim.
 
  • #40
Sunil said:
In fact, most of the arguments people like to call "logically faulty" are completely adequate if understood not as certain logical conclusions but as parts of plausible reasoning.
So it is logically faulty but in your eyes plausible. Very well. I and many others look with different eyes.
Sunil said:
What would be the details in this case? First of all, the amount of intellectual energy which has already been spent trying to solve the problem. [...] In this case, I would see no basis for hope.
By the same logically faulty but in your eyes plausible argument, there is no basis for hope of solving any of the Clay Millennium Problems - not only the 7th, which you address. But those responsible for them posed them because they expect them to be solved!
Sunil said:
the answer is the natural one: to give up the research program, because it has no chance to reach the final aim.
So you should give up! Indeed, with what you find plausible you'll never solve one of the millennium problems.

Others, with their own plausibilities, see hope and do not give up. Some will succeed in due time.
 
  • #41
Tendex said:
This would first require a proof that the standard mathematical-continuum approach, as commented on by Neumaier, is unfeasible; and even if that happened, it is not clear what kind of theory such a situation would allow one to construct, at least with the current concept of the mathematical continuum of the reals.
I disagree. If a theory with a cutoff is a well-defined theory (say, a lattice theory), it does not matter at all whether the limit of lattice spacing to zero exists or not. The mathematical concept of a continuum is irrelevant here; the lattice theories use real numbers too.

A. Neumaier said:
So it is logically faulty but in your eyes plausible. Very well. I and many others look with different eyes.
Don't distort what I write. Only a misinterpretation of it as a rigorous statement would be logically faulty. In the straightforward interpretation it assigns only some low probability and is in no way logically faulty.

A. Neumaier said:
By the same logically faulty but in your eyes plausible argument, there is no basis for hope of solving any of the Clay Millennium Problems - not only the 7th, which you address. But those responsible for them posed them because they expect them to be solved!
Wrong. I have clarified that the power of the argument depends on what one adds. And I have added some considerations which cannot be applied to the other Millennium problems.
A. Neumaier said:
So you should give up! Indeed, with what you find plausible you'll never solve one of the millennium problems.
I cannot give up something I have never started. Those problems are simply not interesting for me.
A. Neumaier said:
Others, with their own plausibilities, see hope and do not give up. Some will succeed in due time.
No problem, they can do whatever they like. Most will give up in due time.
 
