Does string theory falsify the theory of special relativity?

In summary, string theory requires 9 spatial dimensions, in contrast to the 3 dimensions that space is traditionally believed to have. However, from what I understand, the extra dimensions are only a theoretical possibility and have not yet been observed.
  • #1
Dennis Blewett
Hello,

I am spending time learning more about the theory of special relativity and string theory. One of the things I have read about string theory is that it includes additional spatial dimensions (space supposedly has 9 dimensions in string theory). However, from what I understand of the theory of special relativity, space has only three dimensions. Then again, I don't think Einstein ever explicitly said that there are only three dimensions of space, just that spatial dimensions exist, since velocity involves a body of mass being translated through those spatial dimensions.

It seems that, since Einstein never explicitly stated that there are only three dimensions, the theory of special relativity allows for 9.

I'm reading that string theory supposedly builds upon the theory of special relativity, but it seems to me that it falsifies special relativity by requiring additional dimensions of space. I understand that the theory of special relativity doesn't account for other dimensions that may exist (a multiverse, possibly).
 
  • #2
I think the additional dimensions are all compactified to microscopic scales, leaving only the 3+1 macroscopic dimensions of Minkowski spacetime on which SR rests.
For string theory to falsify another theory, it would need to make a different prediction than the theory being falsified. Last time I checked, string theory makes no predictions at all, so it cannot yet falsify anything.
 
  • #3
Dennis Blewett said:
but it seems to me that it falsifies special relativity by requiring additional dimensions of space.
The only thing that falsifies a theory is an observation that disagrees with a prediction made by that theory. So far that hasn’t happened with relativity, so it’s not falsified by string theory (or anything else for that matter).

It’s actually quite unusual for a new theory to falsify an existing and accepted theory - the experiments and observations supporting the old theory are still around and just as valid as before. Much more often, the new theory extends our understanding of the phenomena in question; I would bet long odds that that’s how it turns out with string theory if and when it makes any experimentally testable predictions.
 
  • #4
Theories cannot falsify other theories. Only evidence can falsify theories.

Generally, the idea of falsification is somewhat overstated by Popper. What usually happens is that a theory which is validated in some domain will always be valid in that domain. Experiments which Popper would claim “falsify” a theory are typically considered to restrict the domain of applicability.
 
  • Like
Likes PeroK
  • #5
Dennis Blewett said:
I am spending time learning more about the theory of special relativity and string theory.
SR (Special Relativity) is an elementary cornerstone of modern physics. String theory must contain SR as a special case; otherwise, string theory would be incompatible with experimental results.

SR is a special case of GR (General Relativity), which is the current model for spacetime used in cosmology, for example. GR itself is quite a flexible theory. It rests on a branch of geometry (Lorentzian manifolds) and, in principle, you can study spacetimes with any number of dimensions. E.g. in Gravity by James B. Hartle, which is a good introductory textbook on GR, he gives an example of how an extra "compactified" fifth dimension could be detectable only in an experiment with very high-energy photons.

If that experiment could be carried out, then we might find that the specific manifold in our universe has a fifth dimension, but we would still have the theory of GR and the special case of SR.
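
As a rough illustration of the scale involved (this is only the generic order-of-magnitude estimate, not Hartle's specific numbers): for a dimension compactified on a circle of radius ##R##, the extra-dimensional excitations only become visible at energies of roughly
$$E \sim \frac{\hbar c}{R} \approx \frac{197\ \mathrm{MeV \cdot fm}}{R}.$$
For example, ##R \sim 10^{-3}\ \mathrm{fm}## would require photon energies of order ##200\ \mathrm{GeV}##, and a much smaller radius pushes the required energy correspondingly higher, which is why such a dimension could so far have escaped detection.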

GR does not say, therefore, "this is precisely the spacetime we live in and it has four dimensions". Instead, it says: spacetime is a Lorentzian manifold; go out and do some experiments to see which specific manifold we have in our universe. That's why, for example, the theory itself didn't need to be replaced when it was discovered that the expansion of the universe is accelerating: instead, a new source of energy (dark energy) needed to be added to the model.

As others have noted, Popper's view of falsification is not particularly useful if you want to understand how modern physics models the universe we live in.
 
  • Like
Likes Dale
  • #6
Dale said:
Theories cannot falsify other theories. Only evidence can falsify theories.

Generally, the idea of falsification is somewhat overstated by Popper. What usually happens is that a theory which is validated in some domain will always be valid in that domain. Experiments which Popper would claim “falsify” a theory are typically considered to restrict the domain of applicability.

That's nonsense in terms of Popper and falsifiability. You also appear to intentionally miss the point of the OP, who was clearly using the word "falsify" in a different sense than the technical Popperian one. He was asking whether string theory contradicts special relativity, and hence whether, if string theory were an accurate model of the world, the relevant observations would falsify special relativity.
 
  • #7
madness said:
That's nonsense in terms of Popper and falsifiability.
What specifically is nonsense?
 
  • #8
Dale said:
What specifically is nonsense?

The notion that Popperian falsification would restrict the domain of applicability of the theory, and that the theory would remain valid in the original domain. If you read The Logic of Scientific Discovery by Popper, you will see that the notion of a theory being approximately valid in one domain, and of falsifiability being a question of domains of applicability, is completely incongruent with the whole approach of falsifiability.
 
  • #9
madness said:
The notion that Popperian falsification would restrict the domain of applicability of the theory, and that the theory would remain valid in the original domain.
That isn’t what I said. I said Popper’s idea of falsification is overstated. Meaning that it doesn’t actually reflect how the scientific community uses experimental evidence.

madness said:
If you read The Logic of Scientific Discovery by Popper, you will see that the notion of a theory being approximately valid in one domain, and of falsifiability being a question of domains of applicability, is completely incongruent with the whole approach of falsifiability.
Yes. Which is exactly why Popper’s notion is overstated. It is incongruent as you say with the way scientists actually use evidence. Popper’s approach is far too “black and white” or perhaps “throw the baby out with the bath water”.

The domain of applicability approach is much closer to the actual way scientists use theories and evidence, and in my opinion it is a much better approach (plus it is close to Bayesian inference).
 
  • Like
Likes weirdoguy, Motore, PeroK and 1 other person
  • #10
Dale said:
That isn’t what I said. I said Popper’s idea of falsification is overstated. Meaning that it doesn’t actually reflect how the scientific community uses experimental evidence.

Yes. Which is exactly why Popper’s notion is overstated. It is incongruent as you say with the way scientists actually use evidence. The domain of applicability approach is much closer, and in my opinion it is a much better approach (and it is close to Bayesian inference).

There is a more modern view called Bayesian confirmation theory which bridges the gap between old-fashioned verificationism and Popperian falsificationism within a Bayesian framework. E. T. Jaynes argued that science is fundamentally a Bayesian process. But you have to abandon absolute falsificationism and accept some degree of verificationism in that case.

Your comment on falsificationism being incongruent with the way scientists actually use evidence is irrelevant, unless you are arguing for a descriptivist view of science rather than a prescriptivist one. Scientists do a lot of stuff that is wrong in practice, and Popper developed his theory largely in response to problematic trends in science at the time. Science should be congruent with the proposed method, not vice versa. Unless you want to appeal to Paul Feyerabend and his anti-method camp.
 
  • #11
madness said:
Your comment on falsificationism being incongruent with the way scientists actually use evidence is irrelevant, unless you are arguing for a descriptivist view of science rather than a prescriptivist one.
I disagree that it is irrelevant. Popper’s approach is well known, even famous. The fact that the scientific community as a whole actually uses only select parts of his ideas indicates that the community as a whole has evaluated his concepts, found them overstated, and adopted only the more modest parts judged worthwhile. The actual behavior of scientists, post Popper, is relevant because it is evidence of our assessment of his outlook. We find it overstated.

madness said:
Scientists do a lot of stuff that is wrong in practice, and Popper developed his theory largely in response to problematic trends in science at the time.
While that is true, the things that scientists do wrong today are largely different things from the things that Popper was responding to in his day. So that in no way contradicts the assessment that his approach is overstated. Personally, I think that we are right to adopt only the parts of Popperism that we have, and that adopting the remaining parts would not fix any of today’s problems.

madness said:
Science should be congruent with the proposed method, not vice versa.
If by “proposed method” you mean Popper’s philosophy, then I disagree. If you mean the Bayesian approach, I tend to lean that way.
 
  • #12
Dale said:
I disagree that it is irrelevant. Popper’s approach is well known, even famous. The fact that the scientific community as a whole actually uses only select parts of his ideas indicates that the community as a whole has evaluated his concepts, found them overstated, and adopted only the more modest parts judged worthwhile. The actual behavior of scientists, post Popper, is relevant because it is evidence of our assessment of his outlook. We find it overstated.

While that is true, the things that scientists do wrong today are largely different things from the things that Popper was responding to in his day. So that in no way contradicts the assessment that his approach is overstated. Personally, I think that we are right to adopt only the parts of Popperism that we have, and that adopting the remaining parts would not fix any of today’s problems.

If by “proposed method” you mean Popper’s philosophy, then I disagree. If you mean the Bayesian approach, I tend to lean that way.

In the Bayesian approach falsifiability is replaced with confirmation and disconfirmation, i.e. evidence which increases or decreases the likelihood (or posterior probability) that a hypothesis/theory is true (sometimes the likelihood ratio of two competing hypotheses). One doesn't think in terms of falsification in that case (although disconfirmation is related to falsification).

More importantly, in relation to the original point, the distinction between the Bayesian and Popperian views is orthogonal to the question of whether falsifying/disconfirming evidence would be considered to restrict the domain of applicability of a theory. When a theory is falsified/disconfirmed it can no longer be considered valid in either the Popperian or the Bayesian approach - the hypothesis that a theory is approximately true in a certain domain is mutually exclusive with the hypothesis that the theory is exactly true (even in that restricted domain). So one would have to reject the initial hypothesis and replace it with a new one (in the Bayesian approach, reduce the likelihood of the initial hypothesis and increase the likelihood of the new one).
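
A minimal numerical sketch of that kind of Bayesian updating, with made-up hypotheses and likelihood numbers chosen purely for illustration (none of this is tied to any real experiment or theory):

```python
# A toy Bayesian update over two competing hypotheses.
# All numbers are invented for illustration only.

def bayes_update(priors, likelihoods):
    """Return posteriors p(H_i | E) given priors p(H_i) and likelihoods p(E | H_i)."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    p_evidence = sum(unnormalized.values())  # p(E) = sum_i p(E | H_i) * p(H_i)
    return {h: u / p_evidence for h, u in unnormalized.items()}

# H1: "the old theory holds everywhere"; H2: "the old theory holds only in a limited domain".
beliefs = {"H1": 0.5, "H2": 0.5}

# Each observation is characterized by how likely it is under each hypothesis.
observations = [
    {"H1": 0.9, "H2": 0.8},   # routine result, compatible with both hypotheses
    {"H1": 0.05, "H2": 0.6},  # anomaly in a new regime, strongly disfavouring H1
]

for likelihoods in observations:
    beliefs = bayes_update(beliefs, likelihoods)

print(beliefs)
# H1 is strongly disconfirmed (its probability drops) but never reaches exactly zero,
# which is the sense in which disconfirmation is softer than all-or-nothing falsification.
```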
 
  • Skeptical
Likes weirdoguy and PeroK
  • #13
madness said:
the hypothesis that a theory is approximately true in a certain domain is mutually exclusive with the hypothesis that the theory is exactly true
Scientists mostly no longer believe the hypothesis that any theory is exactly true. At least not any current theory. Again, this is partly why Popper’s ideas are overstated. Or as I said above too “black and white”. Popper’s entire “orthogonal” axis you mention has been discarded in modern scientific practice in favor of less extreme axes like domains of applicability, and IMO that is a good thing.

Anyway, I think that my claim re: Popper is correct or at least it accurately reflects the view of the modern scientific community.
 
  • #14
madness said:
In the Bayesian approach falsifiability is replaced with confirmation and disconfirmation, i.e. evidence which increases or decreases the likelihood (or posterior probability) that a hypothesis/theory is true (sometimes the likelihood ratio of two competing hypotheses). One doesn't think in terms of falsification in that case (although disconfirmation is related to falsification).

More importantly, in relation to the original point, the distinction between the Bayesian and Popperian views is orthogonal to the question of whether falsifying/disconfirming evidence would be considered to restrict the domain of applicability of a theory. When a theory is falsified/disconfirmed it can no longer be considered valid in either the Popperian or the Bayesian approach - the hypothesis that a theory is approximately true in a certain domain is mutually exclusive with the hypothesis that the theory is exactly true (even in that restricted domain). So one would have to reject the initial hypothesis and replace it with a new one.
As @Dale has pointed out, all this has little to do with how physics works, is understood, or is taught. Modern physics includes Classical Mechanics, Classical Electromagnetism, Thermodynamics, Fluid Mechanics, Relativity (Special and General), Quantum Theory (QM, QED, QCD, QFT), Particle Physics, Cosmology (the Big Bang etc.), and so on.

These are not theories waiting to be falsified or invalidated. And very few physicists would talk about the "exact truth" of any of them. The pragmatic physicist would see these as successful models, each covering some particular set of natural phenomena. Some are more fundamental than others, but together they form the overall picture as of 2021. It's almost unthinkable that any of these models will disappear from the physics curriculum.

The notion that physics operates by "rejecting an initial hypothesis and replacing it with a new one" is not how physics works - except where a new model is being developed. In time, the model becomes robust and is then never replaced but may be extended and/or have a limitation of applicability placed on it. Coulomb's law being a perfect case in point.

Physics is done, by definition, the way physicists do it. There is no obligation for physics to be done the way a philosopher imagines physics is done. Even if the name of that philosopher is Karl Popper.

 
  • Love
Likes Dale
  • #15
Dale said:
Scientists mostly no longer believe the hypothesis that any theory is exactly true. At least not any current theory. Again, this is partly why Popper’s ideas are overstated. Or as I said above too “black and white”. The entire “orthogonal” axis you mention has been discarded in modern scientific practice, and IMO that is a good thing.

Anyway, I think that my claim re: Popper is correct or at least it accurately reflects the view of the modern scientific community.

I think it would be very difficult to couch the belief that no theory is exactly true in a Bayesian way. One can have uncertainty about parameters or about experimental measurements, and one can have a family of alternative hypotheses with an uncertainty distribution across them given current evidence and prior beliefs. But I don't think that it is possible to express the "hypothesis" that "no theory is exactly true" within a Bayesian framework for doing science.

Maybe I can be even more explicit. We can have a set of hypotheses ##H_i## and evidences ##E_k##. We can have uncertainty related to ##p(H_i)## (prior beliefs), ##p(H_i \mid E_1,\dots,E_K)## (posteriors), and perhaps prior probabilities ##p(E_k)##. But we can't write down anything about the notion that none of the ##H_i## are exactly true, because Bayesian theory doesn't have a concept of degrees of truth. There are logical systems which do, but Bayesianism isn't one of them.
 
  • #16
madness said:
But I don't think that it is possible to express the "hypothesis" that "no theory is exactly true" within a Bayesian framework for doing science.
##P(H|E)<1##

In any case, that is not terribly relevant to the comments re Popper and falsification. I never claimed that a purely Bayesian approach was perfect. Merely that Popper’s idea of falsification is overstated, which I think is clear.

If you wish to discuss Bayesian ideas, please don't hijack this thread. Start another one.
 
  • #17
PeroK said:
As @Dale has pointed out, all this has little to do with how physics works, is understood, or is taught. Modern physics includes Classical Mechanics, Classical Electromagnetism, Thermodynamics, Fluid Mechanics, Relativity (Special and General), Quantum Theory (QM, QED, QCD, QFT), Particle Physics, Cosmology (the Big Bang etc.), and so on.

These are not theories waiting to be falsified or invalidated. And very few physicists would talk about the "exact truth" of any of them. The pragmatic physicist would see these as successful models, each covering some particular set of natural phenomena. Some are more fundamental than others, but together they form the overall picture as of 2021. It's almost unthinkable that any of these models will disappear from the physics curriculum.

The notion that physics operates by "rejecting an initial hypothesis and replacing it with a new one" is not how physics works - except where a new model is being developed. In time, the model becomes robust and is then never replaced but may be extended and/or have a limitation of applicability placed on it. Coulomb's law being a perfect case in point.

Physics is done, by definition, the way physicists do it. There is no obligation for physics to be done the way a philosopher imagines physics is done. Even if the name of that philosopher is Karl Popper.


It's hard to tell exactly what you are claiming here. All I'm pointing out is that limiting the domain of applicability, or extending the model, both constitute rejecting the original hypothesis and replacing it with a new one from the formal perspective of either Popper's Logic of Scientific Discovery or the more modern Bayesian confirmation theory developed by Jaynes, Jeffreys, Hempel, Putnam, and others.

You also claim that there is no obligation for physicists to follow any particular methodological rules, in which case you are appealing to Feyerabend's view (https://en.wikipedia.org/wiki/Paul_Feyerabend), which is called "epistemological anarchism".
 
  • Skeptical
Likes weirdoguy
  • #18
Dale said:
##P(H|E)<1##

In any case, that is not terribly relevant.

That's wrong. That states that there is a probability less than 1 that the hypothesis is true given the evidence. Your claim was that no hypothesis can be entirely true. You should not conflate probabilities with truth values.
 
  • Skeptical
Likes weirdoguy
  • #19
madness said:
Your claim was that no hypothesis can be entirely true.
That was not my claim.

Again, the Bayesian stuff is off topic here and should be in a separate thread, but in any thread I would strongly recommend that you stop misrepresenting my statements.
 
  • #20
It seems this thread has become too philosophical, among other things.

As much as I would like to discuss Popper's book, especially as it was Einstein himself who first started that discussion, and as I, too, take a very distant view of Popper's hypotheses, which are purely philosophical and, in my opinion, completely inappropriate for assessing the natural sciences, I have to say that such a debate is not only off-topic but also excluded per our current rules.

This thread is closed.
 
  • Like
Likes suremarc

1. What is string theory?

String theory is a theoretical framework in physics that attempts to reconcile general relativity and quantum mechanics by describing the fundamental building blocks of the universe as tiny, vibrating strings instead of point-like particles.

2. How does string theory relate to special relativity?

String theory is built upon the principles of special relativity, which states that the laws of physics are the same for all observers in uniform motion. In string theory, the strings vibrate at different frequencies, which correspond to different particles and their properties, all while maintaining the principles of special relativity.

3. Does string theory falsify special relativity?

No, string theory does not falsify special relativity. In fact, string theory is built upon the principles of special relativity and is consistent with its predictions. String theory expands upon special relativity by providing a more comprehensive framework for understanding the fundamental nature of the universe.

4. Are there any experiments that support or refute string theory's compatibility with special relativity?

Currently, there are no experiments that directly support or refute the compatibility of string theory with special relativity. However, string theory has made predictions about phenomena that have yet to be observed, such as the existence of additional dimensions. If these predictions are confirmed through experiments, it would provide evidence for the compatibility of string theory and special relativity.

5. Are there any alternative theories that challenge the compatibility of string theory and special relativity?

There are alternative theories, such as loop quantum gravity, that attempt to reconcile general relativity and quantum mechanics without the use of strings. However, these theories are still in development and have not yet been able to fully explain all of the phenomena that string theory can. Furthermore, these alternative theories are also built upon the principles of special relativity, so they do not necessarily challenge its compatibility with string theory.
