
Changing the speed of light does nothing?

  1. Dec 17, 2004 #1
    I've heard that if the speed of light were to change there would be no effect on the universe because changing it would change other things in a way that everything would still be the same. Can someone please explain this or tell me what would happen if it isn't true?
  3. Dec 18, 2004 #2


    User Avatar
    Science Advisor

    What do YOU mean by "changing the speed of light"? If we were to declare that the speed of light is "1 unit" then all we've done is change our measuring unit and there would, in fact, be no difference. On the other hand, if we were to change the speed of light RELATIVE TO THE SPEEDS WE NORMALLY USE we would see differences. I believe George Gamow wrote a children's book ("Mr. Tompkins in Wonderland") on the subject.
  4. Dec 18, 2004 #3


    User Avatar
    Staff Emeritus
    Science Advisor

    "Changing the speed of light" is just too vague to mean anything without some more description of what, exactly, is being changed.
  5. Dec 18, 2004 #4
    I'm talking about the maximum speed of everything, 'c'; you know what I mean. I think it's obvious that I don't mean light itself. Also, I obviously don't mean changing the units.
    What if the speed of light were 1/1000th as fast, starting now? What effect would this have on things?
    Similarly, is there anything tied to the speed of light such that changing it would affect light's speed? I'm not talking about light itself; I'm talking about the maximum speed of everything.
  6. Dec 19, 2004 #5
    Read Tompkins.
  7. Dec 19, 2004 #6


    User Avatar

    Staff: Mentor

    It would change so much that there is little that would stay the same. For starters, the output of the sun would change and gravity would change, so earth would fly out of its orbit. Your tv wouldn't work, your cd player wouldn't work, nuclear plants wouldn't work. GPS, cell phones, the internet, etc, etc, etc.
  8. Dec 19, 2004 #7
    The question seems pretty off track to me. But consider that c is directly linked to Coulomb's constant k. So if you mean changing the ratio of c to the electron charge, it seems there would be implications at the atomic scale, i.e. matter would not hold together in the same way.
    Last edited by a moderator: Dec 19, 2004
  9. Dec 19, 2004 #8
    If it is shown, as some experimenters such as John D Barrow in Australia claim, that the velocity of light has been slowing since some early time (e.g. the big bang), it may have no effect upon physics because of the way c enters into the equations. Moreover, a change in the velocity may not be measurable, much like the difficulty in trying to measure a different clock rate in a moving reference frame. C may turn out to be more profound than a limit upon local light velocity. As I have previously proposed, perhaps we should consider c primarily as the velocity of expansion of the Hubble sphere, and then ask whether this rate determines the local limit of wave propagation rather than the local properties determining the cosmic parameters, sort of like the tail wagging the dog. Moreover, if c is a factor that arises from the ratio of a change in the cosmic scale factor (dR) to the change in the cosmic age (dt), it may not be possible to detect any other value for the ratio "c" even though the individual factors (dR) and (dt) change.
    Last edited: Dec 19, 2004
  10. Dec 19, 2004 #9


    User Avatar
    Staff Emeritus
    Science Advisor

    I'm afraid that it really does NOT make sense to talk just about "varying c" as if it were a physical change. One has to provide considerably more detail for the concept even to make sense.

    The reason for this is that 'c' is a constant that has dimensions. So you cannot escape the problem of what units you use to measure 'c' with. There is no difference between changing the speed of light, and shrinking the meter, for instance.

    It turns out that in another thread, this paper was presented as a serious discussion of the idea of a "variable speed of light". The first part of the paper is a long discussion of what the author actually means by varying the speed of light.

    The author is a maverick, but he's actually taken a serious look at this old chestnut. The theories he comes up with are not particularly likely to be true and have basically no evidence to suggest that they might be true, but they are self consistent. This is no mean feat for a "variable speed of light" theory, which is a bit like a dancing bear. It's not that the bear dances particularly well, it's just amazing that the bear (the theory in this case) dances at all (is consistent with itself and makes testable predictions).

    Because I think this is an interesting point, well explained, and because I have my doubts that you will actually read the link, I'll provide a rather long quote from the paper discussing the difficulties with the idea of a variable speed of light. If you get interested, you can then read the paper for the resolution of the difficulties.

  11. Dec 19, 2004 #10


    User Avatar
    Staff Emeritus
    Gold Member
    Dearly Missed

    I don't think he was talking about a variable c. If I understood, the question was: suppose the value of c were 300 km per second instead of 300,000. What would be the physical effects if everything else were the same? Obviously relativistic effects, instead of being unnoticeably small at ordinary human speeds, would be measurable: a car moving at 100 km/h would be going about 0.03 km/s, or roughly 10^-4 c. Gamma would differ from 1 by a small but measurable amount.
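    To put rough numbers on this, here is a minimal Python sketch (an illustration only, assuming c = 300 km/s as above and everything else unchanged):

```python
# Relativistic beta and gamma for a car at 100 km/h,
# in a hypothetical world where c = 300 km/s.
import math

c = 300.0            # km/s, the hypothetical reduced speed of light
v = 100.0 / 3600.0   # 100 km/h converted to km/s (~0.028 km/s)

beta = v / c
gamma = 1.0 / math.sqrt(1.0 - beta**2)

print(beta)   # ~9.26e-05, i.e. roughly 1e-4 of c
print(gamma)  # barely above 1: a small but nonzero time dilation
```

    Even with c slowed by a factor of 1000, everyday speeds give a gamma only a few parts per billion above 1.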
  12. Dec 19, 2004 #11


    User Avatar
    Staff Emeritus
    Science Advisor

    There are at least two different ways of changing the speed of light, and possibly more, with differing physical consequences. The question doesn't have a single, unambiguous answer IMO.

    One way of changing 'c' is to change the value of the permittivity of free space. This sort of change is probably going to affect the size of atoms, as the Bohr radius is directly proportional to the permittivity of free space. This assumes that one is _not_ changing hbar or the mass and charge of the electron, and _is_ changing the fine structure constant. Different assumptions might give different results here. Since the physical constants are all interrelated, one has to be clear about which constants are changing and which are not. When changing a dimensionful constant like 'c', it's not obvious what to choose to remain constant, and what to allow to vary, among the physical constants.

    I have particular choices I've made here, but others might not make the same choices. Here's one specific example of the general issue which arises:

    the fine structure constant is

    alpha = e^2 / (4 * pi * E0 * hbar * c)

    So, one cannot have the fine structure constant, the charge on the electron, the permittivity of free space, and Planck's constant all be constant when one changes 'c'. Yet all of these are "fundamental constants".

    Thus one has to "pick and choose" which constants are the ones that are changing from the above list, and which are not changing. I would personally go with changing the fine structure constant, but it's not clear to me that someone else posed with the same problem would make the same assumption.
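    To make the bookkeeping concrete, here is a small Python sketch (rounded CODATA-style values; the assumption, which is one choice among several, is that e, the permittivity, and hbar stay fixed while c varies):

```python
# Fine-structure constant alpha = e^2 / (4*pi*eps0*hbar*c).
# If c alone changes, alpha must change in inverse proportion.
import math

e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s

def alpha(c_val):
    return e**2 / (4 * math.pi * eps0 * hbar * c_val)

print(1 / alpha(c))                # ~137.036, the familiar value
print(alpha(c / 1000) / alpha(c))  # 1000: alpha grows if only c shrinks
```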

    Another way of changing 'c' is to change the magnetic permeability of free space. This sort of change will have different results than changing the permittivity of free space. (Perhaps a closer inspection will reveal some degree of interrelationship between these two changes, perhaps not.)

    One fairly robust consequence of changing 'c' is that the Schwarzschild radius associated with a given mass will get larger. The only needed assumption is that 'c' changes and that 'G' does not change. G and 'c' are not tightly related, so this seems like a reasonable assumption.

    One can fairly robustly conclude from this that if general relativity is assumed to be correct, the sun would be a black hole if one reduced the speed of light by a factor of 1000.

    This is because the Schwarzschild radius is 2GM/c^2, and reducing c by a factor of 1000 increases the Schwarzschild radius by a factor of a million.

    Thus the Schwarzschild radius of about 3 km for a solar mass becomes about 3 million km. The actual radius of the sun is smaller than this, about 700,000 km. So, if we reduced 'c' by a factor of 1000, the sun would be a black hole.
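    The arithmetic can be checked with a short Python sketch (rounded constants; using the standard Schwarzschild radius r_s = 2GM/c^2 and assuming G and the solar mass are unchanged):

```python
# Schwarzschild radius of one solar mass, with the normal c
# and with c reduced by a factor of 1000.
G = 6.674e-11      # m^3 kg^-1 s^-2
M_sun = 1.989e30   # kg
c = 2.998e8        # m/s
R_sun = 6.96e8     # m, the actual solar radius (~700,000 km)

def r_s(c_val):
    return 2 * G * M_sun / c_val**2

print(r_s(c))                 # ~3.0e3 m, about 3 km
print(r_s(c / 1000))          # a million times larger, ~3.0e9 m
print(r_s(c / 1000) > R_sun)  # True: the sun sits inside its own horizon
```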
  13. Dec 19, 2004 #12
    I agree w/you here pervect, 100%, as well as with your point that a change in the vacuum permeability could also alter c. However, since c = 1/sqrt(e x u), an increase in permittivity could take place without a change in c provided there is an equivalent reduction in permeability such that the product (e x u) remains unchanged (or vice versa). :wink:
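    A quick numerical check of that relation (rounded SI values; the scale factor k = 10 is arbitrary):

```python
# c = 1/sqrt(eps0 * mu0): scale eps0 up and mu0 down by the same
# factor and c is unchanged, since only the product matters.
import math

eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
mu0 = 1.25663706212e-6   # vacuum permeability, H/m

c = 1.0 / math.sqrt(eps0 * mu0)
print(c)  # ~2.9979e8 m/s

k = 10.0
c_scaled = 1.0 / math.sqrt((eps0 * k) * (mu0 / k))
print(abs(c - c_scaled) / c < 1e-12)  # True: c is unchanged
```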

    I think Pervect answered this pretty well; however, let me add something .
    It is now becoming well known that in certain situations the value of c in a vacuum can change locally, due to the fact that a massless particle's speed depends upon the energy density of the vacuum (which is equivalent to saying it depends upon the vacuum permittivity and permeability). As the energy density, for example between two Casimir plates, is lowered, it is predicted that the value of c will exceed the 'normal' vacuum value (for light traveling normal to the plates). This, however, is hardly realizable experimentally due to the closeness of the plate separation and the smallness of the effect.

    Creator :biggrin:
    Last edited: Dec 19, 2004
  14. Dec 29, 2004 #13
    "Changing the speed of light does nothing."

    How can we know?
    This is one of those problems where you can only really give a sensible answer by doing an experiment.

    Obviously you're not referring to light travelling through glass or air, which both lower its speed, or to mirrors and other non-transparent materials, which generate instantaneous c values of zero. These are merely trivial examples; I'm assuming you are talking about changing the curvature of space-time in a more general way.

    There are theories that might let you create experiments that look at c itself. Using enormously high energies, a machine called an 'Aperture Engine' can create a small local space-time often called a 'subspace'. Because it has very low energy this 'subspace' can be manipulated to change the laws of physics within it. It might be possible to manipulate a subspace to change its local value of c directly. This could answer your question at least partly.

    Unfortunately it might be very difficult to recover the science behind Aperture Engines, because most of it was thrown away in the Kennedy era and, as far as I know, very little remains. You would have to undo a lot of the military censorship in physics as well even to be able to work on them, and that could end up answering your question first anyway.
    (The censorship is a lot less hard to undo than it looks, though, because it was done for very little reason at a time of panic when the echelons of power were riddled with drug use: "apparently the General's latest LSD fantasy has told him that our new project will soon become a real threat to humanity". If the main reasons for their panic had been real then the universe couldn't even exist, and even if it did, the Earth could spontaneously explode at any moment.)
    Last edited: Dec 30, 2004
  15. Dec 30, 2004 #14

    Changing the value of c to, let's say, 100,000 MPS would change a lot. The creator of the universe has made these constants very precise, and if they were off, everything would malfunction; we would see a universe full of entropy.
  16. Dec 30, 2004 #15
    If the speed of light were to change, of course there would be no effect on the universe, because other things would change too. The speed is v = x/t, so when the speed changes, the distance and the time will change too. So a change of speed is possible.
    Last edited: Dec 30, 2004
  17. Dec 30, 2004 #16
    I had always considered "variable constants" to be without much experimental support and probably of only pedagogical interest. The Tompkins book shows what happens when constants are changed arbitrarily and incorrectly. Inserting a constant into a calculator and getting the exponent wrong by a few orders of magnitude gives wrong and bizarre answers. However, for students learning physical laws that involve these constants, these errors, if intentionally made, drive home the effect of the constant in physical law. Thus if c were 200 km/h we'd need special relativity to drive a car. c does not have that value, but the error may have some pedagogical merit. This is what I thought the original poster was asking about.

    For those who wish to muck about inside the error bars, there is the possibility that constants might change. I've just had a look at Duff's paper:
    http://arxiv.org/PS_cache/hep-th/pdf/0208/0208093.pdf [Broken]
    and actually learned something. Duff points out that it is only for some dimensionless ratios that a time dependence makes any sense. He shows it is nonsense to let dimensioned constants vary in time. The units are our arbitrary creations and he gives several different sets of units. If the fine structure constant varied, then in one set of units it would be blamed on c changing and in others it would be Planck's constant changing, etc. This is true but differs from the folklore. His critics are most amusing: "We all know the folklore and Duff is wrong". Burning Duff at the stake makes as much sense.
    Last edited by a moderator: May 1, 2017
  18. Dec 30, 2004 #17


    User Avatar
    Staff Emeritus
    Science Advisor

    I used to believe Duff's position (without having heard of Duff) myself, until I read


    which makes a reasonably good case that the choice of units is important for a clear exposition of a theory. He gives an example of how one could define time in terms of "heartbeats", and shows why this sort of definition makes physics unnecessarily complex; the point being that units are chosen to make the theory simple to explain and analyze.

    While I don't believe the actual theories proposed in this paper are particularly interesting (being very strained and awkward), I do agree with the author's point that it is an overstatement to say that one can never vary a constant with dimension.

    I'd agree that a theory that varies a constant with dimension is incomplete if no further information is given. (This is often the case when non-scientists ask the question - they think it is unambiguous, and it's hard to explain to them why the question is ambiguous).

    People can make reasonable and different assumptions about what "remains constant" when a dimensionful constant is varied, so any such theory is incomplete unless these issues are addressed.
  19. Dec 31, 2004 #18
    You and Duff are the only people I have heard talk of "Stoney Units" and elsewhere you use them like Duff to argue a point. Some people have the benefit of a better education or maybe they are just brighter than me. Duff's paper was a revelation to me, unlike the paper you cited. You characterize these theories very charitably. I agree with you that units should be carefully chosen for simplicity of analysis and explanation, but simplicity, while usually and amazingly correct, is not a physical demand. Defining time, as Newton suggested, to make motion look simple gives us an expanding universe, instead of one with static distances and young clocks that tick faster. I think the point is that if a dimensioned constant appeared to be changing, it would be difficult without some kind of simplicity assumption to blame the change on that constant and not something else. I think that is the sort of thing you are referring to when you speak of incompleteness.

    By the way, at least half the papers I've found on radiation from an accelerated charge assume that radiation is observer dependent. We'll see.
  20. Jan 1, 2005 #19


    User Avatar
    Staff Emeritus
    Science Advisor

    Simplicity doesn't prove a theory correct, agreed. But in formulating a theory, it is desirable to formulate it as simply as possible. If one has two formulations of a theory that are equivalent, one simple, one complex, the simple formulation is preferable. The simple formulation of a theory may require particular units to be used. Thus, it may be counterproductive to insist a priori that particular units be used (such as units in which 'c' is constant), even though it is always possible in principle to formulate a theory in this manner.

    This was the point of the example. It is possible to reformulate modern physics in "heartbeat units", in such a way that it would be equivalent to modern physics defined in the units we usually use. But it would be extremely undesirable.

    Personally, I tend to agree with the common wisdom that the simplest and most accurate formulation of a theory that represents reality will _probably_ result from the units we already use, units in which 'c' is constant. In fact, I'd be willing to wager on it. But this is prejudice. Or perhaps experience.
  21. Jan 1, 2005 #20


    User Avatar
    Science Advisor
    Gold Member

    The concordance model places very tight constraints on how much the Planck units may vary with respect to each other over time. The wiggle room is very small [less than one part per trillion] and is very soundly supported by observational evidence.
  22. Feb 1, 2008 #21
    Sometimes it would be very nice to be able to delete your old posts (blush). Those 'aperture engines' were actually some kind of particle accelerator, basically no more than synchrotrons. And the rather embarrassing theory was speculation based on speculation. As for the bit at the end, the military did 'censor' all kinds of things, but it wasn't so much active censorship as a large bin.
  23. Feb 1, 2008 #22


    User Avatar

    well, my question for you is exactly how would you know? precisely what measurement would tell you?

    well, the Planck length would increase by a factor of 1000^(3/2), the Planck time would increase by a factor of 1000^(5/2), and the Planck mass would change by a factor of 1000^(-1/2). so now my question for you, Donk, is what you're expecting for the Bohr radius, the period of cesium radiation in atomic clocks, and the masses of particles (say, the electron)? are they changing in the same proportion as the corresponding Planck units? if you say "yes, all existing ratios of physical quantity remain the same, just the speed of light is changing", then i would ask how you, who are affected by all of the laws of physics, would even know of this change of c? if you say "no, these Planck units are changing value (due to a change in c) but all the other dimensioned quantities somehow stay the same", then i would say that things would be noticeably different (if possible at all), but the salient fact is that these dimensionless ratios of Bohr radius to Planck length, or period of cesium radiation to Planck time, or mass of electron to Planck mass, those dimensionless parameters are the salient parameters. it's meaningful if the fine-structure constant has changed (being dimensionless). it's not meaningful to say that some dimensionful parameter has changed unless you say in reference to what other like-dimensioned parameter. and then if you do that, i would say that the important fact is that the dimensionless ratio has changed.
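    those scaling factors follow from the defining formulas for the Planck units; a minimal Python sketch (assuming hbar and G are held fixed while c drops by a factor of 1000):

```python
# Planck length, time and mass as functions of c,
# with hbar and G held fixed.
hbar = 1.054571817e-34  # J s
G = 6.674e-11           # m^3 kg^-1 s^-2

def planck_units(c):
    l = (hbar * G / c**3) ** 0.5   # Planck length
    t = (hbar * G / c**5) ** 0.5   # Planck time
    m = (hbar * c / G) ** 0.5      # Planck mass
    return l, t, m

c0 = 2.998e8
l0, t0, m0 = planck_units(c0)
l1, t1, m1 = planck_units(c0 / 1000)

print(l1 / l0)  # 1000**(3/2), ~3.16e4
print(t1 / t0)  # 1000**(5/2), ~3.16e7
print(m1 / m0)  # 1000**(-1/2), ~3.16e-2
```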

    sure, any physical parameter with "c" appearing in its expression. the parameters that i would direct your attention to would be the Planck units (or choose another set of natural units).

    well, i'm with you, Rob (as people here well know).

    i tried some years ago getting through Magueijo's objection, but he never dispelled the "myth" of the constancy of c being a matter of logical consistency. (maybe pervect can explain it to us.)

    what would be persuasive to me would be if a whole bunch of dimensionless parameters, all with c in them and "linearly" independent (think of the logarithms of all of these dimensionless parameters, expressible as a sum of the logs of all of the component parameters, being linearly independent), if several ostensibly constant parameters varied in different manners in such a way that would be consistent with c varying and no other component factor, that would be persuasive. but if it's just one parameter, like [itex]\alpha[/itex], the salient fact would be that [itex]\alpha[/itex] varied and we would have no way of knowing if that were due to e, [itex]\hbar[/itex], c, or [itex]\epsilon_0[/itex]. maybe it was due to a variation of 4 or [itex]\pi[/itex].
    Last edited by a moderator: May 3, 2017
  24. Feb 2, 2008 #23


    User Avatar

    i dunno if pervect is gonna slap me down for this, but i don't think this is OT, it's about the broader question about the relative meaningfulness of changing dimensionful vs. dimensionless ostensibly constant physical parameters, of which c is one example.

    it is commonly said that gravity is relatively a very weak force in comparison to the other forces between elementary particles, let's say electrons. considering both inverse-square laws regarding the electrostatic and gravitational interactions, the attractive gravitational force is about 10^-43 times as strong as the repulsive electrostatic force. for heavier particles the disparity is nearly as severe: 10^-38. so atomic physicists don't worry about gravitational effects in their equations, and this is the reason why your foot doesn't just sink into the concrete despite all that empty space between the subatomic particles in the atom. but is it fundamentally that gravity is relatively such a weak force? or is it as Frank Wilczek puts it:

    anyway, the electron mass in units of the Planck mass is then [itex]\sqrt{\alpha}[/itex] x 10^-(43/2), or about 10^-(45/2); it's because that number (and similarly for the other particles) is so small that you don't have to worry about gravity in atomic physics. (however, the reason why you do have to worry about the E&M forces is because [itex]\sqrt{\alpha}[/itex] is about 10^-1, much closer to one.) what's interesting is that this mass ratio is directly related to the size of the Bohr radius (in units of the Planck length).

    [tex] a_0 = \frac{4\pi\epsilon_0\hbar^2}{m_e e^2} = \frac{m_P}{m_e \alpha} L_P [/tex]

    [itex]\alpha[/itex] is only a couple of orders of magnitude from one, so the reason that atoms are so big (in terms of Planck units) is because the mass of the electron is so small. or it could be said the other way around: the reason that the masses of particles are so small is because the size of atoms is so big. either way. but the reason why gravity is comparably so weak is equivalent to these. it's a circular mantra: "gravity between particles is relatively weak because the masses of particles are so small, which is because the size of atoms is so big, which is because the masses of particles are so small, which is because gravity is relatively weak, which is because ..."
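    the relation above can be checked numerically (rounded values; a sketch, not a precision calculation):

```python
# Bohr radius in Planck lengths: a_0 = (m_P / (m_e * alpha)) * L_P.
hbar = 1.054571817e-34  # J s
G = 6.674e-11           # m^3 kg^-1 s^-2
c = 2.998e8             # m/s
m_e = 9.109e-31         # electron mass, kg
alpha = 1 / 137.036     # fine-structure constant

m_P = (hbar * c / G) ** 0.5     # Planck mass, ~2.18e-8 kg
L_P = (hbar * G / c**3) ** 0.5  # Planck length, ~1.62e-35 m

a0_in_planck = m_P / (m_e * alpha)  # ~3.3e24 Planck lengths
print(a0_in_planck)
print(a0_in_planck * L_P)  # ~5.29e-11 m, the familiar Bohr radius
```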

    anyway, just like what Wilczek said about "the strength of gravity [being] simply what it is, a primary quantity...", the same can be said about the speed of light and the other fundamental interactions. why it measures to be 299792458 m/s is a question of historical accident. a good question might be: why such a large number? or why so fast compared to our everyday experience? that is because the meter is about as big as we are and a second (or a heartbeat) is about as fast as we can think thoughts in rapid succession (how fast time seems to be flying past our consciousness), at least within an order of magnitude. that sets the scale, experientially or perceptually, for us. now the scale for nature is set at about 10^-35 meter and 10^-43 second. there are a lot more Planck times in a second than there are Planck lengths in a meter: about 10^((-35)-(-43)) or 10^8 times more. that is, more basically, why light seems so fast.
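    the closing arithmetic, sketched numerically (note the exact ratio works out to c in m/s, of order 10^8):

```python
# Planck times per second vs Planck lengths per meter;
# their ratio is l_P / t_P = c.
hbar = 1.054571817e-34  # J s
G = 6.674e-11           # m^3 kg^-1 s^-2
c = 2.998e8             # m/s

t_P = (hbar * G / c**5) ** 0.5  # Planck time, ~5.4e-44 s
l_P = (hbar * G / c**3) ** 0.5  # Planck length, ~1.6e-35 m

times_per_second = 1.0 / t_P    # ~1.9e43
lengths_per_meter = 1.0 / l_P   # ~6.2e34

print(times_per_second / lengths_per_meter)  # ~3.0e8, i.e. c
```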
    Last edited by a moderator: May 3, 2017