I did some research into this. Most consequences of a change in alpha scale either linearly or as small powers of alpha; some scale as larger powers. Here are my conclusions.
To begin with, an instantaneous change of alpha by even a tiny amount would immediately kill you. Typical atomic bonds are about 2x10^-10 m long and have spring constants around 7x10^29 N m^-1; n identical springs with constant k in series have spring constant k/n. If we have a chain of atoms joined by bonds, with total length d meters, we get n = 5x10^9 d and the spring constant of the chain is 1.4x10^20 d^-1 N m^-1.
An instant change of alpha by a fraction 10^x will not instantly change the length of the chain, but will instantly change the length that would be in equilibrium by that same fraction. The result is a displacement of 10^x d. The spring force that instantly arises is then 10^x d m * 1.4x10^20 d^-1 N m^-1 = 1.4x10^(20+x) N. Even a part-per-million change in alpha (x = -6) results in forces around 10^14 N, independently of the size of the chain, acting on an atom-sized mass for a seriously ginormous acceleration.
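For concreteness, the chain-of-springs arithmetic can be sketched in a few lines of Python (the bond length and spring constant are the representative values quoted above, not measured properties of any specific material):

```python
# A quick check of the chain-of-springs numbers above, using the
# representative bond length and per-bond spring constant from the text.
bond_length = 2e-10        # m, typical atomic bond (value from the text)
k_bond = 7e29              # N/m, per-bond spring constant (value from the text)

d = 1.0                    # m, chain length; it cancels out of the final force
n = d / bond_length        # bonds in series along the chain: 5e9 * d
k_chain = k_bond / n       # series springs combine as k/n: 1.4e20 / d N/m

x = -6                     # fractional alpha change of 10**x (part per million)
displacement = 10**x * d   # the equilibrium length shifts by the same fraction
force = k_chain * displacement  # 1.4e20 * 10**x N, independent of d

print(f"force = {force:.2e} N")  # force = 1.40e+14 N
```

The chain length d drops out, which is exactly the "independently of the size of the chain" point above.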
If we momentarily ignore all contraction transverse to the length of a beam of material, the material may be modeled as chains like the above in parallel, attached to a thin sheet of atoms at its end. If there are N atoms in an atom-thick cross-section, there are N springs with the above constant in parallel attached to the sheet, and N atoms in the sheet. Parallel springs are additive, so the spring constant, and the force, is multiplied by N. The mass it acts on is also, obviously, multiplied by N. So, with our simplifying assumptions, that ginormous acceleration is unchanged whether the beam is an atom thick or macroscopic.
Transverse contraction, bonds at angles other than straight to the surface from the center of mass, and other complications exist, but unless they cause many-orders-of-magnitude changes, the accelerations experienced by any material object will be enormous for even very small changes in alpha -- divided by a factor of alpha squared for the changed timescale of electromagnetic phenomena, of course, which is a minuscule correction here.
Upshot: an instant change in alpha by more than parts per quadrillion is instantly lethal to everything, everywhere. Also, those huge restoring forces from tiny changes explain why commonplace solid materials are so darn incompressible.
A slower change could produce tolerable forces and accelerations. I calculate that changing alpha by one part per million over a period of one second would result in Earth shrinking by roughly seven meters in radius in a similar time, with accelerations similar to that due to gravity. The damage would then be that from everything freefalling two storeys, plus accompanying turbulence, earthquakes, atmospheric storms, and other consequences. It's enough to be a dino-killer event even then, especially when you note that for part of that second there'd be enough underpressure in magma chambers to "uncork the champagne", so to speak, and cause numerous massive eruptions around the world. A pretty drastic consequence for such a tiny tweak.
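A rough sanity check of the slow-change figures, assuming lengths scale linearly with the fractional alpha change; the smooth sinusoidal one-second contraction profile is an assumption for illustration, but any smooth profile gives similar numbers:

```python
import math

R_earth = 6.371e6   # m, mean Earth radius
dalpha = 1e-6       # one-part-per-million change in alpha
T = 1.0             # s, assumed duration of the change

dR = R_earth * dalpha   # radius change if lengths track alpha: ~6.4 m

# Assume a smooth displacement profile r(t) = (dR/2) * (1 - cos(pi * t / T));
# its peak acceleration is (dR/2) * (pi / T)**2.
a_peak = (dR / 2) * (math.pi / T) ** 2

print(f"shrinkage ~{dR:.1f} m, peak acceleration ~{a_peak:.0f} m/s^2")
```

This comes out at roughly six to seven meters of shrinkage and a few times g of peak acceleration, the same order as the figures above.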
The energy released by the contraction would presumably end up as heat. Potential energy in a spring is 0.5 k d^2. For a typical atomic bond, and a fractional length change of 10^x, that's 0.5 * 7x10^29 N m^-1 * (2x10^(x-10) m)^2 = 1.4x10^(2x+10) J. For x = -6 that's 0.014 J per molecular bond; an order of magnitude estimate for the energy released in a material object is thus 10^22 J/g -- exajoules released from every grain of sand! (I'm actually somewhat skeptical of this number; for reference, the Fat Man nuke yielded about 10^14 J, and annihilation of that same gram of matter to energy would yield a comparable amount, so that "gram" actually gets 10^8 times more massive from its potential energy at the moment alpha changes, suggesting Earth wouldn't even nuke, but implode into a black hole -- seems excessive.)
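The same kind of sketch for the energy estimate; the bonds-per-gram count here is an order-of-magnitude assumption chosen to reproduce the 10^22 J/g figure above, not a measured value:

```python
k_bond = 7e29          # N/m, per-bond spring constant (value from the text)
bond_length = 2e-10    # m, typical atomic bond
x = -6                 # part-per-million fractional change in alpha

d = bond_length * 10**x          # per-bond displacement: 2e-16 m
E_bond = 0.5 * k_bond * d**2     # spring energy: 0.5 * 7e29 * (2e-16)**2 J

bonds_per_gram = 1e24            # assumed order-of-magnitude count
E_per_gram = E_bond * bonds_per_gram

print(f"{E_bond:.3f} J/bond, {E_per_gram:.1e} J/g")  # 0.014 J/bond, 1.4e+22 J/g
```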
Some of the energy ends up as sound, but that'll turn into heat eventually; accelerating charges emit light, but only the object's surface could shed energy this way, so all the energy in the interior would be converted into heat unless it had some way to become, or get transferred to, weakly-interacting particles of some kind.
Obviously, for even the slow version not to nuke everything with sheer heat output the vast majority of this energy would have to be radiated or absorbed by some mechanism.
One possibility is quantum mechanical: if alpha ever actually changes out from under us, then alpha was a quantum scalar field which decayed to a lower energy level. The energy released by a tiny drop may not be especially huge, and the decay itself, to have waited until now before happening, would have to have an energy barrier -- picture a potential with two wells, one slightly lower than the other, and a hill between them. The decay could take energy from the shrinking bonds it sweeps over to enhance its ability to tunnel through the hill, and then release that energy in some weakly-interacting form, depending on the precise nature of the quantum scalar field at issue. Since it's presumably linked to the electroweak force, it's not ridiculous that it could release energy in a weak form -- possibly by pair-creation of dark matter or neutrinos -- that would flee with the decay wavefront and help fuel its propagation, along with the direct energy of the released field potential.
Another is that, to an electromagnetically-constructed object, there has been a change in energy and temperature scales. In particular, the thermal energy carried by microscopic particles' motions doesn't instantaneously change, but the energy an EM-based observer measures is different by α^-2 -- smaller, in particular, if alpha increased, because their energy "yardstick" changes as α^2. To first order, a fractional alpha change of 10^x makes for a fractional energy shortfall of about 2x10^x. The thermal energy per atom in typical materials is a few times 1.38x10^-23 J/K, however, so the shortfall at temperatures of a few hundred K from a part-per-million change in alpha will be on the order of 10^-26 J per atom, far lower than the spring energy release and unable to absorb more than the merest fraction of it (like, one part in 10^24 or so). Further, the ordinary heating from compression would recover the shortfall (exactly!) all on its own, without the spring energy release.
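Numerically, taking the first-order scaling (a fractional alpha change of 10^x rescales an α^2 energy scale by about 2x10^x), the mismatch between the thermal shortfall and the spring energy release looks like this:

```python
k_B = 1.38e-23     # J/K, Boltzmann constant
T = 300.0          # K, a few hundred kelvin
x = -6             # part-per-million fractional change in alpha

E_thermal = 3 * k_B * T             # a few k_B*T per atom: ~1.2e-20 J
shortfall = E_thermal * 2 * 10**x   # first-order change of an alpha**2 energy scale

E_spring = 0.014                    # J released per bond, from the estimate above
print(f"shortfall {shortfall:.1e} J/atom; ratio {shortfall / E_spring:.0e}")
```

However the shortfall is counted, it is twenty-plus orders of magnitude too small to soak up the spring energy.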
Long story short: anything not very gradual will cook or even explode you, absent some exotic physics related, presumably, to how alpha itself was able to change. Anything faster than a sizable fraction of a second will splat you with g-forces on top of nuking you to some ridiculously huge temperature or imploding you into a black hole.
What about long-term effects, such as from a very gradual change?
Absent highly nonlinear effects in, say, the chemistry and electrophysics of heavier elements (which could affect e.g. the structural strength of steel, the mechanical stability of UF6 fuel elements, semiconductor physics, and other possibly sensitive technological systems) there are surprisingly few for alpha changes less than a percent or so.
At that point, there are increasingly major consequences to planetary habitability because of changes to stars.
http://deepblue.lib.umich.edu/bitstream/2027.42/64225/1/jcap8_08_010.pdf
The above paper contains a mathematical model of a(n ideal, perfectly spherical) star as a function of alpha, G, and a few other constants. It's a bit hairy to pick through it to see what the effects are of changing a constant while keeping the star's mass fixed, but it can be done, and the upshot is that luminosity scales as α^-3 and radius as α^-1 -- yep, the latter says a star changes size with alpha the same way a material object does, despite a star being supported by nuclear rather than just electromagnetic phenomena. (Exception: neutron stars. Degeneracy pressure of neutrons is presumably independent of alpha and depends solely on the strong force.)
A planet sees a bigger luminosity change, though. An observer, or an acre of photosynthesizing plants, or a square km of heat-absorbing ocean, changes area as α^-2 and so receives an immediate change in collected solar energy by the same factor. Long term, as the changes in the star's core work their way out to the surface, that becomes α^-5. (The observer perceives the areas as unchanging, but the sun as having become α times as far away, and thus α^-2 times its apparent luminosity, consistent with the calculation from an outside viewpoint that didn't change its size units.)
The sun is anticipated to become about 5.5 percent more luminous in the next 600 million years for hydrogen shell migration reasons. At around that time, maintaining a tolerable temperature on Earth (assuming Earth's orbit is undisturbed after all that time) requires atmospheric CO2 to go to zero, suffocating plant life of its carbon source; leaving CO2 for the plants makes Earth too hot. We can therefore take roughly five percent as the threshold for a long-term lethal luminosity increase. Alpha decreasing by about one percent would kill everything with heat, then: the apparent luminosity would immediately jump by about two percent, enough to cause severe climate disturbances, and once the increased core nuclear burning worked its way to the surface, the full five-percent increase would spell the end of the planet's biosphere.
A larger magnitude is needed for an alpha increase to cause problems: Earth's near the inner edge of the habitable zone of the Sun, and a CO2 increase could compensate for a fairly large luminosity drop. The Earth may have been largely frozen over repeatedly before about 700 million years ago; back then the Sun was maybe 90% its present luminosity. A 10% drop in apparent luminosity requires a roughly 2% increase in alpha to cause.
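The 2% figure follows directly from the α^-5 long-term scaling of apparent luminosity:

```python
# Long-term apparent luminosity seen from the planet scales as alpha**-5
# (alpha**-3 from the star's output times alpha**-2 from the observer's area).
target = 0.90   # a 10% drop in apparent luminosity

# Solve (1 + delta)**-5 = target for the fractional alpha increase delta.
delta = target ** (-1 / 5) - 1
print(f"required alpha increase: ~{delta * 100:.1f}%")  # ~2.1%
```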
Alpha changes larger than a couple of percent will shift nuclear energies enough to kill off the resonances that enable the triple-alpha fusion process in red giant stars. Without this, synthesis of most heavy elements is liable to stop, limiting the future supply of "metals" for planet formation to the present supply.
Colors of stars are also changed. Surface luminosity goes as the square of radius and the fourth power of surface temperature, and also as α^-3; radius goes as α^-1. If the surface temperature changes as α^y then the surface luminosity formula gives α^-3 = α^-2 * α^(4y), which is easily solved: 4y = -1, so the surface temperature changes as the inverse fourth root of alpha. Wien's Displacement Law says that the product of the wavelength peak of blackbody radiation and temperature is a constant, so if that constant is independent of α the wavelength of sunlight changes as the (direct) fourth root of α, i.e. reddens if α increases. The derivation of Wien's Displacement Law appears to be independent of α. So, star color changes with changing alpha, but quite weakly; a very large change (such as an actual doubling of alpha) would be needed to see a pronounced reddening or bluing of stars.
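The weak color dependence can be checked numerically from those scalings:

```python
alpha_ratio = 2.0   # an outright doubling of alpha

# From L ~ R**2 * T**4 with L ~ alpha**-3 and R ~ alpha**-1:
T_ratio = alpha_ratio ** (-1 / 4)    # surface temperature factor
# From Wien's law, lambda_peak * T = const:
peak_ratio = alpha_ratio ** (1 / 4)  # peak-wavelength factor

print(f"T x{T_ratio:.3f}, peak wavelength x{peak_ratio:.3f}")
```

Even doubling alpha shifts the blackbody peak by under 20%, hence "quite weakly".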
These stellar effects stem from changing alpha (while leaving the strong force alone) making the nuclear fusion Coulomb barrier higher. The effect is similar to shifting the mass-scale of stars up or down as some increasing function of the change in alpha. The nuclear reactions take more energy to cause and release less energy, but the latter effect is actually very slight for even sizable alpha changes (for H->He fusion, anyway -- there, doubling alpha reduces the He nuclear binding energy by only about 5%). The main consequential change is a lower reaction rate for a given temperature, pressure, and density.
Fission reactions are not very sensitive to alpha, as they depend solely on binding energy per nucleon, which as we saw isn't very sensitive to alpha. Very big increases might noticeably increase the energy from uranium fission by making the uranium nucleus less stable, and humongous ones might spontaneously destroy all uranium. A fission reactor would change size, though, with effects similar to a change in neutron luminosity varying as α^2. Any decently self-regulating reactor wouldn't react to small perturbations, but large ones would shut it down (smaller alpha) or make it melt down or even explode (larger alpha) (assuming the alpha change's associated spring energy release somehow disappeared down a hole in the first law of thermodynamics instead of vaporizing it to QGP first).
Alpha changes larger than about 10 percent will have an additional, benign but highly visible, consequence: changes in the colors of butterfly wings. I kid you not.
As alpha changes, the time scale of EM-based structures (e.g. your eyes, photospheric plasma emitting light, etc.) and the energy scale of same vary with α^2 (energy up, frequency up, duration down). In particular, to an observer the frequency spectrum of light from 5500 K plasma doesn't change. However, the wavelength does. The observer changes size with α^-1, so the observed product of wavelength and frequency, and thus speed, of any wave based on EM phenomena (so light, water waves, sound waves) is proportional to α. From the observer's perspective, in particular, the speed of light changes with α and the wavelength of a fixed frequency of light changes with it. Light absorption by pigments, including in green leaves and in the retina's green cone cells, is governed by energy, and therefore by frequency; to the observer, then, green leaves are still green and the frequency of 520 nm green light is still ~10^14 s^-1. But the wavelength seems to be shifted to α*520 nm. The color of anything whose color depends on wavelength instead of on frequency therefore shifts, and because their colors come from diffraction gratings, that includes butterfly wings. The observer sees the diffraction gratings in the wings at their usual size, but the wavelength of green light as shifted; once the shift gets up to 10% that's a noticeable change in color (doubling/halving alpha would shift red all the way to blue, or vice versa, as the whole visible spectrum spans only about one factor of two in wavelength). (In unvarying units, the light wavelength is unchanged but the butterfly wing nanostructure changed size, with the same shift in its response to light.)
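In numbers, with 520 nm green as the reference wavelength:

```python
green = 520.0   # nm, reference green wavelength

shift_10pct = green * 1.10   # apparent wavelength after a 10% alpha increase
shift_double = green * 2.0   # after a full doubling of alpha

# 572 nm is noticeably yellower than 520 nm; 1040 nm is a factor-of-two
# shift, the full span of the visible spectrum.
print(f"+10%: {shift_10pct:.0f} nm, doubled: {shift_double:.0f} nm")
```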
Unfortunately, alpha changes slow enough for you or a butterfly to survive will take longer than your lifetime -- indeed, long enough for the butterfly to evolve in response, changing its nanostructures to maintain apparently-unchanged colors, assuming some selective pressure operates on those colors and is frequency-dependent (the visual pigments of the opposite-sex butterfly, or of a predator, would be -- though those responses could instead evolve to shift, within bounds set by the Sun's color, which would also change in the event of an alpha change that big, reddening for an increase and bluing for a decrease, but only with the fourth root of alpha).
Alpha changes much larger than that, e.g. doublings and more, begin to wreck the periodic table: besides strewing the energy levels on which chemistry depends all over the place, increases make successively lighter nuclei violently unstable, while decreases make successively heavier ones stable and allow more and more isotopes with fewer and fewer neutrons. Past a certain point with increasing alpha, metals are destroyed and fusion becomes impossible as even helium fusion consumes more energy than it releases. Decreasing alpha accelerates nuclear burning in stars, with possibly a fairly sharp breakpoint when 2He becomes sufficiently long-lived. At that point, proton-proton chain fusion becomes vastly faster and more efficient, and every main sequence star very rapidly burns through its fuel and explodes, dousing the universe in helium gas.
One last note on stellar physics. You might object to increasing alpha making stars smaller and fainter, on two grounds: one, higher temperatures are needed to make fusion work and prop the star up against gravity; and two, helium-burning stars -- which face double the Coulomb repulsion and release less energy per fusion -- are red giants, so shouldn't doubling alpha turn hydrogen-burning main-sequence stars into red giants, and smaller alpha increases cause correspondingly smaller luminosity and size increases?
1: No, because the mass isn't enough to support such high temperatures. The rate drops, the star shrinks, and gas pressure increases relative to radiation pressure as a mechanism of support. Past a certain point, the star couldn't exist (falls off the bottom of the shifted stellar mass ladder/main sequence) as the gas becomes degenerate before it fuses -- alpha increasing that much halts fusion, results in a core collapse and micro-nova, and leaves a helium white dwarf behind; larger stars go supernova immediately and leave neutron stars.
2: No, because what makes a red giant big and bright isn't the helium burning in the core (which can even switch off for long periods) but the continued hydrogen burning in a shell that is expanding towards the surface of the star. A pure-helium star would be small and faint compared to a normal main sequence star of the same mass, with about half the size and 1/8 the luminosity, similar to how the main sequence star would be if alpha were doubled.