# Misc. DIY: Simulate Atoms and Molecules

1. Jun 20, 2010

### andrewr

I would like to see how far I can get writing some novel & simplified simulator code for molecular modeling based on QM.

--- Background follows --- skip to the next post for the details of the project & starter QM question.

I am speaking from experience. Somehow, spending $1000 on a gamble just isn't appealing either. I have studied variational calculus since the 8th grade -- my original purpose in doing so was to understand Schrodinger's equation. I have a hard time believing what you say here as anything but double talk. If these methods worked perfectly or exactly as you seem to be selling them, and if they really included what I desire to simulate -- in the sense that I am proposing -- then they would definitively predict what happens in, say, Bell-inequality-based experiments. Considering that most of the physics community is still arguing over that, including ZPE (with known flaws) as one objection, and that no definitive set of experiments has yet buried the competition ... I see no point in arguing over what is better, or what is "perfect". I'm interested in an experimental simulation method; the one I have chosen is just different -- it is not likely perfect either -- but I would like to try it. Are you interested in actually addressing the thread questions I am proposing, or something else, now that I have cleared the air? If you have anything to add concerning how to compute how long an electron stays in an excited state, I'm interested. That is one of two issues which prevent me from finishing the code I already have; as I said at the start of the thread, I'd like to see how far I get -- and I mean after I know the answers to my actual questions. --Andrew.

Last edited by a moderator: May 4, 2017

7. Jun 21, 2010

### alxm

-I do deny sigma and pi bonds exist in molecules. Orbitals as well. These are all abstract concepts derived from various theoretical models, not real physical things.

-Force-field methodology doesn't have anything to do with 'sketching' anything. It's a semi-classical physical model, not a single scalar or vector field or anything of the sort.

-All DFT methods in practical use for molecules are more accurate than the Hartree-Fock method.
Obviously you don't know the relative size of dispersion effects if you think that would be a deal-breaker.

-Yes, QC methods were being used prior to electronic computers, in the late 20's and 30's. Newton didn't have any problems developing his algorithm for root-finding without one either.

-Computers WON'T 'do it for you'. That was the point I was trying to make. Most chemists in most situations do not use quantum chemical software at all. Most chemists don't know how to use quantum chemical software.

-You think it's reasonable to demand that they give you software that took them years to write, no strings attached? Crazy.

-The fact that a method is exact does not mean a given implementation of it is necessarily numerically stable in every case, or that code is bug free.

-The fact that a particular implementation might not work (perhaps due to user error), does not invalidate the theory behind it in the slightest.

-A quantum-chemistry program can't and won't tell you about the Bell inequality. It's an entirely different system and problem. They won't calculate scattering cross-sections or do your taxes, for that matter. Also, it's not the calculations themselves which are in dispute.

-It's "Heisenberg", not "burg" (since you seem to think spellings are of great importance).

-The Uncertainty Principle is not a simple experimental limit, it's a theoretical limit. This is explained in every textbook and in a thread in this forum once a month or so.

-You can produce _exact_ pictures of hydrogenic orbitals simply by plotting the spherical harmonics, since that's what they are. But they won't give you a good approximation of any multi-electron system.

-How to calculate lifetimes of excited states is also in every textbook. Fermi's Golden Rule, look it up.

-The Schrödinger equation does not have even a superficial resemblance to the equations of the Bohr-Sommerfeld models, in my opinion.
Beyond that, I won't address your 'independent research' since this forum has clear rules about that stuff. And that's pretty much all I have to say on the topic.

8. Jun 21, 2010

### edguy99

"Nobody yet has a flawless simulator...." Although I won't dispute this, I write animation software across timeframes/distance scales from microseconds/nanometers to show photon size and movement, attoseconds/femtometers for electron motion, and yoctoseconds/yoctometers for nuclear decay of neutrons under a collection of different physical forces. You may require several animations specifically to account for distance and time (large scale and/or small scale). Have you given any thought to distance and time scales for your simulation? Some view this type of animation as "speculative" and do not like the discussion here, but I would be happy to discuss it if you would like to send me a message or a sample picture of what you have in mind. Regards

9. Jun 22, 2010

### andrewr

Hi; I think the photon scale and the electron scale are interchanged? I was planning on using my electronics-simulator base as-is for the project at first -- and modifying it later for improvements. There is a saying: "a bird in the hand is worth two in the bush." The electronics simulator auto-adjusts time-steps depending on interaction values -- that is, the computed time-steps become larger as the path(s) become more predictable / repetitive of past history -- and smaller where a small change can cause a large instability. The algorithm is complicated but, like a video-game drawing algorithm, is carefully optimized in the inner loops. Roughly speaking, it allows me to simulate around 1000+ items in a field, and easily 100,000+ nodes in electronic circuitry, on a modest single-core Celeron processor (SSE2/SIMD) -- although the computation slows exponentially with *randomness (non-repetitive change), and conversely accelerates with the repetition of similar events/changes.
(*randomness within a steady state is not what I mean.)

The details of the algorithm are beyond the scope of quantum physics, obviously, and are a programming/data-structures nightmare; but it's extremely powerful and fast compared to alternative solutions used in the electronics-simulator industry (eg: typically Laplace transformations). At heart, I have opted to compute quickly -- and use a statistical method to do quality control. If you are familiar with industrial process control -- the method is essentially the same kind of thing. In the electronics realm, I specify the time-step I wish to output images/plots at, and the simulator discards all intermediate states between frames to be drawn. This eliminates the highly variable time-step plot problems required for efficient & accurate calculation of events. In electronic simulation, the time-step typically varies from milliseconds down to picoseconds. A crude estimate for (super)atomic phenomena is in the range of milliseconds down to sub-attosecond, 0.01 atto (YUCK).

I haven't given much thought to the time-steps I will output -- outputting every subtle change doesn't make a whole lot of sense, as a large number of small changes are (typically) required to make a long-term, macroscopically noticeable change. I plan, for right now, on just hand-setting the time-steps, as I do in electronics simulation, for different regions of interest. Eg: large time-steps which allow me to ignore the development of the initial state (typical state) of the system -- but still do a sanity check. And then focus the time-step down in regions where interesting things are happening that I am trying to understand better. Even though time is involved in my simulation, I was not planning on making time-evolved movies in the end.
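The scheme described above -- error-driven step-size control plus discarding intermediate states between plot frames -- can be sketched in toy form. This is emphatically not the author's algorithm; it is a minimal step-doubling controller for a harmonic oscillator, with all names and tolerances invented for illustration:

```python
import math

def simulate(t_end=10.0, plot_dt=0.5, tol=1e-6):
    """Toy adaptive-step integrator for x'' = -x, keeping only states
    at plot_dt intervals (intermediate steps are discarded)."""
    def step(x, v, h):                      # one explicit-midpoint (RK2) step
        xm = x + 0.5 * h * v
        vm = v - 0.5 * h * x
        return x + h * vm, v - h * xm

    t, x, v, h = 0.0, 1.0, 0.0, 1e-3
    frames, next_plot = [], 0.0
    while t < t_end:
        x1, v1 = step(x, v, h)              # one full step...
        xa, va = step(x, v, 0.5 * h)        # ...versus two half steps
        x2, v2 = step(xa, va, 0.5 * h)
        err = abs(x1 - x2) + abs(v1 - v2)   # step-doubling error estimate
        if err > tol:                       # too inaccurate: shrink and retry
            h *= 0.5
            continue
        t, x, v = t + h, x2, v2
        if err < tol / 8:                   # very smooth region: grow the step
            h *= 2.0
        if t >= next_plot:                  # record only at the plot interval
            frames.append((t, x))
            next_plot += plot_dt
    return frames

frames = simulate()
```

The exact solution is x = cos(t), so each recorded frame can be checked against it; the controller takes large steps where the trajectory is smooth and never emits the intermediate states it computed.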
The reason for this is twofold:

1) The simulator itself determines how to focus CPU power computationally on different regions of space; this is what allows the simulation to proceed at much quicker rates than would be possible if all points were computed at the same time-step everywhere in parallel. Items of circuitry / space which are sufficiently decoupled to introduce only small errors by reducing the update rate of *changes* in interaction become time-wasters each time a displaying of state occurs. Of course, blindly printing every time-step automatically chosen by the simulator is the absolute worst time-waster, eg: 1e6+ times slower....

2) In electronics, the time-evolving waveforms are often important; so much so that they become unintelligible if the time-step varies, so I am forced to plot at a maximum & fixed time-step rate if I wish to easily extract information about the evolution of the waveforms in time -- so that is the way my simulator works now. To keep the CPU time down, when the simulator has to reduce the time-step for accurate results, intermediate steps are discarded from the plot to save time.

To keep the problem tractable once I get to large (65-100 atom) systems, I plan on having the simulator ultimately record changes in state and not changes in time; and ultimately, I would like to be able to specify which states are of interest, to reduce the data even more. I do have open questions about how much information to save at each plot point; but I can't solve those in my head -- I have to actually experiment to see what is possible. The space is, unfortunately, limited by the nature of the simulation, cache memory, main memory, etc. I won't be able to simulate more than a few cubic microns of space -- and only a 1000-2000 or so electrons/protons/etc. within that space. (Presuming the questions I am trying to get answered do not significantly degrade the estimates....
engineering :) )

Why would some consider it "speculative"? Attosecond laser pulses are regularly used to excite atoms into the wave-packet superposition of states which mimics classical motion. That's done experimentally, and I wasn't aware that the people doing this were violating any principle of quantum physics. I know that when I first heard of the frequencies they were talking about, and "attosecond", the uncertainty principle crossed my mind. For the purposes of simulation, though, these small times replace the differential element in calculus -- and anyone who uses calculus to work with Schrodinger's equation is automatically guilty of using small times as well, for a mathematical purpose if not a physical one. hmmm..... Separation of church and state, or ought it be calculus and state... hmmm.... I wasn't planning to get into that. And from the other response I got, I need to correct the extra words being put on my lips... --Andrew.

10. Jun 23, 2010

### andrewr

Alxm,

I have tried a few times to pen a response, but I simply don't post, because the response is so much longer than your post. There are overlapping issues in what you are remarking about -- and clearly some confusion about what I actually said and what I appear to have said. So rather than reply head on, I hope I am able, in this way, to emulate your very compact response style, which I rather wish came naturally to me.

-How to calculate lifetimes of excited states is also in every textbook. Fermi's Golden Rule, look it up.

I use Fermi's rule for understanding stimulated emission events. If what you are saying isn't nuts (and I do admit you have an IQ) then it appears to imply that there are no such things as truly "spontaneous" emissions. If that's the case -- my simulator already takes care of the issue, and I am wasting time asking about it. If your comment is a mistake, let me know -- otherwise I need to change the question.
-Beyond that, I won't address your 'independent research' since this forum has clear rules about that stuff. And that's pretty much all I have to say on the topic.

-The Uncertainty Principle is not a simple experimental limit, it's a theoretical limit. This is explained in every textbook and in a thread in this forum once a month or so.

-A quantum-chemistry program can't and won't tell you about the Bell inequality. It's an entirely different system and problem. They won't calculate scattering cross-sections or do your taxes, for that matter. Also, it's not the calculations themselves which are in dispute.

If that is all you have to say, then the thread won't be cluttered up with more of this. I haven't taken sides in any absolute way on controversial issues. In the last (and only other) thread I ever posted on these forums -- it took the science advisor several pages to actually come up with an answer that made any sense. I still shake my head that he could have possibly missed that using a 1024+ digit calculator means one is serious about verifying something, and not just 'estimating'. Your response gives me hope that you have more awareness than average. Beyond that, I will simply remark that the research I have cited thus far in the thread -- isn't mine. Secondly, it was searches on the physics forums, and recommendations of "science advisors" and, more important, "physics mentors", which led me to follow the links to these controversial issues in the first place.

* up

-Yes, QC methods were being used prior to electronic computers, in the late 20's and 30's. Newton didn't have any problems developing his algorithm for root-finding without one either.

-Computers WON'T 'do it for you'. That was the point I was trying to make. Most chemists in most situations do not use quantum chemical software at all. Most chemists don't know how to use quantum chemical software.
-The fact that a method is exact does not mean a given implementation of it is necessarily numerically stable in every case, or that code is bug free.

* same

-You can produce _exact_ pictures of hydrogenic orbitals simply by plotting the spherical harmonics, since that's what they are. But they won't give you a good approximation of any multi-electron system.

-The Uncertainty Principle is not a simple experimental limit, it's a theoretical limit. This is explained in every textbook and in a thread in this forum once a month or so.

My respect for you went up a notch, or stayed the same, with each of these comments. I agree with them, and always did, although that may not be obvious at first reading of my past comments. They do not affect why I am doing anything, though.

-The fact that a particular implementation might not work (perhaps due to user error), does not invalidate the theory behind it in the slightest.

-I do deny sigma and pi bonds exist in molecules. Orbitals as well. These are all abstract concepts derived from various theoretical models, not real physical things.

-Force-field methodology doesn't have anything to do with 'sketching' anything. It's a semi-classical physical model, not a single scalar or vector field or anything of the sort.

Gee. In one line: my "hand" is also an abstract concept built upon sigma and pi abstract concepts, and I can draw my hand with my hand, which does not invalidate my theory of what a hand or a sigma or pi bond is -- in the slightest.

-All DFT methods in practical use for molecules are more accurate than the Hartree-Fock method. Obviously you don't know the relative size of dispersion effects if you think that would be a deal-breaker.

-The Schrödinger equation does not have even a superficial resemblance to the equations of the Bohr-Sommerfeld models, in my opinion.

OK. You're entitled to your opinion, even when it has almost nothing to do with what I said.
Sommerfeld's modification is not Bohr -- and De Broglie only interpreted, but did not change, Bohr's theory. Nor can I prove or falsify what you said -- so I'll join it: since I don't know the cause of what I witnessed -- better safe to include dispersion than sorry I didn't. BTW: The experiment I am trying to understand has nothing to do with cold fusion or Blacklight Power, etc. It is purely related to the change in economics over the last 100 years as to what processes are economically viable and environmentally friendly.

-It's "Heisenberg" not "burg" (since you seem to think spellings are of great importance)

! Then you admit we are on par, rather than you being so much more superior than I am?! Wow. I'm flattered.

-You think it's reasonable to demand that they give you software that took them years to write, no strings attached? Crazy.

! Touche! LOL: Say that again with a straight face after buying $50K software to solve a problem which you have only a week to solve before your competitors take your client away; then discover that there are bugs in the software you have purchased, and after you hold hands with tech support to solve their mistake for free (which only they can actually fix in the code) -- and after they turn around and sell your fixes to your competitor and refuse to pay you a dime. (And that is what the nicer of the companies will do. Murphy was an optimist.)

I am laughing at the notion that you might think you will convince anyone that *refusing* to spend money is crazy, or that a refusal to buy is equivalent to *DEMAND*ing something for nothing???

By the way, thank you for giving *some* of your time away free. I am thankful to Dr. Young (no, the name is not a 'simple' coincidence) for selling me a college physics education, and for allowing me to use the knowledge after class as public domain -- including his comments.

If one (esp. you) is crazy for giving away, or even thinks so -- I have the names of some very good psychiatrists with multiple degrees and understanding of other subjects. They do interview before accepting clients, and not many get accepted, but you might get accepted if you are lucky.

11. Jun 24, 2010

### edguy99

You could be right, I'll try again (feel free to correct):

1. Units: femtoseconds (10^-15 s) and nanometers (10^-9 m), to show photon size and movement -- with a viewing width of 10 micrometers and a step time of 10 femtoseconds, a 600 nanometer photon will bounce back and forth on your screen at a comfortable rate when viewed at 30 frames per second. Electrons and protons will not show any motion.

2. Units: attoseconds (10^-18 s) and femtometers (10^-15 m), for electron motion -- with a viewing width of 10 angstroms and a step time of 10 attoseconds, an electron with 10 eV of kinetic energy will move through your screen at a comfortable rate of about 2 picometers/attosecond; photons travel too fast to be seen, at 300 picometers/attosecond, and protons (heavier) don't really move at all over this timeframe.

3. Units: yoctoseconds (10^-24 s) and attometers (10^-18 m), for nuclear decay of neutrons -- with a viewing width of 10 femtometers and a step time of 1 yoctosecond, a down quark changes to an up quark and emits an electron and an anti-electron neutrino. The W boson would be visible for less than a yoctosecond, and the neutrino would float off at a comfortable pace.
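The speeds quoted in the list above can be sanity-checked with a few lines of constants and unit conversions (nothing here comes from the thread beyond the 10 eV figure; the non-relativistic formula is fine at this energy):

```python
import math

c = 2.998e8                   # speed of light, m/s
m_e = 9.109e-31               # electron mass, kg
eV = 1.602e-19                # joules per electron-volt

# 10 eV electron: v = sqrt(2*E/m), then convert m/s to pm per attosecond
v_e = math.sqrt(2 * 10 * eV / m_e)         # about 1.9e6 m/s
pm_per_as_electron = v_e * 1e-18 / 1e-12   # ~1.9 pm/as, i.e. "about 2"
pm_per_as_photon = c * 1e-18 / 1e-12       # ~300 pm/as, as stated
```

Both figures in item 2 check out: roughly 2 pm/as for the electron and 300 pm/as for light.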

You mention 3 animations: Bohr magneton, orbital transitions, and the Stern-Gerlach experiment.

I will give some thought to time and distance scales and post later. You may well have something in mind already; if so, please post. With regard to changing timescales: I don't see how this is done accurately, as certainly the locations of particles will not change when you change the timescale, but the momentum and momentum direction do change, and I don't see how they can be recalculated without going to an n-body problem... I find it saves time to do 2 or more animations, one at the slow rate and a different one at the fast rate.

12. Jun 25, 2010

### andrewr

mmmm... much better.
At this resolution, in free space (c = 3.0x10^8 m/s), the wavefront will step a distance of 300 nm per femtosecond. If the screen distance is 10 μm, that means steps = 10/0.3 = 33.3 frames.
So it takes around a second to cross the screen at 30 frames per second; if a wavefront simulates in dispersive media with a slower rate of travel, this would indeed be comfortable.
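The arithmetic above, spelled out (assuming, as the post does, one 1 fs simulation step per displayed frame and a 30 frames-per-second display):

```python
c = 3.0e8                     # m/s, the rounded value used in the post
step = c * 1e-15              # wavefront advance per 1 fs frame: 3.0e-7 m = 300 nm
frames = 10e-6 / step         # frames to cross a 10 micrometer screen: ~33.3
seconds = frames / 30.0       # wall-clock time at 30 frames/s: ~1.1 s
```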

#2 I will just let sit, #3 -- wow. At least I don't think I will have to worry about that time-reference...

13. Jun 25, 2010

### edguy99

A couple of questions and comments.

Regarding the Stern Gerlach experiment:

If you wanted to do an animation with a viewing width of 10 meters (to show the equipment), what time-step would show electrons moving across your screen at a reasonable speed? I.e., if I had a laser gun pointed over your shoulder checking the electrons' speed, how fast would they be going?

Clearly the spin-up electrons move one way and the spin-down electrons move the other way, based on the magnetic field, but I don't think they all move "exactly" the same. Is there research that shows the pattern of the distribution of electrons that show up over time, or do they indeed all move the same up or down amount?

Regarding orbital transitions:

I think you may need to consider "units: attoseconds (10^-18 s) and femtometers (10^-15 m), for electron motion -- with a viewing width of 10 angstroms and a step time of 10 attoseconds, an electron with 10 eV of kinetic energy will move through your screen at a comfortable rate of about 2 picometers/attosecond; photons travel too fast to be seen, at 300 picometers/attosecond, and protons (heavier) don't really move at all over this timeframe."

A viewing width of 10 angstroms produces the kind of picture you would see of "real atoms", like the IBM pictures from scanning tunneling microscopes; what you don't get is the timescale. A timescale of femtoseconds works well to view things like proton/nucleon vibrations, but even low-energy "unbounded" electrons are moving way too fast to see. A timescale of attoseconds allows you to build your electron probability clouds for the frame and deal with a free electron floating by that you know, because of its momentum, will be somewhere else in the next few frames. This timescale has the same problem you talked about, where photons will seem to appear out of nowhere.

Also,

I would appreciate it if you could expand on this comment.

Regards

14. Jun 27, 2010

### andrewr

OK. (SG henceforth in my notes.)

I intend to mimic the MIT reproduction of the experiment using hydrogen, either accepting the factor-of-40 mass change, and therefore the expected 40x scale change -- or artificially increasing the proton mass 40x to simulate a fake "potassium" atom in transit as hydrogen. I haven't calculated the speeds, but the transit time and dimensions are essentially fixed by the experiment. If I modify them arbitrarily -- then verification of the simulation will be difficult at best. If I were to animate a cross-section of the experiment -- the atom would be vanishingly small. I was thinking to simply collect the [x,z] data points at the end of many experiments, for a statistical sample equivalent to the SG photograph.
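Collecting endpoint [x,z] samples over many runs might look like the following toy sketch. Every distribution and number here is an invented placeholder, not simulation output -- it only illustrates accumulating a statistical sample analogous to the SG photograph:

```python
import random

def endpoint():
    """One hypothetical run of the apparatus (all parameters invented)."""
    spin = random.choice((+1, -1))           # up or down deflection branch
    x = random.gauss(0.0, 0.05)              # spread along the slit direction
    z = spin * 1.0 + random.gauss(0.0, 0.1)  # mean deflection plus residual width
    return x, z

hits = [endpoint() for _ in range(10000)]    # the accumulated sample
ups = sum(1 for _, z in hits if z > 0)       # roughly half deflect upward
```

A 2D histogram of `hits` would then play the role of the developed photograph when comparing against the real plates.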

Note the MIT experiment uses a slightly different shape of electromagnet than SG. I believe SG's is more of a triangular wedge near a half-circular socket. MIT's is on p. 17, found from a Google search of "Lab 18, MIT, Stern Gerlach":

web.mit.edu/8.13/www/JLExperiments/JLExp_18.pdf

I'm going to answer these next two questions out of order, as I think I may be clearer that way.
I stated the idea poorly. I am less certain of the shape/distribution of the EM field as one approaches an electron's vicinity. An electron is a dynamic magnetic dipole -- for there are no monopoles of magnetic "charge" -- and thus the field changes around the electron as it "moves";
(Despite Wikipedia's present fictional monopole account... which is "mathematically" equivalent, supposedly -- but whoever wrote that makes me groan....)
A dipole's detectable field scales as r^-3, where r is somewhat ill-defined, but as one gets farther away from the source(s), the error in r from variations in dipole shape contributes less and less to the overall values. I think (but am not absolutely certain) that much of the geometric distortion of a dipole is primarily in higher terms (r^-4, etc.).
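The r^-3 falloff mentioned above is easy to exhibit for an ideal point dipole using the textbook on-axis formula; the nanometer distances below are arbitrary, and the Bohr magneton is used only as a representative moment:

```python
import math

mu0 = 4 * math.pi * 1e-7      # vacuum permeability, T*m/A
mu_B = 9.274e-24              # Bohr magneton, A*m^2 (representative moment)

def b_on_axis(r, m=mu_B):
    """On-axis field magnitude of an ideal point dipole: B = mu0*2m/(4*pi*r^3)."""
    return mu0 * 2 * m / (4 * math.pi * r**3)

# doubling the distance cuts the field by exactly 2**3 = 8 for an ideal dipole;
# a real, finite dipole deviates from this through the higher-order (r^-4...) terms
ratio = b_on_axis(1e-9) / b_on_axis(2e-9)
```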

The primary meaning of my comment is that there are several unknowns affecting the field of an electron which become more important as one computes closer to its location. These unknowns are not necessarily a result of the Heisenberg uncertainty principle.

In general, the idea that an electron is a physical particle capable of "spinning" on an axis is discouraged -- and although I am not certain of what experiment(s) disprove the idea altogether -- I seem to remember someone mentioning historical experiments that disprove any "radius" being assigned to an electron (whether detected or not...). But if an electron is not a particle with spin, then it must still at least be a distribution of electric field in space and time; the latter idea is true whether or not an electron is a physical particle.

In SG, the developed photograph of the silver atoms which hit the target is quite distinct. So much so that there is little variation from electron to electron (and perhaps even less spread would be realized if the experiment were improved.)

See p. 56 for a photo of the actual silver atom pattern on the film target.
http://www.owlnet.rice.edu/~hpu/courses/Phys521_source/Stern-Gerlach.pdf

A couple of things to note about the photo.

The "ideal" silver dipole would be located at the center (x axis in the photo) for one launched non-skew into the magnetic field. It is precisely here that a spike develops in the x direction -- that spike is generally reasoned to have something to do with the non-symmetry of the pole faces of the magnet, although no explanation I have read is really satisfying, since a mathematical analysis of SG's actual magnet has not been done. The other side of the plot is flattened, and that is typically described as being caused by the silver atoms physically hitting the rounded part of the magnet cavity.

I didn't see a plot for the MIT version of the experiment, which uses a magnet shape which ought not have that spike either. The only obvious clue pointing to the anomalous nature of the spike is that it is asymmetrical -- the x axis was oriented vertically in the actual experiment, so the spike is even against gravity, if I recall correctly. But if the explanation of the other side's thinness is truly physical clipping, then there is no way to be certain the spike would not have shown up experimentally, due to the field shape, had the atoms not hit the surface of the magnet. One really needs a predictable *linear* variation in magnetic field strength over space to be able to effectively correct for physical manufacturing limitations of the slits, etc., and SG doesn't really have that.

Comparing the demagnetized plate to the magnetized one, there is a fairly strong indication that the plates were sent on the postcard willy-nilly, with no definite orientation. (The mirror-image reticule pattern on the right image reinforces that notion...) The demagnetized version ought to be a purely flat line -- but isn't. In the lower half of the line a clear broadening of the pattern can be seen -- which, if the magnet is truly off, can only be attributed to the shape of the slits letting the silver out -- or to the position of the silver oven behind the slits biasing the source statistically.
In any event, on careful inspection I believe I see trace widening on the upper half of the pattern in the magnetized view, suggesting that the flat-line slide is actually rotated 180 degrees from how it was taken in the other photo, with an optional mirroring on top of that...

Using that information as a corrective measure, and knowing that the slit is horizontal, one can estimate the amount of classical skew in trajectory a silver atom would have (incoming y and x angles with respect to the demagnetized photo, in contradistinction to a perfect 90-degree angle); and since the size of the slit is likely large compared to the wavelength of the silver atom, these classical trajectories ought to be fairly accurate. Each point (y on the photo), then, has a definite angle (source dx/dz, dy/dz) from which the silver atom could have come, and the effect of this distribution needs to be calculated and compensated for, to determine what an idealized simulation ought to look like.

Generally, there ought to be fewer angles of approach as one gets closer to the edges of the slit (y on the photo), although dx/dz will be fairly constant across the whole slit except where the demagnetized photo shows thickening.

So, I think the most accurate part of the magnetized photo will be the lower right quadrant, taking the approximate center of symmetry of the shape as the origin.
Using a straight edge: the line widening, as one gets closer to y=0, increases very linearly (eg: right-most trace / lower right quadrant) until extremely close to the magnet center, where the field of the magnet becomes less known / near the surface of the magnet. So, there are two (ideally) linear effects with approach toward the center -- 1) the width of the trace, and 2) the offset of the trace from the picture's y axis.

The offset is explained by the experiment itself -- eg: the proportionality between the magnetic-field-intensity gradient and the x offset in the photo varies approximately linearly in SG (and linearly in the MIT version, according to the MIT text). The widening of the trace, however, is something I don't know how it theoretically ought to proceed, and is caused by "all other differences" in the electron, including any Heisenberg and QM interference effects.

If only the electron's position [delta x, delta y] were affected -- I could more easily separate out what causes the thickening -- but as there is also skew, the problem isn't solvable qualitatively in my head. The variation in trace width, then, is something I don't know whether it would replicate in an idealized experiment or not, and which I will need to work out before verifying my simulation against SG.

If I am able to correct (reduce data) in the photo by mathematically modeling the skew, other features might become visible which presently are not. But to really do this correctly, the best source of information would be a run of the MIT experiment with computer-usable data points to operate on. I'll have to look around and see if anyone has posted a good run, and if not, perhaps I will put some thought into re-constructing the experiment. I have the vacuum equipment, some very pure micro-crystalline-level homogeneous iron, and the machining equipment to re-construct the magnet, along with motorized micrometers (robotic) -- but that would take quite a bit of time and effort for me to put together in my present state.... If anyone knows of an online data source, I would appreciate it.

I'll probably post a bit more tomorrow... I am still trying to resolve a second question(/s) concerning the time/intensity distribution that an electron dipole has, by focusing on what a quantized magnetic moment actually means, both classically (the limit) and quantum mechanically. I will need to think out loud, and perhaps will make some mistakes when doing so -- but hopefully the sharp eyes of others will be helpful there, and I can get enough of an answer to my second question(/s) to model it reasonably well for simulation purposes.

We'll just have to see what works once I get there.... but I can certainly try it.

--Andrew.

15. Jun 27, 2010

### edguy99

Thank you for the post and link: web.mit.edu/8.13/www/JLExperiments/JLExp_18.pdf; this helps the discussion a lot. From the figures, it appears they shoot potassium atoms through about a 0.5 cm channel of a 10 cm long magnet and check if they hit a 4 mm palladium wire that is moved around. An animation width of 1 meter sounds good, but I remain stuck on the timeframe that would show, on your computer screen, atoms emerging from the oven (say one per quarter second) and hitting the wire within a second or two...

The speed of the potassium atom appears to depend on the oven temperature, which drives the vapor pressure, which determines the speed of the atom. I have assumed here that the speed is important, since a slow potassium atom spends a longer time under the magnet than a fast atom, and hence would be bent more. This would seem to suggest that even a small adjustment in temperature would cause the atoms to either spread out too much and be undetectable, or not spread out at all. Perhaps I miss something there, or maybe the temperature at 190 degrees produces a consistent enough speed that you do not see the effect. Or something else...
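As a rough check on this temperature-sensitivity worry, here is a hedged back-of-envelope sketch. The field-gradient value is an invented placeholder (the actual gradient is not quoted in this thread), but the conclusion does not depend on it: since v^2 is proportional to T, the deflection scales as 1/T, so a 10 K change in a ~463 K oven moves the deflection by only about 2%:

```python
import math

kB = 1.381e-23                # Boltzmann constant, J/K
m_K = 39 * 1.661e-27          # potassium-39 mass, kg
mu_B = 9.274e-24              # Bohr magneton, J/T
dB_dz = 200.0                 # field gradient, T/m -- an assumed placeholder
L = 0.10                      # magnet length, m (10 cm, per the MIT figure)

def deflection(T):
    """Classical deflection acquired inside the magnet for an atom
    at the characteristic thermal speed v = sqrt(2*kB*T/m)."""
    v = math.sqrt(2 * kB * T / m_K)
    return mu_B * dB_dz * L**2 / (2 * m_K * v**2)

z_190 = deflection(463.0)     # 190 C oven
z_200 = deflection(473.0)     # 200 C oven
rel_change = (z_190 - z_200) / z_190   # ~2% for a 10 K temperature change
```

So a small temperature drift shifts the pattern only modestly, though the thermal *spread* of speeds around the mean still smears the trace.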

I agree completely. There are at least 2 major problems: the size you mention -- I believe radius calculations for the mass involved come out way too high (300 femtometers vs. a Lorentz radius of 3 femtometers) -- and also, I think, the fact that the electron only comes back to itself after being flipped over 720 degrees, while a spinning disk repeats after only 360 degrees.

Anyway, thank you for the info. If I can nail the speed of the atoms, I visualize a pretty nice first animation showing an oven (with a setting for temperature) blasting out potassium atoms at a particular speed and rate, and over a timespan (microseconds?) you would see atoms hit a wire after being bent by a settable magnetic field, with the proper bending of up and down electrons. The nice thing about starting the animation at this level of time and space magnification is that it's pretty clear what happens at this scale, and there is no need to probe the details of what the atoms actually do or look like. Probing a more detailed time or distance scale will be more difficult.

16. Jun 30, 2010

### andrewr

Without checking, that sounds correct from what I remember.

I agree.
The experiment requires the student to wait for the temperature to stabilize for a long period of time, otherwise the data will be in error -- so I am fairly sure that small variations in temperature affect speed significantly.

The divergence of the magnetic field is what causes a vertical accelerating force to appear on the magnetic dipole in the first place, and divergence is independent of speed. The traveling speed of the dipole, then, has nothing to do with vertical acceleration -- eg: the overall charge is neutral, and the + and - are both moving in the same direction so even Hall effect does not occur. However, the + and - charge would want to be deflected in opposite directions horizontally in proportion to the speed with which the atom travels through the B field, and I am sure (classically) that would cause the unpaired electron and remaining nuclear charge to align on a horizontal line with respect to each other and in a plane 90 degrees to the magnetic field direction.

There is, then, going to be a vertical force vector independent of speed, and a horizontal one which is counterbalanced by electrostatic attraction. Since deflection is a double integral of acceleration (on average ½(f/m)t²), it depends on the square of the time spent in the magnetic field. Even at a constant temperature, speeds will vary statistically. These variations will add to the trace-width spread in the photos shown, as will a (small) skew angle. The lab sheet may contain enough information to compute the speed variation, and one can compute a window of transit time within which, say, 99.8% of atoms will fall; the spread of the atoms (not including skew angle) due to this variation in time is then a simple function of the total deflection -- since deflection goes as t², a small fractional error in time roughly doubles into the fractional error in y deflection.
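As a rough sketch of the scaling (the field gradient and atom speed below are assumed round numbers for illustration, not values taken from the lab sheet):

```python
import math

# Hypothetical numbers for illustration only:
mu_B = 9.274e-24            # Bohr magneton, J/T
dBdz = 200.0                # field gradient, T/m (assumed)
m_K  = 39.1 * 1.6605e-27    # mass of a potassium atom, kg
L    = 0.10                 # magnet length, m (from the MIT figures)
v    = 580.0                # atom speed, m/s (assumed)

t = L / v                   # time spent in the field
a = mu_B * dBdz / m_K       # vertical acceleration on the dipole
z = 0.5 * a * t**2          # deflection at the magnet exit, m

# Since z grows as t**2, a 5% fractional error in transit time
# becomes roughly a 10% fractional error in deflection:
dt_frac = 0.05
dz_frac = (1 + dt_frac)**2 - 1
print(z, dz_frac)
```

With these assumed numbers the deflection comes out in the fraction-of-a-millimeter range, which is at least the right ballpark for the traces in the photos.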

You'll have to excuse my ignorance of vocabulary; as a BSEE, I was taught primarily about semiconductors, which is a bulk phenomenon. Different terminology is employed, perhaps somewhat simplified, because the dominant effect is not always the same as would be the case with discrete items. When you refer to a "Lorentz radius", are you speaking about a contraction in length due to motion of some kind? And regardless of that, what calculations did you do to arrive at those particular numbers?

You're welcome. And thanks for just speaking about the subject in general; in your own way you have brought up something I did not think about regarding this experiment -- and that is the fact that the scale must exceed the few cubic microns which I can simulate realistically. In order to simulate the MIT version of SG, I will have to find a way to delete nodes from one side of the space I am simulating and add those nodes to the other side so as to keep the moving atom inside a window of simulation space. (Not to mention, computing a new value for the moved nodes...)
That will, unfortunately, complicate the simulator enough to seriously increase the time it takes me to implement it as my present simulator does not have that capability. :yuck:

+1 to the to-do list.

More will be coming about the "spin" of an electron when I have a chance this evening, along with Q/Qs that I have about how to work with it -- and a short summary of what I am presently considering. (Next steps.)

17. Jul 1, 2010

### andrewr

OK. Fell asleep on vacation ... and fell asleep last night early.... and this morning again .. but I'm back.

A brief review of the ideas I need to build the simulator:

So far I have spoken about orbital transitions -- which have nothing to do with *where* an electron (or other item) is at the instant before the state change and after. Only the probability of where it is found in the future can change -- without the "actual" position changing, for a single run of the simulation, at the instant of the state change. So, one may think of a simulated orbital change (state change) as a change in the statistical rule about where the item will diffuse in the future.

alxm ties "spontaneous" decay of excited-state orbitals to a "disturbance" in the EM field by invoking Fermi's golden rule. This idea is new to me, for in semiconductor physics the mechanism of spontaneous decay is *NOT* explained; rather, it is pragmatically considered an empirical property of the semiconductor which is modified by disturbances in the EM field (e.g. in a perfect crystal it would be empirical -- but in a doped one, the lifetime of an excited state is the empirical value *decreased* by the probability that a fermion will diffuse into a defect in the crystal, causing a stimulated decay of the excited state). Since no one has contradicted alxm, his opinion stands as the method I will attempt to use for simulation.

The remaining issue I have to deal with is the "spin" state of an item. This state is what causes an "anomaly" in emission spectra similar to fine structure (I haven't studied this, so I am speaking in general), based on the magnetic state of individual protons in the nucleus coupled to the magnetic state of the light-emitting electron. The anomaly is extremely small, since a magneton scales inversely with the mass of the item -- and protons, being relatively heavy, have ~1/1836 times the effect of an electron.

I have no engineering text describing the exact nature of various experiments about the Bohr magneton, and guidance/input from those who might know of experiments/articles to review would be appreciated -- but I do know that the spin of an electron has a mechanical moment, from an experiment done by Einstein and de Haas, and I do know that the orientation of the magnetic dipole in space can change.

Since classical angular momentum is found experimentally when dealing with the Bohr magneton -- along with non-classical quantum effects -- I am simply going to write down what I think I know -- and hope for correction or at least suggestions of what I might be over-simplifying if anything.

A spinning top (mechanical moment/dipole) which has torque applied to re-orient its axis of rotation will resist re-orientation by precessing about that axis. Gyroscopic formulas tend to focus on the period of precession -- and I am unaware of how one calculates how much a top will reorient its axis of rotation with respect to time and torque, nor whether such a reorientation is even possible with a frictionless device...

I assume reorientation is possible even in a frictionless environment, since permanent magnetism in non-superconductor material is (as far as I know) purely a spin-based phenomenon, and there is no friction that I am aware of for a spinning electron (there are no electrons that *DON'T* "spin"). I suppose it is possible that quasi-Kepler motion of an electron around an atom (P orbital, etc.) might also contribute to the magnetic field, but everything I was taught in chemistry indicates that unpaired electrons are the sole cause of magnetism. So I am assuming that is the case for the moment. (Pun intended.)

Also, I have seen the "orientation" of spin as being what supposedly changes in the NMRI/MRI effect.
I have not seen any mathematics which explicitly shows how, QM-wise, engineers came to predict that the magnetic moment would "FLIP" when appropriate resonance radiation was applied to the protons in hydrogen (the typical NMRI target), although it does make sense that the energy required is proportional to a statically applied magnetic field. The energy required to reorient the magnetic dipole is readily computable from analogous situations in DC/AC motors used for power generation: the magnetic dipole rotor requires work to flip its orientation while in the magnetic field of the motor's stator.

I am presuming that a rule akin to Fermi's golden rule needs to be used in this case (spin flip) -- but I have not come across in commonly available literature how NMRI is deduced, nor how to handle the QM states.

Any pointers/example calculations which could be applied in the case of a non-static magnetic field would be appreciated, as large uniform magnetic fields are not what will be found in molecular simulation -- but rather weak dipole moments from various items located near the "electron" or other simulated item of interest which dynamically can re-orient.

A second issue that comes to mind, and which may or may not affect the discussion, is that Ampere's law and other effects computed using calculus assume a uniform magnetic field when taking the limit as area goes to zero (e.g. differential areas or volumes... whose consistent existence is questionable given the Heisenberg concept...). A magnetic dipole's strength is measured as current times the area enclosed by the current loop: m = I x A. However, making a large rectangular planar loop of wire, applying a constant current to it, and then probing the field's strength with a compass has convinced me that the B field magnitude in the plane of the loop, and inside the loop, is stronger as one gets closer to the wire -- and weaker as one gets closer to the center of the loop.

When replacing distributed currents with individual electrons, protons, etc., the magnitude of the current is replaced by the velocity of the charged atomic item in relation to the path length required to circle an "enclosed" area. Therefore, the product of area enclosed and velocity of circulating charge (or E-field disturbance) is constant if one is given a constant magnetic moment (e.g. the identity of the Bohr magneton for an electron). So, the diffusion of the electron in space to create the magnetic moment must have an average value in an angular sense -- and because of inertial reference frames, helical paths must also be possible, where the path is closed only in a moving reference frame.

What I have stated here, is the sum total of what I am certain of regarding magnetic moments; discussion of how these ideas fit with QM/Schrodinger's would be greatly appreciated.

--Andrew.

Last edited: Jul 1, 2010
18. Jul 1, 2010

### edguy99

A quick sidenote, the June 10/2010 issue of Nature has a pretty good article:

"Electron localization following attosecond molecular photoionization", which starts with "For the past several decades, we have been able to probe the motion of atoms that is associated with chemical transformation and which occurs on the femtosecond (10^-15 s) timescale. However, studying the inner workings of atoms and molecules on the electronic timescale has become possible only with the recent development of isolated attosecond (10^-18 s) laser pulses."

The attosecond timescale will (I think) prove to be very important in the history of the electron.

Regarding:
The object you may be thinking of is a Bloch sphere; animations http://www.animatedphysics.com/videos/larmorfrequency.htm" [Broken]. Although not directly derived from general relativity, it does properly predict most (if not all) of the properties of the proton. Its important features are an axis that points in a particular direction and a spin direction.

and:
Proton MRI is generally modelled by the Bloch sphere; animations http://www.animatedphysics.com/videos/rabioscillations.htm" [Broken]. For elements with multiple protons and neutrons, you don't just have spin up/down, more commonly termed +1/2 or -1/2. The lithium-7 nucleus, for example, has 4 spin states, labeled +3/2, +1/2, -1/2 and -3/2. It does not matter so much what the picture looks like; it's just important that you label your objects with the right spin so you know how they react to the forces around them.

and:
The electron at this level is more difficult. As I understand it, the Bloch sphere does not work for it, as the magnetic moment is slightly over 2 times too big to work (it takes too much energy for its size to flip it), and there are many other problems. Electron flipping is the basis of atomic clocks, and I think many of the properties have come about to explain this.

The Lorentz radius is: "In simple terms, the classical electron radius is roughly the size the electron would need to have for its mass to be completely due to its electrostatic potential energy - not taking quantum mechanics into account." and comes out to 3 femtometers. I think that for this much mass in a 3 femtometer radius to carry the electron's angular momentum, it would have to be spinning at something like 100 times the speed of light. I.e., the electron looks too small to fit that much energy in.

You commented "A dipole's detectable field scales as r**-3, where r is somewhat ill defined" and "the product of area enclosed to velocity of circulating charge (or E-field disturbance), is then constant if one is given a constant magnetic moment "

An important structure to consider is the twistor (one animated http://www.animatedphysics.com/videos/electrons.htm" [Broken]; ignore the bottom row as it is a work in progress, and sometimes you have to click a couple of times to get them to run right). It has a defined axis and it has the 720-degree spin (watch carefully: as the blue dot goes through 360 degrees, the electron ends upside down and must go another 360 degrees to get right side up). It would pack a lot more energy in a lot less space, and has kind of a "weird" layout of its dipole field...

I have assumed spin flips do not apply to the SG experiment. There is no reason why the valence electron would not simply align itself with the external field and slowly pull the atom in one direction or the other, depending on whether it was up or down.

Don't know if this helps much, but I have to get back to trying to find the speed of the potassium atoms..

Last edited by a moderator: May 4, 2017
19. Jul 2, 2010

### andrewr

Three years ago I was speaking with a physics prof. about the attosecond timescale, funny how it is just "news" now....
With such short pulses, the wavelength/frequency is not very precise. Many experimenters seem to be trying to excite the electron into "wave packet" localized states for study at present. I hope your prediction holds well over time.

Thanks, I'll look at those -- the Bloch sphere is a state-space animation, not so much a physical-space animation, from what I gather so far. I expect to take a few days sorting through ideas to get a feel for what they are about, and also reviewing a book I bought on introductory QM (2nd ed., by David J. Griffiths) which my college uses in classes for physics majors (e.g. I bought it on a whim to help refresh my memory and extend my BSEE knowledge...).

OK.

Oh ... m = E/c**2.
But electrostatic potential energy? That would be the repulsion/attraction between charges; do you mean the energy inherent in a magnetic dipole moment, or am I overlooking something obvious?
(not that it matters since the result is impractical anyway.)

OK, I see. That's going to take a little time to sink in.

That's a problematic assumption;
The angle that the electron makes with respect to the magnetic field is arbitrary when ejected from the oven. The magnetic field, then, applies a torque to the magnetic dipole, attempting to align it with minimum energy in the field. The quantized spin must be "chosen"/"observed" by interaction with the magnetic field, and thus "choose" spin up or down. In essence, I envision that it must "FLIP" from a random analog angle to a quantized one, either aligned or anti-aligned (unless it happens to be aligned from the start, which is highly improbable).

look at a classic angular momentum demonstration:

In the classic example, when the person's hands apply a large amount of torque the gyro-wheel re-orients its axis of rotation quite rapidly -- however, when a smaller force is applied (eg: the free hanging state) the wheel will precess for quite some time with very minimal change in rotation axis.

In the SG experiment, one has to determine which behavior will result from the magnetic field (or to what degree the axis will rotate / flip ) from the initial condition which has the orientation of spin randomly and continuously assigned an angle relative to the magnetic field.

In the demonstration -- I am not entirely certain that the wheel would have flipped at all without friction reducing the speed of the wheel, however -- if friction is not the main player in the effect, then a Bohr magneton can also be expected to slowly orient itself toward the magnetic field with time in addition to precessing.

There is nothing stopping the "valence" electron from orienting itself relative to the nucleus very quickly -- however, during this orientation, I would not expect the rotational axis to change much. That is, the electron will translate easily relative to the nucleus -- but like a gyroscope, it will not change its rotation axis as it changes position relative to the nucleus.

It is a good start; thank you; though it does add questions...

BTW:
I think the average (non-relativistic) speed, due to temperature, is simply:
v = sqrt( 3*Kb*T / m )
Kb=Boltzmann constant.
T=temperature (kelvin)
m=mass of a Potassium atom.

I am not sure of the statistical spread (variation/deviation) of speed off the top of my head; but you can at least get an error slope %speed chg vs. temperature change (degree K or C) which would be useful.
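Plugging in numbers (a sketch only; the 190 degree oven temperature is taken from the posts above, assumed to be Celsius):

```python
import math

kB  = 1.380649e-23            # Boltzmann constant, J/K
m_K = 39.0983 * 1.66054e-27   # mass of one potassium atom, kg
T   = 190 + 273.15            # oven at 190 C, converted to kelvin

# RMS thermal speed, v = sqrt(3*Kb*T/m):
v_rms = math.sqrt(3 * kB * T / m_K)

# Error slope: since v goes as sqrt(T), dv/dT = v/(2T),
# i.e. the % speed change per degree of temperature drift:
dv_dT = v_rms / (2 * T)
print(v_rms, dv_dT)
```

This gives a speed in the mid-hundreds of m/s and roughly half a m/s of speed change per degree of oven drift, which is the kind of slope edguy99 could use for the animation.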

--Andrew.

20. Jul 2, 2010

### edguy99

I have a gyroscope just like this, and I find it much easier to compare with a Bloch sphere. I like how it spins with its axis straight up (or slightly askew as it precesses under gravity), whereas the bicycle wheel has a horizontal axis relative to gravity.

Many thanks for this, for some reason I thought I had to know much more to calculate the speed.

Last edited by a moderator: Sep 25, 2014
21. Jul 10, 2010

### andrewr

DIY: standard deviation of gas molecule speed

Half way through the video -- they use a circus like stool and have the gyroscope precess with a horizontal orientation....

Hey, edguy -- I just recalled a way of how to get the variation in molecule speed based on temperature of a gas.
Doppler broadening theory.... classical will work as the velocities are low.
Doppler shift based on velocity toward/away from an observer goes like:

$$\nu\approx\nu_{0}\left(1+\frac{V}{c}\right)$$

The Doppler broadening of any rest emitter frequency (ν0) photon emitted from a gas of atoms goes like (this is the full width at half maximum, about 2.35 standard deviations; m is the molar mass when R is used):
$$\delta_{Hz}\approx\frac{1.67\nu_{0}}{c}\sqrt{\frac{2RT}{m}}$$

So, about 68% of atoms fall within 1 deviation and about 95% within 2 deviations. Plugging in to the Doppler shift:
The difference in Doppler shift has to equal the broadening, and will reveal the spread in speed of the atoms going through the slit.

$$\delta_{Hz}=\nu_{0}\left\{ \left(1+\frac{V_{0}+\Delta V}{c}\right)-\left(1+\frac{V_{0}}{c}\right)\right\}$$
$$\delta_{Hz}=\nu_{0}\left(\frac{\Delta V}{c}\right)$$

Which, substituting in the standard Doppler broadening formula, yields:
$$\nu_{0}\left(\frac{\Delta V}{c}\right)=\frac{1.67\nu_{0}}{c}\sqrt{\frac{2RT}{m}}$$
$$\Delta V\approx1.67\sqrt{\frac{2RT}{m}}$$

Doubling it (roughly ±2.35 deviations, covering about 98% of the atoms) gives:
$$\Delta V\approx3.34\sqrt{\frac{2RT}{m}}$$

Et voila!... if it is correct... I ought to re-read the MIT paper and see if it is in there someplace.
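A quick numeric sketch of the result, with the factor of 1/c kept in the broadening formula so the units come out as m/s (the 190 C oven temperature is from the earlier posts, and m is treated as the molar mass so it pairs with R):

```python
import math

R   = 8.314       # gas constant, J/(mol K)
M_K = 39.0983e-3  # molar mass of potassium, kg/mol
T   = 463.15      # oven temperature, K (190 C)

# Full-width-at-half-maximum speed spread from the Doppler
# broadening form, delta_V ~ 1.67*sqrt(2*R*T/M):
dV_fwhm = 1.67 * math.sqrt(2 * R * T / M_K)

# For comparison, the one-dimensional Maxwell-Boltzmann standard
# deviation of a single velocity component is sqrt(R*T/M):
sigma_v = math.sqrt(R * T / M_K)
print(dV_fwhm, sigma_v, dV_fwhm / sigma_v)
```

The ratio of the two comes out near 2.36, consistent with the FWHM of a Gaussian being about 2.35 standard deviations -- so the spread of speeds is a sizable fraction of the mean speed, which matters for the trace width.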

Last edited by a moderator: Sep 25, 2014
22. Jul 11, 2010

### andrewr

DIY:SAM First try at spin χ(t) state EH or B field only coupling

OK, this post is about what I have learned concerning edguy's GOOD suggestion of studying Bloch spheres...
If you see any critical mistakes, let me know... please... Fortunately, information about Bloch spheres was easier to find than the Pauli spin matrix equations, which are what this is really about -- I think. MRI etc. use these equations -- and with a little bit of work, I think they will apply nicely to time-step based simulation; their form has to be changed to make program code, but that's about it.

The Bloch sphere describes qubits, which are two-state items, just like an electron's spin... hmm, I wonder, are they probably using electrons?...
The formulas used for a Bloch sphere also describe the rotational disturbance of an electron. I am just getting used to how these spin computations work, as I don't generally use spin in EE semiconductor calculations... so this is new to me, even if really simple to everyone else in the thread...

The Bloch spheres are quite helpful for visualization -- edguy -- but looking at the website you linked to, I noticed a couple of things. First, the green point/vector appears to be independently rotating -- it acts somewhat like a time variable -- while the remaining two dots rotate in a plane defined as the normal to the green line. As a visualization aid, staring at the animations for a while helped me to grasp the relationship of the two state variables in a Bloch sphere; of particular help was the very first animation, which plotted the height of the blue dot, showing the complicated shape on the 3D sphere to be nothing more than a constant rotation in two planes.

Unfortunately, IMHO, the precession (Larmor) animations were very unclear to me -- I couldn't, for example, tell what the difference between those spheres and the non-precessing examples on the top row of animations really is; they look the same. The "plane" added to the animation doesn't seem to make any sense to me yet; I'm still reading about the spheres on various sites.

This is what I know about spin mathematics now, since no one has answered the questions so far; perhaps this will aid in explaining what I am after, and get some people with basic knowledge to give me a little assistance?

There are two possible "stationary" (eigen) states, UP and DOWN, and the general solution before observation is a linear superposition of all possible states, according to differential equation theory. For simplicity, since there is no spatial variable to worry about -- just a "point" -- the functions of the state space can be constants; and there will be two stationary states, so traditionally a vector is used to represent them:

$$|spin+>=\left[\begin{array}{c} 1\\ 0\end{array}\right],|spin->=\left[\begin{array}{c} 0\\ 1\end{array}\right]$$

There are only two rows in a spin state variable, and coincidentally it takes two angles to make a direction -- θ,φ -- in 3D space.

I think the two-state quantization/HUP issue -- regardless of the 3D direction of "spin" -- might even make sense classically/relativistically once one tries to use Einstein's concept that a magnetic field is a time-delay effect of the electrostatic field... for there is no such thing as a true magnetic moment if that is the case. Two dimensions of Cartesian space are used to make the equivalent of something moving in order to generate the E fields representing "spin", and that causes those two dimensions to have time-varying magnetic fields when one tries to measure with respect to those axes (e.g. normal to the axis of spin -- the vector which is the orientation of the Bohr magneton's N and S pole directions). Since it varies (predictably, but affected by even the slightest disturbance from an external magnetic field), the result is quite random looking for all practical classical-physics purposes. For simulation purposes I might even make it partially random, to simulate small magnetic-field fluctuations from atoms that would exist in reality but don't exist in the simulator due to space/computation constraints.

The Bloch sphere includes this issue by defining the results of the space as the surface of a sphere -- and surface area goes as dimension squared, not cubed. The energy of the electron is fixed in its "spin" sense, no magnitude to change, and therefore no physical significance to the variable known as radius either. The system is purely represented by the surface of a fixed radius sphere.

So, glossing over the above issue for now, the two spin states +1/2 and -1/2 are called vectors (orthonormal basis vectors even??) of the spin space, or sometimes "spinors".

... and as in the case of the Schrodinger equation, where spatial position has a state ψ(x,y,z,t), there has to be a sub-state for spin (I need a different name, so χ, which seems to be what people use on the net?): χ(φ,θ,t), or χ(Xc,Yc,Zc,t) for a Cartesian version -- which the Bloch sphere is. So, one needs to generate a Hamiltonian of the spin state (χ), and then operators can be applied to extract information about the probability of detecting/interacting with the spin magnetically.

The functions applied turn out to be simple matrix operators: the Pauli spin matrices (here scaled by ħ/2, making them the spin operators):
$$\varsigma_{x}\equiv\left[\begin{array}{cc} 0 & \frac{\hbar}{2}\\ \frac{\hbar}{2} & 0\end{array}\right],\varsigma_{y}\equiv\left[\begin{array}{cc} 0 & -i\frac{\hbar}{2}\\ i\frac{\hbar}{2} & 0\end{array}\right],\varsigma_{z}\equiv\left[\begin{array}{cc} \frac{\hbar}{2} & 0\\ 0 & -\frac{\hbar}{2}\end{array}\right]$$

What makes each one specific to the axis chosen (x, y, and z), I haven't the slightest clue...
The equations look like one axis was picked arbitrarily (like the green dot in the Bloch sphere diagrams?) and the rest were based on that one.
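As a sanity check on these matrices, here is a short numpy sketch (working in units where ħ = 1, an assumption for readability). It shows that no single axis is truly special: each pair of operators has a commutator equal to jħ times the third, which is the structure linking the three axes together.

```python
import numpy as np

hbar = 1.0  # units where hbar = 1, for readability

# The three spin operators from the matrices above:
Sx = (hbar / 2) * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = (hbar / 2) * np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = (hbar / 2) * np.array([[1, 0], [0, -1]], dtype=complex)

# The axes are tied together by the commutation relations:
# [Sx, Sy] = i*hbar*Sz, and cyclic permutations thereof.
comm = Sx @ Sy - Sy @ Sx
print(np.allclose(comm, 1j * hbar * Sz))  # True

# Measuring along any axis can only give +hbar/2 or -hbar/2:
print(np.linalg.eigvalsh(Sx))
```

So picking z as the "diagonal" axis is just a convention; rotating the basis swaps which matrix is diagonal without changing the algebra.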

The only problem remaining to make a state equation is the coupling of electromagnetics to mass so that the EM field will properly interact; in part that turns out to be the classic gyromagnetic ratio, charge to mass: a constant called γ.

So, doing a very simple example -- eg: an electron in a constant magnetic field oriented in Z:

(note: This does not include Bloch sphere representation which changes the variable space of rotation to remove a redundancy in vectors which are 180 degrees from each other. I will have to think about what I learned concerning dividing angles by two in order to remove the redundant half the surface of a unit vector space, sphere... and the concept that 720 degrees is required to come to a TRUE identical position in vector space... not necessarily Bloch vector space, as I haven't figured that out yet.
http://en.wikipedia.org/wiki/Orientation_entanglement )

The energy of the electron oriented along the field will be γBz, as known from classical physics, so the Hamiltonian is writable by inspection, and the Schrodinger equation for the spin of an electron (arbitrary initial state) with the field pointing in the Z direction is:

$$\jmath\hbar\frac{\partial\chi}{\partial t}=\left[-\gamma B_{z}\varsigma_{z}\right]\chi$$
(Wow! I love this TeX editor I picked up... so much easier to express things, and so fast to do offline! LyX is the program!)

The time-evolving solution for the Schrodinger spin equation is simply:

$$\chi(t)=\left[\begin{array}{c} a\, e^{\jmath\gamma B_{z}t/2}\\ b\, e^{-\jmath\gamma B_{z}t/2}\end{array}\right]$$

To compute an observable expectation value, say the spin along the y direction, I do:

$$\left\langle S_{y}\right\rangle =\overline{\chi(t)}^{T}\varsigma_{y}\chi(t)$$

Now this is where I get stumped....

For the change in position, according to the Schrodinger equation, one has to construct a wave packet to determine the speed of the item in question. Yet, I haven't seen anyone do that for spin; they generally glibly equate γB as the Larmor frequency if one chooses a and b such that precession will occur. But, I don't see how that is arrived at; can anyone point out where to look for this?
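For what it's worth, the Larmor frequency can at least be checked numerically without any wave packet: take the two-component solution for a static Bz field with a = b = 1/√2 and watch the expectation value of Sx. A minimal numpy sketch (units with ħ = 1, and γ and Bz are arbitrary assumed values):

```python
import numpy as np

hbar  = 1.0          # units with hbar = 1, for clarity
gamma = 1.0          # gyromagnetic ratio (arbitrary units)
Bz    = 2.0          # static field along z (arbitrary)
omega = gamma * Bz   # candidate Larmor frequency

Sx = (hbar / 2) * np.array([[0, 1], [1, 0]], dtype=complex)

def chi(t, a=1 / np.sqrt(2), b=1 / np.sqrt(2)):
    # time-evolved spinor: phases exp(+-j*gamma*Bz*t/2)
    return np.array([a * np.exp(1j * omega * t / 2),
                     b * np.exp(-1j * omega * t / 2)])

ts = np.linspace(0, 4 * np.pi / omega, 200)
sx = [float((chi(t).conj() @ Sx @ chi(t)).real) for t in ts]

# <Sx>(t) traces out (hbar/2)*cos(omega*t): the beat between the two
# phase factors makes the transverse spin precess at omega = gamma*B.
print(np.allclose(sx, (hbar / 2) * np.cos(omega * ts)))  # True
```

So the precession at γB falls out of the relative phase between the two components; no wave-packet construction seems to be needed, which may be why the texts state it without one.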

Given a uniform magnetic field, the electron will never flip -- nor even align itself with the field.

Next:
I can see how the divergence of the B field would allow one to calculate a tendency for the electron to move in x,y,z coordinates as a whole. The energy of the system determines motion for spin, thus I have to treat an inhomogeneous magnetic field like E=γ(Bz + kz); I will ignore the inhomogeneous field along the other axes -- they would be treated analogously. In SG, the net z motion is the one of interest.

But then, that means I am going to end up with a time dependency with kz in it... hmm, can I do that?
The result would be, if possible:
$$\chi(t)=\left[\begin{array}{c} a\, e^{\jmath\gamma\left(B_{z}+kz\right)t/2}\\ b\, e^{-\jmath\gamma\left(B_{z}+kz\right)t/2}\end{array}\right]$$

If that is right, I can also do the same for the electric field of the translating electron using ψ, where normally one plugs the entire voltage field into the Hamiltonian:

$$H=-\frac{\hbar^{2}}{2m}\nabla^{2}+V$$

If that is possible, it has the advantage of allowing local calculations of Energy to show how the state will evolve in the short term, and may eliminate the need to know the entire potential field at once; clearly the spin solution could be adapted to integrate E on the fly, and compute the changes to the coefficients ... that could be an excellent approximation!

And finally, a philosophical/open minded thought/question ... hoping for computational shortcuts....

One of the things I began to think about was the meaning of tunneling. I came across a derivation of the Heisenberg uncertainty for time and energy -- and what surprised me was that I can't find any justification for saying that a particle may "borrow" energy it wouldn't classically have so long as it pays it back, as I often hear given as the explanation for tunneling.
Rather, the Δτ seems to indicate how long a system/state must be in operation to detect a significant (50%?) change in the state of the system -- it doesn't seem to have anything to do with "borrowing" energy for a time, no matter how I try to interpret the parts of the equation.

The question, then, becomes what does it mean to "tunnel" into classically forbidden areas? I am beginning to wonder if that has something to do with an idea equivalent to "width" of an electron or other particle -- eg: The reason one detects it in a forbidden energy area is simply that not "all" of it was there, and thus it didn't require that much energy to penetrate as far as it got -- but enough of it was available in the forbidden region to be grabbed by a detector. Sort of a Heisenberg uncertainty definition of radius. Has anyone come across any similar idea in literature? I don't intend to invent a new formula -- but I was thinking that if such an analogy were useful at all, that it might suggest a way to do a change of variables which simplifies the integration equations for linear motion .... maybe not, though, the approach might complicate them instead.
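On the "width" intuition: standard barrier-penetration math does give a particle a characteristic reach into a classically forbidden region, since the amplitude there decays exponentially as exp(-κx). A hedged sketch of the length scale involved (the 1 eV barrier excess is an assumed round number, not from any experiment discussed here):

```python
import math

hbar = 1.054571817e-34   # J s
m_e  = 9.109e-31         # electron mass, kg
eV   = 1.602e-19         # joules per eV

# How far |psi|^2 reaches into a forbidden region where the barrier
# exceeds the electron's energy by an assumed 1 eV:
V_minus_E = 1.0 * eV
kappa = math.sqrt(2 * m_e * V_minus_E) / hbar   # decay constant, 1/m
depth = 1 / (2 * kappa)   # distance over which |psi|^2 falls by 1/e
print(depth * 1e9, "nm")
```

The 1/e depth comes out around a tenth of a nanometer -- atomic scale -- which is at least consistent with reading it as an effective "width" over which part of the electron is available to a detector.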

OK. Anyone care to share some thoughts/enlightenment about these very basic QM issues?

Last edited: Jul 11, 2010
23. Jul 12, 2010

### edguy99

Re: DIY: standard deviation of gas molecule speed

You are correct. I would like to expand on what I feel are important differences between the two videos http://www.youtube.com/watch?v=8H98B...eature=related"

For a Bloch sphere, the axis horizontal to the force (the bicycle wheel) showing precession is the most "unstable" form. Notice in both videos how much motion the entire gyroscope undergoes in this position (horizontal axis) in order to maintain balance.

What I am trying to say (in a poor choice of words) is that if you were to stop the horizontal rotation (precession) of the bicycle wheel's axis, it would compensate by moving the axis to a vertical position, either up or down depending on the direction of spin. These are the natural "up" and "down" states of a Bloch sphere placed in an external force. The bicycle wheel is in a very stable position if you let it fall over and spin on the ground.

The Larmor frequency animations at http://www.animatedphysics.com/videos/larmorfrequency.htm (rows 2 and 3) show how precession works in two different field strengths, with the precession speed (Larmor frequency) being the same no matter the tilt of the sphere against the external field. (It is perhaps not clear that the external field is vertical in the side view of the Bloch sphere; also, it represents a very "strong" field -- normally the precession rate would be around 100k times slower than the spin in an MRI machine -- and you may have to click a couple of times to get the animations to run right.) Again, the most unstable view is #3-C, where the Bloch sphere has a high tilt relative to a strong field and hence a lot of movement is going on. The plane and the axis pointer represent the plane of the spinning wheel (the green axis and the plane of the spinning green dot); both the plane and the axis precess. The toy gyroscope is great (I got mine at the Smithsonian in Washington DC) and it is very helpful to hold and feel the forces involved. It is important to note that when the Bloch sphere (or toy gyroscope) is flipped from up to down, the spinning motion never stops or reverses.
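As a quick numeric check (my numbers, using the standard proton gyromagnetic ratio), the precession rate depends only on the field strength, not on the tilt -- which is exactly what the animations show:

```python
import math

GAMMA_PROTON = 2.6752218744e8  # proton gyromagnetic ratio, rad s^-1 T^-1

def larmor_frequency_hz(b_tesla):
    """Larmor precession frequency of a proton: f = gamma * B / (2*pi).
    Independent of the tilt angle against the field -- only B matters."""
    return GAMMA_PROTON * b_tesla / (2.0 * math.pi)

# Common MRI field strengths: the familiar ~42.58 MHz/T proton line.
for b in (0.5, 1.5, 3.0):
    print(f"B = {b} T -> {larmor_frequency_hz(b) / 1e6:.2f} MHz")
```

Doubling the field doubles the precession rate, but tilting the sphere changes nothing -- only the cone angle of the precession differs.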

I think Bloch sphere calculations are valid for all proton/neutron MRI calculations (including modeling of t1/t2 relaxation times).

Great connection -- thanks. All that's left is the rate (and distribution) at which the atoms come out of the oven. It must be some kind of function based on the size of the slits/holes they have to get through, or...

Last edited by a moderator: May 4, 2017
24. Jul 15, 2010

### andrewr

Re: DIY:SAM gas velocity mode v. mean, Blocked Bloch,

Aye.
Yes, I can see that there is a natural position. In an EM field the direction is magnetic north, not gravity -- but you are using an analogy. I can see a useful convention in taking the direction of the field as the reference -- but as magnetons never "slow down", flipping has to do with stimulation...
Also, gravity is purely attractive -- whereas magnets are both attractive and repulsive; so the maximum instability and the most stable position lie along the same vertical orientation, just sign-reversed.
But I have no problem calling the direction of greatest stability "up", in deference to your convention.

OK. That's what I was missing. The actual visual rotation of the three dots doesn't look distinctive to the eye -- that's my critique of the image. The author of the graphics would do well to leave a trace, as image 1,1 does, for the "tip" of the spinning axis; that would make the precession distinctive eye candy.
It was just hard for me to see.

A Bloch sphere models any two-state quantum effect, e.g. spin-1/2 "particles".
T1/t2 relaxation times are based on thermal kinetics -- how long it takes to "wiggle" a large number of protons loose from a precessing state into an aligned one, as neither protons nor electrons will stop precessing and align until agitated in the proper way -- i.e. in sync with the spin rate, so that the flipping force doesn't cancel out. Thermal agitation does this randomly -- but it can also be induced by an RF EM wave. I am sure you are correct -- the frequencies used are in the low MHz range, which corresponds to a proton moment, not an electron one: the two frequencies differ by roughly the electron-to-proton mass ratio, and the proton, being more massive, has the lower frequency. The chemical bonds (electrons) would be unreliable for imaging anyway, so using the nucleus makes sense. Bloch spheres are indeed mentioned in modeling NMRI (nuclear MRI).

I believe you about t2, although I am not quite certain what is happening; anyhow, for my present purposes that isn't important. I'll just think of it as something similar to Barkhausen noise... an avalanche triggered by one tiny pop...

Well, yes, that's what's left -- and checking the MIT paper, p. 6, they have the same form as my velocity, v(T,m) = sqrt(3kT/m) -- with one exception: the constant is 2, not 3. You appear to know more than you let on... The formula I gave, sqrt(3kT/m), is the rms speed of a gas in thermal equilibrium (the square root of the expectation value of v²), while sqrt(2kT/m) is the mode -- the most probable speed -- of the Maxwell-Boltzmann distribution. There is roughly a 19% difference in speed between the two -- not a show-stopper error. The mode, "most atoms", makes more sense here, since the goal is to count the atom distribution, not the speed, once the experiment is over; i.e. the "2" is the more accurate description of the SG image. For the speed distribution, the standard deviation is good enough, using a Gaussian, unless someone wishes to generate a more accurate value.

See p. 6:
http://web.mit.edu/8.13/www/JLExperiments/JLExp_18.pdf
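The 2-vs-3 discrepancy above is just the spread of the Maxwell-Boltzmann characteristic speeds: mode sqrt(2kT/m) < mean sqrt(8kT/(πm)) < rms sqrt(3kT/m). A minimal sketch (my own helper, with a silver oven temperature chosen only for illustration):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
AMU = 1.66053906660e-27  # atomic mass unit, kg

def mb_speeds(temp_k, mass_kg):
    """Characteristic speeds of the Maxwell-Boltzmann distribution:
    mode (most probable) = sqrt(2kT/m), mean = sqrt(8kT/(pi*m)),
    rms = sqrt(3kT/m)."""
    a = K_B * temp_k / mass_kg
    return {
        "mode": math.sqrt(2.0 * a),
        "mean": math.sqrt(8.0 * a / math.pi),
        "rms":  math.sqrt(3.0 * a),
    }

# Silver atoms (as in Stern-Gerlach) from a hot oven, ~1300 K assumed:
m_ag = 107.87 * AMU
s = mb_speeds(1300.0, m_ag)
print({k: round(v) for k, v in s.items()})   # mode < mean < rms, in m/s
print(s["rms"] / s["mode"])                  # sqrt(3/2) ~ 1.22
```

The rms/mode ratio sqrt(3/2) ≈ 1.22 is the roughly-20% gap discussed above, independent of temperature and mass.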

Reading forward to p. 8, though -- the classical treatment is oversimplified; it will not produce the graph they show. Or else one must admit that the results of the SG do not predict quantization along the Z axis (Y in the photos). As an EE, I computed the B field, and my suspicion was confirmed -- these texts dumb down the classical physics. Notice MIT does NOT predict motion in the X direction -- and one might easily fail to notice that their solution only predicts Z motion. But if one only plots Z, then one can't KNOW where in X that Z will appear -- and thus ALL Z's are potentially suspect as valid values... (The plot just became practically "classical" in the sense they say it isn't; but a little more analysis is required.)

The magneton does NOT reorient into the magnetic field (except when atoms collide) -- it does precess -- but the TRANSLATIONAL motion of the magneton depends on the gradient of the magnetic field, and the gradient must be as large in X as in Z for the field laws (∇·B = 0) to hold. There is no place in the magnetic field where the X portion of the gradient is zero except exactly on the Z axis (a local extremum in the X direction).

That correlates with the "spike" in the actual SG experiment -- for the most stable direction is oriented toward the Z axis, and atoms from both the X- and X+ sides will preferentially drift toward X = 0, the center (even without a collision-caused reorientation), creating a single place with more atoms than anywhere else. (An atom goes from aligned to anti-aligned as it crosses X = 0, so it will not travel off the other side equally -- it is rectified non-linearly.)

IF an atom were oriented off-center and anti-aligned to the field, its deflection in the X direction could reasonably be expected to exceed the deflection in Z, so that atoms which classically do not climb as high in Z will go much farther in X. The chance of an atom lying exactly on the Z axis is zero -- so it is a fallacy to argue that the Z axis is the predictor; it isn't (I was suckered in too). I am not going to bother with an analytical solution right now -- I may do a classical simulation later to verify what I have argued -- but just note that any solution which does not predict X deflection can't claim to fully represent the Z distribution of classical physics; e.g. the higher density at X = 0 is not predicted by MIT's lab. Precession does NOT cancel the X effect out in a gradient.
Also, the SG DOES show quantization of the magneton -- but it does not prove that classical predictions would not ALSO predict quantization. To be blunt: we plug the QUANTIZED value Bμ into the classical equation -- THUS IT HAD BETTER COME OUT quantized! Actual pictures again...

http://www.kcvs.ca/martin/phys/phys243/labs/sglab/stern_gerlach.html
http://plato.stanford.edu/entries/physics-experiment/figure13.html
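The transverse-gradient point above can be sketched in a few lines (my own illustration, not the promised simulation). Near the axis, a divergence-free SG field can be linearized as B = (-b·x, 0, B0 + b·z), so div B = -b + b = 0, and the force on a fixed moment μ is F = ∇(μ·B) -- with the x-gradient exactly as large as the z-gradient:

```python
import math

def sg_force(mu, b_grad):
    """Force F = grad(mu . B) on a fixed moment mu = (mx, my, mz) in the
    linearized Stern-Gerlach field B = (-b*x, 0, B0 + b*z), where
    b_grad = dBz/dz = -dBx/dx (required by div B = 0):
        Fx = -b*mx,  Fy = 0,  Fz = +b*mz."""
    mx, my, mz = mu
    return (-b_grad * mx, 0.0, b_grad * mz)

# A moment tilted 45 degrees in the x-z plane feels |Fx| = |Fz|:
MU_B = 9.2740100783e-24  # Bohr magneton, J/T
mu = (MU_B * math.cos(math.pi / 4), 0.0, MU_B * math.sin(math.pi / 4))
fx, fy, fz = sg_force(mu, 1.0e3)  # gradient of order 10^3 T/m assumed
print(fx, fz)  # equal magnitudes: transverse deflection is not negligible
```

This is of course the static picture only -- precession makes μ rotate -- but it shows why a solution that predicts no X deflection at all is leaving physics out.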

The Bloch sphere (BS) is a different matter. I don't know how to use it in Schrödinger's equation (SE) as it is formulated -- and I haven't found any examples. The primary benefit of the Bloch sphere is to remove redundant values from the [x, y, z] direction vectors. E.g. vectors π radians (180°) apart are the same vector save for sign -- the point that this redundancy exists and is undesirable is well taken; and using the Bloch sphere to fix it is intriguing -- but I appear to be redoing someone else's work, which is what I set out to avoid. I want either to have a solution in hand or to blaze a new path.

On the X and Y axes, the traditional locations of spin 1/2 and spin -1/2 are formulated with the complex vector. Oddly, the angle division by two used in the BS does not reduce the redundant vectors -- rather, it puts the spin 1/2 and spin -1/2 vectors 180 degrees (π radians) apart, and extends the range of angles represented by the sphere by a factor of 2, i.e. 720 degrees. But that's OK, I like a physical representation... it is more intuitive *if it works*. The entanglement page I read still seems to hold some promise, so I am open to the BS.
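The half-angle mapping is easy to check numerically: the spinor cos(θ/2)|0⟩ + e^(jφ) sin(θ/2)|1⟩ maps to the Bloch vector (sinθ cosφ, sinθ sinφ, cosθ) via Pauli expectation values. Orthogonal spinors land at antipodal points, and advancing θ by 2π negates the spinor while returning the Bloch vector to where it started -- the 720-degree property. A minimal sketch (my own helpers):

```python
import cmath
import math

def spinor(theta, phi):
    """Half-angle parameterization: theta is the *Bloch* polar angle."""
    return (math.cos(theta / 2), cmath.exp(1j * phi) * math.sin(theta / 2))

def bloch_vector(a, b):
    """Map spinor (a, b) -> (x, y, z) via Pauli expectation values:
    x = 2 Re(a* b), y = 2 Im(a* b), z = |a|^2 - |b|^2."""
    ab = a.conjugate() * b
    return (2.0 * ab.real, 2.0 * ab.imag, abs(a) ** 2 - abs(b) ** 2)

up = spinor(0.0, 0.0)        # |0> -> (0, 0, +1)
down = spinor(math.pi, 0.0)  # |1> -> (0, 0, -1): orthogonal = antipodal
print(bloch_vector(*up), bloch_vector(*down))

# theta -> theta + 2*pi negates the spinor, yet the Bloch vector is unchanged:
s = spinor(2 * math.pi, 0.0)
print(s[0])  # ~ -1: the sign flip is invisible on the sphere
```

So the division by two is exactly what makes orthogonal states antipodal rather than redundant sign-pairs.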

Do any of you know of a worked example where it is used *IN* solving the problem -- and *NOT* just as a representation after the fact? I.e., where one plugs a Bloch sphere variable in for χ or ψ and solves the equation?

--------------- AND NOW FOR SOMETHING TOTALLY DIFFERENT --------------

I have a plot of a well to make, so I thought I would lay out my first attempt's math, now a month old.
There is no point going to all the trouble of working out eigenstates for a numerical simulator.

$$\frac{\partial}{\partial t}\Psi=\frac{\jmath\hbar}{2m}\nabla^{2}\Psi-\frac{\jmath}{\hbar}V(x,y,z,t)\Psi$$
The first thing to do is get rid of the complex exponential exp( f(x,y,z,t) ). Mathematically, the exponential is costly to compute and often quite inaccurate in C implementations on Intel processors (whatever the cause, I don't care). So we're going for a direct timestep (quadrature) integration, solving the equation from ground zero -- let's see how far I can go with it. Reduce the problem to two real functions A and B.
$$\Psi=A(x,y,z,t)+\jmath B(x,y,z,t),\qquad\left|\Psi\right|^{2}=A(x,y,z,t)^{2}+B(x,y,z,t)^{2}$$
$$A+\jmath B=A_{0}+\jmath B_{0}+\intop_{T_{0}}^{t}\frac{\jmath\hbar}{2m}\sum_{d:xyz}\left\{ A_{dd}+\jmath B_{dd}\right\} -\frac{\jmath}{\hbar}V\left\{ A+\jmath B\right\}$$
Dropping the Σ for brevity, break into two simultaneous equations in time.
$$\left[\begin{array}{cc} A, & B\end{array}\right]=\left[\begin{array}{cc} \intop_{T_{0}}^{t}\frac{\jmath\hbar}{2m}\jmath B_{dd}+\frac{V}{\hbar}B & ,\intop_{T_{0}}^{t}\frac{\hbar}{2m}A_{dd}-\frac{V}{\hbar}A\end{array}\right]+\left[\begin{array}{cc} A_{0} & B_{0}\end{array}\right]$$
$$\left[\begin{array}{cc} A, & B\end{array}\right]=\left[\begin{array}{cc} \intop_{T_{0}}^{t}-\frac{\hbar}{2m}B_{dd}+\frac{V}{\hbar}B & ,\intop_{T_{0}}^{t}\frac{\hbar}{2m}A_{dd}-\frac{V}{\hbar}A\end{array}\right]+\left[\begin{array}{cc} A_{0} & B_{0}\end{array}\right]$$

The last equation is suitable for numeric integration. Choosing the initial condition is the only difficulty.
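A minimal 1-D sketch of this real/imaginary split, written against the standard-form TDSE with ħ = m = 1 (my own illustration, not the project code). Fair warning: the plain explicit Euler step used here is known to be unstable for the Schrödinger equation over long runs -- the norm creeps upward -- so this is only a few-step demonstration; a staggered (leapfrog) update of A and B is the usual cure:

```python
import math

def laplacian(f, dx):
    """Second central difference, with hard-wall (zero) endpoints."""
    n = len(f)
    return [(f[i - 1] - 2.0 * f[i] + f[i + 1]) / dx**2 if 0 < i < n - 1 else 0.0
            for i in range(n)]

def step(A, B, V, dx, dt):
    """One explicit Euler step of the split pair (hbar = m = 1):
        dA/dt = -(1/2) B_xx + V B
        dB/dt = +(1/2) A_xx - V A"""
    A_xx, B_xx = laplacian(A, dx), laplacian(B, dx)
    A2 = [a + dt * (-0.5 * bxx + v * b) for a, b, bxx, v in zip(A, B, B_xx, V)]
    B2 = [b + dt * ( 0.5 * axx - v * a) for a, b, axx, v in zip(A, B, A_xx, V)]
    return A2, B2

# Gaussian packet in a free region; watch the norm drift under plain Euler:
n, dx, dt = 200, 0.1, 0.001
A = [math.exp(-((i - n // 2) * dx) ** 2) for i in range(n)]
B = [0.0] * n
V = [0.0] * n
norm0 = sum(a * a + b * b for a, b in zip(A, B)) * dx
for _ in range(100):
    A, B = step(A, B, V, dx, dt)
norm1 = sum(a * a + b * b for a, b in zip(A, B)) * dx
print(norm0, norm1)  # explicit Euler slowly inflates the norm
```

Each Euler step multiplies every spatial mode's amplitude by sqrt(1 + (ω dt)²) ≥ 1, which is where the norm growth comes from; updating B from the already-updated A (leapfrog style) removes it to leading order.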

Last edited by a moderator: May 4, 2017
25. Jul 16, 2010