# DIY: Simulate Atoms and Molecules

1. Jun 20, 2010

### andrewr

I would like to see how far I can get writing some novel & simplified simulator code for molecular modeling based on QM.

------ Background follows ------ skip to the next post for the details of the project & the starter QM question.

I am speaking from experience. Somehow, spending $1000 on a gamble just isn't appealing either. I have studied variational calculus since the 8th grade -- my original purpose in doing so was to understand Schrödinger's equation. I have a hard time reading what you say here as anything but double talk. If these methods worked perfectly, or exactly as you seem to be selling them, and if they really covered what I desire to simulate -- in the sense that I am proposing -- then they would definitively predict what happens in, say, Bell-inequality-based experiments. Considering that most of the physics community is still arguing over that, with ZPE (with known flaws) as one objection, and that no definitive set of experiments has yet buried the competition, I see no point in arguing over what is better, or what is "perfect". I'm interested in an experimental simulation method; the one I have chosen is just different -- it is not likely perfect either -- but I would like to try it. Are you interested in actually addressing the thread questions I am proposing, or something else, now that I have cleared the air? If you have anything to add concerning how to compute how long an electron stays in an excited state, I'm interested. That is one of two issues which prevent me from finishing the code I already have. As I said at the start of the thread, I'd like to see how far I get -- and I mean after I know the answers to my actual questions.

--Andrew.

Last edited: Jun 20, 2010

7. Jun 21, 2010

### alxm

- I do deny sigma and pi bonds exist in molecules. Orbitals as well. These are all abstract concepts derived from various theoretical models, not real physical things.
- Force-field methodology doesn't have anything to do with 'sketching' anything. It's a semi-classical physical model, not a single scalar or vector field or anything of the sort.
- All DFT methods in practical use for molecules are more accurate than the Hartree-Fock method.
Obviously you don't know the relative size of dispersion effects if you think that would be a deal-breaker.
- Yes, QC methods were being used prior to electronic computers, in the late 20's and 30's. Newton didn't have any problems developing his algorithm for root-finding without one, either.
- Computers WON'T 'do it for you'. That was the point I was trying to make. Most chemists in most situations do not use quantum chemical software at all. Most chemists don't know how to use quantum chemical software.
- You think it's reasonable to demand that they give you software that took them years to write, no strings attached? Crazy.
- The fact that a method is exact does not mean a given implementation of it is necessarily numerically stable in every case, or that the code is bug-free.
- The fact that a particular implementation might not work (perhaps due to user error) does not invalidate the theory behind it in the slightest.
- A quantum-chemistry program can't and won't tell you about the Bell inequality. It's an entirely different system and problem. They won't calculate scattering cross-sections or do your taxes, for that matter. Also, it's not the calculations themselves which are in dispute.
- It's "Heisenberg", not "burg" (since you seem to think spellings are of great importance).
- The Uncertainty Principle is not a simple experimental limit, it's a theoretical limit. This is explained in every textbook, and in a thread in this forum once a month or so.
- You can produce _exact_ pictures of hydrogenic orbitals simply by plotting the spherical harmonics, since that's what they are. But they won't give you a good approximation of any multi-electron system.
- How to calculate lifetimes of excited states is also in every textbook. Fermi's Golden Rule, look it up.
- The Schrödinger equation does not have even a superficial resemblance to the equations of the Bohr-Sommerfeld models, in my opinion.
Beyond that, I won't address your 'independent research', since this forum has clear rules about that stuff. And that's pretty much all I have to say on the topic.

8. Jun 21, 2010

### edguy99

"Nobody yet has a flawless simulator...." Although I won't dispute this, I write animation software at timeframes/distance scales from microseconds/nanometers to show photon size and movement, attoseconds/femtometers for electron motion, and yoctoseconds/yoctometers for the nuclear decay of neutrons under a collection of different physical forces. You may require several animations specifically to account for distance and time (large scale and/or small scale). Have you given any thought to the distance and time scale for your simulation? Some view this type of animation as "speculative" and do not like the discussion here, but I would be happy to discuss it if you would like to send me a message or a sample picture of what you have in mind. Regards

9. Jun 22, 2010

### andrewr

Hi; I think the photon scale and the electron scale are interchanged? I was planning on using my electronics-simulator base as-is for the project at first -- and modifying it later for improvements. There is a saying: "a bird in the hand is worth two in the bush." The electronics simulator auto-adjusts time-steps depending on interaction values -- that is, the computed time-steps become larger as the path(s) become more predictable / repetitive of past history, and smaller where a small change can cause a large instability. The algorithm is complicated but, like a video-game drawing algorithm, is carefully optimized in the inner loops. Roughly speaking, it allows me to simulate around 1000+ items in a field, and easily 100,000+ nodes of electronic circuitry, on a modest single-core Celeron processor (SSE2/SIMD) -- although the computation slows dramatically with *randomness (non-repetitive change) and, conversely, accelerates with the repetition of similar events/changes.
(*Randomness within a steady state is not what I mean.) The details of the algorithm are beyond the scope of quantum physics, obviously, and are a programming/data-structures nightmare; but it's extremely powerful and fast compared to the alternative solutions used in the electronics-simulator industry (e.g., typically Laplace transformations). At heart, I have opted to compute quickly and to use a statistical method for quality control. If you are familiar with industrial process control, the method is essentially the same kind of thing. In the electronics realm, I specify the time-step I wish to output images/plots at, and the simulator discards all intermediate states between frames to be drawn. This eliminates the highly variable time-step plot problems that come with efficient & accurate calculation of events. In electronic simulation, the time-step typically varies from milliseconds down to picoseconds. A crude estimate for (super)atomic phenomena is the range from milliseconds down to sub-attosecond, ~0.01 attosecond (YUCK). I haven't given much thought to the time-steps I will output -- outputting every subtle change doesn't make a whole lot of sense, as a large number of small changes are (typically) required to make a long-term, macroscopically noticeable change. For right now, I plan on just hand-setting the time-steps, as I do in electronics simulation, for different regions of interest: e.g., large time-steps which allow me to ignore the development of the initial (typical) state of the system but still do a sanity check, and then a focused-down time-step in regions where interesting things are happening that I am trying to understand better. Even though time is involved in my simulation, I was not planning on making time-evolved movies in the end.
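The auto-adjusting time-step idea described above is, in spirit, standard adaptive step-size control. A minimal sketch (my own illustration, not the simulator's actual proprietary algorithm): compare one full Euler step against two half-steps, shrink the step where they disagree, and grow it where the path is predictable:

```python
def simulate(f, y, t, t_end, dt, tol=1e-6):
    """Integrate dy/dt = f(t, y) with a crude adaptive step:
    compare one full Euler step against two half steps, and
    grow/shrink dt based on the disagreement."""
    while t < t_end:
        full = y + dt * f(t, y)                 # one step of size dt
        half = y + 0.5 * dt * f(t, y)           # two steps of size dt/2
        half = half + 0.5 * dt * f(t + 0.5 * dt, half)
        err = abs(full - half)
        if err > tol and dt > 1e-12:
            dt *= 0.5                           # unstable region: refine
            continue
        y, t = half, t + dt                     # accept the finer estimate
        if err < tol / 4:
            dt *= 2.0                           # predictable region: coarsen
    return y

# Exponential decay: dy/dt = -y, y(0) = 1, so y(1) should be near exp(-1)
print(simulate(lambda t, y: -y, 1.0, 0.0, 1.0, 0.1))
```

The payoff is the same one described above: CPU time concentrates where small changes cause instability, and repetitive stretches are crossed in a few large steps.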
The reason for this is twofold. 1) The simulator itself determines how to focus CPU power computationally on different regions of space; this is what allows the simulation to proceed at much quicker rates than would be possible if all points were computed at the same time-step everywhere in parallel. Items of circuitry / regions of space which are sufficiently decoupled that reducing the update rate of *changes* in interaction introduces only small errors become time-wasters each time a display of state occurs. Of course, blindly printing every time-step automatically chosen by the simulator is the absolute worst time-waster -- e.g., 1e6+ times slower. 2) In electronics, the time-evolving waveforms are often important -- so much so that they become unintelligible if the time-step varies -- so I am forced to plot at a maximum, fixed time-step rate if I wish to easily extract information about the evolution of the waveforms in time. That is the way my simulator works now; to keep the CPU time down when the simulator has to reduce the time-step for accurate results, intermediate steps are discarded from the plot. To keep the problem tractable once I get to large (65-100 atom) systems, I plan on having the simulator ultimately record changes in state and not changes in time; and ultimately I would like to be able to specify which states are of interest, to reduce the data even more. I do have open questions about how much information to save at each plot point, but I can't solve those in my head -- I have to actually experiment to see what is possible. The space is, unfortunately, limited by the nature of the simulation, cache memory, main memory, etc. I won't be able to simulate more than a few cubic microns of space -- and only 1000-2000 or so electrons/protons/etc. within that space. (Presuming the questions I am trying to get answered do not significantly degrade the estimates --
engineering :) ) Why would some consider it "speculative"? Attosecond laser pulses are regularly used to excite atoms into the wave-packet superposition of states which mimics classical motion. That's done experimentally, and I wasn't aware that the people doing this were violating any principle of quantum physics. I know that when I first heard of the frequencies they were talking about, and "attosecond", the uncertainty principle crossed my mind. For the purposes of simulation, though, these small times replace the differential element in calculus -- and anyone who uses calculus to work with Schrödinger's equation is automatically guilty of using small times as well, for a mathematical purpose if not a physical one. Hmmm... separation of church and state, or ought it be calculus and state... hmmm... I wasn't planning to get into that. And from the other response I got, I need to correct the extra words being put in my mouth... --Andrew.

10. Jun 23, 2010

### andrewr

Alxm, I have tried a few times to pen a response, but I simply don't post, because the response comes out so much longer than your post. There are overlapping issues in what you are remarking about -- and clearly some confusion about what I actually said versus what I appear to have said. So rather than reply head-on, I hope I am able, in this way, to emulate your very compact response style, which I rather wish came naturally to me.

> How to calculate lifetimes of excited states is also in every textbook. Fermi's Golden Rule, look it up.

I use Fermi's rule for understanding stimulated emission events. If what you are saying isn't nuts (and I do admit you have an IQ), then it appears to imply that there are no such things as truly "spontaneous" emissions. If that's the case, my simulator already takes care of the issue, and I am wasting time asking about it. If your comment is a mistake, let me know -- otherwise I need to change the question.
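On the excited-state lifetime question raised above: the textbook route (Fermi's Golden Rule, packaged as the Einstein A coefficient for spontaneous emission) can be evaluated directly. A sketch for hydrogen 2p -> 1s, taking the standard dipole matrix element |<1s|r|2p>| = 0.7449 e*a0 and the 10.2 eV Lyman-alpha energy as given textbook values:

```python
import math

# Physical constants (SI)
hbar = 1.0546e-34      # reduced Planck constant, J*s
eps0 = 8.854e-12       # vacuum permittivity, F/m
c = 2.998e8            # speed of light, m/s
e = 1.602e-19          # elementary charge, C
a0 = 5.292e-11         # Bohr radius, m

# Hydrogen 2p -> 1s (Lyman-alpha)
E = 10.2 * e                       # transition energy, J
omega = E / hbar                   # angular frequency, rad/s
d = 0.7449 * e * a0                # transition dipole moment (textbook value)

# Einstein A coefficient: A = omega^3 d^2 / (3 pi eps0 hbar c^3)
A = omega**3 * d**2 / (3 * math.pi * eps0 * hbar * c**3)
tau = 1 / A                        # mean spontaneous-emission lifetime

print(f"A = {A:.3g} 1/s")          # ~6.3e8 per second
print(f"tau = {tau:.3g} s")        # ~1.6 ns
```

In a simulator, this lifetime would set the rate parameter of an exponentially distributed random decay time for each excited atom.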
> Beyond that, I won't address your 'independent research' since this forum has clear rules about that stuff. And that's pretty much all I have to say on the topic.
>
> The Uncertainty Principle is not a simple experimental limit, it's a theoretical limit. This is explained in every textbook and in a thread in this forum once a month or so.
>
> A quantum-chemistry program can't and won't tell you about the Bell inequality. It's an entirely different system and problem. They won't calculate scattering cross-sections or do your taxes, for that matter. Also, it's not the calculations themselves which are in dispute.

If that is all you have to say, then the thread won't be cluttered up with more of this. I haven't taken sides in any absolute way on controversial issues. In the last (and only other) thread I ever posted on these forums, it took the science advisor several pages to actually come up with an answer that made any sense. I still shake my head that he could have possibly missed that using a 1024+ digit calculator means one is serious about verifying something, and not just 'estimating'. Your response gives me hope that you have more awareness than average. Beyond that, I will simply remark that the research I have cited thus far in the thread isn't mine. Secondly, it was searches on the physics forums and the recommendations of "science advisors" and, more importantly, "physics mentors" which led me to follow the links to these controversial issues in the first place.

* up

> Yes, QC methods were being used prior to electronic computers, in the late 20's and 30's. Newton didn't have any problems developing his algorithm for root-finding without one either.
>
> Computers WON'T 'do it for you'. That was the point I was trying to make. Most chemists in most situations do not use quantum chemical software at all. Most chemists don't know how to use quantum chemical software.
> The fact that a method is exact does not mean a given implementation of it is necessarily numerically stable in every case, or that code is bug free.

* same

> You can produce _exact_ pictures of hydrogenic orbitals simply by plotting the spherical harmonics, since that's what they are. But they won't give you a good approximation of any multi-electron system.
>
> The Uncertainty Principle is not a simple experimental limit, it's a theoretical limit. This is explained in every textbook and in a thread in this forum once a month or so.

My respect for you went up a notch, or stayed the same, with each of these comments. I agree with them, and always did, although that may not be obvious on a first reading of my past comments. They do not affect why I am doing anything, though.

> The fact that a particular implementation might not work (perhaps due to user error), does not invalidate the theory behind it in the slightest.
>
> I do deny sigma and pi bonds exist in molecules. Orbitals as well. These are all abstract concepts derived from various theoretical models, not real physical things.
>
> Force-field methodology doesn't have anything to do with 'sketching' anything. It's a semi-classical physical model, not a single scalar or vector field or anything of the sort.

Gee. In one line: my "hand" is also an abstract concept built upon sigma and pi abstract concepts, and I can draw my hand with my hand -- which does not invalidate my theory of what a hand, or a sigma or pi bond, is -- in the slightest.

> All DFT methods in practical use for molecules are more accurate than the Hartree-Fock method. Obviously you don't know the relative size of dispersion effects if you think that would be a deal-breaker.
>
> The Schrödinger equation does not have even a superficial resemblance to the equations of the Bohr-Sommerfeld models, in my opinion.

OK. You're entitled to your opinion, even when it has almost nothing to do with what I said.
Sommerfeld's modification is not Bohr's -- and de Broglie only interpreted, but did not change, Bohr's theory. Nor can I prove or falsify what you said -- so I'll join it: since I don't know the cause of what I witnessed, better safe to include dispersion than sorry I didn't.

BTW: The experiment I am trying to understand has nothing to do with cold fusion or Blacklight Power, etc. It is purely related to the change in economics over the last 100 years as to what processes are economically viable and environmentally friendly.

> It's "Heisenberg" not "burg" (since you seem to think spellings are of great importance)

! Then you admit we are on par, rather than you being so much more superior than I am?! Wow. I'm flattered.

> You think it's reasonable to demand that they give you software that took them years to write, no strings attached? Crazy.

! Touche! LOL: Say that again with a straight face after buying $50K software to solve a problem which you have only a week to solve before your competitors take your client away; then discover that there are bugs in the software you have purchased, and after you hold hands with tech support to solve their mistake for free (which only they can actually fix in the code) -- and after they turn around and sell your fixes to your competitor and refuse to pay you a dime. (And that is what the nicer of the companies will do. Murphy was an optimist.)

I am laughing at the notion that you might think you will convince anyone that *refusing* to spend money is crazy, or that a refusal to buy is equivalent to *demanding* something for nothing?

By the way, thank you for giving *some* of your time away free. I am thankful to Dr. Young (no, the name is not a 'simple' coincidence) for selling me a college physics education, and for allowing me to use the knowledge after class as public domain -- including his comments.

If one (especially you) is crazy for giving things away, or even thinks so -- I have the names of some very good psychiatrists with multiple degrees and understanding of other subjects. They do interview before accepting clients, and not many get accepted, but you might get accepted if you are lucky.

11. Jun 24, 2010

### edguy99

You could be right, I'll try again (feel free to correct):

1. Units: femtoseconds (10^-15 s) and nanometers (10^-9 m) to show photon size and movement -- with a viewing width of 10 micrometers and a steptime of 10 femtoseconds, a 600 nanometer photon will bounce back and forth on your screen at a comfortable rate when viewed at 30 frames per second. Electrons and protons will not show any motion.

2. Units: attoseconds (10^-18 s) and femtometers (10^-15 m) for electron motion -- with a viewing width of 10 angstroms and a steptime of 10 attoseconds, an electron with 10 eV of kinetic energy will move through your screen at a comfortable rate of about 2 picometers/attosecond; photons travel too fast to be seen, at 300 picometers/attosecond, and protons (heavier) don't really move at all over this timeframe.

3. Units: yoctoseconds (10^-24 s) and attometers (10^-18 m) for nuclear decay of neutrons -- with a viewing width of 10 femtometers and a steptime of 1 yoctosecond, a down quark changes to an up quark and emits an electron and an electron antineutrino. The W boson would be visible for less than a yoctosecond, and the neutrino would float off at a comfortable pace.
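As a sanity check on the numbers in item 2, the speeds convert directly (a sketch; nonrelativistic kinetic energy assumed, which is fine at 10 eV):

```python
import math

m_e = 9.109e-31        # electron mass, kg
eV = 1.602e-19         # joules per electron-volt
c = 2.998e8            # speed of light, m/s

# Nonrelativistic speed of a 10 eV electron: E = (1/2) m v^2
v = math.sqrt(2 * 10 * eV / m_e)   # ~1.9e6 m/s

# Convert m/s to picometers per attosecond: 1 m/s = 1e-6 pm/as
to_pm_per_as = 1e-6
print(round(v * to_pm_per_as, 1))  # electron: ~1.9 pm/as
print(round(c * to_pm_per_as))     # photon: ~300 pm/as
```

So the quoted "about 2 picometers/attosecond" for the electron and "300 picometers/attosecond" for photons both check out.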

You mention 3 animations: the Bohr magneton, orbital transitions, and the Stern-Gerlach experiment.

I will give some thought to time and distance scales and post later. You may well have something in mind already; if so, please post. With regard to changing timescales: I don't see how this is done accurately, as certainly the locations of particles will not change when you change the timescale, but the momentum and momentum direction do change, and I don't see how they can be recalculated without going to an n-body problem... I find it saves time to do 2 or more animations, one at the slow rate and a different one at the fast rate.

12. Jun 25, 2010

### andrewr

mmmm... much better.
At this resolution, in free space (c = 3.0x10^8 m/s), the wavefront steps a distance of 300 nm per femtosecond. If the screen distance is 10 μm, that means steps = 10/0.3 ≈ 33.3 frames.
So it takes around a second to cross the screen; if a wavefront simulates in dispersive media with a slower rate of travel, this would indeed be comfortable.
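The arithmetic above can be put in a few lines (a sketch; one displayed frame per 1-fs simulated step, shown at the 30 fps rate mentioned earlier in the thread):

```python
# Time for a light wavefront to cross the screen, one frame per 1-fs step.
c = 3.0e8                      # speed of light, m/s
step = 1e-15                   # simulated time per displayed frame, s
screen = 10e-6                 # viewing width, m

dist_per_frame = c * step      # 3.0e-7 m = 300 nm per frame
frames = screen / dist_per_frame
seconds_on_screen = frames / 30.0   # displayed at 30 frames/second

print(round(frames, 1))             # ~33.3 frames
print(round(seconds_on_screen, 2))  # ~1.11 s to cross the screen
```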

#2 I will just let sit; #3 -- wow. At least I don't think I will have to worry about that timescale...

13. Jun 25, 2010

### edguy99

A couple of questions and comments.

Regarding the Stern Gerlach experiment:

If you wanted to do an animation with a viewing width of 10 meters (to show the equipment), what timestep would show electrons moving across your screen at a reasonable speed? I.e., if I had a laser gun pointed over your shoulder checking the electrons' speed, how fast would they be going?

Clearly the spin-up electrons move one way and the spin-down electrons move the other way based on the magnetic field, but I don't think they all move "exactly" the same. Is there research that shows the pattern of the distribution of electrons that show up over time, or do they indeed all move the same amount up or down?

Regarding orbital transitions:

I think you may need to consider:

> Units: attoseconds (10^-18 s) and femtometers (10^-15 m) for electron motion -- with a viewing width of 10 angstroms and a steptime of 10 attoseconds, an electron with 10 eV of kinetic energy will move through your screen at a comfortable rate of about 2 picometers/attosecond; photons travel too fast to be seen, at 300 picometers/attosecond, and protons (heavier) don't really move at all over this timeframe.

A viewing width of 10 angstroms produces the kind of picture you would see of "real atoms", like the IBM pictures from scanning electron microscopes; what you don't get is the timescale. A timescale of femtoseconds works well to view things like proton/nucleon vibrations, but even low-energy "unbounded" electrons are moving way too fast to see. A timescale of attoseconds allows you to build your electron probability clouds for the frame and deal with a free electron floating by that you know, because of its momentum, will be somewhere else in the next few frames. This timescale has the same problem you talked about, where photons will seem to appear out of nowhere.

Also,

I would appreciate it if you could expand on this comment.

Regards

14. Jun 27, 2010

### andrewr

OK. (SG henceforth in my notes.)

I intend to mimic the MIT reproduction of the experiment using hydrogen, either accepting the factor-of-40 mass change and therefore an expected 40x scale change, or artificially increasing the proton mass 40x to simulate a fake "potassium" atom in transit as hydrogen. I haven't calculated the speeds, but the transit time and dimensions are essentially fixed by the experiment. If I modify them arbitrarily, then verification of the simulation will be difficult at best. If I were to animate a cross-section of the experiment, the atom would be vanishingly small. I was thinking to simply collect the [x,z] data points at the end of many experiments for a statistical sample equivalent to the SG photograph.

Note the MIT experiment uses a slightly different shape of electromagnet than SG did. I believe SG's is more of a triangular wedge near a half-circular socket. MIT's is on p. 17, found via a Google search for "Lab 18, MIT, Stern-Gerlach":

web.mit.edu/8.13/www/JLExperiments/JLExp_18.pdf

I'm going to answer these next two questions out of order, as I think I may be clearer that way.
I stated the idea poorly. I am less certain of the shape/distribution of the EM field as one approaches an electron's vicinity. An electron is a dynamic magnetic dipole -- for there are no monopoles of magnetic "charge" -- and thus the field changes around the electron as it "moves".
(Despite Wikipedia's present fictional monopole account... which is "mathematically" equivalent, supposedly -- but whoever wrote that makes me groan...)
A dipole's detectable field scales as r^-3, where r is somewhat ill-defined; but as one gets farther away from the source(s), the error in r from variations in dipole shape contributes less and less to the overall values. I think (but am not absolutely certain) that much of the geometric distortion of a dipole is primarily in higher terms (r^-4, etc.).
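The r^-3 scaling mentioned above is easy to check numerically against the standard point-dipole formula, B = (mu0/4π)[3(m·r̂)r̂ - m]/r^3. A sketch on the dipole axis, taking the electron's moment as one Bohr magneton (an assumption for illustration):

```python
import math

mu0 = 4 * math.pi * 1e-7      # vacuum permeability, T*m/A
mu_B = 9.274e-24              # Bohr magneton, J/T (assumed moment magnitude)

def dipole_Bz_on_axis(m, z):
    """B_z on the dipole axis: B = (mu0 / 4 pi) * 2 m / z^3."""
    return mu0 / (4 * math.pi) * 2 * m / z**3

# Field on axis at 1 nm and at 2 nm from an electron's spin moment:
b1 = dipole_Bz_on_axis(mu_B, 1e-9)
b2 = dipole_Bz_on_axis(mu_B, 2e-9)
print(b1)          # ~1.85e-3 T at 1 nm
print(b1 / b2)     # 8.0 -- doubling the distance cuts the field by 2^3
```

The factor-of-8 drop per doubling of distance is exactly the r^-3 behavior; the higher multipole corrections discussed above would fall off faster still.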

The primary meaning of my comment is that there are several unknowns affecting the field of an electron which become more important as one computes closer to its location. These unknowns are not necessarily a result of the Heisenberg uncertainty principle.

In general, the idea that an electron is a physical particle capable of "spinning" on an axis is discouraged -- and although I am not certain which experiment(s) disprove the idea altogether, I seem to remember someone mentioning historical experiments that disprove any "radius" being assigned to an electron (either detected or not...). But if an electron is not a particle with spin, then it must still at least be a distribution of electric field over time; the latter idea is true whether or not an electron is a physical particle.

In SG, the developed photograph of the silver atoms which hit the target is quite distinct -- so much so that there is little variation from electron to electron (and perhaps even less spread would be realized if the experiment were improved).

See p. 56 for a photo of the actual silver atom pattern on the film target.
http://www.owlnet.rice.edu/~hpu/courses/Phys521_source/Stern-Gerlach.pdf

A couple of things to note about the photo.

The "ideal" silver dipole would be located at the center (the x axis in the photo) for one launched non-skew into the magnetic field. It is precisely here that a spike develops in the x direction -- that spike is generally reasoned to have something to do with the non-symmetry of the pole faces on the magnet, although no explanation I have read is really satisfying, for a mathematical analysis of SG's actual magnet has not been done. The plot's other side is flattened, and that is typically described as being due to the silver atoms physically hitting the rounded part of the magnet cavity.

I didn't see a plot for the MIT version of the experiment, which uses a magnet shape which ought not to have that spike either. The only obvious clue pointing to the anomalous nature of the spike is that it is asymmetrical -- the x axis was oriented vertically in the actual experiment, so the spike is even against gravity, if I recall correctly. But if the explanation of the other side's thinness is truly physical clipping, then there is no way to be certain the spike would not have shown up experimentally due to the field shape, had the atoms not hit the surface of the magnet. One really needs a predictable *linear* variation in magnetic field strength over space to be able to effectively correct for the physical manufacturing limitations of the slits, etc., and SG doesn't really have that.

Comparing the demagnetized plate to the magnetized one, there is a fairly strong indication that the plates were sent on the postcard willy-nilly, with no definite orientation. (The mirror-image reticule pattern on the right image reinforces that notion...) The demagnetized version ought to be a purely flat line -- but isn't. In the lower half of the line, a clear broadening of the pattern can be seen -- which, if the magnet is truly off, can only be attributed to the shape of the slits letting the silver out, or to the position of the silver oven behind the slits biasing the source statistically.
In any event, on careful inspection I believe I see trace widening in the upper half of the pattern in the magnetized view, suggesting that the flat-line slide is actually rotated 180 degrees from how it was taken in the other photo, with an optional mirroring on top of that...

Using that information as a corrective measure, and knowing that the slit is horizontal, one can estimate the amount of classical skew in trajectory a silver atom would have (incoming y and x angles with respect to the demagnetized photo, in contradistinction to a perfect 90-degree angle); and since the size of the slit is likely large compared to the wavelength of the silver atom, these classical trajectories ought to be fairly accurate. Each point (y on the photo) then has a definite angle (source dx/dz, dy/dz) from which the silver atom could have come, and the effect of this distribution needs to be calculated and compensated for to determine what an idealized simulation ought to look like.

Generally, there ought to be fewer angles of approach as one gets closer to the edges of the slit (y on the photo), although dx/dz will be fairly constant across the whole slit, except where the demagnetized photo shows thickening.

So, I think the most accurate part of the magnetized photo will be the lower-right quadrant, taking the approximate center of symmetry of the shape as the origin.
Using a straightedge, the line widening as one gets closer to y=0 increases very linearly (e.g., the rightmost trace / lower-right quadrant), until extremely close to the magnet center, where the field of the magnet becomes less well known / near the surface of the magnet. So, there are two (ideally) linear effects with approach toward the center: 1) the width of the trace, and 2) the offset of the trace from the picture's y axis.

The offset is explained by the experiment itself -- e.g., the proportionality between the magnetic-field-intensity gradient and x offset in the photo varies approximately linearly in SG (and linearly in the MIT version, according to the MIT text). The widening of the trace, however, is something whose theoretical behavior I don't know; it is caused by "all other differences" in the electron, including any Heisenberg and QM interference effects.

If only the electron's position [delta x, delta y] were affected, I could more easily separate out what causes the thickening -- but as there is also skew, the problem isn't solvable qualitatively in my head. The variation in trace width, then, is something I don't know whether an idealized experiment would replicate or not, and I will need to work that out before verifying my simulation against SG.

If I am able to correct (reduce data) in the photo by mathematically modeling the skew, other features might become visible which presently are not. But to really do this correctly, the best source of information would be a run of the MIT experiment with computer-usable data points to operate on. I'll have to look around and see if anyone has posted a good run; if not, perhaps I will put some thought into reconstructing the experiment. I have the vacuum equipment, some very pure micro-crystalline-level homogeneous iron, and the machining equipment to reconstruct the magnet, along with motorized (robotic) micrometers -- but that would take quite a bit of time and effort for me to put together in my present state... If anyone knows of an online data source, I would appreciate it.

I'll probably post a bit more tomorrow... I am still trying to resolve a second question (or questions) concerning the time/intensity distribution that an electron dipole has, by focusing on what a quantized magnetic moment actually means, both classically (the limit) and quantum mechanically. I will need to think out loud, and perhaps will make some mistakes when doing so -- but hopefully the sharp eyes of others will be helpful there, and I can get enough of an answer to my second question(s) to model it reasonably well for simulation purposes.

We'll just have to see what works once I get there.... but I can certainly try it.

--Andrew.

15. Jun 27, 2010

### edguy99

Thank you for the post and link: web.mit.edu/8.13/www/JLExperiments/JLExp_18.pdf; this helps the discussion a lot. From the figures, it appears they shoot potassium atoms through about a .5 cm channel of a 10 cm long magnet and check whether they hit a 4 mm palladium wire that is moved around. An animation width of 1 meter sounds good, but I remain stuck on the timeframe that would show, on your computer screen, atoms emerging from the oven (say one per quarter second) and hitting the wire within a second or two...

The speed of the potassium atom appears to depend on the oven temperature, which drives the vapor pressure, which determines the speed of the atom. I have assumed here that the speed is important, since a slow potassium atom spends a longer time under the magnet than a fast atom and hence would be bent more. This would seem to suggest that even a small adjustment in temperature would cause the atoms either to spread out too much and be undetectable, or not to spread out at all. Perhaps I am missing something there, or maybe the temperature at 190 degrees produces a consistent enough speed that you do not see the effect. Or something else...
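The temperature sensitivity worried about above can be put in numbers: for an ideal gas, the most probable speed in the oven goes as sqrt(T). A sketch for potassium-39 at the 190 °C oven temperature mentioned here (beam-geometry corrections to the speed distribution are ignored):

```python
import math

k_B = 1.381e-23                 # Boltzmann constant, J/K
amu = 1.661e-27                 # atomic mass unit, kg
m_K = 39 * amu                  # potassium-39 mass

def most_probable_speed(T):
    """Most probable speed of the Maxwell-Boltzmann distribution, sqrt(2kT/m)."""
    return math.sqrt(2 * k_B * T / m_K)

v_190 = most_probable_speed(190 + 273.15)   # oven at 190 C
v_200 = most_probable_speed(200 + 273.15)   # 10 C hotter

print(round(v_190))             # ~444 m/s
# Deflection under the magnet scales as 1/v^2 (transit time squared),
# so a 10 C change shifts the deflection by only ~2%:
print(round((v_190 / v_200)**2, 3))
```

This suggests why the effect survives modest temperature drift: the deflection depends only weakly on oven temperature, though the full thermal *spread* of speeds still smears the pattern.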

I agree completely. There are at least two major problems: the size you mention -- I believe radius calculations for the mass involved come out far too high (300 femtometers versus a Lorentz radius of 3 femtometers) -- and also, I think, the fact that the electron must be rotated through 720 degrees to flip over, while a spinning disk needs only 360 degrees.

Anyway, thank you for the info. If I can nail down the speed of the atoms, I visualize a pretty nice first animation showing an oven (with a setting for temperature) blasting out potassium atoms at a particular speed??? and rate??? and, over a timespan (microseconds???), you would see atoms hit a wire after being bent by a settable magnetic field, with the proper bending of spin-up and spin-down electrons. The nice thing about starting the animation at this level of time and space magnification is that it's pretty clear what happens at this scale, and there is no need to probe the details of what the atoms actually do or look like. Probing a more detailed time or distance scale will be more difficult.

16. Jun 30, 2010

### andrewr

Without checking, that sounds correct from what I remember.

I agree.
The experiment requires the student to wait a long time for the temperature to stabilize, otherwise the data will be in error -- so I am fairly sure that small variations in temperature affect speed significantly.

The divergence of the magnetic field is what causes a vertical accelerating force on the magnetic dipole in the first place, and divergence is independent of speed. The traveling speed of the dipole, then, has nothing to do with vertical acceleration -- e.g., the overall charge is neutral, and the + and - charges are both moving in the same direction, so not even a Hall effect occurs. However, the + and - charges would be deflected in opposite horizontal directions in proportion to the speed with which the atom travels through the B field, and I am sure (classically) that this would cause the unpaired electron and the remaining nuclear charge to align on a horizontal line with respect to each other, in a plane 90 degrees to the magnetic field direction.

There is, then, a vertical force vector independent of speed, and a horizontal one which is counterbalanced by electrostatic attraction. Since deflection comes from integrating the acceleration (on average f/m), the deflection depends on the time spent in the magnetic field. Even at a constant temperature, speeds will vary statistically. These variations add to the trace-width spread in the photos shown, and also to the (small) skew angle, which does the same thing. The lab sheet may contain enough information to compute the speed variation, and one can compute a range of transit times which, say, 99.8% of atoms will fall within; the spread of the atoms (not including skew angle) due to this variation in time will be a fraction of the total deflection. Note that since the deflection inside the magnet grows as the square of the transit time, a given percentage error in time produces roughly twice that percentage error in y deflection.
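The deflection argument above can be sketched numerically. This is a rough, hedged sketch: the field gradient and oven temperature below are assumed illustrative values, not figures from the MIT lab sheet (only the 10 cm magnet length comes from the thread).

```python
import math

MU_B = 9.274e-24        # Bohr magneton, J/T
KB = 1.381e-23          # Boltzmann constant, J/K
M_K = 39.1 * 1.66e-27   # mass of a potassium-39 atom, kg

grad_B = 100.0          # ASSUMED field gradient, T/m (illustrative only)
L_magnet = 0.10         # magnet length, m (from the lab sheet)
T_oven = 190 + 273.15   # ASSUMED oven temperature, K

v = math.sqrt(3 * KB * T_oven / M_K)  # rms thermal speed
t = L_magnet / v                      # transit time through the magnet
a = MU_B * grad_B / M_K               # vertical acceleration (speed-independent)
y = 0.5 * a * t**2                    # deflection accumulated inside the magnet

# Since y ~ t**2 ~ 1/v**2, a fractional speed error dv/v shows up
# roughly doubled in the deflection: dy/y ~ 2 * dv/v.
print(f"v = {v:.0f} m/s, transit = {t*1e6:.0f} us, deflection = {y*1e3:.2f} mm")
```

With these assumed numbers the deflection comes out to a fraction of a millimeter, the same order as the wire diameter being scanned -- but again, the gradient is a guess, so treat the output as a scaling exercise, not a prediction.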

You'll have to excuse my ignorance of vocabulary; as a BSEE, I was taught primarily about semiconductors, which is a bulk phenomenon. Different, perhaps somewhat simplified, terminology is employed there, because the dominant effect is not always the same as it would be with discrete items. When you refer to a "Lorentz radius", are you speaking about a contraction in length due to motion of some kind? And regardless of that, what calculations did you do to arrive at those particular numbers?

You're welcome. And thanks for just speaking about the subject in general; in your own way you have brought up something I did not think about regarding this experiment -- and that is the fact that the scale must exceed the few cubic microns which I can simulate realistically. In order to simulate the MIT version of SG, I will have to find a way to delete nodes from one side of the space I am simulating and add those nodes to the other side so as to keep the moving atom inside a window of simulation space. (Not to mention, computing a new value for the moved nodes...)
That will, unfortunately, complicate the simulator enough to seriously increase the time it takes me to implement it as my present simulator does not have that capability. :yuck:

+1 to the to-do list.

More will be coming about the "spin" of an electron when I have a chance this evening, along with the question(s) I have about how to work with it -- and a short summary of what I am presently considering. (Next steps.)

17. Jul 1, 2010

### andrewr

OK. Fell asleep on vacation ... and fell asleep last night early.... and this morning again .. but I'm back.

A brief review of the ideas I need to build the simulator:

So far I have spoken about orbital transitions -- which say nothing about *where* an electron (or other item) is at the instant before and after the state change. Only the probability of where it will be found in the future changes -- the "actual" position does not change, for a single run of the simulation, at the instant of the state change. So one may think of a simulated orbital change (state change) as a change in the statistical rule governing where the item will diffuse in the future.

alxm ties "spontaneous" decay of excited-state orbitals to a "disturbance" in the EM field by invoking Fermi's golden rule. This idea is new to me: in semiconductor physics the mechanism of spontaneous decay is *NOT* explained; rather, it is pragmatically treated as an empirical property of the semiconductor, modified by disturbances in the EM field (e.g., in a perfect crystal the lifetime would be empirical -- but in a doped one, the lifetime of an excited state is the empirical value *decreased* by the probability that a fermion will diffuse into a defect in the crystal, causing a stimulated decay of the excited state). Since no one has contradicted alxm, his opinion stands as the method I will attempt to use for simulation.

The remaining issue I have to deal with is the "spin" state of an item. This state is what causes an "anomaly" in emission spectra similar to fine structure (I haven't studied this, so I am speaking in general terms), based on the magnetic state of individual protons in the nucleus coupled to the magnetic state of the light-emitting electron. The anomaly is extremely small, since the magneton is scaled by the mass of the item -- and protons, being relatively heavy, have roughly 1/1836 times the effect of an electron.

I have no engineering text describing the exact nature of the various experiments on the Bohr magneton, and guidance/input from those who might know of experiments/articles to review would be appreciated -- but I do know that the spin of an electron has a mechanical moment, from an experiment done by Einstein and a colleague (the Einstein-de Haas effect), and I do know that the orientation of the magnetic dipole in space can change.

Since classical angular momentum is found experimentally when dealing with the Bohr magneton -- along with non-classical quantum effects -- I am simply going to write down what I think I know, and hope for corrections, or at least suggestions about what (if anything) I might be over-simplifying.

A spinning top (mechanical moment/dipole) which has a torque applied to re-orient its axis of rotation will resist re-orientation by precessing about that axis. Gyroscopic formulas tend to focus on the period of precession -- and I am unaware of how one calculates how much a top will reorient its axis of rotation with respect to time and torque, nor whether such a reorientation is even possible with a frictionless device....
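For reference, the standard steady-precession result is Omega = tau / (I * omega): in the idealized model the applied torque drives precession about the vertical, not a direct tipping of the spin axis. A minimal sketch, where every numerical value is a made-up illustrative parameter:

```python
import math

# All values below are made-up illustrative parameters.
m = 2.0                    # wheel mass, kg
r = 0.3                    # wheel radius, m
omega = 2 * math.pi * 5.0  # spin rate: 5 revolutions per second, in rad/s
I = m * r**2               # moment of inertia, thin-hoop approximation

d = 0.15                   # pivot-to-hub distance, m
tau = m * 9.81 * d         # gravitational torque about the pivot, N*m

Omega = tau / (I * omega)  # steady precession rate, rad/s
print(f"precession period ~ {2 * math.pi / Omega:.1f} s")
# In this idealized frictionless model the axis only precesses; it does
# not steadily tip into alignment with the applied torque.
```

Note the design of the formula: a faster spin (larger omega) gives slower precession, which matches the bicycle-wheel demonstration discussed later in the thread.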

I assume reorientation is possible even in a frictionless environment, since permanent magnetism in non-superconducting material is (as far as I know) a purely spin-based phenomenon, and there is no friction that I am aware of for a spinning electron (there are no electrons that *DON'T* 'spin'). I suppose it is possible that the quasi-Keplerian motion of an electron around an atom (P orbital, etc.) might also contribute to the magnetic field, but everything I was taught in chemistry indicates that unpaired electrons are the sole cause of magnetism. So I am assuming that is the case for the moment. (Pun intended.)

Also, I have seen the "orientation" of spin described as what changes in the NMRI/MRI effect.
I have not seen any mathematics which explicitly shows how, QM-wise, engineers came to predict that the magnetic dipole would "FLIP" when appropriate resonance radiation was applied to the protons in hydrogen (the typical NMRI target), although it does make sense that the energy required is proportional to a statically applied magnetic field. The energy required to reorient the magnetic dipole is readily computable from analogous situations in DC/AC motors used for power generation: the magnetic dipole rotor requires work to flip its orientation while in the field of the motor's stator.
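The flip-energy scale mentioned above can be put in numbers: flipping a dipole of moment mu between the aligned and anti-aligned states in a static field B costs dE = 2*mu*B, and setting dE = h*f gives the resonance frequency. A sketch using standard constants -- the 1 tesla field is an assumed example value:

```python
H = 6.626e-34      # Planck constant, J*s
MU_B = 9.274e-24   # Bohr magneton (electron moment scale), J/T
MU_P = 1.411e-26   # proton magnetic moment, J/T

B = 1.0            # ASSUMED static field, tesla (example value)

# Flip energy dE = 2*mu*B; matching resonance frequency f = dE / h.
f_electron = 2 * MU_B * B / H   # electron-spin (ESR) regime
f_proton = 2 * MU_P * B / H     # proton (NMR/MRI) regime
print(f"electron flip resonance: {f_electron/1e9:.1f} GHz")
print(f"proton flip resonance:   {f_proton/1e6:.1f} MHz")
```

This reproduces the familiar scales: tens of GHz per tesla for electron spins, and about 42.6 MHz per tesla for protons, which is the standard proton MRI resonance frequency.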

I am presuming that a rule akin to Fermi's golden rule needs to be used in this case (spin flip) -- but I have not come across, in commonly available literature, how NMRI is deduced, nor how to handle the QM states.

Any pointers/example calculations which could be applied in the case of a non-static magnetic field would be appreciated, as large uniform magnetic fields are not what will be found in molecular simulation -- but rather weak dipole moments from various items located near the "electron" or other simulated item of interest which dynamically can re-orient.

A second issue that comes to mind, which may or may not affect the discussion, is that Ampere's law and other effects computed using calculus assume a uniform magnetic field when taking the limit as area goes to zero (e.g., differential areas or volumes... whose consistent existence is questionable given the Heisenberg concept...). A magnetic dipole's strength is measured as current times the area enclosed by the current loop: mu = I x area. However, making a large rectangular planar loop of wire, applying a constant current to it, and then probing the field's strength with a compass has convinced me that the B-field magnitude in the plane of the loop, and inside the loop, is stronger as one gets closer to the wire -- and weaker as one gets closer to the center of the loop.

When replacing distributed currents with individual electrons, protons, etc., the magnitude of the current is set by the velocity of the charged atomic item relative to the path length required to circle the "enclosed" area. Therefore, the product of the enclosed area and the velocity of the circulating charge (or E-field disturbance) is constant for a given constant magnetic moment (e.g., the identity of the Bohr magneton for an electron). So the diffusion of the electron in space that creates the magnetic moment must have an average value in an angular sense -- and because of inertial reference frames, helical paths must also be possible, where the path is closed only in a moving reference frame.
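The constant-product claim above can be made concrete with the simplest current-loop model: a point charge e circling at speed v on radius r gives mu = I*A = (e*v/(2*pi*r)) * (pi*r**2) = e*v*r/2, so fixing mu at one Bohr magneton pins the product v*r regardless of the orbit size. A hedged sketch -- the sample radii are arbitrary choices, not physical claims:

```python
E_CHARGE = 1.602e-19   # elementary charge, C
MU_B = 9.274e-24       # Bohr magneton, J/T

# mu = I * A = (e*v / (2*pi*r)) * (pi*r**2) = e*v*r/2
# => v * r = 2 * mu / e, a constant for a fixed moment.
vr = 2 * MU_B / E_CHARGE   # v*r product implied by one Bohr magneton, m^2/s

for r in (0.5e-10, 1.0e-10, 2.0e-10):   # arbitrary sample radii, m
    v = vr / r
    print(f"r = {r:.1e} m  ->  v = {v:.2e} m/s")
```

As a sanity check, plugging in roughly the Bohr radius (0.529e-10 m) gives about 2.2e6 m/s, the familiar Bohr-model orbital speed.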

What I have stated here is the sum total of what I am certain of regarding magnetic moments; discussion of how these ideas fit with QM/Schrodinger's equation would be greatly appreciated.

--Andrew.

Last edited: Jul 1, 2010
18. Jul 1, 2010

### edguy99

A quick sidenote: the June 10, 2010 issue of Nature has a pretty good article,

"Electron localization following attosecond molecular photoionization", which starts: "For the past several decades, we have been able to probe the motion of atoms that is associated with chemical transformation and which occurs on the femtosecond (10^-15 s) timescale. However, studying the inner workings of atoms and molecules on the electronic timescale has become possible only with the recent development of isolated attosecond (10^-18 s) laser pulses."

The attosecond timescale will (I think) prove to be very important in the history of the electron.

Regarding:
The object you may be thinking of is a Bloch sphere, animations here. Although not directly derived from general relativity, it does properly predict most (if not all) of the properties of the proton. Its important features are an axis that points in a particular direction, and a spin direction.

and:
Proton MRI is generally modelled by the Bloch sphere, animations here, which has the right kind of magnetic moment that you would expect from this type of spinning object. Although these animations show the protons "precessing", in a solid object it may be just as likely that these guys are not precessing, but somehow building up energy and then flipping. The Bloch sphere works great for protons and neutrons. For elements with multiple protons and neutrons, you don't just have spin up/down, more commonly termed +1/2 or -1/2. The lithium-7 nucleus, for example, has 4 spin states labeled +3/2, +1/2, -1/2 and -3/2. It does not matter so much what the picture looks like; it's just important that you label your objects with the right spin so you know how they react to the forces around them.

and:
The electron at this level is more difficult. As I understand it, the Bloch sphere does not work for it, as the magnetic moment is slightly over 2 times too big to work (it takes too much energy, for its size, to flip it...), and there are many other problems. Electron flipping is the basis of atomic clocks, and I think many of its assigned properties have come about to explain this.

The Lorentz radius is, "in simple terms, the classical electron radius is roughly the size the electron would need to have for its mass to be completely due to its electrostatic potential energy -- not taking quantum mechanics into account", and comes out to about 3 femtometers. I think that this much mass in a 3-femtometer radius would have to be spinning at something like 100 times the speed of light. I.e., the electron looks too small to fit that much energy in.
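The two numbers above can be checked with a short calculation: the classical (Lorentz) electron radius, and the rim speed a uniform solid sphere of that radius and mass would need for its angular momentum to equal hbar/2. The uniform-sphere model is an admittedly naive assumption made only to check the order of magnitude:

```python
import math

E_CHARGE = 1.602e-19   # elementary charge, C
EPS0 = 8.854e-12       # vacuum permittivity, F/m
M_E = 9.109e-31        # electron mass, kg
C = 2.998e8            # speed of light, m/s
HBAR = 1.055e-34       # reduced Planck constant, J*s

# Classical (Lorentz) electron radius: electrostatic self-energy ~ m*c^2.
r_e = E_CHARGE**2 / (4 * math.pi * EPS0 * M_E * C**2)

# Uniform solid sphere: L = (2/5)*m*r^2*omega = hbar/2
# => rim speed v = omega*r = 5*hbar / (4*m*r)
v_rim = 5 * HBAR / (4 * M_E * r_e)
print(f"classical electron radius: {r_e*1e15:.2f} fm")
print(f"required rim speed: {v_rim / C:.0f} c")
```

Under this assumed model the radius comes out near 2.8 fm and the rim speed near 170c -- the same spirit as the "something like 100 times the speed of light" estimate above.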

You commented "A dipole's detectable field scales as r**-3, where r is somewhat ill defined" and "the product of area enclosed to velocity of circulating charge (or E-field disturbance), is then constant if one is given a constant magnetic moment "

An important structure to consider is the twistor (one animated here; ignore the bottom row, as it is a work in progress, and sometimes you have to click a couple of times to get them to run right). It has a defined axis, and it has the 720-degree spin (watch carefully: as the blue dot goes through 360 degrees, the electron ends upside down and must go another 360 degrees to get right side up). It would pack a lot more energy into a lot less space, and has kind of a "weird" layout of its dipole field...

I have assumed spin flips do not apply to the SG experiment. There is no reason why the valence electron would not simply align itself with the external field and slowly pull the atom in one direction or the other, depending on whether it was up or down.

Don't know if this helps much, but I have to get back to trying to find the speed of the potassium atoms...

19. Jul 2, 2010

### andrewr

Three years ago I was speaking with a physics prof about the attosecond timescale; funny how it is just "news" now....
With such short pulses, the wavelength/frequency is not very precise. Many experimenters currently seem to be trying to excite the electron into localized "wave packet" states for study. I hope your prediction holds up well over time.

Thanks, I'll look at those -- the Bloch sphere is a state-space animation, not so much a physical-space animation, from what I gather so far. I expect to take a few days sorting through the ideas to get a feel for what they are about, and also reviewing a book I bought on introductory QM (2nd ed., by David J. Griffiths) which my college uses in classes for physics majors (e.g., I bought it on a whim to help refresh my memory and extend my BSEE knowledge...).

OK.

Oh ... m = E/c**2.
But electrostatic potential energy? That would be the repulsion/attraction between charges; do you mean the energy inherent in a magnetic dipole moment, or am I overlooking something obvious?
(Not that it matters, since the result is impractical anyway.)

OK, I see. That's going to take a little time to sink in.

That's a problematic assumption:
The angle that the electron makes with respect to the magnetic field is arbitrary when it is ejected from the oven. The magnetic field then applies a torque to the magnetic dipole, attempting to align it for minimum energy in the field. The quantized spin must be "chosen"/"observed" by interaction with the magnetic field, and thus "choose" spin up or down. In essence, I envision that it must "FLIP" from a random analog angle to a quantized one, either aligned or anti-aligned (unless it happens to be aligned from the start, which is highly improbable).

look at a classic angular momentum demonstration:

In the classic example, when the person's hands apply a large amount of torque, the gyro-wheel re-orients its axis of rotation quite rapidly -- however, when a smaller force is applied (e.g., the free-hanging state) the wheel will precess for quite some time with very minimal change in rotation axis.

In the SG experiment, one has to determine which behavior will result from the magnetic field (or to what degree the axis will rotate / flip ) from the initial condition which has the orientation of spin randomly and continuously assigned an angle relative to the magnetic field.

In the demonstration, I am not entirely certain that the wheel would have flipped at all without friction reducing its speed; however, if friction is not the main player in the effect, then a Bohr magneton can also be expected to slowly orient itself toward the magnetic field over time, in addition to precessing.

There is nothing stopping the "valence" electron from orienting itself relative to the nucleus very quickly -- however, during this orientation, I would not expect the rotational axis to change much. That is, the electron will translate easily relative to the nucleus, but, like a gyroscope, it will not change its rotation axis merely to change its position relative to the nucleus.

It is a good start; thank you; though it does add questions...

BTW:
I think the average (non-relativistic) speed, due to temperature, is simply the rms thermal speed:
v = sqrt( 3*Kb*T / m )
Kb = Boltzmann constant
T = temperature (kelvin)
m = mass of a potassium atom

I am not sure of the statistical spread (variance/deviation) of speed off the top of my head, but you can at least get an error slope -- % speed change vs. temperature change (degrees K or C) -- which would be useful.
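A quick sketch of the formula above, with the oven temperature of 190 degrees C taken from earlier in the thread. (Strictly, atoms effusing into a beam are flux-weighted toward higher speeds, so this bulk-gas rms figure is only approximate.)

```python
import math

KB = 1.381e-23          # Boltzmann constant, J/K
M_K = 39.1 * 1.66e-27   # mass of a potassium-39 atom, kg

def rms_speed(T_kelvin):
    """Root-mean-square thermal speed, v = sqrt(3*kB*T/m)."""
    return math.sqrt(3 * KB * T_kelvin / M_K)

T = 190 + 273.15        # oven temperature from the discussion, K
v = rms_speed(T)

# Since v ~ sqrt(T), the error slope is (dv/v)/dT = 1/(2T) per kelvin:
slope_pct_per_K = 100 * 0.5 / T
print(f"v(rms) = {v:.0f} m/s")
print(f"sensitivity: ~{slope_pct_per_K:.3f}% speed change per kelvin")
```

So at this temperature the atoms travel at roughly 540 m/s, and a one-degree drift in oven temperature shifts the speed by only about a tenth of a percent, which may explain why the experiment tolerates small temperature variations once stabilized.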

--Andrew.

20. Jul 2, 2010

### edguy99

I have a gyroscope just like this, and I find it much easier to compare with a Bloch sphere. I like how it spins with its axis straight up (or slightly askew as it precesses under gravity), whereas the bicycle wheel has a horizontal axis relative to gravity.

Many thanks for this; for some reason I thought I had to know much more to calculate the speed.

Last edited by a moderator: Sep 25, 2014