# DIY: Simulate Atoms and Molecules...

by andrewr
Tags: bohr magneton, excited states, simulation
 P: 263 I would like to see how far I can get writing some novel & simplified simulator code for molecular modeling based on QM.

------ Background follows ------ skip to the next post to get to the details of the project & starter QM question.

My son took chemistry this last semester (high-school level), and to help him out I wanted to find a free molecular modeler tool that would allow him to visualize VSEPR-like images for molecules (pi/sigma bonds), and also for myself; I was severely disappointed. As I have some QM work that I may later publish (free), I don't want to enter into any kind of IP contract for such a program. E.g., most molecular modelers are either 1. >$1K / seat or 2. proprietary and want signed papers ... although the methods and implementations have been around for nearly as long as computers.

This became acute a few weeks ago when I succeeded in doing an electrolytic process that I had previously thought impossible because water would be decomposed; but it worked in spite of that. I desperately wanted to understand this chemical process, although I am not a chemist but an EE. (Chem was my weak point in college.)

After a fairly diligent search, I discovered that the only program which was totally free in a sense usable to me was Ghemical, built on top of Jmol as a visualizer. Sad to say, the code is extremely buggy in places: the Java version never got beyond the splash screen, and the C/Fortran version could not completely compile -- especially in the area of orbital interaction based on old Fortran code -- so some features were disabled in my build. I still had the crude "optimize geometry" function, which I had hoped was close to VSEPR (van der Waals outlines...)
and that would have been good enough -- but even that has serious bugs. E.g., when I built a test crystal whose bonds I knew, the optimization turned it into a 2D item with atoms that would practically have to undergo nuclear fusion to be in the places they were optimized to...

I am not averse to fixing code -- I have done quite a bit of it, including kernel drivers -- but I would rather spend my time on something other than old algorithms such as Hartree-Fock, Slater determinants, and all the well-known problems of these variational methods, which make my eyes glaze over. Nobody yet has a flawless simulator.... So I would like to write my own code, both for my son and me -- but I do not understand QM sufficiently to complete the project, and would like some feedback on problems I see, and perhaps a heads-up on any issues where I show myself to be blatantly ignorant of something. (As a BSEE I solve solid-state QM problems regularly, but molecular modeling is a bit different...)

I have figured out how to numerically simulate a field, such as an E-M field, in ways that are reasonably accurate and efficient (sparse matrix). I wrote an electronics simulator using these techniques and am able to simulate circuit functions and nonlinear devices with extreme accuracy. I know this is sufficient as a foundation for actually building a virtual "space" for watching EM in the classical Maxwell sense from discretely located electrons and protons. So, I would like to experiment on the next possible step -- replacing Hartree-Fock approximations, Slater determinants, etc., with the pilot-wave interpretation of the Schrodinger equation and a statistical sampling of a (perhaps modified) Schrodinger equation.
As inspiration for trying this, I came across an old textbook (1950s) showing all of the QM orbitals S, P, D, F modeled *correctly* using a mechanical spinning top -- and the accuracy was surprising to me -- so I do believe that a semi-classical simulation can produce fairly accurate results.

 P: 263 My first goal is a simple electron simulation, and perhaps atomic simulation of hydrogen and helium. In particular, I would like to be able to simulate effects of the Bohr magneton, orbital transitions, and the Stern-Gerlach experiment; which ought to work for hydrogen, but fail for helium.

For now, I am looking at the Bohr magneton as an effect proceeding from a point (or ring) affecting the EM field -- I am simply using the definition of energy in a dipole magnet to simulate the electron spin's effect via a quasi-ZPE technique, and using the pilot-wave idea with QM/Schrodinger's to decide where the electron may statistically go. I know how to use the Boltzmann equation to determine the probability distribution among orbitals; but I am stumped on orbital transitions -- since I am using a dynamic time-based simulation. I also hope that the speed-of-light effects included in my EM field may yield results which are consistent with relativity ... but I'll settle for classical if not.

What I envision is that in the Schrodinger equation, individual "orbital" states can be computed given the immediately surrounding EM field. Ignoring for discussion, but not simulation, the motion of the proton -- the electron can be said to be in a state whose probability distribution is known (e.g., at statistical sample time t) for the EM field given, regardless of how many particles created that field. The self-field of the electron/proton/etc. is easily removed for that calculation.
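The Boltzmann weighting mentioned above can be sketched in a few lines (a minimal Python sketch; the hydrogen energies and degeneracies are standard textbook values, and the temperatures are purely illustrative):

```python
import math

K_B_EV = 8.617333e-5  # Boltzmann constant in eV/K

def boltzmann_occupation(levels_eV, degeneracies, temperature_K):
    # Relative occupation probability of each level:
    # p_i proportional to g_i * exp(-E_i / kT), normalised over the
    # levels supplied (a truncated partition function).
    kt = K_B_EV * temperature_K
    weights = [g * math.exp(-e / kt) for e, g in zip(levels_eV, degeneracies)]
    total = sum(weights)
    return [w / total for w in weights]

# Hydrogen n=1 vs n=2, energies measured from the ground state;
# degeneracy 2n^2 including spin. At room temperature essentially
# everything sits in the ground state.
p = boltzmann_occupation([0.0, 10.2], [2, 8], 300.0)
```

At 300 K the 10.2 eV gap makes the excited-state weight negligible; only at stellar temperatures does the second entry become appreciable.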
One can, in temporary memory, vary the total energy of an electron in, say, the 1s state to the inter-orbital state 1/2(1s + 2s) to determine when the EM field would permit the electron to change to the 2s state (although no change in electron position would be recorded). I intend to experiment with various algorithms for conserving energy... and am not concerned about that yet.

I know from semiconductor physics how to compute the probability of an item being in various states; but I have no idea what the mechanism/mathematics of spontaneous emission ought to look like. I haven't found any mathematics which I understand and which explains how to estimate/compute the time an electron will stay in an excited state -- and what dependencies that has on the EM field around it. In reality this may not be a problem, as I expect the EM field will always be changing and may simply naturally fall to lower levels at the appropriate times statistically; but I have no way of verifying whether a repeated experiment is right on average without some kind of estimate.

So, I'll stop here, as I have no idea how complex the answer will be to this first question: what determines the lifetime of an excited state? --Andrew.

 P: 88 Probably I am not helping you out with this reply, but I am also interested in developing my own C code for simulating quantum physics. Up to now I have succeeded in performing classical many-body gravitation and classical Ising models, but I would like to jump to the quantum realm. My wife used Gaussian software to perform chemical simulations in her PhD thesis, but I would like to build my own code from scratch.

Just when figuring out what to do in my mind, I found an important problem. I would like to get a dynamic picture as the result of the simulation -- I mean, the position of the electron at different times. But, according to quantum mechanics, getting the position implies collapsing the wave-function.
That leads me to a Zeno effect, or kind of, because the simulator is constantly performing position measurements on the system. I am concerned about that, but I am not sure if my point is correct or if I am missing something important. Thanks.

 P: 263 Heisenburg's principle applies to actual measurements. He himself, in one document I read, admitted that once the experiment was over, his principle did not by itself prevent one from determining where the measured item was in the past. (That is not to say that it is possible to determine it in every kind of experiment.) However, the Heisenburg uncertainty principle does destroy the ability to predict future positions after a measurement is made. There are philosophical issues which I really don't have answers to. But a simulator, like a hand grenade, does not have to be perfect, just close.

I would point you back to the physical top model I mentioned, which generates the different orbitals of quantum mechanics. The device used time-lapse photography to take pictures of a light which a specially designed gyrating top manipulated. In this way, the more often the top was in a certain position, the brighter that point was on the film. "Modern College Physics", Fifth edition, Harvey E. White, 1966, (C) Litton Educational Publishing, published by Van Nostrand Reinhold Company, N.Y., N.Y., p. 562. The top produced reasonably accurate pictures all the way to the 6**2F[7/2] orbital.

Since the model is macroscopic, Heisenburg does not really apply. Considering the activity of the top faithfully reproduced the orbitals -- and also considering that the uncertainty at the atomic level is high enough to make the precision of individual measurements impossible -- one is left with the conclusion that even if the model is not working identically with nature, it nonetheless is a good enough approximation to solve problems of interest.
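For comparison, the angular shapes those photographs reproduce can be generated directly from the spherical harmonics (a minimal sketch, hand-coding the two lowest harmonics rather than pulling in a special-functions library):

```python
import numpy as np

# theta is the polar angle measured from the z axis.
def Y_00(theta):
    return np.full_like(theta, 0.5 * np.sqrt(1.0 / np.pi))

def Y_10(theta):
    return 0.5 * np.sqrt(3.0 / np.pi) * np.cos(theta)

# Angular probability density of a p_z-like state: |Y_10|^2 is
# brightest along the poles and zero in the equatorial plane --
# the same two-lobed figure the time-lapse photographs trace out.
theta = np.linspace(0.0, np.pi, 721)
density = Y_10(theta) ** 2
```

The full hydrogenic orbital is this angular factor times a radial function; the angular part alone already gives the familiar lobe pictures.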
I analyzed the Schrodinger equation some years ago, substituting in a classical velocity interpretation for probability, and wanted to see how that would correlate with a periodic oscillator and other such standard fare. I especially wanted to do it to see whether Schrodinger's equation took into account correlation from repeating orbits (assuming a non-stationary one). The answer was a surprising no. The probability does not include such correlation, but appears to represent the probability of a single-shot experiment through each point. It is a diffusion equation, and I don't know why it doesn't contain any useful information about possible repeating "statistical" orbits -- except that the equation somehow implies that all paths (not speeds) are equally likely.

Again, this is philosophical -- not so much practical, for it may simply be an artifact of the equation -- but it would seem that in the 1D particle-in-a-box, it is perfectly legitimate to say that the electron is under one "hump" of probability *only* and never goes through the zero-probability points (infinite speed exceeds c) -- but the odds of the electron being in any one of the humps are equally likely. The alternate, more knee-jerk interpretation is to assume the electron travels by tunneling from one probability hump to another, since they are all there in the solution.

I am not familiar with Ising models, so I can't really comment on what the changes will be like. In the end, though, the Schrodinger equation looks to me like a 3D version of Bohr's equation -- that is, instead of saying the nodes along a classical trajectory must be exactly a wavelength multiple, Schrodinger's appears to say that the phase loop around *ANY* path must be 360 degrees, with the local "wavelength" varying according to the EM field.
Bohr's orbits, being circular and therefore tracing out a constant energy, had only a single value of wavelength; so Schrodinger simply generalized it and perhaps got rid of degenerate cases. (E.g., 1s does not orbit Kepler-like, but hovers based on the wave equation. Don't forget a Bohr-magneton spin effect is also involved in the hovering.) I am interested to see what happens.

Sci Advisor P: 1,865
 Quote by andrewr visualize VSEPR like images for molecules (PI/Sigma bonds)
VESPR doesn't involve pi and sigma bonds, that's valence-bond theory.

 most molecular modelers are either 1. >$1K / seat or 2. proprietary, and want signed papers
That doesn't mean others don't exist. E.g. PyQuante, and quite a few DFT codes are available for download directly. Also, programs like Dalton and ORCA require a license, but don't have any particularly draconian restrictions. Mostly they want to keep track of who is using their software, and make sure you cite them if you do.

 ... although the methods and implementations have been around for nearly as long as computers.
Longer.

 This became acute a few weeks ago when I succeeded in doing an electrolytic process that I had previously thought impossible because water would be decomposed; but it worked in spite of that. I desperately wanted to understand this chemical process, although I am not a chemist but an EE. (Chem was my weak point in college.)
QC software won't help you if you don't already know a great deal about what you're studying. You need a solid understanding of the chemistry involved before you can do calculations (you also need a solid understanding of the methods). Whether an electrolytic reaction can occur can be determined from standard electrode potentials.

 I would rather spend my time on something other than old algorithms such as Hartree Fock, slater determinants, and all the well known problems of these variational methods which make my eyes glaze over. Nobody yet has a flawless simulator....
Which "well-known problems"?
And what's wrong with Hartree-Fock? It's been the basis of the majority of QC methods for the last 80 years, and there have been good reasons for it. (Nor is it an algorithm, btw; it's a physical/mathematical approximation. SCF is an algorithm.) What do you mean by 'flawless simulator'? There are quantum-chemical methods such as CI and CC (which are based on Hartree-Fock) which are exact, in principle. You can hardly do better than that, as far as accuracy is concerned. (Scaling and speed are another matter.)

 So I would like to write my own code, both for my son and I -- but do not understand QM sufficiently to complete the project
The best recommendation I can give you is to get some textbooks and start studying QM and QC, work your way up to the current state-of-the-art in the field, and then you can start thinking about how to improve on the existing methods.
P: 263
 Quote by alxm VESPR doesn't involve pi and sigma bonds, that's valence-bond theory.
That's VSEPR, not VESPR; you just blew it.
Looking in my chem book and several others, either ball-and-stick sketches or overlapping p orbitals are drawn in the VSEPR chapter. What I said still stands: VSEPR-like sketches.
If you wish to quibble: I never said sigma and pi orbitals are officially part of VSEPR theory. I don't believe you are denying that sigma and pi bonds exist in molecules, or that VSEPR theory is used to sketch molecules. So it seems you are concerned over the accuracy of the sketching or something, or just didn't notice things like:

 However, molecular mechanics force fields based on VSEPR have also been developed
http://en.wikipedia.org/wiki/VSEPR_theory

And how does one sketch these force fields? Is it forbidden to use a pi LIKE bond sketch?
Or are you demanding I dot my i's and cross my t's because my statements might not fit your categories perfectly, and my kind hint that simile/analogy was meant was missed by you?

 That doesn't mean others don't exist. E.g. PyQuante, and quite a few DFT codes are available for download directly. Also, programs like Dalton and ORCA require a license, but don't have any particularly draconian restrictions. Mostly they want to keep track of who is using their software, and make sure you cite them if you do.
Draconian? I don't recall saying that ... So I'll spin that for fun -- I was merely indicating that I didn't want to offer temptation to anyone. The very fact that someone is worried about who is using their software and wanting credit implies that very temptation.
Do you trust me not to use your name and address against you *in any way* if I somehow am upset BY you and have these pieces of information? Do you want to give me your name and address with no strings attached to do with whatever I want? (I'm not Dracula, that's for sure.)

I'll look into it again ... but I think PyQuante was the one my friend tried that didn't work.
As far as I know, the DFT algorithms are probably worse than Hartree-Fock implementations -- they are designed to reduce computational load, not improve accuracy.

 Despite recent improvements, there are still difficulties in using density functional theory to properly describe intermolecular interactions, especially van der Waals forces (dispersion);
http://en.wikipedia.org/wiki/Density_functional_theory

 Longer.
The algorithms I was speaking of are computer-based, hence they couldn't have been around longer -- there was at least the delay for an implementation to be coded after the invention of the computer. Do you not count the Babbage machine as a computer? It does run a crude program.
Oh, am I being unfair?

 QC software won't help you if you don't already know a great deal about what you're studying. You need a solid understanding of the chemistry involved before you can do calculations. (you also need a solid understanding of the methods). Whether an electrolytical reaction can occur can be determined from standard electrode potentials.
Point sort of well taken. I came up with the generation of hydrogen as the result of electrolysis -- based on the standard electrode potentials, corrected even for temperature and concentration (in water).
What surprised me was a difficult-to-discover chemical reaction that someone else had found which made it happen in spite of this problem. In most experimental attempts I get hydrogen-contaminated product. The success of this method delights me, and I would like to explore why it works; I'm not sure if it has to do with complexing, or the particular acids used -- especially because I had to substitute a chemical in the reaction that wasn't in the original experiments, for environmental reasons.

Not only did the reaction succeed, but it went beyond even the original experiments in quality. I found that I could produce a product *and* the expected post-processing from the literature was unnecessary in my formulation.

I suppose I could try to figure out what happened using a slide rule, paper, and other easily available methods -- but then I would be likely to make a mistake, and it would take a long time to redo the calculation. I think it would be nicer to have my computer do it for me.

As far as a solid understanding .... how would you know?

 Which "well-known problems"?
I rest my case.

 And what's wrong with Hartree-Fock? It's been the basis of the majority of QC methods for the last 80 years, and there have been good reasons for it. (Nor is it an algorithm, btw, it's an physical/mathematical approximation. SCF is an algorithm)
It is an algorithm as well. Many mathematical statements are algorithms -- take sigma notation, for example. If you want to quibble over distinctions which don't make any difference... my brother was a math major; he likes to do it too, sometimes -- but less often now. Perhaps you would like to be introduced?

 What do you mean by 'flawless simulator'? There are quantum-chemical methods such as CI and CC (which are based on Hartree-Fock) which are exact, in principle. You can hardly do better than that, as far as accuracy is concerned. (Scaling and speed is another matter)
Oh, they are *exact* ... then either they are not solvable in many cases ... or the people who implement computer programs based on them simply can't get them to work in every single case ... or they are not free and require coding anyway. Am I mistaken? I am not omniscient, so since I get frustrated with other people's garbage, is there a problem with my attempting something new?

Historically, I can say nothing pisses me off more than spending $5000+ on a piece of software and having to contact tech support, only to have them tell me: "the solution is -- just don't do that." I am speaking from experience. Somehow the spend-$1000-on-a-gamble option just isn't appealing either.

 The best recommendation I can give you is to get some textbooks and start studying QM and QC, work your way up to the current state-of-the-art in the field, and then you can start thinking about how to improve on the existing methods.
I have studied variational calculus since the 8th grade -- my original purpose in doing so was to understand Schrodinger's equation.

I have a hard time believing what you say here as anything but double talk.
If these methods worked perfectly or exactly as you seem to be selling them, and if they really cover what I desire to simulate -- in the sense that I am proposing -- then they would definitively predict what happens in, say, Bell-inequality-based experiments. Considering most of the physics community is still arguing over that, including ZPE (with known flaws) as one objection, and no definitive set of experiments has yet buried the competition ... I see no point in arguing over what is better, or what is "perfect".

I'm interested in an experimental simulation method; the one I have chosen is just different -- it is likely not perfect either -- but I would like to try it. Now that I have cleared the air, are you interested in actually addressing the thread questions I am proposing, or something else?

If you have anything to add concerning how to compute how long an electron stays in an excited state, I'm interested. That is one of two issues which prevent me from finishing the code which I already have; as I said at the start of the thread, I'd like to see how far I get; and I mean after I know the answers to my actual questions.

--Andrew.
 Sci Advisor P: 1,865
-I do deny sigma and pi bonds exist in molecules. Orbitals as well. These are all abstract concepts derived from various theoretical models, not real physical things.
-Force-field methodology doesn't have anything to do with 'sketching' anything. It's a semi-classical physical model, not a single scalar or vector field or anything of the sort.
-All DFT methods in practical use for molecules are more accurate than the Hartree-Fock method. Obviously you don't know the relative size of dispersion effects if you think that would be a deal-breaker.
-Yes, QC methods were being used prior to electronic computers, in the late 20's and 30's. Newton didn't have any problems developing his algorithm for root-finding without one either.
-Computers WON'T 'do it for you'. That was the point I was trying to make. Most chemists in most situations do not use quantum chemical software at all. Most chemists don't know how to use quantum chemical software.
-You think it's reasonable to demand that they give you software that took them years to write, no strings attached? Crazy.
-The fact that a method is exact does not mean a given implementation of it is necessarily numerically stable in every case, or that the code is bug-free.
-The fact that a particular implementation might not work (perhaps due to user error) does not invalidate the theory behind it in the slightest.
-A quantum-chemistry program can't and won't tell you about the Bell inequality. It's an entirely different system and problem. They won't calculate scattering cross-sections or do your taxes, for that matter. Also, it's not the calculations themselves which are in dispute.
-It's "Heisenberg" not "burg" (since you seem to think spellings are of great importance)
-The Uncertainty Principle is not a simple experimental limit, it's a theoretical limit. This is explained in every textbook and in a thread in this forum once a month or so.
-You can produce _exact_ pictures of hydrogenic orbitals simply by plotting the spherical harmonics, since that's what they are. But they won't give you a good approximation of any multi-electron system.
-How to calculate lifetimes of excited states is also in every textbook. Fermi's Golden Rule; look it up.
-The Schrödinger equation does not have even a superficial resemblance to the equations of the Bohr-Sommerfeld models, in my opinion.
Beyond that, I won't address your 'independent research' since this forum has clear rules about that stuff. And that's pretty much all I have to say on the topic.
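To make the Golden-Rule pointer concrete: applied to the quantized radiation field it yields the Einstein A coefficient for spontaneous emission, A = ω³|d|² / (3πε₀ħc³). A minimal Python sketch for the hydrogen 2p → 1s (Lyman-alpha) transition, using the standard textbook dipole matrix element (the 10.2 eV photon energy is rounded):

```python
import math

HBAR = 1.054571817e-34      # J s
EPS0 = 8.8541878128e-12     # F/m
C = 2.99792458e8            # m/s
E_CHARGE = 1.602176634e-19  # C
A0 = 5.29177210903e-11      # Bohr radius, m

def einstein_A(omega, dipole_sq):
    # Spontaneous-emission rate for a transition with angular
    # frequency omega and squared dipole matrix element |d|^2.
    return omega ** 3 * dipole_sq / (3.0 * math.pi * EPS0 * HBAR * C ** 3)

# Hydrogen 2p(m=0) -> 1s: photon energy ~10.2 eV, and
# |<1s|z|2p_0>|^2 = (2**15 / 3**10) * a0^2 (textbook result).
omega = 10.2 * E_CHARGE / HBAR
dipole_sq = (E_CHARGE * A0) ** 2 * (2 ** 15 / 3 ** 10)
rate = einstein_A(omega, dipole_sq)
lifetime_ns = 1e9 / rate
```

This gives a rate near 6.3e8 s⁻¹, i.e. a lifetime of about 1.6 ns, which matches the measured 2p lifetime; the ω³ factor is why deep-UV transitions decay so much faster than optical ones.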
 PF Patron P: 271 "Nobody yet has a flawless simulator...." Although I won't dispute this, I write animation software in the timeframes/distance scales from microseconds/nanometers to show photon size and movement, attoseconds/femtometers for electron motion, and yoctoseconds/yoctometers for nuclear decay of neutrons under a collection of different physical forces. You may require several animations specifically to account for distance and time (large scale and/or small scale). Have you given any thought to distance and time scale for your simulation? Some view this type of animation as "speculative" and do not like the discussion here, but I would be happy to discuss it if you would like to send me a message or a sample picture of what you have in mind. Regards
P: 263
 Quote by edguy99 "Nobody yet has a flawless simulator...." Although I won't dispute this, I write animation software in the timeframes/distance scales from microseconds/nanometers to show photon size and movement, attoseconds/femtometers for electron motion, and yoctoseconds/yoctometers for nuclear decay of neutrons under a collection of different physical forces.
Hi;
I think the photon and electron scales are interchanged?

I was planning on using my electronics simulator base as-is for the project at first -- and modifying it later for improvements. There is a saying: "a bird in the hand is worth two in the bush."

The electronics simulator auto-adjusts time-steps depending on interaction values -- that is, the computed timesteps become larger as the path(s) become more predictable / repetitive of past history -- and smaller where a small change can cause a large instability.

The algorithm is complicated, but -- like a video game drawing algorithm -- it is carefully optimized in the inner loops. Roughly speaking, it allows me to simulate around 1000+ items in a field, and easily 100,000+ nodes in electronic circuitry, on a modest single-core Celeron processor (SSE2/SIMD) -- although the computation slows dramatically with *randomness (non-repetitive change), and conversely accelerates with the repetition of similar events/changes. (*Randomness within a steady state is not what I mean.)

The details of the algorithm are beyond the scope of Quantum Physics, obviously, and are a programming/data-structures nightmare; but it's extremely powerful and fast compared to alternative solutions used in the electronics simulator industry (eg: typically Laplace transformations).
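The step-size-control idea can be illustrated with a toy version (a minimal sketch, not the production algorithm: step-doubling error control wrapped around a plain Euler integrator; the tolerances are illustrative):

```python
import math

def adaptive_euler(f, y, t, t_end, dt, tol=1e-6):
    # Take one Euler step of size dt and two of size dt/2; if they
    # disagree, the local dynamics are too fast for dt, so refine.
    # If they agree well, the path is predictable, so coarsen.
    while t < t_end:
        dt = min(dt, t_end - t)
        one_step = y + dt * f(t, y)
        half = y + 0.5 * dt * f(t, y)
        two_steps = half + 0.5 * dt * f(t + 0.5 * dt, half)
        err = abs(two_steps - one_step)
        if err > tol and dt > 1e-12:
            dt *= 0.5        # instability suspected: shrink the step
            continue
        y, t = two_steps, t + dt
        if err < 0.25 * tol:
            dt *= 2.0        # smooth region: grow the step
    return y

# decay dy/dt = -y from y(0) = 1; exact answer at t = 1 is exp(-1)
y_end = adaptive_euler(lambda t, y: -y, 1.0, 0.0, 1.0, 0.1)
```

Production controllers use higher-order integrators and per-variable error norms, but the shrink-on-disagreement / grow-on-agreement loop is the same shape.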

At heart, I have opted to compute quickly -- and use a statistical method to do quality control.
If you are familiar with industrial process control -- the method is essentially the same kind of thing.

In the electronics realm, I specify the time-step I wish to output images/plots at, and the simulator discards all intermediate states between frames to be drawn. This decouples plotting from the highly variable time-steps required for efficient & accurate calculation of events. In electronic simulation, the time-step typically varies from milliseconds down to picoseconds. A crude estimate for (super)atomic phenomena is in the range of milliseconds down to sub-attosecond, 0.01 as (YUCK).
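The frame-discarding step itself is simple bookkeeping; a sketch (the state values are placeholders):

```python
import math

def decimate(states, frame_dt):
    # states: time-ordered (time, state) pairs at the simulator's
    # irregular internal steps. Keep the first sample at or after
    # each output-frame boundary; discard everything in between.
    kept, next_frame = [], 0.0
    for t, s in states:
        if t >= next_frame:
            kept.append((t, s))
            next_frame = (math.floor(t / frame_dt) + 1) * frame_dt
    return kept

# irregular internal steps, decimated to 1.0-unit output frames
samples = [(0.0, 'a'), (0.3, 'b'), (0.7, 'c'),
           (1.1, 'd'), (1.6, 'e'), (2.2, 'f')]
frames = decimate(samples, 1.0)   # keeps the samples at t = 0.0, 1.1, 2.2
```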

I haven't given much thought to the time-steps I will output -- outputting every subtle change doesn't make a whole lot of sense, as a large number of small changes are (typically) required to make a long-term, macroscopically noticeable change.

I plan, for right now, on just hand-setting the time-steps, as I do in electronics simulation, for different regions of interest. E.g., large time-steps which allow me to ignore the development of the initial state (typical state) of the system -- but still do a sanity check -- and then focus the time-step down in regions where interesting things are happening that I am trying to understand better.

 You may require several animations specifically to account for distance and time (large scale and/or small scale). Have you given any thought to distance and time scale for your simulation? Some view this type of animation as "speculative" and do not like the discussion here, but I would be happy to discuss it if you would like to send me a message or sample picture of what you have in mind. Regards
Even though time is involved in my simulation, I was not planning on making time evolved movies in the end. The reason for this is twofold --

1) the simulator itself determines how to focus CPU power computationally on different regions of space. This is what allows the simulation to proceed at much quicker rates than would be possible if all points were computed for the same time-step everywhere in parallel. Items of circuitry/space which are sufficiently decoupled that reducing the update rate of *changes* in interaction introduces only small errors become time-wasters each time a state is displayed. Of course, blindly printing every time-step automatically chosen by the simulator is the absolute worst time-waster -- e.g., 1e6+ times slower....

2) In electronics, the time-evolving waveforms are often important -- so much so that they become unintelligible if the time-step varies -- so I am forced to plot at a maximum & fixed time-step rate if I wish to easily extract information about the evolution of the waveforms in time. That is the way my simulator works now; to keep the CPU time down when the simulator has to reduce the time-step for accurate results, intermediate steps are discarded from the plot. To keep the problem tractable once I get to large (65-100 atom) systems, I plan on having the simulator ultimately record changes in state and not changes in time; and ultimately, I would like to be able to specify which states are of interest, to reduce the data even more.

I do have open questions about how much information to save at each plot point; but I can't solve those in my head -- I have to actually experiment to see what is possible.

The space is, unfortunately, limited by the nature of the simulation, cache memory, main memory, etc.
I won't be able to simulate more than a few cubic microns of space -- and only 1000-2000 or so electrons/protons/etc. within that space. (Presuming the questions I am trying to get answered do not significantly degrade the estimates.... engineering :) )

Why would some consider it "speculative"? Attosecond laser pulses are regularly used to excite atoms into the wave-packet superposition of states which mimics classical motion. That's experimentally done, and I wasn't aware that the people doing this were violating any principle of quantum physics. I know that when I first heard of the frequencies they were talking about, and "attosecond", the uncertainty principle crossed my mind. For the purposes of simulation, though, these small times replace the differential element in calculus -- and anyone who uses calculus to work with Schrodinger's is automatically guilty of using small times as well, for a mathematical purpose if not a physical one. Hmmm..... separation of church and state, or ought it be calculus and state... hmmm....

I wasn't planning to get into that. And from the other response I got, I need to correct the extra words being put in my mouth...

--Andrew.
PF Patron
P: 271
 Quote by andrewr Hi; I think the photon scale and the electron are interchanged?
You could be right, I'll try again (feel free to correct):

1. units: femtoseconds (10^-15) and nanometers (10^-9) to show photon size and movement - with a viewing width of 10 micrometers and a steptime of 10 femtoseconds, a 600 nanometer photon will bounce back and forth on your screen at a comfortable rate when viewed at 30 frames per second. Electrons and protons will not show any motion.

2. units: attoseconds (10^-18) and femtometers (10^-15) for electron motion - with a viewing width of 10 angstroms and a steptime of 10 attoseconds, an electron with 10 eV of kinetic energy will move through your screen at a comfortable rate of about 2 picometers/attosecond; photons travel too fast to be seen at 300 picometers/attosecond, and protons (heavier) don't really move at all over this timeframe.

3. units Yoctoseconds(10^-24) and Attometers(10^-18) for nuclear decay of neutrons - with a viewing width of 10 femtometers and a steptime of 1 yoctosecond, a down quark changes to an up quark and emits an electron and an electron antineutrino. The W boson would be visible for less than a yoctosecond and the neutrino would float off at a comfortable pace.

You mention 3 animations: Bohr Magneton, orbital transitions, and the Stern Gerlach experiment;

I will give some thought to time and distance scales and post later. You may well have something in mind already; if so, please post. With regard to changing timescales, I don't see how this is done accurately: certainly the location of particles will not change when you change the timescale, but the momentum and momentum direction do change, and I don't see how they can be recalculated without going to an n-body problem. I find it saves time to do 2 or more animations, one at the slow rate and a different one at the fast rate.
P: 263
 Quote by edguy99 You could be right, I'll try again (feel free to correct): 1. units Femtosecond(10^-15) and Nanometers(10^-9) to show photon size and movement - with a viewing width of 10 micrometers and a steptime of 10 femtoseconds, a 600 nanometer photon will bounce back and forth on your screen at a comfortable rate when viewed at 30 frames per second. Electron and protons will not show any motion.
mmmm... much better.
At this resolution, in free space (c = 3.0x10**8 m/s), the wavefront advances 300 nm per femtosecond. With a screen width of 10 μm and 1 femtosecond per frame, that means steps = 10/0.3 = 33.3 frames. (Note that at the quoted 10 femtoseconds per step, each frame would advance 3 μm and the crossing would take only ~3 frames.)
So at 1 fs per frame it takes around a second to cross the screen at 30 frames per second; if a wavefront simulates in dispersive media with a slower rate of travel, this would indeed be comfortable.
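The frame-count bookkeeping in the exchange above fits in a few lines; a minimal sketch (the function name is mine, the numbers are the ones quoted in this thread):

```python
# Sketch: how many animation frames a wavefront needs to cross the screen,
# given a viewing width and a simulation step time per frame.
C = 3.0e8  # speed of light in free space, m/s

def frames_to_cross(view_width_m, step_time_s, speed_m_s=C):
    """Frames for a disturbance to traverse the viewing width."""
    distance_per_frame = speed_m_s * step_time_s
    return view_width_m / distance_per_frame

# 1 fs per frame: 300 nm of travel per frame, so a 10 um screen takes
print(frames_to_cross(10e-6, 1e-15))   # ~33.3 frames (~1.1 s at 30 fps)
# 10 fs per frame: 3 um per frame, the photon crosses in only
print(frames_to_cross(10e-6, 10e-15))  # ~3.3 frames
```

The same function covers the attosecond and yoctosecond scales in the numbered list, by swapping in the particle speed of interest.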

#2 I will just let sit, #3 -- wow. At least I don't think I will have to worry about that time-reference...

 Quote by edguy99 You mention 3 animations: Bohr Magneton, orbital transitions, and the Stern Gerlach experiment;

Sort of. I want to be able to simulate the effect of these three things -- but that isn't always the same as graphing/animating them directly.

eg:
I don't know what a Bohr magneton would look like up "close". I do know what the electric field would look like a short distance away from an electron/proton emitting a magnetic moment. In general, one is merely graphing the E-M field, or a compressed sketch of it over all space, but by watching the extrema points of the plot one can trace a path -- whether "jumping" or continuous -- to locate the results of an experiment.

In particular, if a possible simulation of the Bohr magneton is occurring -- the Pauli exclusion principle ought to occur automatically without introduction of a second matrix, or any extra data structures. So graphing the magneton itself is not necessary -- just graphing the EM field, and the orbits taken in a helium atom ought to show whether or not the simulation is working correctly.

What is more significant, is that the simulation is going to have to deal with photon/EM disturbances at every step of the simulation (Not all such disturbances are measurable photons.). If one is watching an "electron" probability cloud develop -- then there will be "random" noise due to photons and EM disturbances in transit which are moving too quickly to be watchable in the animations.

I was considering sampling the location (or volume of most probability) for an item (electron/proton/etc.) and accumulating it on a quasi-static probability graph, rather than attempting to animate a probability over all space -- e.g. keeping track of the last 1000 positions of each item and varying intensity with the number of times a volume is entered in the simulation. The amount of data which needs to be output at each time-step is significantly reduced that way, and one is able to get an idea of what the probability field looks like locally with respect to each item.
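A sketch of that bookkeeping, assuming a 2D position feed (the class and method names here are illustrative, not from any existing code):

```python
# Quasi-static probability graph: keep only the last N sampled positions per
# item and render intensity as visit counts on a coarse grid.
from collections import deque
import numpy as np

class PositionHistogram:
    def __init__(self, n_last=1000, bins=50, extent=1.0):
        self.recent = deque(maxlen=n_last)   # last N positions; oldest dropped
        self.bins = bins
        self.extent = extent                 # side length of the viewed square

    def record(self, pos):
        self.recent.append(pos)

    def intensity(self):
        """Visit-count grid over [0, extent)^2 from the recent samples."""
        grid = np.zeros((self.bins, self.bins))
        for x, y in self.recent:
            i = int(x / self.extent * self.bins)
            j = int(y / self.extent * self.bins)
            if 0 <= i < self.bins and 0 <= j < self.bins:
                grid[i, j] += 1
        return grid

h = PositionHistogram(n_last=1000)
rng = np.random.default_rng(0)
for pos in rng.random((5000, 2)):    # stand-in for 5000 trajectory samples
    h.record(tuple(pos))
print(h.intensity().sum())            # -> 1000.0 (only the last 1000 kept)
```

The fixed-length deque is what caps the per-step output: the grid can be redrawn each frame from at most `n_last` points per item.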

Part of my goal is to be able to output standard ball-and-stick models as well in the future, along with sketches of MOs -- whether done from the probability angle, or from the equivalent charge density angle (EM field intensity) -- which is subtly different, and perhaps an even more accurate picture than the HF charge/time-average approximation.

An orbital transition will not look like anything in such a simulation. The position will not change except in subsequent time, so the "view" one gets of an orbital will depend on how long an electron remains stable in it. The longer it is there, the more completely the probability will be graphed.

 I will give some thought to time and distance scales and post later. You may well have something in mind already, if so please post. With regards to changing timescales. I dont see how this is done accurately as certainly the location of particles will not change when you change the timescale, but the momentum and momentum direction does change and I dont see how they can be recalculated without going to an n-body problem... I find it saves time to do 2 or more animations, one at the slow rate and a different one at the fast rate.
Correct, the location does not change. Time scaling is a mathematical artifact of the simulation. Essentially, the scale is changed by a factor of 1/2 or 2x at each step depending on measured accuracy conditions. The reason for 2x or 1/2 is math co-processor efficiency: scaling by powers of two is exact in binary floating point, so the rescale itself introduces no round-off error.

Each item has a total energy. When you mention an n-body problem -- you are correct. I am making an assumption about how to solve it based on the identical nature of the fields interacting from item to item. There is no separate field for each electron, proton, etc., but one composite EM field for all of them. In order to simulate, I must provide an operator based on the local EM field which causes the item being simulated to *statistically* move to places which will correctly solve the Schrodinger equation in a probability-wise fashion. A single run of the simulator may not provide a complete probability field for graphing the reaction's progress, but the simulator can be rewound and segments run again with a changing random sampler (and again, and ...) to improve the probability field resolution. I think R. Feynman invented/did something similar with path integrals, but I can't remember for certain off the top of my head -- it has been too many years. The path integral was Dr. Young's favorite tool.

I might do well to simulate a 1D particle in a box for you to show you what I mean. (It will be a static gnu-plot graph for now) I am not certain if I can leave out the Bohr magneton and still have the model I have in mind predict correctly -- but after vacation next week, I'll give it a try and see what happens. A picture is worth a 1000+ words.... eh?
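In the meantime, here is roughly what such a statistically sampled particle-in-a-box picture could look like. This is my own minimal sketch, not the simulator described above: it draws samples from the known analytic ground-state density |psi_1|^2 = (2/L) sin^2(pi x / L) and bins them, the way the quasi-static probability graph would accumulate visited positions.

```python
# Statistical sampling of the 1D particle-in-a-box ground state: instead of
# animating psi over all space, draw position samples from |psi|^2 by
# rejection sampling and accumulate them in a histogram.
import numpy as np

def sample_box_ground_state(n, L=1.0, seed=0):
    """Rejection-sample n positions from (2/L) sin^2(pi x / L)."""
    rng = np.random.default_rng(seed)
    out = []
    while len(out) < n:
        x = rng.uniform(0, L)
        # density is proportional to sin^2(pi x / L), max 1 -> accept test:
        if rng.uniform(0, 1) < np.sin(np.pi * x / L) ** 2:
            out.append(x)
    return np.array(out)

xs = sample_box_ground_state(20000)
hist, edges = np.histogram(xs, bins=20, range=(0.0, 1.0))
# Counts peak near the middle of the box and vanish at the walls,
# as |psi_1|^2 requires; mean position is ~L/2 by symmetry.
print(round(xs.mean(), 2))
```

Piping `hist` into gnuplot would give the kind of static probability plot mentioned above.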

--Andrew.
PF Patron
P: 271
A couple of questions and comments.

Regarding the Stern Gerlach experiment:

If you wanted to do an animation with a viewing width of 10 meters (to show the equipment), what timestep would show electrons moving across your screen at a reasonable speed? I.e., if I had a laser gun pointed over your shoulder checking the electrons' speed, how fast would they be going?

Clearly the spin-up electrons move one way and the spin-down electrons move the other way based on the magnetic field, but I don't think they all move "exactly" the same. Is there research that shows the pattern of the distribution of electrons that show up over time, or do they indeed all move the same up or down amount?

Regarding orbital transitions:

 What is more significant, is that the simulation is going to have to deal with photon/EM disturbances at every step of the simulation (Not all such disturbances are measurable photons.). If one is watching an "electron" probability cloud develop -- then there will be "random" noise due to photons and EM disturbances in transit which are moving too quickly to be watchable in the animations.
I think you may need to consider "units Attoseconds(10^-18) and Femtometers(10^-15) for electron motion - with a viewing width of 10 angstroms and steptime of 10 attoseconds, an electron with 10 evolts of kinetic energy will move through your screen at a comfortable rate of about 2 picometers/attosecond, photons travel too fast to be seen at 300 picometers/attosecond and protons (heavier) dont really move at all over this timeframe."

A viewing width of 10 angstroms produces the kind of picture you would see of "real atoms", like the IBM pictures from scanning electron microscopes; what you don't get is the timescale. A timescale of femtoseconds works well to view things like proton/nucleon vibrations, but even low-energy "unbounded" electrons are moving way too fast to see. A timescale of attoseconds allows you to build your electron probability clouds for the frame and deal with a free electron floating by that you know, because of its momentum, will be somewhere else in the next few frames. This timescale has the same problem you talked about, where photons will seem to appear out of nowhere.

Also,

 I do know what the electric field would look like a short distance away from an electron/proton emitting a magnetic moment.
I would appreciate if you could expand on this comment.

Regards
P: 263
 Quote by edguy99 A couple of questions and comments. Regarding the Stern Gerlach experiment:
OK. (SG henceforth in my notes.)

 If you wanted to do an animation with a viewing width of 10 meters (to show the equipment), what timestep would show electrons moving across your screen at a reasonable speed? Ie. if I had a laser gun pointed over your shoulder checking the electrons speed, how fast would they be going?
I intend to mimic the MIT reproduction of the experiment using hydrogen, either accepting the factor-of-40 mass change (and therefore an expected 40x scale change) or artificially increasing the proton mass 40x to simulate a fake "potassium" atom in transit as hydrogen. I haven't calculated the speeds, but the transit time and dimensions are essentially fixed by the experiment. If I modify them arbitrarily -- then verification of the simulation will be difficult at best. If I were to animate a cross section of the experiment -- the atom would be vanishingly small. I was thinking to simply collect the
[x,z] data points at the end of many experiments for a statistical sample equivalent to the SG photograph.

Note the MIT experiment uses a slightly different shape of electromagnet than SG. I believe SG's is more of a triangular wedge near a half-circular socket. MIT's is on p. 17, from a Google search of Lab 18, MIT and Stern Gerlach:

web.mit.edu/8.13/www/JLExperiments/JLExp_18.pdf

I'm going to answer these next two questions out of order, as I think I may be clearer that way.
 I would appreciate if you could expand on this comment
I stated the idea poorly. I am less certain of the shape/distribution of the EM field as one approaches an electron's vicinity. An electron is a dynamic magnetic dipole -- for there are no monopoles of magnetic "charge" -- and thus the field changes around the electron as it "moves";
(despite Wikipedia's present fictional monopole account.... which is "mathematically" equivalent, supposedly -- but whoever wrote that makes me groan.... ).
A dipole's detectable field scales as r**-3, where r is somewhat ill-defined, but as one gets farther away from the source(s), the error in r from variations in dipole shape contributes less and less to the overall values. I think (but am not absolutely certain) that much of the geometry distortion of a dipole lives primarily in higher-order terms (r**-4, etc.).
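The r**-3 claim is easy to check numerically for the simplest concrete dipole, a small circular current loop evaluated on its axis. A sketch (the loop size and current are illustrative values of mine):

```python
# On the axis of a current loop of radius a carrying current I, the exact
# field is B = mu0*I*a^2 / (2*(a^2 + z^2)^(3/2)).  The point-dipole
# approximation with moment m = I*pi*a^2 gives B = mu0*m / (2*pi*z^3).
# The two agree as z >> a, with relative error shrinking like (a/z)^2 --
# i.e. the geometry details really do live in higher-order terms.
import math

MU0 = 4e-7 * math.pi   # vacuum permeability, T*m/A

def loop_field_on_axis(I, a, z):
    return MU0 * I * a**2 / (2.0 * (a**2 + z**2) ** 1.5)

def dipole_field_on_axis(I, a, z):
    m = I * math.pi * a**2               # magnetic moment of the loop
    return MU0 * m / (2.0 * math.pi * z**3)

I, a = 1.0, 1e-3                          # 1 A loop of 1 mm radius
for z in (2e-3, 1e-2, 1e-1):              # move progressively farther away
    exact = loop_field_on_axis(I, a, z)
    approx = dipole_field_on_axis(I, a, z)
    print(z, abs(approx / exact - 1.0))   # relative error falls ~ (a/z)^2
```

At z = 2a the dipole formula is off by tens of percent; at z = 100a the error is about 1.5e-4, which is the sense in which only r**-3 survives at a distance.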

The primary meaning of my comment is that there are several unknowns affecting the field of an electron which become more important as one computes closer to its location. These unknowns are not necessarily a result of the Heisenberg uncertainty principle.

In general, the idea that an electron is a physical particle capable of "spinning" on an axis is discouraged -- and although I am not certain of what experiment(s) disprove the idea altogether -- I seem to remember someone mentioning historical experiments that disprove any "radius" being assigned to an electron (either detected or not...). But if an electron is not a particle with spin, then it must still at least be a distribution of electric field and time; the latter idea is true whether or not an electron is a physical particle.

 Clearly the spin up electrons move one way and the spin down electrons move the other way based on the magnetic field, but I dont think they all move "exactly" the same. Is there research that shows the pattern of the distribution of electrons that show up over time or do they indeed all move the same up or down amount?
In SG, the developed photograph of the silver atoms which hit the target is quite distinct. So much so that there is little variation from electron to electron (and perhaps even less spread would be realized if the experiment was improved.)

See p. 56 for a photo of the actual silver atom pattern on the film target.
http://www.owlnet.rice.edu/~hpu/cour...rn-Gerlach.pdf

A couple of things to note about the photo.

The "ideal" silver dipole would be located at the center (x axis in the photo) for one launched non-skew into the magnetic field. It is precisely here where a spike develops in the x direction -- that spike is generally reasoned to have something to do with the non-symmetry of the pole faces on the magnet, although no explanation I have read is really satisfying, since a mathematical analysis of SG's actual magnet is not done. The other side of the plot is flattened, and that is typically described as being because of the silver atoms physically hitting the rounded part of the magnet cavity.

I didn't see a plot for the MIT version of the experiment, which uses a magnet shape which ought not have that spike either. The only obvious clue pointing to the anomalous nature of the spike is that it is asymmetrical -- the x axis was oriented vertically in the actual experiment, so the spike is even against gravity, if I recall correctly. But if the explanation of the other side's thinness is truly physical clipping, then there is no way to be certain the spike would not have shown up experimentally, due to the field shape, had the atoms not hit the surface of the magnet. One really needs a predictable *linear* variation in magnetic field strength over space to be able to effectively correct for physical manufacturing limitations of the slits, etc., and SG doesn't really have that.

Comparing the demagnetized plate to the magnetized one, there is a fairly strong indication that the plates were sent on the postcard willy-nilly and with no definite orientation. (The mirror-image reticule pattern on the right image reinforces that notion...) The demagnetized version ought to be a purely flat line -- but isn't. In the lower half of the line a clear broadening of the pattern can be seen -- which, if the magnet is truly off, can only be attributed to the shape of the slits letting the silver out, or to the position of the silver oven behind the slits biasing the source statistically.
In any event, on careful inspection I believe I see trace widening on the upper half of the pattern in the magnetized view, suggesting that the flat-line slide is actually rotated 180 degrees from how it was taken in the other photo, with an optional mirroring on top of that...

Using that information as a corrective measure, and knowing that the slit is horizontal, one can estimate the amount of classical skew in trajectory a silver atom would have (incoming y angle and x angles with respect to the demagnetized photo, in contradistinction to a perfect 90 degree angle.); and since the size of the slit is likely large compared to the wavelength of the silver atom, these classical trajectories ought to be fairly accurate. Each point, then (y on the photo) has a definite angle (source dx/dz, dy/dz) from which the silver atom could have come, and the effect of this distribution needs to be calculated and compensated for to determine what an idealized simulation ought to look like.

Generally, there ought to be fewer angles of approach as one gets closer to the edges of the slit (y on the photo), although dx/dz will be fairly constant across the whole slit except where the demagnetized photo shows thickening.

So, I think the most accurate part of the magnetized photo will be the lower right quadrant of the photo taking the approximate center of symmetry of the shape as the origin.
Using a straight edge, the line widening as one gets closer to y=0 increases very linearly (e.g. rightmost trace / lower right quadrant) until extremely close to the magnet center, where the field of the magnet becomes less known / near the surface of the magnet. So there are two (ideally) linear effects with approach toward the center: 1) the width of the trace, and 2) the offset of the trace from the picture's y axis.
The offset is explained by the experiment itself -- e.g. the proportionality between magnetic field intensity gradient and x offset in the photo varies approximately linearly in SG (and linearly in the MIT version, according to the MIT text). The widening of the trace, however, is something I don't know how to predict theoretically; it is caused by "all other differences" in the electron, including any Heisenberg and QM interference effects.

If only the electron's position [delta x, delta y] were affected -- I could more easily separate out what causes the thickening -- but as there is also skew, the problem isn't solvable qualitatively in my head. The variation in trace width, then, is something I don't know whether it would replicate in an idealized experiment or not, and I will need to work that out before verifying my simulation against SG.

If I am able to correct (reduce data) in the photo by mathematical modeling of skew, other features might become visible which presently are not. But to really do this correctly, the best source of information would be a run of the MIT experiment with computer usable data points to operate on. I'll have to look around and see if anyone has posted a good run, and if not, perhaps I will put some thought into re-constructing the experiment. I have the vacuum equipment, some very pure micro-crystalline level homogeneous Iron, and the machining equipment to re-construct the magnet, along with motorized micrometers (robotic) -- but that would take quite a bit of time and effort to put together for me in my present state .... if anyone knows of an online data source, I would appreciate it.

I'll probably post a bit more tomorrow... I am still trying to resolve a second question(/s) concerning the time/intensity distribution that an electron dipole has by focusing on what a quantized magnetic moment actually means, both classically (the limit) and Quantum Mechanically. I will need to think out loud, and perhaps will make some mistakes when doing so -- but hopefully the sharp eyes of others will be helpful there and I can get enough of an answer to my second question/(s) to model it reasonably well for simulation purposes.

 Regarding orbital transitions: I think you may need to consider "units Attoseconds(10^-18) and Femtometers(10^-15) for electron motion - with a viewing width of 10 angstroms and steptime of 10 attoseconds, an electron with 10 evolts of kinetic energy will move through your screen at a comfortable rate of about 2 picometers/attosecond, photons travel too fast to be seen at 300 picometers/attosecond and protons (heavier) dont really move at all over this timeframe."
We'll just have to see what works once I get there.... but I can certainly try it.

.--Andrew.
PF Patron
P: 271
Thank you for the post and link: web.mit.edu/8.13/www/JLExperiments/JLExp_18.pdf, this helps the discussion a lot. From the figures, it appears they shoot potassium atoms through about a 0.5cm channel of a 10cm long magnet and check if they hit a 4mm palladium wire that is moved around. An animation width of 1 meter sounds good, but I remain stuck on the timeframe that would show, on your computer screen, atoms emerging from the oven (say one per quarter second) and hitting the wire within a second or two...

The speed of the potassium atom appears to depend on the oven temperature, which drives the vapor pressure, which determines the speed of the atom. I have assumed here that the speed is important, since a slow potassium atom spends a longer time under the magnet than a fast atom and hence would be bent more. This would seem to suggest that even a small adjustment in temperature would cause the atoms to either spread out too much and be undetectable, or not spread out at all. Perhaps I miss something there, or maybe the temperature at 190 degrees produces a consistent enough speed that you do not see the effect. Or something else...

 In general, the idea that an electron is a physical particle capable of "spinning" on an axis is discouraged -- and although I am not certain of what experiment(s) disprove the idea altogether -- I seem to remember someone mentioning historical experiments that disprove any "radius" being assigned to an electron (either detected or not...). But if an electron is not a particle with spin, then it must still at least be a distribution of electric field and time; the latter idea is true whether or not an electron is a physical particle.
I agree completely. There are at least 2 major problems: the size you mention -- I believe radius calculations for the mass involved come out way too high (300 femtometers vs a Lorentz radius of 3 femtometers) -- and also the fact that the electron must be rotated through 720 degrees to return to its original state, while a spinning disk needs only 360 degrees.

Anyway, thank you for the info, and if I can nail the speed of the atoms I visualize a pretty nice first animation showing an oven blasting out potassium atoms (with a setting for temperature) at a particular speed??? and rate??? and over a timespan (microseconds???) you would see atoms hit a wire after being bent by a settable magnetic field, with the proper bending of up and down electrons. The nice thing about starting the animation at this level of time and space magnification is that it's pretty clear what happens at this scale, and there is no need to probe the details of what the atoms actually do or look like. Probing a more detailed time or distance scale will be more difficult.
P: 263
 Quote by edguy99 Thank you for the post and link: web.mit.edu/8.13/www/JLExperiments/JLExp_18.pdf, this helps the discussion a lot. From the figures, it appears they shoot potassium atoms through about a .5cm channel of a 10cm long magnet and check if they hit a 4mm paladium wire that is moved around. An animation width of 1 meter sounds good but I remain stuck on the timeframe that would show on your computer screen, atoms emerging from the oven (say one per quarter second) and hitting the wire within a second or two...
Without checking, that sounds correct from what I remember.

 The speed of the potassium atom appears to depend on the oven temperature that drives the vapor pressure that determines the speed of the atom. I have assumed here that the speed is important since a slow potassium atom spends a longer time under the magnet then a fast atom and hence would be bent more. This would seem to suggest that even a small adjustment in temperature would cause the atoms to either spread out too much and be undetectable or not spread out at all. Perhaps I miss something there or mayby the temp at 190 degrees produces a consistent enough speed that you do not see the effect. or something else..
I agree.
The experiment requires the student to wait for the temperature to stabilize for a long period of time, otherwise the data will be in error -- so I am fairly sure that small variations in temperature affect speed significantly.

The divergence of the magnetic field is what causes a vertical accelerating force to appear on the magnetic dipole in the first place, and divergence is independent of speed. The traveling speed of the dipole, then, has nothing to do with vertical acceleration -- eg: the overall charge is neutral, and the + and - are both moving in the same direction so even Hall effect does not occur. However, the + and - charge would want to be deflected in opposite directions horizontally in proportion to the speed with which the atom travels through the B field, and I am sure (classically) that would cause the unpaired electron and remaining nuclear charge to align on a horizontal line with respect to each other and in a plane 90 degrees to the magnetic field direction.

There is, then, going to be a vertical force vector independent of speed, and a horizontal one which is counterbalanced by electrostatic attraction. Since acceleration is an integral of force (on average f/m), the deflection depends on time spent in the magnetic field. Even at a constant temperature, speeds will vary statistically. These variations will cause (add to) trace width spread in the photos shown, and also (small) skew angle, which does the same thing. The lab sheet may contain enough information to compute the speed variation, and one can compute a percentage of average time within which e.g. 99.8% of atoms will fall; the spread of the atoms (not including skew angle) due to this variation in time will be a linear fraction of the total deflection: % time error = % y deflection in error.
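As a rough numerical illustration of that speed sensitivity: the deflection inside the magnet is delta = mu (dB/dz) L^2 / (2 m v^2), so a fractional speed error dv/v produces roughly a 2 dv/v fractional deflection error. The field gradient below is an assumed placeholder, NOT a number from the MIT lab sheet; only the potassium mass, oven temperature, and magnet length come from the discussion.

```python
# Back-of-envelope Stern-Gerlach deflection for a thermal potassium beam.
import math

K_B  = 1.380649e-23        # Boltzmann constant, J/K
MU_B = 9.274e-24           # Bohr magneton, J/T (one unpaired electron)
M_K  = 39.1 * 1.6605e-27   # potassium-39 mass, kg

T    = 190.0 + 273.15      # oven temperature from the thread, K
L    = 0.10                # magnet length, m (the 10 cm channel)
dBdz = 100.0               # field gradient, T/m -- assumed for illustration

v = math.sqrt(3.0 * K_B * T / M_K)               # rms thermal speed, m/s
delta = MU_B * dBdz * L**2 / (2.0 * M_K * v**2)  # deflection at magnet exit, m

print(round(v), "m/s")        # a few hundred m/s at this oven temperature
print(delta * 1e3, "mm")      # deflection on the order of a fraction of a mm
print(2 * 0.01 * delta * 1e3) # a 1% speed spread -> ~2% deflection spread, mm
```

Note that 1/v^2 and 3 k T / m cancel into delta = mu (dB/dz) L^2 / (6 k T), so at fixed temperature the mean deflection is speed-independent; it is the statistical spread about the mean speed that widens the trace, as argued above.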

 I agree completely. There are at least 2 major problems, the size you mention - I believe radius calculations for the mass involved come out way to high (300 femtometers vs a lorentz radius of 3 femtometers) and also a problem, I think the electron can be flipped over 720 degrees and a spinning disk is only 360 degrees.
You'll have to excuse my ignorance of vocabulary; as a BSEE, I was taught primarily about semiconductors, which is a bulk phenomenon. Different terminology is employed, perhaps somewhat simplified, because the dominant effect is not always the same as would be the case with discrete items. When you refer to a "Lorentz radius", are you speaking about a contraction in length due to motion of some kind? And regardless of that, what calculations did you do to arrive at those particular numbers?

 Anyway thank you for the info, and if I can nail the speed of the atoms I visualize a pretty nice first animation showing an oven blasting (with a setting for temperature) out potassium atoms at a particular speed??? and rate??? and over a timespan (microseconds???) you would see atoms hit a wire after being bent by a setable magnetic field with the proper bending of up and down electrons. The nice thing about starting the animation at this level of time and space magnification is that its pretty clear what happens at this scale and there is no need to prob the details of what the atoms actually do or look like. Probing a more detailed time or distance scale will be more difficult.
You're welcome. And thanks for just speaking about the subject in general; in your own way you have brought up something I did not think about regarding this experiment -- and that is the fact that the scale must exceed the few cubic microns which I can simulate realistically. In order to simulate the MIT version of SG, I will have to find a way to delete nodes from one side of the space I am simulating and add those nodes to the other side so as to keep the moving atom inside a window of simulation space. (Not to mention, computing a new value for the moved nodes...)
That will, unfortunately, complicate the simulator enough to seriously increase the time it takes me to implement it as my present simulator does not have that capability.

+1 to the to-do list.

More will be coming about the "spin" of an electron when I have a chance this evening, along with Q/Qs that I have about how to work with it -- and a short summary of what I am presently considering. (Next steps.)
 P: 263 OK. Fell asleep on vacation ... and fell asleep last night early ... and this morning again ... but I'm back. A brief review of the ideas I need to build the simulator:

So far I have spoken about orbital transitions -- which have nothing to do with *where* an electron (or other item) is at the instant before and after the state change. Only the probability of where it is found in the future can change -- without the "actual" position changing, for a single run of the simulation, at the instant of state change. So one may think of a simulated orbital change (state change) as a change in the statistical rule governing where the item will diffuse in the future.

Alexm ties "spontaneous" decay of excited-state orbitals to a "disturbance" in the EM field by invoking Fermi's golden rule. This idea is new to me, for in semiconductor physics the mechanism of spontaneous decay is *NOT* explained; rather, it is pragmatically treated as an empirical property of the semiconductor which is modified by disturbances in the EM field (e.g. in a perfect crystal it would be purely empirical -- but in a doped one, the lifetime of an excited state is the empirical value *decreased* by the probability that a fermion will diffuse into a defect in the crystal, causing a stimulated decay of the excited state). Since no one has contradicted Alexm, his opinion stands as the method I will attempt to use for simulation.

The remaining issue I have to deal with is the "spin" state of an item. This state is what causes an "anomaly" in emission spectra similar to fine structure (I haven't studied this, so I am speaking in general), based on the magnetic state of individual protons in the nucleus coupled to the magnetic state of the light-emitting electron. The anomaly is extremely small since the Bohr magneton is affected by the mass of the item -- and protons, being relatively heavy, have roughly 1/1836 times the effect of an electron.
I have no engineering text describing the exact nature of various experiments on the Bohr magneton, and guidance/input from those who might know of experiments/articles to review would be appreciated -- but I do know that the spin of an electron has a mechanical moment, from an experiment done by Einstein and a colleague, and I do know that the orientation of the magnetic dipole in space can change. Since classical angular momentum is found experimentally when dealing with the Bohr magneton -- along with non-classical quantum effects -- I am simply going to write down what I think I know, and hope for correction, or at least suggestions of what I might be over-simplifying, if anything.

A spinning top (mechanical moment/dipole) which has a force (torque) applied to re-orient its axis of rotation will resist re-orientation by precessing about its rotation axis. Gyroscopic formulas tend to focus on the period of precession -- and I am unaware of how one calculates how much a top will re-orient its axis of rotation with respect to time and torque, nor whether such a re-orientation is possible with a frictionless device. I assume re-orientation is possible even in a frictionless environment, since permanent magnetism in non-superconducting material is (as far as I know) a purely spin-based phenomenon, and there is no friction that I am aware of for a spinning electron (there are no electrons that *DON'T* 'spin').

I suppose it is possible that quasi-Kepler motion of an electron around an atom (P orbital, etc.) might also contribute to the magnetic field, but everything I was taught in chemistry indicates that unpaired electrons are the sole cause of magnetism. So I am assuming that is the case for the moment. (Pun intended.) Also, I have seen the "orientation" of spin described as what supposedly changes in the NMRI/MRI effect.
I have not seen any mathematics which explicitly shows how QM-wise engineers came to predict that the magnetic field would "FLIP" when appropriate resonance radiation was applied to the electrons in hydrogen (the typical NMRI target), although the energy required being proportional to a statically applied magnetic field does make sense; the energy required to re-orient the magnetic dipole is readily computable from analogous situations in DC/AC motors used for power generation, where the magnetic dipole rotor requires work to flip orientation while in the magnetic field of the motor's stator. I am presuming that a rule akin to Fermi's golden rule needs to be used in this case (spin flip) -- but I have not come across in commonly available literature how NMRI is deduced, nor how to handle the QM states. Any pointers/example calculations which could be applied in the case of a non-static magnetic field would be appreciated, as large uniform magnetic fields are not what will be found in molecular simulation -- but rather weak dipole moments from various items located near the "electron" or other simulated item of interest, which can dynamically re-orient.

A second issue that comes to mind, and which may or may not affect the discussion, is that Ampere's law and other effects computed using calculus assume a uniform magnetic field when taking the limit as area goes to zero (e.g. differential areas or volumes ... which have an existence whose consistency is questionable given the Heisenberg concept). A magnetic dipole's strength is measured as current times the area enclosed by the current loop: m = I x A. However, making a large rectangular and planar loop of wire, applying a constant current to it, and then probing the field's strength with a compass has convinced me that the B field magnitude in the plane of the loop, and inside the loop, is stronger as one gets closer to the wire -- and weaker as one gets closer to the center of the loop.
When replacing distributed currents with individual electrons, protons, etc., the magnitude of the current is replaced by the velocity of the charged atomic item in relation to the path length required to circle the "enclosed" area. Therefore, the product of the enclosed area and the velocity of the circulating charge (or E-field disturbance) is constant if one is given a constant magnetic moment (eg: the identity of the Bohr magneton for an electron). So the diffusion of the electron in space that creates the magnetic moment must have an average value in an angular sense -- and, because of inertial reference frames, helical paths must also be possible, where the path is closed only in a moving reference frame.

What I have stated here is the sum total of what I am certain of regarding magnetic moments; discussion of how these ideas fit with QM/Schrodinger's equation would be greatly appreciated. --Andrew.
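The constancy claimed above can be made concrete with a few lines of arithmetic. For a single circulating charge, I = e·v/(2πr), so the moment is m = I·πr² = e·v·r/2, and fixing m = μ_B fixes the product v·r = ħ/m_e. Evaluating at the Bohr radius (my choice of radius, purely for illustration) recovers the familiar Bohr-model orbital speed αc ≈ c/137:

```python
# constants (SI, CODATA values)
e    = 1.602176634e-19      # elementary charge, C
me   = 9.1093837015e-31     # electron mass, kg
hbar = 1.054571817e-34      # reduced Planck constant, J*s
c    = 2.99792458e8         # speed of light, m/s
a0   = 5.29177210903e-11    # Bohr radius, m

# Bohr magneton from its definition
mu_B = e * hbar / (2 * me)
print(f"mu_B = {mu_B:.4e} J/T")          # ~9.274e-24 J/T

# single-charge loop: I = e*v/(2*pi*r), so m = I*pi*r**2 = e*v*r/2.
# Fixing m = mu_B therefore fixes v*r = hbar/me, as claimed above.
def speed_for_moment(r, m=mu_B):
    """Circulation speed a charge e needs at radius r to give moment m."""
    return 2 * m / (e * r)

v = speed_for_moment(a0)
print(f"v at Bohr radius = {v:.3e} m/s (about c/{c/v:.0f})")
```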
PF Patron
P: 271
A quick sidenote, the June 10/2010 issue of Nature has a pretty good article:

"Electron localization following attosecond molecular photoionization", which starts with: "For the past several decades, we have been able to probe the motion of atoms that is associated with chemical transformation and which occurs on the femtosecond (10^-15 s) timescale. However, studying the inner workings of atoms and molecules on the electronic timescale has become possible only with the recent development of isolated attosecond (10^-18 s) laser pulses."

The attosecond timescale will (I think) prove to be very important in the history of the electron.

Regarding:
 A spinning top (mechanical moment/dipole) that has a force (torque) applied to re-orient its axis of rotation will resist re-orientation by precessing about that axis. Gyroscopic formulas tend to focus on the period of precession -- and I am unaware of how one calculates how much a top will reorient its axis of rotation as a function of time and torque, nor whether such a reorientation is even possible in a frictionless device....
The object you may be thinking of is the Bloch sphere. Although not directly derived from general relativity, it does properly predict most (if not all) of the spin properties of the proton. Its important features are an axis that points in a particular direction and a spin direction.

and:
 Also, I have seen the "orientation" of spin as being what supposedly changes in the NMRI/MRI effect.
Proton MRI is generally modelled with the Bloch sphere, and the proton has the right kind of magnetic moment that you would expect from this type of spinning object. Although animations typically show the protons "precessing", in a solid object it may be just as likely that they are not precessing, but somehow building up energy and then flipping. The Bloch sphere works great for protons and neutrons. For elements with multiple protons and neutrons, you don't just have spin up/down (more commonly termed +1/2 or -1/2): the Lithium-7 nucleus, for example, has 4 spin states, labeled +3/2, +1/2, -1/2 and -3/2. It does not matter so much what the picture looks like; it's just important that you label your objects with the right spin so you know how each reacts to the forces around it.
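That labeling scheme is easy to sketch in code. For a spin-I object the allowed states are m = -I, -I+1, ..., +I, and for a static field the standard Zeeman energy of each state is E = -γħmB (the γ and B values in any real use would come from the nucleus and field being simulated; nothing here is specific to any particular modeler):

```python
from fractions import Fraction

def spin_states(I):
    """All m quantum numbers for a spin-I object: m = +I, I-1, ..., -I."""
    I = Fraction(I)
    multiplicity = int(2 * I) + 1          # the 2I+1 rule
    return [I - k for k in range(multiplicity)]

def zeeman_energies(I, gamma, B, hbar=1.054571817e-34):
    """Static-field Zeeman energy E_m = -gamma*hbar*m*B for each state m."""
    return {m: -gamma * hbar * float(m) * B for m in spin_states(I)}

print(spin_states(Fraction(1, 2)))   # the two proton/electron states
print(spin_states(Fraction(3, 2)))   # the four Li-7 states named above
```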

and:
 Any pointers/example calculations which could be applied in the case of a non-static magnetic field would be appreciated, as large uniform magnetic fields are not what will be found in molecular simulation -- but rather weak dipole moments from various items located near the "electron" or other simulated item of interest which dynamically can re-orient.
The electron at this level is more difficult. As I understand it, the Bloch sphere does not work for it, because the magnetic moment is slightly more than 2 times too big for the model (the electron's g-factor is slightly over 2 -- it takes too much energy, for its size, to flip it), and there are many other problems. Electron flipping is the basis of atomic clocks, and I think many of the electron's assigned properties have come about to explain this.

The Lorentz radius (classical electron radius) is: "In simple terms, the classical electron radius is roughly the size the electron would need to have for its mass to be completely due to its electrostatic potential energy - not taking quantum mechanics into account." It comes out to about 2.8 femtometers. I think that this much mass in a 2.8 femtometer radius would have to be spinning at something like 100 times the speed of light. I.e., the electron looks too small to fit that much energy in.
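Both numbers are easy to reproduce. A short sketch, assuming the electron is a uniform sphere with moment of inertia (2/5)mr² carrying spin angular momentum ħ/2 (the uniform-sphere picture is itself the classical idealization being criticized here):

```python
import math

e    = 1.602176634e-19     # elementary charge, C
me   = 9.1093837015e-31    # electron mass, kg
c    = 2.99792458e8        # speed of light, m/s
hbar = 1.054571817e-34     # reduced Planck constant, J*s
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m

# classical electron radius: set rest energy ~ electrostatic self-energy scale
r_e = e**2 / (4 * math.pi * eps0 * me * c**2)
print(f"r_e = {r_e * 1e15:.2f} fm")      # ~2.82 fm

# equatorial speed for a uniform sphere (I = 2/5 m r^2) to carry L = hbar/2:
#   (2/5) * m * r^2 * omega = hbar / 2   =>   v = omega * r = 5*hbar / (4*m*r)
v = 5 * hbar / (4 * me * r_e)
print(f"v/c = {v / c:.0f}")              # on the order of 100x light speed
```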

You commented "A dipole's detectable field scales as r**-3, where r is somewhat ill defined" and "the product of area enclosed to velocity of circulating charge (or E-field disturbance), is then constant if one is given a constant magnetic moment "

An important structure to consider is the twistor (one is animated here -- ignore the bottom row, as it is a work in progress, and sometimes you have to click a couple of times to get them to run right). It has a defined axis, and it has the 720-degree spin property (watch carefully: as the blue dot goes through 360 degrees, the electron ends upside down and must go another 360 degrees to get right side up). It would pack a lot more energy into a lot less space, and has kind of a "weird" layout of its dipole field...
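The 720-degree property can also be seen without any animation, directly from the SU(2) rotation operator for a spin-1/2 state (this is standard quantum mechanics, not specific to twistors): rotating by 360 degrees multiplies the state by -1, and only a full 720 degrees returns it:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli z matrix

def spin_rotation(theta, sigma=sigma_z):
    """SU(2) rotation by angle theta about the axis of `sigma`:
    exp(-i*theta*sigma/2) = cos(theta/2)*I - i*sin(theta/2)*sigma,
    which holds for any Pauli matrix since sigma @ sigma = I."""
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * sigma

up = np.array([1.0, 0.0], dtype=complex)   # spin-up along z

print(np.allclose(spin_rotation(2 * np.pi) @ up, -up))  # True: 360 deg flips the sign
print(np.allclose(spin_rotation(4 * np.pi) @ up, up))   # True: 720 deg restores the state
```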

I have assumed spin flips do not apply to the Stern–Gerlach experiment. There is no reason why the valence electron would not simply align itself with the external field and slowly pull the atom in one direction or the other, depending on whether it was up or down.

Don't know if this helps much, but I have to get back to trying to find the speed of the potassium atoms..
