DIY: Simulate Atoms and Molecules... 
#1
Jun20-10, 05:53 AM

P: 263

I would like to see how far I can get writing some novel & simplified simulator code for molecular modeling based on QM.
Background follows; skip to the next post to get to the details of the project and the starter QM question.

My son took chemistry this last semester (high-school level), and to help him out I wanted to find a free molecular-modeling tool that would let him visualize VSEPR-like images for molecules (pi/sigma bonds), and also for myself; I was severely disappointed. As I have some QM work that I may later publish (free), I don't want to enter into any kind of IP contract for such a program. E.g.: most molecular modelers are either 1. >$1K/seat, or 2. proprietary and want signed papers ... although the methods and implementations have been around for nearly as long as computers.

This became acute a few weeks ago when I succeeded in doing an electrolytic process that I had previously thought impossible because water would be decomposed; but it worked in spite of that. I desperately wanted to understand this chemical process, although I am not a chemist but an EE (chem was my weak point in college). After a fairly diligent search, I discovered that the only program which was totally free in a sense usable to me was Ghemical, built on top of Jmol as a visualizer. Sad to say, the code is extremely buggy in places: the Java version never got beyond the splash screen, and the C/Fortran version could not completely compile, especially in the area of orbital interaction based on old Fortran code, so some features were disabled in my build. I still had the crude "optimize geometry" function, which I had hoped was close to VSEPR (van der Waals outlines...), and that would have been good enough; but even that has serious bugs. E.g.: when I built a test crystal whose bonds I knew, the optimization turned it into a 2D item with atoms that would practically have to undergo nuclear fusion to be in the places they were optimized to...
I am not averse to fixing code; I have done quite a bit of it, including kernel drivers, but I would rather spend my time on something other than old algorithms such as Hartree-Fock, Slater determinants, and all the well-known problems of these variational methods, which make my eyes glaze over. Nobody yet has a flawless simulator....

So I would like to write my own code, both for my son and myself, but I do not understand QM sufficiently to complete the project, and would like some feedback on problems I see, and perhaps a heads-up on any issues where I show myself to be blatantly ignorant of something. (As a BSEE I solve solid-state QM problems regularly, but molecular modeling is a bit different...)

I have figured out how to numerically simulate a field, such as an EM field, in ways that are reasonably accurate and efficient (sparse matrix). I wrote an electronics simulator using these techniques and am able to simulate circuit functions and nonlinear devices with extreme accuracy. I know this is sufficient as a foundation for actually building a virtual "space" for watching EM in the classical Maxwell sense from discretely located electrons and protons.

So, I would like to experiment with the next possible step: replacing Hartree-Fock approximations, Slater determinants, etc., with the pilot-wave interpretation of the Schrodinger equation and a statistical sampling of a (perhaps modified) Schrodinger equation. As inspiration for trying this, I came across an old textbook (1950s) showing all of the QM orbitals S, P, D, F modeled *correctly* using a mechanical spinning top, and the accuracy was surprising to me; so I do believe that a semiclassical simulation can produce fairly accurate results.
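To make the pilot-wave idea concrete, here is a minimal 1-D sketch (my assumptions, not a production method): hbar = m = 1, a free Gaussian packet evolved by split-step FFT, and a single Bohmian trajectory seeded at the packet centre following the guidance velocity v = Im(psi'/psi).

```python
import numpy as np

# Minimal 1-D pilot-wave sketch.  Assumptions (hypothetical, for
# illustration): hbar = m = 1, a free Gaussian packet, one trajectory
# started at the packet centre.  The wavefunction evolves by split-step
# FFT; the particle follows the guidance velocity v = Im(psi'/psi).

N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

sigma, k0 = 1.0, 2.0                      # packet width and mean momentum
psi = np.exp(-x**2 / (4 * sigma**2) + 1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L / N))

x_p = 0.0                                  # trajectory starts at the centre
dt, T = 0.001, 1.0
for _ in range(int(T / dt)):
    # free evolution in momentum space: psi_k *= exp(-i k^2 dt / 2)
    psi = np.fft.ifft(np.fft.fft(psi) * np.exp(-0.5j * k**2 * dt))
    dpsi = np.fft.ifft(1j * k * np.fft.fft(psi))   # spectral derivative
    v = np.imag(dpsi / psi)                        # guidance velocity field
    x_p += np.interp(x_p, x, v) * dt               # Euler step for particle

print(x_p)   # a centre-seeded trajectory drifts with the group velocity k0
```

For a free Gaussian packet, the Bohmian trajectory seeded at the centre follows the centre exactly, so after T = 1 the particle sits near k0 * T; trajectories seeded off-centre would also fan out with the spreading packet.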


#2
Jun20-10, 05:55 AM

P: 263

My first goal is a simple electron simulation, and perhaps atomic simulation of Hydrogen and Helium.
In particular, I would like to be able to simulate effects of the Bohr magneton, orbital transitions, and the Stern-Gerlach experiment, which ought to work for hydrogen but fail for helium. For now, I am looking at the Bohr magneton as an effect proceeding from a point (or ring) affecting the EM field, and am simply using the definition of energy in a dipole magnet to simulate the electron spin's effect using a quasi-ZPE technique, and using the pilot-wave idea with the Schrodinger equation to decide where the electron may statistically go.

I know how to use the Boltzmann equation to determine the probability distribution among orbitals; but I am stumped on orbital transitions, since I am using a dynamic, time-based simulation. I also hope that the speed-of-light effects included in my EM field may yield results which are consistent with relativity ... but I'll settle for classical if not.

What I envision is that in the Schrodinger equation, individual "orbital" states can be computed given the immediately surrounding EM field. Ignoring for discussion, but not simulation, the motion of the proton: the electron can be said to be in a state whose probability distribution is known (e.g. at statistical sample time t) for the EM field given, regardless of how many particles created that field, the self-field of the electron/proton/etc. being easily removed for that calculation. One can, in temporary memory, vary the total energy of an electron in, say, the 1s state to the inter-orbital state 1/2(1s + 2s) to determine when the EM field would permit the electron to change to the 2s state (although no change in electron position would be recorded). I intend to experiment with various algorithms for conserving energy, and am not concerned about that yet.

I know from semiconductor physics how to compute the probability of an item being in various states; but I have no idea what the mechanism/mathematics of spontaneous emission ought to look like.
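The Boltzmann step mentioned here is easy to sketch. A hedged example (the level energies and 2n^2 degeneracies are the standard hydrogen values; the temperature is an arbitrary choice): populations go as p_i ~ g_i * exp(-E_i / kT).

```python
import numpy as np

# Sketch of Boltzmann occupation among discrete levels.  The 10.2 eV
# n=1 -> n=2 spacing and 2n^2 degeneracies are standard hydrogen values;
# T = 300 K is an arbitrary illustration.
KB = 8.617e-5                     # Boltzmann constant in eV/K

def boltzmann_populations(E, g, T):
    """Return normalised level populations p_i ~ g_i * exp(-E_i / kT)."""
    E = np.asarray(E, dtype=float)
    # subtract E.min() first for numerical stability at low T
    w = np.asarray(g, dtype=float) * np.exp(-(E - E.min()) / (KB * T))
    return w / w.sum()

p = boltzmann_populations([0.0, 10.2], [2, 8], T=300.0)
# at room temperature essentially everything sits in the ground level
```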
I haven't found any mathematics which I understand that explains how to estimate/compute the time an electron will stay in an excited state, and what dependencies that has on the EM field around it. In reality this may not be a problem, as I expect the EM field will always be changing and the electron may simply fall naturally to lower levels at the appropriate times statistically; but I have no way of verifying whether a repeated experiment is right on average without some kind of estimate. So, I'll stop here, as I have no idea how complex the answer will be to this first question: what determines the lifetime of an excited state? Andrew.
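For what the standard answer looks like statistically (whether or not the pilot-wave simulator reproduces it): in the usual treatment, spontaneous emission is exponential decay at the Einstein A rate. A small Monte Carlo sketch, using the literature value A ~ 6.27e8 s^-1 for hydrogen 2p -> 1s (mean lifetime ~1.6 ns):

```python
import random
import math

# Exponential-decay sketch of "how long does it stay excited":
# survival times are drawn from p(t) = A * exp(-A t).  The rate below is
# the literature Einstein A coefficient for hydrogen 2p -> 1s.
A_2P_1S = 6.27e8           # s^-1, so mean lifetime 1/A ~ 1.6 ns

def sample_decay_time(rate, rng=random):
    """Draw one excited-state survival time from the exponential law."""
    return -math.log(1.0 - rng.random()) / rate

random.seed(1)
times = [sample_decay_time(A_2P_1S) for _ in range(100_000)]
mean_tau = sum(times) / len(times)    # approaches 1/A as samples grow
```

In a time-stepped simulator the same law can be applied per step: decay with probability 1 - exp(-A*dt) each step, which reproduces the same exponential statistics.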


#3
Jun20-10, 06:52 AM

P: 88

Probably I am not helping you out with this reply, but I am also interested in developing my own C code for simulating quantum physics. Up to now I have succeeded in performing classical many-body gravitation and classical Ising models, but I would like to jump to the quantum realm. My wife used the Gaussian software to perform chemical simulations in her PhD thesis, but I would like to build my own code from scratch.
Just in figuring out in my mind what to do, I have found an important problem. I would like to get a dynamic picture as the result of the simulation; I mean, the position of the electron at different times. But, according to quantum mechanics, getting the position implies collapsing the wavefunction. That leads me to a Zeno effect, or something of the kind, because the simulator is constantly performing position measurements on the system. I am concerned about that, but I am not sure if my point is correct or if I am missing something important. Thanks.


#4
Jun20-10, 07:56 AM

P: 263

Heisenberg's principle applies to actual measurements. He himself, in one document I read, admitted that once the experiment was over, his principle did not by itself prevent one from determining where the measured item was in the past. (That is not to say that it is always possible to determine it in every kind of experiment.) However, the Heisenberg uncertainty principle does destroy the ability to predict future positions after a measurement is made. There are philosophical issues which I really don't have answers to. But a hand-grenade simulator does not have to be perfect, just close.
I would point you back to the physical top model I mentioned, which generates the different orbitals of quantum mechanics. The device used time-lapse photography to take pictures of a light which a specially designed gyrating top manipulated. In this way, the more often the top was in a certain position, the brighter that point was on the film. ("Modern College Physics", fifth edition, Harvey E. White, 1966, (C) Litton Educational Publishing, published by Van Nostrand Reinhold Company, N.Y., N.Y., p. 562.) The top produced reasonably accurate pictures all the way to the 6**2F[7/2] orbital. Since the model is macroscopic, Heisenberg does not really apply.

Considering that the activity of the top faithfully reproduced the orbitals, and also considering that the uncertainty at the atomic level is high enough to make the precision of individual measurements impossible, one is left with the conclusion that even if the model is not working identically with nature, it is nonetheless a good enough approximation to solve problems of interest.

I analyzed the Schrodinger equation some years ago, substituting in a classical velocity interpretation for probability, and wanted to see how that would correlate with a periodic oscillator and other such standard fare. I especially wanted to do it because I wanted to see if Schrodinger's equation took into account correlation from repeating orbits (assuming a non-stationary one). The answer was a surprising no. The probability does not include such correlation, but appears to represent the probability of a single-shot experiment through each point. It is a diffusion equation, and I don't know why it doesn't contain any useful information about possible repeating "statistical" orbits, except that the equation somehow implies that all paths (not speeds) are equally likely.
Again, this is philosophical, not so much practical, for it may be simply an artifact of the equation; but it would seem that in the 1D particle box, it is perfectly legitimate to say that the electron is under one "hump" of probability *only* and never goes through the zero-probability points (infinite speed exceeds c), but that the odds of the electron being in any one of the humps are equally likely. The alternate interpretation, and the more knee-jerk one, is to assume the electron travels by tunneling from one probability hump to another, since they are all there in the solution.

I am not familiar with Ising models, so I can't really comment on what the changes will be like.

In the end, though, the Schrodinger equation looks to me like a 3D version of Bohr's equation; that is, instead of saying the nodes along a classical trajectory must be exactly a wavelength multiple, Schrodinger's appears to say that the phase loop around *ANY* path must be 360 degrees, with the local "wavelength" varying according to the EM field. Bohr's orbits, being circular and therefore tracing out a constant energy, had only a single value of wavelength; so Schrodinger simply generalized it and perhaps got rid of degenerate cases. (E.g.: 1s does not orbit Kepler-like, but hovers based on the wave equation. Don't forget a Bohr-magneton spin effect is also involved in the hovering.) I am interested to see what happens.
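The "equal humps" claim for the 1-D box can be checked directly from the textbook eigenfunctions psi_n(x) = sqrt(2/L) sin(n pi x / L): the density has n humps separated by true zeros, and each hump carries probability exactly 1/n. A small numerical check (n = 3 chosen arbitrarily):

```python
import numpy as np

# Particle-in-a-box density: psi_n has n "humps" with zeros between them,
# and each hump carries equal probability 1/n.  Here n = 3.
L_BOX, n = 1.0, 3
x = np.linspace(0.0, L_BOX, 3001)
psi = np.sqrt(2.0 / L_BOX) * np.sin(n * np.pi * x / L_BOX)
rho = psi**2

# integrate the density over the first hump, x in [0, L/n]
dx = x[1] - x[0]
p_hump = rho[x <= L_BOX / n].sum() * dx   # simple Riemann sum
# p_hump comes out very close to 1/3
```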


#5
Jun20-10, 01:42 PM

Sci Advisor
P: 1,866

And what's wrong with Hartree-Fock? It's been the basis of the majority of QC methods for the last 80 years, and there have been good reasons for it. (Nor is it an algorithm, btw; it's a physical/mathematical approximation. SCF is an algorithm.) What do you mean by 'flawless simulator'? There are quantum-chemical methods such as CI and CC (which are based on Hartree-Fock) which are exact, in principle. You can hardly do better than that, as far as accuracy is concerned. (Scaling and speed are another matter.)


#6
Jun20-10, 04:06 PM

P: 263

Looking in my chem book and several others, either ball-and-stick sketches or overlapping P orbitals are drawn in the VSEPR chapter. What I said still stands: VSEPR-like sketches. If you wish to quibble, I never said sigma and pi orbitals are officially part of VSEPR theory. I don't believe you are denying that sigma and pi bonds exist in molecules, and that VSEPR theory is used to sketch molecules. So it seems you are concerned over the accuracy of the sketching or something, or just didn't notice things like: and how does one sketch these force fields? Is it forbidden to use a pi-LIKE bond sketch? Or are you demanding I dot my i's and cross my t's because my statements might not fit your categories perfectly, and my kind hint that simile/analogy was meant was missed by you?

Do you trust me not to use your name and address against you *in any way* if I somehow am upset BY you and have these pieces of information? Do you want to give me your name and address with no strings attached, to do with whatever I want? (I'm not Dracula, that's for sure.)

I'll look into it again ... but I think PyQuante was the one my friend tried that didn't work. As far as I know, the DFT algorithm(s) is/are probably worse than Hartree-Fock implementations; DFT is designed to reduce computational load, not improve accuracy. Oh, am I being unfair?

What surprised me was a difficult-to-discover chemical reaction that someone else had found which made it happen in spite of this problem. In most experimental attempts I get hydrogen-contaminated product. The success of this method delights me, and I would like to explore why it works; I'm not sure if it has to do with complexing or the particular acids used, especially because I had to substitute a chemical in the reaction that wasn't in the original experiments, for environmental reasons. Not only did the reaction succeed, but it went beyond even the original experiments in quality.
I found that I could produce a product *and* that the expected post-processing from the literature was unnecessary in my formulation. I suppose I could try to figure out what happened using a slide rule, paper, and other easily available methods; but then I would be likely to make a mistake, and it would take a long time to redo the calculation. I think it would be nicer to have my computer do it for me. As far as a solid understanding ... how would you know?

Historically, I can say nothing pisses me off more than spending $5000+ on a piece of software and having to contact tech support, only to have them tell me: "the solution is: just don't do that." I am speaking from experience. Somehow the "spend $1000 on a gamble" option just isn't appealing either.

I have a hard time believing what you say here as anything but double-talk. If these methods worked perfectly or exactly as you seem to be selling them, and if they really include what I desire to simulate, in the sense that I am proposing, then they would definitively predict what happens in, say, Bell-inequality-based experiments. Considering that most of the physics community is still arguing over that, including ZPE (with known flaws) as one objection, and that no definitive set of experiments has yet buried the competition ... I see no point in arguing over what is better, or what is "perfect". I'm interested in an experimental simulation method; the one I have chosen is just different. It is not likely perfect either, but I would like to try it.

Are you interested in actually addressing the thread questions I am proposing, or something else, after my clearing the air here? If you have anything to add concerning how to compute how long an electron stays in an excited state, I'm interested. That is one of two issues which prevent me from finishing the code which I already have. As I said at the start of the thread, I'd like to see how far I get; and I mean after I know the answers to my actual questions. Andrew.


#7
Jun21-10, 10:24 AM

Sci Advisor
P: 1,866

I do deny that sigma and pi bonds exist in molecules. Orbitals as well. These are all abstract concepts derived from various theoretical models, not real physical things.
Force-field methodology doesn't have anything to do with 'sketching' anything. It's a semiclassical physical model, not a single scalar or vector field or anything of the sort.

All DFT methods in practical use for molecules are more accurate than the Hartree-Fock method. Obviously you don't know the relative size of dispersion effects if you think that would be a deal-breaker.

Yes, QC methods were being used prior to electronic computers, in the late 20's and 30's. Newton didn't have any problems developing his algorithm for root-finding without one either. Computers WON'T 'do it for you'. That was the point I was trying to make. Most chemists in most situations do not use quantum chemical software at all. Most chemists don't know how to use quantum chemical software.

You think it's reasonable to demand that they give you software that took them years to write, no strings attached? Crazy.

The fact that a method is exact does not mean a given implementation of it is necessarily numerically stable in every case, or that the code is bug-free. The fact that a particular implementation might not work (perhaps due to user error) does not invalidate the theory behind it in the slightest.

A quantum-chemistry program can't and won't tell you about the Bell inequality. It's an entirely different system and problem. They won't calculate scattering cross-sections or do your taxes, for that matter. Also, it's not the calculations themselves which are in dispute.

It's "Heisenberg", not "-burg" (since you seem to think spellings are of great importance). The uncertainty principle is not a simple experimental limit; it's a theoretical limit. This is explained in every textbook, and in a thread in this forum once a month or so.

You can produce _exact_ pictures of hydrogenic orbitals simply by plotting the spherical harmonics, since that's what they are. But they won't give you a good approximation of any multi-electron system.
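A quick numerical sketch of that last point, using only the textbook form of the l = 1, m = 0 spherical harmonic (|Y_1^0|^2 = (3/4pi) cos^2(theta), the angular density of a p_z orbital) and checking that it integrates to 1 over the sphere:

```python
import numpy as np

# Angular density of a p_z orbital from the closed-form spherical
# harmonic: |Y_1^0|^2 = (3 / 4 pi) cos^2(theta).  Plotting this (or any
# Y_l^m) gives the exact hydrogenic angular shapes.
theta = np.linspace(0.0, np.pi, 2001)
Y10_sq = (3.0 / (4.0 * np.pi)) * np.cos(theta)**2

# normalisation check over the sphere: dOmega = 2 pi sin(theta) dtheta
dtheta = theta[1] - theta[0]
norm = np.sum(Y10_sq * 2.0 * np.pi * np.sin(theta)) * dtheta
# norm comes out ~1, confirming the density is correctly normalised
```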
How to calculate lifetimes of excited states is also in every textbook. Fermi's Golden Rule, look it up. The Schrödinger equation does not have even a superficial resemblance to the equations of the Bohr-Sommerfeld models, in my opinion. Beyond that, I won't address your 'independent research', since this forum has clear rules about that stuff. And that's pretty much all I have to say on the topic.
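For reference, the two textbook results being pointed at here can be written out explicitly (standard notation: H' the perturbation, rho the density of final states, d the electric-dipole matrix element):

```latex
% Fermi's Golden Rule: transition rate from |i> to |f> under perturbation H'
\Gamma_{i \to f} = \frac{2\pi}{\hbar}
    \left| \langle f | H' | i \rangle \right|^2 \rho(E_f)

% Applied to spontaneous emission (dipole approximation), this yields the
% Einstein A coefficient, and the mean excited-state lifetime follows:
A_{i \to f} = \frac{\omega^3 \left| \langle f | \mathbf{d} | i \rangle \right|^2}
                   {3 \pi \varepsilon_0 \hbar c^3},
\qquad
\tau_i = \Big( \sum_f A_{i \to f} \Big)^{-1}
```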


#8
Jun21-10, 09:29 PM

PF Gold
P: 291

"Nobody yet has a flawless simulator...."
Although I won't dispute this, I write animation software in timeframes/distance-scales from microseconds/nanometers to show photon size and movement, attoseconds/femtometers for electron motion, and yoctoseconds/yoctometers for nuclear decay of neutrons under a collection of different physical forces. You may require several animations specifically to account for distance and time scales (large scale and/or small scale). Have you given any thought to distance and time scale for your simulation? Some view this type of animation as "speculative" and do not like the discussion here, but I would be happy to discuss it if you would like to send me a message or a sample picture of what you have in mind. Regards


#9
Jun22-10, 11:39 PM

P: 263

I think the photon scale and the electron scale are interchanged? I was planning on using my electronics-simulator base as-is for the project at first, and modifying it later for improvements. There is a saying: "a bird in the hand is worth two in the bush."

The electronics simulator auto-adjusts timesteps depending on interaction values; that is, the computed timesteps become larger as the path(s) become more predictable / repeat past history, and smaller where a small change can cause a large instability. The algorithm is complicated but, like a video-game drawing algorithm, is carefully optimized in the inner loops. Roughly speaking, it allows me to simulate around 1000+ items in a field, and easily 100,000+ nodes in electronic circuitry, on a modest single-core Celeron processor (SSE2/SIMD); although the computation speed drops exponentially with *randomness (non-repetitive change), and conversely accelerates with the repetition of similar events/changes. (*Randomness within a steady state is not what I mean.) The details of the algorithm are beyond the scope of quantum physics, obviously, and are a programming/data-structures nightmare; but it's extremely powerful and fast compared to alternative solutions used in the electronics-simulator industry (e.g., typically Laplace transformations).

At heart, I have opted to compute quickly and use a statistical method to do quality control. If you are familiar with industrial process control, the method is essentially the same kind of thing. In the electronics realm, I specify the timestep I wish to output images/plots at, and the simulator discards all intermediate states between frames to be drawn. This eliminates the highly variable timestep plot problems required for efficient and accurate calculation of events. In electronic simulation, the timestep typically varies from milliseconds down to picoseconds. A crude estimate for (super)atomic phenomena is in the range of milliseconds down to sub-attosecond, 0.01 as (YUCK).
I haven't given much thought to the timesteps I will output; outputting every subtle change doesn't make a whole lot of sense, as a large number of small changes are (typically) required to make a long-term, macroscopically noticeable change. For right now, I plan on just hand-setting the timesteps, as I do in electronics simulation, for different regions of interest. E.g.: large timesteps which allow me to ignore the development of the initial state (typical state) of the system, but still do a sanity check; and then focus the timestep down in regions where interesting things are happening that I am trying to understand better.

1) The simulator itself determines how to focus CPU power computationally on different regions of space. This is what allows the simulation to proceed at much quicker rates than would be possible if all points were computed with the same timestep everywhere in parallel. Items of circuitry / space which are sufficiently decoupled to introduce only small errors when the update rate of *changes* in interaction is reduced become time-wasters each time a state display occurs. Of course, blindly printing every timestep automatically chosen by the simulator is the absolute worst time-waster; e.g., 1e6+ times slower....

2) In electronics, the time-evolving waveforms are often important; so much so that they become unintelligible if the timestep varies. So I am forced to plot at a maximum, fixed timestep rate if I wish to easily extract information about the evolution of the waveforms in time. That is the way my simulator works now; and to keep the CPU time down when the simulator has to reduce the timestep for accurate results, intermediate steps are discarded from the plot to save time.

To keep the problem tractable once I get to large (65-100 atom) systems, I plan on having the simulator ultimately record changes in state and not changes in time; and ultimately, I would like to be able to specify which states are of interest, to reduce the data even more.
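The two ideas above (error-driven adaptive timesteps, plus decimating output to a fixed plot interval) can be sketched in a toy form. This is NOT the author's production algorithm, just a minimal step-doubling controller on the test system y' = -y:

```python
# Toy sketch of (1) adapting the timestep from a local error estimate
# (step-doubling) and (2) recording state only at a fixed plot interval,
# discarding intermediate adaptive steps.  Test system: y' = -y.

def step(y, h):
    """One explicit-Euler step for y' = -y."""
    return y + h * (-y)

def simulate(y0=1.0, t_end=5.0, h0=0.1, tol=1e-4, plot_dt=0.5):
    t, y, h = 0.0, y0, h0
    frames = [(0.0, y0)]
    next_plot = plot_dt
    while t < t_end:
        full = step(y, h)                    # one step of size h
        half = step(step(y, h / 2), h / 2)   # two steps of size h/2
        err = abs(full - half)               # step-doubling error estimate
        if err > tol:
            h *= 0.5                          # unstable region: refine
            continue
        y, t = half, t + h                    # accept the better estimate
        if err < tol / 10:
            h *= 1.5                          # quiet region: coarsen
        if t >= next_plot:                    # decimated output only
            frames.append((t, y))
            next_plot += plot_dt
    return frames

frames = simulate()   # ~11 frames instead of hundreds of internal steps
```

The accepted solution tracks exp(-t) to within a few percent while the plot list stays small, which is the point of separating the solver's internal timestep from the output timestep.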
I do have open questions about how much information to save at each plot point; but I can't solve those in my head. I have to actually experiment to see what is possible. The space is, unfortunately, limited by the nature of the simulation, cache memory, main memory, etc. I won't be able to simulate more than a few cubic microns of space, and only 1000-2000 or so electrons/protons/etc. within that space. (Presuming the questions I am trying to get answered do not significantly degrade the estimates.... engineering :) )

Why would some consider it "speculative"? Attosecond laser pulses are regularly used to excite atoms into the wave-packet superposition of states which mimics classical motion. That's experimentally done, and I wasn't aware that the people doing this were violating any principle of quantum physics. I know that when I first heard of the frequencies they were talking about, and "attosecond", the uncertainty principle crossed my mind. For the purposes of simulation, though, these small times replace the differential element in calculus; and anyone who uses calculus to work with Schrodinger's equation is automatically guilty of using small times as well, for a mathematical purpose if not a physical one. Hmmm... separation of church and state, or ought it be calculus and state... hmmm.... I wasn't planning to get into that. And from the other response I got, I need to correct the extra words being put in my mouth... Andrew.


#10
Jun23-10, 05:09 AM

P: 263

Alxm,
I have tried a few times to pen a response, but I simply don't post, because the response is so much longer than your post. There are overlapping issues in what you are remarking about, and clearly some confusion about what I actually said versus what I appear to have said. So rather than reply head-on, I hope I am able, in this way, to emulate your very compact response style, which I rather wish came naturally to me.

[quote]How to calculate lifetimes of excited states is also in every textbook. Fermi's Golden Rule, look it up.[/quote]
I use Fermi's rule for understanding stimulated emission events. If what you are saying isn't nuts (and I do admit you have an IQ), then it appears to imply that there are no such things as truly "spontaneous" emissions. If that's the case, my simulator already takes care of the issue, and I am wasting time asking about it. If your comment is a mistake, let me know; otherwise I need to change the question.

[quote]Beyond that, I won't address your 'independent research' since this forum has clear rules about that stuff. And that's pretty much all I have to say on the topic. The Uncertainty Principle is not a simple experimental limit, it's a theoretical limit. This is explained in every textbook and in a thread in this forum once a month or so. A quantum-chemistry program can't and won't tell you about the Bell inequality. It's an entirely different system and problem. They won't calculate scattering cross-sections or do your taxes, for that matter. Also, it's not the calculations themselves which are in dispute.[/quote]
If that is all you have to say, then the thread won't be cluttered up with more of this. I haven't taken sides in any absolute way on controversial issues. In the last (and only other) thread I ever posted on these forums, it took the science advisor several pages to actually come up with an answer that made any sense. I still shake my head that he could have possibly missed that using a 1024+ digit calculator means one is serious about verifying something, and not just 'estimating'. Your response gives me hope that you have more awareness than average. Beyond that, I will simply remark that the research I have cited thus far in the thread isn't mine. Secondly, it was searches on the physics forums and recommendations of "science advisors" and, more important, "physics mentors" which led me to follow the links to these controversial issues in the first place.

* up
[quote]Yes, QC methods were being used prior to electronic computers, in the late 20's and 30's. Newton didn't have any problems developing his algorithm for root-finding without one either. Computers WON'T 'do it for you'. That was the point I was trying to make. Most chemists in most situations do not use quantum chemical software at all. Most chemists don't know how to use quantum chemical software. The fact that a method is exact does not mean a given implementation of it is necessarily numerically stable in every case, or that code is bug-free.[/quote]
* same
[quote]You can produce _exact_ pictures of hydrogenic orbitals simply by plotting the spherical harmonics, since that's what they are. But they won't give you a good approximation of any multi-electron system. The Uncertainty Principle is not a simple experimental limit, it's a theoretical limit. This is explained in every textbook and in a thread in this forum once a month or so.[/quote]
My respect for you went up a notch or stayed the same with each of these comments. I agree with them, and always did, although that may not be obvious at first reading of my past comments. They do not affect why I am doing anything, though.

[quote]The fact that a particular implementation might not work (perhaps due to user error), does not invalidate the theory behind it in the slightest. I do deny sigma and pi bonds exist in molecules. Orbitals as well. These are all abstract concepts derived from various theoretical models, not real physical things. Force-field methodology doesn't have anything to do with 'sketching' anything. It's a semiclassical physical model, not a single scalar or vector field or anything of the sort.[/quote]
Gee. In one line: my "hand" is also an abstract concept built upon sigma and pi abstract concepts, and I can draw my hand with my hand, which does not invalidate my theory of what a hand, or a sigma or pi bond, is, in the slightest.

[quote]All DFT methods in practical use for molecules are more accurate than the Hartree-Fock method. Obviously you don't know the relative size of dispersion effects if you think that would be a deal-breaker. The Schrödinger equation does not have even a superficial resemblance to the equations of the Bohr-Sommerfeld models, in my opinion.[/quote]
OK. You're entitled to your opinion, even when it has almost nothing to do with what I said. Sommerfeld's modification is not Bohr, and de Broglie only interpreted, but did not change, Bohr's theory. Nor can I prove or falsify what you said, so I'll join it: since I don't know the cause of what I witnessed, better safe to include dispersion than sorry I didn't. BTW: the experiment I am trying to understand has nothing to do with cold fusion or Blacklight Power, etc. It is purely related to the change in economics over the last 100 years as to what processes are economically viable and environmentally friendly.

[quote]It's "Heisenberg" not "-burg" (since you seem to think spellings are of great importance)[/quote]
! Then you admit we are on par, rather than you being so much more superior than I am?! Wow. I'm flattered.

[quote]You think it's reasonable to demand that they give you software that took them years to write, no strings attached? Crazy.[/quote]
! Touché! !
LOL: Say that again with a straight face after buying $50K software to solve a problem which you have only a week to solve before your competitors take your client away; then discover that there are bugs in the software you have purchased; and after you hold hands with tech support to solve their mistake for free (which only they can actually fix in the code), they turn around and sell your fixes to your competitor and refuse to pay you a dime. (And that is what the nicer of the companies will do. Murphy was an optimist.) I am laughing at the notion that you might think you will convince anyone that *refusing* to spend money is crazy, or that a refusal to buy is equivalent to *DEMAND*ing something for nothing???

By the way, thank you for giving *some* of your time away free. I am thankful to Dr. Young (no, the name is not a 'simple' coincidence) for selling me a college physics education, and allowing me to use the knowledge after class as public domain, including his comments. If one (esp. you) is crazy for giving away, or even thinks so, I have the names of some very good psychiatrists with multiple degrees and understanding of other subjects. They do interview before accepting clients, and not many get accepted, but you might get accepted if you are lucky.


#11
Jun24-10, 02:55 AM

PF Gold
P: 291

1. Units: femtoseconds (10^-15 s) and nanometers (10^-9 m) to show photon size and movement. With a viewing width of 10 micrometers and a steptime of 10 femtoseconds, a 600-nanometer photon will bounce back and forth on your screen at a comfortable rate when viewed at 30 frames per second. Electrons and protons will not show any motion.

2. Units: attoseconds (10^-18 s) and femtometers (10^-15 m) for electron motion. With a viewing width of 10 angstroms and a steptime of 10 attoseconds, an electron with 10 eV of kinetic energy will move through your screen at a comfortable rate of about 2 picometers/attosecond; photons travel too fast to be seen at 300 picometers/attosecond, and protons (heavier) don't really move at all over this timeframe.

3. Units: yoctoseconds (10^-24 s) and attometers (10^-18 m) for nuclear decay of neutrons. With a viewing width of 10 femtometers and a steptime of 1 yoctosecond, a down quark changes to an up quark and emits an electron and an anti-electron neutrino. The W boson would be visible for less than a yoctosecond, and the neutrino would float off at a comfortable pace.

You mention three animations: Bohr magneton, orbital transitions, and the Stern-Gerlach experiment. I will give some thought to time and distance scales and post later. You may well have something in mind already; if so, please post.

With regard to changing timescales: I don't see how this is done accurately, as certainly the locations of particles will not change when you change the timescale, but the momentum and momentum direction do change, and I don't see how they can be recalculated without going to an n-body problem... I find it saves time to do two or more animations, one at the slow rate and a different one at the fast rate.
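The regimes above can be sanity-checked in a few lines: for each one, compute how far the fastest object of interest moves per animation step, as a fraction of the viewing width (the numeric choices mirror the post's examples):

```python
# Sanity check of the animation scale regimes: fraction of the viewing
# width an object crosses per simulation step.
C = 3.0e8                                   # speed of light, m/s

def screen_fraction(speed_m_s, step_s, view_m):
    """Distance moved in one step, as a fraction of the viewing width."""
    return speed_m_s * step_s / view_m

# photon at the femtosecond/nanometer scale: 10 fs steps, 10 um view
photon = screen_fraction(C, 10e-15, 10e-6)          # -> 0.3 of the screen

# 10 eV electron at the attosecond scale: 10 as steps, 10 angstrom view
E, m_e = 10 * 1.602e-19, 9.109e-31                  # joules, kg
v_e = (2 * E / m_e) ** 0.5                          # ~1.9e6 m/s
electron = screen_fraction(v_e, 10e-18, 10e-10)     # ~2% of the screen
```

The numbers agree with the post: the photon crosses the 10 um view in a few frames, and the electron moves about 2 pm per attosecond across a 1000 pm view.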


#12
Jun 25, 2010, 04:53 AM

P: 263

At this resolution, in free space: c = 3.0x10^8 m/s, so the wavefront will step a distance of 300 nm per femtosecond. If the screen width is 10 μm, that means steps = 10/0.3 = 33.3 frames, so it takes around a second to cross the screen; if a wavefront simulates in dispersive media with a slower rate of travel, this would indeed be comfortable. #2 I will just let sit; #3: wow. At least I don't think I will have to worry about that time reference...

[quote]You mention 3 animations: Bohr magneton, orbital transitions, and the Stern Gerlach experiment;[/quote]

Sort of. I want to be able to simulate the effect of these three things, but that isn't always the same as graphing/animating them directly. E.g.: I don't know what a Bohr magneton would look like up "close". I do know what the electric field would look like a short distance away from an electron/proton emitting a magnetic moment. In general, one is merely graphing the EM field, or a compressed sketch of it over all space; but by watching the extrema points of the plot, one can trace a path, whether "jumping" or continuous, to locate the results of an experiment. In particular, if a possible simulation of the Bohr magneton is occurring, the Pauli exclusion principle ought to occur automatically without introduction of a second matrix or any extra data structures. So graphing the magneton itself is not necessary; just graphing the EM field, and the orbits taken in a helium atom, ought to show whether or not the simulation is working correctly. What is more significant is that the simulation is going to have to deal with photon/EM disturbances at every step of the simulation (not all such disturbances are measurable photons). If one is watching an "electron" probability cloud develop, then there will be "random" noise due to photons and EM disturbances in transit which are moving too quickly to be watchable in the animations. 
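The frame arithmetic at the top of this post can be scripted as a minimal sketch, assuming 1 fs of simulated time per displayed frame (the value implied by the 300 nm/frame figure):

```python
# Frame-count arithmetic from the post: a wavefront advancing 300 nm per
# simulation step (c = 3.0e8 m/s, 1 fs per displayed frame, an assumption
# here) crossing a 10 micrometer wide view, played back at 30 fps.

C = 3.0e8              # speed of light, m/s
step_time = 1e-15      # 1 femtosecond of simulated time per frame
view_width = 10e-6     # 10 micrometers, m

advance = C * step_time          # 3.0e-7 m = 300 nm advanced per frame
frames = view_width / advance    # frames needed to cross the view
seconds = frames / 30.0          # wall-clock time at 30 fps

print(f"{frames:.1f} frames, {seconds:.2f} s to cross the screen")
```

This reproduces the 33.3 frames / roughly one second figure in the post.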
I was considering sampling the location (or volume of most probability) for an item (electron/proton/etc.) and accumulating it on a quasi-static probability graph rather than attempting to animate a probability over all space. E.g.: keeping track of the last 1000 positions of each item and varying intensity with the number of times a volume is entered in the simulation. The amount of data which needs to be output at each timestep is significantly reduced that way, and one is able to get an idea of what the probability field looks like locally with respect to each item. Part of my goal is to be able to output standard ball-and-stick models as well in the future, along with sketches of MOs, whether done from the probability angle or from the equivalent charge-density angle (EM field intensity), which is subtly different, and perhaps an even more accurate picture than the HF charge/time average approximation. An orbital transition will not look like anything in such a simulation. The position will not change except in subsequent time, so the "view" one gets of an orbital will depend on how long an electron remains stable in it. The longer it is there, the more completely the probability will be graphed. Each item has a total energy. When you mention an n-body problem, you are correct. I am making an assumption about how to solve it based on the identical nature of the fields interacting from item to item. There is no separate field for each electron, proton, etc., but one composite EM field for all of them. In order to simulate, I must provide an operator based on the local EM field which causes the item being simulated to *statistically* move to places which will correctly solve the Schrodinger equation in a probability-wise fashion. A single run of the simulator may not provide a complete probability field for graphing the reaction progressing, but the simulator can be rewound and segments run again with a changing random sampler (and again, and ...) to improve the probability field resolution. I think R. Feynman invented/did something similar with path integrals, but I can't remember for certain off the top of my head; it has been too many years. The path integral was Dr. Young's favorite tool. I might do well to simulate a 1D particle in a box for you to show you what I mean. (It will be a static gnuplot graph for now.) I am not certain if I can leave out the Bohr magneton and still have the model I have in mind predict correctly, but after vacation next week I'll give it a try and see what happens. A picture is worth a 1000+ words.... eh? Andrew. 
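A minimal sketch of the position-accumulation idea, using the 1D particle in a box mentioned above as the test case. The rejection sampler here merely stands in for a real simulator's dynamics; the histogram bookkeeping is the point:

```python
import math
import random

# Sketch: instead of animating a probability field over all space, draw
# sample positions consistent with |psi|^2 and accumulate them in a coarse
# histogram ("last 1000 positions" idea from the post). Test case: ground
# state of the 1-D particle in a box, psi^2 = (2/L) sin^2(pi x / L).

random.seed(0)   # reproducible run; a real simulator would vary the sampler

L = 1.0          # box width (arbitrary units)
N_BINS = 20
N_KEEP = 1000    # number of recent positions to keep per item

def sample_position():
    """Rejection-sample x in [0, L] from psi^2 of the n=1 state."""
    while True:
        x = random.uniform(0.0, L)
        p = (2.0 / L) * math.sin(math.pi * x / L) ** 2
        if random.uniform(0.0, 2.0 / L) < p:   # envelope = peak density 2/L
            return x

history = [sample_position() for _ in range(N_KEEP)]

bins = [0] * N_BINS
for x in history:
    bins[min(int(x / L * N_BINS), N_BINS - 1)] += 1

# Crude ASCII intensity plot: occupancy should peak at the box center.
for i, n in enumerate(bins):
    print(f"{i * L / N_BINS:4.2f} {'#' * (n // 5)}")
```

Repeated runs with different seeds sharpen the accumulated picture, which is the "rewind and run segments again" idea above.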


#13
Jun 25, 2010, 01:49 PM

PF Gold
P: 291

A couple of questions and comments.
Regarding the Stern-Gerlach experiment: If you wanted to do an animation with a viewing width of 10 meters (to show the equipment), what timestep would show electrons moving across your screen at a reasonable speed? I.e., if I had a laser gun pointed over your shoulder checking the electrons' speed, how fast would they be going? Clearly the spin-up electrons move one way and the spin-down electrons move the other way based on the magnetic field, but I don't think they all move "exactly" the same. Is there research that shows the pattern of the distribution of electrons that show up over time, or do they indeed all move the same up or down amount?

Regarding orbital transitions: A viewing width of 10 angstroms produces the kind of picture you would see of "real atoms", like the IBM pictures made with scanning tunneling microscopes; what you don't get is the timescale. A timescale of femtoseconds works well to view things like proton/nucleon vibrations, but even low-energy "unbounded" electrons are moving way too fast to see. A timescale of attoseconds allows you to build your electron probability clouds for the frame and deal with a free electron floating by that you know, because of its momentum, will be somewhere else in the next few frames. This timescale has the same problem you talked about where photons will seem to appear out of nowhere.

Regards 


#14
Jun 27, 2010, 03:03 AM

P: 263

[x,z] data points at the end of many experiments would give a statistical sample equivalent to the SG photograph. Note the MIT experiment uses a slightly different shape of electromagnet than SG; I believe SG's is more of a triangular wedge near a half-circular socket. MIT's is on p. 17, from a Google search of Lab 18, MIT and Stern Gerlach: web.mit.edu/8.13/www/JLExperiments/JLExp_18.pdf I'm going to answer these next two questions out of order, as I think I may be clearer that way. (Despite wikipedia's present fictional monopole account.... which is "mathematically" equivalent, supposedly, but whoever wrote that makes me groan....) A dipole's detectable field scales as r**3, where r is somewhat ill defined, but as one gets farther away from the source(s), the error in r from variations in dipole shape contributes less and less to the overall values. I think (but am not absolutely certain) that much of the geometry distortion of a dipole is primarily in higher terms (r**4, etc.). The primary meaning of my comment is that there are several unknowns affecting the field of an electron which become more important as one computes closer to its location. These unknowns are not necessarily a result of the Heisenberg uncertainty principle. In general, the idea that an electron is a physical particle capable of "spinning" on an axis is discouraged; and although I am not certain which experiment(s) disprove the idea altogether, I seem to remember someone mentioning historical experiments that disprove any "radius" being assigned to an electron (either detected or not...). But if an electron is not a particle with spin, then it must still at least be a distribution of electric field and time; the latter idea is true whether or not an electron is a physical particle. See p. 56 for a photo of the actual silver atom pattern on the film target: http://www.owlnet.rice.edu/~hpu/cour...rnGerlach.pdf A couple of things to note about the photo. 
The "ideal" silver dipole would be located at the center (x axis in the photo) for one launched non-skew into the magnetic field. It is precisely here where a spike develops in the x direction; that spike is generally reasoned to have something to do with the non-symmetry of the pole faces on the magnet, although no explanation I have read is really satisfying, since a mathematical analysis of SG's actual magnet is not done. The other side of the plot is flattened, and that is typically described as being because of the silver atoms physically hitting the rounded part of the magnet cavity. I didn't see a plot for the MIT version of the experiment, which uses a magnet shape that ought not have that spike, either. The only obvious clue pointing to the anomalous nature of the spike is that it is asymmetrical; the x axis was oriented vertically in the actual experiment, so the spike is even against gravity, if I recall correctly. But if the explanation of the other side's thinness is truly physical clipping, then there is no way to be certain the spike would not have shown up experimentally due to the field shape if the atoms had not hit the surface of the magnet. One really needs a predictable *linear* variation in magnetic field strength over space to be able to effectively correct for physical manufacturing limitations of the slits, etc., and SG doesn't really have that. Comparing the demagnetized plate to the magnetized one, there is a fairly strong indication that the plates were sent on the postcard willy-nilly, with no definite orientation. (The mirror-image reticule pattern on the right image reinforces that notion...) The demagnetized version ought to be a purely flat line, but isn't. In the lower half of the line a clear broadening of the pattern can be seen, which, if the magnet is truly off, can only be attributed to the shape of the slits letting the silver out, or to the position of the silver oven behind the slits biasing the source statistically. 
In any event, on careful inspection I believe I see trace widening on the upper half of the pattern in the magnetized view, suggesting that the flat-line slide is actually rotated 180 degrees from how it was taken in the other photo, with an optional mirroring on top of that... Using that information as a corrective measure, and knowing that the slit is horizontal, one can estimate the amount of classical skew in trajectory a silver atom would have (incoming y and x angles with respect to the demagnetized photo, in contradistinction to a perfect 90 degree angle); and since the size of the slit is likely large compared to the wavelength of the silver atom, these classical trajectories ought to be fairly accurate. Each point (y on the photo) then has a definite range of angles (source dx/dz, dy/dz) from which the silver atom could have come, and the effect of this distribution needs to be calculated and compensated for to determine what an idealized simulation ought to look like. Generally, there ought to be fewer angles of approach as one gets closer to the edges of the slit (y on the photo), although dx/dz will be fairly constant across the whole slit except where the demagnetized photo shows thickening. So, I think the most accurate part of the magnetized photo will be the lower right quadrant, taking the approximate center of symmetry of the shape as the origin. Using a straight edge, the line widening increases very linearly as one gets closer to y=0 (e.g. the rightmost trace / lower right quadrant), until extremely close to the magnet center, where the field of the magnet becomes less known / near the surface of the magnet. So, there are two (ideally) linear effects with approach toward the center: 1) the width of the trace, and 2) the offset of the trace from the picture's y axis. 
The offset is explained by the experiment itself; e.g. the magnetic field intensity gradient varies approximately linearly with x offset in the photo in SG (and linearly in the MIT version, according to the MIT text). The widening of the trace, however, is something whose theoretical behavior I don't know, and it is caused by "all other differences" in the electron, including any Heisenberg and QM interference effects. If only the electron's position [delta x, delta y] were affected, I could more easily separate out what causes the thickening; but as there is also skew, the problem isn't solvable qualitatively in my head. The variation in trace width, then, is something I don't know whether it would replicate in an idealized experiment or not, and I will need to work that out before verifying my simulation against SG. If I am able to correct (reduce data) in the photo by mathematically modeling the skew, other features might become visible which presently are not. But to really do this correctly, the best source of information would be a run of the MIT experiment with computer-usable data points to operate on. I'll have to look around and see if anyone has posted a good run, and if not, perhaps I will put some thought into reconstructing the experiment. I have the vacuum equipment, some very pure microcrystalline-level homogeneous iron, and the machining equipment to reconstruct the magnet, along with motorized micrometers (robotic); but that would take quite a bit of time and effort for me to put together in my present state... if anyone knows of an online data source, I would appreciate it. I'll probably post a bit more tomorrow... I am still trying to resolve a second question (or questions) concerning the time/intensity distribution that an electron dipole has, by focusing on what a quantized magnetic moment actually means, both classically (the limit) and quantum mechanically. 
I will need to think out loud, and perhaps will make some mistakes when doing so, but hopefully the sharp eyes of others will be helpful there, and I can get enough of an answer to my second question(s) to model it reasonably well for simulation purposes. Andrew. 


#15
Jun 27, 2010, 05:21 PM

PF Gold
P: 291

Thank you for the post and link: web.mit.edu/8.13/www/JLExperiments/JLExp_18.pdf, this helps the discussion a lot. From the figures, it appears they shoot potassium atoms through about a 0.5 cm channel of a 10 cm long magnet and check if they hit a 4 mm palladium wire that is moved around. An animation width of 1 meter sounds good, but I remain stuck on the timeframe that would show, on your computer screen, atoms emerging from the oven (say one per quarter second) and hitting the wire within a second or two...
The speed of the potassium atom appears to depend on the oven temperature, which drives the vapor pressure, which determines the speed of the atom. I have assumed here that the speed is important, since a slow potassium atom spends a longer time under the magnet than a fast atom and hence would be bent more. This would seem to suggest that even a small adjustment in temperature would cause the atoms either to spread out too much and be undetectable, or not to spread out at all. Perhaps I am missing something there, or maybe the temperature of 190 degrees produces a consistent enough speed that you do not see the effect, or something else... Anyway, thank you for the info; if I can nail the speed of the atoms, I visualize a pretty nice first animation showing an oven blasting out potassium atoms (with a setting for temperature) at a particular speed??? and rate??? and over a timespan (microseconds???) you would see atoms hit a wire after being bent by a settable magnetic field, with the proper bending of spin-up and spin-down electrons. The nice thing about starting the animation at this level of time and space magnification is that it's pretty clear what happens at this scale, and there is no need to probe the details of what the atoms actually do or look like. Probing a more detailed time or distance scale will be more difficult. 
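For what it's worth, a rough Maxwell-Boltzmann estimate of the beam speed can be scripted. This is a sketch only: the 190 degrees is assumed here to be Celsius, and the speed distribution in an effusive beam is actually flux-weighted and somewhat faster than the bulk most-probable speed, so treat it as order-of-magnitude:

```python
import math

# Rough estimate of the potassium beam speed from the oven temperature
# mentioned above (190 degrees, assumed Celsius), using the most probable
# Maxwell-Boltzmann speed v = sqrt(2kT/m).

K_B = 1.381e-23          # Boltzmann constant, J/K
AMU = 1.661e-27          # atomic mass unit, kg
m_K = 39.1 * AMU         # potassium atom, kg

T = 190.0 + 273.15       # oven temperature, K
v_mp = math.sqrt(2 * K_B * T / m_K)

# Time to traverse the 10 cm magnet channel at that speed:
t_magnet = 0.10 / v_mp

print(f"most probable speed ~ {v_mp:.0f} m/s")
print(f"transit time through 10 cm magnet ~ {t_magnet * 1e6:.0f} microseconds")
```

This comes out near 440 m/s, with a transit time of a couple hundred microseconds, which at least pins down the "microseconds???" scale asked about above.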


#16
Jun 30, 2010, 08:06 PM

P: 263

The experiment requires the student to wait a long time for the temperature to stabilize, otherwise the data will be in error; so I am fairly sure that small variations in temperature affect speed significantly. The divergence (gradient) of the magnetic field is what causes a vertical accelerating force to appear on the magnetic dipole in the first place, and that gradient is independent of speed. The traveling speed of the dipole, then, has nothing to do with the vertical force; e.g. the overall charge is neutral, and the + and - are both moving in the same direction, so even the Hall effect does not occur. However, the + and - charges would want to be deflected in opposite directions horizontally in proportion to the speed with which the atom travels through the B field, and I am sure (classically) that would cause the unpaired electron and remaining nuclear charge to align on a horizontal line with respect to each other, in a plane 90 degrees to the magnetic field direction. There is, then, going to be a vertical force vector independent of speed, and a horizontal one which is counterbalanced by electrostatic attraction. Since velocity is the integral of acceleration (on average F/m), and deflection is the integral of velocity, the deflection depends on the time spent in the magnetic field. Even at a constant temperature, speeds will vary statistically. These variations will cause (add to) trace-width spread in the photos shown, and also a (small) skew angle, which does the same thing. The lab sheet may contain enough information to compute the speed variation, and one can compute a percentage of average time within which, e.g., 99.8% of atoms will fall; the spread of the atoms (not including skew angle) due to this variation in time will scale with the total deflection. And since the deflection grows as the square of the transit time, a small % error in time produces roughly twice that % error in y deflection. That will, unfortunately, complicate the simulator enough to seriously increase the time it takes me to implement it, as my present simulator does not have that capability. +1 to the todo list. 
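The transit-time argument can be sketched numerically. The field gradient and beam speed below are assumed, illustrative values, not numbers from the lab sheet; the point is the scaling, in particular that a 1% speed error shows up as roughly a 2% deflection error because deflection grows as the square of the transit time:

```python
# Sketch of the deflection-vs-transit-time argument. Assumed values:
# dB/dz = 100 T/m (illustrative Stern-Gerlach-scale gradient) and a
# 444 m/s potassium atom; the atom's moment is taken as one Bohr
# magneton (single unpaired valence electron).

MU_B = 9.274e-24         # Bohr magneton, J/T
AMU  = 1.661e-27         # atomic mass unit, kg
m_K  = 39.1 * AMU        # potassium atom, kg
dBdz = 100.0             # assumed field gradient, T/m
L    = 0.10              # magnet length, m

def deflection(v):
    a = MU_B * dBdz / m_K        # vertical acceleration: independent of v
    t = L / v                    # time spent in the field
    return 0.5 * a * t * t       # deflection accumulated inside the magnet

z0 = deflection(444.0)
z1 = deflection(444.0 * 1.01)    # atom 1% faster -> 1% less time in field

print(f"deflection        : {z0 * 1e3:.3f} mm")
print(f"change for +1% v  : {(z1 / z0 - 1) * 100:.2f} %")
```

With these assumed numbers the deflection lands around a third of a millimeter, and the +1% speed case comes out close to -2%, the t-squared doubling.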
More will be coming about the "spin" of an electron when I have a chance this evening, along with the question(s) I have about how to work with it, and a short summary of what I am presently considering. (Next steps.) 


#17
Jul 1, 2010, 10:12 AM

P: 263

OK. Fell asleep on vacation ... and fell asleep last night early.... and this morning again .. but I'm back.
A brief review of the ideas I need to build the simulator: So far I have spoken about orbital transitions, which have nothing to do with *where* an electron (or other item) is at the instant before the state change and after. Only the probability of where it is found in the future can change, without the "actual" position changing for a single run of the simulation at the instant of state change. So one may think of a simulated orbital change (state change) as a change in the statistical rule about where the item will diffuse in the future. Alexm ties "spontaneous" decay of excited-state orbitals to a "disturbance" in the EM field by invoking Fermi's golden rule. This idea is new to me, for in semiconductor physics the mechanism of spontaneous decay is *NOT* explained; rather it is pragmatically considered an empirical property of the semiconductor which is modified by disturbances in the EM field (e.g. in a perfect crystal it would be empirical, but in a doped one, the lifetime of an excited state is the empirical value *decreased* by the probability that a fermion will diffuse into a defect in the crystal, causing a stimulated decay of the excited state). Since no one has contradicted Alexm, his opinion stands as the method I will attempt to use for simulation. The remaining issue I have to deal with is the "spin" state of an item. This state is what causes an "anomaly" in emission spectra similar to fine structure (I haven't studied this, so I am speaking in general), based on the magnetic state of individual protons in the nucleus coupled to the magnetic state of the light-emitting electron. The anomaly is extremely small since the magneton is affected by the mass of the item, and protons, being relatively heavy, have roughly 1/1836 times the effect of an electron. 
I have no engineering text describing the exact nature of various experiments about the Bohr magneton, and guidance/input from those who might know of experiments/articles to review would be appreciated; but I do know that the spin of an electron has a mechanical moment, from an experiment done by Einstein and a colleague (the Einstein-de Haas experiment), and I do know that the orientation of the magnetic dipole in space can change. Since classical angular momentum is found experimentally when dealing with the Bohr magneton, along with non-classical quantum effects, I am simply going to write down what I think I know and hope for correction, or at least suggestions of what I might be oversimplifying, if anything. A spinning top (mechanical moment/dipole) which has force (torque) applied to reorient the axis of rotation will resist reorientation by precessing about its rotation axis. Gyroscopic formulas tend to focus on the period of precession, and I am unaware of how one calculates how much a top will reorient its axis of rotation with respect to time and torque; nor whether such a reorientation is possible with a frictionless device... I assume reorientation is possible even in a frictionless environment, since permanent magnetism in non-superconducting material is (as far as I know) a purely spin-based phenomenon, and there is no friction that I am aware of for a spinning electron (there are no electrons that *DON'T* 'spin'). I suppose it is possible that quasi-Kepler motion of an electron around an atom (P orbital, etc.) might also contribute to the magnetic field, but everything I was taught in chemistry indicates that unpaired electrons are the sole cause of magnetism. So I am assuming that is the case for the moment. (Pun intended.) Also, I have seen the "orientation" of spin as being what supposedly changes in the NMRI/MRI effect. 
I have not seen any mathematics which explicitly show how, QM-wise, engineers came to predict that the magnetic field would "FLIP" when appropriate resonance radiation was applied to the protons in hydrogen (the typical NMRI target), although the energy required being proportional to a statically applied magnetic field does make sense. The energy required to reorient the magnetic dipole is readily computable from analogous situations in DC/AC motors used for power generation, the magnetic dipole rotor requiring work to flip orientation while in the magnetic field of the motor's stator. I am presuming that a rule akin to Fermi's golden rule needs to be used in this case (spin flip), but I have not come across in commonly available literature how NMRI is deduced, nor how to handle the QM states. Any pointers/example calculations which could be applied in the case of a non-static magnetic field would be appreciated, as large uniform magnetic fields are not what will be found in molecular simulation, but rather weak dipole moments from various items located near the "electron" or other simulated item of interest, which can dynamically reorient. A second issue that comes to mind, and which may or may not affect the discussion, is that Ampere's law and other effects computed using calculus assume a uniform magnetic field when taking the limit as area goes to zero (e.g. differential areas or volumes... whose consistent existence is questionable given the Heisenberg concept...). A magnetic dipole's strength is measured as current times the area enclosed by the current loop: m = I x A. However, making a large rectangular and planar loop of wire, applying a constant current to it, and then probing the field's strength with a compass has convinced me that the B field magnitude in the plane of the loop, and inside the loop, is stronger as one gets closer to the wire and weaker as one gets closer to the center of the loop. 
When replacing distributed currents with individual electrons, protons, etc., the magnitude of the current is replaced with the velocity of the charged atomic item in relation to the path length required to circle an "enclosed" area. For a single circulating charge, I = qv/(2*pi*r), so the moment becomes qvr/2; the product of circulation speed and loop radius is then constant if one is given a constant magnetic moment (e.g. the identity of the Bohr magneton for an electron). So the diffusion of the electron in space to create the magnetic moment must have an average value in an angular sense, and because of inertial reference frames, helical paths must also be possible, where the path is closed only on a moving reference frame. What I have stated here is the sum total of what I am certain of regarding magnetic moments; discussion of how these ideas fit with QM/Schrodinger's equation would be greatly appreciated. Andrew. 
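To put numbers on the current-times-area identity, here is a hedged classical sketch; the single-loop picture is an illustration of the moment's scale, not a claim about electron structure:

```python
# Classical single-loop picture of a magnetic moment: a charge e circulating
# at speed v around radius r gives I = e*v/(2*pi*r) and moment
# m = I * pi*r^2 = e*v*r/2, so v*r is the constant for a fixed moment.

E_CHARGE = 1.602e-19     # elementary charge, C
HBAR     = 1.0546e-34    # reduced Planck constant, J s
M_E      = 9.109e-31     # electron mass, kg
H        = 6.626e-34     # Planck constant, J s

mu_B = E_CHARGE * HBAR / (2 * M_E)   # Bohr magneton, ~9.27e-24 J/T
vr = 2 * mu_B / E_CHARGE             # the constant v*r product, m^2/s

# Sanity check: at the Bohr radius, the speed required is the familiar
# Bohr-model orbital speed (~2.2e6 m/s).
a0 = 5.29e-11
print(f"mu_B = {mu_B:.3e} J/T")
print(f"v at Bohr radius = {vr / a0:.2e} m/s")

# Spin-flip resonance scale for an electron in a static field B: the flip
# energy is 2*mu_B*B, giving f = 2*mu_B*B/h, roughly 28 GHz per tesla (ESR).
B = 1.0
print(f"electron flip frequency at 1 T = {2 * mu_B * B / H / 1e9:.1f} GHz")
```

The proton analogue (NMR) uses the much smaller nuclear moment, which is why clinical MRI frequencies sit near 42.6 MHz per tesla rather than in the tens of GHz.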


#18
Jul 1, 2010, 01:14 PM

PF Gold
P: 291

A quick sidenote, the June 10/2010 issue of Nature has a pretty good article:
"Electron localization following attosecond molecular photoionization", which starts with: "For the past several decades, we have been able to probe the motion of atoms that is associated with chemical transformation and which occurs on the femtosecond (10^-15 s) timescale. However, studying the inner workings of atoms and molecules on the electronic timescale has become possible only with the recent development of isolated attosecond (10^-18 s) laser pulses." The attosecond timescale will (I think) prove to be very important in the history of the electron.

Regarding your comments: The Lorentz radius is, "In simple terms, the classical electron radius is roughly the size the electron would need to have for its mass to be completely due to its electrostatic potential energy, not taking quantum mechanics into account", and comes out to about 3 femtometers. I think that for the electron's spin to come from mass spinning within a 3 femtometer radius, the surface would have to move at something like 100 times the speed of light; i.e., the electron looks too small to fit that much energy in. You commented "A dipole's detectable field scales as r**3, where r is somewhat ill defined" and "the product of area enclosed to velocity of circulating charge (or Efield disturbance), is then constant if one is given a constant magnetic moment". An important structure to consider is the twistor (one is animated here; ignore the bottom row, as it is a work in progress, and sometimes you have to click a couple of times to get them to run right). It has a defined axis, and it has the 720-degree spin (watch carefully as the blue dot goes through 360 degrees: the electron ends upside down and must go another 360 degrees to get right side up). It would pack a lot more energy in a lot less space and has kind of a "weird" layout of its dipole field... I have assumed spin flips do not apply to the SG experiment.
There is no reason why the valence electron would not simply align itself with the external field and slowly pull the atom in one direction or the other, depending on whether it was up or down. Don't know if this helps much, but I have to get back to trying to find the speed of the potassium atoms... 
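The "too small to spin that fast" remark can be put on a napkin as follows. The rigid-sphere moment of inertia is purely an assumption (no one claims the electron is a rigid sphere); only the order of magnitude is the point:

```python
# Back-of-envelope: take the classical (Lorentz) electron radius and ask how
# fast the surface of a rigid sphere that size would have to move to carry
# spin angular momentum hbar/2, using I = (2/5) m r^2 (solid-sphere assumption).

E    = 1.602e-19      # elementary charge, C
K    = 8.988e9        # Coulomb constant, N m^2 / C^2
M    = 9.109e-31      # electron mass, kg
C    = 2.998e8        # speed of light, m/s
HBAR = 1.0546e-34     # reduced Planck constant, J s

r_e = K * E**2 / (M * C**2)                 # classical electron radius, ~2.8 fm
omega = (HBAR / 2) / (0.4 * M * r_e**2)     # L = I * omega
v_surface = omega * r_e                     # equatorial surface speed

print(f"classical radius  : {r_e * 1e15:.2f} fm")
print(f"surface speed / c : {v_surface / C:.0f}")
```

With these constants the surface speed comes out well over 100 c, which is the usual argument against a literally spinning classical electron.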

