Developing a simple program to solve the diffusion equation

In summary, if you want to solve the diffusion equation in parallel, you might need a Krylov-subspace solver.
  • #1
libertad
Hi there,
I'm looking for an algorithm that describes the numerical solution of the diffusion equation (1D or 2D).
I've taken a look at some textbooks, such as the Hamilton and Henry ones, but I didn't find a simple method.
Does anybody know of one? Where can I find what I'm looking for?
Thanks.
 
Last edited:
  • #2
libertad said:
Hi there,
I'm looking for an algorithm that describes the numerical solution of the diffusion equation (1D or 2D).
I've taken a look at some textbooks, such as the Hamilton and Henry ones, but I didn't find a simple method.
Does anybody know of one? Where can I find what I'm looking for?
Thanks.
libertad,

It's trivial to write an algorithm to solve the diffusion equation. In 1-D, the diffusion equation will
couple together 3 adjacent zones - the diffusion term giving the leakage out the left face of the
center zone involves a difference between the flux in the zone to the left and the flux in the center
zone.

Likewise, the diffusion term giving leakage out the right side involves the flux to the right, and the
flux in the center zone. Therefore, the total balance equation couples 3 adjacent zones, left, center,
right for each center zone. If you write the equations for all the zones in matrix form; you get a
tri-diagonal matrix. You solve that with a simple Gaussian reduction - two sweeps - a forward
elimination of the lower diagonal, and a backward substitution that solves the equation.

If you have 2-D; then you couple together 5 zones - left, right, front, back, and center. Again in
matrix form; this yields a penta-diagonal matrix. Two forward elimination sweeps - one for each of
the now 2 lower diagonals; followed by a backward substitution.

Duck soup!
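
Here is a minimal sketch of that 1-D scheme in Python. It assumes a steady-state problem -D*phi'' + Sigma_a*phi = S on a uniform mesh with zero-flux (Dirichlet) boundaries; the grid size, diffusion coefficient, absorption cross section, and source are made-up numbers, not from any particular problem.

```python
import numpy as np

def solve_tridiagonal(a, b, c, d):
    """Thomas algorithm: forward elimination of the lower diagonal,
    then backward substitution.  a = lower, b = main, c = upper diagonal."""
    n = len(b)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # backward substitution sweep
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Made-up problem data: -D*phi'' + sig_a*phi = S on [0, L], phi = 0 at both ends.
L, n = 100.0, 50                 # slab width and number of interior zones
dx = L / (n + 1)
D, sig_a, S = 1.2, 0.05, 1.0

lower = np.full(n, -D / dx**2)   # coupling to the zone on the left
upper = np.full(n, -D / dx**2)   # coupling to the zone on the right
main  = np.full(n, 2 * D / dx**2 + sig_a)
rhs   = np.full(n, S)

phi = solve_tridiagonal(lower, main, upper, rhs)
print(phi[:5])
```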

Dr. Gregory Greenman
Physicist
 
  • #3
It's slightly more complicated than Morbius says. I recently had to do a 1-D program and it took me about a week.

The way to attack the problem of making a program is to divide it into parts. First determine the composition and geometry of the system you're looking at; this will allow you to set up the matrix for the calculations you need to do. Duderstadt and Hamilton should have a section on numerical solutions - I know Stacey's does, since I'm using his - and those will show you how to set the matrix up. After you have this, you can go on and do the calculations Morbius mentioned above. It is pretty straightforward, but if you're like me and not great at programming, it will take a while.

Incidentally, I'm now working on a 2-D code. I'm teaching myself FORTRAN from the basics.
 
  • #4
theCandyman said:
It's slightly more complicated than Morbius says. I recently had to do a 1-D program and it took me about a week.
Candyman,

Actually it is EXACTLY as easy as I stated. If you can program the matrix setup, and the Gaussian
reduction - that's all there really is to it. You don't need to go looking in books for the solution strategy;
it's really easy to figure it out.

Dr. Gregory Greenman
Physicist
 
  • #5
Morbius said:
Candyman,

Actually it is EXACTLY as easy as I stated. If you can program the matrix setup, and the Gaussian
reduction - that's all there really is to it. You don't need to go looking in books for the solution strategy;
it's really easy to figure it out.

Dr. Gregory Greenman
Physicist

Hi Dr. Gregory,
I actually want to test some of my parallel computing techniques in coding a program to solve the diffusion equations.
I'm a comparatively professional developer. If you have any problems in developing software applications don't hesitate to contact me.
 
  • #6
libertad said:
Hi Dr. Gregory,
I actually want to test some of my parallel computing techniques in coding a program to solve the diffusion equations.
I'm a comparatively professional developer. If you have any problems in developing software applications don't hesitate to contact me.

the real power of parallelization comes with higher dimensional diffusion. in which case, a more efficient way of calculating diffusion is through Monte Carlo integration (since the error scales as 1/sqrt(N) rather than with the dimensionality), which indeed parallelizes trivially. for a 1D case i can't see any reason why you would parallelize.

if you still want to solve it via matrix methods, then look at the ESSL/BLAS/LAPACK libraries and MPI.
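
For instance, here is a sketch of the tridiagonal solve done through SciPy's LAPACK-backed banded solver (the coefficients below are placeholders, not from a real problem; the array layout follows scipy.linalg.solve_banded's diagonal-ordered form):

```python
import numpy as np
from scipy.linalg import solve_banded

# Tridiagonal system from a 1-D diffusion discretization (placeholder values).
n = 50
D, sig_a, dx, S = 1.2, 0.05, 2.0, 1.0

# Diagonal-ordered storage: row 0 = upper diagonal, row 1 = main, row 2 = lower.
ab = np.zeros((3, n))
ab[0, 1:]  = -D / dx**2                  # upper diagonal (first entry unused)
ab[1, :]   =  2 * D / dx**2 + sig_a      # main diagonal
ab[2, :-1] = -D / dx**2                  # lower diagonal (last entry unused)

rhs = np.full(n, S)
phi = solve_banded((1, 1), ab, rhs)      # (1, 1): one lower and one upper band
print(phi[:5])
```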
 
  • #7
libertad said:
Hi Dr. Gregory,
I actually want to test some of my parallel computing techniques in coding a program to solve the diffusion equations.
libertad,

If you want something that runs in parallel [ the Gaussian elimination is not a parallel algorithm ], then
I would suggest some form of Krylov-subspace solver; e.g. like a conjugate gradient method.

Methods such as Krylov-subspace solvers can be made to run in parallel. Much of the work is
doing the matrix multiplies.

However, there will be one or two "dot products" of the solution and/or correction vectors.

These dot products require a "sync point" - basically an operation called an "all reduce".

Each processor computes its own portion of the dot product, a "partial sum" for the part of the
vector that the processor "owns". Then all the processors have to sum all their partial sums to
the "Boss" processor. The Boss processor then has to broadcast the dot product to all the workers.

The "step length" in the Krylov solve is typically the quotient of two dot products.

I'm a comparatively professional developer. If you have any problems in developing software applications don't hesitate to contact me.

That's my profession also. I'm a "computational physicist". I write physics simulation programs
for Lawrence Livermore National Laboratory. All of our work is done in parallel since we have
computers with as many as 131,000 processors:

https://asc.llnl.gov/computing_resources/bluegenel/

LLNL's Center for Advanced Scientific Computing just won one of R&D Magazine's
"Top 100" Awards in 2007 for the "Hypre" solver library:

http://www.rdmag.com/ShowPR.aspx?PUBCODE=014&ACCT=1400000100&ISSUE=0707&RELTYPE=PR&ORIGRELTYPE=FE&PRODCODE=00000000&PRODLETT=G&CommonCount=0

http://www.llnl.gov/str/Oct07/pdfs/10_07.4.pdf

http://www.llnl.gov/CASC/linear_solvers/

I thought you might be interested in reading about Hypre.

Dr. Gregory Greenman
Physicist
 
Last edited by a moderator:
  • #8
Dr. Gregory Greenman

Thanks for your recommendations. As I gathered from your note, you and your team are doing large-scale parallelization across distributed processors.
What I intend to do is parallelization on a local computer with a single processor. I want to divide my computation into 2 or more separate threads which run simultaneously. After doing so, I will do some parallelization on a dual-core processor.
Since I'm a nuclear engineer, I intend to test this on the diffusion equation.
Nevertheless, it's my honor to get to know more about you and your projects in parallelization.

Thanks.
 
  • #9
So what does one typically use as a scattering function? (I'm assuming that we're talking about solving the Boltzmann-Poisson equation for neutron diffusion?)
 
  • #10
quetzalcoatl9 said:
the real power of parallelization comes with higher dimensional diffusion. in which case, a more efficient way of calculating diffusion is through Monte Carlo integration (since the error scales as 1/sqrt(N) rather than with the dimensionality), which indeed parallelizes trivially..
quetzalcoatal,

First - it is the STANDARD DEVIATION that scales as 1 / sqrt(N) - NOT the error itself; and that's for a 0-dimensional quantity
like the k-effective.

The question as to whether it is more efficient to use Monte Carlo or deterministic methods depends
on what you want to calculate - and for a typical reactor problem, deterministic methods beat
Monte Carlo HANDS DOWN!

This was covered in a session at the American Nuclear Society Topical Meeting in Computational
Methods held last April in Monterey, California. The session was hosted by Prof. William Martin,
Chairman of the Nuclear Engineering Program at the University of Michigan.

The question under discussion was a conjecture at a previous meeting by Dr. Kord Smith of
Studsvik Scanpower, Idaho Falls, ID. The most widely used codes for reactor analysis are the Studsvik
suite - SIMULATE, CASMO, ... - authored by Dr. Smith and his colleagues. [ Kord and I both
did our graduate studies together with Professor Alan F. Henry at MIT in the 1970s ]

Kord's conjecture is that it will be about 15-20 years before one can use Monte Carlo as the basis
for reactor reload calculations. The consensus of the session hosted by Professor Martin is that
Kord's conjecture is about right; maybe 10-15 years at best.

The reason that deterministic methods win over Monte Carlo has to do with what I said above about
it depending on what you want to calculate. For something simple like a k-eff calculation; or some
other 0-dimensional quantity - the scaling is 1 / sqrt(N) for the STANDARD DEVIATION!

However, if one is going to do something like a reactor reload calculation, then the quantity of interest
is the peak heat flux or some other limit that the calculation has to show to be within the tech specs
for the reactor.

The peak heat flux occurs at a point in space which is NOT known a priori. Therefore, although the
standard deviation scales with the inverse root of the number of particles, there is a very low probability
that the neutron histories will include the peak heat flux point in numbers large enough to give one
anything approaching an acceptable error.

That's why the 1 / sqrt(N) is NOT the error. That only gives the scaling; NOT the error.
There is another coefficient out front of that inverse root N scaling. The scaling with N is
only half the story.
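
As a toy illustration of that prefactor (made-up numbers, not a transport calculation): estimate a global average and, from the same samples, a quantity confined to a small corner of the sampling space. Both estimates improve like 1/sqrt(N), but the relative statistical error of the "localized" quantity starts out enormously larger.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1_000_000
x = rng.random((N, 3))                        # N sample points in the unit cube

# Global ("0-dimensional") quantity: mean of a smooth function over the cube.
f = np.sin(x).sum(axis=1)
global_rel_err = f.std(ddof=1) / np.sqrt(N) / f.mean()

# Localized quantity: probability of landing in a small corner box (side 0.05).
hits = np.all(x < 0.05, axis=1).astype(float)
point_rel_err = hits.std(ddof=1) / np.sqrt(N) / hits.mean()

print(f"global quantity    : relative std. error ~ {global_rel_err:.1e}")
print(f"localized quantity : relative std. error ~ {point_rel_err:.1e}")
```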

For something like a point power, or a point heat flux; that coefficient is BIG. So even though
you can decrease the error like 1 / sqrt(N) by making N - the number of neutron histories - large, it will take
you an absolutely HUGE number of histories before you make a dent in that large coefficient.

The problem is that the quantity of interest - a peak power or peak heat flux - is a very small part
of the phase space. Zero-dimensional quantities like k-effective which are integrals over the entire
problem space converge quite fast with Monte Carlo.

However, point quantities DO NOT. If one knew the point in question a priori - that is - if the
question was "How many neutrons will be absorbed by a detector at location (x,y,z)?" - then such a
question can be solved readily by Adjoint Monte Carlo.

One uses the Monte Carlo technique to solve the Adjoint Boltzmann Transport Equation. When
you do that - the roles of sources and sinks of neutrons reverse. That is the number of neutrons that
will be absorbed by a detector in the forward problem; is equal to an integral over the whole problem
from "adjoint neutrons" sourced at the detector location. Thus in adjoint space, one again is solving
for an integral over a large volume of the problem.

However, one has to know the location of the detector a priori, because one has to source
"adjoint neutrons" in at that location. In the reactor problem, one doesn't know that a priori.

Therefore, even with the fastest computers; Monte Carlo loses very badly to deterministic methods
even if the deterministic methods don't take advantage of parallelism. If the deterministic methods
are coded to take advantage of what parallelism they can; Monte Carlo loses even more.

I am currently working on a Monte Carlo code; so if anything, my "bias" should be in favor of the
Monte Carlo. However, facts are facts; for a reactor calculation, deterministic methods beat
Monte Carlo hands down, and will continue to do so for many years to come.

Dr. Gregory Greenman
Physicist
 
  • #11
Morbius said:
That's why the 1 / sqrt(N) is NOT the error. That only gives the scaling; NOT the error.
There is another coefficient out front of that inverse root N scaling. The scaling with N is
only half the story.

Dr. Greenman,

OK, so you are saying that for this particular problem there is a large scaling prefactor in front of the 1/sqrt(N), and so this currently makes non-stochastic methods desirable. (However, since you say that you are developing a stochastic code, I take this to mean that the high levels of parallelism available cheaply now allow you to "make a dent" in that prefactor - otherwise you wouldn't bother.)

Unfortunately, we are not always so fortunate...for example, in Quantum Monte Carlo we have no hope of diagonalizing the Hamiltonian, and so we must learn to live with the sampling issues inherent in fermionic systems (due to the antisymmetry of the wavefunction, we run into a "sign problem" whereby calculating averages from the density matrix requires a delicate cancellation of two very large quantities - a situation that is numerically unstable). Ceperley and others have developed methods of nodal sampling (essentially umbrella sampling) that make this somewhat more tractable, but not much.

Take path integral MC, for example: it truly is tantalizing, since we have a small scaling prefactor, the problem factorizes linearly, and the MC algorithm is trivially parallel. Unfortunately, as I said above, the sampling space is so large (and the method of summation is inherently unstable) that unless we can focus around the nodes we are doomed - but if you know where the nodes are (where all of the interesting chemistry happens), then why bother!

Q

P.S. - i have used the blue gene at SDSC extensively, but given the SUs for their slower hardware i found it somewhat unattractive for what i am doing. i understand the purpose for which the BG was supposedly designed (FLOPS/kW and FLOPS/m^2) but (a) neither of these things really help me and (b) power efficiency doesn't seem to be stopping anyone else from building highly parallel systems either, i.e. look at TACC's Ranger (which at 504 TFlops will displace LLNL's BG/L from the #1 spot on the top500). i think this spells the end of future bluegene systems. the opteron-based Cray XT3 is also an excellent system. i have nothing against the BG per se, but PPC is largely a failure (i.e. no 64-bit) and IBM is trying to spin it a different way.

http://www.tacc.utexas.edu/resources/hpcsystems/
 
Last edited by a moderator:
  • #12
quetzalcoatl9 said:
OK, so you are saying that for this particular problem there is a large scaling prefactor in front of the 1/sqrt(N), and so this currently makes non-stochastic methods desirable. (However, since you say that you are developing a stochastic code, I take this to mean that the high levels of parallelism available cheaply now allow you to "make a dent" in that prefactor - otherwise you wouldn't bother.)
quetzalcoatal,

Your assumption is that our reason for developing the Monte Carlo code is to take advantage of
high levels of parallelism. That's incorrect.

The reason for using the Monte Carlo code is that for the applications that we are interested in;
there are certain effects that are more easily implemented in a Monte Carlo code. It's not the
parallelism that gives us an advantage over deterministic methods - it's that certain effects are
more difficult to implement in the deterministic code.

Additionally, for our application, unlike the reactor problem, the important constraint is not a local one.

Dr. Gregory Greenman
Physicist
 
  • #13
quetzalcoatl9 said:
i.e. look at TACC's Ranger (which at 504 TFlops will displace LLNL's BG/L from the #1 spot on the top500). i think this spells the end of future bluegene systems.
quetzalcoatl,

The next occupant of the #1 spot on the Top500 is another BlueGene/L class machine currently in
the pipeline called "PetaFlop".

"PetaFlop" will be a 1,000 Teraflop machine - i.e. 1 Petaflop.
i understand the purpose for which the BG was supposedly designed (FLOPS/kW and FLOPS/m^2)
That wasn't the purpose of BG/L at all. Power consumption and floor space are NOT a concern.

Blue Gene/L takes up about 1/3rd of one of the two big computer rooms in LLNL's Terascale Simulation
Facility - the TSF. There was no need to scrimp on floor space; LLNL has PLENTY of room for
computers.

Power is not a factor either. In the other big computer room that neighbors the one that Blue Gene/L's
room in the TSF is another computer called ASC Purple. ASC Purple is currently #6 on the Top 500
list; down from when it debuted at #3. ASC Purple consumes about 15 Megawatts of power.

Power consumption is not a factor either.

Dr. Gregory Greenman
Physicist

 
Last edited:
  • #14
Morbius said:
quetzalcoatal,
The reason for using the Monte Carlo code is that for the applications that we are interested in;
there are certain effects that are more easily implemented in a Monte Carlo code. It's not the
parallelism that gives us an advantage over deterministic methods - it's that certain effects are
more difficult to implement in the deterministic code.

Morbius,

you and I have something in common then: I have also implemented a Monte Carlo code in order to avoid doing many-body molecular dynamics with a potential energy function whose derivative is numerically ill-behaved! (Although, I do not get away completely unscathed, since the systems that I am studying via this method exhibit long correlation times, which means lots of "wasted" CPU time - but at least the many-body problem is now tractable!)

Q
 
  • #15
Morbius said:
That wasn't the purpose of BG/L at all. Power consumption and floor space are NOT a concern.

Blue Gene/L takes up about 1/3rd of one of the two big computer rooms in LLNL's Terascale Simulation
Facility - the TSF. There was no need to scrimp on floor space; LLNL has PLENTY of room for
computers.

Power is not a factor either. In the other big computer room that neighbors the one that Blue Gene/L's
room in the TSF is another computer called ASC Purple. ASC Purple is currently #6 on the Top 500
list; down from when it debuted at #3. ASC Purple consumes about 15 Megawatts of power.

Power consumption is not a factor either.

I kindly disagree. If you read the BG white papers those are EXACTLY the reasons that they have used to achieve the high parallelism.

At 440 Mhz with 512 MB memory (BG/L), the motive was certainly NOT performance: it was to achieve a low power consumption footprint thus allowing higher levels of parallelism.

From IBM's architectural white-paper:

The reasons why we chose an SoC design point included a high level of integration, low power, and low design cost. We chose a processor with modest performance because of the clear performance/power advantage of such a core. Low-power design is the key enabler to the Blue Gene family.

Please see the following for an overview of the blue gene architecture:

http://www.research.ibm.com/journal/rd/492/gara.html

Q
P.S. - note that a BG node consumes only 20 W; that is not by accident.
 
Last edited by a moderator:
  • #16
Morbius said:
The next occupant of the #1 spot on the Top500 is another BlueGene/L class machine currently in
the pipeline called "PetaFlop".

the first BG/P is not expected to be operational until Spring 2008 (and only 111 TFlops at that, petascale systems will follow later) - Ranger will be up and in use sometime in December, making it #1 at least for a little while :)

Q

P.S. - I do not work for TACC, I only like their systems :)
 
  • #17
quetzalcoatl9 said:
the real power of parallelization comes with higher dimensional diffusion. in which case, a more efficient way of calculating diffusion is through Monte Carlo integration (since the error scales as 1/sqrt(N) rather than with the dimensionality), which indeed parallelizes trivially..
quetzalcoatal,

Monte Carlo parallelizes "trivially" only when one doesn't think about it too much.

It would be trivial to parallelize Monte Carlo if one can devote a processor to tracking the entire
lifecycle of a simulation particle - from birth to death. Unfortunately, that is seldom the case.

Since a simulation particle can theoretically go anywhere in the problem domain, in order for a
processor to track a simulation particle from birth to death; it must have a complete description of
the entire problem geometry. In a machine like Blue Gene/L which has 131,000 processors; full
utilization of the machine would require 131,000 duplicate representations to reside in memory;
since each processor must have the complete geometry.

This would be wasteful of computer memory. No - large Monte Carlo simulations are run on large
parallel machines using a technique called "domain decomposition". In essence, each processor
"owns" only a piece of the entire problem. As simulation particles reach the boundary of the domain
on a given processor, it transits to another domain on another processor, and the simulation particle
must be "handed off" to the processor that owns the neighboring domain.

Once processors have to communicate particles back and forth - then one has the same problems
that one has with a deterministic system. Once the "cradle to grave on a single processor" paradigm
is broken, one loses the easy parallelism.

In fact the parallelism problems are WORSE for Monte Carlo. In a deterministic code, if each
processor has equivalent sized pieces of the problem; then the processors have roughly equivalent
work. The amount of work a given processor has to do depends on how much of the problem it
"owns"; and the amount of work does not depend on the solution.

Therefore, if the domains are parcelled out roughly equally to all processors; then all the processors
should arrive at the "sync points" where they need to communicate at roughly equivalent times. This
makes a "synchronous communication" model viable. That is, all processors arrive at the communication
point at roughly the same time. Therefore, they wait until all processors have arrived, and can then
exchange the data that is needed across processor boundaries. Additionally, the amount of data
being shared is symmetric. If processor "i" borders processor "j" at "n" faces; then "i" needs to send
"j" "n" values for each variable being shared; and vice versa - "j" needs to send "i" the equivalent
amount. Each processor needs to know 1 value per face per variable in a 1st order approximation.
Each processor would have to go 2 values into the adjacent domain for a 2nd order approximation...
Regardless of order, there's always a symmetric exchange of data.

That isn't the case in Monte Carlo. In Monte Carlo, there may be a very active region bordering a
very inactive region. The amount of data that needs to flow is not symmetric. Because processors
don't have an equal amount of work to do - given varying numbers of simulation particles - one
can't assume that they will all arrive at a "sync point" nearly simultaneously.

Therefore, one can't use a "synchronous" communication paradigm unless one is willing to have
processors waiting a large portion of the time. In this case, an "asynchronous" communication
model is used. Processors exchange data, not in a determinate way; but in an indeterminate way.
Processors "buffer" particles as needed and neighoring processors flush these buffers when more
particles are needed to process.

The asynchronous communication model also opens the door to potential deadlock conditions if one
is not careful.

Additionally, unlike the deterministic case, where processors have roughly equal amounts of work
to do since they are solving the same equations on roughly equal numbers of zones; Monte Carlo
simulations inherently are not "load balanced". The workload among the processors may not be
shared equally.

One could even out the workload and "load balance" by redistributing the regions of space that a
given processor "owns". As regions of the problem are transferred between processors in order to
balance the load, so too the simulation particles that currently occupy those regions must also be
transferred. This extensive communication may offset much of the gains to be had from balancing
the load. Therefore, one should only load balance when it will yield a net improvement in computer
efficiency. Some of my colleagues have figured out how to do that, and have presented papers
on the subject. For example, one that was presented in Avignon, France last year is available from
the Dept. of Energy at:

http://www.osti.gov/bridge/purl.cover.jsp?purl=/878214-gj95NC/

One property that one wishes a parallel simulation to have, is that the answers not depend on the
number of processors on which the simulation was run. If processors "owned" the random number
seeds, then the history of a given particle would depend on the set of processors that it visited and
when it did so. Clearly, this would impart a dependence of the simulation results on the number of
processors involved. In order to sidestep this issue, it is useful for each simulation particle to carry
its own random number seed. This gives each simulation particle its own pseudo-random number
sequence, and it won't matter which processor calculates the next number in the sequence.

However, at the beginning of the problem, the initialization of the random number seeds has to be
coordinated among the processors in order to avoid duplication.
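
A minimal sketch of the "each particle carries its own random-number state" idea (the particle structure and seeding scheme here are illustrative, not taken from any production code):

```python
import numpy as np
from dataclasses import dataclass, field

MASTER_SEED = 12345            # coordinated once, at problem setup

@dataclass
class Particle:
    particle_id: int
    rng: np.random.Generator = field(init=False)

    def __post_init__(self):
        # Seed from (master seed, particle id): the particle's random sequence
        # is then the same no matter which processor happens to advance it.
        self.rng = np.random.default_rng([MASTER_SEED, self.particle_id])

    def next_path_length(self, sigma_t: float) -> float:
        """Sample a distance to the next collision (exponential free flight)."""
        return -np.log(self.rng.random()) / sigma_t

# The history of particle 7 is reproducible regardless of processor count:
p = Particle(particle_id=7)
print([round(p.next_path_length(sigma_t=0.5), 4) for _ in range(3)])
```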

All in all, I have coded both deterministic methods and Monte Carlo methods. The parallelism
of the Monte Carlo methods is at least one or two orders of magnitude more complex than that of
the deterministic methods.

Dr. Gregory Greenman
Physicist
 
Last edited:
  • #18
quetzalcoatl9 said:
the first BG/P is not expected to be operational until Spring 2008 (and only 111 TFlops at that, petascale systems will follow later)
quetzalcoatl,

That's for the UNCLASSIFIED systems. You don't know what the CLASSIFIED ones are doing.

Dr. Gregory Greenman
Physicist
 
  • #19
Morbius said:
This would be wasteful of computer memory. No - large Monte Carlo simulations are run on large
parallel machines using a technique called "domain decomposition". In essence, each processor
"owns" only a piece of the entire problem. As simulation particles reach the boundary of the domain
on a given processor, it transits to another domain on another processor, and the simulation particle
must be "handed off" to the processor that owns the neighboring domain.

Morbius,

Yes, I am familiar with domain decomposition methods - in fact, I utilize decompositions in solving for the many-body polarization states (iterative Gauss-Seidel solver with preconditioning) of my MC code.

It sounds like you are focusing on a very particular type of problem - neutron diffusion in some fissile material. However, keep in mind that if one was, for example, solving for a statistical mechanical observable, then doing the trivial parallelization where each processor handles a separate (independent) sample is appropriate - in fact, desirable, since the goal is to sample as many possible states from the ensemble and this method scales linearly; one could hardly do better for this type of problem! The same is true in path integral MC since one wishes to calculate the density matrix accurately - far from being wasteful, starting independent configurations on many processors separately is the best way to do this!

Molecular dynamics is more of the case that you describe, since we are propagating the entire system along a trajectory in phase space - in which case the forces/potential are decomposed by domain..there are very effective libraries for doing so (CHARM++ being one) that can take advantage of the various interconnect topologies.

Keep in mind that "Monte Carlo" does not refer to neutron diffusion - it is nothing more than an efficient way of numerically doing integrals!

Regards,

Q
 
  • #20
Morbius said:
quetzalcoatl,

That's for the UNCLASSIFIED systems. You don't know what the CLASSIFIED ones are doing.

Dr. Gregory Greenman
Physicist

Morbius,

I do not know anything about classified systems - I would never have guessed that such large systems would even be classified, since the current top #1 BG at LLNL is certainly not classified (actually, #1, #3 and #6, all NNSA are online at top500.org).

Q
 
  • #21
quetzalcoatl9 said:
I kindly disagree. If you read the BG white papers those are EXACTLY the reasons that they have used to achieve the high parallelism.

At 440 Mhz with 512 MB memory (BG/L), the motive was certainly NOT performance:
quetzalcoatl,

You sure are batting ZERO today. The issue DEFINITELY WAS performance.

It was the US Congress that commissioned BlueGene/L; and the stated intent was to recapture
the #1 slot of the Top500 list for the USA; because at the time the #1 slot had been held by Japan's
Earth Simulator.

Evidently you don't understand that it didn't matter HOW IBM achieved the #1 supercomputer
designation. The charge from the US Congress was that they recapture the #1 title for the USA.

For someone to say that performance wasn't an issue shows a profound lack of understanding.
Please see the following for an overview of the blue gene architecture:
You think I don't already know this! I know even more about this than I
am allowed to tell. Please don't patronize me by implying that I don't know
about the Blue Gene architecture.
 
  • #22
quetzalcoatl9 said:
Morbius,

I do not know anything about classified systems - I would never have guessed that such large systems would even be classified, since the current top #1 BG at LLNL is certainly not classified (actually, #1, #3 and #6, all NNSA are online at top500.org).
Quetzalcoatl,

WRONG again. The #1 BlueGene/L at LLNL is a CLASSIFIED system.

Its existence has been disclosed; but it is NOT online on any open network.

All the machines you listed were CLASSIFIED during the procurement and build process.

Part of the workload for these machines will be calculations in support of the Dept. of Energy/
National Nuclear Security Administration Stockpile Stewardship missions.

Therefore, because of the nature of some of the work that will be run on these machines; the
details of what is being built and how is not revealed.

When there is a machine sitting inside of a secure computer room, inside a secure building, inside
a secure laboratory; it does no harm to announce the existence of the machine. Nobody can exploit
that information for nefarious purposes anymore; the machine is secure.

Dr. Gregory Greenman
Physicist
 
  • #23
Morbius said:
You think I don't already know this! I know even more about this than I
am allowed to tell. Please don't patronize me by implying that I don't know
about the Blue Gene architecture.

That is up to you: you obviously did not know that the goal was increased ratio of performance per watt and performance per m^2, as I pointed out (is IBM lying in their white paper which I posted the link to?)...that is how/why the #1 slot was attained since you can stuff 1024 nodes in a single, standard rack...big deal. the goal was NOT to increase the performance per node, as you had originally implied.

It was not my intent to make this nasty, but that is your tone: how could you not know about the performance/power ratio?? Did you think that 440 Mhz PPC were chosen for their individual performance?

YOU called my allegations of performance/power ratio wrong, but in fact I have cited that this is the correct design motive! I have posted an entire document about this!

Congress' motives are irrelevant to this discussion.

Q
 
  • #24
quetzalcoatl9 said:
It sounds like you are focusing on a very particular type of problem - neutron diffusion in some fissile material.
quetzalcoatl,

NOPE - we're not focusing on a very particular type of problem, as far as the code is concerned.
The code is a multi-physics, multi-use code.

I focused on the very particular type of problem - the neutron transport in a reactor - as an example
of where deterministic methods best Monte Carlo methods hands down, and will continue to do
so for a number of years. That was merely for rebutting the contention that Monte Carlo would
be more efficient than deterministic methods for reactor calculations.
However, keep in mind that if one was, for example, solving for a statistical mechanical observable, then doing the trivial parallelization where each processor handles a separate (independent) sample is appropriate - in fact, desirable, since the goal is to sample as many possible states from the ensemble and this method scales linearly; one could hardly do better for this type of problem!

You sure didn't understand my previous post as to why you CAN'T do that!

In order to realize this trivial parallelization, you need a single processor to follow a simulation
particle from birth to death. That means that the processor has to know the specifications of the
entire problem - since the simulation particle can go anywhere. However, we are dealing with very
LARGE simulations. Even the US Gov't can only afford enough memory for ONE copy
of the problem description to be in computer memory.

That single copy of the problem is divided among all the CPUs assigned to the task. Since no one
CPU has the entire description of the problem; then one CPU can not follow the particle from birth to
death. The particle has to jump from processor to processor to processor in the course of its
calculational lifetime.

It is the synchronizing of all these jumps from processor to processor - as well as things like
population control, tallies, and so on, ALL of which need multiple processors coordinating - that breaks the trivial parallelism.

Therefore, one never has the trivial parallelism, except for brief instants of time. Yes - there are
brief periods when a single processor tracks a single particle. But that doesn't last very long before
one of these events that requires coordination among the processors happens.
Keep in mind that "Monte Carlo" does not refer to neutron diffusion - it is nothing more than an efficient way of numerically doing integrals!
The genesis of "Monte Carlo" WAS for doing neutron transport [ not diffusion; which is a low-order approximation to transport ].
The "Monte Carlo" method was invented for doing neutron transport. Later, it was realized that one could also do integrals.

Consider the history of the Monte Carlo method:
http://en.wikipedia.org/wiki/Monte_Carlo_method

"Monte Carlo methods were originally practiced under more generic names such as "statistical sampling". The
"Monte Carlo" designation, popularized by early pioneers in the field (including Stanislaw Marcin Ulam,
Enrico Fermi, John von Neumann and Nicholas Metropolis), is a reference (Nicholas Metropolis noted that
Stanislaw Marcin Ulam had an uncle who would borrow money from relatives because he "just had to go to
Monte Carlo.") to the famous casino in Monaco. Its use of randomness and the repetitive nature of the process
are analogous to the activities conducted at a casino. Stanislaw Marcin Ulam tells in his autobiography
Adventures of a Mathematician that the method was named in honor of his uncle, who was a gambler, at the
suggestion of Metropolis."


In case you don't recognize the cast of characters; Ulam, Fermi, von Neumann, Metropolis; they were all
pioneers in the field of neutron and radiation transport at Los Alamos National Laboratory where the technique
was invented. Monte Carlo started out as a way of doing nuclear particle simulations. It's use as a numerical
integration technique was realized later.

Dr. Gregory Greenman
Physicist
 
Last edited:
  • #25
Morbius said:
You sure didn't understand my previous post as to why you CAN'T do that!

In order to realize this trivial parallelization, you need a single processor to follow a simulation
particle from birth to death. That means that the processor has to know the specifications of the
entire problem - since the simulation particle can go anywhere. However, we are dealing with very
LARGE simulations. Even the US Gov't can only afford enough memory for ONE copy
of the problem description to be in computer memory.

I think that if you stop and listen to what I am saying, you will realize that what I am saying is correct.

Consider, for example, a periodic system of N gas molecules contained in a specific volume at a specific temperature. Let's say we have 1000 processors available on some system. If we have 1000 uncorrelated starting configurations, we could start up 1000 separate Monte Carlo simulations (with individual RNG seeds) in order to calculate some observable (let's say, for example, the pressure of the gas). Each processor calculates its own average pressure. At the end of a certain number of steps, those 1000 averages (and the configurations too, if we wish) are sent back to the head node for the final averaging.

This is what we call trivial parallelization, and it is VERY useful for studying physical systems.
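
A sketch of that kind of trivial parallelization (the "observable" below is a toy stand-in for a pressure estimate, and multiprocessing stands in for launching jobs on separate processors of a cluster):

```python
import numpy as np
from multiprocessing import Pool

def run_walker(seed, n_steps=100_000):
    """One independent Monte Carlo run with its own RNG seed; returns its
    own average of a toy 'observable' (a stand-in for the pressure)."""
    rng = np.random.default_rng(seed)
    samples = rng.normal(loc=1.0, scale=0.3, size=n_steps)   # toy sampling
    return samples.mean()

if __name__ == "__main__":
    n_workers = 8                        # stand-in for the 1000 processors
    seeds = range(n_workers)             # uncorrelated starting seeds
    with Pool(n_workers) as pool:
        per_worker_averages = pool.map(run_walker, seeds)
    # The "head node" does the final averaging over the independent runs.
    print("final estimate:", np.mean(per_worker_averages))
```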
 
  • #26
quetzalcoatl9 said:
That is up to you: you obviously did not know that the goal was increased ratio of performance per watt and performance per m^2, as I pointed out (is IBM lying in their white paper which I posted the link to?)...that is how/why the #1 slot was attained since you can stuff 1024 nodes in a single, standard rack...big deal. the goal was NOT to increase the performance per node, as you had originally implied.
quetzalcoatl,

You assumed WRONG again. As an LLNL code developer, my colleagues and I were involved
in the specifications of BlueGene/L when it was merely ideas on pieces of paper. IBM determined
how they wanted to design the computer in terms of how many square feet, or its power consumption...

The main driver for the procurement of BlueGene/L was performance. Now IBM was in
competition with other computer vendors for the BlueGene/L contract. As part of their
strategy, I would bet that IBM attempted to meet the goals of the program with as little power
consumption, or as little footprint as possible. Such consideration could come into play to
break a tie when multiple vendors bid systems of comparable performance.

But the whole mandate in building BlueGene/L was from first to last - PERFORMANCE.

That IBM did so with so little power consumption or floor space is a credit to IBM. However,
BlueGene/L could have physically been about 3X larger. I've been in the BlueGene/L computer
room and BlueGene/L looks rather small in there.

The goal was performance of the entire machine. The goal was not to maximize the per node
performance as you ERRONEOUSLY attribute to me above. BlueGene/L could have been
much bigger if need be - the TSF facility could easily handle a much bigger machine than BlueGene/L;
so there were no floor space or power consumption dictates from the buyer. [ As stated above,
IBM may have had some self-imposed constraints to make their design look better in what was
after all a competition]

You can't READ! Where did I imply that the goal was to increase performance per node?

All I said was that the US Congress said that the TOTAL power of the machine had to be
such that it reclaimed the number one slot. I NEVER said anything about the power per node.
It was not my intent to make this nasty, but that is your tone: how could you not know about the performance/power ratio?? Did you think that 440 Mhz PPC were chosen for their individual performance?

HELL YES - I knew that. At the time, my colleagues and I advocated fewer but more powerful nodes.

What you also are not considering is that the 440 MHz CPUs are also NOT the "long pole in the tent".

The long pole in the tent in terms of these systems is the switching network. There were other
processors available, but they wouldn't work with the particular network. Additionally, you can
get more of the cheaper, less capable processors. You seem to be trying to optimize a single
node. The goal was to optimize the performance of the entire machine.

YOU called my allegations of performance/power ratio wrong, but in fact I have cited that this is the correct design motive! I have posted an entire document about this!

You posted a single document from what was YEARS of discussion between LLNL and IBM.
You singled out one element of a VASTLY more complex design iteration. You are singling
out one issue and portraying that as THE issue. It was not.

There were multiple issues. What CPU to use. What memory to use. What network to use...
Will this CPU work with that network...

The goal was to get a machine that had the highest performance. When I say performance, I mean
performance for the ENTIRE machine working as a unit.

Optimizing the performance of the entire machine doesn't just mean getting the fastest processor.

There's another constraint that comes in here and that is cost. Suppose you can get processors
with 75% of the performance of the highest performing processor for 50% of the cost. You could
then get twice as many of these lower performing processors for the same cost. Because each
processor was 75% of the highest performer - 2X as many will give you a machine with a total
processing power that is 50% higher than if you made the machine with all highest performers.

That's EXACTLY what happened with BlueGene/L. There were more powerful processors. However,
in a combination of processor power, electrical consumption, cooling, switching capacity... the
solution that optimizes the performance of the entire machine is not one that optimizes the power
of an individual CPU.

Now as a code developer, I would rather have more powerful CPUs; because it makes my job easier.

The machine has a higher performance in the aggregate; but that means that the application codes
have to have a greater degree of parallelism in order to exploit the power of the machine. Fewer,
more powerful processors would have made my job easier - but that's not how the design turned out.
Congress' motives are irrelevant to this discussion.

NOPE - Congress's motives had EVERYTHING to do with how this machine turned out.

Congress was paying for it - and it set the goals. The goal was that the total performance of this
machine had to be #1 in the world.

Congress then left it up to the DOE, the Labs, the vendors... to devise how that goal was to be met.

Again, the goal of having the fastest machine within a finite budget may lead to a design in which the
individual CPUs are not the most powerful available. You have to look at this from a wider context;
not just how powerful a single node was. That wasn't a constraint. The constraint was that the entire
machine had to be as powerful as one could make it for the money.

Dr. Gregory Greenman
Physicist
 
Last edited:
  • #27
Morbius said:
But the whole mandate in building BlueGene/L was from first to last - PERFORMANCE.

i recognize this as obvious given that this is still the #1 machine...my point was that the WAY this was accomplished was through power/cost, etc. which you NOW also mention.

My point is this: consider this from the perspective of a teragrid user with a fixed allocation of SUs. I have access to several different highly parallel systems, including a BG/L. I could run my code on 2000 cores of a BG/L (440 Mhz, etc.) or on 2000 cores of very fast Opterons (2.66 GHz, etc.) - both systems charge me the same amount. Which one would you choose?? Exactly - this is why the teragrid BG/L at SDSC is averaging 20% utilization as opposed to nearly every other teragrid resource at 100% utilization.

The end result of the BG/L design, of which I'm glad we both agree that power consumption played a significant part, is a highly parallel system. By "highly parallel" I mean many, many thousands of processors. The big result was that there are many small processors available.

My point was that this high parallelism was achieved by increased efficiency (power and real estate) - which, for some reason, you jumped all over me about. I never said that this was ALL it was about, only that it was a critical design component - one which, as an academic user, I do not care about. I really do not need any more than "only" a few thousand processors at a time.

You say that you were involved in the design of the BG/L system; I will take your word for it and believe you. However, I'm a bit perplexed that at the same time you question that a statistical mechanical ensemble can be sampled INDEPENDENTLY via MC (remember, I told you up front that I am NOT doing anything with neutrons - global limitations in YOUR system are not MINE) when I have been doing that for years - in fact, I am running a highly parallel statistical mechanics simulation via MC on many processors in parallel as we speak, yet you say that's impossible! What am I to think here?

Q
 
Last edited:
  • #28
quetzalcoatl9 said:
i recognize this as obvious given that this is still the #1 machine...my point was that the WAY this was accomplished was through power/cost, etc. which you NOW also mention.
Q,

I mentioned that power or cost is just ONE parameter in a highly multiparameter space.

BlueGene/L's performance isn't due to optimizing power, or optimizing cost, or optimizing power and
cost. It was a very complicated multi-parameter search through "option space".

You are trying to make it much simpler than it actually was!
My point is this: consider this from the perspective of a teragrid user with a fixed allocation of SUs.
What's an "SU"?
I have access to several different highly parallel systems, including a BG/L. I could run my code on 2000 cores of a BG/L (440 Mhz, etc.) or on 2000 cores of very fast Opterons (2.66 GHz, etc.) - both systems charge me the same amount.

Well there's the stupid part - why charge the same for systems with different performance?
Which one would you choose?? Exactly - this is why the teragrid BG/L at SDSC is averaging 20% utilization as opposed to nearly every other teragrid resource at 100% utilization.

That's a function of someone's stupid pricing scheme. What gets me is that you somehow
think that has to do with the design of the machine. It's a function of the pricing system.
The end result of the BG/L design, of which I'm glad we both agree that power consumption played a significant part, is a highly parallel system. By "highly parallel" I mean many, many thousands of processors. The big result was that there are many small processors available.

My point was that this high parallelism was achieved by increased efficiency (power and real estate) - which, for some reason, you jumped all over me about.

It was NOT achieved by power and real estate efficiency which is why I jumped on you for that.

Those were MINOR considerations, if ANYTHING.

Why do you say that real estate was a primary design objective, when I tell you that BlueGene sits
in a space that can hold a machine 3X as big? Why do you say that power consumption was a big
deal, when BlueGene's neighbor consumes 15 Megawatts?

As far as LLNL was concerned; those were not "show stoppers". The major issue was performance
from the "get go". I'm suggesting that IBM may have thought the machine more attractive if it used
less real estate and less power. However, IBM built Purple next door, and it takes more real estate
and more power for a machine that is less powerful in the rating.

BTW - the ranking we are talking about is based on running Linpack. When it comes to solving REAL
problems; Purple kicks BlueGene's butt. Purple runs our codes better than BlueGene/L runs them.
BlueGene/L just beats Purple in running Linpack.

If you ask me whether I want to run on Purple or BlueGene/L; the answer is Purple.
I never said that this was ALL it was about, only that it was a critical design component - one which, as an academic user, I do not care about. I really do not need any more than "only" a few thousand processors at a time.

For our problems a few thousand processors is just the entry fee.

We can't run the simulations that we really want to run on either Purple or BlueGene/L.

What is really needed for our problems are machines in the 10s to 100 Petaflop class; or higher.

You say that you were involved in the design of the BG/L system; I will take your word for it and believe you. However, I'm a bit perplexed that at the same time you question that a statistical mechanical ensemble can be sampled INDEPENDENTLY via MC (remember, I told you up front that I am NOT doing anything with neutrons - global limitations in YOUR system are not MINE) when I have been doing that for years - in fact, I am running a highly parallel statistical mechanics simulation via MC on many processors in parallel as we speak, yet you say that's impossible! What am I to think here?

I'm using the term "Monte Carlo" the way it is used in Nuclear Engineering!

You have NOTHING to be perplexed about. I explained at length above that the problem with
the parallelism concerns transporting neutron simulation particles from one processor to another.
THAT is the reason for the lack of parallelism.

Now you are claiming that I'm saying you can't do independent samplings from an ensemble.

I just got done explaining this in a neutron transport context dummy; and then you apply my
statement to your statistical mechanics problem. Then you say you are "perplexed"!

You're not "perplexed" - you're STUPID.

This is a Nuclear Engineering Forum - and when you say "Monte Carlo" in a Nuclear Engineering
context - then it means "neutron transport"!

You are using MC as meaning a method of integration.

That's NOT the sense of the term that I'm using. I told you it was impossible to run a Monte Carlo
calculation of the type that Nuclear Engineers run.

You mean this whole discussion is due to the fact that you evidently didn't know you are in
a Nuclear Engineering forum?

What you are to think is that you are pretty stupid for not using the term "Monte Carlo"
in the way a Nuclear Engineer uses the term "Monte Carlo" - in a Nuclear Engineering Forum!

If this were a forum about statistical mechanics - then I can see where you might have a case.

However, this is a Nuclear Engineering Forum, and in Nuclear Engineering the term "Monte Carlo"
means something in particular. It means a method of solving the neutron transport equation.

It doesn't mean some general way of doing an integral!

It's like the term "reactor". Chemical engineers also use the term "reactor". But in THIS
forum; a reactor is a "nuclear reactor".

Suppose someone said that there were no free neutrons running around a "reactor", and everyone
jumped on him for making such an incorrect statement. Then the person says, "Oh, I meant a
chemical reactor - where chemical reactions take place. There's no great population of free neutrons
in there."

What would you think? In this forum called "Nuclear Engineering" a "reactor" means a "nuclear reactor".

Likewise, in a Nuclear Engineering forum; "Monte Carlo" is a way of solving the neutron transport equation.

Dr. Gregory Greenman
Physicist
 
Last edited:
  • #29
Morbius said:
However, this is a Nuclear Engineering Forum, and in Nuclear Engineering the term "Monte Carlo"
means something in particular. It means a method of solving the neutron transport equation.

It doesn't mean some general way of doing an integral!

I think that it is you who is being the STUPID one here. Open a book on Monte Carlo methods and you will see that it is for solving INTEGRALS! Guess what? That equation you have been solving all of these years via Monte Carlo is an INTEGRAL equation - did you realize that? We solve statmech problems because we can write the expectation value as an INTEGRAL. Metropolis et al. did the same kinds of things that I AM doing, solving INTEGRAL equations for the equation of state of a gas in their landmark paper. You say that the ability of MC to solve integrals "came later" - you truly do not understand the process; do you think that the founders would actually be as STUPID as you are?

Q

P.S. Who actually signs themselves "Dr. X, Physicist" anyway, except an extremely pretentious loser? You probably think that you can bully others into accepting your retarded views, from what I have seen of your interactions with others in this forum.
 
  • #30
Morbius,

I will go slow here, so that even a loud-mouth like you can understand it; I will hold your hand and together you will get through this.

Please read the following seminal paper:

JCP 21(6):1087-1092 (1953)

Equation of State Calculations by Fast Computing Machines

Nicholas Metropolis, Arianna W. Rosenbluth, Marshall N. Rosenbluth, Augusta H. Teller, and Edward Teller

(the one to which you previously referred).

when you do read it, you will see the following clear statement in the conclusion section:

The method of Monte Carlo integrations over configuration space seems to be a feasible approach to statistical mechanical problems which are yet not analytically soluble.

See? That wasn't so bad right? You see, the seminal work makes use of the stochastic method, via the Metropolis acceptance function, of sampling an ensemble of configurations. This is how everyone else in the world knows the term "Monte Carlo".

So I have Metropolis and Teller on my side here - are you so arrogant as to say they are wrong too?

Q
 
  • #31
libertad said:
Hi there,
I'm looking for an algorithm that describes the numerical solution of the diffusion equation (1D or 2D).
I've taken a look at some textbooks, such as the Hamilton and Henry ones, but I didn't find a simple method.
Does anybody know of one? Where can I find what I'm looking for?
Thanks.
This thread has derailed nicely... :biggrin:

Perhaps the simplest answer to the OP would be to look up finite difference and finite element schemes, for the 1D and 2D problems, respectively.

(This is what Morbius explained in the original reply. However, I've never heard the term "zone" w.r.t. FEMs.)
 
  • #32
It was NOT achieved by power and real estate efficiency which is why I jumped on you for that.

Those were MINOR considerations, if ANYTHING.

No. Power may not have been the explicit concern of the user, but power efficiency is tightly coupled to the realizable performance of any large machine today; it's not just a nice-to-have feature any more, obviated by access to dedicated megawatts, acres of floor space, tons of cooling equipment, or even money to burn. Two reasons: 1) HEAT limitations at the CPU and board-assembly level, brought on by the use of large clusters of ever-increasing IC fabrication densities, and 2) local delivery of electrical power (thousands of amps switched in nanoseconds) to the same. If one pops the hatch on the large machines, you find they are substantially power distribution and heat removal by volume, with some computing thrown in. Modern computing hardware has reached a point where it does not scale further without attention to power efficiency.
 
Last edited:
  • #33
quetzalcoatl9 said:
I think that it is you who is being the STUPID one here. Open a book on Monte Carlo methods and you will see that it is for solving INTEGRALS! Guess what? That equation you have been solving all of these years via Monte Carlo is an INTEGRAL equation - did you realize that?
Q,

Actually, the equation being solved - the Boltzmann Transport Equation - is an
integro-differential equation. It has BOTH integrals [ due to scattering ] and derivatives
[ due to the transport terms ].
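
For reference, here is a standard one-speed, steady-state form (the notation is generic, chosen just for illustration): the streaming term on the left carries the derivatives, and the scattering term on the right carries the integral.

$$\hat{\Omega}\cdot\nabla\psi(\mathbf{r},\hat{\Omega}) + \Sigma_t(\mathbf{r})\,\psi(\mathbf{r},\hat{\Omega}) = \int_{4\pi}\Sigma_s(\mathbf{r},\hat{\Omega}'\cdot\hat{\Omega})\,\psi(\mathbf{r},\hat{\Omega}')\,d\Omega' + S(\mathbf{r},\hat{\Omega})$$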

We solve statmech problems because we can write the expectation value as an INTEGRAL. Metropolis et al. did the same kinds of things that I AM doing, solving INTEGRAL equations for the equation of state of a gas in their landmark paper. You say that the ability of MC to solve integrals "came later" - you truly do not understand the process; do you think that the founders would actually be as STUPID as you are?

I don't have a quibble with your use of MC methods for solving your statistical mechanics
problems. However, I VERY CLEARLY explained to you where the parallelism is broken.

The parallelism is broken in the TRANSPORT part of the Monte Carlo - that's the part that
has DERIVATIVES.

Your problem is only an integral problem - therefore you don't have the same coupling as the
transport equation does.

When I clearly explained where the parallelism broke down; due to the transport; you applied
my statement of the limitation to your problem when you stated:
However, I'm a bit perplexed that at the same time you question that a statistical mechanical ensemble
can be sampled INDEPENDENTLY via MC...when I have been doing that for years - in fact, I am
running a highly parallel statistical mechanics simulation via MC on many processors in parallel as we
speak, yet you say that's impossible! What am I to think here?

In the above QUOTE, you stated that I claimed what you had been doing was impossible:
"...yet you say that's impossible! What am I to think here?"

Show me WHERE I stated that sampling integrals by Monte Carlo was impossible!

I was VERY CLEAR in my description above that what breaks the parallelism is the
transport of particles among processors. That is due to the TRANSPORT part of solving
the integro-differential part of the Boltzmann Transport Equation.

Where did I say that one can't sample statmech integrals in parallel?

What you did was a classic "strawman" - you mischaracterized what I said - and then proceeded
to dispute it.

THAT'S the part that I find stupid.

BTW - the processors in Blue Gene/L are PowerPC 440s - but "440" is a model number.

It doesn't mean 440 MHz, as you ERRONEOUSLY claimed above in your statement:
"I could run my code on 2000 cores of a BG/L (440 Mhz, etc.)"

The PowerPC 440 processors in Blue Gene/L do NOT run at 440 MHz.

quetzalcoatl9 said:
You probably think that you can bully others into accepting your retarded views, from what I have seen of your interactions with others in this forum.

I'm just challenging ERRORS! You've promulgated an entire string of errors, starting with
claiming that Monte Carlo will beat deterministic methods for solving a reactor problem, which was
the original poster's contention...all the way up to mischaracterizing what I said for your "strawman"
argument.

The original poster asked a question concerning nuclear reactors and the neutron transport therein,
which you answered INCORRECTLY. When called on that you "morphed" the discussion into
how Monte Carlo parallelizes for statmech problems. However, your experience solving some trivial
statistical mechanics problems doesn't apply to the nuclear reactor transport problem which was
under discussion in this Nuclear Engineering forum.

It's not "bullying" to CORRECT ERRORS! If you pursue a career in science; get used to people
correcting you.

Dr. Gregory Greenman
Physicist
 
Last edited:
  • #34
mheslep said:
If one pops the hatch on the large machines, you find they are substantially power distribution and heat removal by volume, with some computing thrown in. Modern computing hardware has reached a point where it does not scale further without attention to power efficiency.
mheslep,

The Blue Gene/L and ASC Purple machines are air cooled - with ducts in the floor to force cooling
air through the machines.

The air blowers for the cooling systems are one floor below the machine floor in LLNL's TSF.

I stood in the air stream that goes to cool Blue Gene/L and Purple; and it was like being in a
hurricane.

If you look at a picture of Blue Gene - the sloping sides were dictated by the design of the cooling
air flow:

https://asc.llnl.gov/computing_resources/bluegenel/index.html

Dr. Gregory Greenman
Physicist
 
  • #35
J77 said:
This thread has derailed nicely... :biggrin:

Perhaps the simplest answer to the OP would be to look up finite difference and finite element schemes, for the 1D and 2D problems, respectively.

(This is what Morbius explained in the original reply. However, I've never heard the term "zone" w.r.t. FEMs.)
J77,

"Zone" is used more with finite difference equations; the term "element" is used in finite element.

Many times one speaks of a "mesh" being used to discretize the space. The points where the
mesh lines cross are called "nodes". The little volumes that make up the mesh are called "zones".
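
To make the vocabulary concrete, here is a tiny made-up 1D example (illustrative only, not from any of the codes mentioned):

```python
import numpy as np

# A 1D mesh on [0, 1]: the "nodes" are the points where the mesh lines sit;
# the little intervals between them are the "zones" (finite-difference usage)
# or "elements" (finite-element usage).

n_zones = 10
nodes = np.linspace(0.0, 1.0, n_zones + 1)      # n_zones + 1 node positions
zone_centers = 0.5 * (nodes[:-1] + nodes[1:])   # one center per zone
zone_widths = np.diff(nodes)

print(f"{len(nodes)} nodes, {len(zone_centers)} zones")
```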

Dr. Gregory Greenman
Physicist
 
<h2>1. What is the diffusion equation?</h2><p>The diffusion equation is a mathematical model that describes the movement of particles or substances from areas of high concentration to areas of low concentration. It is commonly used in physics, chemistry, and biology to study the process of diffusion.</p><h2>2. Why is it important to develop a program to solve the diffusion equation?</h2><p>Developing a program to solve the diffusion equation allows for a more efficient and accurate way to analyze and predict diffusion processes. It also allows for the exploration of different scenarios and variables, which can help in understanding the underlying principles of diffusion.</p><h2>3. What are the key components of a simple program to solve the diffusion equation?</h2><p>The key components of a simple program to solve the diffusion equation include defining the initial conditions, setting the boundary conditions, discretizing the equation, and using numerical methods to solve for the diffusion over time. It also requires knowledge of programming languages and mathematical concepts.</p><h2>4. How can a simple program to solve the diffusion equation be applied in real-world situations?</h2><p>A simple program to solve the diffusion equation can be applied in various fields such as material science, environmental science, and biomedical engineering. It can be used to study the diffusion of pollutants in the atmosphere, the diffusion of drugs in the body, and the diffusion of heat in materials, among others.</p><h2>5. Are there any limitations to using a simple program to solve the diffusion equation?</h2><p>While a simple program can provide useful insights into diffusion processes, it may not be able to accurately model complex scenarios or account for all variables. It also relies on assumptions and simplifications, which may not always reflect real-world situations. Therefore, it is important to carefully consider the limitations and assumptions of the program when interpreting the results.</p>

1. What is the diffusion equation?

The diffusion equation is a mathematical model that describes the movement of particles or substances from areas of high concentration to areas of low concentration. It is commonly used in physics, chemistry, and biology to study the process of diffusion.
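
In its simplest textbook form, with concentration $u$ and a constant diffusion coefficient $D$, the equation reads:

$$
\frac{\partial u}{\partial t} = D\, \nabla^{2} u
$$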

2. Why is it important to develop a program to solve the diffusion equation?

Developing a program to solve the diffusion equation allows for a more efficient and accurate way to analyze and predict diffusion processes. It also allows for the exploration of different scenarios and variables, which can help in understanding the underlying principles of diffusion.

3. What are the key components of a simple program to solve the diffusion equation?

The key components of a simple program to solve the diffusion equation include defining the initial conditions, setting the boundary conditions, discretizing the equation, and using numerical methods to solve for the diffusion over time. It also requires knowledge of programming languages and mathematical concepts.

4. How can a simple program to solve the diffusion equation be applied in real-world situations?

A simple program to solve the diffusion equation can be applied in various fields such as material science, environmental science, and biomedical engineering. It can be used to study the diffusion of pollutants in the atmosphere, the diffusion of drugs in the body, and the diffusion of heat in materials, among others.

5. Are there any limitations to using a simple program to solve the diffusion equation?

While a simple program can provide useful insights into diffusion processes, it may not be able to accurately model complex scenarios or account for all variables. It also relies on assumptions and simplifications, which may not always reflect real-world situations. Therefore, it is important to carefully consider the limitations and assumptions of the program when interpreting the results.
