Computational Fluid Dynamics and refining the mesh

Summary
The discussion revolves around optimizing mesh refinement for a two-phase simulation of a partially filled tank draining in zero gravity. Users suggest conducting a mesh convergence study by running simulations with increasingly dense meshes to determine if results significantly differ. OpenFOAM is mentioned as the software being used, with concerns about the practicality of running multiple simulations due to long processing times. Participants agree that initial coarse runs can help guide the selection of finer meshes, and emphasize the importance of estimating time and length scales for effective mesh design. The conversation highlights the computational intensity of CFD and the necessity of using appropriate hardware for such simulations.
member 428835
Hi PF!

I'm running a two-phase simulation, which takes a long time to run. The simulation is simply a partially filled tank being drained in zero gravity, and the tank is a little smaller than a mailbox (~160 × 40 × 40 mm). Before running the simulation I would like to know if my mesh is sufficiently refined. How would you do this?
 
Which software do you use? You should run this simulation a few times with an increasingly dense mesh and see whether the results differ significantly or only slightly. That's a typical mesh convergence study.
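A convergence study like this is often quantified with Richardson extrapolation. Here is a minimal sketch, assuming three meshes with a constant refinement ratio and a single scalar quantity of interest; the numbers below are placeholders for illustration, not results from this thread:

```python
import math

# Sketch of a mesh convergence check using Richardson extrapolation,
# for three meshes with constant refinement ratio r and one scalar
# quantity of interest f (e.g. drained volume at a fixed time).

def observed_order(f_coarse, f_medium, f_fine, r):
    """Estimate the observed order of accuracy p from three solutions."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

def richardson_extrapolate(f_medium, f_fine, r, p):
    """Extrapolate to an estimated grid-independent value."""
    return f_fine + (f_fine - f_medium) / (r**p - 1)

# Hypothetical results on 1x, 2x, 4x refined meshes (r = 2):
f1, f2, f3 = 10.8, 10.2, 10.05   # coarse, medium, fine
p = observed_order(f1, f2, f3, r=2)
f_exact = richardson_extrapolate(f2, f3, r=2, p=p)
print(f"observed order p = {p:.2f}, extrapolated value = {f_exact:.3f}")
```

If the fine-mesh result is already close to the extrapolated value, further refinement is unlikely to change the answer much.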
 
FEAnalyst said:
Which software do you use? You should run this simulation a few times with an increasingly dense mesh and see whether the results differ significantly or only slightly. That's a typical mesh convergence study.
I use OpenFOAM, a finite volume solver that uses a volume-of-fluid approach to handle the interface. Are you suggesting running the entire simulation on different meshes and seeing how convergence looks? This seems so expensive and impractical in most situations (right?).
 
Yes, but maybe it will turn out that two/three reruns are enough to find out whether the mesh is sufficient or not.
 
joshmccraney said:
This seems so expensive and impractical in most situations (right?).
It is not expensive if you re-run the simulation many times with the optimum mesh size. I think the trial and error method has been standard ever since we started with digital simulations.

If this is a simulation that you will run only once, then you already did it once to find out that it is slow, so why ask the question?
 
anorlunda said:
It is not expensive if you re-run the simulation many times with the optimum mesh size. I think the trial and error method has been standard ever since we started with digital simulations.

If this is a simulation that you will run only once, then you already did it once to find out that it is slow, so why ask the question?
I ran it for 24 hours but predict the actual elapsed time will be weeks (on what I believe to be a coarse mesh). Don't want to go through this for several meshes if I can help it.
 
joshmccraney said:
I ran it for 24 hours but predict the actual elapsed time will be weeks (on what I believe to be a coarse mesh). Don't want to go through this for several meshes if I can help it.
You only need to look and see if the calculated results are or are not substantially unchanged. You should be able to do that after a test of brief duration.

For example, you said you ran it for 24 hours. Can't you examine the intermediate results after that period?
 
joshmccraney said:
I ran it for 24 hours but predict the actual elapsed time will be weeks (on what I believe to be a coarse mesh). Don't want to go through this for several meshes if I can help it.
It sounds like you should try some coarser runs and only resort to a run of multiple weeks if you absolutely have to.
PS. CFD is one area where supercomputers are in demand. Now you know why.
 
Okay great, this is what I wanted to do, but I wanted to ask what others do. Yep, I can process the results after a shorter run.

And these run times are not on a supercomputer, but I did build a pretty strong one and it still takes a long time.
 
joshmccraney said:
Okay great, this is what I wanted to do, but I wanted to ask what others do. Yep, I can process the results after a shorter run.

And these run times are not on a supercomputer, but I did build a pretty strong one and it still takes a long time.

Computational fluid dynamics is one of the most computationally intensive tasks that currently exist. Building a "pretty strong" computer is not likely to make much of a dent in it if you want any kind of simulation involving both size and fidelity. You can likely get away with a local workstation for Euler codes, but even RANS calculations are typically performed on clusters, not desktops. Shoot, the national labs build supercomputers that are still utilized for "low-fidelity" simulations like RANS. Now scale that up to doing LES or DNS and your problems multiply rapidly.

I don't know what you consider to be "pretty strong," but your best bet is probably to do some very coarse meshes locally to debug and guide the selection of your finer mesh(es), then move the latter onto a cluster or other HPC somewhere.
 
boneh3ad said:
Computational fluid dynamics is one of the most computationally intensive tasks that currently exist. Building a "pretty strong" computer is not likely to make much of a dent in it if you want any kind of simulation involving both size and fidelity. You can likely get away with a local workstation for Euler codes, but even RANS calculations are typically performed on clusters, not desktops. Shoot, the national labs build supercomputers that are still utilized for "low-fidelity" simulations like RANS. Now scale that up to doing LES or DNS and your problems multiply rapidly.

I don't know what you consider to be "pretty strong," but your best bet is probably to do some very coarse meshes locally to debug and guide the selection of your finer mesh(es), then move the latter onto a cluster or other HPC somewhere.
I do laminar flows, so no RANS or LES. "Pretty strong" = 16 cores, 8 memory channels, 128 GB of RAM. Very good for a local desktop, very weak for clusters.
 
16 cores should be OK for this kind of simulation then. I have a workstation at home with 16 cores and it is fine for such computations, say < 5 million cells or so, for steady laminar and RANS cases. In your case you do a time-dependent simulation, so the bottleneck will be the time step that you can use and the total time you would like to simulate.
The best thing you can do, I think, is try to estimate what kind of time scales and length scales you expect and create a mesh that can capture them. Do you expect large changes in e.g. velocity to happen over a distance of a mm or a cm, and at specific locations? Do you expect changes to happen over the course of a millisecond or a second?
If you have a nice box-shaped geometry, you could create a nice structured mesh with, say, 1 mm cells. Also get an estimate of the interface velocity: the interface should not move more than the size of one cell in a single time step. Is this true for your mesh? You could refine a bit more around corners/edges. You could also try to set up a case with adaptive mesh refinement on velocity gradients and interface location, but my experience with OpenFOAM is very limited, so I don't know how well that works (in terms of overhead, load balancing, etc., but I assume they use a standard package for this, like ParMETIS).
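The cell-count and interface-speed suggestions above can be sketched numerically. The domain extent below is taken from the tank size quoted earlier in the thread; the interface velocity is a placeholder assumption:

```python
# Rough sanity check: a uniform structured mesh of 1 mm cells in a
# box-like domain, plus the rule that the interface should not cross
# more than one cell per time step.

dx = 1.0e-3                       # cell size [m]
Lx, Ly, Lz = 0.160, 0.040, 0.040  # domain extent [m], from the thread

n_cells = round(Lx / dx) * round(Ly / dx) * round(Lz / dx)
print(f"structured mesh size: {n_cells} cells")  # 160 * 40 * 40 = 256000

u_interface = 0.3e-3              # assumed interface velocity [m/s]
dt_max = dx / u_interface         # interface moves one cell per step
print(f"max time step from interface motion: {dt_max:.2f} s")
```

A quarter-million cells at this cell size is well within reach of a 16-core workstation; the time-step limit is what drives the total wall-clock time.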
 
bigfooted said:
16 cores should be OK for this kind of simulation then. I have a workstation at home with 16 cores and it is fine for such computations, say < 5 million cells or so, for steady laminar and RANS cases. In your case you do a time-dependent simulation, so the bottleneck will be the time step that you can use and the total time you would like to simulate.
The best thing you can do, I think, is try to estimate what kind of time scales and length scales you expect and create a mesh that can capture them. Do you expect large changes in e.g. velocity to happen over a distance of a mm or a cm, and at specific locations? Do you expect changes to happen over the course of a millisecond or a second?
If you have a nice box-shaped geometry, you could create a nice structured mesh with, say, 1 mm cells. Also get an estimate of the interface velocity: the interface should not move more than the size of one cell in a single time step. Is this true for your mesh? You could refine a bit more around corners/edges. You could also try to set up a case with adaptive mesh refinement on velocity gradients and interface location, but my experience with OpenFOAM is very limited, so I don't know how well that works (in terms of overhead, load balancing, etc., but I assume they use a standard package for this, like ParMETIS).
Thanks, lots of good stuff here!

So the geometry is a V-groove 160 mm long with a 30-degree angle, about 40 mm high. Velocity at the drain port is about 0.3 mm/s. I'm simulating about 30-75 seconds of draining. Characteristic time scales should be on the order of seconds, for sure no smaller.

I have an adjustable time step that will not exceed a Courant number of 0.2. And yes, I have implemented a dynamic mesh at the interface.
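The Courant limit described above gives a quick time-step estimate. This sketch assumes the 1 mm cell size suggested earlier and uses the quoted drain-port velocity; with dynamic refinement the cells near the interface will be smaller and local speeds may be higher, so the real time step will be smaller than this:

```python
# Time-step estimate from the Courant limit: dt <= Co_max * dx / U.

co_max = 0.2    # maximum Courant number, as stated in the thread
dx = 1.0e-3     # assumed cell size [m] (1 mm, suggested earlier)
u = 0.3e-3      # drain-port velocity [m/s], from the thread

dt = co_max * dx / u
n_steps = int(75.0 / dt)  # steps needed to simulate 75 s of draining
print(f"dt ~ {dt:.3f} s, ~{n_steps} steps for 75 s")
```

Only a few hundred steps would be needed at this scale, which suggests the weeks-long runtime comes from much smaller local cells or velocities than assumed here.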
 
Can you share an image of your current mesh? A typical cross-section, perhaps superimposed over the velocity contour? You said that you ran it for 24 hours; what is your total mesh size and your typical time step for that simulation?
 
