Computational Fluid Dynamics and refining the mesh


Discussion Overview

The discussion revolves around the challenges of refining the mesh for a two-phase computational fluid dynamics (CFD) simulation of a partially filled tank being drained in zero gravity. Participants explore methods for ensuring mesh adequacy before running lengthy simulations, including mesh convergence studies and practical considerations for computational resources.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant suggests running the simulation multiple times with increasingly dense meshes to observe significant differences in results, a method known as mesh convergence study.
  • Another participant questions the practicality of running the entire simulation on different meshes due to time constraints, suggesting that only a few reruns might suffice to determine mesh adequacy.
  • Concerns are raised about the computational expense of rerunning simulations, especially if the simulation is expected to take weeks on a coarse mesh.
  • Some participants propose examining intermediate results after shorter runs to assess whether the mesh is sufficient without extensive reruns.
  • Discussion includes the capabilities of personal workstations versus supercomputers for running CFD simulations, with varying opinions on what constitutes a "strong" computer for such tasks.
  • Participants discuss the importance of estimating time and length scales relevant to the simulation to create an appropriate mesh, including suggestions for structured meshes and adaptive mesh refinement techniques.

Areas of Agreement / Disagreement

Participants express differing views on the necessity and practicality of running multiple simulations with different mesh densities. While some advocate for a thorough approach to ensure mesh adequacy, others emphasize the impracticality of such methods given the expected simulation times. The discussion remains unresolved regarding the best approach to mesh refinement.

Contextual Notes

Limitations include the dependence on specific software capabilities (OpenFOAM), assumptions about the simulation's time and length scales, and the varying definitions of what constitutes an adequate mesh. The discussion also highlights the computational intensity of CFD tasks and the challenges of resource allocation.

Who May Find This Useful

This discussion may be useful for researchers and practitioners in computational fluid dynamics, particularly those dealing with two-phase flow simulations, mesh refinement strategies, and resource management for high-performance computing.

member 428835
Hi PF!

I'm running a two-phase simulation, which takes a long time to run. The simulation is simply a partially filled tank being drained in zero gravity; the tank is a little smaller than a mailbox (~160 × 40 × 40 mm). Before running the full simulation, I would like to know whether my mesh is sufficiently refined. How would you do this?
 
Which software do you use? You should run this simulation a few times with an increasingly dense mesh and see whether the results differ significantly or only slightly. That's a typical mesh convergence study.
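The convergence study described above can be quantified: extract one scalar quantity of interest (QoI) from each run and apply Richardson extrapolation to estimate the discretization error on the finest mesh. A minimal sketch, assuming three runs at a constant refinement ratio; the QoI values below are made-up placeholders, not results from this thread:

```python
# Sketch of a mesh convergence check, assuming the case has been run on
# three meshes with a constant refinement ratio r (e.g. r = 2) and one
# scalar quantity of interest (QoI) extracted from each run -- say, the
# drained volume fraction at a fixed time. Values are hypothetical.
import math

r = 2.0                                          # refinement ratio
f_coarse, f_mid, f_fine = 0.412, 0.436, 0.441    # QoI, coarsest to finest

# Observed order of convergence (Richardson extrapolation):
p = math.log(abs((f_mid - f_coarse) / (f_fine - f_mid))) / math.log(r)

# Extrapolated "mesh-independent" value and relative error on the fine mesh:
f_exact = f_fine + (f_fine - f_mid) / (r**p - 1.0)
rel_err = abs((f_exact - f_fine) / f_exact)

print(f"observed order p = {p:.2f}")
print(f"extrapolated QoI = {f_exact:.4f}, fine-mesh error ~ {rel_err:.2%}")
```

If the estimated fine-mesh error is within your tolerance, the middle mesh may already be adequate; if not, refine further and repeat.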
 
FEAnalyst said:
Which software do you use? You should run this simulation a few times with an increasingly dense mesh and see whether the results differ significantly or only slightly. That's a typical mesh convergence study.
I use OpenFOAM, a finite volume solver that uses a volume of fluid approach to handle the interface. Are you suggesting running the entire simulation on different meshes and seeing how convergence looks? This seems so expensive and impractical in most situations (right?).
 
Yes, but maybe it will turn out that two or three reruns are enough to find out whether the mesh is sufficient or not.
 
joshmccraney said:
This seems so expensive and impractical in most situations (right?).
It is not expensive if you will later re-run the simulation many times with the optimum mesh size. I think the trial-and-error method has been standard ever since we started with digital simulations.

If this is a simulation that you will run only once, then you already did it once to find out that it is slow, so why ask the question?
 
anorlunda said:
It is not expensive if you will later re-run the simulation many times with the optimum mesh size. I think the trial-and-error method has been standard ever since we started with digital simulations.

If this is a simulation that you will run only once, then you already did it once to find out that it is slow, so why ask the question?
I ran it for 24 hours but predict the actual elapsed time will be weeks (on what I believe to be a coarse mesh). Don't want to go through this for several meshes if I can help it.
 
joshmccraney said:
I ran it for 24 hours but predict the actual elapsed time will be weeks (on what I believe to be a coarse mesh). Don't want to go through this for several meshes if I can help it.
You only need to look and see whether the calculated results are substantially unchanged. You should be able to do that after a test run of brief duration.

For example, you said you ran it for 24 hours. Can't you examine the intermediate results after that period?
 
joshmccraney said:
I ran it for 24 hours but predict the actual elapsed time will be weeks (on what I believe to be a coarse mesh). Don't want to go through this for several meshes if I can help it.
It sounds like you should try some coarser runs and only resort to a run of multiple weeks if you absolutely have to.
PS. CFD is one area where supercomputers are in demand. Now you know why.
 
Okay great, this is what I was hoping to do, but I wanted to ask what other people do. Yep, I can process the results after a shorter run.

And these run times are not on a supercomputer, but I did build a pretty strong one and it still takes a long time.
 
joshmccraney said:
Okay great, this is what I was hoping to do, but I wanted to ask what other people do. Yep, I can process the results after a shorter run.

And these run times are not on a supercomputer, but I did build a pretty strong one and it still takes a long time.

Computational fluid dynamics is one of the most computationally intensive tasks that currently exists. Building a "pretty strong" computer is not likely to make much of a dent in it if you want any kind of simulation involving both size and fidelity. You can likely get away with a local workstation for Euler codes, but even RANS calculations are typically performed on clusters, not desktops. Shoot, the national labs build supercomputers that are still utilized for "low-fidelity" simulations like RANS. Now scale that up to doing LES or DNS and your problems multiply rapidly.

I don't know what you consider to be "pretty strong," but your best bet is probably to do some very coarse meshes locally to debug and guide the selection of your finer mesh(es), then move the latter onto a cluster or other HPC somewhere.
 
boneh3ad said:
Computational fluid dynamics is one of the most computationally intensive tasks that currently exists. Building a "pretty strong" computer is not likely to make much of a dent in it if you want any kind of simulation involving both size and fidelity. You can likely get away with a local workstation for Euler codes, but even RANS calculations are typically performed on clusters, not desktops. Shoot, the national labs build supercomputers that are still utilized for "low-fidelity" simulations like RANS. Now scale that up to doing LES or DNS and your problems multiply rapidly.

I don't know what you consider to be "pretty strong," but your best bet is probably to do some very coarse meshes locally to debug and guide the selection of your finer mesh(es), then move the latter onto a cluster or other HPC somewhere.
I do laminar flows, so no RANS or LES. "Pretty strong" = 16 cores, 8 memory channels, 128 GB of RAM. Very good for a local desktop. Very weak for clusters.
 
16 cores should be OK for this kind of simulation, then. I have a workstation at home with 16 cores and it is fine for such computations, say < 5 million cells or so, for steady laminar and RANS cases. In your case you have a time-dependent simulation, so the bottleneck will be the time step you can use and the total time you would like to simulate.
The best thing you can do, I think, is estimate what kind of time scales and length scales you expect and create a mesh that can capture them. Do you expect large changes in, e.g., velocity to happen over a distance of a mm or a cm, and at specific locations? Do you expect changes to happen over the course of a millisecond or a second?
If you have a nice box-shaped geometry, you could create a structured mesh with, say, 1 mm cells. Also get an estimate of the interface velocity: the interface should not move more than one cell width in a single time step. Is that true for your mesh? You could refine a bit more around corners/edges. You could also try to set up a case with adaptive mesh refinement on velocity gradients and the interface location, but my experience with OpenFOAM is very limited, so I don't know how well that works (in terms of overhead, load balancing, etc., though I assume it uses a standard partitioning package such as ParMETIS).
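As a rough sanity check of the sizing advice above, here is a back-of-envelope sketch. The bounding box comes from the thread; the 1 mm cell size, the peak interface velocity, and the target Courant number are assumed, illustrative values:

```python
# Rough sizing of a uniform structured mesh over the tank's bounding box
# (~160 x 40 x 40 mm, from the thread) with 1 mm cells, plus a check of
# the one-cell-per-step rule: the interface Courant number Co = u*dt/dx
# should stay well below 1 for VOF. The interface velocity is assumed.
Lx, Ly, Lz = 0.160, 0.040, 0.040   # bounding box [m] (from the thread)
dx = 1.0e-3                        # cell size [m] (1 mm, as suggested)
n_cells = (Lx / dx) * (Ly / dx) * (Lz / dx)   # upper bound on cell count

u = 1.0e-2        # assumed peak interface velocity [m/s]
co = 0.5          # target interface Courant number (assumed)
dt_max = co * dx / u               # largest admissible time step [s]

print(f"n_cells ~ {n_cells:.0f}")  # ~256,000: comfortable for 16 cores
print(f"dt_max  ~ {dt_max:.3g} s")
```

A quarter-million cells is well under the ~5 million mentioned above, so on this kind of workstation the time-step limit, not the cell count, is likely the bottleneck.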
 
bigfooted said:
16 cores should be OK for this kind of simulation, then. I have a workstation at home with 16 cores and it is fine for such computations, say < 5 million cells or so, for steady laminar and RANS cases. In your case you have a time-dependent simulation, so the bottleneck will be the time step you can use and the total time you would like to simulate.
The best thing you can do, I think, is estimate what kind of time scales and length scales you expect and create a mesh that can capture them. Do you expect large changes in, e.g., velocity to happen over a distance of a mm or a cm, and at specific locations? Do you expect changes to happen over the course of a millisecond or a second?
If you have a nice box-shaped geometry, you could create a structured mesh with, say, 1 mm cells. Also get an estimate of the interface velocity: the interface should not move more than one cell width in a single time step. Is that true for your mesh? You could refine a bit more around corners/edges. You could also try to set up a case with adaptive mesh refinement on velocity gradients and the interface location, but my experience with OpenFOAM is very limited, so I don't know how well that works (in terms of overhead, load balancing, etc., though I assume it uses a standard partitioning package such as ParMETIS).
Thanks, lots of good stuff here!

So the geometry is a V-groove, 160 mm long and about 40 mm high, with a 30 degree angle. Velocity at the drain port is about 0.3 mm/s, and I'm simulating about 30-75 seconds of draining. Characteristic time scales should be on the order of seconds, certainly no smaller.

I use an adjustable time step that will not exceed a Courant number of 0.2. And yes, I have implemented a dynamic mesh at the interface.
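For reference, plugging the figures from this post into the Courant condition gives an optimistic upper bound on the time step. Local interface velocities will usually exceed the drain-port value, so the actual step will be smaller; the 1 mm cell size is an assumption, not a value stated in the thread:

```python
# Worked estimate with the figures from this post: drain velocity
# ~0.3 mm/s, maximum Courant number 0.2, and an assumed 1 mm cell size.
# Treat the result as an upper bound: local velocities in the tank will
# typically exceed the drain-port value and force a smaller step.
dx = 1.0e-3        # assumed cell size [m]
u_drain = 0.3e-3   # drain-port velocity [m/s] (from the thread)
co_max = 0.2       # Courant limit (from the thread)

dt = co_max * dx / u_drain   # time step implied by the drain velocity
steps_75s = 75.0 / dt        # steps needed to simulate 75 s of draining

print(f"dt ~ {dt:.3g} s, ~{steps_75s:.0f} steps for 75 s")
```

If even this optimistic bound implies a multi-week run, that points to the per-step cost (mesh size, solver settings) rather than the Courant limit as the thing to attack first.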
 
Can you share an image of your current mesh? A typical cross-section, perhaps superimposed over the velocity contour? You said that you ran it for 24 hours; what are your total mesh size and typical time step for that simulation?
 
