Computational Fluid Dynamics and refining the mesh

In summary, the conversation focused on running a two-phase simulation of a partially filled tank being drained in zero gravity. The main question was how to determine whether the mesh used was sufficiently refined. It was suggested to run the simulation several times with an increasingly dense mesh to perform a mesh convergence study. The software used for the simulation was OpenFOAM, and it was noted that supercomputers are often used for CFD simulations because of their computational intensity. The conversation also touched on run times and the need for strong computers or clusters to handle large and complex simulations. The suggested approach was to do some coarse runs first to debug the setup and guide the choice of a finer mesh, and only commit to a multi-week fine-mesh run if necessary.
  • #1
member 428835
Hi PF!

I'm running a two-phase simulation, which takes a long time to run. The simulation is simply a partially filled tank being drained in zero gravity, and the tank is a little smaller than a mailbox (~ 160 X 40 X 40 mm). Before running the simulation I would like to know if my mesh is sufficiently refined. How would you do this?
 
  • #2
Which software do you use? You should run this simulation a few times with an increasingly dense mesh and see whether the results differ significantly or only slightly. That's a typical mesh convergence study.
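For reference, one common way to quantify "differ significantly or only slightly" is Richardson extrapolation and a grid convergence index (GCI). Below is a minimal Python sketch of that calculation; the function name and the three sample results are hypothetical, not taken from this thread.
Code:
import math

def grid_convergence(f1, f2, f3, r, Fs=1.25):
    """Roache-style grid convergence check from three successively
    refined meshes: f1 = finest-mesh result, f3 = coarsest-mesh result,
    r = constant refinement ratio between meshes (e.g. 2 if the cell
    size halves), Fs = safety factor, commonly 1.25 for three grids."""
    # Observed order of convergence from the three solutions
    p = math.log(abs(f3 - f2) / abs(f2 - f1)) / math.log(r)
    # Relative change between the two finest meshes
    e21 = abs((f2 - f1) / f1)
    # Grid convergence index: estimated relative error band on the fine mesh
    gci = Fs * e21 / (r**p - 1.0)
    return p, gci

# Hypothetical values of a monitored quantity (e.g. drained volume)
# on fine, medium, and coarse meshes with a refinement ratio of 2
p, gci = grid_convergence(f1=1.02, f2=1.05, f3=1.13, r=2.0)
print(f"observed order ~ {p:.2f}, GCI ~ {100 * gci:.1f}%")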
 
  • #3
FEAnalyst said:
Which software do you use? You should run this simulation a few times with an increasingly dense mesh and see whether the results differ significantly or only slightly. That's a typical mesh convergence study.
I use OpenFOAM, a finite volume solver that uses a volume-of-fluid approach to handle the interface. Are you suggesting running the entire simulation on different meshes and seeing how convergence looks? This seems so expensive and impractical in most situations (right?).
 
  • #4
Yes, but maybe it will turn out that two/three reruns are enough to find out whether the mesh is sufficient or not.
 
  • #5
joshmccraney said:
This seems so expensive and impractical in most situations (right?).
It is not expensive if you will re-run the simulation many times with the optimum mesh size. I think the trial and error method has been standard ever since we started with digital simulations.

If this is a simulation that you will run only once, then you already did it once to find out that it is slow, so why ask the question?
 
  • #6
anorlunda said:
It is not expensive if you will re-run the simulation many times with the optimum mesh size. I think the trial and error method has been standard ever since we started with digital simulations.

If this is a simulation that you will run only once, then you already did it once to find out that it is slow, so why ask the question?
I ran it for 24 hours but predict the actual elapsed time will be weeks (on what I believe to be a coarse mesh). Don't want to go through this for several meshes if I can help it.
 
  • #7
joshmccraney said:
I ran it for 24 hours but predict the actual elapsed time will be weeks (on what I believe to be a coarse mesh). Don't want to go through this for several meshes if I can help it.
You only need to look and see whether the calculated results are substantially unchanged. You should be able to do that after a test of brief duration.

For example, you said you ran it for 24 hours. Can't you examine the intermediate results after that period?
 
  • #8
joshmccraney said:
I ran it for 24 hours but predict the actual elapsed time will be weeks (on what I believe to be a coarse mesh). Don't want to go through this for several meshes if I can help it.
It sounds like you should try some coarser runs and only resort to a run of multiple weeks if you absolutely have to.
PS. CFD is one area where supercomputers are in demand. Now you know why.
 
  • #9
Okay great, this is what I wanted to do, but I wanted to ask what others have done. Yep, I can process the results after a shorter run.

And these run times are not on a supercomputer, but I did build a pretty strong one and it still takes a long time.
 
  • #10
joshmccraney said:
Okay great, this is what I wanted to do, but I wanted to ask what others have done. Yep, I can process the results after a shorter run.

And these run times are not on a supercomputer, but I did build a pretty strong one and it still takes a long time.

Computational fluid dynamics is one of the most computationally intensive tasks that currently exists. Building a "pretty strong" computer is not likely to make much of a dent in it if you want any kind of simulation involving both size and fidelity. You can likely get away with a local workstation for Euler codes, but even RANS calculations are typically performed on clusters, not desktops. Shoot, the national labs build supercomputers that are still utilized for "low-fidelity" simulations like RANS. Now scale that up to doing LES or DNS and your problems multiply rapidly.

I don't know what you consider to be "pretty strong," but your best bet is probably to do some very coarse meshes locally to debug and guide the selection of your finer mesh(es), then move the latter onto a cluster or other HPC somewhere.
 
  • #11
boneh3ad said:
Computational fluid dynamics is one of the most computationally intensive tasks that currently exists. Building a "pretty strong" computer is not likely to make much of a dent in it if you want any kind of simulation involving both size and fidelity. You can likely get away with a local workstation for Euler codes, but even RANS calculations are typically performed on clusters, not desktops. Shoot, the national labs build supercomputers that are still utilized for "low-fidelity" simulations like RANS. Now scale that up to doing LES or DNS and your problems multiply rapidly.

I don't know what you consider to be "pretty strong," but your best bet is probably to do some very coarse meshes locally to debug and guide the selection of your finer mesh(es), then move the latter onto a cluster or other HPC somewhere.
I do laminar flows, so no RANS or LES. "Pretty strong" = 16 cores, 8 memory channels, 128 GB of RAM. Very good for a local desktop. Very weak compared to a cluster.
 
  • #12
16 cores should be OK for this kind of simulation then. I have a workstation at home with 16 cores and it is fine for such computations, say < 5 million cells or so, for steady laminar and RANS cases. In your case you are running a time-dependent simulation, so the bottleneck will be the time step you can use and the total time you would like to simulate.
The best thing you can do, I think, is estimate the time scales and length scales you expect and create a mesh that can capture them. Do you expect large changes in, e.g., velocity to happen over a distance of a mm or a cm, and at specific locations? Do you expect changes to happen over the course of a millisecond or a second?
If you have a nice box-shaped geometry, you could create a nice structured mesh with, say, 1 mm cells. Also get an estimate of the interface velocity. The interface should not move more than the size of one cell in a single time step. Is this true for your mesh? You could refine a bit more around corners and edges. You could also try to set up a case with adaptive mesh refinement on velocity gradients and interface location, but my experience with OpenFOAM is very limited, so I don't know how well that works (in terms of overhead, load balancing, etc., but I assume it uses a standard package for this like ParMETIS).
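To make the time-step point concrete, here is a rough Python back-of-the-envelope sketch; the cell size, interface velocity, and total simulated time below are placeholder assumptions, not values from this thread.
Code:
# Interface-Courant estimate: the interface should move no more than a
# fraction co_max of a cell per time step, so dt <= co_max * dx / u.
dx = 1.0e-3           # cell size [m], e.g. a 1 mm structured mesh
u_interface = 5.0e-3  # assumed peak interface velocity [m/s]
co_max = 0.5          # assumed maximum interface Courant number

dt_max = co_max * dx / u_interface
t_total = 60.0        # assumed physical time to simulate [s]
n_steps = t_total / dt_max

print(f"dt_max ~ {dt_max:.3g} s, ~{n_steps:.0f} steps to reach {t_total:.0f} s")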
 
  • #13
bigfooted said:
16 cores should be OK for this kind of simulation then. I have a workstation at home with 16 cores and it is fine for such computations, say < 5 million cells or so, for steady laminar and RANS cases. In your case you are running a time-dependent simulation, so the bottleneck will be the time step you can use and the total time you would like to simulate.
The best thing you can do, I think, is estimate the time scales and length scales you expect and create a mesh that can capture them. Do you expect large changes in, e.g., velocity to happen over a distance of a mm or a cm, and at specific locations? Do you expect changes to happen over the course of a millisecond or a second?
If you have a nice box-shaped geometry, you could create a nice structured mesh with, say, 1 mm cells. Also get an estimate of the interface velocity. The interface should not move more than the size of one cell in a single time step. Is this true for your mesh? You could refine a bit more around corners and edges. You could also try to set up a case with adaptive mesh refinement on velocity gradients and interface location, but my experience with OpenFOAM is very limited, so I don't know how well that works (in terms of overhead, load balancing, etc., but I assume it uses a standard package for this like ParMETIS).
Thanks, lots of good stuff here!

So the geometry is a V-groove 160 mm long with a 30 degree angle, about 40 mm high. Velocity at the drain port is about 0.3 mm/s. I'm simulating about 30-75 seconds of draining. Characteristic time scales should be on the order of seconds, for sure no smaller.

I have an adjustable time step that will not let the Courant number exceed 0.2. And yes, I have implemented a dynamic mesh at the interface.
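For a sense of scale, here is a quick sketch of the cell count this geometry implies at the 1 mm resolution suggested above; treating the 30 degrees as the full opening angle is my assumption, so adjust the half-angle if it is actually the half-angle.
Code:
import math

# Rough cell count for a V-groove with a uniform 1 mm mesh.
length = 0.160                   # groove length [m]
height = 0.040                   # groove depth [m]
half_angle = math.radians(15.0)  # half of an assumed 30 degree opening angle
dx = 1.0e-3                      # uniform cell size [m]

width_top = 2.0 * height * math.tan(half_angle)  # opening width at the top
cross_section = 0.5 * width_top * height         # triangular cross-section area
n_cells = (cross_section / dx**2) * (length / dx)

print(f"opening width ~ {width_top * 1e3:.1f} mm, "
      f"~{n_cells / 1e3:.0f}k cells at {dx * 1e3:.0f} mm resolution")
Even allowing for extra refinement near the interface, this lands well under the "< 5 million cells" figure mentioned earlier in the thread.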
 
  • #14
Can you share an image of your current mesh? A typical cross-section, perhaps superimposed over the velocity contour? You said that you ran it for 24 hours; what is your total mesh size and your typical time step for that simulation?
 

1. What is Computational Fluid Dynamics (CFD)?

Computational Fluid Dynamics (CFD) is a branch of fluid mechanics that uses numerical analysis and algorithms to solve and analyze problems involving fluid flow, heat transfer, and related phenomena. It involves the use of computer simulations to model and predict the behavior of fluids in various systems.

2. Why is refining the mesh important in CFD?

Refining the mesh is important in CFD because it helps to improve the accuracy and reliability of the simulation results. A finer mesh allows for better resolution of the fluid flow and captures more details of the flow behavior. This is especially important in areas where there are complex geometries or flow features.

3. How is the mesh refined in CFD?

The mesh can be refined in CFD by dividing the computational domain into smaller cells or elements. This can be done manually by the user or automatically by the software, based on certain criteria such as the gradient of the flow variables or the proximity to solid boundaries. Adaptive mesh refinement techniques can also be used to dynamically adjust the mesh as the simulation progresses.
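As a toy illustration of a gradient-based refinement criterion (not OpenFOAM code), cells might be flagged as in the Python sketch below; the field, threshold, and function name are invented for the example.
Code:
import numpy as np

def flag_cells_for_refinement(x, phi, threshold):
    """Mark cells whose local gradient magnitude exceeds a threshold."""
    grad = np.abs(np.gradient(phi, x))
    return grad > threshold

# Sharp, interface-like 1-D profile: flagged cells cluster near x = 0.5
x = np.linspace(0.0, 1.0, 200)
phi = np.tanh((x - 0.5) / 0.02)
flags = flag_cells_for_refinement(x, phi, threshold=5.0)
print(f"{flags.sum()} of {flags.size} cells flagged for refinement")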

4. What are the challenges of refining the mesh in CFD?

One of the main challenges of refining the mesh in CFD is finding a balance between accuracy and computational cost. A finer mesh may provide more accurate results, but it also requires more computational resources and time. Additionally, refining the mesh in areas with complex geometries or flow features can be challenging and may require specialized techniques and tools.

5. How can the accuracy of CFD simulations be verified after refining the mesh?

The accuracy of CFD simulations can be verified by comparing the results to experimental data or analytical solutions, if available. Grid convergence studies, where the mesh is successively refined and the results are compared, can also be used to assess the accuracy of the simulation. Additionally, sensitivity analysis can be performed to evaluate the impact of mesh refinement on the simulation results.
