Finite Element and CFL condition for the heat equation


Discussion Overview

The discussion revolves around the application of the CFL condition in the context of solving the heat equation using finite element methods (FEM) in a C++ code. Participants explore the implications of grid size and time step on stability and accuracy in both 2D and 3D scenarios, particularly focusing on the behavior of the CFL condition in these dimensions.

Discussion Character

  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant notes that while the CFL condition holds in 2D (requiring dt < h**2), in 3D the required time step appears to shrink faster than expected, in what looks like dt/h**3 behaviour as the grid is refined.
  • Another participant inquires about the specific partial differential equation being solved, suggesting that the nature of the equation (e.g., Fourier or involving moving fluids) could influence the results.
  • Concerns are raised about potential singularities in the analytic solution that could lead to excessive mesh refinement and instability in the numerical solution.
  • Suggestions are made to limit the minimum size of elements and/or time steps to maintain physical relevance in the problem.
  • Participants discuss the advantages of using implicit integration methods over explicit ones, particularly in terms of stability and accuracy when dealing with larger time steps.
  • A detailed explanation is provided regarding the transition from explicit to fully implicit methods, including the role of a weight factor that influences the stability and accuracy of the solution.
  • One participant shares their experience of resolving the issue by optimizing the mesh in GMSH, noting that a regular 3D grid adhered to the expected CFL condition.

Areas of Agreement / Disagreement

Participants express varying opinions on the applicability of the CFL condition in 3D finite element methods, with some suggesting that mesh quality and element size play critical roles. There is no consensus on the best approach to take regarding time stepping and mesh refinement strategies.

Contextual Notes

Participants highlight limitations related to mesh quality in 3D, the potential for singularities in the solution, and the need for careful control over element sizes and time steps. The discussion reflects a range of approaches and considerations without resolving the underlying complexities.

Who May Find This Useful

This discussion may be of interest to researchers and practitioners working with numerical methods for partial differential equations, particularly those using finite element methods in computational physics or engineering contexts.

pepgma
I am solving the heat equation in a non-commercial C++ finite element code with explicit Euler time stepping and adaptive meshes (coarse at the boundaries and finer in the center). I am aware that the CFL condition for the heat equation depends on dt/h**2 in the 1D, 2D, and 3D cases. When I solve the equation in 2D this principle is followed: refining the grid requires smaller time steps, following dt < h**2.

But in 3D the problem seems to require finer and finer grids as I decrease the time step, in what appears to be a dt/h**3 behaviour. Does anyone have an idea what could be happening? Is the CFL condition no longer valid in FEM and 3D? What other factors could be influencing this?
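As a point of reference for the dt/h**2 scaling, here is a minimal 1D finite-difference sketch (not the thread's code; the diffusivity alpha, grid size and step count are made-up illustrative values) of why explicit Euler ties the time step to h**2: the update only stays bounded when dt <= h**2/(2*alpha), so halving h forces dt down by a factor of four.

Code:
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    const double alpha = 1.0;              // diffusivity (assumed value)
    const int n = 51;                      // grid points on [0, 1]
    const double h = 1.0 / (n - 1);
    const double dt = 0.5 * h * h / alpha; // right at the explicit stability limit
    std::vector<double> u(n, 0.0), unew(n, 0.0);
    u[n / 2] = 1.0;                        // spike initial condition

    for (int step = 0; step < 1000; ++step) {
        for (int i = 1; i < n - 1; ++i)    // Dirichlet ends stay at 0
            unew[i] = u[i] + alpha * dt / (h * h) * (u[i - 1] - 2.0 * u[i] + u[i + 1]);
        u.swap(unew);
    }
    // With dt at or below 0.5*h*h/alpha the spike decays smoothly;
    // nudge dt above that limit and the solution blows up within a few hundred steps.
    std::printf("max u after 1000 steps: %g\n", *std::max_element(u.begin(), u.end()));
    return 0;
}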
 
What partial differential equation are you solving? Fourier or something with a moving fluid where you have first derivatives in space?
 
It is possible the analytic solution has a singularity in the space dimension(s), so a naive attempt to solve it with automatic mesh refinement will "diverge" to an infinite number of elements and an infinitesimal time step.

The simplest fix for that is just to limit the minimum size of elements and/or time steps to something physically meaningful for the problem.

If you need small elements in space to capture the solution, an implicit difference scheme in time with variable time steps can be orders of magnitude faster than a conditionally stable implicit scheme.
 
A fully implicit integration is the best approach. With too large a time step, all you lose is accuracy, not stability.
 
Thanks LawrenceC and AlephZero. Finally, the issue was merely a meshing problem. I am using Gmsh, which allows you to provide a 1D parameter of "typical length". Although this parameter is well conserved in 2D meshes, 3D meshes seem to result in very deformed elements (e.g. a tetrahedron with one of its edges close to zero length). Optimizing the mesh with the two 'optimize' buttons in Gmsh helped, though, but some of the tetrahedra were still very small.

To corroborate, I solved the same equations on a regular 3D grid, where I was sure the element size was conserved, and indeed the dt < h**2 condition was followed.

As for your questions, the equation was really just the heat equation, div(sigma*grad(phi)) = K*dphi/dt.

No automatic mesh refinement (I assume you mean during run time) was used.

>The simplest fix for that is just to limit the minimum size of elements and/or time steps to
>something physically meaningful for the problem.

I guess this is equivalent to what I found; I did not have real control over the minimum element size.

>an implicit difference scheme in time with variable time steps can be orders of magnitude faster
>than a conditionally stable implicit scheme.

I understand the first point, and I agree; I have moved to an implicit scheme now. Could you explain "conditionally stable implicit scheme"? Did you mean 'explicit scheme'?

>A fully implicit integration is the best approach. With too large a time step, all you lose is
>accuracy, not stability.

Thanks! I hadn't seen it that way.
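In case it helps anyone hitting the same thing: below is a rough C++ sketch (hypothetical data structures, not Gmsh's API or the poster's code) of the kind of check that would have caught the sliver tetrahedra, by scanning the mesh for its shortest edge and reporting the explicit time step that edge implies via dt ~ h_min**2. The real FEM stability limit depends on the assembled mass and stiffness matrices, so treat this only as a diagnostic estimate.

Code:
#include <array>
#include <cmath>
#include <cstdio>
#include <limits>
#include <vector>

struct Node { double x, y, z; };
using Tet = std::array<int, 4>;   // four node indices per tetrahedron

// Shortest edge over all tetrahedra in the mesh.
double minEdgeLength(const std::vector<Node>& nodes, const std::vector<Tet>& tets) {
    double hmin = std::numeric_limits<double>::max();
    for (const Tet& t : tets)
        for (int a = 0; a < 4; ++a)
            for (int b = a + 1; b < 4; ++b) {
                const Node& p = nodes[t[a]];
                const Node& q = nodes[t[b]];
                double d = std::sqrt((p.x - q.x) * (p.x - q.x) +
                                     (p.y - q.y) * (p.y - q.y) +
                                     (p.z - q.z) * (p.z - q.z));
                if (d < hmin) hmin = d;
            }
    return hmin;
}

int main() {
    // Toy mesh: one well-shaped tet and one sliver with a near-zero edge.
    std::vector<Node> nodes = {{0,0,0}, {1,0,0}, {0,1,0}, {0,0,1}, {1e-3,0,1}};
    std::vector<Tet> tets = {{0,1,2,3}, {1,2,3,4}};
    double hmin  = minEdgeLength(nodes, tets);
    double alpha = 1.0;                       // assumed diffusivity sigma/K
    std::printf("h_min = %g  ->  explicit dt of order %g\n",
                hmin, hmin * hmin / (2.0 * alpha));
    return 0;
}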
 
To advance the first-order differential equations in time, you can go from explicit to fully implicit. There is a weight factor from 0 to 1 that controls it. If it is 0, the advancement is explicit. If it is 1, the solution is fully implicit. If it is 1/2, it is called Crank-Nicolson. With the heat equation you wind up with a set of equations written as follows. The order of the matrices is the number of grid points in the system.

[B * theta * [K] + [CAP]]{t'} = [[CAP] - (1-B) * theta * [K]]{t''} + {b} * theta

where

[ ] denotes an NxN matrix
{ } denotes an Nx1 vector
theta is the time step
B is the weight factor, 0 <= B <= 1
[K] is the stiffness matrix, the FE approximation of the Laplacian operator, NxN
[CAP] is the density-specific heat (capacity) matrix, diagonal, NxN
{b} is an Nx1 vector where the boundary conditions wind up
{t'} is the vector of temperatures at the new time step
{t''} is the vector of temperatures from the previous time step

So you can readily see that if B is anything but zero, you must solve an NxN set of simultaneous equations at each time step. Fortunately, there are some very fast routines (Gauss elimination) that can do this if the matrix is symmetric and the grid numbering is such that the bandwidth is minimized. If the system is very large, I have used conjugate gradient methods to solve it.

If B=0, you have 'explicit' expressions for the answer at the new time step because [CAP] is diagonal. But, as mentioned, it is only conditionally stable. If B is greater than or equal to 1/2, the solution is stable, but it can show damped oscillations if too large a time step is chosen. For B=1, there are no oscillations.
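To make the bookkeeping concrete, here is a small self-contained sketch of one such step for a 1D heat problem, written to match the notation above (theta is the time step, B the weight, [CAP] lumped and diagonal, {b} = 0). The grid, the values, and the tridiagonal Thomas solver are illustrative choices, not anyone's actual code; for B = 0 the left-hand matrix reduces to [CAP] and the update becomes explicit.

Code:
#include <cstdio>
#include <vector>

// Thomas algorithm for a tridiagonal system (a: sub-diagonal, d: diagonal, c: super-diagonal).
std::vector<double> solveTridiag(std::vector<double> a, std::vector<double> d,
                                 std::vector<double> c, std::vector<double> rhs) {
    int n = static_cast<int>(d.size());
    for (int i = 1; i < n; ++i) {          // forward elimination
        double m = a[i] / d[i - 1];
        d[i]   -= m * c[i - 1];
        rhs[i] -= m * rhs[i - 1];
    }
    std::vector<double> x(n);
    x[n - 1] = rhs[n - 1] / d[n - 1];      // back substitution
    for (int i = n - 2; i >= 0; --i)
        x[i] = (rhs[i] - c[i] * x[i + 1]) / d[i];
    return x;
}

int main() {
    const int n = 11;
    const double h = 1.0 / (n - 1), theta = 0.05, B = 1.0;  // B = 1: fully implicit
    const double k = 1.0 / (h * h);                         // Laplacian stencil scale
    std::vector<double> told(n, 0.0);
    told[n / 2] = 1.0;                                      // previous temperatures {t''}

    // Assemble A = [CAP] + B*theta*[K] and rhs = ([CAP] - (1-B)*theta*[K]){t''}, with {b} = 0.
    // End nodes see an implicit zero Dirichlet ghost value.
    std::vector<double> sub(n, 0.0), diag(n, 0.0), sup(n, 0.0), rhs(n, 0.0);
    for (int i = 0; i < n; ++i) {
        double cap = 1.0;                                   // lumped capacity entry
        diag[i] = cap + B * theta * 2.0 * k;
        if (i > 0)     sub[i] = -B * theta * k;
        if (i < n - 1) sup[i] = -B * theta * k;
        rhs[i] = cap * told[i]
               - (1.0 - B) * theta * (2.0 * k * told[i]
                     - (i > 0     ? k * told[i - 1] : 0.0)
                     - (i < n - 1 ? k * told[i + 1] : 0.0));
    }
    std::vector<double> tnew = solveTridiag(sub, diag, sup, rhs);  // {t'}
    std::printf("t'[center] after one step: %g\n", tnew[n / 2]);
    return 0;
}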
 
