When do we need to consider the homogeneous solution?

  • Context: Graduate
  • Thread starter: LCSphysicist
  • Tags: Homogeneous
SUMMARY

The discussion focuses on the necessity of considering homogeneous solutions in differential equations, particularly when solving ##\nabla^2 u = 0##. It emphasizes that while separation of variables yields specific solutions, the case ##k=0## requires additional terms such as ##x, y, xy, \text{const.}## The need for these terms becomes evident from the boundary conditions, as illustrated by examples involving the Biot-Savart formula and the divergence of magnetic fields. The conversation highlights the difficulty of determining in advance whether homogeneous solutions are needed, particularly in complex scenarios.

PREREQUISITES
  • Understanding of partial differential equations (PDEs)
  • Familiarity with boundary value problems
  • Knowledge of eigenvalue problems in mathematical physics
  • Basic concepts of magnetostatics and the Biot-Savart law
NEXT STEPS
  • Study the method of separation of variables in PDEs
  • Explore the role of homogeneous solutions in boundary value problems
  • Learn about eigenvalue problems and their applications in physics
  • Investigate the implications of the Biot-Savart law in magnetostatics
USEFUL FOR

Mathematicians, physicists, and engineering students dealing with differential equations, boundary value problems, and magnetostatics will benefit from this discussion.

LCSphysicist

Generally, when we need to solve, for example, ##\nabla^2 u = 0##, we separate variables and obtain equations such as ##X''/X = -Y''/Y = k^2##. We then solve these, sum the solutions, and impose the boundary/initial conditions.

But sometimes we also need to consider the case ##k=0##, that is, solutions of the type ##x, y, xy, \text{const.}##

While the necessity of these terms becomes apparent while solving the problem, I would like to know whether there is a way to tell right at the beginning that these extra solutions will be needed.

For example, ##u = 0## at ##x=0,\ y = 0,\ x = L##; ##u = 30## at ##y = H## does not need them. But ##u_x = 0## at ##x=0,\ x=L##; ##u = 0## at ##y=0##; ##u = f(x)## at ##y = H## does.

How could I know right at the beginning? Of course this is just one example; I would like to know the answer for the general case, even for differential equations other than ##\nabla^2 u = 0##.
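A quick numerical sketch of the second example (assumption: its side conditions are the Neumann ones ##u_x = 0## at ##x=0## and ##x=L##, with ##u=0## at ##y=0## and ##u=f(x)## at ##y=H##; the sample ##f## and all variable names below are mine). The separated solutions for ##k>0## are ##\cos(n\pi x/L)\sinh(n\pi y/L)##; the ##k=0## pair is a constant times ##y##. Dropping the ##k=0## term makes ##u(x,H)## miss the mean of ##f## entirely:

```python
import numpy as np

L, H, N = 1.0, 1.0, 50
x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]
f = 2.0 + np.cos(np.pi * x / L)  # sample boundary data with nonzero mean

def trap(g):
    # composite trapezoidal rule on the uniform grid x
    return (g.sum() - 0.5 * (g[0] + g[-1])) * dx

# Cosine series f(x) ~ a0/2 + sum_n a_n cos(n pi x / L).  At y = H every
# sinh(n pi H/L)/sinh(n pi H/L) factor equals 1, so u(x, H) is just the series.
a0 = (2.0 / L) * trap(f)
u_H = np.full_like(x, a0 / 2.0)  # k = 0 term: (a0/2) * (y/H), evaluated at y = H
u_H_no0 = np.zeros_like(x)       # same series with the k = 0 term dropped
for n in range(1, N + 1):
    a_n = (2.0 / L) * trap(f * np.cos(n * np.pi * x / L))
    term = a_n * np.cos(n * np.pi * x / L)
    u_H += term
    u_H_no0 += term

print(np.max(np.abs(u_H - f)))      # tiny: full series matches f at y = H
print(np.max(np.abs(u_H_no0 - f)))  # about 2 (the mean of f): k = 0 term is essential
```

The residual of the truncated series without the ##k=0## term is exactly the mean of ##f##, which no ##\cos(n\pi x/L)## mode with ##n\ge 1## can supply.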

[Moderator's note: moved from a homework forum.]
 
One case that I have encountered is with the differential equation for ## H ## in magnetostatics for the steady state problem: ## \nabla \times H =J_{conductors} ##. The solution to this is basically the Biot-Savart formula, but this solution misses the homogeneous solution from the magnetic poles.

In solving the problem in an alternative manner, using ## B=\mu_o (H +M) ## and taking the divergence of both sides, you get ## \nabla \cdot H=-\nabla \cdot M ##. This has an integral solution for ## H ## with the inverse-square law with ## \rho_m=-\nabla \cdot M ##, which is the solution from the poles that we needed above, but this time the homogeneous solution from the currents in the conductors is missing.

I don't know that there is a good way to determine in advance whether you need to include a homogeneous solution. In this case though, it really can make for some puzzling mathematics, if one isn't heads-up enough to spot what is missing.
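As a sketch of how the two partial derivations above fit together (standard magnetostatics, not taken verbatim from the post): the full field is the sum of the curl-source and divergence-source integrals, and each derivation recovers one of them while treating the other as the "missing" homogeneous part,

$$ H(\vec r) = \frac{1}{4\pi}\int \frac{J(\vec r\,')\times(\vec r-\vec r\,')}{|\vec r-\vec r\,'|^3}\,dV' \;+\; \frac{1}{4\pi}\int \frac{\rho_m(\vec r\,')\,(\vec r-\vec r\,')}{|\vec r-\vec r\,'|^3}\,dV', \qquad \rho_m = -\nabla\cdot M. $$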
 
Consider ##\nabla^2 u = 0## on ##(0,L) \times (0,H)## subject to
$$\begin{array}{ll}
\alpha_0 u + \beta_0 u_x = 0 & x = 0 \\
\alpha_1 u + \beta_1 u_x = 0 & x = L \\
u = f(x) & y = 0 \\
u = 0 & y = H
\end{array}$$
where ##\alpha_i^2 + \beta_i^2 = 1##. Then there exists a sequence of eigenvalues ##\lambda_n \in \mathbb{R}## such that
$$X_n'' - \lambda_n X_n = 0 \quad\text{subject to}\quad
\begin{array}{l}
\alpha_0 X_n(0) + \beta_0 X_n'(0) = 0, \\
\alpha_1 X_n(L) + \beta_1 X_n'(L) = 0,
\end{array}$$
has a non-trivial solution. The condition for zero to be one of these eigenvalues is
$$\alpha_1 \beta_0 - \alpha_0(\beta_1 + \alpha_1 L) = 0.$$
 
