#1
llarsen
I have a question which has perplexed me for some time, and I thought someone here might have insight that could prove useful. My research involves a generalization of first order partial differential equations. The simplest case can be defined in the following manner: Let V be an arbitrary vector field which may be written as:
[tex]V = a_1 \frac{\partial}{\partial x_1} + a_2 \frac{\partial}{\partial x_2} + ... + a_n \frac{\partial}{\partial x_n}[/tex]
where the coefficients [tex]a_i[/tex] are functions of [tex]x_1 , ... , x_n[/tex]. Find a nontrivial function f such that:
[tex] V(f) = h(f) [/tex] where [tex] h(f) [/tex] is an arbitrary function of f. In expanded form the equation becomes:
[tex]a_1 \frac{\partial f}{\partial x_1} + a_2 \frac{\partial f}{\partial x_2} + ... + a_n \frac{\partial f}{\partial x_n} = h(f)[/tex]
This is a generalization of a partial differential equation since [tex] h(f) [/tex] is not a prescribed function, but rather an arbitrary function of f. If one prescribes the function (say [tex]h(f)=1[/tex]) then this becomes a partial differential equation which can be solved numerically when a solution exists. Note that the solution set for [tex]V(f)=h(f)[/tex] should be larger than in the case where [tex]h(f)[/tex] is explicitly prescribed (i.e., there is more flexibility since h(f) is arbitrary).
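To make this concrete, here is a quick symbolic sanity check with sympy. The Euler field [tex]V = x \frac{\partial}{\partial x} + y \frac{\partial}{\partial y}[/tex] and the function f = xy are my own hypothetical example (not drawn from any particular application); for them one finds [tex]V(f) = 2f[/tex], i.e. a solution with [tex]h(f) = 2f[/tex]:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x*y

# Euler vector field V = x d/dx + y d/dy (a hypothetical example choice)
Vf = x*sp.diff(f, x) + y*sp.diff(f, y)

# V(f) = 2xy = 2f, so f solves V(f) = h(f) with h(f) = 2f
assert sp.simplify(Vf - 2*f) == 0
```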
The equation [tex]V(f)=h(f)[/tex] proves to be an unwieldy form of the equation to work with since [tex]h(f)[/tex] is not prescribed. In order to make this more amenable to numerical analysis, I apply an exterior derivative to get [tex]dV(f) = dh(f) = \frac{\partial h}{\partial f} df[/tex]. In order to eliminate the troublesome arbitrary function [tex]h(f)[/tex], one simply takes the wedge product with df to get [tex]dV(f) \wedge df = h_f \, df \wedge df \equiv 0[/tex]. Note that since this is identically 0 (because [tex]df \wedge df \equiv 0[/tex]), each component will be zero. This leads to a set of equations by extracting the individual components (each of which equals 0) from:
[tex]dV(f) \wedge df \equiv 0 [/tex]
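As a check of the wedge-product identity, in two variables the two-form [tex]dV(f) \wedge df[/tex] has the single component [tex]\partial_x (V(f)) \, \partial_y f - \partial_y (V(f)) \, \partial_x f[/tex], the Jacobian determinant of the pair [tex](V(f), f)[/tex]. Using the same hypothetical example as above (Euler field, f = xy), it does vanish identically:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x*y
Vf = x*sp.diff(f, x) + y*sp.diff(f, y)  # = 2*x*y for this example

# In 2D the two-form d(Vf) ^ df has one component: the Jacobian
# determinant of (Vf, f) with respect to (x, y)
component = sp.diff(Vf, x)*sp.diff(f, y) - sp.diff(Vf, y)*sp.diff(f, x)
assert sp.simplify(component) == 0  # vanishes identically, as expected
```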
However, while the partial differential equation [tex]V(f)=1[/tex] is more restrictive than [tex]V(f)=h(f)[/tex], [tex]V(f)=1[/tex] produces a single equation, while [tex]dV(f) \wedge df \equiv 0[/tex] produces [tex]\binom{n}{2} = \frac{n!}{2!(n-2)!}[/tex] equations (one for each component of the two-form [tex]dV(f) \wedge df[/tex]). On the surface this would make it appear that [tex]dV(f) \wedge df \equiv 0[/tex] is more restrictive than [tex]V(f)=1[/tex] throughout the domain, when in actuality it should be less restrictive. This seems to suggest that there is some redundancy built into the equation [tex]dV(f) \wedge df \equiv 0[/tex]. I am curious if anyone has insight into the apparent contradiction, or a good explanation for the redundancy.
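The component count can be checked symbolically as well. Below is a sketch in n = 3 variables; the Euler field [tex]V = \sum_i x_i \frac{\partial}{\partial x_i}[/tex] and the test function [tex]f = x_1 x_2 x_3[/tex] (which satisfies [tex]V(f) = 3f[/tex]) are hypothetical choices of mine. There is one scalar equation per index pair i < j, giving [tex]\binom{n}{2}[/tex] equations, and all of them vanish for this known solution:

```python
from itertools import combinations
import sympy as sp

n = 3
xs = sp.symbols(f'x1:{n+1}')
f = xs[0]*xs[1]*xs[2]                     # satisfies V(f) = 3f for the field below
Vf = sum(xi*sp.diff(f, xi) for xi in xs)  # V = sum_i x_i d/dx_i (hypothetical)

# d(Vf) ^ df has one scalar component per index pair i < j,
# i.e. n!/(2!(n-2)!) = C(n,2) equations in total
pairs = list(combinations(range(n), 2))
eqs = [sp.diff(Vf, xs[i])*sp.diff(f, xs[j]) - sp.diff(Vf, xs[j])*sp.diff(f, xs[i])
       for i, j in pairs]

assert len(pairs) == sp.binomial(n, 2)        # 3 components when n = 3
assert all(sp.simplify(e) == 0 for e in eqs)  # all vanish for this solution
```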
I understand that [tex]dV(f) \wedge df \equiv 0[/tex] includes second-derivative terms, which means that one has more flexibility to choose values on the boundary, but it seems to me that the equation shouldn't be more restrictive throughout the domain, as it appears to be based on the number of equations that must be satisfied in each case.
Incidentally, I have not seen any references to people numerically solving problems of the form [tex] V(f) = h(f) [/tex] or generalizations such as:
[tex]V(f) = h_1 (f,g)[/tex]
[tex]V(g) = h_2 (f,g)[/tex]
where [tex]h_i[/tex] are arbitrary functions. If anyone knows of numerical methods for solving such problems, I would be very interested in any references to research or papers on the topic.
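While I know of no numerical methods for this, candidate solutions of such coupled systems can at least be verified symbolically. As a hypothetical example of my own: for the Euler field [tex]V = x \frac{\partial}{\partial x} + y \frac{\partial}{\partial y}[/tex], the pair f = xy, g = x/y satisfies the system with [tex]h_1(f,g) = 2f[/tex] and [tex]h_2(f,g) = 0[/tex]:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

def V(u):
    # Euler field V = x d/dx + y d/dy (hypothetical example)
    return x*sp.diff(u, x) + y*sp.diff(u, y)

f, g = x*y, x/y
assert sp.simplify(V(f) - 2*f) == 0  # h1(f, g) = 2 f
assert sp.simplify(V(g)) == 0        # h2(f, g) = 0
```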