How to Solve the Fourier Integral in Eq. (27) Involving Position Vectors?

In summary, the thread concerns a Fourier-domain integral over two two-dimensional position vectors that the original poster found in an article and could not evaluate. Another member suggests a method based on rewriting the integrand, changing variables, and completing the square, and the conversation ends with a discussion of the messy algebra needed to finish the calculation.
  • #1
David932
[Attached image: Eq. (27), the integral in question]


Here ##\rho_1## and ##\rho_2## are two-dimensional position vectors and ##\boldsymbol{\kappa}## is a two-dimensional vector in the Fourier domain. I encountered the above Eq. (27) in an article, and the author claims that, after integration, the right-hand side gives the following result:
[Attached image: the claimed result of the integration, which contains a factor ##Sign(i,j)##]

I tried to solve this integral but couldn't get it right. Please, can anyone here guide me on how to get this result from Eq. (27)?

This integral has been annoying me for a few weeks!
Thank you so much
 
  • #2
If it weren't for that factor ##Sign(i,j)## in the result I would have thought that we could treat ##\sigma_i##, ##\sigma_j## and ##\delta_{ij}## as arbitrary constants in (27) and that you were simply trying to find the Fourier transform of a multivariate Gaussian (which I would do by diagonalizing the quadratic form to get a bunch of 1-D Fourier transforms of Gaussians, which can be performed many ways). But it seems like something else is going on here.
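For what it's worth, here is a minimal numerical sketch of that diagonalization idea for a plain 2-D Gaussian (my own illustration, assuming numpy is available; the matrix and the wave vector below are arbitrary test values, not the parameters of your problem):

```python
import numpy as np

# Sketch of the "diagonalize the quadratic form" idea for
#   F(k) = integral d^2x exp(-i 2 pi k.x - x^T A x):
# in the eigenbasis of A the quadratic form separates, so the 2-D transform
# factors into a product of 1-D Gaussian transforms.
rng = np.random.default_rng(1)
M = rng.normal(size=(2, 2))
A = M @ M.T + np.eye(2)          # arbitrary symmetric positive-definite quadratic form
k = np.array([0.3, -0.2])        # arbitrary 2-D wave vector

lam, P = np.linalg.eigh(A)       # A = P diag(lam) P^T
kp = P.T @ k                     # wave vector in the rotated (eigen) coordinates

# 1-D building block: integral exp(-i 2 pi q u - lam u^2) du = sqrt(pi/lam) exp(-pi^2 q^2 / lam)
factored = np.prod(np.sqrt(np.pi / lam) * np.exp(-np.pi**2 * kp**2 / lam))

# Same result written directly in matrix form: pi |A|^(-1/2) exp(-pi^2 k^T A^{-1} k)
direct = np.pi / np.sqrt(np.linalg.det(A)) * np.exp(-np.pi**2 * k @ np.linalg.inv(A) @ k)

print(factored, direct)          # agree up to rounding
```

The point is just that, in the eigenbasis of the quadratic form, the multivariate transform factors into 1-D Gaussian transforms that can each be done with the standard formula.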

So I think we need more information. What do ##\sigma_i##, ##\sigma_j## and ##\delta_{ij}## stand for, and what is ##Sign(i,j)##?

Jason
 
  • #3
Hi, thanks for the reply. The parameters ##\sigma_i## and ##\sigma_j## correspond to the spread of the spectral density, and ##\delta_{ij}## is the width of the correlation between the ##i## and ##j## components of a field. ##Sign(i,j) = 1## when ##i = j## and ##Sign(i,j) = -1## when ##i \neq j##.
 

  • #4
Okay. I think I get it now. We can treat the parameters as constants, but the final simplifications can take into account that the answer will depend on whether ##i=j## or ##i\neq j##.

What part are you getting stuck on? Can you do the integrals and are just struggling with the algebra to simplify the expression, or are you having trouble doing the integrals? I've derived a simple expression for the general integral, but have not done the messy algebra to see if the final result agrees with the posted answer.

jason
 
  • #5
Hi Jason,

First, thank you so much for the reply.
I am struggling with doing the integrals. I tried and tried but couldn't get the result.
 
  • #6
Okay. By the way, the "simple" formula I thought I had was wrong. The correct way involves messy algebra, but it is conceptually quite simple.
The basic form of your integral is:
$$
I = \int_{-\infty}^{\infty} d^2\mathbf{r}_1 \int_{-\infty}^{\infty} d^2\mathbf{r}_2\, e^{-i2\pi\mathbf{k \cdot }(\mathbf{r}_1-\mathbf{r}_2)} \, e^{-\mathbf{r}_1 \cdot \mathbf{r}_1 /a_1^2}\,e^{-\mathbf{r}_2 \cdot \mathbf{r}_2 /a_2^2}\, e^{-2 b \mathbf{r}_2 \cdot \mathbf{r}_1 }
$$
We can re-write this,
$$
I = \int_{-\infty}^{\infty} d^2\mathbf{r}_1\, e^{-i2\pi\mathbf{k \cdot } \mathbf{r}_1}\, e^{-\mathbf{r}_1 \cdot \mathbf{r}_1 /a_1^2} \, J(\mathbf{r}_1)
$$
where,
$$
\begin{eqnarray}
J(\mathbf{r}_1) & = & \int_{-\infty}^{\infty} d^2\mathbf{r}_2\, e^{i2\pi\mathbf{k \cdot} \mathbf{r}_2} \,e^{-\mathbf{r}_2 \cdot \mathbf{r}_2 /a_2^2}\, e^{-2 b \mathbf{r}_2 \cdot \mathbf{r}_1 }
\end{eqnarray}
$$
Now change variables using ##\mathbf{y} = \mathbf{r}_2/a_2## to get
$$
\begin{eqnarray}
J(\mathbf{r}_1) & = & a_2^2 \int_{-\infty}^{\infty} d^2\mathbf{y} \,e^{i2\pi a_2 \mathbf{k \cdot} \mathbf{y}} \,e^{-\mathbf{y} \cdot \mathbf{y}}\, e^{-2 b a_2 \mathbf{r}_1 \cdot \mathbf{y} } \\
& = & a_2^2\int_{-\infty}^{\infty} d^2\mathbf{y} \,e^f
\end{eqnarray}
$$
where all the mess is in ##f##. Rewriting,
$$
\begin{eqnarray}
f & = & i2\pi a_2 \mathbf{k \cdot} \mathbf{y} -\mathbf{y} \cdot \mathbf{y} -2 b a_2 \mathbf{r}_1 \cdot \mathbf{y} \\
& \equiv & \mathbf{s \cdot} \mathbf{y} -\mathbf{y} \cdot \mathbf{y} .
\end{eqnarray}
$$
where the above defines ##\mathbf{s}=i2\pi a_2 \mathbf{k} -2 b a_2 \mathbf{r}_1##. Now we want to find a new integration variable ## \mathbf{z} = \mathbf{y} - \mathbf{w}## for a constant ##\mathbf{w}## that makes the linear term above go away. We look at ##f##,
\begin{eqnarray}
f & = & \mathbf{s \cdot} \mathbf{y} -\mathbf{y} \cdot \mathbf{y} \\
& = & \mathbf{s \cdot} \mathbf{z} + \mathbf{s \cdot} \mathbf{w} - \mathbf{z} \cdot \mathbf{z} - \mathbf{w} \cdot \mathbf{w} - 2 \mathbf{w} \cdot \mathbf{z}
\end{eqnarray}
so if ##\mathbf{w} = \mathbf{s}/2## we get,
\begin{eqnarray}
f & = & \mathbf{s \cdot} \mathbf{s} /2- \mathbf{z} \cdot \mathbf{z} - \mathbf{s} \cdot \mathbf{s}/4 \\
& = & \mathbf{s \cdot} \mathbf{s} /4- \mathbf{z} \cdot \mathbf{z}
\end{eqnarray}
and we have,
\begin{eqnarray}
J(\mathbf{r}_1) & = & a_2^2 e^{ \mathbf{s \cdot} \mathbf{s} /4} \int_{-\infty}^{\infty} d^2\mathbf{z} \,e^{-\mathbf{z} \cdot \mathbf{z}} \\
& = & \pi \, a_2^2 e^{ \mathbf{s \cdot} \mathbf{s} /4}
\end{eqnarray}
Now, ##\mathbf{s \cdot} \mathbf{s} /4 = \zeta_1 \mathbf{k \cdot} \mathbf{k} + \zeta_2 \mathbf{r}_1 \cdot \mathbf{r}_1 + i \zeta_3 \mathbf{k \cdot} \mathbf{r}_1##; doing the algebra will give the values of the real constants ##\zeta_1##, ##\zeta_2##, and ##\zeta_3##.
So our original integral becomes,
$$
I = \pi \, a_2^2 e^{\zeta_1 \mathbf{k \cdot} \mathbf{k}} \int_{-\infty}^{\infty} d^2\mathbf{r}_1 \, e^{-i(2\pi-\zeta_3)\mathbf{k \cdot } \mathbf{r}_1}\, e^{-\mathbf{r}_1 \cdot \mathbf{r}_1 (1/a_1^2 - \zeta_2)}
$$
This can be integrated using the same trick as above. The algebra will be really messy, but it is doable. Hope this helps!
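If grinding out ##\zeta_1##, ##\zeta_2## and ##\zeta_3## by hand is the painful part, a short sympy sketch (my addition, treating the dot products as ordinary scalar symbols) does the bookkeeping:

```python
import sympy as sp

a2, b = sp.symbols('a_2 b', positive=True)
# Scalar stand-ins for the dot products k.k, r1.r1 and k.r1
kdotk, rdotr, kdotr = sp.symbols('kdotk rdotr kdotr', real=True)

# s.s with s = i 2 pi a_2 k - 2 b a_2 r_1, expanded in terms of the three dot products
s_dot_s = (sp.I*2*sp.pi*a2)**2 * kdotk \
          + (2*b*a2)**2 * rdotr \
          + 2*(sp.I*2*sp.pi*a2)*(-2*b*a2) * kdotr

expr = sp.expand(s_dot_s / 4)
zeta1 = expr.coeff(kdotk)            # coefficient of k.k     -> -pi^2 a_2^2
zeta2 = expr.coeff(rdotr)            # coefficient of r1.r1   ->  b^2 a_2^2
zeta3 = expr.coeff(kdotr) / sp.I     # coefficient of i k.r1  -> -2 pi b a_2^2
print(zeta1, zeta2, zeta3)
```

In particular ##\zeta_2 = b^2 a_2^2 > 0##, so the remaining Gaussian in ##\mathbf{r}_1## carries the combination ##1/a_1^2 - \zeta_2##, which is positive whenever the original double integral converges.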

Jason
 
  • #7
I have another way that yields a simple result that may be easier to deal with. I will use matrix notation, so dot products will be ##\mathbf{k \cdot x} = \mathbf{k}^T\mathbf{x}## where the T superscript represents transpose.

Let ##\mathbf{r}## be a 4x1 vector that contains ##\rho_1## and ##\rho_2## stacked on top of each other, and let ##\mathbf{k}## be the 2x1 wave vector. Then the integral is of the form
\begin{eqnarray}
I & = & \int_{-\infty}^{\infty} d^4 \mathbf{r} \, e^{i 2 \pi \mathbf{k}^T B \mathbf{r}}\, e^{-\mathbf{r}^T A \mathbf{r}}
\end{eqnarray}
where ##B## is a 2x4 matrix and ##A## is a positive-definite symmetric 4x4 matrix. For your problem I believe these have a very special form. If ##I_2## is the 2x2 identity matrix, then in block-matrix notation I think your problem can be represented as
\begin{eqnarray}
B & = & \begin{pmatrix} I_2 & -I_2 \end{pmatrix} \\
A & = & \begin{pmatrix} a I_2 & b I_2 \\ b I_2 & c I_2 \end{pmatrix}
\end{eqnarray}
The results below do not rely on these forms, but they are probably helpful in doing the algebra to get the final answer in a form you want.

Anyway, now we use the fact that ##A## is positive definite to do an eigen-decomposition, ##A = P \Lambda P^T##, where ##\Lambda## is the diagonal matrix of eigenvalues and ##P## is an orthogonal matrix of eigenvectors. Then the integral can be written,
\begin{eqnarray}
I & = & \int_{-\infty}^{\infty} d^4 \mathbf{r} \, e^{i 2 \pi \mathbf{k}^T B \mathbf{r}}\, e^{-\mathbf{r}^T P \Lambda P^T \mathbf{r}}
\end{eqnarray}
Now we do a change of variable to ##\mathbf{y} = \Lambda^{1/2} P^T \mathbf{r}##. Note that the Jacobian is ##\left| \Lambda^{1/2} P^T\right|=\left| \Lambda\right|^{1/2} =\left| A \right|^{1/2}##, where ##| \cdot |## represents the determinant. So we have
\begin{eqnarray}
I & = & \left| A \right|^{-1/2}\int_{-\infty}^{\infty} d^4 \mathbf{y} \, e^{i 2 \pi \mathbf{k}^T B P \Lambda^{-1/2} \mathbf{y}}\, e^{-\mathbf{y}^T \mathbf{y}} \\
& = & \left| A \right|^{-1/2}\int_{-\infty}^{\infty} d^4 \mathbf{y} \, e^{\mathbf{s}^T\mathbf{y} - \mathbf{y}^T \mathbf{y}}
\end{eqnarray}
where ##\mathbf{s} = i 2 \pi \Lambda^{-1/2} P^T B^T \mathbf{k}##. Now we do the same trick as before to get rid of the term linear in ##\mathbf{y}## (by letting ##\mathbf{z}=\mathbf{y}-\mathbf{s}/2##) and end up with,
\begin{eqnarray}
I & = & \left| A \right|^{-1/2}e^{\mathbf{s}^T\mathbf{s}/4} \int_{-\infty}^{\infty} d^4 \mathbf{z} \, e^{ - \mathbf{z}^T \mathbf{z}} \\
& = & \pi^2 \left| A \right|^{-1/2}e^{\mathbf{s}^T\mathbf{s}/4}
\end{eqnarray}
Now, ##\mathbf{s}^T\mathbf{s} = -4 \pi^2 \mathbf{k}^T B P \Lambda^{-1/2}\Lambda^{-1/2} P^T B^T \mathbf{k}=-4 \pi^2 \mathbf{k}^T B A^{-1} B^T \mathbf{k}##. So we finally have
\begin{eqnarray}
I = \pi^2 \left| A \right|^{-1/2}e^{-\pi^2 \mathbf{k}^T B A^{-1} B^T \mathbf{k}}.
\end{eqnarray}
The fact that ##A## and ##B## have such special forms will hopefully make this expression reducible. For example, the formulas in
https://en.wikipedia.org/wiki/Block_matrix
can be used to find the inverse of ##A## quite easily, and I suspect the determinant of ##A## is also not so bad.
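If you want to convince yourself of this closed form numerically before wading into the block algebra, here is a minimal sketch of the same identity in the 2-D case (assuming numpy; the matrices and the wave vector are random test values, and the 4-D case works the same way with ##\pi^2## in place of ##\pi##):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random 2x2 test case; the 4x4 problem in the post has the same algebra,
# with pi^2 in place of pi in the prefactor.
M = rng.normal(size=(2, 2))
A = M @ M.T + 2*np.eye(2)                 # symmetric positive definite
B = rng.normal(size=(2, 2))
k = np.array([0.25, -0.4])

# Brute-force grid evaluation of I = integral d^2 r exp(i 2 pi k^T B r - r^T A r)
L, n = 8.0, 801
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
R = np.stack([X, Y])                                # shape (2, n, n)
quad = np.einsum('i...,ij,j...->...', R, A, R)      # r^T A r on the grid
lin = np.einsum('i,ij,j...->...', k, B, R)          # k^T B r on the grid
I_num = np.sum(np.exp(1j*2*np.pi*lin - quad)) * dx * dx

# Closed form: I = pi^(n/2) |A|^(-1/2) exp(-pi^2 k^T B A^{-1} B^T k), here n = 2
I_closed = np.pi / np.sqrt(np.linalg.det(A)) * np.exp(
    -np.pi**2 * k @ B @ np.linalg.inv(A) @ B.T @ k)

print(I_num, I_closed)                    # should agree to several digits
```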

Good luck,

Jason
 
  • #8
I found a nice formula for the determinant. Start from the equation between (1) and (2) of
https://en.wikipedia.org/wiki/Woodbury_matrix_identity
\begin{eqnarray}
\begin{pmatrix} A & U \\ V & C \end{pmatrix} & = & \begin{pmatrix} I & UC^{-1} \\ 0 & I \end{pmatrix} \begin{pmatrix} A-UC^{-1}V & 0 \\ 0 & C \end{pmatrix} \begin{pmatrix} I & 0 \\ C^{-1}V & I \end{pmatrix}
\end{eqnarray}
Taking the determinant yields
\begin{eqnarray}
\left| \begin{pmatrix} A & U \\ V & C \end{pmatrix} \right| & = & \left| A-UC^{-1}V \right| \left| C \right|
\end{eqnarray}
In your case all of the blocks are proportional to ##I_2##, so this should be easy to compute.
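A quick numerical check of that determinant identity for blocks proportional to ##I_2## (a sketch assuming numpy; the values of ##a##, ##b## and ##c## are arbitrary):

```python
import numpy as np

a, b, c = 2.0, 0.7, 1.5             # arbitrary values with a*c > b^2 (positive definite)
I2 = np.eye(2)

# Full 4x4 matrix in block form [[a I2, b I2], [b I2, c I2]]
M = np.block([[a*I2, b*I2], [b*I2, c*I2]])

# Block formula with A = a I2, U = V = b I2, C = c I2:
#   |M| = |A - U C^{-1} V| |C| = (a - b^2/c)^2 * c^2 = (a c - b^2)^2
det_block = (a*c - b**2)**2

print(np.linalg.det(M), det_block)  # should match
```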

jason
 
  • #9
Dear Jason,

Thank you so much for the great help and for giving your time to guide me toward a solution of this integral. I am trying to solve it with the approaches you mentioned, and I hope I can find a reasonable solution. I also tried to solve the integral using the formula ##\int_{-\infty}^{\infty} e^{-bx^2} e^{iax}\,dx = \sqrt{\pi/b}\, e^{-a^2/(4b)}##, but I couldn't get the required result.
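The 1-D formula itself checks out numerically, for example with this little sketch (assuming numpy; the values of ##a## and ##b## are arbitrary):

```python
import numpy as np

# Check of: integral exp(-b x^2) exp(i a x) dx = sqrt(pi/b) exp(-a^2/(4 b))
a, b = 1.7, 0.8                          # arbitrary test values, b > 0
x = np.linspace(-30, 30, 20001)
dx = x[1] - x[0]
numeric = np.sum(np.exp(-b*x**2 + 1j*a*x)) * dx
closed = np.sqrt(np.pi/b) * np.exp(-a**2/(4*b))
print(numeric, closed)                   # should agree to many digits
```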
 
  • #10
Did you try the approach I gave in post 7? When I plug the parameters of your problem into equation 20, I get a result that is almost the one you posted; instead of ##Sign(i,j)## I get ##\pi^2##.

In retrospect, I do not understand how the sign factor could possibly appear. For the special case ##\kappa=0## you have an integral of a Gaussian, which must be positive. Likewise, it seems inevitable that there should be a factor of ##\pi^2##. So I do not see how your posted solution can be correct.
 
  • #11
Hi Jason,

I tried your approaches, but I find them a bit difficult to understand fully. Maybe I don't have enough background, and a more 'mechanical' or brute-force approach would be easier for me to follow. I will try the approaches you mentioned once again and see if I can get the answer with them.

Thank you so much for the huge help.
Regards
 

Related to How to Solve the Fourier Integral in Eq. (27) Involving Position Vectors?

1. What is a Fourier Integral?

A Fourier Integral is a mathematical tool used to represent a function as a superposition (an integral) of sinusoidal functions with different frequencies. It is used to analyze and solve differential equations and other complex problems in physics, engineering, and mathematics.

2. How can I solve Eq. (27) using a Fourier Integral?

To solve Eq. (27) using a Fourier Integral, you can follow the steps of the Fourier Transform method. First, apply the Fourier transform to both sides of the equation, then use properties of the Fourier transform to simplify the equation. Finally, apply the inverse Fourier transform to obtain the solution.
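As a generic illustration of that transform-simplify-invert workflow (a hypothetical example, not tied to the specific integral in Eq. (27)), here is a minimal sketch, assuming numpy, that solves u'' - u = f on a periodic grid:

```python
import numpy as np

# Generic illustration (hypothetical example, not the integral from this thread):
# solve u''(x) - u(x) = f(x) on a periodic grid. Transforming both sides gives
# (-(2 pi xi)^2 - 1) u_hat = f_hat, so u_hat = f_hat / (-(2 pi xi)^2 - 1),
# and u follows from the inverse transform.
L, n = 40.0, 1024
x = np.linspace(-L/2, L/2, n, endpoint=False)
f = np.exp(-x**2)                        # an arbitrary smooth right-hand side

xi = np.fft.fftfreq(n, d=x[1] - x[0])    # ordinary (not angular) frequencies
u_hat = np.fft.fft(f) / (-(2*np.pi*xi)**2 - 1.0)
u = np.real(np.fft.ifft(u_hat))

# Residual check: u'' - u should reproduce f (up to rounding error)
u_xx = np.real(np.fft.ifft(-(2*np.pi*xi)**2 * np.fft.fft(u)))
print(np.max(np.abs(u_xx - u - f)))      # close to machine precision
```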

3. What are the benefits of using a Fourier Integral to solve equations?

A Fourier Integral allows for the representation of complex functions in terms of simpler sinusoidal functions, making it easier to analyze and solve equations. It also has applications in signal processing, image analysis, and other fields where complex data can be represented and manipulated using Fourier transforms.

4. Can a Fourier Integral be used to solve any type of equation?

A Fourier Integral is not suitable for solving all types of equations. It is most commonly used for solving linear differential equations with constant coefficients. It may also be used for solving certain types of integral equations and boundary value problems.

5. Are there any limitations to using a Fourier Integral?

The Fourier Integral method may not always provide an exact solution to an equation, as it is based on approximations and assumptions. It may also be difficult to apply in cases where the equation is not well-defined or the function is not periodic. Additionally, the Fourier Integral method may require advanced mathematical knowledge and techniques to solve certain problems.
