Successive iteration problem in quantum dissipation article

In summary, the conversation is about an equation for the bounce action in a paper on quantum dissipation, which is to be solved by successive iteration starting from a zero-order approximation. The goal is to calculate the coefficients R_n in the equation, which decay exponentially fast with increasing n. The approach is to make a trial function and substitute it into the equation to obtain an approximate equation for R_n. There is some confusion about how to implement this method in a computer program, particularly with regard to the second sum in the equation.
  • #1
moso
Hey guys, I am trying to figure out how to replicate the following from an article, but I cannot understand their notation:

The main points are:

The bounce action can be written as the equation

$$\left( n^2 \Theta^2 + 2\alpha n\Theta -1\right)R_n = 2 \sum_{m=1}^\infty R_{n+m}R_m + \sum_{m=1}^n R_{n-m}R_m,$$

where we choose a value of $\Theta$ between 0 and 1 and $\alpha$ between 0 and 10. The paper states that it does successive iterations starting with a zero-order approximation $R_n \propto \exp(-n)$. It is Section VII in this paper (https://journals.aps.org/prb/abstract/10.1103/PhysRevB.36.1931), pages 35-36.

The problem is then to calculate the coefficients $R_n$, which are stated to decay exponentially fast with increasing n. My problem with this is that I do not see how the iteration occurs: you normally have something of the form $x_i = f(x_{i-1})$ that you can iterate, but here you have a sum to infinity.

In this example I cannot seem to identify the correct procedure to find the coefficients $R_n$ in the equation using successive iteration. As a check, the article states that for $\alpha=0.1$ and $\Theta=0.3$ the sum of the coefficients $R_n$ should be 8.44.

If anyone can see the trick or procedure to implement this numerically, I would very much appreciate it.
 
  • #2
So this is a recurrence relation with an infinite number of terms, and therefore can't be solved with ordinary methods. The idea is probably to make an assumption that after some ##n\in\mathbb{N}##, the ##R_n## are practically zero and can be ignored. Then you get a recurrence equation of finite order and it can be solved. Finally you substitute your solution to the original equation to see how good the approximation was.
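For example, a rough MATLAB sketch of what that finite system could look like (the cutoff ##N##, the choice ##R_0 = 1## taken from the ##e^{-n}## zero-order approximation in post #1, and the use of fsolve from the Optimization Toolbox are all my own assumptions here, not anything from the paper):

function truncate_sketch
    % Hypothetical truncation: assume R_n = 0 for n > N and solve the remaining
    % N coupled equations for R_1..R_N with a nonlinear solver.
    alpha = 0.1;  Theta = 0.3;  N = 15;
    R0 = 1;                                 % assumed value of R_0
    Rguess = exp(-(1:N))';                  % zero-order trial from the paper
    Rsol = fsolve(@(R) residual(R, R0, alpha, Theta, N), Rguess);
    disp(sum(Rsol))                         % post #1 quotes 8.44 for these parameters
end

function F = residual(R, R0, alpha, Theta, N)
    F = zeros(N,1);
    for n = 1:N
        s1 = 0;                             % truncated "infinite" sum: R_{n+m} = 0 past N
        for m = 1:(N-n),  s1 = s1 + 2*R(n+m)*R(m);  end
        s2 = R0*R(n);                       % m = n term of the second sum brings in R_0
        for m = 1:(n-1),  s2 = s2 + R(n-m)*R(m);  end
        F(n) = (n^2*Theta^2 + 2*alpha*n*Theta - 1)*R(n) - s1 - s2;
    end
end

Whether this converges to the physical solution rather than the trivial one ##R_n = 0## depends on the initial guess, so it's only meant to illustrate the structure of the truncated system.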
 
  • #3
So I set the upper limit of the infinite sum to a fixed number, let's say 10. Is the idea then to:
1. set the function F_n = f_n*exp(-n),
2. isolate the constant f_n as a function of the higher-order constants f_(n+1), f_(n+2), etc.,
3. insert it into F_(n+1),
4. isolate f_(n+1) as a function of f_(n+2), etc., and so on,
5. at F(N) obtain a number which can be used to calculate the remaining constants?

The problem with this idea is that when isolating I get huge expressions and sometimes more than one solution. Am I on the right track?
 
  • #4
If you start by an exponential trial function ##R_n = Ae^{-kn}## with ##A## and ##k## constants that you deduce by some means, you can then substitute that trial only in the sums on the right hand side of your equation while leaving the ##R_n## on the left side. Then you have an approximate equation for ##R_n##, and you can again substitute this on the RHS of the equation to get a higher order approximation for ##R_n##. In this approach the sums don't have to be cut at some ##n##, if you're able to calculate the limits of the sums.
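Written out as a formula, the iteration I have in mind is

##R_n^{(l+1)} = \frac{2\sum\limits_{m=1}^{\infty} R_{n+m}^{(l)}R_m^{(l)} + \sum\limits_{m=1}^{n} R_{n-m}^{(l)}R_m^{(l)}}{n^2 \Theta^2 + 2\alpha n\Theta -1},##

where the superscript ##(l)## counts the order of approximation and ##R_n^{(0)} = Ae^{-kn}## is the starting guess.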
 
Last edited:
  • #5
Okay, that might work. My only problem with that is that it gives us an equation for R_n, but the sums also require R_(n+1), R_(n+2), etc., and these could have different coefficients than R_n.

Do you mean that I should have only one A and one n constant, or one for every R_n (n = 1, 2, ...)?
 
  • #6
The initial assumption is that ##R_n = Ae^{-kn}## for all ##n\in\mathbb{N}##. Then ##R_{n+1}## is just ##Ae^{-k(n+1)}##. The only problem with this approach is that if there are many solutions, you can't be sure which one of them the iteration converges to.
 
  • #7
Okay, that clarified a lot. I will try your method... thank you.
 
  • #8
Sorry about the typo in post #4, it should say "##R_n = Ae^{-kn}## with ##A## and ##k## constants" instead of the "##A## and ##n## constants". I corrected the post by editing.
 
  • #9
So, just to clarify the method, as I seem to have some problems with it: if we choose A=1, k=1, our trial function is R_n = exp(-n). Then, in order to implement your suggestion, I have to choose a value for n. Choosing n=2 and calculating the sums gives us a single number, which cannot be put back into the equation, as it does not depend on n. How would you do it for generic n, when one of the sums depends on n?
 
  • #10
Now, if the trial is ##R_n = e^{-n}##, we get something like this:

##R_n = \frac{2\sum\limits_{m=1}^{\infty}R_{n+m}R_m + \sum\limits_{m=1}^{n}R_{n-m}R_m}{n^2 \Theta^2 + 2\alpha n\Theta -1} = \frac{2\sum\limits_{m=1}^{\infty}e^{-(n+m)}e^{-m} + \sum\limits_{m=1}^{n}e^{-(n-m)}e^{-m}}{n^2 \Theta^2 + 2\alpha n\Theta -1}##,

and now you probably need to use some kind of a geometric series formula to simplify this result so that it becomes easier to work with. Another way would be to write a computer program that calculates these and does the first sum only up to some finite but large value of ##m##. Note that the index ##m## is only a "dummy" index in the result and the expression is actually only a function of ##n##, ##\Theta## and ##\alpha##.
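If I did the algebra right, both sums can actually be done in closed form for this trial: the first one is a geometric series and the second one consists of ##n## equal terms,

##2\sum\limits_{m=1}^{\infty}e^{-(n+m)}e^{-m} = 2e^{-n}\sum\limits_{m=1}^{\infty}e^{-2m} = \frac{2e^{-2}}{1-e^{-2}}\,e^{-n}, \qquad \sum\limits_{m=1}^{n}e^{-(n-m)}e^{-m} = n\,e^{-n},##

so the first approximation is ##R_n \approx \dfrac{\left(\frac{2e^{-2}}{1-e^{-2}} + n\right)e^{-n}}{n^2 \Theta^2 + 2\alpha n\Theta -1}##.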
 
  • #11
I did understand that, and I am trying to write the computer program. My problem is with the second sum, which has the upper limit n. This limit changes all the time: for small n it is small, for large n it is large, and I cannot see how to implement this limit in, for example, MATLAB without specifying the value of n.
 
  • #12
The idea of the iteration is to produce a "sequence of sequences", ##R_{n}^{(0)}, R_{n}^{(1)}, R_{n}^{(2)}, \dots##, where each of the ##R_{n}^{(l)}## is a sequence with infinitely many elements and the ##l## denotes the order of approximation. If you write a numerical code that calculates this, you obviously can't calculate each of the infinite number of elements, you only calculate it up to some finite ##n##, one element at a time.
 
  • #13
Okay, but when you write the program, is the general idea then:

1. set the limit in the infinite sum to (let's say) 500 and the upper limit n in the second sum to 50, so from the first sum we get terms like R(n+1)R(1), ..., and from the second terms like R(n-1)R(1), ..., R(n-50)R(50),
2. you then get a new function R_n, which you use as your new input with the same sum limits,
3. repeat until it converges?

Is this right?
 
  • #14
Yes, that's the idea. I think the summation limit has to be smaller on each iteration, like ##m\leq 256## on first round, ##m\leq 128## on second round and so on. Then you can set the upper limit in the ##\sum\limits_{m=1}^{\infty}## sum to be ##n##, too.
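Roughly like this in MATLAB, just as a sketch of the bookkeeping (the starting cutoff, the number of rounds and keeping the whole history are arbitrary choices of mine, not anything from the paper):

alpha = 0.1;  Theta = 0.3;
M = 256;                          % highest index available at the start
R = exp(-(0:M));                  % R(j+1) holds R_j; zero-order trial, so R_0 = 1
history = {R};                    % the "sequence of sequences" R^(0), R^(1), ...

for l = 1:4                       % order of approximation
    M = M/2;                      % shrink the trustworthy index range each round
    Rnew = R;
    for n = 1:M
        s1 = 0;  s2 = 0;
        for m = 1:M,  s1 = s1 + 2*R(n+m+1)*R(m+1);  end   % truncated infinite sum
        for m = 1:n,  s2 = s2 + R(n-m+1)*R(m+1);    end   % second sum, upper limit n
        Rnew(n+1) = (s1 + s2)/(n^2*Theta^2 + 2*alpha*n*Theta - 1);
    end
    R = Rnew;
    history{end+1} = R;           % this is R^(l)
end

The halving guarantees that the ##R_{n+m}## needed in the first sum were all computed (or at least initialized from the trial) on the previous round.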
 
  • #15
hilbert2 said:
Yes, that's the idea. I think the summation limit has to be smaller on each iteration, like ##m\leq 256## on first round, ##m\leq 128## on second round and so on. Then you can set the upper limit in the ##\sum\limits_{m=1}^{\infty}## sum to be ##n##, too.

Perfect. That is just a hell of a calculation to perform, though: with m=256 on the first round, then 128, etc., we will end up with an expression that is super big after a few iterations.
 
  • #16
If you just want the final result as a sequence of numbers with something like 4 significant figures, you can continue the iteration until ##m\leq 16## and then it's not too long a list. If you want the result as an analytical expression, you can do the iteration with Mathematica.
 
  • #17
hilbert2 said:
If you just want the final result as a sequence of numbers with something like 4 significant figures, you can continue the iteration until ##m\leq 16## and then it's not too long a list. If you want the result as an analytical expression, you can do the iteration with Mathematica.

Yeah, but using MATLAB it takes me around 10 minutes to do 3 iterations with m=20 and n=10. I am also doing it by brute force, with a script like this:

clear all, clc
alpha = 1;
Theta = 0.1;
Nmax = 30;                 %highest index stored; R(j+1) holds R_j since MATLAB indexing starts at 1
Mcut = 20;                 %cutoff for the first ("infinite") sum
R = exp(-(0:Nmax));        %zero-order trial R_n = exp(-n), including R_0 = 1

for o = 1:4                %successive iterations
    Rnew = R;
    for n = 1:Nmax-Mcut    %R(n+i+1) must stay inside the stored range, so n goes up to 10
        p = zeros(1,Mcut);
        for i = 1:Mcut     %first sum
            p(i) = 2*R(n+i+1)*R(i+1);
        end
        q = sum(p);

        p1 = zeros(1,n);
        for i = 1:n        %second sum, upper limit n
            p1(i) = R(n-i+1)*R(i+1);
        end
        q1 = sum(p1);

        Rnew(n+1) = (q + q1)/(n^2*Theta^2 + 2*alpha*n*Theta - 1); %new coefficient from the equation
    end
    R = Rnew;              %define new sequence as old sequence for the next iteration
end

I think this script captures what we have discussed, but it just takes a long time. I am not sure how to do it in Mathematica.
 
  • #18
I tried calculating this both numerically and symbolically with Mathematica, and with ##\alpha = 0.1##, ##\Theta = 0.3## it didn't seem to converge properly with the initial trial ##R_{n}^{(0)} = e^{-n}##. It took a lot of time, as you noticed too, and finding a symbolic formula became too difficult already at the second iteration. It would probably be necessary to try several trial sequences ##R_{n}^{(0)} = Ae^{-kn}## with different values of ##A## and ##k## until you find one for which ##R_{n}^{(1)}## does not differ too much from ##R_{n}^{(0)}##.
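One crude way to do that search, as a sketch only (the grid ranges are arbitrary choices of mine; it uses the fact that for an exponential trial the first sum is a geometric series and the second sum is just ##n## equal terms, as in post #10):

% Hypothetical scan over trial parameters A and k: do one iteration in closed
% form and keep the (A,k) whose sequence changes least.
alpha = 0.1;  Theta = 0.3;  N = 40;  n = 1:N;
best = Inf;
for A = 0.5:0.25:5
    for k = 0.2:0.1:2
        Rtrial = A*exp(-k*n);                              % R_n^(0)
        s1 = 2*A^2*exp(-k*n)*exp(-2*k)/(1 - exp(-2*k));    % 2*sum_{m>=1} R_{n+m} R_m
        s2 = n.*A^2.*exp(-k*n);                            % sum_{m=1}^{n} R_{n-m} R_m
        Rone = (s1 + s2)./(n.^2*Theta^2 + 2*alpha*n*Theta - 1);   % R_n^(1)
        err = max(abs(Rone - Rtrial)./abs(Rtrial));        % relative change after one step
        if err < best,  best = err;  Abest = A;  kbest = k;  end
    end
end
fprintf('A = %.2f, k = %.2f, max relative change = %.3g\n', Abest, kbest, best)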
 
  • #19
Okay, it is just weird because in the article they say that they use 100 iterations.
 
  • #20
I'm not sure whether they set some really small upper limit for ##m## and ##n## or something.

One way to investigate the behavior of the sequence ##R_n## for large values of ##n## would be to ignore terms that become insignificant when ##n\rightarrow\infty##. Then, for example,

##\left( n^2 \Theta^2 + 2\alpha n\Theta -1\right) \approx n^2 \Theta^2##, and

##2 \sum_{m=1}^\infty R_{n+m}R_m + \sum_{m=1}^n R_{n-m}R_m \approx \sum_{m=1}^n R_{n-m}R_m##,

and the equation would become easier to solve. This is somewhat similar to how the Schrödinger equation of a harmonic oscillator is simplified by guessing that the ##\psi(x)## approaches zero as fast as ##e^{-kx^2}## when ##x\rightarrow\infty##.
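To see why the dropped terms are small, suppose ##R_m \approx Ce^{-km}## for large ##m##, as the exponential decay suggests. Then

##2\sum\limits_{m=1}^{\infty}R_{n+m}R_m \approx \frac{2C^2e^{-2k}}{1-e^{-2k}}\,e^{-kn}, \qquad \sum\limits_{m=1}^{n}R_{n-m}R_m \approx n\,C^2e^{-kn},##

so the first sum is smaller than the second by a factor of order ##1/n##, and likewise ##2\alpha n\Theta - 1## is small compared to ##n^2\Theta^2##. The large-##n## equation is then approximately ##n^2\Theta^2 R_n \approx \sum\limits_{m=1}^{n}R_{n-m}R_m##.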
 

