Convergence Problem for Forward and Backward Propagation

  • Thread starter aashish.v
In summary, the conversation discusses the recurrence J(n-1)+J(n+1)=2nJ(n) and how it behaves in the forward and backward directions. The poster is trying to prove that it diverges in the forward direction and converges in the backward direction, but is struggling to find a method. They mention the typical approach of looking for a solution of the form C·λ^n, but their λ turns out to be a function of n. They offer to scan and upload their work for further clarification.
  • #1
aashish.v
The recurrence is J(n-1)+J(n+1)=2nJ(n)

I need to prove that it diverges in the forward direction but converges in the backward direction.
I am unable to find any method; kindly suggest one.
 
  • #2
Are you sure that you typed the function correctly?
 
  • #3
Yes.

For the forward direction we can rewrite it this way:
J(n)=2*(n-1)*J(n-1)-J(n-2)

and for backward propagation:
J(n)=2*(n+1)*J(n+1)-J(n+2)
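The two directions can be compared numerically. Below is a minimal sketch (the function names are mine; the seed values assume the fact that this is the Bessel-function recurrence at x = 1, so J_0(1) and J_1(1) satisfy it, which the thread itself does not state):

```python
# Numerical sketch of the two directions of J(n-1) + J(n+1) = 2*n*J(n).
# This is the Bessel recurrence at x = 1; its decaying ("minimal")
# solution is J_n(1). Seeds below are J_0(1), J_1(1) to ten digits.

def forward(n_max):
    """Iterate J(n+1) = 2*n*J(n) - J(n-1) upward from n = 1."""
    j_prev, j_curr = 0.7651976866, 0.4400505857  # J_0(1), J_1(1)
    for n in range(1, n_max):
        j_prev, j_curr = j_curr, 2 * n * j_curr - j_prev
    return j_curr  # computed J(n_max)

def backward_ratio(n_start):
    """Iterate J(n-1) = 2*n*J(n) - J(n+1) downward from arbitrary
    seeds; the normalized result converges to the minimal solution
    (this is Miller's backward-recurrence algorithm)."""
    j_next, j_curr = 0.0, 1.0  # arbitrary seeds at n_start+1, n_start
    for n in range(n_start, 0, -1):
        j_next, j_curr = j_curr, 2 * n * j_curr - j_next
    return j_next / j_curr  # estimate of J(1)/J(0)

print(abs(forward(40)))    # huge: rounding errors blow up going forward
print(backward_ratio(40))  # ~0.5750809, i.e. J_1(1)/J_0(1): stable
```

Going forward, the tiny rounding error in the seeds is amplified at every step; going backward, the same recursion damps errors, so the ratio settles regardless of the (arbitrary) starting values.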
 
  • #5
aashish.v said:
The recurrence is J(n-1)+J(n+1)=2nJ(n)

I need to prove that it diverges in the forward direction but converges in the backward direction.
I am unable to find any method; kindly suggest one.

You need to show your work.

RGV
 
  • #6
Ray Vickson said:
You need to show your work.

RGV

The typical approach shown in textbooks is to look for a solution of the form
[itex]C \cdot λ^n[/itex]; then the size of λ tells you whether the recursion diverges. I have tried the same approach for this problem, but the λ I get turns out to be a function of n.

I can scan and upload my work if you wish.
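That λ comes out depending on n is actually expected here: the coefficient 2n grows without bound, so no fixed-λ solution exists. One standard workaround (sketched here as a suggestion, using the Perron–Kreuser idea, not the thread's own argument) is to solve the characteristic equation with n treated as a large parameter:

[tex]t^2 - 2n\,t + 1 = 0 \;\Rightarrow\; t_{\pm} = n \pm \sqrt{n^2 - 1}, \qquad t_{+} \sim 2n, \quad t_{-} \sim \frac{1}{2n}.[/tex]

So one solution grows roughly like [itex]\prod_{k\le n} 2k \sim 2^n\, n![/itex] while the other decays like [itex]1/(2^n\, n!)[/itex]. Iterating forward, any error component along the growing solution is amplified, which gives the divergence; iterating backward, the two solutions swap roles, so the recursion damps errors and converges.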
 

1. What is the convergence problem for forward and backward propagation?

The convergence problem for forward and backward propagation refers to the issue of the neural network not learning or improving its accuracy over time. This can happen due to various reasons such as incorrect network architecture, inappropriate learning rate, or insufficient training data.

2. How does the convergence problem affect the performance of a neural network?

The convergence problem can significantly impact the performance of a neural network as it prevents the network from learning and improving its accuracy. This can result in inaccurate predictions and poor performance on the given task.

3. What are some common causes of the convergence problem?

Some common causes of the convergence problem include using an incorrect network architecture, choosing an inappropriate learning rate, not normalizing the input data, and having insufficient training data. Other factors such as vanishing or exploding gradients can also contribute to this problem.

4. How can the convergence problem be solved?

The convergence problem can be solved by carefully selecting the network architecture and tuning the learning rate. It is also essential to normalize the input data and ensure that there is enough training data to learn from. Regularization techniques such as dropout and early stopping can also help prevent overfitting and improve convergence.
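To make the learning-rate point concrete, here is a toy sketch on a one-parameter quadratic loss (not any particular network; all names are illustrative):

```python
# Toy illustration of how the learning rate alone can decide convergence:
# gradient descent on f(w) = w**2, whose gradient is 2*w.
# Each update multiplies w by (1 - 2*lr), so the iteration converges
# exactly when |1 - 2*lr| < 1.

def descend(lr, steps=50, w=1.0):
    for _ in range(steps):
        w = w - lr * 2 * w  # standard gradient-descent update
    return w

w_good = descend(lr=0.1)  # |1 - 0.2| = 0.8 < 1: shrinks toward 0
w_bad = descend(lr=1.1)   # |1 - 2.2| = 1.2 > 1: blows up
print(abs(w_good), abs(w_bad))
```

The same mechanism, with the loss curvature playing the role of the factor 2, is why a learning rate that is too large for the problem's scale makes training diverge rather than merely train slowly.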

5. Can the convergence problem be completely eliminated?

While the convergence problem can be minimized through careful tuning and regularization techniques, it cannot be completely eliminated. Neural networks are complex models, and there can always be some data or scenarios that the network struggles to learn from. It is important to continuously monitor and adjust the model to improve its performance.
