Understanding of conjugate directions

  • Thread starter: Ironphys
  • Tags: Conjugate
SUMMARY

The discussion focuses on the concept of conjugate directions as presented in J. R. Shewchuk's paper "An introduction to the conjugate gradient method without the agonizing pain". Two vectors ##d_1## and ##d_2## are called A-orthogonal (conjugate) if ##d_2^T A d_1 = 0##. The confusion arises because, if the transformation in the paper's figure were multiplication by ##A## itself, then requiring the transformed vectors ##d_1' = A d_1## and ##d_2' = A d_2## to satisfy ##d_2'^T d_1' = 0## would give ##d_2^T A^T A d_1 = 0##, which differs from the A-orthogonality condition. The discussion asks which matrix actually performs the transformation and under what conditions the two notions of orthogonality coincide.

PREREQUISITES
  • Understanding of conjugate gradient methods
  • Familiarity with linear algebra concepts, specifically matrix transformations
  • Knowledge of vector orthogonality and inner products
  • Experience with mathematical notation and terminology in optimization
NEXT STEPS
  • Study the properties of A-orthogonal vectors in the context of linear transformations
  • Explore the derivation of the conjugate gradient method from Shewchuk's paper
  • Investigate the implications of using different matrices in vector transformations
  • Learn about the role of matrix B in the context of conjugate directions
USEFUL FOR

Mathematicians, computer scientists, and engineers interested in optimization algorithms, particularly those working with the conjugate gradient method and linear algebra applications.

Ironphys
TL;DR: Understanding of conjugate directions
I am reading the paper by J. R. Shewchuk, "An introduction to the conjugate gradient method without the agonizing pain", but I do not fully understand the idea of conjugate directions. Figure 22a shows two vectors ##d_1## and ##d_2## that are not orthogonal. These vectors are transformed by multiplication with the matrix ##A##, and after the transformation we have the corresponding vectors ##d_1' = A d_1## and ##d_2' = A d_2## as in Figure 22b. If the transformed vectors ##d_1'## and ##d_2'## are now orthogonal, the original vectors ##d_1## and ##d_2## satisfy the condition ##d_2^T A d_1 = 0## and are then called A-orthogonal, or conjugate. So far, so good!

However, I would expect a different condition. The transformed vectors ##d_1'## and ##d_2'## should satisfy ##d_2'^T d_1' = 0##. Substituting ##d_1' = A d_1## and ##d_2' = A d_2## yields the condition ##d_2^T A^T A d_1 = 0##, which is different from the condition for A-orthogonality. I don't understand why.
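For concreteness, here is a minimal numerical sketch of the two conditions (the matrix ##A## and the vectors below are hypothetical choices, not taken from the paper; it uses NumPy):

```python
import numpy as np

# Hypothetical 2x2 symmetric positive-definite matrix (not from the paper).
A = np.array([[3.0, 2.0],
              [2.0, 6.0]])

d1 = np.array([1.0, 0.0])

# Construct d2 A-orthogonal to d1 by one Gram-Schmidt step in the A-inner product.
v = np.array([0.0, 1.0])
d2 = v - (d1 @ A @ v) / (d1 @ A @ d1) * d1

print(d2 @ A @ d1)          # ~0: d1 and d2 are A-orthogonal (conjugate)
print((A @ d2) @ (A @ d1))  # nonzero: A*d1 and A*d2 are not orthogonal
```

So multiplying A-orthogonal vectors by ##A## itself does not, in general, produce orthogonal vectors, which is exactly the discrepancy in the question.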
 
Are you sure it should be the same matrix? Or just any matrix, like ##d_2^\tau B d_1## with ##B = A^\tau A##? Another way out is to search for a matrix ##A## such that ##d_1' = d_1 A## and ##d_2' = A d_2## are orthogonal.
 
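One way to reconcile the two conditions, in line with the reply above: if the stretching in Figure 22 is done by a matrix ##E## with ##E^T E = A## (for a symmetric positive-definite ##A##, e.g. the symmetric square root ##E = A^{1/2}##) rather than by ##A## itself, then ##d_2'^T d_1' = d_2^T E^T E d_1 = d_2^T A d_1##, so orthogonality of the transformed vectors is exactly A-orthogonality of the originals. A sketch continuing the example above (the choice ##E = A^{1/2}## is an assumption for illustration, not a claim about the paper's figure; it uses SciPy's matrix square root):

```python
import numpy as np
from scipy.linalg import sqrtm  # matrix square root

A = np.array([[3.0, 2.0],
              [2.0, 6.0]])
d1 = np.array([1.0, 0.0])
d2 = np.array([-2.0 / 3.0, 1.0])  # A-orthogonal to d1 (from the sketch above)

# E = A^(1/2) satisfies E^T E = A, so E*d1 and E*d2 are plainly orthogonal.
E = sqrtm(A)
print((E @ d2) @ (E @ d1))  # ~0

# Multiplying by A itself instead tests d2^T (A^T A) d1, a different condition.
print(d2 @ (A.T @ A) @ d1)  # nonzero in general
```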
