Can linear algebra rescale a matrix's column vectors to unit length?

tobinare
I'm doing a particular form of regression that gives an eigenvector and a matrix for the solution. I'm using two matrix libraries to code the solution. One library yields a unit-length (norm 1) vector in each column; the other does not. Is there a method in linear algebra to convert the eigenvector matrix to one where all the column vectors are normalized to length 1?

As an example, the column vectors in Matrix A all have length 1. In Matrix B, the 2nd column vector has length about 2, and I think I require it to be 1. Is this change possible? TIA


Vectors | Matrix - A
---------|-----------------
0.0365 | 0.27217 0.22176
0.0001 | -0.96225 0.97510

Vectors | Matrix - B
---------|-----------------
0.0365 | 0.2715 0.5026
0.0001 | -0.9624 1.9018
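
To make the comparison concrete, here is a quick check of the column lengths (I'm sketching it in NumPy just for illustration; any matrix library works the same way):

```python
import numpy as np

# Outputs of the two libraries, as posted above.
A = np.array([[0.27217, 0.22176],
              [-0.96225, 0.97510]])
B = np.array([[0.2715, 0.5026],
              [-0.9624, 1.9018]])

print(np.linalg.norm(A, axis=0))  # ~[1.000, 1.000]: both columns unit length
print(np.linalg.norm(B, axis=0))  # ~[1.000, 1.9671]: second column is not
```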
 

Hey tobinare and welcome to the forums.

What I would do is take your matrix and divide it by the n-th root of its determinant (the square root in your 2x2 case) so that the result has determinant 1.

This will turn your matrix into a complete orthonormal basis if its columns are already orthogonal, but if they are not orthogonal I don't know for sure whether you will get your requirement that the length of each column vector is approximately 1.

Another idea: if you are able to do a transformation between two bases, then you can use a transformation that orthogonalizes your basis, and from that you can pull out the scale factors and divide them off so that each of the basis vectors has length 1. Since the result is orthonormal, it has the properties of a rotation matrix.

What you can do then is place a constraint on your decomposition that one factor has column vectors of length one, and absorb the remaining scale factors into the other factor so that the product still equals your original matrix (i.e. the equality is preserved).

So I'd look into determinants, matrix norms, and matrix decompositions, including the Gram-Schmidt process, as well as using decompositions and a change of basis with respect to orthonormal basis vectors, where dividing out the scale factor of each column (a scalar divide) is guaranteed to give you vectors of length 1.
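
For concreteness, here's a minimal sketch of the orthogonalize-and-normalize idea. I'm assuming NumPy purely for illustration and reusing your Matrix B; QR factorization does the Gram-Schmidt step in one call, so the columns of Q come out with length 1 while Q times R still equals the original matrix.

```python
import numpy as np

# Matrix B from the post above.
B = np.array([[0.2715, 0.5026],
              [-0.9624, 1.9018]])

# QR factorization: Q has orthonormal columns, R is upper triangular,
# and Q @ R reproduces B, so no information is lost.
Q, R = np.linalg.qr(B)

print(np.linalg.norm(Q, axis=0))  # every column of Q has norm 1.0
print(np.allclose(Q @ R, B))      # True: the product recovers B
```

Note that Q is not the same as simply rescaling B's columns unless those columns were already orthogonal.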
 
Is normalizing vectors the answer to my question? Do I simply divide each vector by its norm? So in my case with Matrix B

Vectors | Matrix - B
---------|-----------------
0.0365 | 0.2715 0.5026
0.0001 | -0.9624 1.9018

The norm of the first column vector is
sqrt(0.2715^2 + (-0.9624)^2) = 1.000
The norm of the second column vector is
sqrt(0.5026^2 + 1.9018^2) = 1.9671

Dividing each column by its norm gives
Vectors | Matrix - B
---------|-----------------
0.0365 | 0.2715/1.00 0.5026/1.9671
0.0001 | -0.9624/1.00 1.9018/1.9671

Vectors | Matrix - B
---------|-----------------
0.0365 | 0.2715 0.2555
0.0001 | -0.9624 0.9668

Which is close to
Vectors | Matrix - A
---------|-----------------
0.0365 | 0.27217 0.22176
0.0001 | -0.96225 0.97510
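
A quick numeric check of the arithmetic above (a sketch assuming NumPy; any matrix library with column norms works the same way):

```python
import numpy as np

# Matrix B: the second column is not unit length.
B = np.array([[0.2715, 0.5026],
              [-0.9624, 1.9018]])

norms = np.linalg.norm(B, axis=0)  # column norms: ~[1.000, 1.9671]
B_unit = B / norms                 # broadcasting divides each column by its norm

print(norms)   # ~[1.000, 1.9671]
print(B_unit)  # second column ~[0.2555, 0.9668], close to Matrix A
```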
 

If these are just eigenvectors, then you might as well do that, since for each eigenvalue there are infinitely many linearly dependent (i.e. parallel) eigenvectors, and one such vector will have length 1.

But it would be a good idea to tell us what you are using this for. Is there a specific relationship between the vectors and the matrix that needs to hold?
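
To illustrate the scale freedom (a minimal sketch assuming NumPy, with a made-up symmetric matrix): np.linalg.eig returns unit-length eigenvectors, and any nonzero rescaling of them is still an eigenvector, so normalizing another library's output should reproduce them up to sign.

```python
import numpy as np

# A made-up symmetric matrix purely for illustration.
M = np.array([[2.0, 1.0],
              [1.0, 3.0]])

eigvals, eigvecs = np.linalg.eig(M)
print(np.linalg.norm(eigvecs, axis=0))  # each returned eigenvector has norm 1

# Any scalar multiple of an eigenvector is still an eigenvector.
v = 5.0 * eigvecs[:, 0]
print(np.allclose(M @ v, eigvals[0] * v))  # True
```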
 