Can linear algebra convert the minimum column vector in a matrix?

In summary, the thread discusses a form of regression whose solution involves an eigenvector and an eigenvector matrix. The poster uses two matrix libraries: one returns unit-length column vectors and the other does not. They ask whether linear algebra offers a way to convert the eigenvector matrix to one whose columns all have unit length, and the discussion turns to normalizing each column by its Euclidean norm.
  • #1
tobinare
I'm doing a particular form of regression that gives an eigenvector and a matrix for the solution. I'm using two matrix libraries to code the solution. One library yields the minimum-length column vector in each column; the other does not. Is there a method in linear algebra to convert the eigenvector matrix to one where all the column vectors are minimized?

As an example, the lengths of all the column vectors in Matrix A are 1. In Matrix B, the length of the 2nd column vector is about 2, and I think I require it to be 1. Is this change possible? TIA


Vectors | Matrix - A
---------|-----------------
0.0365 | 0.27217 0.22176
0.0001 | -0.96225 0.97510

Vectors | Matrix - B
---------|-----------------
0.0365 | 0.2715 0.5026
0.0001 | -0.9624 1.9018
 
  • #2
tobinare said:
I'm doing a particular form of regression that gives an eigenvector and a matrix for the solution. I'm using two matrix libraries to code the solution. One library yields the minimum-length column vector in each column; the other does not. Is there a method in linear algebra to convert the eigenvector matrix to one where all the column vectors are minimized?

As an example, the lengths of all the column vectors in Matrix A are 1. In Matrix B, the length of the 2nd column vector is about 2, and I think I require it to be 1. Is this change possible? TIA

Vectors | Matrix - A
---------|-----------------
0.0365 | 0.27217 0.22176
0.0001 | -0.96225 0.97510

Vectors | Matrix - B
---------|-----------------
0.0365 | 0.2715 0.5026
0.0001 | -0.9624 1.9018

Hey tobinare and welcome to the forums.

What I would do is take your matrix and divide it by the n-th root of its determinant (for an n×n matrix), so that the resulting matrix has determinant 1.

This will give you a complete orthonormal basis if the columns of the matrix are already orthogonal, but if they are not orthogonal I don't know for sure that you will get your requirement that each column vector has length approximately 1.

Another idea: if you are able to transform between two bases, you can use a transformation that orthogonalizes your basis, and from that you can divide out the determinant factor so that each basis vector has length 1. Since the result is orthonormal, it has the properties of a rotation matrix.

What you could do then is place a constraint on your decomposition that the non-orthogonal factor has unit-length columns, and fold the remaining scale into the other factor so that the equality for your matrix still holds.

So I'd look into determinants, matrix norms, and matrix decompositions, including the Gram-Schmidt process, as well as change of basis with respect to orthonormal basis vectors, where dividing out the determinant (a scalar divide) will give you vectors of length 1.
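As a sketch of the orthogonalization idea above: QR decomposition carries out Gram-Schmidt numerically, and its Q factor has orthonormal (unit-length, mutually perpendicular) columns. Note that when the original columns are not orthogonal, this changes their directions as well as their lengths, so it is not the same as simply rescaling each column. The matrix values are taken from the thread's Matrix B:

```python
import numpy as np

# Matrix B from the thread, whose second column is not unit length
B = np.array([[0.2715, 0.5026],
              [-0.9624, 1.9018]])

# QR factorization: Q has orthonormal columns (Gram-Schmidt in effect)
Q, R = np.linalg.qr(B)

print(np.linalg.norm(Q, axis=0))  # each column has length 1
print(Q.T @ Q)                    # approximately the identity matrix
```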
 
  • #3
Is normalizing vectors the answer to my question? Do I simply divide each vector by its norm? So in my case, with Matrix B

Vectors | Matrix - B
---------|-----------------
0.0365 | 0.2715 0.5026
0.0001 | -0.9624 1.9018

The norm of the first column vector is
sqrt(0.2715^2 + (-0.9624)^2) = 1.000
The norm of the second column vector is
sqrt(0.5026^2 + 1.9018^2) = 1.9671

Dividing each column vector by its norm gives
Vectors | Matrix - B
---------|-----------------
0.0365 | 0.2715/1.00 0.5026/1.9671
0.0001 | -0.9624/1.00 1.9018/1.9671

Vectors | Matrix - B
---------|-----------------
0.0365 | 0.2715 0.2553
0.0001 | -0.9624 0.9668

Which is close to
Vectors | Matrix - A
---------|-----------------
0.0365 | 0.27217 0.22176
0.0001 | -0.96225 0.97510
 
  • #4
tobinare said:
Is normalizing vectors the answer to my question? Do I simply divide each vector by its norm? So in my case, with Matrix B

Vectors | Matrix - B
---------|-----------------
0.0365 | 0.2715 0.5026
0.0001 | -0.9624 1.9018

The norm of the first column vector is
sqrt(0.2715^2 + (-0.9624)^2) = 1.000
The norm of the second column vector is
sqrt(0.5026^2 + 1.9018^2) = 1.9671

Dividing each column vector by its norm gives
Vectors | Matrix - B
---------|-----------------
0.0365 | 0.2715/1.00 0.5026/1.9671
0.0001 | -0.9624/1.00 1.9018/1.9671

Vectors | Matrix - B
---------|-----------------
0.0365 | 0.2715 0.2553
0.0001 | -0.9624 0.9668

Which is close to
Vectors | Matrix - A
---------|-----------------
0.0365 | 0.27217 0.22176
0.0001 | -0.96225 0.97510

If these are just eigenvectors, then you might as well do that, since there are infinitely many linearly dependent (i.e. parallel) eigenvectors for each eigenvalue, and one such vector (up to sign) has length 1.

But it would be a good idea to tell us what you are using this for. Is there a specific relationship between the vectors and the matrix that needs to hold?
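The point about parallel eigenvectors can be verified directly: if A v = λ v, then any nonzero scalar multiple of v is also an eigenvector with the same eigenvalue, so rescaling to unit length is always allowed. A minimal sketch (the matrix here is an arbitrary illustration, not from the thread):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)
v = eigvecs[:, 0]
lam = eigvals[0]

# Any nonzero scalar multiple of v is still an eigenvector for lam
w = 3.7 * v
print(np.allclose(A @ w, lam * w))  # True

# Rescaling to unit length keeps the eigenvector property
u = v / np.linalg.norm(v)
print(np.allclose(A @ u, lam * u))  # True
```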
 
  • #5



Linear algebra is a powerful tool for problems in many fields, including regression analysis. In this case, the key fact is that eigenvectors are only determined up to a nonzero scalar multiple, so the columns of an eigenvector matrix can be rescaled freely.

An eigenvector is a vector that, when multiplied by a matrix, yields a scalar multiple of itself; the scalar is the corresponding eigenvalue. If v is an eigenvector, then so is cv for any nonzero scalar c, which is why two libraries can legitimately return differently scaled columns for the same problem.

To obtain unit-length columns, divide each column of the eigenvector matrix by its Euclidean norm. The rescaled columns are still eigenvectors for the same eigenvalues, and applying this to Matrix B reproduces Matrix A up to rounding.

Note that this simple per-column rescaling is different from orthogonal transformations or diagonalization, which change the directions of the columns as well as their lengths.

In conclusion, because eigenvectors are defined only up to scale, normalizing each column of the eigenvector matrix by its norm is the standard way to obtain unit-length column vectors in your regression analysis.
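The full recipe can be sketched end to end: compute the eigendecomposition, then divide each eigenvector column by its Euclidean norm. The matrix here is illustrative, not the poster's data; note that NumPy's `eig` already returns unit-length eigenvectors, so the explicit division simply shows the general fix for libraries that do not:

```python
import numpy as np

# Illustrative matrix with real eigenvalues
A = np.array([[0.5, 0.2],
              [0.1, 0.8]])

eigvals, V = np.linalg.eig(A)

# Normalize each eigenvector column to unit length
V_unit = V / np.linalg.norm(V, axis=0)

# Each rescaled column still satisfies A v = lambda v
for i in range(2):
    assert np.allclose(A @ V_unit[:, i], eigvals[i] * V_unit[:, i])

print(np.linalg.norm(V_unit, axis=0))  # each ~1.0
```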
 

Related to Can linear algebra convert the minimum column vector in a matrix?

1. What is a minimum column vector?

In this thread, a "minimum" column vector means a column of the eigenvector matrix scaled to have unit Euclidean length (norm 1), i.e. a normalized eigenvector.

2. How is a minimum column vector different from a regular vector?

It points in the same direction as the original column but is rescaled so that its length is exactly 1. For eigenvectors this rescaling is always allowed, since any nonzero scalar multiple of an eigenvector is still an eigenvector.

3. What are some applications of minimum column vectors?

Unit-length eigenvectors are standard in eigendecomposition, principal component analysis, and regression methods built on spectral decompositions, because fixing the scale makes results from different libraries comparable.

4. How are minimum column vectors created?

Divide each column vector by its Euclidean norm, the square root of the sum of its squared components. A column whose norm is already 1 is unchanged.

5. Can a minimum column vector be converted back?

Yes. Normalization loses only the original scale factor; multiplying by any nonzero scalar gives another equally valid eigenvector, since eigenvectors are defined only up to a scalar multiple.
