MHB Linear Independence of Vectors: Why Determinant ≠ 0?

SUMMARY

The discussion centers on the concept of linear independence of vectors, specifically addressing why a non-zero determinant indicates linear independence. The vectors V=(3,a,1), U=(a,3,2), and W=(4,a,2) can be arranged as the rows (or columns) of a square matrix, so their determinant can be computed. A non-zero determinant shows that the vectors span all of ℝ³, which proves their linear independence. The thread also establishes that if the number of vectors exceeds the dimension of the space, the vectors must be linearly dependent.

PREREQUISITES
  • Understanding of linear algebra concepts such as vectors and matrices
  • Knowledge of determinants and their properties
  • Familiarity with the concept of vector spaces and dimensions
  • Basic understanding of linear transformations
NEXT STEPS
  • Study the properties of determinants in detail
  • Learn about the relationship between linear independence and matrix inverses
  • Explore the implications of the Rank-Nullity Theorem in linear algebra
  • Investigate the concept of spanning sets in vector spaces
USEFUL FOR

Students and professionals in mathematics, particularly those studying linear algebra, as well as educators seeking to clarify the concepts of vector independence and determinants.

Petrus
Hello MHB,
I have a question. If we have the vectors $$V=(3,a,1)$$, $$U=(a,3,2)$$ and $$W=(4,a,2)$$, why are they linearly independent if the determinant is not equal to zero? (I am not interested in solving the problem, I just want to know why this is so.)

Regards,
$$|\pi\rangle$$
 
Petrus said:
Hello MHB,
I have a question. If we have the vectors $$V=(3,a,1)$$, $$U=(a,3,2)$$ and $$W=(4,a,2)$$, why are they linearly independent if the determinant is not equal to zero? (I am not interested in solving the problem, I just want to know why this is so.)

Regards,
$$|\pi\rangle$$

If you put your vectors into a matrix, you get the linear map represented by that matrix.
If you can "reach" all of $\mathbb R^3$ with this map, your vectors are linearly independent.
The determinant shows whether this is possible: if it is zero, it's not.
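To make this concrete, here is a sketch of the standard argument: write $V$, $U$, $W$ as the columns of a matrix $M$ and ask when some combination of them gives the zero vector.

$$c_1 V + c_2 U + c_3 W = 0
\quad\Longleftrightarrow\quad
M \begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix} = 0,
\qquad
M = \begin{pmatrix} 3 & a & 4 \\ a & 3 & a \\ 1 & 2 & 2 \end{pmatrix}$$

$$\det M \ne 0
\quad\Longleftrightarrow\quad
Mc = 0 \text{ has only the trivial solution } c = 0
\quad\Longleftrightarrow\quad
V, U, W \text{ are linearly independent.}$$

So a non-zero determinant means the only combination that gives the zero vector is the trivial one, which is exactly the definition of linear independence.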
 
I like Serena said:
If you put your vectors into a matrix, you get the linear map represented by that matrix.
If you can "reach" all of $\mathbb R^3$ with this map, your vectors are linearly independent.
The determinant shows whether this is possible: if it is zero, it's not.
Thanks! I'm starting to understand now! :)

Regards,
$$|\pi\rangle$$
 
Are these statements true?
1. We can check whether a set of vectors is linearly independent by checking that the determinant is not equal to zero.
2. For a matrix whose vectors are linearly independent, an inverse exists.
3. I found this theorem on the internet, which my book doesn't state:
"If a set of vectors $$v_1,v_2,v_3,\dots,v_p$$ lies in $$\mathbb R^n$$,
then it is linearly dependent if $$p>n$$."

Regards,
$$|\pi\rangle$$
 
Petrus said:
Are these statements true?
1. We can check whether a set of vectors is linearly independent by checking that the determinant is not equal to zero.
2. For a matrix whose vectors are linearly independent, an inverse exists.
3. I found this theorem on the internet, which my book doesn't state:
"If a set of vectors $$v_1,v_2,v_3,\dots,v_p$$ lies in $$\mathbb R^n$$,
then it is linearly dependent if $$p>n$$."

Regards,
$$|\pi\rangle$$

You can only calculate the determinant of a square matrix.
That means you can only use a determinant to check independence of n vectors, each of dimension n.

An inverse can only exist for a square matrix.
If the matrix is square and the vectors in it are linearly independent, then there exists an inverse.

If you have n linearly independent vectors, they span an n-dimensional space, like $\mathbb R^n$.
One more vector already lies in the space spanned by the others, so it cannot be independent of them.
So a set of n+1 vectors in $\mathbb R^n$ has to be dependent.
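As a quick numerical illustration (a minimal sketch, not part of the thread; the value $a=1$ and the fourth vector are arbitrary choices), the determinant, inverse, and rank checks above can be run with NumPy:

```python
import numpy as np

# Arbitrary choice of the parameter a, just for demonstration.
a = 1.0

V = np.array([3.0, a, 1.0])
U = np.array([a, 3.0, 2.0])
W = np.array([4.0, a, 2.0])

# Stack the vectors as the rows of a 3x3 matrix
# (the determinant is the same as with columns).
M = np.vstack([V, U, W])

det = np.linalg.det(M)
print("det(M) =", det)

if not np.isclose(det, 0.0):
    # Non-zero determinant: the vectors are linearly independent,
    # so the square matrix has an inverse.
    M_inv = np.linalg.inv(M)
    print("M is invertible:", np.allclose(M @ M_inv, np.eye(3)))
else:
    print("det(M) = 0: the vectors are linearly dependent.")

# Four vectors in R^3: the rank can never exceed 3,
# so any set of n+1 vectors in R^n is dependent.
extra = np.array([1.0, 0.0, 0.0])   # arbitrary fourth vector
M4 = np.vstack([V, U, W, extra])
print("rank of 4 vectors in R^3:", np.linalg.matrix_rank(M4))
```

With $a=1$ the determinant works out to $7$, so the matrix is invertible, while the rank of the four stacked vectors still cannot exceed 3.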
 
I like Serena said:
You can only calculate the determinant of a square matrix.
That means you can only use a determinant to check independence of n vectors, each of dimension n.

An inverse can only exist for a square matrix.
If the matrix is square and the vectors in it are linearly independent, then there exists an inverse.

If you have n linearly independent vectors, they span an n-dimensional space, like $\mathbb R^n$.
One more vector already lies in the space spanned by the others, so it cannot be independent of them.
So a set of n+1 vectors in $\mathbb R^n$ has to be dependent.
Thanks, I meant a square matrix :)

Regards,
$$|\pi\rangle$$
 