Linear Independence of Vectors: Why Determinant ≠ 0?


Discussion Overview

The discussion centers on the linear independence of vectors, specifically the relationship between the determinant of a matrix formed from a set of vectors and the independence of that set. Participants explore the theory and its implications rather than solving a specific problem.

Discussion Character

  • Conceptual clarification
  • Technical explanation

Main Points Raised

  • A nonzero determinant indicates linear independence, so computing the determinant serves as a test for independence (a worked example follows this list).
  • The determinant reflects whether the corresponding linear map can reach all of $\mathbb{R}^3$; a zero determinant indicates dependence.
  • A participant cites a theorem stating that if the number of vectors exceeds the dimension of the space, the vectors must be linearly dependent.
  • The determinant is defined only for square matrices, so it can only test the independence of $n$ vectors in $n$-dimensional space.
  • An inverse exists only for square matrices, and $n$ linearly independent vectors span an $n$-dimensional space.
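
For illustration (a worked example added to this summary, using the vectors from the thread below with the parameter $a$ left symbolic), placing $V$, $U$, $W$ as the rows of a matrix and expanding along the first row gives

$$\det\begin{pmatrix}3 & a & 1\\ a & 3 & 2\\ 4 & a & 2\end{pmatrix} = 3(6-2a) - a(2a-8) + (a^2-12) = -a^2+2a+6.$$

The three vectors are linearly independent exactly for those values of $a$ where this quadratic is nonzero.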

Areas of Agreement / Disagreement

Participants initially differ in how well they understand what the determinant implies and the conditions for linear independence, but the explanations offered are accepted, and the original poster confirms that they resolved the confusion.

Contextual Notes

Participants note that determinants apply only to square matrices and discuss the dimensional conditions under which a set of vectors can be independent or dependent.

Petrus
Hello MHB,
I have one question. Given the vectors $$V=(3,a,1)$$, $$U=(a,3,2)$$, and $$W=(4,a,2)$$, why are they linearly independent if the determinant is not equal to zero? (I am not interested in solving the problem; I just want to know why this is.)

Regards,
$$|\pi\rangle$$
 
Petrus said:
Hello MHB,
I have one question. Given the vectors $$V=(3,a,1)$$, $$U=(a,3,2)$$, and $$W=(4,a,2)$$, why are they linearly independent if the determinant is not equal to zero? (I am not interested in solving the problem; I just want to know why this is.)

Regards,
$$|\pi\rangle$$

If you put your vectors in a matrix, you get the linear function identified by that matrix.
If you can "reach" all of $\mathbb R^3$ with this function, your vectors are linearly independent.
The determinant shows whether this is possible: zero means it is not.
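
To make this concrete (a minimal sketch added here, not part of the original reply; it assumes NumPy and picks the sample value $a=0$), put $V$, $U$, $W$ as the columns of a matrix and check that any target in $\mathbb R^3$ can be reached:

import numpy as np

a = 0.0  # a sample value of the parameter; any value with det(M) != 0 works

# Columns are V, U, W, so M is the matrix of the linear map x -> M @ x.
M = np.array([[3.0,   a, 4.0],
              [  a, 3.0,   a],
              [1.0, 2.0, 2.0]])

print(np.linalg.det(M))  # nonzero here, so the map reaches all of R^3

# Any target b in R^3 is then reachable: solve M x = b.
b = np.array([1.0, 2.0, 3.0])
x = np.linalg.solve(M, b)
print(np.allclose(M @ x, b))  # True: b is a combination of V, U, W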
 
I like Serena said:
If you put your vectors in a matrix, you get the linear function identified by that matrix.
If you can "reach" all of $\mathbb R^3$ with this function, your vectors are linearly independent.
The determinant shows whether this is possible: zero means it is not.
Thanks! I'm starting to understand now! :)

Regards,
$$|\pi\rangle$$
 
Are these statements true?
1. We can check whether a set of vectors is linearly independent by checking that the determinant is not equal to zero.
2. For a matrix whose vectors are linearly independent, an inverse exists.
3. I found this theorem on the internet that is not stated in my book:
"If the vectors $$v_1,v_2,v_3,\ldots,v_p$$ are in $$\mathbb R^n$$,
then they are linearly dependent if $$p>n$$."

Regards,
$$|\pi\rangle$$
 
Petrus said:
Are these statements true?
1. We can check whether a set of vectors is linearly independent by checking that the determinant is not equal to zero.
2. For a matrix whose vectors are linearly independent, an inverse exists.
3. I found this theorem on the internet that is not stated in my book:
"If the vectors $$v_1,v_2,v_3,\ldots,v_p$$ are in $$\mathbb R^n$$,
then they are linearly dependent if $$p>n$$."

Regards,
$$|\pi\rangle$$

You can only calculate the determinant of a square matrix.
That means you can only use a determinant to check independence of n vectors, each of dimension n.

An inverse can only exist for a square matrix.
If the matrix is square and the vectors in it are linearly independent, then there exists an inverse.

If you have n linearly independent vectors, they span an n-dimensional space, like $\mathbb R^n$.
If you have one more vector, that won't fit in that n-dimensional space anymore in an independent manner.
So a set of n+1 vectors in $\mathbb R^n$ will have to be dependent.
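
As a sketch of these last two points (added for illustration, assuming NumPy; the matrix and the extra vector are made up for the example):

import numpy as np

# A square matrix whose rows are three linearly independent vectors in R^3.
A = np.array([[3.0, 1.0, 1.0],
              [1.0, 3.0, 2.0],
              [4.0, 1.0, 2.0]])

if not np.isclose(np.linalg.det(A), 0.0):
    # Nonzero determinant: the rows are independent, so the inverse exists.
    A_inv = np.linalg.inv(A)
    print(np.allclose(A_inv @ A, np.eye(3)))  # True

# Append a fourth vector: four vectors in R^3 have rank at most 3,
# so the enlarged set must be linearly dependent.
B = np.vstack([A, [1.0, 1.0, 1.0]])
print(np.linalg.matrix_rank(B) < len(B))  # True: dependent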
 
I like Serena said:
You can only calculate the determinant of a square matrix.
That means you can only use a determinant to check independence of n vectors, each of dimension n.

An inverse can only exist for a square matrix.
If the matrix is square and the vectors in it are linearly independent, then there exists an inverse.

If you have n linearly independent vectors, they span an n-dimensional space, like $\mathbb R^n$.
If you have one more vector, that won't fit in that n-dimensional space anymore in an independent manner.
So a set of n+1 vectors in $\mathbb R^n$ will have to be dependent.
Thanks, I meant a square matrix :)

Regards,
$$|\pi\rangle$$
 
