Couple of Linear Algebra Questions

In summary, linear independence does not imply orthogonality: orthogonality depends on a choice of inner product, while linear independence only requires that no vector in the set be a linear combination of the others. In addition, a vector space is by definition over a field; the analogous object over a ring is called a module, and the lack of multiplicative inverses in a ring causes problems with finding bases.
  • #1
FunkyDwarf
Howdy,

First off, a stupid question: why does linear independence not imply orthogonality? I mean, we define the latter such that the inner product is zero; I sort of see it as a case of the chicken and the egg. Are we using two things to define each other in a cyclical way? Also, what extra constraints are placed on orthogonal vectors besides them being linearly independent? I mean, if I have two vectors in R2, surely if they are LI they are at right angles? Or am I missing something here...

Also, could someone please explain group action with a really simple example, as I totally don't get it, and also why you can only have vector spaces over fields and not rings (I understand the basic differences between them).

Thanks!
-G
 
  • #2
Orthogonality only makes sense once you have already defined your inner product. Thus a set of vectors can be orthogonal with respect to one inner product and not another. There is no circularity here at all.
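A quick NumPy sketch of that point (my own illustration; the weighted inner product below is chosen arbitrarily):

[code]
import numpy as np

u = np.array([1.0, 1.0])
v = np.array([1.0, -1.0])

# Under the standard dot product, u and v are orthogonal.
print(np.dot(u, v))    # 0.0

# Under a different (weighted) inner product <x, y> = x1*y1 + 2*x2*y2,
# represented by a positive-definite diagonal matrix, they are not.
W = np.diag([1.0, 2.0])
print(u @ W @ v)       # -1.0, so not orthogonal in this inner product
[/code]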

As for the other kettle of fish: pick any set S, and let G be the group of permutations of S; then G acts on S by applying each permutation to the elements. Or pick any regular n-gon, and let G be the group of its symmetries; G acts on the n-gon.
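To make the first example concrete, here is a small Python sketch (an illustration of my own) that checks the two group-action axioms, e·x = x and (gh)·x = g·(h·x), for the permutation group of a three-element set:

[code]
from itertools import permutations

S = [0, 1, 2]
G = list(permutations(S))   # each g is a tuple; g sends x to g[x]

def act(g, x):
    # The action of permutation g on the point x.
    return g[x]

def compose(g, h):
    # Group operation: (g*h)(x) = g(h(x)).
    return tuple(g[h[x]] for x in S)

e = (0, 1, 2)               # identity permutation

# Identity axiom: e.x = x for all x.
assert all(act(e, x) == x for x in S)
# Compatibility axiom: (gh).x = g.(h.x) for all g, h, x.
assert all(act(compose(g, h), x) == act(g, act(h, x))
           for g in G for h in G for x in S)
print("both group-action axioms hold")
[/code]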

A vector space is _by definition_ over a field. If you alter the definition to allow a ring, the resulting object is called a module. The two have quite different properties.
 
  • #3
FunkyDwarf said:
Howdy,

First off, a stupid question: why does linear independence not imply orthogonality? I mean, we define the latter such that the inner product is zero; I sort of see it as a case of the chicken and the egg. Are we using two things to define each other in a cyclical way? Also, what extra constraints are placed on orthogonal vectors besides them being linearly independent? I mean, if I have two vectors in R2, surely if they are LI they are at right angles? Or am I missing something here...
In R2, for example, the vectors <1, 0> and <1, 1> are certainly independent but not orthogonal. For two vectors, "independent" just means "not parallel", and that certainly does not imply "at right angles".
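You can check both claims numerically (a NumPy sketch of my own):

[code]
import numpy as np

u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])

# Independent: the matrix with u and v as columns is invertible.
print(np.linalg.det(np.column_stack([u, v])))   # 1.0, nonzero
# Not orthogonal: the dot product is nonzero.
print(np.dot(u, v))                             # 1.0
[/code]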


Also, could someone please explain group action with a really simple example, as I totally don't get it, and also why you can only have vector spaces over fields and not rings (I understand the basic differences between them).

Thanks!
-G
You cannot have a vector space over a ring because such things are not called "vector spaces": they are called "modules". The reason they are given different names (which is really your question) is that they have very different properties. In particular, the lack of a multiplicative inverse causes problems with finding bases.
 
  • #4
Orthogonal does NOT imply independent; nonzero and orthogonal does. (The zero vector is orthogonal to every vector, so a set like [itex]\{0, v\}[/itex] is orthogonal yet dependent.)
 
  • #5
Ah, thanks guys. Yeah, I figured out my problem with linear independence; I had it wrong in my head. When I wrote it down on paper I could see I was confusing it with orthogonality under a different name.

Cheers for the responses!
 
  • #6
In particular, the lack of a multiplicative inverse causes problems with finding bases.
Could you please elaborate? I don't see the connection (sorry, I'm sure it's obvious).
 
  • #7
A "vector space" is defined over a field- in particular evey member of the field, except 0, has a multiplicative inverse so we can do the following:
Suppose [itex]\{v_1, v_2, \cdots, v_n\}[/itex] is a DEPENDENT set of vectors in vector space V over field F. Then, by definition of "dependent", there exist members [itex]a_1, a_2, \cdots, a_n[/itex] of F, not all 0, such that [itex]a_1v_1+ \cdots+ a_nv_n= 0[/itex]. In particular, if [itex]a_k[/itex] is not 0, we can write that as [itex]a_kv_k= -a_1v_1- \cdots- a_{k-1}v_{k-1}- a_{k+1}v_{k+1}- \cdots- a_nv_n[/itex] and so, since [itex]a_k\ne 0[/itex], [itex]v_k= -(a_1/a_k)v_1- \cdots- (a_{k-1}/a_k)v_{k-1}- (a_{k+1}/a_k)v_{k+1}- \cdots- (a_n/a_k)v_n[/itex]. That is, [itex]v_k[/itex] can be written as a linear combination of the other vectors in the set. We can continue removing such vectors until we reach a linearly independent subset such that all the other vectors in the set can be written as linear combinations of the vectors in the subset: a basis for the span of the original set of vectors.
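That pruning process is easy to mimic numerically. Here is a rough NumPy sketch (the helper name basis_of_span is made up for illustration): keep each vector only if it raises the rank, i.e. is not a combination of the ones kept so far.

[code]
import numpy as np

def basis_of_span(vectors):
    # Illustrative helper, not a standard library function.
    kept = []
    for v in vectors:
        candidate = kept + [v]
        # Keep v only if it is independent of the vectors kept so far.
        if np.linalg.matrix_rank(np.array(candidate)) == len(candidate):
            kept.append(v)
    return kept

# A dependent set in R^3: the third vector is the sum of the first two.
vecs = [np.array([1.0, 0.0, 0.0]),
        np.array([0.0, 1.0, 0.0]),
        np.array([1.0, 1.0, 0.0])]
print(basis_of_span(vecs))   # keeps only the first two
[/code]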

If, instead, we have a module (a "vector space" over a ring) we cannot, in general, "divide by" that [itex]a_k[/itex], and so we cannot guarantee that every set of dependent vectors contains a basis for its span, or even that a module has a "basis" at all!
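One standard concrete illustration (there are many): take the ring to be the integers [itex]\mathbb{Z}[/itex] and the module to be [itex]\mathbb{Z}/2\mathbb{Z} = \{0, 1\}[/itex]. Every element v satisfies [itex]2v = 0[/itex] even though [itex]2 \ne 0[/itex] in [itex]\mathbb{Z}[/itex], so even the single-element set {1} is dependent, and since we cannot divide by 2 in [itex]\mathbb{Z}[/itex] there is no way to solve back for v. This module has no basis at all.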
 
  • #8
Ah ok gotcha, thanks!
 

1. What is linear algebra?

Linear algebra is the branch of mathematics that studies linear equations, vector spaces, linear transformations, and matrices.

2. What are some real-world applications of linear algebra?

Linear algebra has various applications in fields such as computer graphics, machine learning, economics, physics, and engineering. It is used to solve systems of linear equations, perform data analysis, and model real-world scenarios.

3. What are vectors and matrices?

Vectors are mathematical objects that represent quantities with both magnitude and direction; they are commonly written as a list of numbers or coordinates. Matrices are rectangular arrays of numbers that represent linear transformations acting on vectors, and they can be added, subtracted, and multiplied.
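A minimal NumPy sketch of both objects (my own illustration):

[code]
import numpy as np

v = np.array([1.0, 2.0])           # a vector in R^2
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])        # matrix for a 90-degree rotation

print(v + np.array([3.0, 4.0]))    # vector addition -> [4. 6.]
print(A @ v)                       # the matrix acting on v -> [-2. 1.]
[/code]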

4. How is linear algebra used in machine learning?

Linear algebra plays a crucial role in machine learning, as many algorithms and models rely on linear algebra operations to process and analyze large amounts of data. For example, linear regression, principal component analysis, and support vector machines all rely on linear algebra.
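For instance, ordinary linear regression is just a least-squares problem, solvable with a single linear algebra routine (a NumPy sketch with made-up data):

[code]
import numpy as np

# Noisy points lying roughly on the line y = 2x + 1.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 6.8])

# Design matrix with a column of ones for the intercept.
X = np.column_stack([x, np.ones_like(x)])

# Solve the least-squares problem min ||Xw - y||^2.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w)   # roughly [1.94, 1.09], i.e. slope and intercept
[/code]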

5. What are eigenvectors and eigenvalues?

Eigenvectors and eigenvalues are concepts used in linear algebra to describe the behavior of linear transformations on a vector space. An eigenvector is a nonzero vector whose direction is unchanged by the transformation, and the corresponding eigenvalue is the scalar by which it is stretched. They are commonly used in data analysis and image processing.
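A short NumPy sketch (my own illustration) that computes an eigenpair and verifies [itex]Av = \lambda v[/itex]:

[code]
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                  # [3. 1.] (order may vary)

# Verify A v = lambda v for the first eigenpair.
v, lam = eigenvectors[:, 0], eigenvalues[0]
print(np.allclose(A @ v, lam * v))  # True
[/code]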
