What are the key definitions in Linear Algebra?

In summary: the span of a set of vectors is the set of all linear combinations of those vectors, and a spanning set for a subspace S is a set of vectors whose span is all of S. A set of vectors is linearly independent if the only linear combination of them equal to the zero vector is the one with every coefficient zero; otherwise the set is linearly dependent. The thread below works through these and the other standard definitions (vector space, subspace, basis, dimension, rank, nullity, eigenvalues and eigenvectors).
  • #1
DanielT29
Hey guys, I'm new to Physics Forums. I decided to join after seeing how you guys help each other. Well, here's my problem; I'm sure you guys can answer this. Last semester I had Linear Algebra and I slacked off a bit (almost failed) since it was an 8am lecture and it took place in a movie theatre... yes, my university rents out theatres for bigger classes. So even if I went, I'd be falling asleep. I tried studying by myself. I understood matrices, but as they got into more complex theories I couldn't understand the terms they used, even though I noticed in my linear algebra labs that the problems those complicated terms described were easily solved. I took it upon myself to learn it by myself once again. I just need help with definitions. Can you guys explain to me what each term means exactly? I checked Wikipedia and understood some, but maybe you guys can explain it more easily using simpler words. Here they are:

1) Ordered n-tuple - I think it's like a coordinate such as (1, 2, 3); correct me if I'm wrong.

2)Linear Transformation:
R^n -> R^m; One-to-One Linear Transformation (can you explain this to me too, or give me a link to a thread that does; our exam had questions regarding this but phrased in a really weird and complicated way).

3) Eigenvalues/Eigenvectors

4)Vector space - Area where vectors are?
->row space
->column space
->null space
->Subspace
->Spanning or Span?
->Linearly Dependent/Independent
->Rank
->Nullity
->Basis and Dimension

That's all I can think of and get from the book so far. I would just like an explanation of what each of these is and maybe how to calculate or determine them. If you can use simple terms to explain, I'll very much appreciate it, and posting a website or link to another thread is very much appreciated as well. Thanks guys.
 
  • #2
An ordered n-tuple is a list of things where order matters and repetitions are allowed. Ordered pairs like (1, 2) and (22, 3), as well as ordered triples like (1, 2, 3), etc., are examples you are familiar with. You can have as many things in an ordered n-tuple as you want, and they don't all have to be of the same "type". For instance, if I say that x is in the set R x N, that means I'm looking at ordered pairs where the first entry is a real number and the second is a natural number. Etc. In the context of linear algebra, these will be vectors, like from calculus and physics.

A linear transformation is a mapping from some set A to some set B such that the following is true: if x is in A and y is in A, and a, b are scalar constants, then L(ax + by) = aL(x) + bL(y). You can equivalently check the following two conditions instead: 1) L(x + y) = L(x) + L(y); 2) L(ax) = aL(x). Linear transformations can be expressed in terms of matrices, and can transform coordinate systems.
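
If it helps to see this concretely, here is a quick check in Python with NumPy (my own illustration; the particular matrix and vectors are arbitrary): any map of the form L(x) = Ax passes both tests.

[code]
import numpy as np

# A concrete linear transformation L(x) = A x from R^3 to R^2.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])

def L(v):
    return A @ v

x = np.array([1.0, -2.0, 4.0])
y = np.array([0.5, 3.0, -1.0])
a, b = 2.0, -3.0

# The single combined condition...
print(np.allclose(L(a*x + b*y), a*L(x) + b*L(y)))   # True
# ...or the two separate conditions.
print(np.allclose(L(x + y), L(x) + L(y)))           # True
print(np.allclose(L(a*x), a*L(x)))                  # True
[/code]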

An eigenvalue for a matrix A is a value for the variable k such that Ax = kx for some nonzero vector x. An eigenvector x corresponds to an eigenvalue k and a matrix A if plugging in x will make the above equation true for the given A, k. To find eigenvalues, simply set |A - kI| = 0 and solve the characteristic equation. To find eigenvectors, you have various options, the most straightforward of which is just plugging in for k and solving for the nullspace of A - kI.
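
For example (just an illustration with an arbitrary 2x2 matrix, using Python/NumPy to do the arithmetic):

[code]
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Characteristic equation: det(A - kI) = k^2 - 7k + 10 = (k - 5)(k - 2) = 0,
# so the eigenvalues are k = 5 and k = 2.
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)            # 5 and 2, in some order

# Each column of `eigenvectors` is an eigenvector: check A x = k x.
for k, x in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ x, k * x))   # True, True
[/code]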

A vector space is a set of vectors closed under addition and scalar multiplication. There is a list of the so called Vector Space Axioms which must all be satisfied for something to be a vector space. This can be found online or in your book. Examples of vector spaces include the 2-d cartesian plane, 3-space, etc.

The row space is the vector space spanned by the row vectors of a matrix A. Specifically, if a matrix A has rows r1, r2, ..., rn, these can be interpreted as column vectors after taking the transpose, and these vectors will span a certain vector space. This spanned space is the row space.

The column space is the same thing as the row space, but with A's columns instead of its rows.

The null space of a matrix A is the set of all vectors x with Ax = 0. The set of all such x just happens to be a vector space (the proof is simple).
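
If you want to see all three spaces for a concrete matrix, SymPy (a Python symbolic-math library) can compute bases for them directly; the matrix below is just an arbitrary example with one repeated row.

[code]
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 0, 1]])

print(A.rowspace())     # basis (as rows) for the row space
print(A.columnspace())  # basis (as columns) for the column space
print(A.nullspace())    # basis for the null space: all x with A x = 0

# Every null-space basis vector really is sent to the zero vector:
for x in A.nullspace():
    print(A * x)        # zero vector
[/code]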

A subspace is a subset of a vector space that is still closed under addition and scalar multiplication. In fact, all you have to do to prove something is a subspace is to show it is a non-empty subset, that it's closed under addition, and that it's closed under scalar multiplication. This is easier than proving something is a vector space. Example: the set of all points on the x-axis is a subspace of the set of all points in the 2-d cartesian plane, and the set of all points in the 2-d cartesian plane is a subspace of the set of all points in 3-space.

A vector space V is spanned by a set of vectors v1, v2, ..., vn iff every vector in V can be written as a linear combination of v1, v2, ..., vn. That is, if x is in V, then it must be true that x = av1 + bv2 + ... + zvn for some combination of scalars a, b, ..., z. If a set of vectors spans a space, it is a spanning set for the space.

A set of vectors v1, v2, ..., vn is linearly independent if av1 + bv2 + ... + zvn = 0 implies that a, b, ..., z must all equal zero. If the set is not linearly independent, it is linearly dependent.
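
A practical way to test this on concrete vectors (a sketch using NumPy; the helper function and example vectors are mine, not part of the definition): stack the vectors as columns and compare the rank to the number of vectors.

[code]
import numpy as np

# Vectors are independent exactly when the only solution of A c = 0 is c = 0,
# i.e. when the rank of the column-stacked matrix equals the number of vectors.
def independent(vectors):
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

print(independent([np.array([1., 0.]), np.array([1., 1.])]))   # True
print(independent([np.array([1., 2.]), np.array([2., 4.])]))   # False: 2*v1 - v2 = 0
[/code]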

The rank of a matrix is the dimension of its row space or, equivalently, of its column space.

Nullity refers to the dimension of the null space.

A basis for a vector space V is a set b1, b2, ..., bn of linearly independent vectors such that the span of b1, b2, ..., bn equals V; that is, the set of linearly independent vectors is a spanning set for V. The number of linearly independent vectors in the basis for V equals the dimension of V. For instance, to span 3-space, you need an x, y, and z vector; so the dimension of 3-space is 3. No matter how you slice it, you need at least 3 vectors to span R^3. Interestingly, any more vectors, and they're not linearly independent (if you add any 4th vector, it can be expressed in terms of the other 3 linearly independent ones).
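
A quick numerical illustration of that last point, again with NumPy (the "extra" vector is an arbitrary choice):

[code]
import numpy as np

# The standard basis of R^3: three linearly independent vectors that span it.
basis = [np.array([1., 0., 0.]),
         np.array([0., 1., 0.]),
         np.array([0., 0., 1.])]
print(np.linalg.matrix_rank(np.column_stack(basis)))   # 3 = dim of R^3

# Add any 4th vector: the rank stays 3, so the new set is no longer
# independent -- the extra vector is a combination of the first three.
extra = np.array([2., -1., 5.])
print(np.linalg.matrix_rank(np.column_stack(basis + [extra])))   # still 3
[/code]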
 
  • #3
1) An ordered n-tuple is an ordered list of n objects. Usually, these objects are numbers. Ordered means that the string of symbols (1, 4, 3) is considered to be different from the string of symbols (1, 3, 4) since 3 and 4 appear in different order. It is common to use ordered n-tuples to identify points in an n-dimensional manifold; the collection of ordered n-tuples is then called a system of coordinates.

2) A linear transformation T is a function that satisfies two particular properties: If u and v are objects in the domain of T and s is a number, then T(s*u) = s*T(u) and T(u + v) = T(u) + T(v). In linear algebra, u and v are usually the elements of a vector space defined over a field F, which is usually R, the set of real numbers.

3) If T is a linear transformation over some n-dimensional vector space V, we are usually interested in breaking T down into simple transformations over subspaces of V. The simplest transformation is if T maps a one-dimensional subspace of V into itself. That is to say, there is some vector v in V such that T(v) = s*v for some s in the field F. T simply rescales this 1-dimensional subspace. Any vector in that subspace, including v, is called an eigenvector of T, and the scalar s is called an eigenvalue of T. These two concepts are extremely important to know for further study and for any application of linear algebra to physics/etc. If V is n-dimensional Euclidean space, then a 1-dimensional subspace is just a line through the origin. T then "does nothing" to that line geometrically; algebraically, the coordinates on the line have been reparametrized only.
You should know these like the back of your hand. I.e., if T is a rotation of R2, then T usually has no eigenvectors (there are some rotations that will have eigenvectors; what are they?). If you know C, the field of complex numbers, then you should be able to figure out that T does have eigenvectors if V is defined over C. If T is a rescaling of R2, then T has 2 linearly independent eigenvectors. If T is a shearing (a rotation of only one basis vector) of R2, then T has an eigenvector in the direction of shearing.

Algebra studies properties of algebraic objects, such as algebras, monoids, rings, groups and fields; linear algebra studies systems of objects that behave linearly algebraically, like addition of points on the real line. These types of algebraic systems have come to be called vector spaces, and a given algebraic system is identified as a vector space over a field F if it satisfies the vector space axioms, which should be given in your text.
 
  • #4
AUMathTutor said:
An eigenvalue for a matrix A is a value for the variable k such that Ax = kx for some nonzero vector x. An eigenvector x corresponds to an eigenvalue k and a matrix A if plugging in x will make the above equation true for the given A, k. To find eigenvalues, simply set |A - kI| = 0 and solve the characteristic equation. To find eigenvectors, you have various options, the most straightforward of which is just plugging in for k and solving for the nullspace of A - kI.



The row space is the vector space spanned by the row vectors of a matrix A. Specifically, if a matrix A has rows r1, r2, ..., rn, these can be interpreted as column vectors after taking the transpose, and these vectors will span a certain vector space. This spanned space is the row space.

The column space is the same thing as the row space, but with A's columns instead of its rows.


A set of vectors v1, v2, ..., vn is linearly independent if av1 + bv2 + ... + zvn = 0 implies that a, b, ..., z must all equal zero. If the set is not linearly independent, it is linearly dependent.

The rank of a matrix is the dimension of its row space or, equivalently, of its column space.

Nullity refers to the dimension of the null space.

When you mention the definition of linear independence, saying how av1 + bv2 + ... + zvn = 0 means a, b, ..., z must all equal zero, wouldn't this go for every vector then? So every vector would be linearly independent? What am I missing from your explanation? If you can explain this, thanks.

For determining the dimension of a matrix, is it counting how many rows it has? I remember that from a lab my previous semester, and it related it to the basis, nullity, row space and column space; can someone explain that? Thanks. You guys seem to know your stuff. Can you guys tell me what an application of eigenvalues and eigenvectors can be? This can help me better understand the unit if I decide to retake it in university.
 
  • #5
slider142 said:

You should know these like the back of your hand. I.e., if T is a rotation of R2, then T usually has no eigenvectors (there are some rotations that will have eigenvectors; what are they?). If you know C, the field of complex numbers, then you should be able to figure out that T does have eigenvectors if V is defined over C. If T is a rescaling of R2, then T has 2 linearly independent eigenvectors. If T is a shearing (a rotation of only one basis vector) of R2, then T has an eigenvector in the direction of shearing.

What did you mean by a rotation of R^2? Did you mean a vector in R^2 or the space itself? Can someone also briefly explain to me what a subset is? Thanks guys.
 
  • #6
DanielT29 said:
When you mention the definition of linear independence, saying how av1 + bv2 + ... + zvn = 0 means a, b, ..., z must all equal zero, wouldn't this go for every vector then? So every vector would be linearly independent? What am I missing from your explanation? If you can explain this, thanks.
Perhaps you are missing the difference between singular and plural! "av1 + bv2 + ... + zvn = 0" involves n vectors, not 1. To talk about "independent" or "dependent" you must talk about a set of vectors, not one vector. For example, the set {<1, 0, 1>, <1, 1, 0>, <3, 1, 2>} is dependent (not independent) because (2)<1, 0, 1> + (1)<1, 1, 0> + (-1)<3, 1, 2> = <0, 0, 0>, where the coefficients are not all 0. That is, a<1, 0, 1> + b<1, 1, 0> + c<3, 1, 2> = <0, 0, 0> does NOT imply a = b = c = 0.
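
If you want to verify that combination numerically, here is a quick Python/NumPy check (just an illustration):

[code]
import numpy as np

v1 = np.array([1., 0., 1.])
v2 = np.array([1., 1., 0.])
v3 = np.array([3., 1., 2.])

# The nontrivial combination from above really gives the zero vector:
print(2*v1 + 1*v2 - 1*v3)                                    # [0. 0. 0.]

# Rank 2 with 3 vectors, so the set is dependent.
print(np.linalg.matrix_rank(np.column_stack([v1, v2, v3])))  # 2
[/code]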

For determining the dimension of a matrix, is it counting how many rows it has? I remember that from a lab my previous semester, and it related it to the basis, nullity, row space and column space; can someone explain that? Thanks. You guys seem to know your stuff. Can you guys tell me what an application of eigenvalues and eigenvectors can be? This can help me better understand the unit if I decide to retake it in university.
I prefer to avoid the phrase "dimension of a matrix" - it's too ambiguous. Some texts talk about a matrix of dimension "m x n", where m is the number of rows and n the number of columns. But in Linear Algebra the term "dimension" really applies to vector spaces, not matrices. Thought of as a vector space, the space of all "m x n" matrices is a vector space of dimension mn. But a matrix also represents a linear transformation from one vector space to another, and those spaces have "dimensions". If a matrix has m columns and n rows, then it must be applied to a vector with m components and results in a vector with n components. That is, it represents a transformation from a vector space of dimension m to a vector space of dimension n.
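
In NumPy terms (purely as an illustration; NumPy writes shapes as (rows, columns)):

[code]
import numpy as np

# A matrix with 2 rows and 3 columns sends vectors with 3 components
# to vectors with 2 components: a map from R^3 to R^2.
A = np.array([[1., 0., 2.],
              [0., 3., 1.]])
x = np.array([1., 1., 1.])

print(A.shape)         # (2, 3)
print((A @ x).shape)   # (2,)
[/code]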

The "kernel" or "null space" of a matrix is the space of all vectors that the matrix maps into the 0 vector. It is a subspace of the vector space of vectors the transformation applies to and the "nullity" is the dimension of the null space. The image of the matrix is the set of all If you multiply a matrix by the vector <1, 0, 0, ...> the result is in the "image" and is just the first column of the matrix. Similarly, if you multiply a matrix by <0, 1, 0, ..., 0> you get the second matrix. The "column space" is the vector space spanned by the columns of the matrix, thought of as vectors, and is the image of the matrix.

One very important application of "eigenvalues" and "eigenvectors" is in differential equations. For example, a linear differential equation can be reduced to a system of first order equations or a matrix equation:
[tex]\frac{dX}{dt}= AX[/tex]
If you can find the eigenvalues and eigenvectors of A, the problem simplifies greatly.
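
Here is a rough sketch of why, using Python with NumPy/SciPy (my own example matrix, not something from the thread): diagonalizing A turns the coupled system into independent scalar equations, each solved by an exponential e^(lambda t).

[code]
import numpy as np
from scipy.linalg import expm

# dX/dt = A X has solution X(t) = exp(tA) X(0).  If A = P D P^{-1} with the
# eigenvalues on the diagonal of D, then exp(tA) = P exp(tD) P^{-1}, and
# exp(tD) just has e^{lambda_i t} on the diagonal -- the system decouples.
A = np.array([[0., 1.],
              [-2., -3.]])
x0 = np.array([1., 0.])
t = 0.7

lam, P = np.linalg.eig(A)                        # eigenvalues -1 and -2
x_eig = P @ np.diag(np.exp(lam * t)) @ np.linalg.inv(P) @ x0
x_ref = expm(A * t) @ x0                         # direct matrix exponential

print(np.allclose(x_eig, x_ref))                 # True
[/code]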
 
  • #7
DanielT29 said:
When you mention the definition of linear independence, saying how av1 + bv2 + ... + zvn = 0 means a, b, ..., z must all equal zero, wouldn't this go for every vector then? So every vector would be linearly independent? What am I missing from your explanation? If you can explain this, thanks.

For determining the dimension of a matrix, is it counting how many rows it has? I remember that from a lab my previous semester, and it related it to the basis, nullity, row space and column space; can someone explain that? Thanks. You guys seem to know your stuff. Can you guys tell me what an application of eigenvalues and eigenvectors can be? This can help me better understand the unit if I decide to retake it in university.

Specifically, one says that a set of vectors is not linearly independent if and only if one of the vectors in the set can be written as a linear combination of the others; that is, if v is that vector, v = a1w1 + ... + anwn, where the ai's are scalars from the field and the wi's are the other vectors in the set. Note that the numbers here are just indices for enumeration, not exponents. Equivalently, a set of vectors is linearly independent if none of the vectors can be written as a linear combination of any of the other vectors in the set. That is to say, visualizing in Euclidean space, consider the vectors {(1, 0, 0), (0, 1, 0), (0, 0, 1)}. Is there any way to add real multiples of two of those vectors and get the third?
As another concrete example, consider a collection of 3 vectors in R3, 3-tuples of real numbers, usually identified with 3-dimensional Euclidean space, but we do not need that extra structure (of distances between points) to study linear independence. Consider the set of vectors {(1, 0, 1), (0, 0, 1), (1, 0, 0)}. It should be obvious that this set is not linearly independent, as the "first" vector in the set can be written as a linear combination of the second two, where the coefficients in the combination are both 1. Visually, all 3 vectors lie in the same plane, and it should only take 2 vectors to describe a 2-dimensional plane. This is made more precise in theorems in your text.
This is useful information: suppose this were a problem, physical or abstract, where you know that the solution space is spanned by these vectors. You then know that you don't need to worry about the "first" vector, as every vector in that space can be written using only the "second" and "third" vectors. You then know that the solution space is not R3, but a specific 2-dimensional subspace.
It is an equivalent definition to state that a set of vectors is linearly independent if the only way they linearly combine to the 0 vector is if each coefficient in the combination is the 0 scalar. We say that two statements are equivalent if they logically imply each other.
 
Last edited:
  • #8
DanielT29 said:
What did you mean by a rotation of R^2? Did you mean a vector in R^2 or the space itself? Can someone also briefly explain to me what a subset is? Thanks guys.

Either process is fine. You can either think of the transformation as rotating every vector in R2 by some angle, or you can think of it as rotating the coordinates themselves, leaving the geometry fixed.
A set is a collection of objects, say the set A = {a, b, c}. 'a' is an element of the set A. The set B = {b, c} contains only elements that are also elements of A. We say that B is a subset of A.
If A is a vector space and B also satisfies all the axioms for being a vector space, we say that B is a subspace of A. This is a specific type of subset, that preserves the algebraic structure you are studying.
Examples:
In R2, considered as a vector space over R, the set of vectors on the x-axis constitutes a subspace of R2, because they satisfy all the axioms for being a vector space themselves. On the other hand, the set of vectors on the line described by the equation y = 2x + 1 does not form a subspace of R2 (it does not contain the 0 vector).
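
A quick check of those two examples in plain Python (the specific points are arbitrary):

[code]
# The x-axis in R^2 is closed under addition and scalar multiplication,
# and it contains the zero vector (0, 0).
u, v = (3.0, 0.0), (-1.5, 0.0)
print((u[0] + v[0], u[1] + v[1]))     # (1.5, 0.0)  -- still on the x-axis
print((4.0 * u[0], 4.0 * u[1]))       # (12.0, 0.0) -- still on the x-axis

# The line y = 2x + 1 is not a subspace: it misses the zero vector, and
# adding two of its points leaves the line.
def on_line(p):
    return p[1] == 2 * p[0] + 1

print(on_line((0.0, 0.0)))                         # False
p, q = (0.0, 1.0), (1.0, 3.0)                      # both on the line
print(on_line((p[0] + q[0], p[1] + q[1])))         # False: (1, 4) is off the line
[/code]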
 
  • #9
Thanks that clarified things, I'll leave this thread open if people want to post any helpful sites, or elaborate more, once again I can't thank you enough guys.
 
  • #10
Often in mathematics what we refer to as a definition is really a misnomer. Texts state that you do this and then you do that and that is how you define the thing that you find.

This however gives little insight into what the thing really is!

Textbook definitions of eigenvectors and eigenvalues provide a good example. We read that eigenvalues are the roots of the polynomial equation det(A - lambda I) = 0. While the statement is true, it provides little insight as to what an eigenvalue is!

Here is an alternate definition/explanation.

To understand what an eigenvalue is, we must first understand what an eigenvector is.

In general when we multiply a vector x by a matrix A, the resulting vector Ax will be a rotation of x and either a stretching or a compression of x by some scalar factor.
There are however some very special vectors x such that when we multiply them by the matrix A there will be no rotation. These special vectors are eigenvectors.
That is, the eigenvectors of a matrix A are the vectors that experience no rotation when we multiply by A.

The eigenvectors will in general still be stretched or compressed. The measure of that stretching or compression is what the eigenvalue lambda is.
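
To see the "no rotation" picture numerically, here is a small Python/NumPy sketch (the matrix and vectors are my own arbitrary choices):

[code]
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])               # eigenvalues 3 and 1

def angle_between(u, v):
    cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

x = np.array([1., 1.])                 # an eigenvector: A x = 3 x
y = np.array([1., 0.])                 # not an eigenvector

print(angle_between(x, A @ x))         # 0.0 -- no rotation, just stretched by 3
print(angle_between(y, A @ y))         # about 26.6 degrees -- rotated
[/code]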
 
  • #11
so eigenvalues are values you can multiply a matrix with, without causing the vectors within the matrix to rotate? Can't that be done with any scalar multiple?
 
  • #12
DanielT29 said:
so eigenvalues are values you can multiply a matrix with, without causing the vectors within the matrix to rotate? Can't that be done with any scalar multiple?

Not exactly. The thing that must be thought about first is the eigenvector. All non-eigenvectors will rotate.

The eigenvalue is only associated with an eigenvector and represents a measure of how much the eigenvector gets stretched or compressed when multiplied by matrix A.
 
  • #13
So you have an eigenvector, and when you factor a certain scalar out, is that the eigenvalue then? But then in that case you can factor any number out and say it's an eigenvalue; what am I missing in this... or is it that you can't factor an eigenvalue out, and you have to use that equation with lambda somehow?
 
  • #14
DanielT29 said:
So you have an eigenvector, and when you factor a certain scalar out, is that the eigenvalue then? But then in that case you can factor any number out and say it's an eigenvalue; what am I missing in this... or is it that you can't factor an eigenvalue out, and you have to use that equation with lambda somehow?


It's great that you are really thinking about this, as eigenvectors and eigenvalues are really important in many aspects of applied mathematics.

So let me try with the explanation again.

You have a square matrix A and you notice that for most vectors Y, when we multiply by the matrix A, the result involves a rotation. However, you note that for some special vectors there is no rotation. Let's say X is one of these special vectors (an eigenvector), so multiplying X by A results in zero rotation; thus the only thing that multiplying X by A does is to stretch it or compress it by a factor of lambda... that is, AX becomes lambda times X.

The eigenvector X goes to lambda times X.

On the other hand, if Y is not an eigenvector, multiplication by A will result in a rotation, and there will be no scalar lambda such that AY is lambda times Y.

Hope this helps.
 
  • #15
In other words, if the linear transformation T acts on a specific vector x in such a way that T(x) = r*x for some number r, then we say that x is an eigenvector of T, and that r is an eigenvalue of T. You should be able to think about some algebraic and geometric implications of this. If x is an eigenvector of T, is s*x necessarily an eigenvector of T for an arbitrary number s? What does that imply about the set of all vectors with that particular eigenvalue? In Euclidean space, what shape does this set have? If T is a map from Rn into itself and we let x be an eigenvector of T, and generate the type of set we found above, what does T do to this set (what is the image of this set under T)? This is your first exposure to the study of invariants, things that don't change under various types of operations. Invariants are central to higher mathematics.
The following two are not as easy as the ones above and will require lots of careful thought, or a good study of several chapters in your text. If x is an eigenvector of T and T is an operator on R2 (meaning it is a function from R2 into itself), is it necessarily true that there is another linearly independent eigenvector of T? What about in R3? The general answer to this question leads to interesting connections.
 
  • #16
That explanation was very good, so basically Ax becomes lambda times x, when x is an eigenvector... does this imply the eigenvalue can be a matrix?

Slider, can you give me hints to the questions you asked? I'm still a bit sketchy on algebra terms; I'm pretty much a novice at this. Can you answer some of the questions you asked for me, in simple terms? I will greatly appreciate it; I think those questions you posed are important. I will also have a try at the questions, but I don't think I can fully even type them here on the thread, I'm not that great at linear algebra yet :(
 
  • #17
DanielT29 said:
That explanation was very good, so basically Ax becomes lambda times x, when x is an eigenvector... does this imply the eigenvalue can be a matrix?


No, by definition an eigenvalue is a scalar.
 
  • #18
In a sense the point of "eigenvalues" is that [itex]Ax = \lambda x[/itex] says that, for eigenvectors at least, A "acts" like a number.
 

1. What is a vector?

A vector is a mathematical object that has both magnitude (size) and direction. In linear algebra, a vector is typically represented as an array of numbers.

2. What is a matrix?

A matrix is a rectangular array of numbers or other mathematical objects arranged in rows and columns. In linear algebra, matrices are used to represent linear transformations and systems of linear equations.

3. What is a scalar?

A scalar is a single numerical value, as opposed to a vector or matrix. In linear algebra, scalars are used to scale vectors and matrices.

4. What is a linear combination?

A linear combination is a combination of two or more vectors or matrices, where each vector/matrix is multiplied by a scalar and then added together. In linear algebra, linear combinations are used to represent linear transformations and to solve systems of linear equations.

5. What is a basis?

A basis is a set of linearly independent vectors that span a vector space. In linear algebra, bases are used to represent all possible combinations of vectors in a vector space.
