Matrices and linear transformations

In summary: matrices correspond to linear transformations, which are functions from one vector space to another. Every linear transformation can be represented by a matrix; to do so, one first chooses a basis for each vector space, applies the transformation to the basis vectors, and uses the coefficients of the resulting linear combinations to form the columns of the matrix. A vector space is a set of vectors that can be added together and multiplied by scalars, and a basis is a set of vectors that can be used to uniquely represent any vector in that space. A linear combination is a sum of vectors multiplied by scalars, and a linear transformation is a function that preserves vector addition and scalar multiplication.
  • #1
kashiark
I've recently come to the conclusion that I need to learn matrices. I read that matrices correspond to linear transformations and that every linear transformation can be represented by a matrix, but what is a linear transformation, and how do you represent it by a matrix?
 
  • #2
Simplest example (2 dimensions). Let (x,y) represent a point on the plane in some coordinate system. Let (u,v) represent the same point in another coordinate system. Then one can write u=ax+by and v=cx+dy - a linear transformation, since x and y appear only to the first power. The matrix representing the transformation will look like:
(a b)
(c d)

The transformation is then U=MX, where M is the matrix, X=(x,y) and U=(u,v). I presume you know how to multiply a matrix times a vector.
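As a quick numerical sketch of the U=MX idea (the values a=2, b=1, c=0, d=3 below are arbitrary example numbers, not from the thread):

```python
# Apply the 2x2 matrix M = [[a, b], [c, d]] to the vector X = (x, y),
# giving U = (u, v) with u = a*x + b*y and v = c*x + d*y.
def mat_vec(M, X):
    return tuple(sum(row[i] * X[i] for i in range(len(X))) for row in M)

M = [[2.0, 1.0],
     [0.0, 3.0]]   # a=2, b=1, c=0, d=3 (arbitrary example values)
X = (4.0, 5.0)

U = mat_vec(M, X)  # u = 2*4 + 1*5 = 13, v = 0*4 + 3*5 = 15
print(U)  # (13.0, 15.0)
```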
 
  • #3
I don't understand: what is being transformed?
 
  • #4
Here's another view: Let U and V be vector spaces. E.g., U may be R (the real numbers) or R2 (ordered pairs of real numbers, commonly represented graphically by the Cartesian plane). A function T from U into V is called a linear transformation if it satisfies the equation T(r*u + w) = r*T(u) + T(w) for all real numbers r and all vectors u and w in U. In other words, T distributes over addition and scalar multiplication.
Every n-dimensional vector space has an ordered n-tuple of vectors (v1, ..., vn) for which every vector u in that space can be written uniquely as u1v1 + ... + unvn. The numbers ui are called the components of the vector u with respect to the basis (v1, ..., vn). For R2 for example, a common basis is the two vectors (1, 0) and (0, 1).
Using the component form of a vector and the definition of linear transformation given above, derive the general expression for T(u) where T is a linear transformation from U into V in terms of the numbers ui, the basis vectors ei of U, and the basis vectors fj of V.

The following will not make much sense unless you did the above exercise:

You will find that T is specified by certain numbers that show how T maps basis vectors in U into vectors in V, and these numbers naturally have two indices, where the first index corresponds to the basis vector in U and the second to the component of the transformed basis vector in V.
Matrix multiplication is then defined so that if A is this matrix of numbers corresponding to T with respect to two particular bases, then Ax = T(x).
In particular, if E is a matrix representing the chosen basis of U (that is, E is just the components of the basis vectors of U written in column form), then applying A to E will give you the image of the basis under T. So if E is the standard basis (1s and 0s), then the matrix of T is simply the image of the basis. E.g., if T rotates the axes by 45 degrees (Pi/4 radians), and our vectors in R2 are written with respect to the standard basis ((1, 0), (0, 1)) and we want the image written with respect to the standard basis as well, then the matrix corresponding to T is just the rotated basis ((Sqrt(2)/2, Sqrt(2)/2), (-Sqrt(2)/2, Sqrt(2)/2)). Apply it and check.
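The 45-degree rotation can be checked numerically, as the post suggests; this is a small sketch assuming the standard basis for both the input and the image:

```python
import math

# Rotation by 45 degrees: the columns of R are the images of the
# standard basis vectors under the rotation.
c = s = math.sqrt(2) / 2
R = [[c, -s],
     [s,  c]]

def apply(M, x):
    # Multiply the 2x2 matrix M by the vector x.
    return [sum(M[i][j] * x[j] for j in range(2)) for i in range(2)]

e1, e2 = [1.0, 0.0], [0.0, 1.0]
print(apply(R, e1))  # image of e1: (sqrt(2)/2, sqrt(2)/2)
print(apply(R, e2))  # image of e2: (-sqrt(2)/2, sqrt(2)/2)
```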
 
  • #5
kashiark said:
I've recently come to the conclusion that I need to learn matrices. I read that matrices correspond to linear transformations and that every linear transformation can be represented by a matrix, but what is a linear transformation, and how do you represent it by a matrix?

kashiark said:
I don't understand: what is being transformed?
Then do you understand what a "linear transformation" is? Generally, a "transformation" is a function from one vector space to another. To be a linear transformation, L must satisfy L(u+ v)= L(u)+ L(v) and L(xv)= xL(v) for any vectors u and v and any number x. It is the vector, u, say, that is being transformed to the vector L(u).

One important property of any vector space is that there always exists a "basis" for the space. In particular, for a finite-dimensional vector space there is a set of vectors [itex]\{ u_1, u_2, \cdots, u_n\}[/itex] such that any vector can be written as a linear combination of those basis vectors: [itex]u= a_1u_1+ a_2u_2+ \cdots+ a_nu_n[/itex], for some numbers [itex]a_1, a_2, \ldots, a_n[/itex], in a unique way.

That means that if we agree on a specific basis, written in a specific order, we can do away with writing the vectors and just write the numbers: write [itex]u= a_1u_1+ a_2u_2+ \cdots+ a_nu_n[/itex] as <a1, a2, ..., an> and treat it as a vector in Rn.

To write a linear transformation from vector space U to vector space V as a matrix, first select a basis for each, say [itex]\{ u_1, u_2, \cdots, u_n\}[/itex] for U and [itex]\{ v_1, v_2, \cdots, v_m\}[/itex] for V. Here, n and m are the dimensions of U and V respectively. Now apply the linear transformation to u1. The result is in V and so can be written as a linear combination of v1, v2, etc. The coefficients form the first column of the matrix representation. Doing the same to u2 gives the second column, etc.

Note that choosing a different basis for U, or V, or both will give a different matrix representation of the same linear transformation.
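The column-by-column recipe above can be sketched in code; the transformation T below is a made-up example on R2, with the standard basis assumed for both spaces:

```python
def T(v):
    # Example linear transformation on R^2: (x, y) -> (x + 2*y, 3*y).
    # Hypothetical, chosen only to illustrate the construction.
    x, y = v
    return (x + 2 * y, 3 * y)

basis = [(1.0, 0.0), (0.0, 1.0)]  # standard basis of the domain

# Each column of the matrix is T applied to one basis vector.
columns = [T(u) for u in basis]

# Rewrite the columns as rows for conventional row-major display.
matrix = [[columns[j][i] for j in range(2)] for i in range(2)]
print(matrix)  # [[1.0, 2.0], [0.0, 3.0]]
```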
 
  • #6
What are vector spaces, vector bases, transformations, linear combinations, and linear transformations? I don't know any of these things. Tell me about them at a beginner level. I know generally about simple vectors (not plane vectors) and 3d matrices!
I want to know in simple, clear terms, not involving any complex terms!
 
  • #7
ZunairaMaryam said:
What are vector spaces, vector bases, transformations, linear combinations, and linear transformations? I don't know any of these things. Tell me about them at a beginner level. I know generally about simple vectors (not plane vectors) and 3d matrices!
I want to know in simple, clear terms, not involving any complex terms!


I advise you to get an elementary text or an elementary course on vector analysis. All those terms would be described there. You might try wikipedia.
 

1. What is a matrix and what is it used for?

A matrix is a rectangular array of numbers, symbols, or expressions arranged in rows and columns. It is used to represent and manipulate data in many areas of mathematics and science, such as solving systems of equations, performing transformations, and analyzing data sets.

2. How do you add and subtract matrices?

Matrices can be added and subtracted if they have the same dimensions. To add or subtract two matrices, simply add or subtract the corresponding elements in each matrix. The resulting matrix will have the same dimensions as the original matrices.
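A minimal sketch of that element-wise rule, assuming two matrices of matching dimensions:

```python
def mat_add(A, B):
    # Element-wise addition; A and B must have the same dimensions.
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))  # [[6, 8], [10, 12]]
```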

3. What is a linear transformation?

A linear transformation is a function that maps one vector space to another in a way that preserves the linear structure of the original space: the transformation preserves the operations of vector addition and scalar multiplication.
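The two preserved operations can be verified numerically for a sample map; the T below is a hypothetical example, not anything specific from the thread:

```python
def T(v):
    # Example linear map on R^2: (x, y) -> (2x - y, x + 3y).
    x, y = v
    return (2 * x - y, x + 3 * y)

def add(u, w):
    return tuple(a + b for a, b in zip(u, w))

def scale(r, u):
    return tuple(r * a for a in u)

u, w, r = (1.0, 2.0), (3.0, -1.0), 4.0
print(T(add(u, w)) == add(T(u), T(w)))   # True: preserves addition
print(T(scale(r, u)) == scale(r, T(u)))  # True: preserves scalar multiplication
```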

4. How do you perform a matrix multiplication?

To multiply two matrices, the number of columns in the first matrix must be equal to the number of rows in the second matrix. To find the element in the resulting matrix at row i and column j, multiply the elements in row i of the first matrix by the corresponding elements in column j of the second matrix, and then add the products.
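The row-times-column rule can be sketched as follows (a minimal pure-Python version, assuming compatible dimensions):

```python
def mat_mul(A, B):
    # Entry (i, j) of the product is the dot product of row i of A
    # with column j of B; columns of A must equal rows of B.
    n, k, m = len(A), len(B), len(B[0])
    assert len(A[0]) == k, "columns of A must equal rows of B"
    return [[sum(A[i][p] * B[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]
print(mat_mul(A, B))  # [[19, 22], [43, 50]]
```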

5. How can matrices be used to solve systems of equations?

Matrices can be used to solve systems of equations by representing the system in matrix form and then using techniques such as Gaussian elimination or Cramer's rule to solve for the unknown variables. This method is especially useful for solving systems with a large number of equations and variables.
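As a sketch of the Gaussian-elimination approach (pure Python, with partial pivoting; the example system 2x + y = 5, x + 3y = 10 is made up for illustration):

```python
def solve(A, b):
    # Solve A x = b by Gaussian elimination with partial pivoting.
    n = len(A)
    # Build the augmented matrix [A | b].
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # Pivot: swap in the row with the largest entry in this column.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # Eliminate the entries below the pivot.
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # Back-substitution from the last row upward.
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# 2x + y = 5, x + 3y = 10  ->  x = 1, y = 3
print(solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))
```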
