Confused about vectors and transformations (linear)

GiuseppeR7
When we are talking about a linear transformation, the argument of the function is a coordinate vector... is this true?
Another question... when I see a column vector, are its entries the coordinates of the vector with respect to a basis? For example, if I see

##\begin{pmatrix} 1 \\ 3 \end{pmatrix}##

with respect to a basis ##\{a_1, a_2\}##... is that "column symbol" the coordinates of this vector:
##v = 1 \cdot a_1 + 3 \cdot a_2## ?
 
GiuseppeR7 said:
When we are talking about a linear transformation, the argument of the function is a coordinate vector... is this true?

Abstractly, one can talk about a linear transformation as a function whose argument is a single symbol, such as ##T(X)## where ##X## is a vector. Linearity requirements such as ##T(\alpha X + Y) = \alpha T(X) + T(Y)## can be stated without reference to particular coordinates. (For example, in 2-D polar coordinates, the above requirement is implemented by coordinate operations that differ from the operations used in a Cartesian-coordinate implementation.)
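If it helps to see the linearity requirement checked numerically, here is a minimal Python/NumPy sketch; the matrix A and the sample vectors are arbitrary choices for illustration, not anything specific to the thread:

```python
import numpy as np

# A hypothetical linear map T on R^2: in Cartesian coordinates it
# acts by multiplication with this (arbitrarily chosen) matrix.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

def T(X):
    return A @ X

# Check the linearity requirement T(alpha*X + Y) = alpha*T(X) + T(Y)
# on arbitrary sample vectors.
rng = np.random.default_rng(0)
X = rng.standard_normal(2)
Y = rng.standard_normal(2)
alpha = 1.7

print(np.allclose(T(alpha * X + Y), alpha * T(X) + T(Y)))  # True
```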

Another question... when I see a column vector, are its entries the coordinates of the vector with respect to a basis? For example, if I see

##\begin{pmatrix} 1 \\ 3 \end{pmatrix}##

with respect to a basis ##\{a_1, a_2\}##... is that "column symbol" the coordinates of this vector:
##v = 1 \cdot a_1 + 3 \cdot a_2## ?

I'd say yes.

When a vector is expressed as a linear combination of a particular (finite) set of basis vectors, a customary way to represent the vector ##X## is as a column vector whose entries are the scalars in that linear combination. (So it is fair to call that representation a "coordinate representation", although it does not necessarily refer to coordinates in Euclidean space.) A linear transformation ##T## then amounts to left-multiplying the column vector by a matrix of constants.
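To make both points concrete, here is a small NumPy sketch; the basis vectors a1, a2 and the matrix M are made-up examples, not anything from the thread:

```python
import numpy as np

# Hypothetical basis vectors a1, a2 (any linearly independent pair
# in R^2 would do; these are made-up for illustration).
a1 = np.array([1.0, 1.0])
a2 = np.array([1.0, -1.0])

# The coordinate (column) vector (1, 3)^T from the question.
coords = np.array([1.0, 3.0])

# v = 1*a1 + 3*a2: stacking a1, a2 as the columns of a basis matrix B,
# multiplying B by the coordinate vector reconstructs v.
B = np.column_stack([a1, a2])
v = B @ coords
print(np.allclose(v, 1 * a1 + 3 * a2))  # True

# A linear transformation T, given by a (hypothetical) matrix M in the
# basis {a1, a2}, acts on v by left-multiplying the coordinate vector.
M = np.array([[0.0, 1.0],
              [1.0, 0.0]])
new_coords = M @ coords  # coordinates of T(v) with respect to {a1, a2}
print(new_coords)        # [3. 1.]
```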
 
ok, thanks a lot!
 