ihggin

I tried the following: let [tex]B[/tex] be a basis for [tex]V[/tex] that contains [tex]\alpha[/tex] (we can do this since [tex]\alpha \neq 0[/tex]). Then define [tex]f[/tex] by [tex]f(\alpha)=1[/tex] and [tex]f(b)=0[/tex] for every other basis vector [tex]b[/tex], and extend [tex]f[/tex] to arbitrary vectors by linearity. So if [tex]v=\sum c_i b_i[/tex] for scalars [tex]c_i[/tex] and basis vectors [tex]b_i[/tex] with [tex]b_1= \alpha[/tex], we have [tex]f(v)=c_1[/tex]. However, I then ran into a problem with [tex]f(Tv)[/tex]: [tex]T[/tex] can map the other basis vectors to vectors with a nonzero [tex]\alpha[/tex]-component, so [tex]f(Tv)[/tex] picks up contributions I can't control, which breaks my strategy.
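To make the obstruction concrete, here is a small numerical sketch in [tex]\mathbb{R}^2[/tex] (the matrix [tex]T[/tex] here is just an example I made up, not part of the original problem): coordinates are taken with respect to a basis whose first vector is [tex]\alpha[/tex], and [tex]f[/tex] reads off the first coordinate.

```python
def f(coords):
    """The functional from the construction: f(v) = c_1, the alpha-coordinate."""
    return coords[0]

def apply(T, coords):
    """Apply the matrix T (a list of rows) to a coordinate vector."""
    return [sum(T[i][j] * coords[j] for j in range(len(coords)))
            for i in range(len(T))]

# Example operator: T sends the second basis vector b_2 to alpha + b_2,
# so its matrix in the basis (alpha, b_2) has a nonzero (1,2) entry.
T = [[1, 1],
     [0, 1]]

b2 = [0, 1]             # coordinates of b_2; f(b2) = 0 by construction
print(f(b2))            # 0
print(f(apply(T, b2)))  # 1 -- T pushed b_2 into the alpha direction
```

So even though [tex]f(b_2)=0[/tex], we get [tex]f(Tb_2)=1[/tex]: the value of [tex]f \circ T[/tex] depends on the first row of the matrix of [tex]T[/tex], which the choice of [tex]f[/tex] alone doesn't control.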

Is there some smart choice of basis vectors that can prevent this from happening? Or is this just not a good way of doing the problem?