Can we think of a linear transformation from R^m → R^n as mapping scalars to vectors? Let me say what I mean. Say we have some linear transformation L from R^m to R^n, represented by an n×m matrix A = (a_ij). (Sorry for the plain-text representation, I just wasn't sure how to typeset a matrix in this forum.) So L takes as inputs the scalars (x1, x2, ..., xm) and gives as output:

L(x1, x2, ..., xm) = (a11·x1 + a12·x2 + ... + a1m·xm,  a21·x1 + a22·x2 + ... + a2m·xm,  ...,  an1·x1 + an2·x2 + ... + anm·xm)

Isn't this just like saying:

L(x1, x2, ..., xm) = (a11·x1 + ... + a1m·xm)·e1 + (a21·x1 + ... + a2m·xm)·e2 + ... + (an1·x1 + ... + anm·xm)·en

where e1, ..., en are the standard basis vectors of R^n? (I'm not sure what standard vectors you would use if it's n-dimensional; I only know i, j and k, so I've just written e1 through en.)

So can't we think of the transformation like a vector field? Isn't this what the gradient ∇ does — take a scalar field and transform it into a vector field? Now, we can't take the gradient of a vector field, right? So how can we go about finding max and min points then? They occur when the gradient is 0, but how can I really think of the gradient of this?
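To make the "scalars in, vector out" picture concrete, here is a small NumPy sketch of the map described above. The specific matrix A and input point x are made-up examples (nothing from the original post); the point is just that each output component is a scalar-valued linear function of the inputs, and the whole output is their sum against the standard basis vectors:

```python
import numpy as np

# A hypothetical 2x3 matrix A, so L: R^3 -> R^2 (m = 3 inputs, n = 2 outputs).
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

x = np.array([1.0, 0.0, -1.0])  # a point (x1, x2, x3) in R^3

# L takes the m scalars x1..xm and returns an n-component vector:
y = A @ x  # y_i = a_i1*x1 + a_i2*x2 + ... + a_im*xm

# Component-wise, this matches the expansion y = sum_i (row_i . x) * e_i,
# where e_1, ..., e_n are the standard basis vectors of R^n:
e = np.eye(2)  # rows are e_1 and e_2 of R^2
y_expanded = sum((A[i] @ x) * e[i] for i in range(2))

assert np.allclose(y, y_expanded)
print(y)
```

So viewing L as assigning a vector in R^n to each point of R^m is exactly the same data as the matrix A; when n = m this is literally a (linear) vector field on R^m.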