MHB Help with Proof of Junghenn Proposition 9.2.3 - A Course in Real Analysis

Math Amateur
I am reading Hugo D. Junghenn's book: "A Course in Real Analysis" ...

I am currently focused on Chapter 9: "Differentiation on $$\mathbb{R}^n$$"

I need some help with the proof of Proposition 9.2.3 ...

Proposition 9.2.3 and the preceding relevant Definition 9.2.2 read as follows:
View attachment 7902
View attachment 7903
In the above proof Junghenn lets $$\mathbf{a}_i = ( a_{i1}, a_{i2}, \ldots, a_{in} )$$

and then states that $$T \mathbf{x} = ( \mathbf{a}_1 \cdot \mathbf{x}, \mathbf{a}_2 \cdot \mathbf{x}, \ldots, \mathbf{a}_m \cdot \mathbf{x} )$$ where $$\mathbf{x} = ( x_1, x_2, \ldots, x_n )$$.

(Note: Junghenn defines vectors in $\mathbb{R}^n$ as row vectors.)

Now I believe I can show $$T \mathbf{x}^t = [a_{ij}]_{m \times n} \mathbf{x}^t = ( \mathbf{a}_1 \cdot \mathbf{x}, \mathbf{a}_2 \cdot \mathbf{x}, \ldots, \mathbf{a}_m \cdot \mathbf{x} )^t$$ as follows:
$$T \mathbf{x}^t = [a_{ij}]_{m \times n} \mathbf{x}^t = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}$$
$$= \begin{pmatrix} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n \\ a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n \\ \vdots \\ a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n \end{pmatrix}$$
$$= \begin{pmatrix} \mathbf{a}_1 \cdot \mathbf{x} \\ \mathbf{a}_2 \cdot \mathbf{x} \\ \vdots \\ \mathbf{a}_m \cdot \mathbf{x} \end{pmatrix} = ( \mathbf{a}_1 \cdot \mathbf{x}, \mathbf{a}_2 \cdot \mathbf{x}, \ldots, \mathbf{a}_m \cdot \mathbf{x} )^t$$
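The computation above (each entry of the matrix–vector product is the dot product of a row of $A$ with $\mathbf{x}$) can be checked numerically. This is only a small sketch using NumPy, with a made-up $3 \times 2$ matrix and vector; the values are illustrative, not from the book.

```python
import numpy as np

# Hypothetical m x n matrix A (m = 3, n = 2) and row vector x in R^n,
# chosen only to illustrate the identity above.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
x = np.array([7.0, 8.0])

# The matrix acting on the column vector x^t ...
col = A @ x

# ... agrees entry by entry with the dot products a_i . x of the rows of A with x.
rows = np.array([a_i @ x for a_i in A])

assert np.allclose(col, rows)
print(col)  # [23. 53. 83.]
```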

So I have shown $$T \mathbf{x}^t = [a_{ij}]_{m \times n} \mathbf{x}^t = ( \mathbf{a}_1 \cdot \mathbf{x}, \mathbf{a}_2 \cdot \mathbf{x}, \ldots, \mathbf{a}_m \cdot \mathbf{x} )^t.$$

How do I reconcile or 'square' that with Junghenn's statement that $$T \mathbf{x} = ( \mathbf{a}_1 \cdot \mathbf{x}, \mathbf{a}_2 \cdot \mathbf{x}, \ldots, \mathbf{a}_m \cdot \mathbf{x} )$$ where $$\mathbf{x} = ( x_1, x_2, \ldots, x_n )$$?

(Note: I don't think that taking the transpose of both sides works ... ?)
Hope someone can help ...

Peter
 
Junghenn defines the relation between the linear transformation $T$ and the matrix $A$ by $$T \mathbf{x} = ( \mathbf{a}_1 \cdot \mathbf{x},\, \mathbf{a}_2 \cdot \mathbf{x}, \ldots, \mathbf{a}_m \cdot \mathbf{x} )$$ where $$\mathbf{x} = ( x_1,\, x_2, \ldots, x_n )$$. This, as you show, is equivalent to the statement $(T\mathbf{x})^t = A\mathbf{x}^t.$

In other words, linear transformations act on elements of $\mathbb{R}^n$ (which Junghenn defines as row vectors), but matrices act (by pre-multiplication) on column vectors. There is no great mathematical significance in this. Junghenn probably prefers row vectors simply for convenience, because they take up less room on the printed page. But the $m\times n$ matrix $A$ has to be multiplied by an $n\times1$ vector (in other words, a column vector) in order for the matrix multiplication to be defined.

So if you are talking about linear transformations, you need to use row vectors, but if you want to deal with their associated matrices then you must use column vectors.
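The point can be sketched in code: $T$ acts on row vectors, the matrix $A$ multiplies column vectors, and a transpose connects the two. This is a minimal NumPy illustration with made-up values (the matrix and vector are hypothetical, not taken from Junghenn).

```python
import numpy as np

# Hypothetical m x n matrix A with rows a_1, ..., a_m (m = 3, n = 2).
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

def T(x_row):
    # T x = (a_1 . x, ..., a_m . x): a row vector in R^m.
    return np.array([a_i @ x_row for a_i in A])

x = np.array([7.0, 8.0])      # row vector in R^n
col = A @ x.reshape(-1, 1)    # A applied to the column vector x^t

# (T x)^t = A x^t: transposing the row-vector result gives the column result.
assert np.allclose(T(x).reshape(-1, 1), col)
```

The `reshape(-1, 1)` calls make the row/column distinction explicit, which a plain 1-D NumPy array would otherwise blur.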
 
Opalg said:
Junghenn defines the relation between the linear transformation $T$ and the matrix $A$ by ... you must use column vectors.
Thanks Opalg ...

Knowing that the representation of vectors varies with context like that is important for fully understanding what is going on in the various proofs/results in Euclidean and metric spaces ...

Thanks again for that post!

Peter
 
