Help with Proof of Junghenn Proposition 9.2.3 - A Course in Real Analysis

SUMMARY

The discussion focuses on Proposition 9.2.3 from Hugo D. Junghenn's "A Course in Real Analysis," specifically addressing the relationship between linear transformations and matrices in the context of differentiation on $$\mathbb{R}^n$$. The proof demonstrates that the linear transformation $$T$$, defined by $$T \mathbf{x} = ( \mathbf{a}_1 \cdot \mathbf{x}, \mathbf{a}_2 \cdot \mathbf{x}, \ldots, \mathbf{a}_m \cdot \mathbf{x} )$$, is equivalent to the matrix multiplication representation $$(T\mathbf{x})^t = A\mathbf{x}^t$$. The discussion clarifies that while linear transformations act on row vectors, matrix multiplication requires column vectors to be defined, emphasizing the contextual nature of vector representation in mathematical proofs.

PREREQUISITES
  • Understanding of linear transformations in vector spaces
  • Familiarity with matrix multiplication and vector representation
  • Knowledge of Euclidean spaces and metric spaces
  • Basic concepts of differentiation in real analysis
NEXT STEPS
  • Study the implications of linear transformations in vector spaces
  • Explore matrix representation of linear transformations in depth
  • Learn about the properties of Euclidean and metric spaces
  • Investigate the role of differentiation in real analysis, particularly in higher dimensions
USEFUL FOR

Students of real analysis, mathematicians focusing on linear algebra, and anyone interested in the interplay between linear transformations and matrix representations in higher-dimensional spaces.

Math Amateur
I am reading Hugo D. Junghenn's book: "A Course in Real Analysis" ...

I am currently focused on Chapter 9: "Differentiation on $$\mathbb{R}^n$$"

I need some help with the proof of Proposition 9.2.3 ...

Proposition 9.2.3 and the preceding relevant Definition 9.2.2 read as follows:
[Attachments 7902 and 7903: the text of Definition 9.2.2 and Proposition 9.2.3]
In the above proof Junghenn lets $$ \mathbf{a}_i = ( a_{i1}, a_{i2}, \ldots, a_{in} ) $$

and then states that $$T \mathbf{x} = ( \mathbf{a}_1 \cdot \mathbf{x}, \mathbf{a}_2 \cdot \mathbf{x}, \ldots, \mathbf{a}_m \cdot \mathbf{x} )$$ where $$\mathbf{x} = ( x_1, x_2, \ldots, x_n )$$.

(Note: Junghenn defines vectors in $$\mathbb{R}^n$$ as row vectors.)

Now I believe I can show that $$T \mathbf{x}^t = [a_{ij} ]_{ m \times n } \mathbf{x}^t = ( \mathbf{a}_1 \cdot \mathbf{x}, \mathbf{a}_2 \cdot \mathbf{x}, \ldots, \mathbf{a}_m \cdot \mathbf{x} )^t$$ as follows:
$$T \mathbf{x}^t = [a_{ij} ]_{ m \times n } \mathbf{x}^t = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}$$
$$= \begin{pmatrix} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n \\ a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n \\ \vdots \\ a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n \end{pmatrix} $$
$$= \begin{pmatrix} \mathbf{a}_1 \cdot \mathbf{x} \\ \mathbf{a}_2 \cdot \mathbf{x} \\ \vdots \\ \mathbf{a}_m \cdot \mathbf{x} \end{pmatrix}$$
$$= ( \mathbf{a}_1 \cdot \mathbf{x}, \mathbf{a}_2 \cdot \mathbf{x}, \ldots, \mathbf{a}_m \cdot \mathbf{x} )^t $$
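As a quick numerical sanity check of the computation above (a NumPy sketch with made-up entries, $m = 3$, $n = 2$; not from the book), the matrix product $A\mathbf{x}^t$ does have exactly the dot products $\mathbf{a}_i \cdot \mathbf{x}$ as its entries:

```python
import numpy as np

# Arbitrary 3x2 example: m = 3, n = 2; the rows of A are a_1, a_2, a_3
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
x = np.array([7.0, 8.0])  # x as a vector in R^n

# The matrix acting on the column vector x^t
col = A @ x               # shape (3,): the entries of A x^t

# The entry-wise dot products a_i . x
dots = np.array([A[i] @ x for i in range(A.shape[0])])

print(col)                     # [23. 53. 83.]
print(np.allclose(col, dots))  # True
```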

So I have shown that $$T \mathbf{x}^t = [a_{ij} ]_{ m \times n } \mathbf{x}^t = ( \mathbf{a}_1 \cdot \mathbf{x}, \mathbf{a}_2 \cdot \mathbf{x}, \ldots, \mathbf{a}_m \cdot \mathbf{x} )^t.$$

How do I reconcile or 'square' that with Junghenn's statement that $$T \mathbf{x} = ( \mathbf{a}_1 \cdot \mathbf{x}, \mathbf{a}_2 \cdot \mathbf{x}, \ldots, \mathbf{a}_m \cdot \mathbf{x} )$$ where $$\mathbf{x} = ( x_1, x_2, \ldots, x_n )$$?

(Note: I don't think that simply taking the transpose of both sides works ... ?)
Hope someone can help ...

Peter
 
Junghenn defines the relation between the linear transformation $T$ and the matrix $A$ by $$T \mathbf{x} = ( \mathbf{a}_1 \cdot \mathbf{x},\, \mathbf{a}_2 \cdot \mathbf{x}, \ldots , \mathbf{a}_m \cdot \mathbf{x} )$$ where $$\mathbf{x} = ( x_1,\, x_2, \ldots, x_n )$$. This, as you show, is equivalent to the statement $(T\mathbf{x})^t = A\mathbf{x}^t.$

In other words, linear transformations act on elements of $\mathbb{R}^n$ (which Junghenn defines as row vectors), but matrices act (by pre-multiplication) on column vectors. There is no great mathematical significance in this. Junghenn probably prefers row vectors simply for convenience, because they take up less room on the printed page. But the $m\times n$ matrix $A$ has to be multiplied by an $n\times1$ vector (in other words, a column vector) in order for the matrix multiplication to be defined.

So if you are talking about linear transformations, you need to use row vectors, but if you want to deal with their associated matrices then you must use column vectors.
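The same point can be seen concretely (a NumPy sketch with illustrative entries, not from the book): keeping $\mathbf{x}$ as a $1 \times n$ row, the matrix must pre-multiply the column $\mathbf{x}^t$, and transposing the result back, $T\mathbf{x} = (A\mathbf{x}^t)^t = \mathbf{x}A^t$, recovers the row vector Junghenn writes.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])      # m x n with m = 3, n = 2
x_row = np.array([[7.0, 8.0]])  # x kept as a 1 x n row vector

# Matrices pre-multiply column vectors: A x^t is m x 1
col = A @ x_row.T               # shape (3, 1)

# Transposing back gives T x as a 1 x m row: (A x^t)^t = x A^t
row = x_row @ A.T               # shape (1, 3)

print(np.array_equal(col.T, row))  # True
```

So acting on row vectors by $T$ corresponds to post-multiplication by $A^t$, while the matrix form $A\mathbf{x}^t$ uses columns; the two bookkeeping conventions carry identical information.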
 
Thanks Opalg ...

Knowing that the representation of vectors varies with context like that is important for fully understanding what is going on in the various proofs and results in Euclidean and metric spaces ...

Thanks again for that post!

Peter
 
