Are the Functions Linearly Independent Based on the Matrix of Their Outputs?

Alban1806
There's a question in Charles Curtis's linear algebra book which states:
Let ##f_1, f_2, f_3## be functions in ##\mathscr{F}(R)##.
a. For a set of real numbers ##x_{1},x_{2},x_{3}##, let ##(f_{i}(x_{j}))## be the ##3 \times 3## matrix whose ##(i,j)## entry is ##f_{i}(x_{j})##, for ##1\leq i,j \leq 3##. Prove that ##f_{1}, f_{2}, f_{3}## are linearly independent if the rows of the matrix ##(f_{i}(x_{j}))## are linearly independent.
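Written out entry by entry (just restating the definition above), the matrix in question is
$$A = (f_i(x_j)) = \begin{pmatrix} f_1(x_1) & f_1(x_2) & f_1(x_3) \\ f_2(x_1) & f_2(x_2) & f_2(x_3) \\ f_3(x_1) & f_3(x_2) & f_3(x_3) \end{pmatrix},$$
so the ##i##-th row lists the values of ##f_i## at the three chosen points.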

Obviously if ##f_1,f_2,f_3## are in terms of basis vectors then they are linearly independent.
But can I say that if the matrix ##A = (f_{i}(x_{j}))## is linearly independent, then it is in echelon form?
Therefore, I can row reduce the matrix to a diagonal matrix s.t. ##a_{i,i} \neq 0##.
Since the rows are linearly independent, ##f_{i} \neq 0## for ##1 \leq i \leq 3##, and therefore ##\alpha_{i} f_{i} = 0## only if ##\alpha_{i} = 0##.

Is this a good proof for that question?
 
Alban1806 said:
There's a question in Charles Curtis's linear algebra book which states:
Let ##f_1, f_2, f_3## be functions in ##\mathscr{F}(R)##.
a. For a set of real numbers ##x_{1},x_{2},x_{3}##, let ##(f_{i}(x_{j}))## be the ##3 \times 3## matrix whose ##(i,j)## entry is ##f_{i}(x_{j})##, for ##1\leq i,j \leq 3##. Prove that ##f_{1}, f_{2}, f_{3}## are linearly independent if the rows of the matrix ##(f_{i}(x_{j}))## are linearly independent.

Obviously if ##f_1,f_2,f_3## are in terms of basis vectors then they are linearly independent.
I'm not sure what you mean by that. If three vectors are basis vectors, then they are independent by definition of "basis". On the other hand, again by definition of "basis", any vector can be written as a linear combination of basis vectors.

But can I say that if the matrix ##A = (f_{i}(x_{j}))## is linearly independent, then it is in echelon form?
What do you mean by the "matrix A" being independent? That its rows are independent vectors?

Therefore, I can row reduce the matrix to a diagonal matrix s.t. ##a_{i,i} \neq 0##.
Since the rows are linearly independent, ##f_{i} \neq 0## for ##1 \leq i \leq 3##, and therefore ##\alpha_{i} f_{i} = 0## only if ##\alpha_{i} = 0##.

Is this a good proof for that question?
To show that ##f_1, f_2, f_3## are linearly independent, suppose ##\alpha f_1 + \beta f_2 + \gamma f_3 = 0## (the zero function), which in turn means ##\alpha f_1(x) + \beta f_2(x) + \gamma f_3(x) = 0## for all ##x##. So take ##x## equal to ##x_1, x_2, x_3## in turn.
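Concretely (a sketch of where that substitution leads, using only the notation of the problem): setting ##x = x_j## for ##j = 1, 2, 3## gives
$$\alpha f_1(x_j) + \beta f_2(x_j) + \gamma f_3(x_j) = 0, \qquad j = 1, 2, 3,$$
which says exactly that ##\alpha## times the first row of ##(f_i(x_j))## plus ##\beta## times the second plus ##\gamma## times the third is the zero vector in ##\mathbb{R}^3##. Since those rows are assumed linearly independent, it follows that ##\alpha = \beta = \gamma = 0##.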
 
From what I understand from the question, the matrix ##(f_{i}(x_{j}))## is a ##3 \times 3## matrix whose rows are linearly independent.
That means that we are given that ##\alpha\,\mathrm{row}_1 + \beta\,\mathrm{row}_2 + \gamma\,\mathrm{row}_3 = 0## implies that ##\alpha = \beta = \gamma = 0##, where ##\mathrm{row}_i = (f_i(x_1), f_i(x_2), f_i(x_3))## is the ##i##-th row of the matrix.
I believe I have to show that the ##f_{i}## themselves are linearly independent (as functions).

So, to clarify: saying the rows are linearly independent is not the same thing as saying the ##f_{i}##'s are linearly independent.
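Side by side (a sketch in the same notation as above), the two statements are:
$$\text{rows independent (given):}\quad \alpha\,\mathrm{row}_1 + \beta\,\mathrm{row}_2 + \gamma\,\mathrm{row}_3 = (0,0,0) \text{ in } \mathbb{R}^3 \;\Rightarrow\; \alpha = \beta = \gamma = 0,$$
$$\text{functions independent (to show):}\quad \alpha f_1(x) + \beta f_2(x) + \gamma f_3(x) = 0 \text{ for every } x \in \mathbb{R} \;\Rightarrow\; \alpha = \beta = \gamma = 0.$$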

But I hope my interpretation of the question is correct.
 