My first proof ever - Linear algebra

In summary, the author is trying to learn linear algebra in a more rigorous manner, using Hefferon's textbook, and is asking for feedback on a proof about a homogeneous linear system.
  • #1
MexChemE
Homework Statement
Prove that if ##\mathbf{s}## and ##\mathbf{t}## satisfy a homogeneous system then so do these vectors.
a) ##\mathbf{s} +\mathbf{t}##
b) ##3 \mathbf{s}##
c) ##k \mathbf{s} + j \mathbf{t}## for ##k, j \in \mathbb{R}##
Relevant Equations
Not applicable
First, a little context. It's been a while since I last posted here. I am a chemical engineer who is currently preparing for grad school, and I've been reviewing linear algebra and multivariable calculus for the last couple of months. I have always been successful at math (at least in the computational way it is taught in engineering school), and I feel I have a nice understanding of calculus through physical/graphical intuition. However, I cannot say the same about linear algebra, so I've been trying to learn it in a more rigorous manner through Hefferon's textbook. This is the first proof I have ever written, and I know it is very basic, but I was hoping someone could help me review it and maybe give me a few tips (I intend to go through the whole book and do most exercises/proofs).

Now the actual proof:

Let ##\mathbf{s} , \mathbf{t} \in \mathbb{R}^{n}## so that ##\mathbf{s} = (s_1,...,s_n)## and ##\mathbf{t} = (t_1,...,t_n)##.
Let ##A \mathbf{x} = \mathbf{0}## be a homogeneous linear system where ##\mathbf{x} \in \mathbb{R}^{n}## and ##A## is the matrix of coefficients of the system of size ##m \times n##.
Let ##\mathbf{s}## and ##\mathbf{t}## be solutions to the aforementioned system, so that the following statements are true: ##A \mathbf{s} = \mathbf{0}## and ##A \mathbf{t} = \mathbf{0}##.
In other words, for the ##i##-th equation of the system, the following hold:
$$a_{i,1} \ s_1 + ... + a_{i, n} \ s_n = 0$$
$$a_{i,1} \ t_1 + ... + a_{i, n} \ t_n = 0$$

a)
Let ##\left(\mathbf{s} + \mathbf{t}\right) \in \mathbb{R}^{n}## so that ##\left(\mathbf{s} + \mathbf{t}\right) = (s_1+t_1,...,s_n+t_n)##.
We must show ##\mathbf{s} + \mathbf{t}## also satisfies the system, or in other words, that ##A (\mathbf{s} + \mathbf{t}) = \mathbf{0}## is true. Multiplying matrix ##A## times vector ##(\mathbf{s} + \mathbf{t})##, we get ##m## equations, where any ##i##-th equation is:
$$a_{i,1} \ (s_1+t_1) + ... + a_{i, n} \ (s_n+t_n) = b_i$$
Using the distributive property of the scalar multiplication of a vector, we can regroup terms:
$$(a_{i,1} \ s_1 + ... + a_{i, n} \ s_n) + (a_{i,1} \ t_1 + ... + a_{i, n} \ t_n) = b_i$$
But we know each of the expressions inside parentheses is equal to zero, so ##b_i = 0## for any ##i##-th equation. So ##(\mathbf{s} + \mathbf{t})## is a solution to the system and ##A (\mathbf{s} + \mathbf{t}) = \mathbf{0}## is true.

b)
Let ##3 \mathbf{s} \in \mathbb{R}^{n}## so that ##3 \mathbf{s} = (3s_1,...,3s_n)##.
We must show ##3 \mathbf{s}## also satisfies the system, or in other words, that ##A (3 \mathbf{s}) = \mathbf{0}## is true. Multiplying matrix ##A## times vector ##3 \mathbf{s}##, we get ##m## equations, where any ##i##-th equation is:
$$a_{i,1} \ (3 s_1) + ... + a_{i, n} \ (3 s_n) = b_i$$
Here we can see that the number three is a factor in every term, so we can extract it:
$$3 (a_{i,1} \ s_1 + ... + a_{i, n} \ s_n) = b_i$$
But we know that the expression inside parentheses is equal to zero, so ##b_i = 0## for any ##i##-th equation. So ##3 \mathbf{s}## is a solution to the system and ##A (3 \mathbf{s}) = \mathbf{0}## is true.

c)
Let ##\left(k\mathbf{s} + j\mathbf{t}\right) \in \mathbb{R}^{n}## so that ##\left(k\mathbf{s} + j\mathbf{t}\right) = (ks_1+jt_1,...,ks_n+jt_n)##, where ##k, j \in \mathbb{R}##.
We must show ##k\mathbf{s} + j\mathbf{t}## also satisfies the system, or in other words, that ##A (k\mathbf{s} + j\mathbf{t}) = \mathbf{0}## is true. Multiplying matrix ##A## times vector ##(k\mathbf{s} + j\mathbf{t})##, we get ##m## equations, where any ##i##-th equation is:
$$a_{i,1} \ (ks_1+jt_1) + ... + a_{i, n} \ (ks_n+jt_n) = b_i$$
Using the distributive property of the scalar multiplication of a vector, we can regroup terms:
$$(a_{i,1} \ ks_1 + ... + a_{i, n} \ ks_n) + (a_{i,1} \ jt_1 + ... + a_{i, n} \ jt_n) = b_i$$
We notice ##k## is a factor in every term inside the first parentheses, and ##j## is a factor in every term inside the second parentheses, so we can extract them:
$$k(a_{i,1} \ s_1 + ... + a_{i, n} \ s_n) + j(a_{i,1} \ t_1 + ... + a_{i, n} \ t_n) = b_i$$
But we know each of the expressions inside parentheses is equal to zero, so ##b_i = 0## for any ##i##-th equation and any values of ##k## and ##j##. So ##(k\mathbf{s} + j\mathbf{t})## is a solution to the system and ##A (k\mathbf{s} + j\mathbf{t}) = \mathbf{0}## is true.

QED.
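As a quick numerical sanity check of the three claims (not part of the proof itself; the matrix ##A## and the vectors below are arbitrary examples I chose, with the third column of ##A## equal to the sum of the first two so the null space is nontrivial):

```python
# Arbitrary example system A x = 0 (2 equations, 3 unknowns) and two
# solutions s and t; all specific values here are illustrative choices.
A = [[1, 2, 3],
     [4, 5, 9]]          # col 3 = col 1 + col 2, so (1, 1, -1) solves A x = 0
s = [1, 1, -1]
t = [2, 2, -2]           # another null-space vector

def times(M, v):
    """Matrix-vector product, written as the row-wise sums used in the proof."""
    return [sum(M[i][c] * v[c] for c in range(len(v))) for i in range(len(M))]

# s and t are indeed solutions of the homogeneous system.
assert times(A, s) == [0, 0] and times(A, t) == [0, 0]

k, j = -2, 7
combos = {
    "s + t":     [si + ti for si, ti in zip(s, t)],
    "3 s":       [3 * si for si in s],
    "k s + j t": [k * si + j * ti for si, ti in zip(s, t)],
}
for name, v in combos.items():
    assert times(A, v) == [0, 0], name
print("all three combinations solve A x = 0")
```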

One small comment: up to this point in the book, the author has defined scalar multiplication of a vector, but not its distributive property, so I felt a little cheap using it in the proof, but I saw no other way. I also expressed linear systems in terms of matrix-vector multiplication, which hasn't been introduced in the book either. Is this considered fair game?

Thanks in advance for any input!
 
  • #2
MexChemE said:
Homework Statement:: Prove that if ##\mathbf{s}## and ##\mathbf{t}## satisfy a homogeneous system then so do these vectors.
a) ##\mathbf{s} + \mathbf{t}##
b) ##3 \mathbf{s}##
c) ##k \mathbf{s} + j \mathbf{t}## for ##k, j \in \mathbb{R}##
Relevant Equations:: Not applicable

For part a), you can use the properties of matrix multiplication.
Given that s and t are solutions of the equation Ax = 0,
##A(\mathbf{s} + \mathbf{t}) = A\mathbf{s} + A\mathbf{t} = \mathbf{0} + \mathbf{0} = \mathbf{0}##
and similar for the other two parts.

However, based on your final comment, I'm not sure that the above would be allowed.
It seems strange to me, though, that you're working with vectors in ##\mathbb R^n## when neither the basic properties of a vector space nor matrix operations have been presented yet. I'm not familiar with the book you're using, so I'm at a loss here.
 
  • #3
Mark44 said:
However, based on your final comment, I'm not sure that the above would be allowed.
It seems strange to me, though, that you're working with vectors in ##\mathbb R^n##, but the basic properties of a vector space haven't been presented yet or that matrix operations haven't been presented.
Actually, the ##\mathbb{R}^{n}## part was mine too, it seems I used my previous knowledge from computational linear algebra as a crutch, but of course, I could have defined my vectors as having ##n## components, without necessarily saying they are in ##\mathbb{R}^{n}##.
Mark44 said:
I'm not familiar with the book you're using, so I'm at a loss here.
The book is an open source textbook from American professor Jim Hefferon. He also has lectures on YouTube supporting the textbook, which I usually watch before reading through the corresponding section. You can find more info about it here. I decided to give it a try as I've seen many people recommend it as an alternative to books on the level of Strang or Friedberg.
 
  • #4
MexChemE said:
Prove that if ##\mathbf{s}## and ##\mathbf{t}## satisfy a homogeneous system then so do these vectors.
a) ##\mathbf{s} +\mathbf{t}##
b) ##3 \mathbf{s}##
c) ##k \mathbf{s} + j \mathbf{t}## for ##k, j \in \mathbb{R}##
I believe most linear algebra textbooks would consider the short proof in my previous post to be sufficient.
For part c, I think something like this would be fine:
##A(k\mathbf{s} + j\mathbf{t}) = A(k\mathbf{s}) + A(j\mathbf{t}) = kA\mathbf{s} + jA\mathbf{t} = k\mathbf{0} + j\mathbf{0} = \mathbf{0}##
Therefore ks + jt is a solution to the homogeneous equation.
I have used some basic properties of multiplication of a vector by a matrix. See https://en.wikipedia.org/wiki/Matrix_multiplication, under General Properties.
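For instance, the identity chain above can be spot-checked with a small example (the matrix, vectors, and scalars below are arbitrary values chosen purely for illustration):

```python
# Check A(ks + jt) = k(As) + j(At) entry by entry for an arbitrary
# 2x3 matrix A and vectors s, t.
A = [[2, -1, 0],
     [1, 3, 5]]
s, t = [1, 0, 2], [-1, 4, 3]
k, j = 3, -2

def times(M, v):
    """Row-wise matrix-vector product."""
    return [sum(row[c] * v[c] for c in range(len(v))) for row in M]

lhs = times(A, [k * si + j * ti for si, ti in zip(s, t)])   # A(ks + jt)
rhs = [k * a + j * b for a, b in zip(times(A, s), times(A, t))]  # k(As) + j(At)
assert lhs == rhs
print(lhs, rhs)  # the two sides agree entry by entry
```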
 
  • #5
Alright, thanks a lot!
 

