Proving dependent columns when the rows are dependent

The discussion centers on proving that if the vector (a, b) is a multiple of (c, d), then (a, c) is a multiple of (b, d). The user presents their solution using assumptions and a system of equations but struggles to connect the final equation to proving the initial assumptions. Another participant advises against assuming what needs to be proven and emphasizes working with the given condition that (a, b) is a multiple of (c, d). They clarify that since all variables are nonzero, the relationship can be expressed as a matrix where the first column is a constant multiple of the second, leading to the desired conclusion.
kostoglotov
I feel like I almost understand the solution I've come up with, but a step in the logic is missing. I'll post the question and my solution in LaTeX form.

The text question is paraphrased in LaTeX below; it can be seen in its entirety via this imgur link: http://i.imgur.com/41fvDRN.jpg

<br /> \ if \begin{pmatrix}<br /> a\\<br /> b<br /> \end{pmatrix} \ is \ a \ multiple \ of \begin{pmatrix}<br /> c\\<br /> d<br /> \end{pmatrix} \ with \ abcd \neq 0, show \ that \begin{pmatrix}<br /> a\\<br /> c<br /> \end{pmatrix} \ is \ a \ multiple \ of \begin{pmatrix}<br /> b\\<br /> d<br /> \end{pmatrix}<br />

My solution so far

<br /> assume \ \lambda \begin{pmatrix}<br /> c\\<br /> d<br /> \end{pmatrix} = \begin{pmatrix}<br /> a\\<br /> b<br /> \end{pmatrix} \rightarrow \begin{matrix}<br /> a = \lambda c\\<br /> b = \lambda d<br /> \end{matrix}<br />

<br /> now \ assume \ \gamma \begin{pmatrix}<br /> b\\<br /> d<br /> \end{pmatrix} = \begin{pmatrix}<br /> a\\<br /> c<br /> \end{pmatrix} \rightarrow \begin{matrix}<br /> a = \gamma b\\<br /> c = \gamma d<br /> \end{matrix}<br />

So I'm making two assumptions

Let's take the assumptions and put them into a system of four equations

$$1:\ a = \lambda c \qquad 2:\ b = \lambda d \\ 3:\ a = \gamma b \qquad 4:\ c = \gamma d$$

Now if we sub 3 into 1 to get equation A, then sub 2 into A to get B, and then sub 4 into B to get C:

$$C \rightarrow \lambda \gamma d = \lambda \gamma d$$

Similarly, if we sub 2 into 3 to get A, then 1 into A to get B, and 4 into B to get C:

$$C \rightarrow \lambda \gamma d = \lambda \gamma d$$

I want to stop trying all the possible ways to get C now, because I want to look for a generalized way to show that they will all end up at the same point.

But more than this... what is the step of logic that connects the final equation C to proving the first two assumptions? I feel like this should prove the assumptions, but I don't know exactly how, or how to express it.

Thanks :)
 
##\gamma = \frac{c}{d}##
Then ##\frac{a}{b} = \frac{\lambda c}{\lambda d} = \frac{c}{d}##.
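Spelled out (a sketch using only the given relations ##a = \lambda c## and ##b = \lambda d##, with ##\gamma = c/d## well defined since ##d \neq 0##):
$$\gamma \begin{pmatrix} b\\ d \end{pmatrix} = \frac{c}{d} \begin{pmatrix} \lambda d\\ d \end{pmatrix} = \begin{pmatrix} \lambda c\\ c \end{pmatrix} = \begin{pmatrix} a\\ c \end{pmatrix},$$
so ##(a, c)## is a multiple of ##(b, d)##, with the constant exhibited rather than assumed.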
 
kostoglotov said:
So I'm making two assumptions
You have to prove those assumptions.
And it is hard to prove things that are wrong...
Actually, there are just two special cases where this assumption holds.

If you think you need "assumptions", look for counterexamples first. They are easy to find here, and they save a lot of work.
 
kostoglotov said:
My solution so far

<br /> assume \ \lambda \begin{pmatrix}<br /> c\\<br /> d<br /> \end{pmatrix} = \begin{pmatrix}<br /> a\\<br /> b<br /> \end{pmatrix} \rightarrow \begin{matrix}<br /> a = \lambda c\\<br /> b = \lambda d<br /> \end{matrix}<br />

<br /> now \ assume \ \gamma \begin{pmatrix}<br /> b\\<br /> d<br /> \end{pmatrix} = \begin{pmatrix}<br /> a\\<br /> c<br /> \end{pmatrix} \rightarrow \begin{matrix}<br /> a = \gamma b\\<br /> c = \gamma d<br /> \end{matrix}<br />
With this second assumption, you are assuming the fact that you are supposed to prove! That won't get you anywhere. Instead, work with the first assumption, which is given: ##(a,b)## is a multiple of ##(c,d)##. So that means there is some constant ##\lambda## such that ##a = \lambda c## and ##b = \lambda d##.

By the way, note that the condition ##abcd \neq 0## means that all four of ##a,b,c,d## are nonzero, and therefore ##\lambda## is also nonzero.

Now we can rewrite the matrix as
$$\begin{pmatrix}
a & b \\
c & d \\
\end{pmatrix} =
\begin{pmatrix}
\lambda c & \lambda d \\
c & d \\
\end{pmatrix}$$
From this, you can easily see that the first column is a constant multiple of the second column. (What is the constant?)
 
jbunniii said:
With this second assumption, you are assuming the fact that you are supposed to prove! That won't get you anywhere. Instead, work with the first assumption, which is given: ##(a,b)## is a multiple of ##(c,d)##. So that means there is some constant ##\lambda## such that ##a = \lambda c## and ##b = \lambda d##.

By the way, note that the condition ##abcd \neq 0## means that all four of ##a,b,c,d## are nonzero, and therefore ##\lambda## is also nonzero.

Now we can rewrite the matrix as
$$\begin{pmatrix}
a & b \\
c & d \\
\end{pmatrix} =
\begin{pmatrix}
\lambda c & \lambda d \\
c & d \\
\end{pmatrix}$$
From this, you can easily see that the first column is a constant multiple of the second column. (What is the constant?)

Nice one! Very clear explanation, thanks :)
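
As a quick numeric sanity check of the argument above (a minimal sketch with arbitrary made-up values; any nonzero ##\lambda##, ##c##, ##d## would do):

```python
# Sanity check: if (a, b) = lam * (c, d) with a, b, c, d all nonzero,
# then (a, c) = (c / d) * (b, d). The values below are arbitrary examples.
lam = 2.5
c, d = 3.0, -4.0          # nonzero vector (c, d)
a, b = lam * c, lam * d   # (a, b) is a multiple of (c, d) by construction

gamma = c / d             # candidate constant, well defined since d != 0

assert abs(a - gamma * b) < 1e-12   # first component:  a = gamma * b
assert abs(c - gamma * d) < 1e-12   # second component: c = gamma * d
print(f"(a, c) = {gamma} * (b, d) checks out")
```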
 