MHB Find basis and dimension of the intersection of S and T

Logan Land
If you have a vector space S spanned by the vectors (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) and T spanned by (x1, y1, z1), (x2, y2, z2), (x3, y3, z3), how would you find a basis and the dimension of the intersection of S and T?

(x, y, z can be any values.)

Do I go about it like this?
a(x1, y1, z1) + b(x2, y2, z2) + c(x3, y3, z3) - d(x1, y1, z1) - e(x2, y2, z2) - f(x3, y3, z3) = (0, 0, 0), and solve for (a, b, c, d, e, f)?
 
LLand314 said:
If you have a vector space S spanned by the vectors (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) and T spanned by (x1, y1, z1), (x2, y2, z2), (x3, y3, z3), how would you find a basis and the dimension of the intersection of S and T?

(x, y, z can be any values.)

Do I go about it like this?
a(x1, y1, z1) + b(x2, y2, z2) + c(x3, y3, z3) - d(x1, y1, z1) - e(x2, y2, z2) - f(x3, y3, z3) = (0, 0, 0), and solve for (a, b, c, d, e, f)?

Hi LLand314!

Your problem statement suggests that the vectors in S and in T are the same.
If that is the case, then the intersection is $S \cap T= S=T$.

Can you clarify?

A general approach is to first find a basis for each space (preferably an orthonormal basis) and then intersect them.
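In symbols (writing $u_i$ for the basis vectors found for S and $w_i$ for those of T, just as generic names): the intersection consists of the vectors that can be written in both bases, so we solve
$$\lambda_1 u_1 + \lambda_2 u_2 + \cdots = \sigma_1 w_1 + \sigma_2 w_2 + \cdots$$
for the coefficients. Every solution produces a vector of $S \cap T$; this is exactly the system that is worked out further down in the thread.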
 
Sorry, I meant something like this:

S (1, 2, 1), (1, 1, −1), (1, 3, 3)
T (1, 2, 2), (2, 3, −1), (1, 1, −3)

where they are not equal.

Sorry for the confusion.
 
LLand314 said:
Sorry, I meant something like this:

S (1, 2, 1), (1, 1, −1), (1, 3, 3)
T (1, 2, 2), (2, 3, −1), (1, 1, −3)

where they are not equal.

Sorry for the confusion.

Right.

So the first step is to find a basis for each space.
One way to do it, is to use Gaussian elimination.

Then if, for instance, both spaces turn out to have a 3-dimensional basis, we can conclude that the intersection is $\mathbb R^3$.
A more interesting case is when they turn out to be planes.
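For reference, this first step can also be done with a computer algebra system. Here is a minimal sketch in sympy (my own choice of tool; nothing in the thread depends on it), with the spanning vectors placed as rows:

```python
# Minimal sketch: find a basis of each span by row-reducing the matrix
# whose ROWS are the given spanning vectors.
from sympy import Matrix

S_rows = Matrix([[1, 2, 1], [1, 1, -1], [1, 3, 3]])
T_rows = Matrix([[1, 2, 2], [2, 3, -1], [1, 1, -3]])

# The nonzero rows of the reduced row echelon form are a basis of the row space.
print(S_rows.rref()[0])  # Matrix([[1, 0, -3], [0, 1, 2], [0, 0, 0]])
print(T_rows.rref()[0])  # Matrix([[1, 0, -8], [0, 1, 5], [0, 0, 0]])
```

Note that this places the vectors as rows; the row-versus-column question comes up again further down in the thread.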
 
I like Serena said:
Right.

So the first step is to find a basis for each space.
One way to do it, is to use Gaussian elimination.

Then if, for instance, both spaces turn out to have a 3-dimensional basis, we can conclude that the intersection is $\mathbb R^3$.
A more interesting case is when they turn out to be planes.

When I use Gaussian elimination I come up with
S= 1 0 2
0 1 -1
0 0 0

T= 1 0 1
0 1 0
0 0 0

Now set that to zero?
Like
x + 2z = 0
y - z = 0
for S
and
x + z = 0
y = 0
for T?
 
LLand314 said:
When I use Gaussian elimination I come up with
S= 1 0 2
0 1 -1
0 0 0

T= 1 0 1
0 1 0
0 0 0

Good!
Except that I get for T:
$$T = \begin{bmatrix}1&0&-8\\ 0&1&5 \\ 0&0&0 \end{bmatrix}$$

That means that {(1,0,2), (0,1,-1)} is a basis for S.
And {(1,0,-8),(0,1,5)} is a basis for T.
Both have dimension 2, meaning that we are intersecting two planes.
LLand314 said:
Now set that to zero?
Like
x + 2z = 0
y - z = 0
for S
and
x + z = 0
y = 0
for T?

You could.
That way, you'll get the orthogonal complement of S and of T, respectively.
But it doesn't give you the intersection.

To get the intersection, we're looking for vectors $(x,y,z)$ that are both in S and in T.
A vector in S can be represented by $(x,y,z)=\lambda (1,0,2) + \mu(0,1,-1)$.
And a vector in T by $(x,y,z)=\sigma(1,0,-8) + \tau(0,1,5)$.

Solve for $\lambda, \mu, \sigma, \tau$...
The space spanned by the corresponding $(x,y,z)$ vectors is the intersection.
 
I like Serena said:
Good!
Except that I get for T:
$$T = \begin{bmatrix}1&0&-8\\ 0&1&5 \\ 0&0&0 \end{bmatrix}$$

That means that {(1,0,2), (0,1,-1)} is a basis for S.
And {(1,0,-8),(0,1,5)} is a basis for T.
Both have dimension 2, meaning that we are intersecting two planes.

You could.
That way, you'll get the orthogonal complement of S and of T, respectively.
But it doesn't give you the intersection.

To get the intersection, we're looking for vectors $(x,y,z)$ that are both in S and in T.
A vector in S can be represented by $(x,y,z)=\lambda (1,0,2) + \mu(0,1,-1)$.
And a vector in T by $(x,y,z)=\sigma(1,0,-8) + \tau(0,1,5)$.

Solve for $\lambda, \mu, \sigma, \tau$...
The space spanned by the corresponding $(x,y,z)$ vectors is the intersection.

Solve for $\lambda, \mu, \sigma, \tau$ individually? Could you explain more or give an example?
 
LLand314 said:
Solve for $\lambda, \mu, \sigma, \tau$ individually? Could you explain more or give an example?

One way to do it is to set it up as a regular system of linear equations and use Gaussian elimination.
$$\begin{cases}(x,y,z)=\lambda (1,0,2) + \mu(0,1,-1) \\
(x,y,z)=\sigma(1,0,-8) + \tau(0,1,5)\end{cases}\tag 1$$
$$\Rightarrow \lambda (1,0,2) + \mu(0,1,-1) - \sigma(1,0,-8) - \tau(0,1,5) = 0$$

Arrange as a matrix:
$$\begin{bmatrix}
1&0&-1&0 \\
0 & 1&0&-1\\
2&-1&8&-5
\end{bmatrix} \begin{bmatrix}
\lambda \\ \mu \\ \sigma \\ \tau
\end{bmatrix} = 0$$

Use Gaussian elimination to find:
$$\begin{bmatrix}
1&0&-1&0 \\
0 & 1&0&-1\\
0&0&10&-6
\end{bmatrix} \begin{bmatrix}
\lambda \\ \mu \\ \sigma \\ \tau
\end{bmatrix} = 0$$

Now we solve from the bottom up (back substitution).
We pick $\tau=5$, which implies from the bottom equation that $\sigma=3$.
It follows from the second equation that $\mu=5$, and from the first that $\lambda=3$.

By substituting in $(1)$, that gives us the solution:
$$(x,y,z)=3\cdot (1,0,2) + 5\cdot(0,1,-1) = (3,5,1)$$

Any other choice of $\tau$ just rescales this solution, and $\tau=0$ gives only the zero vector, so there is nothing new.

So the intersection of S and T is spanned by (3,5,1).
It's a line.
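For reference, here is a minimal sketch of the same calculation in sympy (the tool is my choice; everything above is done by hand):

```python
# A vector (x, y, z) lies in the intersection exactly when
# lambda*u1 + mu*u2 - sigma*w1 - tau*w2 = 0 has a solution.
from sympy import Matrix

u1, u2 = Matrix([1, 0, 2]), Matrix([0, 1, -1])   # basis used above for S
w1, w2 = Matrix([1, 0, -8]), Matrix([0, 1, 5])   # basis used above for T

# The columns hold the coefficients of lambda, mu, sigma, tau.
M = Matrix.hstack(u1, u2, -w1, -w2)

# Each nullspace vector (lambda, mu, sigma, tau) yields a vector in the intersection.
for sol in M.nullspace():
    lam, mu = sol[0], sol[1]
    print(lam * u1 + mu * u2)   # Matrix([[3/5], [1], [1/5]]), a multiple of (3, 5, 1)
```

The nullspace is one-dimensional, which matches the conclusion that the intersection is the line spanned by (3, 5, 1).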
 
OK, I kind of understand, but when you found a different T than I did, is it because you placed the vectors like this?

1 2 2
2 3 -1
1 1 -3

??

Because our teacher said to place them like this

1 2 1
2 3 1
2 -1 -3

that is, the vectors as columns, not rows as you did
 
So I went through and tried to solve this with T set up as
1 2 1
2 3 1
2 -1 -3

and I get

1 0 -1 0
0 1 0 -1
2 -1 1 -1
reducing I get

1 0 0 -2/3
0 1 0 -1
0 0 1 -2/3

so I get the intersection spanned by (1, 3/2, 1/2)

is that right?
 
LLand314 said:
OK, I kind of understand, but when you found a different T than I did, is it because you placed the vectors like this?

1 2 2
2 3 -1
1 1 -3

??

Because our teacher said to place them like this

1 2 1
2 3 1
2 -1 -3

that is, the vectors as columns, not rows as you did

Then I suggest you follow what your teacher said.
To be honest, I only wrote them in rows, because I thought that was what you did.
Note that for S you have the resulting independent vectors in rows.

It doesn't really matter though, because the matrix is merely a shorthand notation to write your vectors.
However, you then have to eliminate differently.

The point is that you add (or subtract) multiples of one vector to another vector.
When doing that systematically, as in Gaussian elimination, you end up with a set of independent vectors combined with null vectors.

To summarize: with the vectors in columns, you have to add/subtract multiples of one column to/from another. This should result in the last column becoming zero.
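To see the column picture in, say, sympy (my choice of tool; its columnspace() picks out pivot columns of the original matrix rather than literally sweeping columns, but it spans the same space):

```python
# With the vectors as COLUMNS, the span is the column space of the matrix.
from sympy import Matrix

T_cols = Matrix([[1, 2, 1], [2, 3, 1], [2, -1, -3]])  # T's vectors as columns

# columnspace() returns a basis consisting of columns of the original matrix:
# here the original vectors (1, 2, 2) and (2, 3, -1).
print(T_cols.columnspace())
```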
LLand314 said:
So I went through and tried to solve this with T set up as
1 2 1
2 3 1
2 -1 -3

How did you get to the basis of T?
 
I like Serena said:
Then I suggest you follow what your teacher said.
To be honest, I only wrote them in rows, because I thought that was what you did.
Note that for S you have the resulting independent vectors in rows.

It doesn't really matter though, because the matrix is merely a shorthand notation to write your vectors.
However, you then have to eliminate differently.

The point is that you add (or subtract) multiples of one vector to another vector.
When doing that systematically, as in Gaussian elimination, you end up with a set of independent vectors combined with null vectors.

To summarize: with the vectors in columns, you have to add/subtract multiples of one column to/from another. This should result in the last column becoming zero.

How did you get to the basis of T?

I used Gaussian elimination.

And for S I set them up as columns and got
1 0 2
0 1 -1
0 0 0

if I set them up as rows then I get
1 0 -3
0 1 2
0 0 0

So would it still come to the same answer at the end, or to something different since the basis is now different?
 
LLand314 said:
I used Gaussian elimination.

And for S I set them up as columns and got
1 0 2
0 1 -1
0 0 0

if I set them up as rows then I get
1 0 -3
0 1 2
0 0 0

So would it still come to the same answer at the end, or to something different since the basis is now different?

You cannot set them up as columns and then sweep with rows to find a basis.

To verify that you have a correct basis, we can and should check that each vector in the original set can be constructed from the basis vectors we found.

So for instance, if we verify the original vectors in S one by one, we can see that:
$$(1,2,1) = (1,0,-3) + 2\cdot(0,1,2)$$
$$(1,1,-1) = (1,0,-3) + (0,1,2)$$
$$(1,3,3) = (1,0,-3) + 3\cdot(0,1,2)$$
This confirms that {(1,0,-3),(0,1,2)} is the proper basis.
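The same check can be scripted; here is a small sketch with sympy (again just my choice of tool):

```python
# Verify that each original vector of S lies in span{(1,0,-3), (0,1,2)}:
# appending it as an extra column must not increase the rank.
from sympy import Matrix

basis = Matrix([[1, 0], [0, 1], [-3, 2]])   # the two basis vectors as columns
for v in [Matrix([1, 2, 1]), Matrix([1, 1, -1]), Matrix([1, 3, 3])]:
    print(Matrix.hstack(basis, v).rank() == basis.rank())   # True, True, True
```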
 
I like Serena said:
You cannot set them up as columns and then sweep with rows to find a basis.

To verify that you have a correct basis, we can and should check that each vector in the original set can be constructed from the basis vectors we found.

So for instance, if we verify the original vectors in S one by one, we can see that:
$$(1,2,1) = (1,0,-3) + 2\cdot(0,1,2)$$
$$(1,1,-1) = (1,0,-3) + (0,1,2)$$
$$(1,3,3) = (1,0,-3) + 3\cdot(0,1,2)$$
This confirms that {(1,0,-3),(0,1,2)} is the proper basis.

I just got really confused. Why can't they be set up in columns to find the basis like I did?
 
LLand314 said:
I just got really confused. Why can't they be set up in columns to find the basis like I did?

They can be set up in columns and they should.
But you have to sweep with columns instead of rows.
 
I like Serena said:
They can be set up in columns and they should.
But you have to sweep with columns instead of rows.

So then wouldn't that give me a different basis?

if I have vectors
S (1, 2, 1), (1, 1, −1), (1, 3, 3)
T (1, 2, 2), (2, 3, −1), (1, 1, −3)

so I set them up like this

S
1 1 1
2 1 3
1 -1 3

T
1 2 1
2 3 1
2 -1 -3

then used Gauss-Jordan to reduce with row operations

and came up with

S
1 0 2
0 1 -1
0 0 0

T
1 0 -1
0 1 1
0 0 0

but when you set them up in rows it's

S
1 2 1
1 1 -1
1 3 3

T
1 2 2
2 3 -1
1 1 -3

and then with reduction I get

S
1 0 -3
0 1 2
0 0 0

T
1 0 -8
0 1 5
0 0 0

See how they come out different when you set them up differently? Now I don't understand which way to set them up and proceed, if both ways come up with different answers...
 
I guess the thing to realize is that you cannot put the vectors in as columns and then use Gauss-Jordan row operations to read a basis off the rows.
Row reduction gives a basis for the row space of the matrix, so it only gives a basis of your span when the vectors are placed as rows.
With the vectors as columns, we need to do something else (sweep with columns).

Btw, whenever you get a result, it is important to verify if the result is correct.
It must be possible to construct each vector in the space as a linear combination of the basis vectors.
Can you?
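To make the row-versus-column point concrete, here is a small sympy sketch (my tool choice) using the S vectors:

```python
from sympy import Matrix

vectors = [Matrix([1, 2, 1]), Matrix([1, 1, -1]), Matrix([1, 3, 3])]

# Row placement: the nonzero rows of the rref ARE a basis of the span.
as_rows = Matrix.vstack(*[v.T for v in vectors])
print(as_rows.rref()[0])   # rows (1, 0, -3) and (0, 1, 2): a valid basis

# Column placement: the rows of the rref are NOT vectors of the span.
# Instead, the pivot columns of the ORIGINAL matrix form a basis.
as_cols = Matrix.hstack(*vectors)
rref_form, pivots = as_cols.rref()
print([as_cols.col(i) for i in pivots])   # the original vectors (1, 2, 1) and (1, 1, -1)
```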
 
I like Serena said:
I guess the thing to realize is that you cannot put the vectors in as columns and then use Gauss-Jordan row operations to read a basis off the rows.
Row reduction gives a basis for the row space of the matrix, so it only gives a basis of your span when the vectors are placed as rows.
With the vectors as columns, we need to do something else (sweep with columns).

Btw, whenever you get a result, it is important to verify if the result is correct.
It must be possible to construct each vector in the space as a linear combination of the basis vectors.
Can you?

But isn't the result of Gauss-Jordan elimination the basis?
You said that was the basis for S and T, and it was found by Gauss-Jordan elimination.

For example
S
(1, 2, 1), (1, 1, −1), (1, 3, 3)

or, set up as rows,
1 2 1
1 1 -1
1 3 3

and with Gauss-Jordan I get
1 0 -3
0 1 2
0 0 0

which gives me what you said is the basis...
 