Vector or Parametric Form of the Equation of a Plane P

Math Amateur
I am reading David Poole's book: Linear Algebra: A Modern Introduction (Third Edition) ...

I have a basic (and probably simple) question regarding Poole's introductory discussion of the vector or parametric form of the equation of a plane $$\mathscr{P}$$ (page 38, Section 1.3 Lines and Planes) ...

Poole's discussion/remarks on the vector or parametric form of the equation of a plane $$\mathscr{P}$$ reads as follows:

View attachment 5185

In the above text Poole writes:

" ... ... we observe that a plane can be determined by specifying one of its points $$P$$ (by the vector $$p$$) and two direction vectors $$u$$ and $$v$$ parallel to the plane (but not parallel to each other). ... ... "

Poole then goes on to derive the vector or parametric equation of the plane as:

$$x = p + su + tv $$

... BUT ... at first glance it seems that, because there are infinitely many different pairs of non-parallel direction vectors u and v emanating from a point P in the plane, there must be infinitely many different parametric equations of the one plane ... surely this is not right ...

Can someone please clarify my confused impression of the parametric form of the equation of a plane ...?

Peter
 
You are right. There is an infinite number of parametric representations of a plane. :)
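
For a concrete illustration (an example of my own choosing, not from Poole's text): in $\Bbb R^3$ the $xy$-plane through the origin can be written as

$$\mathbf{x} = s\begin{pmatrix} 1\\0\\0 \end{pmatrix} + t\begin{pmatrix} 0\\1\\0 \end{pmatrix} \qquad \text{or equally well as} \qquad \mathbf{x} = s'\begin{pmatrix} 1\\1\\0 \end{pmatrix} + t'\begin{pmatrix} 1\\-1\\0 \end{pmatrix},$$

and the substitution $s' = \tfrac{s+t}{2},\ t' = \tfrac{s-t}{2}$ turns the second form into the first, so both parametric equations describe exactly the same plane.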
 
Fantini said:
You are right. There is an infinite number of parametric representations of a plane. :)

Oh my God ... I never expected such an answer ...

Thanks so much Fantini ... most helpful ...

Peter
 
...and this is related to the fact that a basis is not unique: for any vector space over an infinite field, there are infinitely many bases.

The defining feature of a plane is its "two-dimensional-ness". I put this in quotes because there are two types of planes:

1. A subspace of $F^n$ of dimension 2, let's call such a subspace $U$.
2. A translate of such a plane. This is a COSET $p+U$. The vector $p$ is the translation vector, and if $u,v$ generate $U$, then $p+u,p+v$ are our direction vectors (considered as points in $F^n$).
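
To make the distinction concrete (the particular $U$ and $p$ here are just my choice of example): in $F^n = \Bbb R^3$ take

$$U = \operatorname{span}\left\{ \begin{pmatrix}1\\0\\0\end{pmatrix}, \begin{pmatrix}0\\1\\0\end{pmatrix} \right\} = \{(s,t,0) : s,t \in \Bbb R\}, \qquad p = \begin{pmatrix}0\\0\\1\end{pmatrix}.$$

Then $U$ is a plane of type 1 (a subspace, containing $0$), while the coset

$$p + U = \{(s,t,1) : s,t \in \Bbb R\}$$

is a plane of type 2: the plane $z = 1$, which does not pass through the origin and so is not a subspace, but is a translate of one.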

The distinction between these two highlights two competing definitions of "vector":

a) An (algebraic) vector is a point (element) in a vector space. To see these as "geometric" vectors, we imagine the tail of the vector at the origin, and the arrow-head at the point.

b) A (geometric) vector is an arrow in some direction, for a given length (its magnitude) starting at one point, and ending at another.

The vectors of type (b) don't live in a vector space (this is surprising, right?), they live in a related kind of space called an affine space. This is very much "like" a vector space (in fact, any affine space possesses an "underlying vector space")...but there's no "preferred point" (origin).
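
For instance (points chosen purely for illustration): if $A = (2,3,1)$ and $B = (5,1,1)$ are two points of the affine plane $z = 1$ from the example above, there is no meaningful way to "add" $A$ and $B$ and stay in that plane ($A + B = (7,4,2)$ has already left it), but their difference

$$B - A = \begin{pmatrix} 3\\ -2\\ 0 \end{pmatrix}$$

is an honest vector in the underlying vector space $U$ (the $xy$-plane): differences of points are vectors, even though the points themselves form no vector space.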

The QUOTIENT space $F^n/U$ is the vector space of all affine planes parallel to $U$. This is one-dimensional for $F^n = \Bbb R^3$, since we just have to pick a vector $v \not\in U$ and figure out which scalar multiple of $v$ lies in a given translate of $U$ (in the picture you provide, this can be the vector $p$). If you have a deck of cards, saying how far up or down you go in the deck determines which card you pick.
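
Continuing the example above (again with $U$ the $xy$-plane, an assumption of mine rather than anything in the attachment): every coset of $U$ in $\Bbb R^3$ is a horizontal plane $z = c$, and

$$\Bbb R^3/U = \{(0,0,c) + U : c \in \Bbb R\} \cong \Bbb R,$$

so the quotient really is one-dimensional: the single number $c$ (how far "up the deck" you go) tells you which parallel plane, i.e. which "card", you have picked.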

This can be somewhat confusing, because a lot of use of vectors in modeling *physical* problems, e.g. determining the forces at a point, actually uses affine vectors, not algebraic vectors (in the real world, there's no actual "origin").

In the parametric form, the important thing to remember is that we have TWO "free parameters", $s$ and $t$. Thus affine spaces are very simple examples of what are known as "manifolds": an affine plane is a 2-manifold (which is not only "locally" homeomorphic to $\Bbb R^2$ but also "globally" homeomorphic to it, via the translation homeomorphism

$v \mapsto v + p$).

Analysts like to use the notation $v_p$ for affine vectors, meaning "the (algebraic) vector $v$ AT the point $p$". So they would label the diagram vectors $s\mathbf{u},t\mathbf{v}$ as: $(s\mathbf{u})_{\mathbf{p}},(t\mathbf{v})_{\mathbf{p}}$. In other words, we are "temporarily pretending $\mathbf{p}$ is the origin".
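
Putting this together in coordinates (numbers of my own choosing, matching the example above): with $\mathbf{p} = (0,0,1)$, $\mathbf{u} = (1,0,0)$ and $\mathbf{v} = (0,1,0)$, the parametric equation gives

$$\mathbf{x} = \mathbf{p} + s\mathbf{u} + t\mathbf{v} = \begin{pmatrix} s \\ t \\ 1 \end{pmatrix},$$

which sweeps out the affine plane $z = 1$ as $s$ and $t$ range over $\Bbb R$; the arrow $(s\mathbf{u})_{\mathbf{p}}$, for instance, runs from $(0,0,1)$ to $(s,0,1)$, i.e. it is the algebraic vector $s\mathbf{u}$ drawn with its tail at $\mathbf{p}$ instead of at the origin.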
 
Thanks Deveno ... this post is extremely helpful, addressing, as it does, many points that were puzzling to me ...

Just one question ... what are "direction vectors" and how should we think of them ...? Indeed, how do they differ from "ordinary" vectors ...?

I further note that you begin to discuss a notion that continually seems to evade my full understanding ... affine space ... What is the nature of affine space, and how exactly does it differ from Euclidean space ... and how is affine space related to the notion of vector space ... is it just the point regarding the origin that you mention? What then are the implications of having no origin? ...

Peter
 