Arguments leading to the speed of light as a dimensionless constant

In summary: I suppose that the right thing to do is to opt, as unit-unificator, for either space or time, since otherwise you would create an inconsistency with the rest of physics, right? Schutz prefers space. Is there a reason why space would be better than time as unit-unificator?
  • #36
Nugatory said:
We choose whatever coordinates are convenient for the problem at hand. Everyone’s favorite example is the choice between using polar ##(r,\theta)## coordinates and Cartesian ##(x,y)## coordinates when working with the two-dimensional Euclidean surface of a sheet of paper. (It would be a good exercise to derive the components of the metric tensor in polar coordinates.)

But clearly this choice has nothing to do with the actual distances between points on the sheet of paper or how we measure them - we use a ruler. So to the extent that your first question is well-defined the answer is “No”.

You may have noticed that in polar coordinates ##\hat{r}## and ##\hat{\theta}## are still orthogonal. It is unfortunate that our two most familiar coordinate systems both have orthogonal axes, because we are tempted into the mistaken assumption that orthogonality is a natural property of all coordinate axes. For a counterexample, we need something less familiar: for example, if we’re considering the experience of an observer free-falling into a black hole, the most natural coordinate system will put the radial zero point at the infaller’s position; in these coordinates the ##r## and ##t## axes are not orthogonal.

Although you answered "no", I am interpreting your answer as "yes". When I equated choosing another system of coordinates with another method of measuring space and time, I was not referring to how we measure the distances between points on the sheet of paper, but to how we obtain (operationally) the data with which we feed the values that we later reflect on the sheet of paper, i.e. which measurement instruments we choose and how we display them.

Even if the decision to analyze a problem from the perspective of another frame is one that we make at the desk (as you say, out of convenience, because that makes the answer easier to see), what we then do is simulate what an observer would obtain after an operational change. That change may be more or less dramatic depending on whether the choice involves reorienting the sticks (but keeping them orthogonal), grabbing sticks that are not orthogonal, or grabbing a different instrument altogether, like a theodolite.

The example that you mention illustrates this: taking the infaller's position is an operational change, I would say a most dramatic one. :smile:

Now that the question is better defined, would you be able to mention an example of a shift to a non-orthogonal basis (in the context of SR) and how this relates to a change in the nature or the rules of use of the clocks and rulers?

BTW, let me remind (myself) of the reason for this excursus on orthogonality, for the purpose of retaking "the thread of the thread" in due time. The question was precisely that, in my opinion, it is our progressive understanding of how space and time are built, and how they relate to each other at the operational level, that prompts us to use the same units for both dimensions and thus make c dimensionless. In this context, assuming that our operational practice makes them actually orthogonal (even if it could be otherwise), I thought it appropriate to elaborate on the meaning of this orthogonality, because it is analogous to what happens in ordinary space, where we also normally display X and Y perpendicularly (even if we could do otherwise).

It would also help to have an example of, or some elaboration on, the phenomenon mentioned by PeterDonis: a timelike vector that is parallel to the T axis and is not orthogonal to a spacelike vector that is parallel to the X axis. This would enable me to better understand how orthogonality differs in M-spacetime from orthogonality in Euclidean space, although, in my opinion, it would not ruin the analogy for the particular purpose for which it was conceived.
 
  • #37
Saw said:
the phenomenon mentioned by PeterDonis: a timelike vector that is parallel to the T axis and is not orthogonal to a spacelike vector that is parallel to the X axis.
That's not what I said. I said you can find timelike vectors and spacelike vectors that are not orthogonal. I did not say those vectors would be parallel to the T and X axes of an inertial frame. That is impossible since those axes are everywhere orthogonal in an inertial frame.

But consider, for example, these two vectors, with ##(T, X)## components given in an inertial frame: ##V_1 = (1, 0.1)##, ##V_2 = (0.2, 1)##. ##V_1## is timelike and ##V_2## is spacelike, as you can see by computing their squared norms, but they are not orthogonal, as you can see by computing their inner product.
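Those numbers are quick to verify. A minimal numerical sketch (Python), assuming the ##(+,-)## signature and units with ##c = 1##; with the opposite signature the signs of the squared norms flip, but the orthogonality verdict is unchanged:

```python
# Minkowski inner product in 1+1 dimensions, signature (+, -), c = 1:
#   <U, V> = U_T * V_T - U_X * V_X
def minkowski_dot(u, v):
    return u[0] * v[0] - u[1] * v[1]

V1 = (1.0, 0.1)  # squared norm ~0.99 > 0: timelike in this signature
V2 = (0.2, 1.0)  # squared norm ~-0.96 < 0: spacelike

# The inner product is ~0.1, nonzero: V1 and V2 are NOT orthogonal,
# even though one is timelike and the other spacelike.
print(minkowski_dot(V1, V1))
print(minkowski_dot(V2, V2))
print(minkowski_dot(V1, V2))
```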
 
  • #38
Saw said:
how orthogonality differs in M-spacetime from orthogonality in Euclidean space
The general definition of "orthogonal" in vector spaces is that two vectors are orthogonal if their inner product is zero. Minkowski spacetime and Euclidean space are both vector spaces, so they both use that general definition of orthogonality, but their inner products are different (since, being metric spaces, their inner products are derived from their metrics, which are different).

We also sometimes talk about curves being orthogonal (such as the T and X axes of an inertial frame); what we really mean is that their tangent vectors are orthogonal at the point where they meet.
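The shared definition with different inner products can be made concrete. A sketch (Python) with the metric written out as a matrix; the ##(+,-)## Minkowski signature and the sample vectors are assumptions of the example:

```python
# One orthogonality condition, g_{mu nu} U^mu V^nu = 0, two different metrics.
def g_dot(g, u, v):
    return sum(g[i][j] * u[i] * v[j] for i in range(2) for j in range(2))

g_euclid = [[1.0, 0.0], [0.0, 1.0]]  # Euclidean plane
g_mink = [[1.0, 0.0], [0.0, -1.0]]   # 1+1 Minkowski, signature (+, -)

u = (1.0, 0.5)
v = (0.5, 1.0)
print(g_dot(g_mink, u, v))    # 0.0: orthogonal with the Minkowski metric
print(g_dot(g_euclid, u, v))  # 1.0: NOT orthogonal with the Euclidean metric
```

The same condition, applied with different metrics, gives different verdicts for the same pair of vectors.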
 
  • #39
PeterDonis said:
That's not what I said. I said you can find timelike vectors and spacelike vectors that are not orthogonal. I did not say those vectors would be parallel to the T and X axes of an inertial frame. That is impossible since those axes are everywhere orthogonal in an inertial frame.

But consider, for example, these two vectors, with ##(T, X)## components given in an inertial frame: ##V_1 = (1, 0.1)##, ##V_2 = (0.2, 1)##. ##V_1## is timelike and ##V_2## is spacelike, as you can see by computing their squared norms, but they are not orthogonal, as you can see by computing their inner product.
Sorry, I was too tired yesterday to search for your actual statement and misquoted you! No surprise that I was not finding on the internet any examples of what you had not said! :biggrin:

But then this is not so different from what happens in ordinary space, mutatis mutandis, of course, i.e. bearing in mind that the metric, and hence the inner product, of each vector space is different, as you noted in your next post and as I do take into account. (BTW, I sometimes hear the inner product of Minkowski space called a "bilinear form". Is that a generalized concept of the "inner product", just as the latter generalizes the good old "dot or scalar product"?)

At least, I don't see how this fact should ruin the analogy that I was making between ordinary vector space and Minkowski vector space, to illustrate that our evolution towards using the same units for time and space, and in particular length units for both, as proposed by Schutz and others (geometrical units), is driven by the fact that we are facing two aspects (space and time) of the same physical reality ("events") and two orientations of the same measurement instrument (perpendicular in the most convenient display, although it could be otherwise, as long as they are not colinear).

At this stage, I don't know what to do. The subject of this excursus is, in my opinion, relevant to the topic of units-unification. I have ideas about it and would need to test them, so as to either reject them or convert the intuition and apparent handwaving into mathematical/geometrical statements. But it is a complicated task in itself: should we discuss it here, or should I open another, independent thread about how the concept of orthogonality applies in both realms? I would think it makes sense to continue here, because this gives the discussion on orthogonality a practical background: supporting (or not) the claim in favor of units-unification.

PS: you may say that the matter is solved by noting that the metric in each space, and hence the form of its inner product, is different; but I think we need to go deeper into the reasons and implications of that difference, precisely because otherwise the analogy between the two spaces (and its powerful teaching value) is lost. If any difference between the two areas counts as an obstacle ruining the comparison, no analogy survives; the trick with analogies is that some differences are relevant for the purpose at hand and others are irrelevant.
 
  • #40
I am realizing that I may have left you without much to answer to, so I will be more specific.

I said long ago, speaking about the process of units-unification in SR:
Saw said:
  • This process would not take place if length and time were not measuring the same thing, even if from a different angle.
  • It would not be possible if you were not measuring with the same instrument, even if oriented in a different manner.
  • (...)
Of course, still, as noted, the difference between T and X versus Y and X is that the former combine with a negative sign.

Saw said:
just like spatial X and Y, spacetime T and X are axes looking from their respective and orthogonal axes at "events": one of them locally, the other simultaneously.
And, yes, I do think that here the meaning of orthogonality is the same as you find in ordinary vector space. In the latter, the essential requirement for a basis is that its basis vectors are "linearly independent", meaning that they are not overlapping or colinear; a convenient condition, since it simplifies calculations, is that such basis vectors are orthogonal, that is to say, not only to some extent independent (non-colinear) but "totally independent", meaning that one has 0 components in the others.

We should not abandon this generalized meaning in Minkowski space, just make the necessary adaptations.

The adjustment here is that the points are "events" and consequently both the T axis and the X are bunches of events, although each axis takes care of a specialized or independent function: the T axis is a bunch of events happening locally, at a fixed position x = 0, while the X axis is a bunch of events happening simultaneously, at t = 0; other parallel grid lines do the same thing, at the relevant fixed x or t points, respectively.

You may say then: but orthogonality is invariant, all frames agree on it, and in SR all frames would not agree on what I have just stated; instead they do agree on another thing, which is the Minkowskian concept of orthogonality as checked through the Minkowskian dot product. I see no solution to this conundrum other than admitting that there are two versions or concepts of orthogonality: each frame builds its coordinate system assuming that it has orthogonality in the first sense; others disagree with that; but all agree that they all have orthogonality in the second sense, which is fine, because the second one is, after all, what solves the practical problem under consideration, which is one about causality between events.
 
  • #41
Saw said:
You may say then: but orthogonality is invariant, all frames agree on it and in SR all frames would not agree on what you have just stated, but they do agree on another thing, which is the Minkowskian concept of orthogonality as checked through the Minkowskian dot product. I see no solution to this conundrum other than admitting that there are two versions or concepts of orthogonality
This “conundrum” will be based on some misunderstanding, but I’m genuinely not sure what your misunderstanding is. There is one concept of orthogonality, applicable everywhere to Euclidean spaces, Minkowski spaces, the more complicated spacetimes of general relativity, everywhere: ##g_{\mu\nu} U^{\mu} V^{\nu}=0##. There’s no separate “Minkowskian concept of orthogonality”.

Maybe if you could state this “Minkowski concept of orthogonality”, this “another thing” in the first quoted sentence above? Then we might better understand what you’re thinking?

I do note that you mention frames above as well. I’m not sure what you’re getting at there. Different frames naturally use different coordinates, but using different coordinates does not imply different frames. The choice of units for measurements of time and for distance (which as far as I know is still what this thread is about) is purely a coordinate issue.

Edit to add: The Minkowski spacetime and Minkowski coordinates are different things. People often just say "Minkowski" and rely on the context to clarify which was intended.
 
  • #42
Perhaps I am too late to the party here, but you do this in many branches of physics - like in quantum mechanics you set ##h=1## (or ##\hbar = 1##). In the end, you know what units you want, so you just insert back as many ##h## (or ##\hbar##) as needed to make the units work out.
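The bookkeeping of reinserting constants can be sketched as follows (Python). The numerical values are the SI/CODATA ones; the electron is just an illustrative choice:

```python
import math

h = 6.62607015e-34        # J*s, exact by SI definition
hbar = h / (2 * math.pi)  # J*s
c = 299_792_458.0         # m/s, exact by SI definition

# Geometric units (c = 1): a time interval can be quoted in meters,
# t_meters = c * t_seconds.
one_second_in_meters = c * 1.0

# Particle-physics units (hbar = c = 1): a mass is an inverse length.
# Reinserting hbar and c recovers the reduced Compton wavelength.
m_electron = 9.1093837015e-31              # kg (CODATA value)
reduced_compton = hbar / (m_electron * c)  # ~3.86e-13 m
```

The point is the one made above: once the unit system is fixed, the constants carry no new information, so they can be dropped during a calculation and reinserted at the end by dimensional analysis.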
 
  • #43
Nugatory said:
The choice of units for measurements of time and for distance (which as far as I know is still what this thread is about)

Well, here there is confusion because we are mixing the main issue of the thread (units) with the collateral issue of orthogonality. That is why I said in post 39 that maybe we should stop talking about orthogonality here and open a new thread to discuss it. The only link between the two subjects is that I said that you will be more ready to accept the same units for time and space if you admit that they are two (orthogonal) aspects of the same thing, in the first sense, which I will later refer to as the "intuitive" one.

Nugatory said:
There is one concept of orthogonality, applicable everywhere to Euclidean spaces, Minkowski spaces, the more complicated spacetimes of general relativity, everywhere: ##g_{\mu\nu} U^{\mu} V^{\nu}=0##. There’s no separate “Minkowskian concept of orthogonality”.
Here, in order to build an abstract generalized concept of orthogonality, you are doing it like this: if the inner product of two vectors is 0, then they are orthogonal; a different thing is that, depending on the metric of each space, such dot product may take a different form or (technically) signature. Is this right?

We can call this (conventionally, for lack of a better name) the "Dot Product" concept of orthogonality. This can take a Euclidean or a Minkowskian form, but these (among others) are variants of the same thing.

Then there is another concept, which (again for lack of a better name) I will call the "Independence" sense, which is the one that I described here:

Saw said:
the same as you find in ordinary vector space. In the latter, the essential requirement for a basis is that its basis vectors are "linearly independent", meaning that they are not overlapping or colinear; a convenient condition, since it simplifies calculations, is that such basis vectors are orthogonal, that is to say, not only to some extent independent (non-colinear) but "totally independent", meaning that one has 0 components in the others.

What do you make of this meaning?

On the one hand, this meaning is present in SR, since it is clear to me what I stated here:

Saw said:
points are "events" and consequently both the T axis and the X are bunches of events, although each axis takes care of a specialized or independent function: the T axis is a bunch of events happening locally, at a fixed position x = 0, while the X axis is a bunch of events happening simultaneously, at t = 0; other parallel grid lines do the same thing, at the relevant fixed x or t points, respectively.

On the other hand, I find it hard to accommodate it within the "DP" meaning. It seems to me that it fits with the Euclidean signature but not with the Minkowskian one.

This is the "conundrum": has the "Independence" meaning been dropped behind as a Euclidean thing? If you tell me so, I could accept that this is the case, but then I do believe that each ST reference frame considers itself orthogonal in this "Independence" or "Euclidean" sense, while the others disagree and in turn attribute this feature to themselves. This is simply like when I say that I measure from x = 0, while your origin is located at x = d (in a translation) or when I say that my axes are not rotated, while yours are rotated by angle theta (in a 2D rotation) and vice versa.
 
  • #44
Saw said:
I sometimes hear that the inner product of Minkowski space is called "bilinear form".
The term "bilinear form" just means any thingie that takes two vectors and spits out a number, and is linear in both of its arguments. The inner product is an example of a bilinear form, but not the only possible one.

Saw said:
this is not so different from what happens in ordinary space
In terms of some pairs of vectors being orthogonal and others not, no, that's a general thing that will happen in any vector space that has an inner product defined on it.

As far as orthogonality goes, the key thing that distinguishes Minkowski spacetime from Euclidean space is that there are vectors--the null vectors--that are orthogonal to themselves.
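This feature is easy to exhibit concretely (a sketch, again assuming the ##(+,-)## signature with ##c = 1##):

```python
# A null vector in 1+1 Minkowski spacetime is orthogonal to itself,
# which cannot happen for a nonzero vector in Euclidean space.
def mink_dot(u, v):
    return u[0] * v[0] - u[1] * v[1]   # signature (+, -)

def euclid_dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

w = (1.0, 1.0)            # lies on the light cone
print(mink_dot(w, w))     # 0.0: null, hence self-orthogonal
print(euclid_dot(w, w))   # 2.0: nonzero, as for every nonzero vector
```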

Saw said:
two orientations of the same measurement instrument
But this does not apply to timelike and spacelike vectors in Minkowski spacetime; they are not just two different orientations of the same measuring instrument. You can't take a ruler and "point it in a timelike direction" to measure time; and you can't take a clock and "point it in a spacelike direction" to measure distance. Because of the way SI units are defined, you can use light in both your clock and your ruler (or more precisely to calibrate both your clock and your ruler) if you use those units, but that still doesn't make them the same measuring instrument.
 
  • #45
Saw said:
Here, in order to build an abstract generalized concept of orthogonality, you are doing it like this: if the inner product of two vectors is 0, then they are orthogonal; a different thing is that, depending on the metric of each space, such dot product may take a different form or (technically) signature. Is this right?

We can call this (conventionally, for lack of a better name) the "Dot Product" concept of orthogonality. This can take a Euclidean or a Minkowskian form, but these (among others) are variants of the same thing.

Then there is another concept, which (again for lack of a better name) I will call the "Independence" sense
No. These are not two different concepts of orthogonality. There is just one concept of orthogonality, the "Dot Product" concept, which is perfectly well defined: two vectors are orthogonal if their dot product is zero. This is well-defined regardless of whether the dot product can have negative values or not (in Minkowski spacetime it can, in Euclidean space it can't).

The other concept that you are calling "independence" is the standard concept of linear independence, not orthogonality: two vectors are linearly independent if neither one is a scalar multiple of the other. More generally, a set of vectors is linearly independent if none of them can be expressed as a linear combination of the others. This is a necessary property for a set of vectors to be a basis of the vector space (the other necessary property is that the set must span the space, i.e., there cannot be any other vector in the space that is linearly independent of the set).

These are two different concepts and there is no confusion between them. If you are not familiar with them, I would suggest taking some time to study vector spaces and their properties.

It is often convenient to choose a basis for a vector space in which all of the vectors are orthogonal to each other (and it is also often convenient to have all of them be unit vectors, which is what the term "orthonormal basis" refers to--the vectors are all orthogonal to each other and all normalized to be unit vectors). But this is not required for a basis; it's just often convenient. The only required properties for a basis are the ones I gave above: that the basis vectors are all linearly independent and that they span the space.

Saw said:
At this stage, I don't know what to do.
I would suggest making sure you are thoroughly familiar with vector spaces and their properties, especially properties like orthogonality and linear independence, before you try to apply these concepts to the subject you seem to be interested in, which is systems of units.

I would not suggest trying to handwave definitions or applications for these vector space concepts on your own. This is a thoroughly studied subject and you should know the standard concepts in it and their standard definitions and applications.
 
  • #46
Nugatory said:
Different frames naturally use different coordinates, but using different coordinates does not imply different frames.
This depends on what definition you are using for "frame"--one possible definition is "coordinate chart", in which case different coordinates would imply different frames.

The definition you are implicitly using here for "frame" is "frame field", i.e., a continuous mapping of orthonormal tetrads to events. Then you could change coordinates without changing frames--you could match up a different coordinate chart to the same set of tetrads, by changing the units of at least one of the coordinates. In Minkowski spacetime there isn't much reason to do this except for pedagogy, but things get more complicated in curved spacetimes.
 
  • #47
PeterDonis said:
I would suggest making sure you are thoroughly familiar with vector spaces and their properties, especially properties like orthogonality and linear independence, before you try to apply these concepts to the subject you seem to be interested in, which is systems of units.

I would not suggest trying to handwave definitions or applications for these vector space concepts on your own. This is a thoroughly studied subject and you should know the standard concepts in it and their standard definitions and applications.

Well, nothing of what you have mentioned about vector spaces is something that I have not studied and indeed deeply studied. To convince me that I should drop my point and retire to study vector spaces, you should point out a specific point in which I am mistaken. But, without pretending that I know all about the subject, I am walking on quite firm ground in the area to which the discussion is restricted.

For example, these things that you are explaining to me, I have already brought them up myself, with other words:

PeterDonis said:
two vectors are linearly independent if neither one is a scalar multiple of the other. More generally, a set of vectors is linearly independent if none of them can be expressed as a linear combination of the others. This is a necessary property for a set of vectors to be a basis of the vector space (the other necessary property is that the set must span the space, i.e., there cannot be any other vector in the space that is linearly independent of the set).

PeterDonis said:
It is often convenient to choose a basis for a vector space in which all of the vectors are orthogonal to each other (and it is also often convenient to have all of them be unit vectors, which is what the term "orthonormal basis" refers to--the vectors are all orthogonal to each other and all normalized to be unit vectors). But this is not required for a basis; it's just often convenient. The only required properties for a basis are the ones I gave above: that the basis vectors are all linearly independent and that they span the space.

What I find difficult to understand is why you don't recognize that there is a link between so-called linear independence (the essential thing, as I said) and orthogonality (the convenient thing, as I also said) and this link is one of degree:

- if two vectors are only linearly independent, but not orthogonal, it means that they are not colinear or occupying the same line, but if you project one over the other, you still find that one projects a shadow over the other, i.e. it has some component of the other (that is why their dot product is not zero);
- instead, if two vectors are orthogonal, it means the same thing to a higher extent: if you project one over the other, you find that one does not project any shadow over the other, i.e. it has no component of the other (that is why their dot product is zero).

Do you see why I say that it is a question of degree: some component vs. no component, some shadow vs. no shadow?

If we cannot agree on this elementary thing, then I will concur (constructively quoting your own words) that one of us must go and make sure that he/she is thoroughly familiar with vector spaces and their properties, especially properties like orthogonality and linear independence, before continuing with the discussion. :wink:
 
  • #48
Saw said:
- if two vectors are only linearly independent, but not orthogonal, it means that they are not colinear or occupying the same line, but if you project one over the other, you still find that one projects a shadow over the other, i.e. it has some component of the other (that is why their dot product is not zero);
- instead, if two vectors are orthogonal, it means the same thing to a higher extent: if you project one over the other, you find that one does not project any shadow over the other, i.e. it has no component of the other (that is why their dot product is zero).
:wink:
From a purely mathematical perspective, linear independence is an algebraic property: it depends only on the addition of vectors and multiplication by scalars. Orthogonality, by contrast, is an analytic property, as it depends on the inner product.

Moreover, linear independence of a set of (more than two) vectors is not a case of mutual linear independence only. E.g. the vectors ##(0,1), (1,1), (1,0)## in ##\mathbb R^2## are all pairwise linearly independent, but form a linearly dependent set.

Orthogonality, on the other hand, is only a pairwise concept. A set of vectors is orthogonal if and only if every pair of vectors in the set is orthogonal.

The two concepts are, therefore, more subtly different than your naive analysis involving "shadows" would suggest.
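The three-vector example can be checked mechanically (a sketch using numpy; the matrix rank is one standard way to detect a dependent set):

```python
import numpy as np

# Each PAIR below is linearly independent (neither vector is a scalar
# multiple of the other), yet the three together form a dependent set.
vecs = np.array([[0.0, 1.0],
                 [1.0, 1.0],
                 [1.0, 0.0]])

print(np.linalg.matrix_rank(vecs))  # 2, although there are 3 vectors

# The explicit dependence: (1, 1) = (0, 1) + (1, 0)
assert (vecs[1] == vecs[0] + vecs[2]).all()
```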
 
  • #49
Saw said:
To convince me that I should drop my point and retire to study vector spaces, you should point out a specific point in which I am mistaken.
Sure, here's an example:

Saw said:
What I find difficult to understand is why you don't recognize that there is a link between so-called linear independence (the essential thing, as I said) and orthogonality (the convenient thing, as I also said) and this link is one of degree:
Because there is no such "link". The two concepts are different concepts. @PeroK's comments in this regard are good ones.

In Euclidean space, one can at least say that vectors that are orthogonal are also linearly independent (although the converse is of course not true). But in Minkowski spacetime, even that is not the case: null vectors, as I have already pointed out, are orthogonal to themselves, and they are certainly not linearly independent of themselves.

Saw said:
If we cannot agree on this elementary thing, then I will concur (constructively quoting your own words) that one of us must go and make sure that he/she is thoroughly familiar with vector spaces and their properties, especially properties like orthogonality and linear independence, before continuing with the discussion. :wink:
Yes, and that person would be you. See above.
 
  • #50
Dears, I have to leave for a while now, not to study vector spaces but to go to the gym and later have dinner. I will reply on my return. Hope the thread is not closed by then! :smile:
 
  • #51
@PeroK, you misquoted me; please edit your quote in post 48. My wink accompanied the joke to PeterDonis where I gave him back the recommendation to go and study vector spaces. I included it to show that I was saying that in a playful way, with constructive intention, following his suggestion. But where you placed it, you make me sound patronizing, which is far from my intention.
PeroK said:
The two concepts are, therefore, more subtly different that your naive analysis involving "shadows" would suggest.

"Naive" is an implicit assumption that the analysis is basically correct. So the burden of disproving it is displaced to you.

PeroK said:
From a pure mathematical perspective, linear independence is an algebraic property. It depends only on the addition of vectors and multiplication by scalars. Whereas, orthogonality is an analytic property, as it depends on the inner product.
In what sense is inner product analytic? Is "analytic" here referring to calculus? If so, that will be the case when the dot product is an integral, like in Hilbert space, but not here.

PeroK said:
Moreover, linear independence of a set of (more than two) vectors is not a case of mutual linear independence only. E.g. the vectors ##(0,1), (1,1), (1,0)## in ##\mathbb R^2## are all pairwise linearly independent, but form a linearly dependent set.

Orthogonality, on the other hand, is only a pairwise concept. A set of vectors is orthogonal if and only if every pair of vectors in the set is orthogonal.

The two concepts are, therefore, more subtly different than your naive analysis involving "shadows" would suggest.

The two vectors at the extremes are orthogonal, i.e. totally linearly independent, and this is a 2D space, so necessarily they span the whole space, including the vector in the middle, which is therefore fully redundant... I don't see how this contradicts what I am saying; rather, it looks like a confirmation thereof.

Furthermore, it is not enough to list differences between the two concepts; you should mention why such differences are relevant to the discussion. I brought up the idea of independence between basis vectors just to show that it makes sense to assign the same units to T and X because, even if they seem independent from each other, that does not prevent one from seeing them as two ways of looking at the same thing. However, if, as I already mentioned, we adopted an operational method for measuring T and X where these axes were only linearly independent but not orthogonal, that would not undermine the argument...
 
  • #52
PeterDonis said:
In Euclidean space, one can at least say that vectors that are orthogonal are also linearly independent (although the converse is of course not true). But in Minkowski spacetime, even that is not the case: null vectors, as I have already pointed out, are orthogonal to themselves, and they are certainly not linearly independent of themselves.
If you read my posts, you would realize that I am not saying that the vision of orthogonality as "total independence" flows into Minkowskian dot product, which is the sense in which null vectors are orthogonal to themselves (dot product with negative sign is 0). The point is that here the orthogonality concept has been generalized in a way that picks up the dot product leg, but drops the full independence leg. So it is no surprise that the Minkowskian dot product does not conform to the total independence requirement, which it does not have. I am not saying that such way of generalizing orthogonality is wrong, no doubt it is the right thing to do, I am just holding that it is a conceptual swerve.

However, what is still puzzling is that the total-independence idea pops up unexpectedly in spacetime. I think this is hard to deny if you look at a Minkowski diagram and the frame that has been drawn as the observer, the one usually marked as the unprimed frame. If I see correctly, the T axis projects no shadow over the X axis, and the deltas t and x of a timelike interval look like the height and base of a right triangle, embracing the hypotenuse of the proper time as measured by the primed frame. Of course, this happens because the latter is drawn at another scale. But that is why I hypothesized that each frame deems itself to have perpendicular axes in the "old" sense, denies that the others have this privilege, and sees their lengths as distorted.

I can understand that you are suspicious of this speculation, which is what it is; I can perfectly well concede that. But if you also deny that the "no shadow" or "total independence" concept is what has always denoted orthogonality, until we arrive at spaces with another metric, then I am very surprised. Look at the Fourier transform! How can you explain that the basis functions of a Hilbert space are orthogonal, other than by reference to the concept of total independence (the magnitude of the signal at one time point or frequency is independent of the others)?
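For what it's worth, the orthogonality of Fourier basis functions mentioned above can be checked numerically. This is a small sketch of my own (the grid resolution is an arbitrary choice) of the L2 inner product on one period:

```python
import numpy as np

# Uniform grid over one period [0, 2*pi]
x = np.linspace(0.0, 2.0 * np.pi, 200001)
dx = x[1] - x[0]

def l2_inner(f, g):
    """Approximate the L2 inner product: integral of f(x)*g(x) over [0, 2*pi]."""
    return np.sum(f(x) * g(x)) * dx

print(l2_inner(np.sin, lambda t: np.sin(2 * t)))  # ~ 0: sin(x) and sin(2x) are orthogonal
print(l2_inner(np.sin, np.sin))                   # ~ pi: sin(x) is not orthogonal to itself
```

The zero inner product is what licenses treating each Fourier coefficient independently of the others.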

Anyhow, I have convinced myself that I will not move you an inch from your positions. Please have the last word, of course, if you wish, but I will not reply. The units issue has been discussed to my satisfaction and I have learnt quite a few things, so I thank you all for your time and interest and will say no more, unless I find some interesting material to share!
 
  • #53
Saw said:
But if you also deny that the "no shadow" or "total independence" concept is what has always denoted orthogonality
What light source is casting the shadow? Your whole "no shadow" concept hinges on the light travelling parallel to the vector, which may be arranged for any vector.
 
  • #54
Saw said:
Ibix, I said that I would not reply any more, but I think I should out of courtesy: I genuinely don't know what you mean. My "no shadow" concept has nothing to do with light; it was only this:
I was taking your "shadow" literally. If you want to be less literal: how are you doing the projection? You are doing it by some process like dropping a perpendicular from the tip of one vector to the other - but here you presuppose a notion of perpendicularity. Thus your argument is circular: your definition of two orthogonal lines relies on the existence of two orthogonal lines.
 
  • #55
Saw said:
The two vectors at the extremes are orthogonal, i.e. totally linearly independent, and this is a 2D space, so necessarily they span the whole space, including the vector in the middle, which is therefore fully redundant... I don't see how this contradicts what I am saying; rather, it looks like a confirmation of it.
There are many misconceptions in this paragraph.

First of all, there is no such thing as "fully independent". A set of vectors is either linearly independent or not. Any two of the quoted vectors are linearly independent, but the set of three is not.

Second, linear independence is not directly related to orthogonality. In fact, you do not even need an inner product to discuss linear independence. Furthermore, in Minkowski space you can have two vectors that are orthogonal yet linearly dependent.

You are also missing PeroK’s point of linear independence being a property of a set of vectors rather than a property of pairs of vectors.
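The point that independence is a property of a set, not of pairs, can be illustrated with a small sketch (mine, not from the thread): three vectors in the plane that are pairwise linearly independent, yet jointly dependent. Note that the test uses matrix rank alone; no inner product is involved.

```python
import numpy as np

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
w = np.array([1.0, 1.0])  # w = u + v, so the set {u, v, w} is dependent

def independent(*vectors):
    """A set of vectors is linearly independent iff stacking them gives a full-rank matrix."""
    m = np.column_stack(vectors)
    return np.linalg.matrix_rank(m) == len(vectors)

print(independent(u, v))     # True
print(independent(u, w))     # True
print(independent(v, w))     # True
print(independent(u, v, w))  # False: independence is a property of the whole set
```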
 
  • #56
Ibix said:
What light source is casting the shadow? Your whole "no shadow" concept hinges on the light travelling parallel to the vector, which may be arranged for any vector.
Well, I said that I would not reply anymore, but I think I should out of courtesy, since you are asking me a question. I don't know what you are referring to. What I am saying is not a weird invention; I suppose that if I looked, I would find many texts that use this image to explain perpendicularity. If you shine a light (an imaginary light) onto the analyzed vector, perpendicular to the analyzing vector, you get a shadow, which is the degree to which the analyzed vector shares the direction of the analyzing vector. If the shadow is zero, the analyzed and analyzing vectors are orthogonal (they do not share any component).
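The projection being described can be written down directly. This is a minimal Euclidean sketch (mine, not from the thread) of the "shadow" of one vector on another: the shadow vanishes exactly when the Euclidean dot product is zero.

```python
import numpy as np

def shadow(analyzed, analyzing):
    """Euclidean projection of `analyzed` onto `analyzing` (the 'shadow')."""
    unit = analyzing / np.linalg.norm(analyzing)
    return (analyzed @ unit) * unit

a = np.array([3.0, 4.0])
x_axis = np.array([1.0, 0.0])
y_axis = np.array([0.0, 1.0])

print(shadow(a, x_axis))       # [3. 0.]: a shares a component with the x axis
print(shadow(y_axis, x_axis))  # [0. 0.]: zero shadow, i.e. orthogonal
```

Note that this construction presupposes the Euclidean inner product, which is the circularity objection being raised in the thread.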
 
  • #57
Saw said:
Well, I said that I would not reply anymore, but I think I should out of courtesy, since you are asking me a question. I don't know what you are referring to. What I am saying is not a weird invention; I suppose that if I looked, I would find many texts that use this image to explain perpendicularity. If you shine a light (an imaginary light) onto the analyzed vector, perpendicular to the analyzing vector, you get a shadow, which is the degree to which the analyzed vector shares the direction of the analyzing vector. If the shadow is zero, the analyzed and analyzing vectors are orthogonal (they do not share any component).
See this post:
Ibix said:
I was taking your "shadow" literally. If you want to be less literal: how are you doing the projection? You are doing it by some process like dropping a perpendicular from the tip of one vector to the other - but here you presuppose a notion of perpendicularity. Thus your argument is circular: your definition of two orthogonal lines relies on the existence of two orthogonal lines.
 
  • #58
Ibix, really, haven't you read this in hundreds of texts? The light source will be parallel to the analyzing vector and the light will come perpendicularly to the analyzed vector. This procedure shows the degree of directional coincidence of both vectors. I don't think there is circularity in this, but never mind!
 
  • #59
Saw said:
Ibix, really, haven't you read this in hundreds of texts? The light source will be parallel to the analyzing vector and the light will come perpendicularly to the analyzed vector. This procedure shows the degree of directional coincidence of both vectors. I don't think there is circularity in this, but never mind!
You (or that simile) are presupposing a Euclidean space. Spacetime is Minkowskian, not Euclidean, so the simile does not work as a mental guide in spacetime.

And no, it does not sound like it would be a good textbook simile even in Euclidean space, at least not at a higher level. It may have some intuitive value in lower-level maths, but even then it is flawed as presented in this thread.
 
  • #60
Saw said:
I don't think there is circularity in this
Then how do you define the word "perpendicularly" when you write "the light will come perpendicularly to the analyzed vector"? You rely on it in your definition of orthogonality.
 
  • #61
Ibix said:
Then how do you define the word "perpendicularly" when you write "the light will come perpendicularly to the analyzed vector"? You rely on it in your definition of orthogonality.
The shadow idea is not meant to define perpendicularity; it only illustrates that perpendicularity is independence to a higher degree: instead of some shadow, no shadow at all.
 
  • #62
Orodruin said:
You (or that simile) are presupposing a Euclidean space. Spacetime is Minkowski, not Euclidean, so it does not work as a mental guide in spacetime.

And no, it does not sound like it would be a good textbook simile even in Euclidean space. At least not at higher level. It may have more of an intuition point in lower level maths but even then it is flawed as presented in this thread.
If you read my posts, you will find that I am not proposing the simile for Minkowski space, at least not for the orthogonality based on the dot product with a negative sign, which is not the same thing, sure. I just said that the shadow simile nevertheless pops up in Minkowski space in forms that would deserve discussion.
As to the merit of the shadow simile in Euclidean space, I find it perfect for that domain. Does it work or not? Does it distinguish a perpendicular vector from one that is not? That is all that matters. The rest is subjective: I think, I don't think, I like, I don't like...
But thanks indeed for the comments, and bye! I said that I would not insist on these concepts, which the mentors are clearly rejecting, so this is over!
 
  • #63
Saw said:
The two vectors at the extremes are orthogonal, i.e. totally linearly independent
Not if they are null vectors.
 
  • #64
Saw said:
here the orthogonality concept has been generalized in a way that picks up the dot product leg, but drops the full independence leg.
I don't understand what you are talking about. This looks like a personal theory of yours. Personal theories are off limits here.

Saw said:
I can understand that you are suspicious about this speculation, which is what it is, I can perfectly concede that.
So it is a personal theory of yours. See above.

Saw said:
I have convinced myself that I will not move you an inch from your positions.
Since the "positions" that everyone but you in this thread are taking are the standard "positions" about vector spaces in both math and physics, your apparent expectation that we should "move" from them is mistaken. Particularly when, as you admit, you are expounding your own personal theory. Get your speculations published in a peer reviewed journal and then you can discuss them here.
 
  • #65
Saw said:
I said that I would not insist on these concepts that mentors are clearly rejecting and so this is over!
Since the "concepts" you refer to are your personal theory, yes, it is indeed over. Thread closed.
 
