# In dimensional analysis, why is Length/Length = 1 (a dimensionless number)?

$$\sin^{\frac{1}{\sqrt{2}}}{(\alpha)}$$

Just as in the Newton's law example, if we had $$F^{\sqrt{1/2}}=m^{\sqrt{1/2}}a^{\sqrt{1/2}}$$, it would be something that could be "fixed". If not, then it's not a good physics equation.

Me not finding an example does not constitute a proof of your claims. It is up to you to show that there is no such law. Also, you still have not addressed the question of:

$$B(p, q) = B(q, p)$$

True, but if you do find an example, let me know. About B(p,q)=B(q,p): the p and q are exponents of the trig functions in the integral, so it's very similar to the $$\sin^x(\theta)$$ problem. Just because the exponents are "bad" doesn't mean they cannot be "fixed". Take a definition of the Beta function:

$$B(p,q)=2\int_0^{\pi/2} \sin^{2p-1}(\theta)\cos^{2q-1}(\theta)\,d\theta$$

If we treat theta as a directed quantity, we write it as $$\theta\, 1_x$$; the sine is directed the same way, the cosine is dimensionless, and the $$d\theta$$ is directed the same way, so the direction of the Beta function is $$1_x^{2p}=(1_x^2)^{p}$$, which is dimensionless. Defining $$\theta =\phi +\pi/2$$ we get $$\sin(\theta\, 1_x)=1_x \cos(\phi\, 1_x)$$ and $$\cos(\theta\, 1_x)=-1_x \sin(\phi\, 1_x)$$ (i.e. still dimensionless), so that B(p,q) is equal in value to B(q,p) and is again directed as $$1_x^{2p}$$ (i.e. dimensionless). But if we take your original definition (replacing p with (p+1)/2 etc.)

$$B\left(\frac{p+1}{2},\frac{q+1}{2}\right)=2\int_0^{\pi/2} \sin^p(\theta)\cos^q(\theta)\,d\theta$$

And now it is directed as $$1_x^{p+1}$$, which is not well defined unless p+1 takes on certain values (like 1/3, but not 1/2, as in the sin(alpha) example). I have not checked, but I suspect the directions are not consistent for the second case either. But the point is, a "bad" statement like the second case can be "fixed" to be good, like the first case. If Siano's extension is valid, then the Beta function will not occur in any physically meaningful equation in such a way that it cannot be "fixed" somehow, just like the Newton's law case.
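
As a quick sanity check of the relation between the trig integral and the Beta function, here is a short stdlib-Python sketch (the helper names `beta` and `trig_integral` are mine, and the midpoint rule is just one simple quadrature choice): it compares $B((p+1)/2,(q+1)/2)$, computed from the Gamma-function identity, against $2\int_0^{\pi/2}\sin^p\theta\cos^q\theta\,d\theta$.

```python
import math

def beta(a, b):
    # Beta function via the Gamma identity B(a, b) = G(a) G(b) / G(a + b)
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def trig_integral(p, q, n=20000):
    # Midpoint-rule approximation of 2 * integral_0^{pi/2} sin^p(t) cos^q(t) dt
    h = (math.pi / 2) / n
    return 2 * h * sum(math.sin((i + 0.5) * h) ** p * math.cos((i + 0.5) * h) ** q
                       for i in range(n))

p, q = 3.0, 2.0
lhs = beta((p + 1) / 2, (q + 1) / 2)   # B(2, 1.5) = 4/15
rhs = trig_integral(p, q)
assert abs(lhs - rhs) < 1e-6
assert abs(beta(2.0, 1.5) - beta(1.5, 2.0)) < 1e-12   # symmetry B(p,q) = B(q,p)
```

The same check passes for any p, q > -1 you substitute, which is exactly the "equal in value" claim above.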

I just noticed you 'invented' a new dimension: the cycle.

A cycle is $2\pi$ radians, so if radians are not dimensionless, then neither are cycles. Not an invention, just a consequence of saying radians are not dimensionless.

Could I repeat my question from before: why are you so sure that Siano's extension is valid? Does it follow from rotational invariance somehow? I would certainly not be comfortable making use of some ad-hoc set of rules unless I understand how they come about.

I am not sure that Siano's extension is valid, but try as I might, I cannot break it. Every time I think I have, further study convinces me that I have not, and that further study gives me insights into dimensional analysis in general. I have not investigated it on any deeper level than that. Trying to show that all the challenges to it, especially those given by Dickfore, can be dealt with is very interesting to me, but what would be excellent is to find one that really did not work. That would give some real insight. That's why I am happy to try to answer all challenges to it, but I will not defend it to the death. That's why I challenge anyone to break it. If it cannot be broken, then maybe it's time to go deeper into the theory and ask why it works. That means going deeper into dimensional analysis in general. The Buckingham Pi theorem is about as far as I have gone in this direction. It's evidently not enough. Any guidance would be welcome.

I think I understand why Siano's extension to dimensional analysis works! In fact, if my logic is correct, I would propose an extension to Siano's approach to space-time that might be useful in relativistic physics.

Could you provide details? I've been looking at Section IX of Siano's paper (http://dx.doi.org/10.1016/0016-0032(85)90032-8), and it turns out he does give a partial justification of why his method works, by assuming that physical laws are tensor equations (as they must be, from rotational invariance). This helps me understand what was going on in the example given in #31 (namely, if you're going to treat the forces as vectors, then there will be a rotation matrix involved to relate them, whose entries have the right dimensions to ensure everything works out right), but I still can't see how assigning dimensions to angles can be justified.

EDIT: He also admits that his method doesn't work for equations with fractional exponents.

Yes, my ideas also started similarly, although I was looking at the rotation matrices in Cartesian coordinates (no curvilinear coordinates for now). I think his method works well because of the "well known" fact that in 3d the dual of an antisymmetric tensor of rank 2 is a vector (albeit an axial one). I don't want to say too much. Let me just say that the Pauli matrices and the unit matrix:

$$\hat{\sigma}_{i} \, \hat{\sigma}_{k} = \delta_{i k} \, \hat{1} + i \, \epsilon_{i k l} \, \hat{\sigma}_{l}$$

obey an algebra similar to that of $V$, except that it is anti-Abelian for Pauli matrices that are not equal. Perhaps I need more group-theoretical knowledge to refine this point. One might ask what Pauli matrices have to do with rotations. Again, there is a very "convenient" coincidence in 3d.
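
For what it's worth, the Pauli identity quoted above is easy to verify by direct computation; here is a minimal pure-Python sketch (the helper names `mul` and `eps` are mine):

```python
# Verify sigma_i sigma_k = delta_ik I + i eps_ikl sigma_l for the 2x2 Pauli matrices.
I2 = [[1, 0], [0, 1]]
sigma = [
    [[0, 1], [1, 0]],      # sigma_x
    [[0, -1j], [1j, 0]],   # sigma_y
    [[1, 0], [0, -1]],     # sigma_z
]

def mul(a, b):
    # 2x2 matrix product
    return [[sum(a[r][m] * b[m][c] for m in range(2)) for c in range(2)] for r in range(2)]

def eps(i, k, l):
    # Levi-Civita symbol for indices 0, 1, 2
    return (i - k) * (k - l) * (l - i) // 2

for i in range(3):
    for k in range(3):
        lhs = mul(sigma[i], sigma[k])
        # right-hand side: delta_ik * I + i * sum_l eps_ikl sigma_l
        rhs = [[(1 if i == k else 0) * I2[r][c]
                + sum(1j * eps(i, k, l) * sigma[l][r][c] for l in range(3))
                for c in range(2)] for r in range(2)]
        assert all(lhs[r][c] == rhs[r][c] for r in range(2) for c in range(2))
```

The anti-Abelian part shows up directly: `mul(sigma[0], sigma[1])` is the negative of `mul(sigma[1], sigma[0])`.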

BTW, at one point in his first paper (third paragraph on the sixth page), he says that $\sin{(\theta)}$ is orientationally quite different from $\sin{(\phi)}$, where $\theta$ is an angle (with orientational symbol $1_z$) and $\phi$ is a phase angle.

I think this is the same as the $$\sin(\theta+\pi/2)=\cos(\theta)$$ example. With explicit orientational symbols:

$$\sin(\theta\,1_x+\phi\,1_x)=\sin(\theta\,1_x)\cos(\phi\,1_x)+\sin(\phi\,1_x)\cos(\theta\,1_x) = 1_x\sin(\theta)\cos(\phi)+1_x\sin(\phi)\cos(\theta)$$

so that $$\sin(\theta\,1_x+\pi/2\,\,1_x)=1_x \cos(\theta)$$ and the discrepancy disappears.

> I think I understand why Siano's extension to dimensional analysis works! In fact, if my logic is correct, I would propose an extension to Siano's approach to space-time that might be useful in relativistic physics.

Yes! You can derive the algebra of the directional symbols from the dot product, which is dimensionless ($\mathbf{e}_i$ are unit vectors):

$$(A_j\mathbf{e}_j 1_j)\cdot(B_k \mathbf{e}_k 1_k)=A_j B_j 1_j^2 = A_j B_j 1_0$$

which proves $$1_j^2=1_0$$ (dimensionless). Then do the cross product:

$$(A_j\mathbf{e}_j 1_j) \times (B_k \mathbf{e}_k 1_k)=\varepsilon_{ijk}A_j B_k \mathbf{e}_i (1_j 1_k) = \varepsilon_{ijk}A_j B_k \mathbf{e}_i 1_i$$

which proves that $$1_j 1_k= 1_i$$ where i, j, and k are all different. The same procedure could be carried out for the invariant analogs in relativity for the direction symbols $1_x, 1_y, 1_z, 1_t, 1_0$.
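
With those two rules, the symbols $1_0, 1_x, 1_y, 1_z$ form the Klein four-group. A tiny sketch of that multiplication table (a toy model; the string encoding of the symbols is mine):

```python
def omul(a, b):
    # Product of orientational symbols, encoded as "0", "x", "y", "z" ("0" = 1_0).
    if a == "0":
        return b
    if b == "0":
        return a
    if a == b:
        return "0"                            # 1_j^2 = 1_0  (from the dot product)
    return ({"x", "y", "z"} - {a, b}).pop()   # 1_j 1_k = 1_i (from the cross product)

# Spot-check the group laws: every element is its own inverse,
# and the product of two distinct axis symbols is the third axis.
assert omul("x", "x") == "0"
assert omul("x", "y") == "z" and omul("y", "z") == "x" and omul("z", "x") == "y"
assert all(omul(a, b) == omul(b, a) for a in "0xyz" for b in "0xyz")  # Abelian
```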

> I think this is the same as the $$\sin(\theta+\pi/2)=\cos(\theta)$$ example. With explicit orientational symbols:
>
> $$\sin(\theta\,1_x+\phi\,1_x)=\sin(\theta\,1_x)\cos(\phi\,1_x)+\sin(\phi\,1_x)\cos(\theta\,1_x) = 1_x\sin(\theta)\cos(\phi)+1_x\sin(\phi)\cos(\theta)$$
>
> so that $$\sin(\theta\,1_x+\pi/2\,\,1_x)=1_x \cos(\theta)$$ and the discrepancy disappears.

The sine function, being a Taylor series of only odd powers of its argument, has the same orientational symbol as its argument. If the argument has orientational symbol $1_{0}$, then the value of the sine has the dimension $1_{0}$ as well. If the argument has orientational symbol $1_{z}$, then so does the value of the sine function.

Siano actually talks about this. Due to orientational analysis, we can distinguish between angular velocity (which has orientation) and circular frequency (which does not), between torque (which is oriented) and work (which is not), and so on. Perhaps a very drastic example is the coefficient of surface tension. It is defined as the energy per unit area and thus has the same orientation as the area. Its dimension is, however, $\mathrm{M} \mathrm{T}^{-2}$, the same as the rate of growth of, e.g., an animal, which is of course orientationless.
> Yes! You can derive the algebra of the directional symbols from the dot product, which is dimensionless ($\mathbf{e}_i$ are unit vectors):
>
> $$(A_j\mathbf{e}_j 1_j)\cdot(B_k \mathbf{e}_k 1_k)=A_j B_j 1_j^2 = A_j B_j 1_0$$
>
> which proves $$1_j^2=1_0$$ (dimensionless). Then do the cross product:
>
> $$(A_j\mathbf{e}_j 1_j) \times (B_k \mathbf{e}_k 1_k)=\varepsilon_{ijk}A_j B_k \mathbf{e}_i (1_j 1_k) = \varepsilon_{ijk}A_j B_k \mathbf{e}_i 1_i$$
>
> which proves that $$1_j 1_k= 1_i$$ where i, j, and k are all different. The same procedure could be carried out for the invariant analogs in relativity for the direction symbols $1_x, 1_y, 1_z, 1_t, 1_0$.

Cross product is defined only in 3d.

> The sine function, being a Taylor series of only odd powers of its argument, has the same orientational symbol as its argument. If the argument has orientational symbol $1_{0}$, then the value of the sine has the dimension $1_{0}$ as well. If the argument has orientational symbol $1_{z}$, then so does the value of the sine function.
Agreed. Do you have in mind a physical situation in which the argument is dimensionless? I will try to think of one.

> Siano actually talks about this. [...] Perhaps a very drastic example is the coefficient of surface tension. It is defined as the energy per unit area and thus has the same orientation as the area. Its dimension is, however, $\mathrm{M} \mathrm{T}^{-2}$, the same as the rate of growth of, e.g., an animal, which is of course orientationless.

I would say that the dimension of the energy per unit area is $$\mathrm{M}\,1_x\,\mathrm{T}^{-2}$$ (or $1_y$, or whatever) rather than saying that its dimension is $$\mathrm{M}\,\mathrm{T}^{-2}$$ and it is oriented. This is a semantic disagreement, so it's not super critical.

> Cross product is defined only in 3d.

I didn't say "cross product", I said "invariant analogs". The cross product in 3d is

$$(\mathbf{A} \times \mathbf{B})_i = \varepsilon_{ijk}A_jB_k$$

where $$\varepsilon_{ijk}$$ is the permutation symbol (=1 for even permutations of 123, -1 for odd, and zero otherwise). For a 4-d Euclidean space the analog is

$$\varepsilon_{ijkl}A_kB_l$$

where $$\varepsilon_{ijkl}$$ is the permutation symbol (=1 for even permutations of 1234, -1 for odd, and zero otherwise). For Minkowski space, we need to make the covariant/contravariant distinction and the analog is

$$\varepsilon_{ijkl}A^kB^l$$

Note that the analog is a 2nd rank tensor rather than a vector.
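
To illustrate, here is a small pure-Python sketch (helper names `parity` and `eps4` are mine) that builds the 4d permutation symbol and contracts it with two sample 4-vectors; the result $T_{ij}=\varepsilon_{ijkl}A_kB_l$ is indeed an antisymmetric rank-2 object rather than a vector:

```python
from itertools import permutations

def parity(p):
    # Sign of a permutation tuple, computed by sorting it in place.
    p, s = list(p), 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

# 4d permutation symbol: +/-1 on permutations of (0, 1, 2, 3), zero otherwise.
EPS4 = {p: parity(p) for p in permutations(range(4))}

def eps4(i, j, k, l):
    return EPS4.get((i, j, k, l), 0)

# Contract with two sample 4-vectors: T_ij = eps_ijkl A_k B_l
A = [1.0, 2.0, 3.0, 4.0]
B = [0.5, -1.0, 2.0, 0.0]
T = [[sum(eps4(i, j, k, l) * A[k] * B[l] for k in range(4) for l in range(4))
      for j in range(4)] for i in range(4)]

# T is a rank-2 antisymmetric tensor: T_ij = -T_ji, so the diagonal vanishes.
assert all(T[i][j] == -T[j][i] for i in range(4) for j in range(4))
```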