Baluncore said:
A measurement without units is meaningless. Consider a measured value, complete with units as an input to a process. The units identify the dimension of the value. Convert that value to SI using known conversion factors. The dimension will not change.
That's backwards from the usual approach. The outlook of conventional dimensional analysis is that dimensions (e.g. time, mass) are the fundamental properties of nature, and various units of measure (e.g. seconds, kilograms) are invented to quantify a dimension. You are saying that "dimensions" are identified by the SI "units of measure" - i.e. that the "unit of measure" is more fundamental than the concept of "dimension".
Proceed with the computations while tracking any and all the combinatorial changes of dimensions. Adding or comparing apples and oranges will raise an immediate runtime error.
Why make the assumption that adding different dimensions is an error? As pointed out by others in the thread, there are two possible interpretations of "addition". One type of addition is "appending to a set" - for example, put 2 apples in a bag and then put 3 oranges in the bag. Another type of addition is "summation of numerical coefficients of units and creation of a new type of unit that does not distinguish the summands". An example of that would be: 2 apples + 3 oranges = 5 apples+oranges.
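The two behaviors described above - products of different units are allowed, sums of different units raise a runtime error - can be sketched as a minimal Python class. This is a hypothetical `Quantity` type invented for illustration, not any particular units library: addition demands identical dimensions, while multiplication freely combines them by adding exponents.

```python
class Quantity:
    """A numeric value tagged with a dimension, e.g. {'ft': 1, 'lb': 1}."""

    def __init__(self, value, dims):
        self.value = value
        # Drop zero exponents so cancelled dimensions disappear.
        self.dims = {u: p for u, p in dims.items() if p != 0}

    def __add__(self, other):
        # Addition is only defined when the dimensions match exactly;
        # apples + oranges is an immediate runtime error.
        if self.dims != other.dims:
            raise TypeError(f"cannot add {self.dims} and {other.dims}")
        return Quantity(self.value + other.value, self.dims)

    def __mul__(self, other):
        # Products combine dimensions by summing exponents, so
        # feet times pounds is perfectly acceptable.
        dims = dict(self.dims)
        for u, p in other.dims.items():
            dims[u] = dims.get(u, 0) + p
        return Quantity(self.value * other.value, dims)

    def __repr__(self):
        unit = "·".join(f"{u}^{p}" if p != 1 else u
                        for u, p in sorted(self.dims.items()))
        return f"{self.value} {unit}" if unit else str(self.value)
```

With this sketch, `Quantity(2.5, {"ft": 1}) * Quantity(2.0, {"lb": 1})` yields `5.0 ft·lb`, while `Quantity(2, {"apple": 1}) + Quantity(3, {"orange": 1})` raises a `TypeError` - which is exactly the asymmetry being questioned here.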
It's easy to say that "5 apples+oranges" makes no sense, but why do we say that? After all, we don't object to products of units with different dimensions, like 5 (ft)(lbs). What makes a unit representing a sum of dimensions taboo, while a unit representing a product of dimensions is "the usual type of thing"?
The answer might be that Nature prefers the ambiguity in products. For example, in many situations the "final effect" on a process of a measurement of 5 (ft)(lbs) is the same no matter whether it came from a situation implemented as (1 ft)(5 lbs) or (2.5 ft)(2 lbs), etc. So the ambiguity introduced by recording data in the unit (ft)(lbs) is often harmless. However, it is not harmless in all physical situations. If a complicated experiment involves a measurement of 2 ft on something at one end of the laboratory and 2.5 lbs on something at the other end of the laboratory, summarizing the situation as 5 (ft)(lbs) may lose vital information.
Is it a "natural law" that products are the only permitted ambiguities? Allowing the ambiguity implied by a sum-of-units fails to distinguish situations that are (intuitively) vastly different. For example, a measurement of 5 apples+oranges could have resulted from inputs of 3 apples and 2 oranges, or 0 apples and 5 oranges, or 15 apples and -10 oranges. However (taking the world view of a logician), it is possible to conceive of situations where this type of ambiguity has the same "net effect". We can resort to thinking of a machine with a slot for inputting apples and another slot for inputting oranges. The machine counts the total number of things entered and moves itself along the table a distance of X feet, where X is that total.
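The machine can be written down directly as a toy function (the one-foot-per-item conversion factor is my own assumption, purely for illustration). Every pair of inputs with the same total produces the same displacement, so for this particular machine the sum-of-units ambiguity really is harmless:

```python
def machine_displacement(apples, oranges, feet_per_item=1.0):
    """Count everything dropped into either slot and move that many feet.

    The machine is blind to which slot each item entered: only the
    total apples + oranges matters, so (3, 2), (0, 5) and (15, -10)
    are indistinguishable from its point of view.
    """
    return (apples + oranges) * feet_per_item
```

Here `machine_displacement(3, 2)`, `machine_displacement(0, 5)` and `machine_displacement(15, -10)` all return `5.0` - the vastly different inputs collapse to the same "net effect".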
Is the argument in favor of products-of-units and against sums-of-units to be based only on statistics - i.e. that one type of ambiguity is often (but not always) adequate for predicting outcomes in nature, while the other type of ambiguity is rarely adequate?
I suspect we can make a better argument in favor of products-of-units if we make some assumptions about the mathematical form of natural laws. For example, do natural laws stated as differential equations impose constraints on the type of ambiguity we can permit in the measurements of the quantities involved?