Dale said:
Then read again.
This is impossible as a matter of definition. The definition of the one way speed of light is the distance that light travels (in a single straight line path) divided by the time that it takes for the light to travel that distance.
I have used a different definition of speed, one that avoids the conundrum you say I have not read. I propose that I have simply avoided it; but if I am wrong and have not avoided it at all, then surely it must be an obvious fallacy? If so, please just tell me where the logic breaks down.
Say:
D = known test distance
V1 = unknown speed of 1st measurand
V2 = unknown speed of 2nd measurand
average{V1} = k * average{V2}, by local calibration of each measurand following an identical loop circuit whose test radius << D {and using the same clock}
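To spell out where k comes from (writing L for the loop length and T1, T2 for the two lap times recorded on that one local clock, symbols I introduce here only for illustration): average{V1} = L/T1 and average{V2} = L/T2, so k = average{V1} / average{V2} = T2 / T1. No distant clock appears anywhere in that ratio.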
I propose that there is no longer a need for any time synchronisation between two distant points.
Instead, let dt = the observed time interval between the 1st measurand and the 2nd measurand passing the end of the test distance D, having started off together. The time is measured only at the end of D, with the same clock, and requires no prior synchronisation reference to any other point.
Then V1 = (D/dt) * (k - 1).
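To show where that expression comes from: both measurands leave the start of D together, so their arrival times at the far end would be t1 = D/V1 and t2 = D/V2. If the calibrated ratio also holds on the one-way leg, then V1 = k * V2, so t2 = k * D/V1, and the one quantity I actually measure is the gap dt = t2 - t1 = D * (k - 1) / V1, which rearranges to V1 = (D/dt) * (k - 1).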
'IF' I can perform that calibration, then thereafter this definition of velocity uses only a time measurement made at one single point, not synchronised to any other clock or any other time reference anywhere else.
There may be some fallacy embedded in the calibration procedure, but I am not seeing it, since the calibration does not favour any particular velocity direction prior to the actual one-way measurement.
I can do the test run along D, one way, with nothing more than a single timepiece. I do not need two timepieces, so there is no synchronisation error.
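As a purely arithmetical sanity check of the procedure (the numbers, the loop length, and the assumption that the calibrated ratio k carries over to the one-way leg are all made up for illustration), a short Python sketch:

# Sanity check of the proposed one-way procedure; illustrative numbers only.
# Key assumption: the ratio k found on the loop also holds on the one-way leg.
D = 1000.0                 # known test distance (m)
V1_true = 50.0             # speed of 1st measurand (m/s), unknown to the experimenter
V2_true = 20.0             # speed of 2nd measurand (m/s), unknown to the experimenter

# Calibration on a small local loop, one clock:
L = 10.0                   # loop length, << D
T1_loop = L / V1_true      # lap time of 1st measurand
T2_loop = L / V2_true      # lap time of 2nd measurand
k = T2_loop / T1_loop      # = average{V1} / average{V2}

# One-way run along D, both start together, one clock at the far end:
t1 = D / V1_true           # arrival time of 1st measurand
t2 = D / V2_true           # arrival time of 2nd measurand
dt = t2 - t1               # the only measured quantity

V1_recovered = (D / dt) * (k - 1)
print(k, dt, V1_recovered)  # 2.5, 30.0, 50.0 -> matches V1_true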