ghwellsjr said:
How does setting up a sequence of sensors with matched sets of cables make any difference than having just one sensor with one cable?
As you've said, to measure the speed of light we need a start point, an end point, a known distance between them, and a time interval. So I guess just two sensors would be OK. (I would have done a number of them in series to help measure the error, that's all.)
ghwellsjr said:
I agree if the delay in a cable segment is 5 nano seconds then the problem is solved, but where'd you get that number from?
I have to admit, that number was plucked out of thin air, just to demonstrate the point.
ghwellsjr said:
I agree that the delay in each cable section is the same as all the others but so what? You're just multiplying the problem over and over again without addressing the problem.
You can get reflections of signals down cables just as easily as you can get reflections of light off of mirrors, there's no difference in principle. So let's say you have a straight piece of cable that's 1000 feet long and you leave it unterminated (open circuit). Then you apply a current to one end of the cable at the same time that you start your timing device. Three microseconds later, you measure the reflected signal and stop your timer. You have measured the round-trip signal speed through this cable and can calculate the average speed of the signal as being 2000 feet divided by 3000 nanoseconds or 2/3 feet per nanosecond. But if you think that the signal took 1.5 microseconds to get to the far end of the cable and another 1.5 microsecond to return, then you are jumping to a conclusion, because you haven't measured that. You can't tell if it took 1 microsecond to go down the cable and 2 microseconds to come back to you or the other way around or any other pair of numbers that add up to 3 microseconds.
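The round-trip argument quoted above can be put in a few lines of code. The numbers (1000 ft of cable, 3 µs round trip) are taken from the post; the point is that the two-way average speed is determined, while the split into outbound and return times is not.

```python
# Round-trip cable measurement from the quoted example.
cable_length_ft = 1000.0
round_trip_ns = 3000.0

# The two-way (average) signal speed IS measurable:
avg_speed = 2 * cable_length_ft / round_trip_ns  # 2/3 ft per ns

# But any split of the 3000 ns into outbound + return time gives
# exactly the same round trip, so the one-way times are undetermined.
for t_out_ns in (1500.0, 1000.0, 2000.0):
    t_back_ns = round_trip_ns - t_out_ns
    assert t_out_ns + t_back_ns == round_trip_ns
```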
Ok, first of all I think I need to clarify my experiment, as
I am not measuring the signal from the sensor to the clock. That signal simply carries a wave of information from the sensor to the clock, saying that the sensor has detected a light source. I have no interest in, nor any need to know, how long it takes for that information to get from the sensor to the clock.
All that is important is that the time taken for that information to get from the sensor to the clock is the same for both detections.
So in effect, I fire a laser beam down a tube which has two sensors, A and B, placed a distance x apart. When A detects the light, it sends a wave of information, in one direction, to the clock. The clock registers that signal and makes a note of the time t1. The light continues down the tube until it reaches B, which sends its own one-way wave of information to the clock, registering a time t2.
I now have an elapsed time between two events (t2 - t1) and a known distance x, so I now know the speed at which the light was travelling through the tube: x/(t2 - t1).
That seems fairly straightforward to me; I don't understand why that causes a problem.
ghwellsjr said:
So if you agree that cables are no better or worse than just using light, why'd you introduce cables?
To test your hypothesis that we cannot measure the speed of light traveling in one-way direction.
EDIT: Just to add to that, the reason for the cables is that I am only using one clock, as I didn't know whether using two clocks, one at each sensor, would cause problems, since they would be separated by a distance.
If that is not an issue, I could do away with the cables and have two sensors, each with its own clock, synchronised with each other. Then I'd just take the difference between the two clock readings to establish the elapsed time.
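To see why the separated-clocks version might be sensitive to exactly how the clocks are synchronised, here is a small sketch (illustrative numbers only): any offset e between the two clocks feeds straight into the one-way result.

```python
# Two-clock variant: clock A at sensor A, clock B at sensor B.
# If clock B runs offset by e ns relative to clock A, the offset
# appears directly in the measured one-way speed.
c = 0.299792458          # m per ns
x_m = 10.0               # sensor separation (metres)

t_A = 0.0                # clock A reading when light passes A
for offset_ns in (0.0, 1.0, -1.0):
    t_B = x_m / c + offset_ns        # clock B reading when light passes B
    measured_speed = x_m / (t_B - t_A)
    print(offset_ns, measured_speed)  # only offset 0 recovers c
```

So the two-clock scheme works exactly to the extent that the synchronisation procedure is trusted, which seems to be the crux of the disagreement in this thread.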