bahamagreen said:
The local observer at rest wrt the car would see all the clocks set to the same time.
Only if you define "see" to mean he *constructs* this. It is *not* what he actually sees, as in the light actually reaching him at a given instant by his own clock.
bahamagreen said:
For the first observer, all his readings match each other, indicating the same time.
Only if you specify *which* readings match each other. Once again, the clock readings the first observer actually sees (as in, the light signals actually reaching the first observer at a given instant by his own clock) are *not* all the same. He has to construct a surface of simultaneity in which all the clock readings match "at the same time" by correcting for the light travel time from different parts of the object to him.
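To make the correction concrete, here's a minimal numeric sketch (my own toy numbers, in units where c = 1): several clocks at different distances from the observer, all at rest relative to him and synchronized in his frame.

```python
# Minimal sketch (units where c = 1): an observer at the origin, at rest
# relative to a row of synchronized clocks placed along the x axis.
C = 1.0  # speed of light

clock_positions = [1.0, 2.0, 5.0]  # distances from the observer, in light-seconds
t_obs = 10.0                       # instant on the observer's own clock

for x in clock_positions:
    delay = x / C                  # light travel time from that clock
    seen = t_obs - delay           # reading carried by the light arriving now
    constructed = seen + delay     # reading after correcting for the delay
    print(f"clock at x={x}: sees reading {seen}, constructs reading {constructed}")

# The *seen* readings all differ; the *constructed* readings are all t_obs.
```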
bahamagreen said:
For the second observer, the different readings indicate different local times on the clocks of the car.
Different compared to what? The clock readings actually seen by either observer at a given instant (as in, the light signals actually reaching the observer at a given instant) are different simply because of light travel time delay. Each observer has to correct for that; and each observer will have to correct *differently*. Once again, there are two reasons for the difference: the observers can be spatially separated, and they can be in relative motion.
bahamagreen said:
Since the readings of both observers were performed in what each would consider a single moment, the conclusion they must draw is that they observed different "times" of the car
*If* they correct their actual observations for light travel time delay (as above), *and* if they interpret the resulting readings that way.
bahamagreen said:
except for the plane of the car in which their readings of the car clocks matched.
There won't be any such plane if they are in relative motion. There will be one instant by each observer's clock in which what they actually see (as in, the light signals actually reaching them at that instant) is the same; this is the instant at which they pass each other. But they construct *different* simultaneity planes even at this instant, because they have to correct differently for light travel time delay due to their relative motion.
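For concreteness, here's a small sketch of that last point (toy numbers again, units with c = 1): take events on the car's clocks that are simultaneous for the first observer, and Lorentz-transform them into the frame of the second observer, moving at an assumed relative speed v = 0.6.

```python
# Events simultaneous in the first observer's frame, seen from the second.
v = 0.6                            # assumed relative speed (fraction of c)
gamma = 1.0 / (1.0 - v**2) ** 0.5  # Lorentz factor

for x in [-2.0, 0.0, 2.0]:         # positions of clocks along the car
    t = 0.0                        # all simultaneous in the first observer's frame
    t_prime = gamma * (t - v * x)  # time of the same event in the second frame
    print(f"event at x={x}: t={t}, t'={t_prime:.3f}")

# Only the event at the meeting point (x = 0) gets the same time in both
# frames; the others do not, so the two simultaneity planes differ.
```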
As I said before, all this gets a lot clearer if you draw a spacetime diagram of the scenario. If you don't currently know how to do this, I strongly recommend learning how. IMO it really helps to understand what's going on in scenarios like this.
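If it helps to get started, here's a bare-bones matplotlib sketch (assuming numpy and matplotlib are available; v = 0.6 is again just an example) of the kind of diagram I mean: the two observers' worldlines through their meeting event, plus each observer's simultaneity line through that event.

```python
# A bare-bones spacetime diagram: x horizontal, t vertical, units with c = 1.
import numpy as np
import matplotlib.pyplot as plt

v = 0.6                      # assumed relative speed
t = np.linspace(-3, 3, 100)
x = np.linspace(-3, 3, 100)

# Worldlines through the meeting event at the origin.
plt.plot(np.zeros_like(t), t, label="first observer: x = 0")
plt.plot(v * t, t, label="second observer: x = vt")

# Each observer's simultaneity line through the meeting event.
plt.plot(x, np.zeros_like(x), "--", label="first observer's simultaneity line: t = 0")
plt.plot(x, v * x, "--", label="second observer's simultaneity line: t = vx")

plt.xlabel("x")
plt.ylabel("t")
plt.legend()
plt.show()
```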
bahamagreen said:
Instead of clocks with faces, replace them with clocks that change geometric shape through time.
I think we're in agreement on what "clock readings" mean, and how they can change from event to event along the worldline of a given object (or part of an object).
bahamagreen said:
the "snapshot" term is a photography term.
I shouldn't have used that term, since the way I was using it is not really the right way. You're using it in the right way in what follows, so let me correct my own terminology. Instead of the word "snapshot" for what I was talking about, I'll use the word "slice". A "slice" of an object is the intersection of its world-tube (i.e., the set of worldlines of all parts of the object) with a particular 3-D surface of simultaneity; i.e., it is the set of events within that object that happen at some particular coordinate time according to a particular inertial frame.
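As a toy illustration of a "slice" (my own example, units with c = 1): intersect the world-tube of a rod of rest length L, moving at speed v, with surfaces of simultaneity t = const in the frame it moves through.

```python
# Slicing the world-tube of a moving rod (rest length L, speed v, c = 1).
v = 0.6
L = 1.0
gamma = 1.0 / (1.0 - v**2) ** 0.5

def slice_endpoints(t):
    """Endpoints of the rod's slice at coordinate time t in this frame."""
    left = v * t                   # worldline of the rod's left end
    right = v * t + L / gamma      # right end, length-contracted in this frame
    return left, right

for t in [0.0, 1.0, 2.0]:
    a, b = slice_endpoints(t)
    print(f"t={t}: slice from x={a:.3f} to x={b:.3f}, length {b - a:.3f}")

# Each slice is a set of events at one coordinate time; its length is
# L/gamma in this frame, and L in the rod's own rest frame.
```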
bahamagreen said:
The snapshot is a brief timed exposure, but the eyes are receiving an integrating constant input.
But the input received by the eyes is just a series of snapshots (where now I'm using that word in your sense, the proper sense). What the brain does with the data provided by that series of snapshots is a separate question, and it's not a question of physics but of neurobiology and cognitive science. The physics of light reaching the eye is prior to all that, and IMO we should be very careful not to confuse the two.
bahamagreen said:
The data captured by the snapshot represents a plane of light (maybe a fat plane)
Actually, the usual way a "snapshot" is modeled is as a sphere (or a section of one). More precisely, it's the intersection of a 2-sphere (or section of one) in space with the set of light rays that just reach that 2-sphere at a given instant of time in some frame.
For example, think of the intersection of a set of light rays that all pass through the focal point of your eyeball with your retina (which is more or less a section of a sphere). Ideally, the focal point is at the center of that sphere, so light rays that all pass through the focal point at some instant in the retina's rest frame will all reach the retina at a single later instant in that frame, delayed by the light travel time from the focal point to the retina.
So over time, the data collected by the eye is a series of snapshots, taken at a series of instants in the retina's rest frame. For an idealized thought experiment, we can think of this series as continuous (i.e., the series of instants is continuous), but of course a real retina does not take continuous snapshots; it takes a snapshot roughly once every 20 ms (the recovery time of the neurons in the optic nerve, IIRC, i.e., the time it takes for a neuron to be ready to fire again after it has fired once).
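To tie the "snapshot" picture back to the earlier discussion, here's a toy sketch (my own numbers, c = 1) of which events end up in a single snapshot: each clock at distance d from the focal point contributes the event on the past light cone, i.e., its reading at t_snap - d.

```python
# Which events on a row of clocks end up in one "snapshot"?
t_snap = 10.0                     # instant of the snapshot, on the eye's clock
clock_positions = [1.0, 3.0, 6.0] # distances of the clocks from the focal point

for d in clock_positions:
    emission_time = t_snap - d    # event on the past light cone of the snapshot
    print(f"clock at distance {d}: snapshot shows its reading at t = {emission_time}")

# A single snapshot thus collects events from *different* coordinate times,
# unlike a "slice", which collects them all at the same coordinate time.
```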
bahamagreen said:
whose thickness may be less than the object being observed
A snapshot, as defined above, doesn't have a "thickness", if by that you mean a thickness in time (or in space). It is taken at a single instant.
bahamagreen said:
I wonder if in that case there is a difference between the fast image and the slow image with respect to what is going on at the near and far ends of the object?
I don't think so; or rather, I think that if we're going to talk about how we actually consciously perceive objects, we are no longer talking about physics but about neurobiology and cognitive science, as above. The physics itself is as I described it above.