This is a thought experiment, so I can set up any conditions I like: I send a satellite into a perfect circle at a constant distance from my perfectly round Earth, and I take a signal from it each time it is directly overhead. I know the distance and can adjust for the time of flight (ToF) of the signal perfectly.
If I adjust the rate of the clock I send into orbit such that it always matches my clock on Earth each time it comes overhead, then the two clocks remain perfectly matched for however long the satellite stays up there. However, someone floating with the orbiting clock observes the two clocks increasingly desynchronise.
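To get a feel for the sizes involved, here is a rough Python sketch (my own illustration, not part of the setup above) using the standard weak-field formula for a clock in a circular orbit around a non-rotating Earth. The GPS-like orbit radius is just an assumed example number, and `rate_offset` is a name I made up for this sketch.

```python
# Rough sketch, weak-field approximation, non-rotating Earth assumed.
# Fractional rate offset of an orbiting clock relative to a ground clock:
#   GM/c^2 * (1/R_earth - 1/r)  -  v^2/(2 c^2),  with v^2 = GM/r for a circular orbit.

G_M = 3.986004418e14   # Earth's GM, m^3/s^2
C   = 299_792_458.0    # speed of light, m/s
R_EARTH = 6.371e6      # Earth radius, m (perfectly round, per the thought experiment)

def rate_offset(orbit_radius_m: float) -> float:
    """Fractional rate of the satellite clock relative to the ground clock."""
    gravitational = G_M / C**2 * (1.0 / R_EARTH - 1.0 / orbit_radius_m)
    kinematic = -(G_M / orbit_radius_m) / (2.0 * C**2)   # -v^2 / (2 c^2)
    return gravitational + kinematic

# Example: a GPS-like orbit radius of ~26,600 km (assumed for illustration)
offset = rate_offset(2.66e7)
print(f"fractional offset: {offset:.3e}")                    # ~ +4.4e-10 (orbiting clock runs fast)
print(f"per day: {offset * 86400 * 1e6:.1f} microseconds")   # ~ +38 us/day
```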
Just before the clock begins its descent back to Earth, I am reading on the ground that the clocks are perfectly synchronised, because that is how I have set them up in my thought experiment. But to the guy floating with the clock, they look desynchronised, because from his point of view they have been drifting further apart the whole time. Now I monitor the clock coming back down to Earth: would I notice the clocks becoming more desynchronised as the clock gets closer to me, and if so, by how much? The act of the clock coming back to me surely cannot change the time on the clock by a variable amount that depends on how long the satellite has been up there. So what does the radio signal from the clock tell me as it is coming back down to Earth? Do I see the time signal slowing down considerably during its descent, so that, by the time it gets to me, it matches the desynchronised time the guy floating with it saw and expects to see when he gets down here?
If so, why would the rate of 'correction' during the same descent path be different depending on whether I bring the satellite down after one month or after 10 years?
If not, then the clocks will read the same time when they meet, and the guy who was with the clock gets really confused?
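To put rough numbers on the one-month versus ten-year comparison (same assumptions and orbit radius as the sketch above, so the figures are only illustrative):

```python
# Continuing the sketch above: how much desynchronisation accumulates before descent?
OFFSET = 4.45e-10            # fractional rate offset, from rate_offset(2.66e7) above
SECONDS_PER_DAY = 86400
for label, days in [("1 month", 30.0), ("10 years", 3652.5)]:
    print(f"{label}: {OFFSET * days * SECONDS_PER_DAY * 1e3:.1f} ms accumulated")
# ~1.2 ms after a month vs ~140 ms after ten years: the accumulated gap grows
# linearly with time aloft, even though the descent path is identical in both cases.
```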