this post was submitted on 15 Jun 2024
871 points (99.4% liked)
Technology
How would you sync them... ? That seems to beg the question.
Sync them right next to each other, then move one of them. The other way you could test this is to have one clock report its time to the other over an optical link, and then have the other do the same. If the speed of light were different in different directions, each would measure a different lag.
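A minimal numerical sketch of that proposed test, assuming (hypothetically) that the two clocks already share a common zero. The model and the parameter κ are a common convention for this discussion, not something from the thread: the one-way speed is c/(1−κ) in one direction and c/(1+κ) in the other, so the two-way average stays c.

```python
# Toy model of the two-lag test between clocks A and B, a distance d apart.
# One-way speeds: c/(1 - kappa) for A -> B, c/(1 + kappa) for B -> A.
C = 299_792_458.0  # two-way speed of light, m/s

def one_way_lags(distance_m, kappa):
    """Signal travel times in each direction (|kappa| < 1)."""
    lag_ab = distance_m * (1 - kappa) / C  # time = d / (c / (1 - kappa))
    lag_ba = distance_m * (1 + kappa) / C
    return lag_ab, lag_ba

ab, ba = one_way_lags(1000.0, 0.2)
# With kappa != 0 the two lags differ -- but only if the clocks' zero points
# truly agree, which is exactly what the replies argue cannot be established
# without already assuming a one-way speed.
```

Note that the two lags always sum to 2d/c, which is why only their difference, never their sum, carries any information about κ.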
Well, moving them is out of the question, since, you know, motion will change the clock's time. If you re-sync them, you bake the "error" into your framework. If you try a timer, the timer is offset. If you try to propagate a signal, the signal is offset. And eventually you have to compare the two times, which muddies the waters by introducing a third clock.
Basically, there is no way to sync two clocks without checking both clocks, ergo, no way of proving or disproving. That's the premise.
In practice, I assume it is constant, but it's like P = NP. You can't prove it within the framework, even if you really, really want to believe one thing.
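The "no way of proving or disproving" point can be illustrated with a quick sketch (the anisotropy parameter κ is a common convention here, not from the thread): take any split where the outbound and return speeds differ but average to c, and the round-trip time comes out identical for every κ, so no round-trip measurement can distinguish the cases.

```python
# Round-trip timing in an anisotropic toy model: light travels at
# c/(1 - kappa) outbound and c/(1 + kappa) on the return leg, so the
# two-way (round-trip) average speed is always exactly c.
C = 299_792_458.0  # two-way speed of light, m/s

def round_trip_time(distance_m, kappa):
    t_out = distance_m * (1 - kappa) / C   # time = d / (c / (1 - kappa))
    t_back = distance_m * (1 + kappa) / C
    return t_out + t_back                  # algebraically 2 * d / C, kappa-free

# Wildly different anisotropies, identical round trips:
times = [round_trip_time(1000.0, k) for k in (0.0, 0.3, 0.9)]
```

A single clock at the starting point suffices for this measurement, which is why the two-way speed is experimentally accessible while the one-way speed is not.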
If you move one clock very slowly away from the other, the error is minimised, perhaps even to a degree that allows for statistically significant measurements.
To cite the Wikipedia entry that one of the other commenters linked:
"The clocks can remain synchronized to an arbitrary accuracy by moving them sufficiently slowly. If it is taken that, if moved slowly, the clocks remain synchronized at all times, even when separated, this method can be used to synchronize two spatially separated clocks."
One-Way Speed of Light
And further down:
Yes, I understand that part, but it doesn't disprove that such an experiment could show isotropy. Instead, it says that the experiment would always indicate isotropy, which is not entirely useful either, of course. I'll dig deeper into the publication behind that section when I have the time. Nonetheless, my original point still stands: with highly synchronised clocks, you could measure the (an)isotropy of the one-way speed of light. To determine whether the time dilation issue is surmountable, I'll have to look at the actual research behind it.
Except that if you continue reading beyond your quote, it goes on to explain why that actually doesn't help.
That the measurements from the slow-clock-transport synchronisation method are equivalent to Einstein synchronisation and its isotropic speed of light can be interpreted to show that the one-way speed of light is indeed isotropic for a given set-up, rather than anisotropic. The problem is that anisotropy could not even be measured in this context if it did exist. Still, this is definitely not a clear-cut zero-sum game: there is no evidence suggesting anisotropy, while there are observations that would at least suggest isotropy, yet neither possibility can be ruled out. However, my initial point was that, if you could have ultra-synchronised clocks, you could potentially draw a reliable conclusion. But I'll dig into the publication the Wiki entry cites for the time dilation part in the slow-clock section when I have the time.
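On the slow-transport claim quoted from Wikipedia above: in special relativity, the desynchronisation picked up by the carried clock over a fixed distance d scales as roughly d·v/(2c²), so it vanishes as the transport speed v → 0. A quick sketch (the function name and the numbers are illustrative):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def transport_offset(distance_m, v):
    """Time-dilation offset between a clock carried distance_m at constant
    speed v and a clock left behind: t - t*sqrt(1 - v^2/c^2), with t = d/v.
    Written in the numerically stable form t*x / (1 + sqrt(1 - x))."""
    x = (v / C) ** 2
    t = distance_m / v
    return t * x / (1.0 + math.sqrt(1.0 - x))

# Carrying a clock 1 km at ever slower speeds:
for v in (1000.0, 100.0, 10.0, 1.0):
    print(f"v = {v:7.1f} m/s  offset = {transport_offset(1000.0, v):.3e} s")
```

The offset falls linearly with v, matching the first-order approximation d·v/(2c²); "sufficiently slowly" in the quote corresponds to choosing v small enough that this residual sits below your measurement precision, which is the sense in which the method only recovers the Einstein convention rather than testing it.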