By default, the clock precision is set to the minimum measured time needed to read the clock. This value is typically larger than the actual resolution of the clock, which causes the NTP server to add more noise to NTP timestamps than necessary. With HW timestamping and PTP corrections enabled by the NTP-over-PTP transport, this can be the limiting factor in the stability of NTP measurements.

Try to determine the actual resolution of the clock. On non-Linux systems use the clock_getres() function. On FreeBSD and NetBSD it seems to provide the expected values. On illumos it returns a large value (the kernel tick length?). On Linux it seems to report the internal timer resolution, which is 1 ns with hrtimers, even when a lower-resolution clocksource like hpet or acpi_pm is in use.

On Linux, measure the resolution as the minimum observed change in the differences between consecutive readings of the CLOCK_MONOTONIC_RAW clock, taken with a varying amount of busy work in between. Ignore 1 ns changes caused by the kernel converting the readings to struct timespec. This seems to work reliably. In a test with the acpi_pm clocksource, differences of 3073, 3352, and 3631 ns were measured, which gives a resolution of 279 ns, matching the clocksource frequency of ~3.58 MHz. With the tsc clocksource it gives the minimum accepted resolution of 2 ns, and with kvm-clock 10 ns.

As the final value of the precision, use the minimum of the measured or clock_getres() resolution and the original minimum time needed to read the clock.
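
For illustration, below is a minimal standalone sketch of the Linux measurement described above. It is not the chrony implementation: the number of readings, the amount of busy work per iteration, and the exact handling of the 2 ns floor are assumptions made here; CLOCK_MONOTONIC_RAW, the ignored 1 ns changes, and the 2 ns minimum come from the description.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define READINGS 100
#define MIN_RESOLUTION 2        /* assumed floor on the accepted resolution (ns) */

int main(void)
{
    struct timespec t1, t2;
    int64_t diffs[READINGS], resolution = INT64_MAX;
    volatile unsigned int sink = 0;
    int i, j, work;

    for (i = 0; i < READINGS; i++) {
        /* Read the clock twice with an increasing amount of busy work in
           between, so the differences sweep across the clock's update steps */
        clock_gettime(CLOCK_MONOTONIC_RAW, &t1);
        for (work = 0; work < i * 10; work++)
            sink += work;
        clock_gettime(CLOCK_MONOTONIC_RAW, &t2);
        diffs[i] = (int64_t)(t2.tv_sec - t1.tv_sec) * 1000000000 +
                   (t2.tv_nsec - t1.tv_nsec);
    }

    /* The resolution is the smallest observed change between the measured
       differences, ignoring 1 ns changes caused by the conversion of the
       readings to struct timespec */
    for (i = 0; i < READINGS; i++) {
        for (j = 0; j < i; j++) {
            int64_t d = diffs[i] - diffs[j];
            if (d < 0)
                d = -d;
            if (d > 1 && d < resolution)
                resolution = d;
        }
    }

    if (resolution < MIN_RESOLUTION || resolution == INT64_MAX)
        resolution = MIN_RESOLUTION;

    printf("estimated clock resolution: %"PRId64" ns\n", resolution);
    return 0;
}

With the acpi_pm clocksource from the test above, this would be expected to report a value close to 279 ns.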