local: improve measurement of clock precision

By default, the clock precision is set to the minimum measured time
needed to read the clock. This value is typically larger than the actual
resolution, which causes the NTP server to add more noise to NTP
timestamps than necessary. With HW timestamping and PTP corrections
enabled by the NTP-over-PTP transport, this noise can be the limiting
factor in the stability of NTP measurements.

Try to determine the actual resolution of the clock. On non-Linux
systems, use the clock_getres() function. On FreeBSD and NetBSD it seems
to provide the expected values. On illumos it returns a large value (kernel
tick length?). On Linux it seems to be the internal timer resolution,
which is 1 ns with hrtimers, even when using a lower-resolution
clocksource like hpet or acpi_pm.
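
Querying the advertised resolution is straightforward, for example
(illustrative only; the reported values differ between systems as noted
above):

  #include <stdio.h>
  #include <time.h>

  int main(void)
  {
      struct timespec res;

      if (clock_getres(CLOCK_REALTIME, &res) == 0)
          /* e.g. 1 ns on Linux with hrtimers, much larger on some systems */
          printf("reported resolution: %.0f ns\n",
                 res.tv_sec * 1e9 + res.tv_nsec);
      else
          perror("clock_getres");
      return 0;
  }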

On Linux, try to measure the resolution as the minimum observed change
in differences between consecutive readings of the CLOCK_MONOTONIC_RAW
clock with a varying amount of busy work. Ignore 1 ns changes due to
the kernel converting readings to timespec. This seems to work reliably.
In a test with the acpi_pm clocksource, differences of 3073, 3352, and
3631 ns were measured, which gives a resolution of 279 ns, matching the
clocksource frequency of ~3.58 MHz. With a tsc clocksource it gives
the minimum accepted resolution of 2 ns and with kvm-clock 10 ns.
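
The idea behind the measurement looks roughly like this (a simplified
sketch of the approach described above, not the actual implementation):

  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  /* Read CLOCK_MONOTONIC_RAW twice with a given amount of busy work in
     between and return the difference between the readings in ns */
  static long read_diff_ns(int busy_iters)
  {
      struct timespec a, b;
      volatile unsigned long sink = 0;

      clock_gettime(CLOCK_MONOTONIC_RAW, &a);
      for (int i = 0; i < busy_iters; i++)
          sink += i;
      clock_gettime(CLOCK_MONOTONIC_RAW, &b);

      return (b.tv_sec - a.tv_sec) * 1000000000L + (b.tv_nsec - a.tv_nsec);
  }

  int main(void)
  {
      long prev = read_diff_ns(0), min_step = 1000000000L;

      for (int busy = 1; busy < 10000; busy++) {
          long d = read_diff_ns(busy), step = labs(d - prev);

          /* Ignore 0 and 1 ns changes caused by the conversion of the
             reading to timespec; the smallest accepted step is 2 ns */
          if (step > 1 && step < min_step)
              min_step = step;
          prev = d;
      }
      printf("estimated clock resolution: %ld ns\n", min_step);
      return 0;
  }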

As the final value of the precision, use the minimum of the measured or
clock_getres() resolution and the original minimum time needed to read
the clock.
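
In other words, something along these lines (hypothetical helper, names
are illustrative):

  /* Pick whichever estimate is smaller as the final precision */
  static double combine_precision(double resolution, double min_read_time)
  {
      return resolution < min_read_time ? resolution : min_read_time;
  }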

Author: Miroslav Lichvar
Date: 2025-10-08 13:09:10 +02:00
Commit: 2e29935c54 (parent 8084961011)
2 changed files with 176 additions and 17 deletions

@@ -1133,23 +1133,29 @@ distances are in milliseconds.
 [[clockprecision]]*clockprecision* _precision_::
 The *clockprecision* directive specifies the precision of the system clock (in
-seconds). It is used by *chronyd* to estimate the minimum noise in NTP
-measurements and randomise low-order bits of timestamps in NTP responses. By
-default, the precision is measured on start-up as the minimum time to read the
-clock.
+seconds). This value is used by *chronyd* as the minimum expected error and
+amount of noise in NTP and refclock measurements, and to randomise low-order
+bits of timestamps in NTP responses to make them less predictable. The minimum
+value is 1 nanosecond and the maximum value is 1 second.
 +
-The measured value works well in most cases. It generally overestimates the
-precision and it can be sensitive to the CPU speed, however, which can
-change over time to save power. In some cases with a high-precision clocksource
-(e.g. the Time Stamp Counter of the CPU) and hardware timestamping, setting the
-precision on the server to a smaller value can improve stability of clients'
-NTP measurements. The server's precision is reported on clients by the
+By default, *chronyd* tries to determine the precision on start-up as the
+resolution of the clock. On Linux, it tries to measure the resolution by
+observing the minimum change in differences between consecutive readings of the
+clock. On other systems it relies on the *clock_getres(2)* system function.
++
+If the measurement fails, or the value provided by the system is too large, the
+minimum measured time needed to read the clock will be used instead. This value
+is typically larger than the resolution, and it is sensitive to the CPU speed,
+however, which can change over time to save power.
++
+The server's precision is reported on clients by the
 <<chronyc.adoc#ntpdata,*ntpdata*>> command.
 +
-An example setting the precision to 8 nanoseconds is:
+An example setting the precision to 1 nanosecond (e.g. when the system clock is
+using a Time Stamp Counter (TSC) updated at a rate of at least 1 GHz) is:
 +
 ----
-clockprecision 8e-9
+clockprecision 1e-9
 ----
 [[corrtimeratio]]*corrtimeratio* _ratio_::