[etherlab-users] etherlab dc sync check
Jeroen Van den Keybus
jeroen.vandenkeybus at gmail.com
Fri Jan 3 23:10:50 CET 2014
>
> Master clock is accurate (more or less), but delivery is not, since
> software is involved.
>
It cannot be accurate, since the time between the TX descriptor update in
the NIC and the actual packet on the wire is highly dependent on the PC and
NIC hardware. The _rate_, however, is accurate.
It could become highly accurate, however, if one were to use the PTP
facilities that some PHYs/MACs offer (using SO_TIMESTAMPING). But that would
not be easily possible with the patched drivers, I'm afraid.
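For reference, enabling hardware TX timestamps on an ordinary raw socket
would look roughly like the sketch below (assuming a NIC/driver with PTP
timestamping support; the patched ec_* drivers bypass the normal network
stack, so this is only meant to illustrate what SO_TIMESTAMPING involves,
and the timestamps themselves would still have to be fetched from the
socket error queue with recvmsg() and MSG_ERRQUEUE):

/* Sketch: enable hardware TX timestamping on a socket bound to 'ifname'.
 * Error handling trimmed. */
#include <string.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/net_tstamp.h>
#include <linux/sockios.h>

static int enable_hw_tx_timestamps(int sock, const char *ifname)
{
    struct hwtstamp_config hwcfg;
    struct ifreq ifr;
    int flags = SOF_TIMESTAMPING_TX_HARDWARE |
                SOF_TIMESTAMPING_RAW_HARDWARE;

    memset(&hwcfg, 0, sizeof(hwcfg));
    hwcfg.tx_type = HWTSTAMP_TX_ON;          /* timestamp outgoing frames */
    hwcfg.rx_filter = HWTSTAMP_FILTER_NONE;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    ifr.ifr_data = (void *) &hwcfg;
    if (ioctl(sock, SIOCSHWTSTAMP, &ifr) < 0)  /* configure the driver/PHY */
        return -1;

    /* ask the stack to deliver the hardware timestamps to us */
    return setsockopt(sock, SOL_SOCKET, SO_TIMESTAMPING,
                      &flags, sizeof(flags));
}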
> I'm just asking why the slave(s) should synchronize to the "jumpy" clock of the master?
> Maybe it would be more correct if the master could synchronize itself to
> the first slave and then run a PLL adjusting its clock to the slave's
> one?
>
It is best if the PLL loop filter of the slave #0 clock is set
sufficiently slow. The other PLLs' loop time constants can be set much
tighter (faster). But that's where the mysterious parameters at
0x0930..0x0935 come in. I have no idea what the loop filter looks like. The
standard says it's implementation-specific, but the ET1200 datasheet
doesn't provide sufficient data to fully understand the implications of
fiddling with these parameters.
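If anyone wants to poke at them: I believe the application interface has
register requests these days, so reading (or carefully writing) that area
of a slave would look roughly like the sketch below. This assumes
ecrt_slave_config_create_reg_request() and friends are available in your
master version, and 'sc' is the configuration object of the slave in
question; the ethercat command-line tool's reg_read/reg_write commands can
do the same interactively.

#include <stdint.h>
#include <stdio.h>
#include "ecrt.h"

/* Queue a read of the DC speed counter / filter registers 0x0930..0x0935. */
static ec_reg_request_t *request_dc_filter_regs(ec_slave_config_t *sc)
{
    ec_reg_request_t *req = ecrt_slave_config_create_reg_request(sc, 6);
    if (req)
        ecrt_reg_request_read(req, 0x0930, 6);
    return req;
}

/* Call from the cyclic task until the request has completed. */
static void print_dc_filter_regs(ec_reg_request_t *req)
{
    if (ecrt_reg_request_state(req) == EC_REQUEST_SUCCESS) {
        const uint8_t *d = ecrt_reg_request_data(req);
        /* 0x0930/0x0932: speed counter start/diff,
           0x0934/0x0935: filter depths (per the ESC datasheets) */
        printf("0x0930..0x0935: %02x %02x %02x %02x %02x %02x\n",
               d[0], d[1], d[2], d[3], d[4], d[5]);
    }
}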
> In that case we can completely eliminate this time-consuming
> synchronization phase.
>
I believe the best way to eliminate much, if not all, of the
synchronization is to continue sending the synchronizing FRMW to slave #0
in the IDLE thread (and preferably also the FPWR). That way at least the
slaves do not lose sync with each other when the application isn't running.
> Say,
> a) the master sends its virtual clock to the first slave and that slave
> becomes the clock master
> b) the master re-distributes the first slave's clock to the other slaves
> c) the master adjusts its clock permanently, relative to the clock master
> (the first slave)
> d) the master periodically re-distributes the clock from the first slave to the
> others in order to eliminate time skew in the slaves.
>
>
I assume you would rather mean 'drift' instead of 'skew'.
> I think we cannot do that, it's against the EtherCAT standard :)
>
I believe it is not. The standard states that the reference clock should be
assigned to a slave upstream of the slaves to be synchronized. The reference
clock itself is said to be 'adjustable'. In a wider sense, simply letting
it run freely is compliant with this requirement.
Besides, if it demonstrably works, why not do it? But it is a rather
complex affair to synchronize the master's system clock to the slave
reference clock (using adjtime() or adjtimex()). It would also take tens of
seconds to lock the master's clock onto the reference slave. And you would
end up with a drifting real-time clock in your master.
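Just to give an idea of what that would involve: the usual way to
discipline the Linux clock is to trim its frequency with adjtimex(),
roughly like the sketch below. This is hypothetical: 'offset_ns' would be
the measured master-vs-reference-slave difference for the current cycle,
the proportional gain and clamp are arbitrary placeholders that would need
real tuning, and the call needs CAP_SYS_TIME.

#include <string.h>
#include <sys/timex.h>

/* Slew the Linux system clock toward the reference slave by trimming the
 * kernel clock frequency. offset_ns > 0 means the master is behind. */
static void slew_towards_ref_clock(long offset_ns)
{
    struct timex tx;
    long ppm;

    /* crude proportional controller: 1 ppm per 100 us of offset,
       clamped to the kernel's +/-500 ppm limit */
    ppm = offset_ns / 100000;
    if (ppm > 500) ppm = 500;
    if (ppm < -500) ppm = -500;

    memset(&tx, 0, sizeof(tx));
    tx.modes = ADJ_FREQUENCY;
    tx.freq = ppm * 65536;   /* tx.freq is ppm scaled by 2^16 */
    adjtimex(&tx);
}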
But if we kept sending the FRMW continuously, and used
ecrt_master_reference_clock_time() to read the slave reference clock
without bothering about the _absolute_ time, you would be able to schedule
pretty much anything you need relative to this timestamp, without any
calibration delays in your program. You would in fact be running a
rudimentary PLL, something along the lines of this (I didn't try it out,
but it should illustrate the principle):
first = 1;
sync_slave_clocks();
master_send();
t_next = now() + t_cycle;
while (1) {
    suspend_until(t_next);
    master_receive();
    master_reference_clock_time(&t_ref);
    if (first) {
        /* Determine (large) initial offset between master and slave */
        t_offset = t_next - t_ref;
        t_ref_next = t_ref + t_cycle;
        first = 0;
    } else {
        /* Adjust offset based on difference between the desired ref
           timestamp and the real one */
        t_offset += t_ref_next - t_ref;
        t_ref_next += t_cycle;
    }
    t_next = t_ref_next + t_offset;
    ...
    sync_slave_clocks();
    master_send();
}
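(In a real application those placeholder calls would presumably map onto
ecrt_master_receive(), ecrt_master_reference_clock_time(),
ecrt_master_sync_slave_clocks() and ecrt_master_send(), with
ecrt_master_application_time()/ecrt_master_sync_reference_clock() added
each cycle if you also want to keep feeding the app time to slave #0.)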
> It actually works similarly to what you suggest, just in another way. It
> is something like this:
>
I'm not sure that what has been said here is right. Either this is about
the offset compensation, which is carried out only once, upon the bus scan.
The offsets are derived from comparing the local packet receive time at the
ingress port with the time at which the slave sees the packet returning
through the loop at its second port. This approach has its issues when
another packet is still in that loop (very long loops) or with a software
slave that needs to receive a frame fully before emitting it again. (I'm
unsure how this condition is to be handled, since the delay then includes
the full packet transmission time and is also anything but constant.)
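(As a toy illustration of that measurement, with made-up numbers: if the
slave latches the frame at 1000 ns on its ingress port and sees it coming
back on the second port at 1460 ns, the 460 ns difference is the time the
frame spent in the loop behind that port, and the one-way delay to the next
slave is roughly half of that, minus the forwarding delays of the
downstream slaves.)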
Or it refers to the continuous drift compensation, which merely consists of
repeatedly sending the app time to the reference clock and sending the FRMW
datagram to synchronize all remaining slaves. No time comparisons are done
in the master.
So I doubt the explanations below.
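(For reference, on the application side that continuous drift compensation
boils down to something like the sketch below each cycle; the master turns
these calls into the FPWR of the app time to the reference slave and the
FRMW that distributes the reference time to the others:)

#include <stdint.h>
#include "ecrt.h"

/* Per-cycle DC housekeeping as seen from the application (sketch).
 * 'master' is the activated master, 'app_time_ns' the application's
 * notion of "now" on the DC time scale, in nanoseconds. */
static void dc_sync_cycle(ec_master_t *master, uint64_t app_time_ns)
{
    ecrt_master_application_time(master, app_time_ns); /* our "now"            */
    ecrt_master_sync_reference_clock(master);          /* app time -> slave #0 */
    ecrt_master_sync_slave_clocks(master);             /* slave #0 -> others   */
    ecrt_master_send(master);
}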
> Ref clock -> Master clock
> 1. The master asks the first slave which time it has.
>
> 2. The master compares the timestamp with its app time, and sends a
> new time offset to the first slave.
> 3. The first slave adds the new time offset to its clock. Now it has
> the same time as the master. Drift is the next thing to worry about.
>
> Other slave clock -> Ref clock
> 1. The master asks the slave which is not a ref clock which time it has.
> 2. The master compares the timestamp with the time of the ref clock,
> and sends a new time offset to the slave.
> 3. The slave adds the new time offset to its clock. Now it should have
> the same time as the ref clock.
>
> I believe the current EtherLab master doesn't do step 2 well; it
> could be done better.
>