<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
Master clock is accurate (more or less), but delivery is not, since software is involved.<br></blockquote><div><br></div><div>It cannot be accurate, since the time between the TX descriptor update in the NIC and the actual packet on the wire depends heavily on the PC and NIC hardware. The _rate_, however, is accurate.</div>
<div><br></div><div>It could become highly accurate, however, if one used the PTP facilities that some PHYs/MACs offer (via SO_TIMESTAMPING). But that would not be easily possible with the patched drivers, I'm afraid.</div>
<div> <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
I'm just asking why slave(s) shall synchronize to "jumpy" clock of master? Maybe be it would be more correct, if master could synchronize itself to the first slave and then will run pll adjusting it's clock to the slave's one?<br>
</blockquote><div><br></div><div>It is best if the PLL loop filter of the slave #0 clock is set sufficiently slow. The other PLLs' loop time constants can be set much tighter (faster). But that's where the mysterious parameters at 0x0930..0x0935 come in. I have no idea what the loop filter looks like. The Standard says it's implementation-specific, but the ET1200 datasheet doesn't provide sufficient data to fully understand the implications of fiddling with these parameters.</div>
<div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
In that case we can completely eliminate this time-consuming synchronization phase.<br></blockquote><div><br></div><div>I believe the best way to eliminate much, if not all, of the synchronization is to keep sending the synchronizing FRMW to slave #0 from the IDLE thread (and preferably the FPWR as well). That way the slaves at least do not lose sync with each other while the application isn't running.</div>
<div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
Say,<br>
a) master sends it's virtual clock to the first slave and that slave becomes a clock master<br>
b) master re-distributes fist slave's clock to other slaves<br>
c) master adjust it's clock permanently, relatively to the clock master (first slave)<br>
d) master periodically re-distributes clock from the first slave to other in order to eliminate time skew in slave.<br>
<div><br></div></blockquote><div><br></div><div>I assume you mean 'drift' rather than 'skew'.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div> I think we cannot do that, it's against the ethercat standard :)<br></div></blockquote><div><br></div><div>I believe it is not. The standard states that the reference clock should be assigned to a slave upstream of the slaves to be synchronized. The reference clock itself is said to be 'adjustable'. In a wider sense, simply letting it run freely is compliant with this requirement.</div>
<div><br></div><div>Besides, if it works, why not do it? But synchronizing the master's system clock to the slave reference clock (using adjtime() or adjtimex()) is a rather complex affair. It would also take tens of seconds to lock the master's clock onto the reference slave, and you would end up with a drifting real-time clock in your master.<br>
</div><div><br></div><div>But if we kept sending the FRMW continuously, and used ecrt_master_reference_clock_time() to read the slave reference clock without bothering about the _absolute_ time, you could schedule pretty much anything you need relative to this timestamp, without any calibration delay in your program. You would in fact be running a rudimentary PLL, something along these lines (I didn't try this out, but it should illustrate the principle):</div>
<div><br></div><pre style="font-family:monospace;font-size:13px">first = 1;

sync_slave_clocks();
master_send();
t_next = now() + t_cycle;

while (1) {
    suspend_until(t_next);
    master_receive();
    master_reference_clock_time(&amp;t_ref);
    if (first) {
        t_offset = t_next - t_ref;      /* Determine (large) initial offset between master and slave */
        t_ref_next = t_ref + t_cycle;
        first = 0;
    } else {
        t_offset += t_ref_next - t_ref; /* Adjust offset by the difference between the desired ref timestamp and the real one */
        t_ref_next += t_cycle;
    }
    t_next = t_ref_next + t_offset;
    ...
    sync_slave_clocks();
    master_send();
}</pre><div>
<span style="font-family:arial,sans-serif;font-size:13px"><br></span></div><div> <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div></div><div><div>
It actually works similiar like you suggest, just in another way. It<br>
is something like this:<br></div></div></blockquote><div><br></div><div>I'm not sure that what has been said here is right. Either this is about the offset compensation, which is carried out only once, during the bus scan. The offsets are derived by comparing the local receive time of the packet at the ingress port with the time at which the slave sees the packet returning through the loop at its second port. This approach has its issues when another packet is still in that loop (very long loops), or with a software slave that needs to receive a frame fully before emitting it again. (I'm unsure how that case is supposed to be handled, since the delay then includes the full packet transmission time and is anything but constant.)</div>
<div><br></div><div>Or it refers to the continuous drift compensation, which merely consists of repeatedly sending the app time to the reference clock and sending the FRMW datagram to synchronize all remaining slaves. No time comparisons are done in the master.</div>
<div><br></div><div>So I doubt the explanations below.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div><div>
Ref clock -> Master clock<br>
1. The master asks the first slave which time it has.</div></div></blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div><div>
2. The master compares the timestamp with its app time, and sends a<br>
new time offset to the first slave.<br>
3. The first slave adds the new time offset to its clock. Now he has<br>
the same time as the master. Drift is the next thing to worry about.<br>
<br>
Other slave clock -> Ref clock<br>
1. The master asks the slave which is not a ref clock which time it has.<br>
2. The master compares the timestamp with the time of the ref clock,<br>
and sends a new time offset to the slave.<br>
3. The slave adds the new time offset to its clock. Now he shall have<br>
the same time as the ref clock.<br>
<br>
I believe the current etherlabmaster doesn't do well in step 2. it<br>
could be done better.</div></div></blockquote></div></div><div class="gmail_extra"><br></div><div class="gmail_extra"><br></div><div class="gmail_extra"><br></div></div>