[etherlab-users] How to perform DC time synchronisation the right way?

Graeme Foot Graeme.Foot at touchcut.com
Wed May 16 02:04:56 CEST 2018


Hi,



Firstly, there is no way of checking if the frame transfer is complete.



One comment: to reduce the jitter and offset of the master application time, place the clock_gettime(), ecrt_master_application_time(), ecrt_master_sync_reference_clock() and ecrt_master_sync_slave_clocks() calls after ecrt_domain_queue() (and before ecrt_master_send()).



It doesn't matter how long "do a lot of stuff" takes, as long as the application time is set just before the send and your cycle start time is in sync with the initial call to ecrt_master_application_time().





Traditional cycle:

- master receive

- domain process

- perform application calcs

- domain queue

- dc sync

- master send



-> as long as the time from the cycle start to the master send, plus the time on the wire, is shorter than the Sync0 offset, you are all good.





Alternate option:

- cycle time 0, wakeup

- domain queue

- dc sync

- master send

- sleep for data over wire time

- wakeup

- master receive

- domain process

- perform application calcs

- sleep



-> you need to determine an appropriate sleep time between the master send and receive that allows for software overhead and the time on the wire.



Note: you can estimate the time on the wire using the "ethercat slaves -v" command: look up the "DC system time transmission delay" of your last slave and double it.





I do something a little different.  My cycle:

- master receive

- domain process

- write cached PDO values

- domain queue

- dc sync

- master send

- perform application calcs (writes to PDO data are cached)



This has the advantage of a very short turnaround with minimal jitter between the receive and send.  It allows nearly a whole cycle for the data to be on the wire.  It also allows for up to the remainder of that cycle to be used for application calculations, in parallel to the data being on the wire.  The drawbacks are:

- The domain process step will overwrite any PDO data changes you made while performing your application calcs, so you need to cache your changes somewhere else and apply them after the domain process step

- You add one extra cycle of delay for the PDO data being read.
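The caching in the first drawback can be as simple as a plain struct that the calcs write into, re-applied right after the domain process step. A sketch with hypothetical PDO entries and offsets: in a real application the offsets come from ecrt_domain_reg_pdo_entry_list() and the writes would use the EC_WRITE_* macros from <ecrt.h>; plain memcpy is used here to keep the sketch self-contained, assuming a little-endian host:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical cached outputs: the application calcs (running after
 * the master send) write only into this struct, never into the
 * domain's process data image directly. */
struct pdo_cache {
    int32_t  target_velocity;
    uint16_t control_word;
};

/* Call right after ecrt_domain_process(), which has refreshed (and
 * thus overwritten) the process data image: re-apply the cached
 * values before ecrt_domain_queue(). */
static void apply_cached_pdos(uint8_t *pd, const struct pdo_cache *c,
                              size_t off_velocity, size_t off_control)
{
    memcpy(pd + off_velocity, &c->target_velocity, sizeof c->target_velocity);
    memcpy(pd + off_control,  &c->control_word,   sizeof c->control_word);
}
```

In the cycle this slots in as: master receive -> domain process -> apply_cached_pdos() -> domain queue -> dc sync -> master send -> application calcs (writing only into struct pdo_cache).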



However, your cycle time can generally also be reduced, since the "app calc time" and the "data on the wire time" now run in parallel.



The traditional cycle takes around three cycles between writing data and receiving its results (3 * 1 ms = 3 ms turnaround).

My cycle time can often be reduced to half of the traditional one or less.  Even with the extra cycle of overhead, it still has a better turnaround (4 * 0.5 ms = 2 ms turnaround).



I personally don't reduce my cycle time (I keep it at 1 ms), as I'm happy with the extra cycle of delay and some of our controllers can have quite a large calculation overhead.





Just some info on timing from one of our controllers (around 55 slaves):

- 30us to perform master receive through to master send

- 150us to perform application calcs





One last comment.  Assuming a linear topology, the EtherCAT frames are sent out through all of the slaves, and once they reach the last slave they return through all of the slaves again, so the outgoing and returning legs should take a similar time.  Only the outgoing data needs to arrive before the Sync0 event.  So with my method you can allow nearly the whole cycle for the data to be on the wire, as long as your Sync0 events are configured to fire after the cycle half time.  If the topology is not linear, this does not apply.
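Placing the Sync0 event after the cycle half time is done via the shift parameter of ecrt_slave_config_dc(). A sketch assuming a 1 ms cycle; the assign_activate word 0x0300 and the 60 % shift are examples only and depend on your slave (take assign_activate from the slave's ESI/documentation), and the function is stubbed here so the sketch compiles on its own:

```c
#include <stdint.h>

/* Stand-in for the <ecrt.h> prototype, only so the sketch is
 * self-contained; a real application includes <ecrt.h>. */
typedef struct ec_slave_config ec_slave_config_t;
static void ecrt_slave_config_dc(ec_slave_config_t *sc,
                                 uint16_t assign_activate,
                                 uint32_t sync0_cycle, int32_t sync0_shift,
                                 uint32_t sync1_cycle, int32_t sync1_shift)
{
    (void)sc; (void)assign_activate; (void)sync0_cycle;
    (void)sync0_shift; (void)sync1_cycle; (void)sync1_shift;
}

enum { CYCLE_NS = 1000000 }; /* 1 ms cycle (example) */

/* Put Sync0 in the second half of the cycle (here at 60 %), so the
 * outgoing frame gets nearly a whole cycle to reach the slaves. */
static int32_t sync0_shift_ns(void)
{
    return (int32_t)(CYCLE_NS / 2 + CYCLE_NS / 10);
}

static void configure_dc(ec_slave_config_t *sc)
{
    /* 0x0300 is an example assign_activate word; no Sync1 is used. */
    ecrt_slave_config_dc(sc, 0x0300, CYCLE_NS, sync0_shift_ns(), 0, 0);
}
```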





Regards,

Graeme.





-----Original Message-----

From: etherlab-users [mailto:etherlab-users-bounces at etherlab.org] On Behalf Of Michael Ruder

Sent: Wednesday, 16 May 2018 4:34 AM

To: etherlab-users at etherlab.org

Subject: [etherlab-users] How to perform DC time synchronisation the right way?



Hello,



I am progressing quite well with EtherLab and am currently working on synchronizing outputs/movement with the Master time. We are using the Master 1.5.2 from the 1.5.2 branch, ec_generic driver with PREEMPT RT (kernel 4.14.28).



In our application, we need to be synchronized to the real time (UTC). We use a GPS receiver and Chrony to synchronize our PC clock to within a few microseconds.



Now I want to have the slaves also synchronized to this time frame and have the following dilemma:



- normally, I would like to call



// cycle begins



ecrt_master_receive(master);

ecrt_domain_process(domain1);



// do a lot of stuff



clock_gettime(CLOCK_REALTIME, &time);

ecrt_master_application_time(master, ((time.tv_sec - 946684800ULL) * 1000000000ULL + time.tv_nsec));



ecrt_master_sync_reference_clock(master);

ecrt_master_sync_slave_clocks(master);



ecrt_domain_queue(domain1);

ecrt_master_send(master);



// cycle ends, wait for next cycle



However, as the "lot of stuff" takes a varying amount of time, this seems problematic: it means a few hundred microseconds of jitter as to when in our (1 ms long) cycle the application time is set.



- therefore, I could call it that way:



// cycle begins



clock_gettime(CLOCK_REALTIME, &time);

ecrt_master_application_time(master, ((time.tv_sec - 946684800ULL) * 1000000000ULL + time.tv_nsec));



ecrt_master_sync_reference_clock(master);

ecrt_master_sync_slave_clocks(master);



ecrt_domain_queue(domain1);

ecrt_master_send(master);



// wait for frame transfer to complete



ecrt_master_receive(master);

ecrt_domain_process(domain1);



// do a lot of stuff



// cycle ends, wait for next cycle



This removes the jitter, and I could also somehow cope with the updated domain data being sent at the beginning of the next cycle.  However, the problematic part is the "wait for frame transfer to complete".  There seems to be no way to actually know when this has happened.  Also, I experienced stable operation with as low as 50 us, while sometimes 200 us is needed to avoid "datagram UNMATCHED" messages and a short drop out of OP (working counter going to 0 for a moment).



From what I read, there seems to be no single good solution to this, or am I overlooking something?



Is there any way of actually checking if the frame transfer is complete?

As I am in a realtime cycle, I could then busywait instead of using a fixed (probably rather large) delay.



Or can I still use the first solution? I am a bit afraid of it, as it is mentioned that the SYNC0 timing will be in phase with the first call to this function.



Thanks for your help!

--

-Michael

_______________________________________________

etherlab-users mailing list

etherlab-users at etherlab.org

http://lists.etherlab.org/mailman/listinfo/etherlab-users

