[Etherlab-users] DC synchronization demo about etherlab master

Graeme Foot Graeme.Foot at touchcut.com
Thu Apr 17 02:26:16 CEST 2025


Hi Circle,

Re below, I'm also cc'ing the forum in case anyone has more input to add.

> From: Circle Fang circlefang at live.com<mailto:circlefang at live.com>
> Sent: Wednesday, 16 April 2025 20:02
> To: Graeme Foot Graeme.Foot at touchcut.com<mailto:Graeme.Foot at touchcut.com>
> Subject: DC synchronization demo about etherlab master
>
> Dear Graeme,
>
> I am interested in motion control research, and I am really thankful for all your responses on the etherlab-users mailing list; they have helped me a lot. I am very sorry if my mail disturbs you, but I am not able to post questions on the mailing list.
>
> I like the approach you mentioned here https://lists.etherlab.org/pipermail/etherlab-users/2018-May/014740.html of sending CACHED PDOs in the next cycle, but I am not able to find the attachment you mentioned here: https://lists.etherlab.org/pipermail/etherlab-users/2022-March/019231.html. Is it possible for you to send me a copy?
>
> I am having DC sync errors occasionally with 4 YASKAWA servos (first time with this brand) connected in a line to the master, even though 0x92C is very small, no more than 20. I am using Xenomai in the traditional way (the master is synchronized to the reference clock, with PDOs sent in the current cycle), and my realtime cycle (1ms) takes a varying amount of time. If my OS is OK, is the only thing I can do to adjust the sync0 shift_time (now 300us) and send CACHED PDOs in the next cycle, as you mentioned? Do you have any other suggestions?
>
> I also have one more question about sync0 and shift time: is "DC cyclic operation start time to 64460955283935" the first sync0 time in the slave? I saw some docs say the sync0_shift is only a time after sync0 in the slave (like all motors moving at the same time, i.e., after the shift time). If so, why do we use sync0_shift to calculate dc_start_time? DC errors are about the timing between SM2 and sync0, not about sync0_shift.
>
> This has confused me for several years, and I would very much appreciate any guidance you can give me.
>
> Thanks again
> Circle


I've attached the email you mention and the test program.

Inside ecMaster_syncDistClock() I call ecrt_master_64bit_reference_clock_time_queue().  This puts a long-running timestamp in the Wireshark logs so you can check that your cycle period is not drifting.
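As a rough sketch of what that drift check looks like: assuming you retrieve the queued 64-bit reference clock time each cycle (via ecrt_master_64bit_reference_clock_time(), available in patched/unofficial masters that provide the queue call above), you can compare successive timestamps against the nominal period.  The helper below is hypothetical, not from my actual code:

```c
#include <stdint.h>

#define CYCLE_NS 1000000LL  /* 1 ms nominal cycle */

/* Given two successive 64-bit reference-clock timestamps (queued with
 * ecrt_master_64bit_reference_clock_time_queue() the cycle before and
 * read back with ecrt_master_64bit_reference_clock_time()), return how
 * far the measured period deviates from the nominal cycle, in ns.
 * Consistently positive or negative values indicate drift. */
static int64_t cycle_drift_ns(uint64_t prev_ns, uint64_t curr_ns)
{
    return (int64_t)(curr_ns - prev_ns) - CYCLE_NS;
}
```

Logging this value (or watching the timestamps directly in Wireshark) quickly shows whether your wake-up period is stable.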

If drifting is not the problem, then calling ecrt_master_send() at a consistent time (with enough time on the wire before the DC sync time at the slave) is essential.  As you say, I do this by caching the data to write from the previous cycle and sending it as soon as I have received and processed the previous domain information.  This ensures the previous frames have returned and reduces the jitter of the send.  I also set the DC shift time on the slaves to 500us (half of my 1ms period) so that the frames have half the cycle to reach all the slaves and the other half of the cycle to return.  (We only have 60 to 80 slaves so are nowhere near needing to worry about time on the wire, unless we decide at some stage to reduce the cycle period.)  The calculations for the next period can then also be performed in parallel with the frames being on the wire.
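A minimal sketch of one cycle of that pattern, using the standard ecrt_* API.  copy_cached_outputs() and compute_next_outputs() are hypothetical placeholders for your own application code; the key point is the ordering (receive, apply cached outputs, send, then calculate):

```c
#include <time.h>

#define TIMESPEC2NS(ts) ((uint64_t)(ts).tv_sec * 1000000000ULL + (ts).tv_nsec)

/* One realtime cycle: send last cycle's cached outputs immediately after
 * receiving, then do this cycle's calcs while the frames are on the wire. */
void cyclic_task(ec_master_t *master, ec_domain_t *domain)
{
    struct timespec now;

    /* 1. Fetch the frames sent last cycle and process the domain data. */
    ecrt_master_receive(master);
    ecrt_domain_process(domain);

    /* 2. Apply the outputs computed during the PREVIOUS cycle and send
     *    straight away, minimizing send jitter. */
    copy_cached_outputs(domain);                 /* hypothetical helper */
    clock_gettime(CLOCK_MONOTONIC, &now);
    ecrt_master_application_time(master, TIMESPEC2NS(now));
    ecrt_master_sync_reference_clock(master);
    ecrt_master_sync_slave_clocks(master);
    ecrt_domain_queue(domain);
    ecrt_master_send(master);

    /* 3. While the frames are on the wire, compute the outputs that will
     *    be cached and sent at the start of the NEXT cycle. */
    compute_next_outputs();                      /* hypothetical helper */
}
```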

A few other options are:

  1.  Set your DC sync offset to 900us: a time large enough to process the previous frame reads, do the calcs, send, and allow for time on the wire.  The drawback is that non-DC slaves will apply their data earlier than the DC slaves.


  2.  Wake up the master twice per cycle.  E.g.:

At 0us: Wake up and perform the send

At 500us: Wake up and receive, do calcs

At 1000us: Wake up and perform the send

Etc.



Also, people have previously asked the forum whether you can wake up on an event when the PDO data frames return.  There is no such event, so they have attempted to poll for it.  The master is not designed for this, so it doesn't work well.  What you can do, however, is figure out how much time the frames take on the wire by running the "ethercat sl -v -p0" command and looking at the Diff [ns] value for port 1 (or calculating the diff between the first and last port if using a star topology).  That tells you how long it takes for the frames to go out and return to the first slave.  Add a little overhead for the master send / receive and you can tune the mid-cycle wake-up time above to maximize the calc time.  That of course becomes system specific (and can change if you have hot plug groups).



  3.  Do what TwinCAT does and separate the calcs from the EtherCAT processing (different threads and memory spaces).

The EtherCAT module wakes up at 0us and reads linked data from other modules; it then reads the returned PDO information and caches it for other processes to read, applies the PDO write data, and sends it.

The calc modules can wake up whenever they want and follow the same steps: read linked data from other modules (including the EtherCAT PDO data), perform calcs, and write the results to the module's out data.

This is similar to my caching method, except that it uses the separate modules to achieve the data caching and decouples the EtherCAT data cycle from the calculations.  It's also helpful because TwinCAT uses overlapped PDOs.  Different calc modules can also have different cycle rates, but the EtherCAT module needs to run at least at the fastest module's cycle rate.  There are also more locking issues to resolve and more memory shuffling.
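The core of that decoupling is just a published copy of the PDO image that calc modules read at their own rate.  A minimal single-threaded sketch (names and sizes are illustrative; real code needs a lock or acquire/release ordering where commented):

```c
#include <stdint.h>
#include <string.h>

/* A double-buffered cache of the PDO image, shared between the EtherCAT
 * module (writer) and calc modules (readers). */
typedef struct {
    uint8_t buf[2][64];   /* two copies of the cached PDO image */
    int     ready;        /* index of the buffer readers may use */
} pdo_cache_t;

/* EtherCAT module: copy the freshly received PDO image into the spare
 * buffer, then publish it by flipping the index. */
static void cache_publish(pdo_cache_t *c, const uint8_t *pdo, size_t n)
{
    int spare = 1 - c->ready;
    memcpy(c->buf[spare], pdo, n);
    c->ready = spare;     /* real code: release store / lock here */
}

/* Calc module: read the most recently published image. */
static const uint8_t *cache_read(const pdo_cache_t *c)
{
    return c->buf[c->ready]; /* real code: acquire load / lock here */
}
```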


Re your last question, I'm not really sure what you are asking but from my recollection:

  *   The sync0 time is about getting all DC clock slaves onto a common time base, in sync with your master's nominal cycle zero time.  The DC syncing after that just keeps the master and the DC slaves in sync with each other.
  *   By default, slaves apply data from incoming PDOs immediately and write outgoing data at their current values.  DC-capable slaves can optionally have DC sync enabled (with ecrt_slave_config_dc()), which effectively caches the incoming PDO data to be applied at the shift time, so all DC slaves can activate their incoming data at the same time (or in a coordinated fashion).  It also latches write data at the same time (or at its own offset with sync1).  The only requirement is that the slave must have processed the PDO frames each cycle before the offset time, else it starts raising errors.
  *   The master can choose when in the nominal cycle period to send the PDO frames, but as close to the cycle zero point as possible, with minimal jitter, is preferable.  (Minimal jitter is especially good for the non-DC-activated slaves.)

ecrt_slave_config_dc() params:
  sc               the slave to configure
  assign_activate  the assign-activate word the slave uses to enable DC sync (often 0x0300) (see the ETG ethercat_esc_datasheet_sec2_registers PDF, registers 0x0980, 0x0981)
  sync0_cycle      your nominal EtherCAT master cycle time (e.g. 1ms)
  sync0_shift      when to raise the sync0 event, as a shift from the nominal cycle zero time (i.e. when in the cycle period to apply the read PDO values)
  sync1_cycle      some slaves can have a sync1 event, used for latching PDO write values at a different time to sync0.  Not generally required; use 0 to ignore
  sync1_shift      not generally required; use 0 to ignore.  The master has a bug with this; there's a patch floating around to fix it if needed
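Put together, a call matching the 500us shift described above might look like this (sc is assumed to come from ecrt_master_slave_config(); confirm the assign-activate word against your slave's ESI file or datasheet):

```c
/* Enable DC sync0 on one slave: 1 ms cycle, sync0 raised 500 us after
 * the nominal cycle zero time, sync1 unused. */
ecrt_slave_config_dc(sc,
                     0x0300,      /* assign_activate */
                     1000000,     /* sync0_cycle: 1 ms in ns */
                     500000,      /* sync0_shift: +500 us */
                     0,           /* sync1_cycle: unused */
                     0);          /* sync1_shift: unused */
```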

In summary, the sync0 event is when the PDO data is applied.  If you activate the slave's DC, sync0 occurs at the sync0_shift time after the nominal cycle zero time.  If it is not activated, the sync0 event is raised as the PDO data comes in, so the data is applied immediately.


Regards,
Graeme.


-------------- next part --------------
An embedded message was scrubbed...
From: Graeme Foot <Graeme.Foot at touchcut.com>
Subject: Re: [etherlab-users] No CoE communication
Date: Wed, 23 Aug 2017 06:49:05 +0000
Size: 44179
URL: <https://lists.etherlab.org/pipermail/etherlab-users/attachments/20250417/88acaa60/attachment-0001.eml>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: yaskawaTest.zip
Type: application/x-zip-compressed
Size: 24167 bytes
Desc: yaskawaTest.zip
URL: <https://lists.etherlab.org/pipermail/etherlab-users/attachments/20250417/88acaa60/attachment-0001.bin>

