[etherlab-users] Distribute Clock new flow

Graeme Foot Graeme.Foot at touchcut.com
Mon Apr 4 04:39:33 CEST 2016


Hi,

The DC flow from the post below is an old version, from back when RTDM support was first being added to the master.  I no longer need the separate thread to activate the master.

I use a slave as a reference clock, adjusting the PC's cycle to match.  I haven't used Xenomai or RT-PREEMPT, but they should only need something similar to rt_sleep_until().

My current DC flow is:

configure master
  ecrt_request_master
  ecrt_master_create_domain (x3)

configure slaves (pdos / sdos / dc)
  ecrt_master_get_slave                 // get slave info and check it matches the expected device and revision
  ecrt_master_slave_config
  ecrt_slave_config_pdos
  ecrt_slave_config_create_sdo_request  // I create one general usage SDO per module
  ecrt_slave_config_reg_pdo_entry       // one call per pdo entry
  ecrt_slave_config_sdo8 etc            // configuration entries
  ecrt_slave_config_dc                  // for dc slaves

prepare to run
  ecrt_master_setup_domain_memory       // pre-setup domain memory (via one of my patches)
  per domain:
    ecrt_domain_data                    // get domain data address
    ecrt_domain_size                    // get domain data size
    allocate memory x2                  // for a mask and a cache
  per module:
    calculate/cache pdo data addresses
    set initial module data/control values
  set up dc clocks:
    ecrt_master_application_time        // set initial app time (must be in phase with realtime cycle)
    ecrt_master_select_reference_clock  // select slave to use as reference clock
  ecrt_master_activate

start realtime cycle (I use RTAI)
  rt_thread_init
  rt_set_oneshot_mode
  rt_allow_nonroot_hrt
  mlockall
  calc first wake time (in phase with initial ecrt_master_application_time call)
  start_rt_timer
  rt_make_hard_real_time

realtime cycle
  ecrt_master_receive
  ecrt_domain_process x3
  ecrt_domain_state                     // check WC / state of domains
  ecrt_master_state                     // check state of master
  see if any modules want to write “special” data
  write cached domain data
  ecrt_domain_queue x3
  sync distributed clock                // as close to ecrt_master_send as possible
    ecrt_master_reference_clock_time    // calc diff between expected slave time and actual, after first realtime scan
    ecrt_master_sync_slave_clocks
    ecrt_master_application_time
  ecrt_master_send
  calc slave to master time drift and adjust master clock and cycle period to match
  see if any modules want to reset “special” data or calc extra data (eg Actual Velocity)

  perform application logic.
  note: any data to be written to the domain memory is actually written to the domain cache and mask

  run device logic (eg axis enable / disable logic)

  wait for remainder of cycle
    rt_sleep_until

stop realtime cycle (called before wait for remainder of cycle if app is flagged to shutdown)
  (while continuing realtime cycle)
  tell modules to prepare to stop, ie stop and disable axes nicely etc
  once safe:
  ecrt_master_deactivate_slaves         // from one of my patches, it stops a few shutdown errors
  (continue cycling until the rest of the app is ready to shutdown also)

and stop
  rt_make_soft_real_time
  stop_rt_timer
  rt_task_delete
  ecrt_master_deactivate
  ecrt_release_master
  free domain cache/mask memory



The call to ecrt_master_reference_clock_time() is what gives us the reference slave's clock time.  We compare this to our expected time; the difference is the slave's drift relative to the master PC.  I cache a running history of these differences to calculate an average drift, which gives my base cycle time in terms of the PC clock.  I also gradually adjust the cycle time as required to keep the slave and master periods in sync.

The application time needs to be the master PC's time adjusted by the overall drift with the reference slave's clock.



As for the "Working counter changed" messages, I'm not really sure.  Check that the cable between the last two modules is good, or maybe try a new cable.


Regards,
Graeme.



From: 陈成细 [mailto:crazyintermilan at gmail.com]
Sent: Friday, 25 March 2016 1:16 a.m.
To: Graeme Foot
Subject: Distribute Clock new flow

Dear Graeme,

Thanks a lot for your help over the last two years.
Now I can run the Yaskawa drive and motor with EtherCAT. However, it sometimes runs into a sync error; I was not aware of the problem until I bought another secondhand Omron drive. I decided to solve this DC issue from the root, so I have carefully gone through all the DC-related threads from the mailing list many times.

Of course your thread (http://thread.gmane.org/gmane.network.etherlab.user/1335/focus=1349) covers this issue from A to Z. I would like to use the same method to solve my problem, but I have the following questions:

1. your DC flow:

configure master
  ecrt_request_master
  ecrt_master_create_domain

configure slaves (pdos / sdos / dc)
  ecrt_master_slave_config
  ecrt_slave_config_pdos
  ecrt_slave_config_reg_pdo_entry
  ecrt_slave_config_sdo8
  ecrt_slave_config_dc

connect to rtdm
  rt_dev_open(rtdm)
  ecrt_domain_index
  ecrt_rtdm_master_attach

setup domain memory
  ecrt_master_setup_domain_memory
  ecrt_domain_data
  ecrt_domain_size

create master_activate thread -------+
                                     |
start realtime                       |
  request master_activate     -->    | ecrt_master_activate
  ecrt_rtdm_master_recieve           | (after first sync dc)
  sync dc                            |
  ecrt_rtdm_master_send              |
                              <--    | master activated
  ecrt_rtdm_master_recieve           |
  ecrt_rtdm_domain_process_all       |
  check domain state                 |
  check master state                 |
  update domain data                 |
  sync dc                            |
  ecrt_rtdm_domain_queue_all         |
  ecrt_rtdm_master_send              |
                                     |
  request master deactivate   -->    | ecrt_master_deactivate_slaves
  ecrt_rtdm_master_recieve           | (wait for PREOP or 5s)
  ecrt_rtdm_domain_process_all       |
  check domain state                 |
  check master state                 |
  update domain data                 |
  sync dc                            |
  ecrt_rtdm_domain_queue_all         |
  ecrt_rtdm_master_send              |
                              <--    | master deactivating
  ecrt_rtdm_master_recieve           | ecrt_master_deactivate
  sync dc                            |
  ecrt_rtdm_master_send              |
                              <--    | master deactivated
stop realtime                        |
stop master_activate thread          +

cleanup
  rt_dev_close(rtdm)
  ecrt_release_master



There is a master_activate thread; can I do it in the same real-time thread?

But it seems you revised your previous conclusion in this thread (http://thread.gmane.org/gmane.network.etherlab.user/1412/focus=1416):

I'm using RTAI but I was not calibrating my cpu-frequency.  By default Linux only calibrates the cpu-freq to the PIT timer to an accuracy of 500 parts per million.  So each time I rebooted I could get quite varied cpu-freq values.

When they were outside a good operating range I would often get:

  EtherCAT WARNING 0-9: Slave did not sync after 5001 ms.



And sometimes get:

  EtherCAT DEBUG 0-9: OP -> SAFEOP + ERROR.
  EtherCAT ERROR 0-9: AL status message 0x001A: "Synchronization error".

The drive would then report an error code of 0x0A12 (sync error).



When I changed my code to start cycling the pdo information and the dc information before calling ecrt_master_activate() the problem pretty much went away.  But I would often still get the odd sync error at some later stage.



Now after calibrating my cpu-freq value, doing a lot of other stuff and generally gathering more knowledge, I see the original diagnosis was wrong.  All that was happening was that the drives could not stay synced if the master's time cycle was too fast or too slow.  Starting to cycle the dc information sooner seemed to help them stabilise better on startup, but they would still sometimes lose sync at a later stage anyway.



(Note: I'm setting my calibrated cpu-freq by using the "rtai_cpufreq_arg" when calling insmod on rtai_hal.ko.)
 How can I check my cpu-freq and determine whether it has this issue or not?
 Could you give a little more detail on how to calibrate it in your case?
And how about RT-PREEMPT or Xenomai?

By the way, I already use the latest patches from Jun Yuan (http://lists.etherlab.org/pipermail/etherlab-users/2016/002922.html):

hg clone http://hg.code.sf.net/u/mensch88/etherlabmaster -r rtleaders

etherlab-dev


2. My issue is that the Omron drive keeps changing the working counter, like:
   [10353.644769] EtherCAT 0: Domain 2: Working counter changed to 2/3.
   [10354.652702] EtherCAT 0: Domain 2: Working counter changed to 3/3.
   [10358.496736] EtherCAT 0: Domain 2: Working counter changed to 2/3.
   [10359.504699] EtherCAT 0: Domain 2: Working counter changed to 3/3.

   Do you have any experience with this? A detailed dmesg file is attached.



I am sorry to send you such a long email. I would appreciate any hints you can give to help me solve this nightmare issue.

Thanks very much!
--
Best regards!
陈成细
R&D Engineer

