[etherlab-users] Distributed Clocks
Richard Hacker
ha at igh.de
Mon Feb 22 10:52:16 CET 2016
I do not have the patience to look into your code in detail to find out
what you're doing wrong, but here is an excerpt of the code I use in the
EtherLab Simulink Coder (ecrt_support.c):
void
ecs_send()
{
    //...
    list_for_each(master, &ecat_data.master_list, list) {
        list_for_each(domain, &master->domain_list, list) {
            // ...
            ecrt_domain_queue(domain->handle);
        }

        if (/* send frames */) {
            struct timespec tp;
            clock_gettime(CLOCK_MONOTONIC, &tp);

            ecrt_master_application_time(master->handle,
                    EC_TIMESPEC2NANO(tp));

            if (/* sync_clocks */) {
                ecrt_master_sync_reference_clock(master->handle);
                //...
            }
            ecrt_master_sync_slave_clocks(master->handle);

            ecrt_master_send(master->handle);
        }
    }
}
void
ecs_receive()
{
    //...
    list_for_each(master, &ecat_data.master_list, list) {
        if (/* receive frames */) {
            ecrt_master_receive(master->handle);
        }

        list_for_each(domain, &master->domain_list, list) {
            ecrt_domain_process(domain->handle);
            ecrt_domain_state(domain->handle, &domain->state);
        }
    }
}
void
calculate()
{
    while (forever) {
        ecs_receive();

        // do calculations

        ecs_send();

        sleep();
    }
}
Some of this is pseudo code, but I trust you can adapt it to your
code ;)
It is very important to call clock_gettime() and
ecrt_master_application_time() as close to ecrt_master_send() as possible.
As you can see, it is not necessary to call
ecrt_master_sync_reference_clock() with every ecrt_master_send(). In
fact, ~100Hz of syncing should be enough, provided that your call to
clock_gettime() is accurate. On the other hand, if clock_gettime() is
inaccurate, you're in trouble when using DC anyway!
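For example, a simple cycle counter keeps the reference clock sync at a
reduced rate. This is only a sketch; the divider of 10 assumes a 1 kHz
task and is something you would tune yourself:

    // Sketch: sync the reference clock only every 10th cycle (~100 Hz
    // at a 1 kHz task rate). sync_ref_counter is a local helper, not
    // part of the ecrt API.
    static unsigned int sync_ref_counter = 0;
    struct timespec tp;

    clock_gettime(CLOCK_MONOTONIC, &tp);
    ecrt_master_application_time(master->handle, EC_TIMESPEC2NANO(tp));

    if (sync_ref_counter == 0) {
        sync_ref_counter = 10;
        ecrt_master_sync_reference_clock(master->handle);
    }
    sync_ref_counter--;

    ecrt_master_sync_slave_clocks(master->handle);
    ecrt_master_send(master->handle);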
- Richard
On 2016-02-22 at 04:23, Thomas Bitsky Jr wrote:
> Ok, after following all your advice, plus the clock adjustment from
> Mattieu, I’m pretty close. I enable the DC on the drive, get everything
> into op, the domains all come up and it would appear that the clocks are
> syncing. Except for one thing:
>
> [10248.633619] EtherCAT 0: Domain 0: Working counter changed to 1/17.
> [10249.535776] EtherCAT WARNING 0: 3989 datagrams UNMATCHED!
> [10249.539861] EtherCAT WARNING: Datagram f3bf2d0c (domain0-0-main) was
> SKIPPED 1 time.
> [10249.539887] EtherCAT WARNING: Datagram f3bf2a8c (domain2-120-main)
> was SKIPPED 1 time.
> [10249.539918] EtherCAT WARNING: Datagram f3bf2e8c (domain1-114-main)
> was SKIPPED 1 time.
> [10249.635934] EtherCAT 0: Domain 0: 9 working counter changes - now 17/17.
> [10249.919163] EtherCAT 0: Domain 2: Working counter changed to 1/1.
> [10250.018245] EtherCAT 0: Domain 1: Working counter changed to 1/1.
> [10250.040259] EtherCAT 0: Slave states on main device: OP.
> [10250.536654] EtherCAT WARNING 0: 4001 datagrams UNMATCHED!
> [10251.537469] EtherCAT WARNING 0: 4000 datagrams UNMATCHED!
> [10252.538282] EtherCAT WARNING 0: 4000 datagrams UNMATCHED!
> [10252.538296] EtherCAT WARNING 0: 4000 datagrams UNMATCHED!
> [10253.539097] EtherCAT WARNING 0: 4000 datagrams UNMATCHED!
> [10254.539909] EtherCAT WARNING 0: 4000 datagrams UNMATCHED!
> [10255.540723] EtherCAT WARNING 0: 4000 datagrams UNMATCHED!
> [10256.541537] EtherCAT WARNING 0: 4000 datagrams UNMATCHED! <— This
> continues indefinitely
>
>
> Everything else seems okay:
>
> user at core:~/dist$ ethercat slaves
> 0 0:0 OP + EK1100 EtherCAT-Koppler (2A E-Bus)
> 1 0:1 OP + EL1104 4K. Dig. Eingang 24V, 3ms, Sensorversorgung
> 2 0:2 OP + EL1104 4K. Dig. Eingang 24V, 3ms, Sensorversorgung
> 3 0:3 OP + EL5101 1K. Inc. Encoder 5V
> 4 0:4 OP + EL7041 1K. Schrittmotor-Endstufe (50V, 5A)
> 5 0:5 OP + EL7041 1K. Schrittmotor-Endstufe (50V, 5A)
> 6 0:6 OP + EL2004 4K. Dig. Ausgang 24V, 0.5A
> 7 0:7 OP + EL2004 4K. Dig. Ausgang 24V, 0.5A
> 8 0:8 OP + EL4001 1K. Ana. Ausgang 0-10V, 12bit
> 9 1:0 OP + 0x00000539:0x02200001
> user at core:~/dist$ ethercat master
> Master0
> Phase: Operation
> Active: yes
> Slaves: 10
> Ethernet devices:
> Main: 00:30:18:c8:d4:92 (attached)
> Link: UP
> Tx frames: 5204542
> Tx bytes: 733318966
> Rx frames: 5204539
> Rx bytes: 733318636
> Tx errors: 0
> Tx frame rate [1/s]: 4000 3944 2090
> Tx rate [KByte/s]: 823.7 811.9 412.9
> Rx frame rate [1/s]: 4000 3944 2090
> Rx rate [KByte/s]: 823.7 811.9 412.9
> Common:
> Tx frames: 5204542
> Tx bytes: 733318966
> Rx frames: 5204539
> Rx bytes: 733318636
> Lost frames: 3
> Tx frame rate [1/s]: 4000 3944 2090
> Tx rate [KByte/s]: 823.7 811.9 412.9
> Rx frame rate [1/s]: 4000 3944 2090
> Rx rate [KByte/s]: 823.7 811.9 412.9
> Loss rate [1/s]: 0 0 0
> Frame loss [%]: 0.0 0.0 0.0
> Distributed clocks:
> Reference clock: Slave 0
> Application time: 509424243322706771
> 2016-02-22 02:44:03.322706771
>
>
>
>
>
>
> I must have something out of sync. Is there something more I’m supposed
> to do to adjust the clock? I can make the number of dropped packets
> worse by simplifying the time adjustment, but I can’t seem to get it any
> better. I think I’m not understanding something about the clock
> adjustment. Can you see what I’m doing wrong?
>
> Thanks!
>
> My cyclic task:
>
>
> static void
> sync_distributed_clocks(void)
> {
>     uint32_t ref_time = 0;
>     uint64_t prev_app_time = dc_time_ns;
>
>     dc_time_ns = system_time_ns();
>
>     // set master time in nano-seconds
>     ecrt_master_application_time(master_, dc_time_ns);
>
>     // get reference clock time to synchronize master cycle
>     ecrt_master_reference_clock_time(master_, &ref_time);
>     dc_diff_ns = (uint32_t) prev_app_time - ref_time;
>
>     // call to sync slaves to ref slave
>     ecrt_master_sync_slave_clocks(master_);
> }
>
> static unsigned int cycle_ns = 1000000; // 1 millisecond
>
> void update_master_clock(void)
> {
>     // calc drift (via un-normalised time diff)
>     int32_t delta = dc_diff_ns - prev_dc_diff_ns;
>     prev_dc_diff_ns = dc_diff_ns;
>
>     // normalise the time diff
>     dc_diff_ns =
>         ((dc_diff_ns + (cycle_ns / 2)) % cycle_ns) - (cycle_ns / 2);
>
>     // only update if primary master
>     if (dc_started)
>     {
>         // add to totals
>         dc_diff_total_ns += dc_diff_ns;
>         dc_delta_total_ns += delta;
>         dc_filter_idx++;
>
>         if (dc_filter_idx >= DC_FILTER_CNT)
>         {
>             // add rounded delta average
>             dc_adjust_ns +=
>                 ((dc_delta_total_ns + (DC_FILTER_CNT / 2)) /
>                 DC_FILTER_CNT);
>
>             // and add adjustment for general diff (to pull in drift)
>             dc_adjust_ns += sign(dc_diff_total_ns / DC_FILTER_CNT);
>
>             // limit crazy numbers (0.1% of std cycle time)
>             if (dc_adjust_ns < -1000) {
>                 dc_adjust_ns = -1000;
>             }
>             if (dc_adjust_ns > 1000) {
>                 dc_adjust_ns = 1000;
>             }
>
>             // reset
>             dc_diff_total_ns = 0LL;
>             dc_delta_total_ns = 0LL;
>             dc_filter_idx = 0;
>         }
>
>         // add cycles adjustment to time base (including a spot adjustment)
>         system_time_base += dc_adjust_ns + sign(dc_diff_ns);
>     }
>     else
>     {
>         dc_started = (dc_diff_ns != 0);
>
>         if (dc_started)
>         {
>             // output first diff
>             PRINT("First master diff: %d.\n", dc_diff_ns);
>
>             // record the time of this initial cycle
>             dc_start_time_ns = dc_time_ns;
>         }
>     }
> }
>
>
> int
> ecatMain_process(void* lp)
> {
>     sync_distributed_clocks();
>     update_master_clock();
>
>     ecrt_master_receive(master_);
>     ecrt_domain_process(lrwDomainMgr_.domain);
>     ecrt_domain_process(noLrwWriteDomainMgr_.domain);
>     ecrt_domain_process(noLrwReadDomainMgr_.domain);
>
>     … // handle my business
>
>     // send process data
>     ecrt_domain_queue(lrwDomainMgr_.domain);
>     ecrt_domain_queue(noLrwWriteDomainMgr_.domain);
>     ecrt_domain_queue(noLrwReadDomainMgr_.domain);
>
>     // send EtherCAT data
>     ecrt_master_send(master_);
>
>     return 1;
> }
>
>
>
>
> I made my sync0 time half of the scan rate, as you prescribed:
>
> void
> ecatModule_registerDcClock( Slave* slave, uint16_t addr )
> {
>     // keep the shift in a 32-bit variable; a uint16_t would truncate
>     // anything above 65535 ns
>     uint32_t sync0time = (uint32_t)((globalScanRate_us * 1000) / 2);
>
>     ecrt_slave_config_dc(
>         slave->sc,
>         addr,
>         (globalScanRate_us * 1000),
>         sync0time,
>         0,
>         0);
> }
>
>
>
>
>
> From: Graeme Foot <Graeme.Foot at touchcut.com>
> Date: Sunday, February 21, 2016 at 4:07 PM
> To: Thomas Bitsky <tbj at automateddesign.com>
> Cc: "etherlab-users at etherlab.org" <etherlab-users at etherlab.org>
> Subject: RE: Distributed Clocks
>
> Hi,
>
> I build and patch against revision 2526, so I don’t know what patches /
> fixes have made it through to the latest release. However, for my
> revision I need fixes for reference clock selection and dc
> synchronization issues.
>
> I’ve attached the dc related patches I use, but these are applied after
> some other patches so you may get some conflicts or offsets.
>
> *01 - Distributed Clock fixes and helpers.patch*
>
> This sorts out some ref slave issues, allowing a user selected ref
> slave. It also adds some helper functions:
>
> - ecrt_master_setup_domain_memory() : this allows me to set up the
> domain memory and do stuff with it before calling ecrt_master_activate()
> (for user space apps)
>
> - ecrt_master_deactivate_slaves() : this lets me deactivate the slaves
> while still in realtime to avoid the slaves getting some shutdown sync
> errors
>
> *02 - Distributed Clock fixes from Jun Yuan - dc sync issues.patch*
>
> This sorts out some timing issues to do with slave dc syncing. Without
> it, slaves can start syncing from one cycle out, causing a large syncing
> time overhead, one slave at a time.
>
> Regards,
>
> Graeme.
>
> *From:* Thomas Bitsky Jr [mailto:tbj at automateddesign.com]
> *Sent:* Sunday, 21 February 2016 10:27 a.m.
> *To:* Graeme Foot
> *Cc:* etherlab-users at etherlab.org
> *Subject:* Re: Distributed Clocks
>
> Graeme,
>
> Thank you so much for the detailed response. I'm away from my computer
> right now, so I can't try out your advice, but I noticed you asked about
> patches. I am not running any patches; which should I get?
>
> Thanks!
> Thomas Bitsky Jr
>
> On Feb 20, 2016, at 3:04 PM, Graeme Foot <Graeme.Foot at touchcut.com> wrote:
>
> Hi,
>
> The slave clocks get synced via the distributed clock system using
> the master methods: ecrt_master_reference_clock_time(),
> ecrt_master_sync_slave_clocks(), ecrt_master_application_time(),
> ecrt_master_sync_reference_clock().
>
> However each individual slave can choose (if it is capable of it)
> whether to synchronize its reading and writing of data to a
> particular point in time within the dc cycle. If that slave does
> not choose to do so then the reading and writing of the data occurs
> at the time the EtherCAT frame goes past, resulting in a progressive
> update and application of data as the frame progresses through all
> of the slaves.
>
> If a slave is registered to use the dc clock then it caches the
> frame data until the sync0 interrupt so in theory all dc slaves
> apply the data at the same time. It also means you don’t have to
> worry about jitter as to the time the frame is sent over the wire.
> The only thing is to choose a good sync0 time to ensure your frames
> are always sent out and have reached all of the slaves before the
> sync0 time occurs, and that the next frame is not sent out before
> the previous sync0 time occurs.
>
> In my application my cycle time is 1000us. I choose a sync0 time of
> 500us. I also send my frame as close as possible to the beginning
> of the cycle. This gives the frame up to half the cycle time to
> reach all of the slaves and then the other half of the cycle for the
> frame to return in time for the master receive call. I could choose
> a sync0 time later than 500us but I want it processed as soon as
> possible after the frame is received while still allowing for a bus
> with a large number of terminals.
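>
> With ecrt_slave_config_dc() that looks something like the call below (a
> sketch only; 0x0300 is the assign/activate word I use for my drives,
> other slaves may need a different value):
>
>     // 1 ms sync0 cycle, sync0 shifted 500 us into the dc cycle, no sync1
>     ecrt_slave_config_dc(sc, 0x0300, 1000000, 500000, 0, 0);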
>
> So below where you are calculating loop_shift based on the current
> time is wrong. Just choose a time within the dc cycle and use that
> value for all slaves. Note: the beginning of the dc cycle is in
> phase with the first call to ecrt_master_application_time(), so all
> of your realtime looping should also be in phase with that initial time.
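>
> A realtime loop that stays in phase can be built on absolute sleeps,
> roughly like this (a rough sketch, not my actual code; "running" and
> "cycle_ns" are placeholders):
>
>     // Anchor every wakeup to one initial timestamp; that same timestamp
>     // is also handed to the master as the first application time.
>     struct timespec wakeup;
>     clock_gettime(CLOCK_MONOTONIC, &wakeup);
>     ecrt_master_application_time(master, EC_TIMESPEC2NANO(wakeup));
>
>     while (running) {
>         wakeup.tv_nsec += cycle_ns;
>         while (wakeup.tv_nsec >= 1000000000) {
>             wakeup.tv_nsec -= 1000000000;
>             wakeup.tv_sec++;
>         }
>         clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &wakeup, NULL);
>         // ... receive, process, calculate, queue, application time,
>         // sync, send ...
>     }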
>
> Note, when selecting a slave to be the DC reference slave you should
> generally choose the first slave on your bus, regardless of whether
> it will be (or can be) registered to use the dc synchronization. At
> the very least it must be, or come before, the first slave that will
> be registered as a dc slave.
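>
> If your master version provides it (newer 1.5.x masters and the dc
> patches I mention below do), you can also pin the reference slave
> explicitly rather than letting the master pick the first DC-capable
> slave. A sketch, where sc0, VENDOR_ID and PRODUCT_CODE stand in for
> your first slave's config and identity:
>
>     ec_slave_config_t *sc0 = ecrt_master_slave_config(
>             master, 0 /* alias */, 0 /* position */,
>             VENDOR_ID, PRODUCT_CODE);
>     ecrt_master_select_reference_clock(master, sc0);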
>
> Also note that some slaves’ clocks are not very stable, so those slaves
> shouldn’t be used as the DC reference slave. My original testing was on
> a Beckhoff CX1020 with a CX1100-0004; this could not be used as a
> reference slave as its clock was not stable.
>
> I see you are configuring the slaves via ecrt_slave_config_*() then
> calling ecrt_slave_config_pdos() at the end. If you call
> ecrt_slave_config_pdos() at the beginning you don’t need all the
> other config calls. However I hear AKD drives and some other slaves
> prefer explicit slave config calls but most slaves are happy with
> just the ecrt_slave_config_reg_pdo_entry() methods.
>
> This also leads to another issue. One of the configured PDO items
> is the “mode of operation” value (0x6060 0x00). You are also
> configuring this with: ecrt_slave_config_sdo8( sc, 0x6060, 0, 8 ).
> This value should instead be set via the PDO. Use
> ecrt_slave_config_reg_pdo_entry() to get the offset to the value and
> set the value there.
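>
> For example (a sketch; "domain", "domain_pd" and "off_opmode" are
> placeholder names, not your code):
>
>     // During configuration: get the byte offset of 0x6060:00 within
>     // the domain's process data image.
>     int off_opmode = ecrt_slave_config_reg_pdo_entry(
>             sc, 0x6060, 0x00, domain, NULL);
>     if (off_opmode < 0) {
>         // handle the error
>     }
>
>     // After activation, in the cyclic task: write the mode there.
>     EC_WRITE_S8(domain_pd + off_opmode, 8); // 8 = cyclic synchronous position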
>
> Sorry if that was a bit long, but DC is not an easy topic to get
> your head around. Here’s a bit of a summary:
>
> - You can choose which slaves get registered with
> ecrt_slave_config_dc(). But each slave you want synced must get its
> own call to ecrt_slave_config_dc().
>
> - If your yaskawa drives get to OP without ecrt_slave_config_dc()
> then they should get to OP with ecrt_slave_config_dc().
>
> - The yaskawa drives require a very stable reference slave time,
> which is why we sync the EtherCAT master to the reference slave
> rather than the other way around.
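>
> In practice that means the correction ends up in an offset such as
> system_time_base, and every local timestamp the cyclic task uses (the
> application time and the wakeup times) goes through one helper that
> applies it. A sketch of the idea, not a drop-in:
>
>     // All application/wakeup times use this, so the whole loop (and
>     // the dc application time) follows the reference slave's clock.
>     uint64_t system_time_ns(void)
>     {
>         struct timespec ts;
>         clock_gettime(CLOCK_MONOTONIC, &ts);
>         return TIMESPEC2NS(ts) + system_time_base;
>     }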
>
> And some other comments:
>
> - Never use an EL9010 endcap module. These break the distributed
> clock calculations. I don’t think they are available anymore though.
>
> - There are some patches out there that fix various DC clock issues,
> are you using any of these?
>
> Regards,
>
> Graeme.
>
> *From:* Thomas Bitsky Jr [mailto:tbj at automateddesign.com]
> *Sent:* Sunday, 21 February 2016 7:15 a.m.
> *To:* Graeme Foot; etherlab-users at etherlab.org
> *Subject:* Re: Distributed Clocks
>
> [snip]
>
> I’ve never been able to get the EL7041 stepper modules to work in dc
> mode.
>
> [/snip]
>
> Is it all or nothing? I need the servo drives, the LVDT and the
> EL3356 tied to a distributed clock. The EL7041 is optional for me.
>
> [snip]
>
> I don’t see in your code calls to ecrt_slave_config_dc().
>
> For the yaskawa drive, during the config stage, I use the following
> calls…
>
> [/snip]
>
> Forgot to put that part; my bad. This is what I had for the
> Yaskawa/AKD, although I was only doing it to one of the drives. I
> thought I was supposed to set up one distributed clock, and it
> became the master and handled the rest. Am I supposed to do this for
> all the cards, or do I select?
>
> Yaskawa (AKD drive code is pretty much the same):
>
> if (!(sc = ecrt_master_slave_config(
>         master,
>         slavePosDomain,
>         slavePosIndex,
>         vendorId, productCode)))
> {
>     return FALSE;
> }
>
> ecrt_slave_config_sdo8( sc, 0x1C12, 0, 0 ); /* clear sm pdo 0x1C12 */
> ecrt_slave_config_sdo8( sc, 0x1C13, 0, 0 ); /* clear sm pdo 0x1C13 */
> ecrt_slave_config_sdo8( sc, 0x1A00, 0, 0 ); /* clear TxPDO0 */
> ecrt_slave_config_sdo8( sc, 0x1A01, 0, 0 ); /* clear TxPDO1 */
> ecrt_slave_config_sdo8( sc, 0x1A02, 0, 0 ); /* clear TxPDO2 */
> ecrt_slave_config_sdo8( sc, 0x1A03, 0, 0 ); /* clear TxPDO3 */
> ecrt_slave_config_sdo8( sc, 0x1600, 0, 0 ); /* number of vars in this PDO */
> ecrt_slave_config_sdo8( sc, 0x1601, 0, 0 ); /* clear RxPDO 0x1601 */
> ecrt_slave_config_sdo8( sc, 0x1602, 0, 0 ); /* clear RxPDO 0x1602 */
> ecrt_slave_config_sdo8( sc, 0x1603, 0, 0 ); /* clear RxPDO 0x1603 */
>
> ecrt_slave_config_sdo8( sc, 0x1A00, 0, 0 );            /* clear TxPDO0 */
> ecrt_slave_config_sdo32( sc, 0x1A00, 1, 0x60410010 );  // Status word
> ecrt_slave_config_sdo32( sc, 0x1A00, 2, 0x60640020 );  // Position actual value, per encoder
> ecrt_slave_config_sdo32( sc, 0x1A00, 3, 0x60770010 );  // Torque, actual value
> ecrt_slave_config_sdo32( sc, 0x1A00, 4, 0x60F40020 );  // Following error, actual value
> ecrt_slave_config_sdo32( sc, 0x1A00, 5, 0x60610008 );  // Modes of operation display
> ecrt_slave_config_sdo32( sc, 0x1A00, 6, 0x00000008 );  // GAP
> ecrt_slave_config_sdo32( sc, 0x1A00, 7, 0x60B90010 );  // Touch probe status
> ecrt_slave_config_sdo32( sc, 0x1A00, 8, 0x60BA0020 );  // Touch probe 1 position
> ecrt_slave_config_sdo8( sc, 0x1A00, 0, 8 );            /* pdo entries */
>
> ecrt_slave_config_sdo8( sc, 0x1A01, 0, 0 );            /* clear TxPDO1 */
> ecrt_slave_config_sdo32( sc, 0x1A01, 1, 0x60410010 );  // Status word
> ecrt_slave_config_sdo32( sc, 0x1A01, 2, 0x60640020 );  // Position actual value, per encoder
> ecrt_slave_config_sdo8( sc, 0x1A01, 0, 2 );            /* pdo entries */
>
> ecrt_slave_config_sdo8( sc, 0x1A02, 0, 0 );            /* clear TxPDO2 */
> ecrt_slave_config_sdo32( sc, 0x1A02, 1, 0x60410010 );  // Status word
> ecrt_slave_config_sdo32( sc, 0x1A02, 2, 0x60640020 );  // Position actual value, per encoder
> ecrt_slave_config_sdo8( sc, 0x1A02, 0, 2 );            /* pdo entries */
>
> ecrt_slave_config_sdo8( sc, 0x1A03, 0, 0 );            /* clear TxPDO3 */
> ecrt_slave_config_sdo32( sc, 0x1A03, 1, 0x60410010 );  // Status word
> ecrt_slave_config_sdo32( sc, 0x1A03, 2, 0x60640020 );  // Position actual value, per encoder
> ecrt_slave_config_sdo32( sc, 0x1A03, 3, 0x60770010 );  // Torque, actual value
> ecrt_slave_config_sdo8( sc, 0x1A03, 0, 3 );            /* pdo entries */
>
> ecrt_slave_config_sdo8( sc, 0x1600, 0, 0 );            /* clear entries */
> ecrt_slave_config_sdo32( sc, 0x1600, 1, 0x60400010 );  /* control word */
> ecrt_slave_config_sdo32( sc, 0x1600, 2, 0x607A0020 );  /* target position */
> ecrt_slave_config_sdo32( sc, 0x1600, 3, 0x60FF0020 );  /* target velocity */
> ecrt_slave_config_sdo32( sc, 0x1600, 4, 0x60710010 );  /* target torque */
> ecrt_slave_config_sdo32( sc, 0x1600, 5, 0x60720010 );  /* max torque */
> ecrt_slave_config_sdo32( sc, 0x1600, 6, 0x60600008 );  /* modes of operation */
> ecrt_slave_config_sdo32( sc, 0x1600, 7, 0x00000008 );  /* gap */
> ecrt_slave_config_sdo32( sc, 0x1600, 8, 0x60B80010 );  /* touch probe function */
> ecrt_slave_config_sdo8( sc, 0x1600, 0, 8 );            /* pdo entries */
>
> ecrt_slave_config_sdo8( sc, 0x1601, 0, 0 );            /* clear entries */
> ecrt_slave_config_sdo32( sc, 0x1601, 1, 0x60400010 );  /* control word */
> ecrt_slave_config_sdo32( sc, 0x1601, 2, 0x607A0020 );  /* target position */
> ecrt_slave_config_sdo8( sc, 0x1601, 0, 2 );            /* pdo entries */
>
> ecrt_slave_config_sdo8( sc, 0x1602, 0, 0 );            /* clear entries */
> ecrt_slave_config_sdo32( sc, 0x1602, 1, 0x60400010 );  /* control word */
> ecrt_slave_config_sdo32( sc, 0x1602, 2, 0x60FF0020 );  /* target velocity */
> ecrt_slave_config_sdo8( sc, 0x1602, 0, 2 );            /* pdo entries */
>
> ecrt_slave_config_sdo8( sc, 0x1603, 0, 0 );            /* clear entries */
> ecrt_slave_config_sdo32( sc, 0x1603, 1, 0x60400010 );  /* control word */
> ecrt_slave_config_sdo32( sc, 0x1603, 2, 0x60710020 );  /* target torque */
> ecrt_slave_config_sdo8( sc, 0x1603, 0, 2 );            /* pdo entries */
>
> ecrt_slave_config_sdo16( sc, 0x1C12, 1, 0x1601 );      /* download pdo 1C12 index */
> ecrt_slave_config_sdo8( sc, 0x1C12, 0, 1 );            /* set number of RxPDO */
> ecrt_slave_config_sdo16( sc, 0x1C13, 1, 0x1A01 );      /* download pdo 1C13 index */
> ecrt_slave_config_sdo8( sc, 0x1C13, 0, 1 );            /* set number of TxPDO */
>
> // OPMODE
> // Yaskawa recommends 8
> ecrt_slave_config_sdo8( sc, 0x6060, 0, 8 );
>
> unsigned char interpolationTime = 0xFF;
> unsigned char cycleExponent = 0xFA;  // 250, i.e. -6 as a signed byte
>
> // microseconds
> // globalScanRate_us equals either 250 or 125.
> unsigned int us = globalScanRate_us;
> size_t i;
>
> // scale down until the mantissa fits in one byte
> for ( i = 0; i < 6 && us > 0xFF; ++i )
> {
>     us /= 10;
>     cycleExponent += 1;
> }
> interpolationTime = us;
>
> ecrt_slave_config_sdo8( akd->sc_akd, 0x60C2, 1, interpolationTime ); /* Interpolation time */
> ecrt_slave_config_sdo8( akd->sc_akd, 0x60C2, 2, cycleExponent );     /* Cycle exponent */
>
> PRINT("Configuring PDOs...\n");
> if (ecrt_slave_config_pdos(sc, EC_END, slave_syncs))
> {
>     PRINT("Failed to configure Yaskawa Sigma PDOs.\n");
>     return FALSE;
> }
>
> struct timespec cur_time;
> clock_gettime(CLOCK_REALTIME, &cur_time);
>
> size_t loop_period = globalScanRate_us * 1000;
> if ( loop_period == 0 ) loop_period = 1;
> size_t loop_shift
>     = loop_period - (cur_time.tv_nsec % loop_period);
>
> ecrt_slave_config_dc(
>     sc,
>     0x0300,
>     loop_period,
>     loop_shift,
>     0,
>     0);
>
> For the EL3356, would I then do the same?
>
> KL3356StrainGauge* sg = (KL3356StrainGauge*)slave->instance;
>
> printf( "Begin kl3356_ecConfigure...\n");
>
> //
> // Create the slave configuration
> //
> if (!(sg->sc = ecrt_master_slave_config(
>         master,
>         slavePosDomain, slavePosIndex,  // Bus position
>         vendorId, productCode           // Slave type
>         )))
> {
>     printf(
>         "kl3356_ecConfigure -- Failed to get slave configuration.\n");
>     return FALSE;
> }
>
> //
> // Register startup configuration for the hardware
> //
> ecrt_slave_config_sdo8( sg->sc, 0x1C12, 0, 0 );        /* clear sm pdo 0x1C12 */
> ecrt_slave_config_sdo8( sg->sc, 0x1C13, 0, 0 );        /* clear sm pdo 0x1C13 */
> ecrt_slave_config_sdo16( sg->sc, 0x1C12, 1, 0x1600 );  /* download pdo 1C12 index */
> ecrt_slave_config_sdo8( sg->sc, 0x1C12, 0, 1 );        /* set number of RxPDO */
> ecrt_slave_config_sdo16( sg->sc, 0x1C13, 1, 0x1A00 );  /* download pdo 1C13 index */
> ecrt_slave_config_sdo16( sg->sc, 0x1C13, 2, 0x1A02 );  /* download pdo 1C13 index */
> ecrt_slave_config_sdo8( sg->sc, 0x1C13, 0, 2 );        /* set number of TxPDO */
>
> //
> // Configure the hardware's PDOs
> //
> if (ecrt_slave_config_pdos(sg->sc, EC_END, kl3356_syncs))
> {
>     printf(
>         "kl3356_ecConfigure -- Failed to configure PDOs.\n");
>     return FALSE;
> }
>
> struct timespec cur_time;
> clock_gettime(CLOCK_REALTIME, &cur_time);
>
> size_t loop_period = globalScanRate_us * 1000;
> if ( loop_period == 0 ) loop_period = 1;
> size_t loop_shift
>     = loop_period - (cur_time.tv_nsec % loop_period);
>
> ecrt_slave_config_dc(
>     sg->sc,
>     0x0300,
>     loop_period,
>     loop_shift,
>     0,
>     0);
>
> Thanks!
>
> Thomas C. Bitsky Jr. | Lead Developer
>
> ADC | automateddesign.com <http://automateddesign.com/>
>
> P: 630-783-1150 F: 630-783-1159 M: 630-632-6679
>
>
> *From: *Graeme Foot <Graeme.Foot at touchcut.com>
> *Date: *Friday, February 19, 2016 at 7:24 PM
> *To: *Thomas Bitsky <tbj at automateddesign.com>, "etherlab-users at etherlab.org" <etherlab-users at etherlab.org>
> *Subject: *RE: Distributed Clocks
>
> Hi,
>
> I don’t see in your code calls to ecrt_slave_config_dc().
>
> For the yaskawa drive, during the config stage, I use the following
> calls:
>
> // set interpolation time period (free run mode)
>
> // where 0x60C2 is time in seconds = (0x60C2, 0x01) x
> 10^(0x60C2, 0x02)
>
> // eg period of 1ms:
>
> // (0x60C2, 0x01) = 1
>
> // (0x60C2, 0x02) = -3
>
> // => 1 x 10^(-3) = 0.001s
>
> ecrt_slave_config_sdo8(dev->slaveConfig, 0x60C2, 0x01,
> (uint8_t)g_app.scanTimeMS);
>
> ecrt_slave_config_sdo8(dev->slaveConfig, 0x60C2, 0x02,
> (int8_t)(-3));
>
> // set up the distributed clock
>
> // 0x0000 = free run, 0x0300 = dc
>
> // (Supported DC cycle: 125us to 4ms (every 125us cycle))
>
> ecrt_slave_config_dc(dev->slaveConfig, 0x0300,
> g_app.scanTimeNS, 500000, 0, 0);
>
> 0x60C2 shouldn’t be necessary for dc mode, but I used it before I had
> dc mode working and have never tried it without; it doesn’t do any harm
> to leave it in.
>
> The second value that is being passed to the ecrt_slave_config_dc
> method is written to the ESC register 0x980. The Yaskawa SGDV
> documentation says this value should be 0x0000 for free run mode and
> 0x0300 for dc mode. Other ESCs may require different values.
>
> I’ve never been able to get the EL7041 stepper modules to work in dc
> mode.
>
> Graeme.
>
> *From:* etherlab-users [mailto:etherlab-users-bounces at etherlab.org]
> *On Behalf Of* Thomas Bitsky Jr
> *Sent:* Saturday, 20 February 2016 1:09 p.m.
> *To:* etherlab-users at etherlab.org
> *Subject:* [etherlab-users] Distributed Clocks
>
> Hello.
>
> I’ve been using the EtherCAT master for years with great success, but
> I’m stuck on a problem that I think several people here have solved:
> I can’t implement distributed clocks in my application.
>
> I am having the same problem on two systems I have up and running:
>
> SYSTEM ONE:
>
> EtherLAB Master 1.52, E1000E Driver, Scan Rate 4Khz, Ubuntu Server
> 14.04LTS, RT-PREEMPT 3.12.50-rt68
>
> alias=0, position=0, device=EK1100
>
> alias=0, position=1, device=EL1104
>
> alias=0, position=2, device=EL2004
>
> alias=0, position=3, device=EL9510
>
> alias=0, position=4, device=EL3356
>
> alias=0, position=5, device=Kollmorgen AKD
>
> alias=0, position=6, device=MTS LVDT
>
> SYSTEM TWO:
>
> EtherLAB Master 1.52, E1000E Driver, Scan Rate 8Khz, Ubuntu Server
> 14.04LTS, RT-PREEMPT 3.12.50-rt68
>
> alias=0, position=0, device=EK1100
>
> alias=0, position=1, device=EL3001
>
> alias=0, position=2, device=EL1104
>
> alias=0, position=3, device=EL1104
>
> alias=0, position=4, device=EL1104
>
> alias=0, position=5, device=EL2004
>
> alias=0, position=6, device=EL2004
>
> alias=0, position=7, device=EL9505
>
> alias=0, position=8, device=EL7041
>
> alias=0, position=9, device=EL7041
>
> alias=0, position=10, device=EL7041
>
> alias=0, position=11, device=EL7041
>
> alias=0, position=12, device=EL7041
>
> alias=0, position=13, device=EL7041
>
> alias=0, position=14, device=EK1110
>
> alias=1, position=0, device=SIGMA5-05
>
> alias=2, position=0, device=Yaskawa SIGMA5-05
>
> alias=3, position=0, device=Yaskawa SIGMA5-05
>
> Both of the systems are fully operational. However, for various
> reasons, I need to implement distributed clocks on these systems.
> I’ve never been able to get this to work.
>
> What follows is the code I used for both systems to try this. I read
> through examples on the mailing list, plus the examples, but I’m not
> seeing what I’m doing wrong. The result is always the same: all the
> fieldbus cards go into operation, but the servo drives won’t because
> of “bad configuration.” Take out the distributed clock code, and
> they work fine. I’m getting away with it for now, but I do need
> better clock resolution.
>
> The systems have an LRW domain, then a separate read domain and
> write domain for the servo drive(s), for a total of three domains
> (LRW, read, write). The yaskawa drives necessitate this. The scan
> rate is usually 4 kHz, but I have tried it at both 1 kHz and 8 kHz and
> gotten the same results. Everything about the implementation is
> fairly straightforward. There’s just something fundamental about
> the DC configuration that I’m not understanding.
>
> Almost all the code below is taken right from the examples or the
> message boards (thanks, everybody!). If anyone could tell me where
> I’m going wrong or offer any insights, it would be greatly appreciated.
> For brevity, I tried to narrow it down to the relevant parts, but please
> let me know any additional information or code I can provide.
>
> Thank you in advance,
>
> Tom
>
> **********************************************************
>
> // EtherCAT distributed clock variables
>
> #define DC_FILTER_CNT 1024
> #define SYNC_MASTER_TO_REF 1
>
> static uint64_t dc_start_time_ns = 0LL;
> static uint64_t dc_time_ns = 0;
> static uint8_t  dc_started = 0;
> static int32_t  dc_diff_ns = 0;
> static int32_t  prev_dc_diff_ns = 0;
> static int64_t  dc_diff_total_ns = 0LL;
> static int64_t  dc_delta_total_ns = 0LL;
> static int      dc_filter_idx = 0;
> static int64_t  dc_adjust_ns;
>
> static int64_t  system_time_base = 0LL;
> static uint64_t wakeup_time = 0LL;
> static uint64_t overruns = 0LL;
>
> /** Get the time in ns for the current cpu, adjusted by
>  * system_time_base.
>  *
>  * \attention Rather than calling rt_get_time_ns() directly, all
>  * application time calls should use this method instead.
>  *
>  * \ret The time in ns.
>  */
> uint64_t system_time_ns(void)
> {
>     struct timespec ts;
>     clock_gettime(GLOBAL_CLOCK_TO_USE, &ts);
>     return TIMESPEC2NS(ts);
> }
>
> static
> void sync_distributed_clocks(void)
> {
>     uint32_t ref_time = 0;
>     uint64_t prev_app_time = dc_time_ns;
>
>     dc_time_ns = system_time_ns();
>
>     // set master time in nano-seconds
>     ecrt_master_application_time(master_, dc_time_ns);
>
>     // get reference clock time to synchronize master cycle
>     ecrt_master_reference_clock_time(master_, &ref_time);
>     dc_diff_ns = (uint32_t) prev_app_time - ref_time;
>
>     // call to sync slaves to ref slave
>     ecrt_master_sync_slave_clocks(master_);
> }
>
> /** Return the sign of a number
>  *
>  * ie -1 for -ve value, 0 for 0, +1 for +ve value
>  *
>  * \retval the sign of the value
>  */
> #define sign(val) \
>     ({ typeof (val) _val = (val); \
>        ((_val > 0) - (_val < 0)); })
>
> /*****************************************************************************/
>
> /** Update the master time based on ref slaves time diff
>  *
>  * called after the ethercat frame is sent to avoid time jitter in
>  * sync_distributed_clocks()
>  */
> static unsigned int cycle_ns = 1000000; // 1 millisecond
>
> void update_master_clock(void)
> {
>     // calc drift (via un-normalised time diff)
>     int32_t delta = dc_diff_ns - prev_dc_diff_ns;
>     prev_dc_diff_ns = dc_diff_ns;
>
>     // normalise the time diff
>     dc_diff_ns =
>         ((dc_diff_ns + (cycle_ns / 2)) % cycle_ns) - (cycle_ns / 2);
>
>     // only update if primary master
>     if (dc_started) {
>
>         // add to totals
>         dc_diff_total_ns += dc_diff_ns;
>         dc_delta_total_ns += delta;
>         dc_filter_idx++;
>
>         if (dc_filter_idx >= DC_FILTER_CNT) {
>             // add rounded delta average
>             dc_adjust_ns +=
>                 ((dc_delta_total_ns + (DC_FILTER_CNT / 2)) /
>                 DC_FILTER_CNT);
>
>             // and add adjustment for general diff (to pull in drift)
>             dc_adjust_ns += sign(dc_diff_total_ns / DC_FILTER_CNT);
>
>             // limit crazy numbers (0.1% of std cycle time)
>             if (dc_adjust_ns < -1000) {
>                 dc_adjust_ns = -1000;
>             }
>             if (dc_adjust_ns > 1000) {
>                 dc_adjust_ns = 1000;
>             }
>
>             // reset
>             dc_diff_total_ns = 0LL;
>             dc_delta_total_ns = 0LL;
>             dc_filter_idx = 0;
>         }
>
>         // add cycles adjustment to time base (including a spot adjustment)
>         system_time_base += dc_adjust_ns + sign(dc_diff_ns);
>     }
>     else {
>         dc_started = (dc_diff_ns != 0);
>
>         if (dc_started)
>         {
>             // record the time of this initial cycle
>             dc_start_time_ns = dc_time_ns;
>         }
>     }
> }
>
> struct timespec dcTime_;
>
> int
> ecatMain_process(void* lp)
> {
>     ecrt_master_receive(master_);
>
>     clock_gettime(CLOCK_REALTIME, &dcTime_);
>     ecrt_master_application_time(master_, TIMESPEC2NS(dcTime_));
>     ecrt_master_sync_reference_clock(master_);
>     ecrt_master_sync_slave_clocks(master_);
>
>     ecrt_domain_process(lrwDomainMgr_.domain);
>     ecrt_domain_process(noLrwWriteDomainMgr_.domain);
>     ecrt_domain_process(noLrwReadDomainMgr_.domain);
>
>     … // handle my business
>
>     // write application time to master
>     clock_gettime(CLOCK_REALTIME, &dcTime_);
>     ecrt_master_application_time(master_, TIMESPEC2NS(dcTime_));
>
>     if (sync_ref_counter_)
>     {
>         sync_ref_counter_--;
>     }
>     else
>     {
>         sync_ref_counter_ = 1; // sync every cycle
>         ecrt_master_sync_reference_clock(master_);
>     }
>
>     // send process data
>     ecrt_domain_queue(lrwDomainMgr_.domain);
>     ecrt_domain_queue(noLrwWriteDomainMgr_.domain);
>     ecrt_domain_queue(noLrwReadDomainMgr_.domain);
>
>     // sync distributed clock just before master_send to set
>     // most accurate master clock time
>     sync_distributed_clocks();
>
>     // send EtherCAT data
>     ecrt_master_send(master_);
>
>     // update the master clock
>     // Note: called after ecrt_master_send() to reduce time
>     // jitter in the sync_distributed_clocks() call
>     update_master_clock();
>
>     return 1;
> }
>
> int
> ecatMain_start(void* lp)
> {
>     //
>     // domain regs must end in a null entry
>     //
>     lrwDomainMgr_.domainRegs = realloc(
>         lrwDomainMgr_.domainRegs,
>         sizeof(ec_pdo_entry_reg_t) * (lrwDomainMgr_.size + 1) );
>     memset(
>         &(lrwDomainMgr_.domainRegs[lrwDomainMgr_.size]),
>         0,
>         sizeof(ec_pdo_entry_reg_t) );
>
>     noLrwReadDomainMgr_.domainRegs = realloc(
>         noLrwReadDomainMgr_.domainRegs,
>         sizeof(ec_pdo_entry_reg_t) * (noLrwReadDomainMgr_.size + 1) );
>     memset(
>         &(noLrwReadDomainMgr_.domainRegs[noLrwReadDomainMgr_.size]),
>         0,
>         sizeof(ec_pdo_entry_reg_t) );
>
>     noLrwWriteDomainMgr_.domainRegs = realloc(
>         noLrwWriteDomainMgr_.domainRegs,
>         sizeof(ec_pdo_entry_reg_t) * (noLrwWriteDomainMgr_.size + 1) );
>     memset(
>         &(noLrwWriteDomainMgr_.domainRegs[noLrwWriteDomainMgr_.size]),
>         0,
>         sizeof(ec_pdo_entry_reg_t) );
>
>     //
>     // NOTE: the Output Domain must be registered with
>     // ecrt_domain_reg_pdo_entry_list before the Input Domain otherwise you
>     // will not have any data exchanged even though the drive goes into OP
>     // mode.
>     //
>     PRINT("\nAttempting to register PDOs on WRITE ONLY domain...\n");
>     if (ecrt_domain_reg_pdo_entry_list(
>             noLrwWriteDomainMgr_.domain, noLrwWriteDomainMgr_.domainRegs))
>     {
>         PRINT("WRITE ONLY PDO entry registration failed!\n");
>         return FALSE;
>     }
>
>     PRINT("\nAttempting to register PDOs on READ ONLY domain...\n");
>     if (ecrt_domain_reg_pdo_entry_list(
>             noLrwReadDomainMgr_.domain, noLrwReadDomainMgr_.domainRegs))
>     {
>         PRINT("READ ONLY PDO entry registration failed!\n");
>         return FALSE;
>     }
>
>     //
>     // And now we register the bi-directional domain.
>     //
>     PRINT("\nAttempting to register PDOs on LRW domain...\n");
>     if (ecrt_domain_reg_pdo_entry_list(
>             lrwDomainMgr_.domain, lrwDomainMgr_.domainRegs))
>     {
>         PRINT("LRW PDO entry registration failed!\n");
>         return FALSE;
>     }
>
>     /*
>      * Finishes the configuration phase and prepares for cyclic operation.
>      * This function tells the master that the configuration phase
>      * is finished and the realtime operation will begin.
>      * The function allocates internal memory for the domains and calculates
>      * the logical FMMU addresses for domain members.
>      * It tells the master state machine that the bus configuration is
>      * now to be applied.
>      */
>     PRINT("\nAttempting to activate ECAT master...\n");
>     if (ecrt_master_activate(master_))
>     {
>         PRINT(
>             "%s Failed to activate master!\n",
>             __FUNCTION__ );
>         return FALSE;
>     }
>
>     /*
>      * Returns the domain's process data.
>      */
>     PRINT( "%s getting LRW process data from master.\n", __FUNCTION__ );
>     if (!(lrwDomainMgr_.processData
>             = ecrt_domain_data(lrwDomainMgr_.domain)))
>     {
>         PRINT(
>             "%s set ecProcessData -- domain data is NULL!\n",
>             __FUNCTION__ );
>         return FALSE;
>     }
>
>     if (!(noLrwReadDomainMgr_.processData
>             = ecrt_domain_data(noLrwReadDomainMgr_.domain)))
>     {
>         PRINT(
>             "%s set read ProcessData -- domain data is NULL!\n",
>             __FUNCTION__ );
>         return FALSE;
>     }
>
>     if (!(noLrwWriteDomainMgr_.processData
>             = ecrt_domain_data(noLrwWriteDomainMgr_.domain)))
>     {
>         PRINT(
>             "%s set write ProcessData -- domain data is NULL!\n",
>             __FUNCTION__ );
>         return FALSE;
>     }
>
>     … // blah blah blah
>
>     doScan_ = TRUE;
>
>     PRINT( "%s completed successfully.\n", __FUNCTION__ );
>     return TRUE;
> }
>
>
>
> _______________________________________________
> etherlab-users mailing list
> etherlab-users at etherlab.org
> http://lists.etherlab.org/mailman/listinfo/etherlab-users
>