[etherlab-users] Distributed Clocks

Thomas Bitsky Jr tbj at automateddesign.com
Wed Mar 2 16:56:07 CET 2016


I’m not very familiar with FTRACE. Is this log file telling me that the mutex and spin lock calls are the problem? My understanding is that the “+” indicates calls that took an exceedingly long time.

cyclicte-1572    2....... 64066us+: _mutex_unlock <-rb_simple_write
cyclicte-1572    2....... 64159us+: rt_mutex_unlock <-__do_page_fault
cyclicte-1572    2....11. 64170us+: sub_preempt_count <-migrate_enable
cyclicte-1572    2....... 64190us+: __copy_to_user_ll <-SyS_rt_sigaction
cyclicte-1572    2....... 64476us+: put_pid <-do_fork
  <idle>-0       1dN..2.. 64566us+: update_stats_wait_end <-set_next_entity
cyclicte-1573    3.....11 64689us+: rt_spin_lock
migratio-19      1....... 65201us+: rt_mutex_lock <-cpu_stopper_thread

Thomas C. Bitsky Jr. | Lead Developer
ADC | automateddesign.com <http://automateddesign.com/>
P: 630-783-1150 F: 630-783-1159 M: 630-632-6679

Follow ADC news and media:
Facebook <https://facebook.com/automateddesigncorp> | Twitter <https://twitter.com/ADCSportsLogic> | YouTube <https://www.youtube.com/user/ADCSportsLogic>

On 3/2/16, 8:47 AM, "Thomas Bitsky Jr" <tbj at automateddesign.com> wrote:

>Thanks for the response, Richard.
>
>CyclicTest on the hardware:
>user at core:~$ sudo cyclictest --smp -p95 -m
># /dev/cpu_dma_latency set to 0us
>policy: fifo: loadavg: 0.01 0.04 0.04 1/162 1547
>
>T: 0 ( 1543) P:95 I:1000 C:  26093 Min:      6 Act:   30 Avg:   25 Max:      56
>T: 1 ( 1544) P:95 I:1500 C:  17386 Min:      6 Act:   14 Avg:   23 Max:      66
>T: 2 ( 1545) P:95 I:2000 C:  13036 Min:      5 Act:   28 Avg:   29 Max:      57
>T: 3 ( 1546) P:95 I:2500 C:  10424 Min:      6 Act:   30 Avg:   29 Max:      58
>
>
>
>So, the max latency is definitely in the ballpark of what I was saying last night.
>
>Running cyclictest with a break value fails almost instantly.
>
>root at core:/sys/kernel/debug/tracing# cyclictest --smp -p95 -f -b 200
># /dev/cpu_dma_latency set to 0us
>INFO: debugfs mountpoint: /sys/kernel/debug/tracing/
>policy: fifo: loadavg: 0.00 0.01 0.05 3/165 1703
>
>T: 0 ( 1700) P:95 I:1000 C:      0 Min:1000000 Act:    0 Avg:    0 Max:       0
>T: 1 ( 1701) P:95 I:1500 C:      0 Min:1000000 Act:    0 Avg:    0 Max:       0
>T: 2 (    0) P:95 I:2000 C:      0 Min:1000000 Act:    0 Avg:    0 Max:       0
>T: 3 ( 1703) P:95 I:2500 C:      0 Min:1000000 Act:    0 Avg:    0 Max:       0
># Thread Ids: 01700 01701 01702 01703
># Break thread: 1702
># Break value: 217
>
>
>
>[snip]
>First things first: when you are running RT proggies, you need a stable 
>clock source and a preemptable kernel!
>[/snip]
>
>
>I am using RT Preempt:
># uname -a
>Linux core 3.12.50-rt68 #1 SMP PREEMPT RT Mon Nov 23 18:17:14 CST 2015 i686 i686 i686 GNU/Linux
>
>
>Thanks!
>Thomas C. Bitsky Jr. | Lead Developer
>ADC | automateddesign.com <http://automateddesign.com/>
>P: 630-783-1150 F: 630-783-1159 M: 630-632-6679
>
>Follow ADC news and media:
>Facebook <https://facebook.com/automateddesigncorp> | Twitter <https://twitter.com/ADCSportsLogic> | YouTube <https://www.youtube.com/user/ADCSportsLogic>
>
>
>
>
>
>
>
>
>On 3/2/16, 5:30 AM, "etherlab-users on behalf of Richard Hacker" <etherlab-users-bounces at etherlab.org on behalf of ha at igh.de> wrote:
>
>>Hi John,
>>
>>Skipped frames are either a timing or a hardware problem (faulty
>>EtherCAT slaves, cabling, noise). Some slaves don't care; DC slaves puke!
>>
>>First things first: when you are running RT proggies, you need a stable
>>clock source and a preemptible kernel! Even more so when you're running
>>at a 250us cycle time. Your jitter of 50us is roughly 20% of your cycle
>>time, definitely not a negligible fraction! So tackle that first.
>>
>>Download cyclictest and test that first:
>>https://rt.wiki.kernel.org/index.php/Cyclictest
>>
>>Once your cyclic test is stable (~10us max jitter), you may start 
>>looking for delays in your code.
>>
>>Instrument your loop and find out what is causing delays. Comment out 
>>large sections of code to start from a known good state (maybe doing 
>>EtherCAT IO exclusively and NOTHING else, no GUI, signal processing, 
>>etc) and reinsert code slowly to find the error.
>>
>>Do you have ANY system calls in your RT cycle, like socket(), read() or
>>write()? Do remember that system calls (including those to the master)
>>may cause page faults that introduce delays. Only a very small subset of
>>known and documented system calls can be used for RT. Notably, read() and
>>write() on pipes, shared memory (once set up, of course) and semaphores are OK.
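>>
>>For example, a minimal sketch of the usual precautions (lock all memory
>>and pre-fault the stack before entering the cyclic loop; the names here
>>are illustrative, not part of the master API):
>>
>>#include <sys/mman.h>
>>#include <stdio.h>
>>#include <string.h>
>>
>>#define PREFAULT_STACK_SIZE (512 * 1024)
>>
>>static void prepare_rt_memory(void)
>>{
>>    unsigned char dummy[PREFAULT_STACK_SIZE];
>>
>>    /* Lock current and future pages into RAM so they cannot be paged out. */
>>    if (mlockall(MCL_CURRENT | MCL_FUTURE) == -1)
>>        perror("mlockall");
>>
>>    /* Touch the stack once so it is already mapped when the RT loop runs. */
>>    memset(dummy, 0, sizeof(dummy));
>>}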
>>
>>BTW: needless to say, we have paid particular attention to ensuring that
>>the master does not cause any page faults or other delays once started in
>>cyclic mode ;)
>>
>>Good luck!
>>Richard
>>
>>On 02.03.2016 05:51, Thomas Bitsky Jr wrote:
>>>
>>> I’ve been continuing to work on this, but have had limited success.
>>> While a distributed clock is technically running, it’s caused a few
>>> other problems that I’ve been unable to correct. I think my main problem
>>> stems from this:
>>>
>>> [36524.681778] EtherCAT 0: Domain 0: Working counter changed to 10/10.
>>> [36524.681787] EtherCAT 0: Domain 2: Working counter changed to 1/1.
>>> [36524.681792] EtherCAT 0: Domain 1: Working counter changed to 1/1.
>>> [36525.858760] EtherCAT 0: Domain 0: Working counter changed to 0/10.
>>> [36525.858810] EtherCAT 0: Domain 2: Working counter changed to 0/1.
>>> [36525.858827] EtherCAT 0: Domain 1: Working counter changed to 0/1.
>>> [36526.203067] EtherCAT WARNING: Datagram f185d88c (domain0-0-main) was
>>> SKIPPED 2 times.
>>> [36526.203099] EtherCAT WARNING: Datagram f185d90c (domain2-28-main) was
>>> SKIPPED 2 times.
>>> [36526.203104] EtherCAT WARNING: Datagram f185d28c (domain1-22-main) was
>>> SKIPPED 2 times.
>>> [36526.743379] EtherCAT WARNING 0: 12 datagrams UNMATCHED!
>>> [36526.863556] EtherCAT 0: Domain 0: 5 working counter changes - now 10/10.
>>> [36526.863566] EtherCAT 0: Domain 2: 5 working counter changes - now 1/1.
>>> [36526.863572] EtherCAT 0: Domain 1: 5 working counter changes - now 1/1.
>>>
>>> … and on and on and on…
>>>
>>> The AKD servo drive I’m using will run fine, no warnings, then suddenly
>>> drop to an F125 fault and shut off. That is a frame synchronization
>>> error. Basically, it’s saying that the sync0 frame isn’t received at a
>>> consistent enough rate, so it faults out.
>>>
>>> My scan rate is 250 microseconds, and I admit there is jitter. It can
>>> be as much as +/- 50 microseconds, though I’m not sure why. The
>>> "ethercat master" command reports a steady 4000 frames/sec, but the
>>> scoping part of my project records a timestamp, and I am seeing the +/-
>>> 50 microseconds.
>>>
>>> My timing function is straight out of the EtherCAT master examples and
>>> is also similar to methods I’ve seen in other cyclic task projects. The
>>> whole cyclic task is below. Can anyone see what idiotic thing I must be
>>> doing to get unmatched datagrams?
>>>
>>> #define TIMESPEC_ADD_NS(TS, NS)\
>>>     (TS).tv_nsec += (NS);\
>>>     while ( (TS).tv_nsec >= NANOS_PER_SEC ){\
>>>         (TS).tv_nsec -= NANOS_PER_SEC;\
>>>         (TS).tv_sec++; }
>>>
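>>> /* Nanoseconds since the EtherCAT DC epoch (2000-01-01): 946684800 is the
>>>  * number of seconds from the Unix epoch (1970-01-01) to 2000-01-01. */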
>>> #define TIMESPEC2NSEPOCH2000(T) \
>>>     ((uint64_t) (((T).tv_sec - 946684800ULL) * 1000000000ULL) + (T).tv_nsec)
>>>
>>> #define TON struct timespec
>>> #define TON_ENDTIME(MS)\
>>>     time_add_ns((MS) * NANOS_PER_MILLISEC)
>>>
>>>
>>> static TON clockSyncTon_;
>>>
>>>
>>> int
>>> TON_ISDONE( struct timespec ts )
>>> {
>>>     struct timespec now;
>>>     clock_gettime(CLOCK_MONOTONIC, &now);
>>>     if ( now.tv_sec > ts.tv_sec )
>>>         return 1;
>>>     else if ( now.tv_sec == ts.tv_sec
>>>               && now.tv_nsec >= ts.tv_nsec )
>>>         return 1;
>>>     else
>>>         return 0;
>>> }
>>>
>>>
>>> static bool
>>> wait_period(RtaiMain* inst)
>>> {
>>>     int rc;
>>>     bool done = false;
>>>     while ( !done && inst->doScan && runAll_ )
>>>     {
>>>         rc = clock_nanosleep(CLOCK_MONOTONIC,
>>>                              TIMER_ABSTIME,
>>>                              &inst->wakeupTime,
>>>                              NULL );
>>>
>>>         if ( rc == EFAULT )
>>>         {
>>>             return false;
>>>         }
>>>         else if ( rc == EINTR )
>>>         {
>>>             continue;
>>>         }
>>>         else if ( rc == EINVAL )
>>>         {
>>>             return false;
>>>         }
>>>         else
>>>         {
>>>             done = true;
>>>         }
>>>     }
>>>     TIMESPEC_ADD_NS(inst->wakeupTime, inst->cycleNs);
>>>     return true;
>>> }
>>>
>>>
>>> static void
>>> cyclic_task(RtaiMain* inst)
>>> {
>>>     clock_gettime(CLOCK_MONOTONIC, &(inst->wakeupTime));
>>>     /* start after one second */
>>>     inst->wakeupTime.tv_sec++;
>>>     wait_period(inst);
>>>
>>>     while (runAll_ && inst->doScan)
>>>     {
>>>         //
>>>         // Trigger Fieldbus RX here.
>>>         //
>>>         ecrt_master_receive(master_);
>>>
>>>         // record the time we received the data so other parts of the
>>>         // program have an accurate time reading
>>>         globalTickTimeNs = ton_get_ns();
>>>
>>>         ecrt_domain_process(lrwDomainMgr_.domain);
>>>         ecrt_domain_process(noLrwWriteDomainMgr_.domain);
>>>         ecrt_domain_process(noLrwReadDomainMgr_.domain);
>>>
>>>         if (counter_)
>>>         {
>>>             counter_--;
>>>         }
>>>         else
>>>         {
>>>             counter_ = 4000;
>>>
>>>             // check for master state
>>>             check_master_state();
>>>         }
>>>
>>>         … tick sub systems
>>>
>>>         //
>>>         // Trigger Fieldbus TX. This should be the last step.
>>>         //
>>>         ecrt_domain_queue(lrwDomainMgr_.domain);
>>>         ecrt_domain_queue(noLrwWriteDomainMgr_.domain);
>>>         ecrt_domain_queue(noLrwReadDomainMgr_.domain);
>>>
>>>         clock_gettime(CLOCK_REALTIME, &dcTime_ );
>>>         ecrt_master_application_time(
>>>             master_,
>>>             TIMESPEC2NSEPOCH2000(dcTime_) );
>>>
>>>         if ( TON_ISDONE(clockSyncTon_) )
>>>         {
>>>             ecrt_master_sync_reference_clock(master_);
>>>             clockSyncTon_ = TON_ENDTIME(10); // milliseconds
>>>         }
>>>         ecrt_master_sync_slave_clocks(master_);
>>>
>>>         // send EtherCAT data
>>>         ecrt_master_send(master_);
>>>
>>>         if ( !wait_period(inst) )
>>>         {
>>>             PRINT( "%s Error with waiting! Stopping cyclic_task.\n",
>>>                    __FUNCTION__ );
>>>             inst->doScan = false;
>>>         }
>>>     }
>>> }
>>>
>>>
>>> From: Graeme Foot <Graeme.Foot at touchcut.com
>>> <mailto:Graeme.Foot at touchcut.com>>
>>> Date: Sunday, February 21, 2016 at 4:07 PM
>>> To: Thomas Bitsky <tbj at automateddesign.com <mailto:tbj at automateddesign.com>>
>>> Cc: "etherlab-users at etherlab.org <mailto:etherlab-users at etherlab.org>"
>>> <etherlab-users at etherlab.org <mailto:etherlab-users at etherlab.org>>
>>> Subject: RE: Distributed Clocks
>>>
>>> Hi,
>>>
>>> I build and patch against revision 2526, so I don’t know what patches /
>>> fixes have made it through to the latest release.  However, for my
>>> revision I need fixes for reference clock selection and dc
>>> synchronization issues.
>>>
>>> I’ve attached the dc-related patches I use, but these are applied after
>>> some other patches, so you may get some conflicts or offsets.
>>>
>>> *01 - Distributed Clock fixes and helpers.patch*
>>>
>>> This sorts out some ref slave issues, allowing a user-selected ref
>>> slave.  It also adds some helper functions:
>>>
>>> - ecrt_master_setup_domain_memory() : this allows me to set up the
>>> domain memory and do stuff with it before calling ecrt_master_activate()
>>> (for user space apps)
>>>
>>> - ecrt_master_deactivate_slaves() : this lets me deactivate the slaves
>>> while still in realtime to avoid the slaves getting some shutdown sync
>>> errors
>>>
>>> *02 - Distributed Clock fixes from Jun Yuan - dc sync issues.patch*
>>>
>>> This sorts out some timing issues to do with slave dc syncing.  Without
>>> it, slaves can start syncing from one cycle out, causing a large syncing
>>> time overhead, one slave at a time.
>>>
>>> Regards,
>>>
>>> Graeme.
>>>
>>> *From:*Thomas Bitsky Jr [mailto:tbj at automateddesign.com]
>>> *Sent:* Sunday, 21 February 2016 10:27 a.m.
>>> *To:* Graeme Foot
>>> *Cc:* etherlab-users at etherlab.org <mailto:etherlab-users at etherlab.org>
>>> *Subject:* Re: Distributed Clocks
>>>
>>> Graeme,
>>>
>>> Thank you so much for the detailed response. I'm away from my computer
>>> right now, so I can't try out your advice, but I noticed you asked about
>>> patches. I am not running any patches; which should I get?
>>>
>>> Thanks!
>>> Thomas Bitsky Jr
>>>
>>> On Feb 20, 2016, at 3:04 PM, Graeme Foot <Graeme.Foot at touchcut.com
>>> <mailto:Graeme.Foot at touchcut.com>> wrote:
>>>
>>>     Hi,
>>>
>>>     The slave clocks get synced via the distributed clock system using
>>>     the master methods: ecrt_master_reference_clock_time(),
>>>     ecrt_master_sync_slave_clocks(), ecrt_master_application_time(),
>>>     ecrt_master_sync_reference_clock().
>>>
>>>     However each individual slave can choose (if it is capable of it)
>>>     whether to synchronize its reading and writing of data to a
>>>     particular point in time within the dc cycle.  If that slave does
>>>     not choose to do so then the reading and writing of the data occurs
>>>     at the time the EtherCAT frame goes past, resulting in a progressive
>>>     update and application of data as the frame progresses through all
>>>     of the slaves.
>>>
>>>     If a slave is registered to use the dc clock then it caches the
>>>     frame data until the sync0 interrupt so in theory all dc slaves
>>>     apply the data at the same time.  It also means you don’t have to
>>>     worry about jitter as to the time the frame is sent over the wire.
>>>     The only thing is to choose a good sync0 time to ensure your frames
>>>     are always sent out and have reached all of the slaves before the
>>>     sync0 time occurs, and that the next frame is not sent out before
>>>     the previous sync0 time occurs.
>>>
>>>     In my application my cycle time is 1000us.  I choose a sync0 time of
>>>     500us.  I also send my frame as close as possible to the beginning
>>>     of the cycle. This gives the frame up to half the cycle time to
>>>     reach all of the slaves and then the other half of the cycle for the
>>>     frame to return in time for the master receive call.  I could choose
>>>     a sync0 time later than 500us but I want it processed as soon as
>>>     possible after the frame is received while still allowing for a bus
>>>     with a large number of terminals.
>>>
>>>     So below, where you are calculating loop_shift based on the current
>>>     time, is wrong.  Just choose a time within the dc cycle and use that
>>>     value for all slaves.  Note: the beginning of the dc cycle is in
>>>     phase with the first call to ecrt_master_application_time(), so all
>>>     of your realtime looping should also be in phase with that initial time.
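>>>
>>>     For example (a sketch only, using the standard ecrt_slave_config_dc()
>>>     signature; the 250us cycle and 125us shift here are illustrative
>>>     values, not a recommendation):
>>>
>>>     uint32_t cycle_ns       = 250000; /* 250us dc/application cycle */
>>>     int32_t  sync0_shift_ns = 125000; /* fixed point within the cycle */
>>>
>>>     /* Same fixed shift for every dc slave; do NOT derive it from
>>>      * clock_gettime() at configuration time. */
>>>     ecrt_slave_config_dc(sc, 0x0300, cycle_ns, sync0_shift_ns, 0, 0);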
>>>
>>>     Note, when selecting a slave to be the DC reference slave you should
>>>     generally choose the first slave on your bus, regardless of whether
>>>     it will be (or can be) registered to use the dc synchronization.  At
>>>     the very least it must be before, or be the, first slave that will
>>>     be registered as a dc slave.
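>>>
>>>     If your master version or patches support explicitly selecting the
>>>     reference slave, a minimal sketch (sc_first being the slave config of
>>>     the first slave on the bus) would be:
>>>
>>>     /* Availability of this call depends on your master version/patches. */
>>>     ecrt_master_select_reference_clock(master, sc_first);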
>>>
>>>     Also note that some slaves clocks are not very stable and shouldn’t
>>>     be used as the DC reference slave.  My original testing was on a
>>>     Beckhoff CX1020 with a CX1100-0004, this could not be used as a
>>>     reference slave as its clock was not stable.
>>>
>>>     I see you are configuring the slaves via ecrt_slave_config_*() then
>>>     calling ecrt_slave_config_pdos() at the end.  If you call
>>>     ecrt_slave_config_pdos() at the beginning you don’t need all the
>>>     other config calls.  However, I hear AKD drives and some other slaves
>>>     prefer explicit slave config calls, but most slaves are happy with
>>>     just the ecrt_slave_config_reg_pdo_entry() methods.
>>>
>>>     This also leads to another issue.  One of the configured PDO items
>>>     is the “mode of operation” value (0x6060 0x00).  You are also
>>>     configuring this with: ecrt_slave_config_sdo8( sc, 0x6060, 0, 8 ).
>>>     This value should instead be set via the PDO.  Use
>>>     ecrt_slave_config_reg_pdo_entry() to get the offset to the value and
>>>     set the value there.
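>>>
>>>     A sketch of that approach (offMode, domain1 and domain1_pd are
>>>     illustrative names, not from your code):
>>>
>>>     /* At config time: register 0x6060:00 and get its process-data offset. */
>>>     int offMode = ecrt_slave_config_reg_pdo_entry(
>>>         sc, 0x6060, 0x00, domain1, NULL);
>>>
>>>     /* In the cyclic task: write the mode (8 = cyclic synchronous position). */
>>>     EC_WRITE_S8(domain1_pd + offMode, 8);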
>>>
>>>     Sorry if that was a bit long, but DCs are not an easy topic to get
>>>     your head around.  Here’s a bit of a summary:
>>>
>>>     - You can choose which slaves get registered with
>>>     ecrt_slave_config_dc(). But each slave you want synced must get its
>>>     own call to ecrt_slave_config_dc().
>>>
>>>     - If your yaskawa drives get to OP without ecrt_slave_config_dc(),
>>>     then they should get to OP with ecrt_slave_config_dc().
>>>
>>>     - The yaskawa drives require a very stable reference slave time,
>>>     which is why we sync the EtherCAT master to the reference slave
>>>     rather than the other way around.
>>>
>>>     And some other comments:
>>>
>>>     - Never use an EL9010 endcap module.  These break the distributed
>>>     clock calculations.  I don’t think they are available anymore though.
>>>
>>>     - There are some patches out there that fix various DC clock issues;
>>>     are you using any of these?
>>>
>>>     Regards,
>>>
>>>     Graeme.
>>>
>>>     *From:*Thomas Bitsky Jr [mailto:tbj at automateddesign.com]
>>>     *Sent:* Sunday, 21 February 2016 7:15 a.m.
>>>     *To:* Graeme Foot; etherlab-users at etherlab.org
>>>     <mailto:etherlab-users at etherlab.org>
>>>     *Subject:* Re: Distributed Clocks
>>>
>>>     [snip]
>>>
>>>     I’ve never been able to get the EL7041 stepper modules to work in dc
>>>     mode.
>>>
>>>     [/snip]
>>>
>>>     Is it all or nothing? I need the servo drives, the LVDT and the
>>>     EL3356 tied to a distributed clock. The EL7041 is optional for me.
>>>
>>>     [snip]
>>>
>>>     I don’t see in your code calls to ecrt_slave_config_dc().
>>>
>>>     For the yaskawa drive, during the config stage, I use the following
>>>     calls…
>>>
>>>     [/snip]
>>>
>>>     Forgot to put that part; my bad. This is what I had for the
>>>     Yaskawa/AKD, although I was only doing it to one of the drives. I
>>>     thought I was supposed to set up one distributed clock, and it
>>>     became the master and handled the rest. Am I supposed to do this for
>>>     all the cards, or do I select?
>>>
>>>     Yaskawa(AKD drive code is pretty much the same):
>>>
>>>     if (!(sc = ecrt_master_slave_config(
>>>                    master,
>>>                    slavePosDomain,
>>>                    slavePosIndex,
>>>                    vendorId, productCode)))
>>>     {
>>>         return FALSE;
>>>     }
>>>
>>>     ecrt_slave_config_sdo8( sc, 0x1C12, 0, 0 ); /* clear sm pdo 0x1c12 */
>>>     ecrt_slave_config_sdo8( sc, 0x1C13, 0, 0 ); /* clear sm pdo 0x1c13 */
>>>
>>>     ecrt_slave_config_sdo8( sc, 0x1A00, 0, 0 ); /* clear TxPDO0 */
>>>     ecrt_slave_config_sdo8( sc, 0x1A01, 0, 0 ); /* clear TxPDO1 */
>>>     ecrt_slave_config_sdo8( sc, 0x1A02, 0, 0 ); /* clear TxPDO2 */
>>>     ecrt_slave_config_sdo8( sc, 0x1A03, 0, 0 ); /* clear TxPDO3 */
>>>
>>>     ecrt_slave_config_sdo8( sc, 0x1600, 0, 0 ); /* clear RxPDO 0x1600 */
>>>     ecrt_slave_config_sdo8( sc, 0x1601, 0, 0 ); /* clear RxPDO 0x1601 */
>>>     ecrt_slave_config_sdo8( sc, 0x1602, 0, 0 ); /* clear RxPDO 0x1602 */
>>>     ecrt_slave_config_sdo8( sc, 0x1603, 0, 0 ); /* clear RxPDO 0x1603 */
>>>
>>>     ecrt_slave_config_sdo8( sc, 0x1A00, 0, 0 );           /* clear TxPDO0 */
>>>     ecrt_slave_config_sdo32( sc, 0x1A00, 1, 0x60410010 ); // Status word
>>>     ecrt_slave_config_sdo32( sc, 0x1A00, 2, 0x60640020 ); // Position actual value, per encoder
>>>     ecrt_slave_config_sdo32( sc, 0x1A00, 3, 0x60770010 ); // Torque, actual value
>>>     ecrt_slave_config_sdo32( sc, 0x1A00, 4, 0x60F40020 ); // Following error, actual value
>>>     ecrt_slave_config_sdo32( sc, 0x1A00, 5, 0x60610008 ); // Modes of operation display
>>>     ecrt_slave_config_sdo32( sc, 0x1A00, 6, 0x00000008 ); // GAP
>>>     ecrt_slave_config_sdo32( sc, 0x1A00, 7, 0x60B90010 ); // Touch probe status
>>>     ecrt_slave_config_sdo32( sc, 0x1A00, 8, 0x60BA0020 ); // Touch probe 1 position
>>>     ecrt_slave_config_sdo8( sc, 0x1A00, 0, 8 );           /* pdo entries */
>>>
>>>     ecrt_slave_config_sdo8( sc, 0x1A01, 0, 0 );           /* clear TxPDO1 */
>>>     ecrt_slave_config_sdo32( sc, 0x1A01, 1, 0x60410010 ); // Status word
>>>     ecrt_slave_config_sdo32( sc, 0x1A01, 2, 0x60640020 ); // Position actual value, per encoder
>>>     ecrt_slave_config_sdo8( sc, 0x1A01, 0, 2 );           /* pdo entries */
>>>
>>>     ecrt_slave_config_sdo8( sc, 0x1A02, 0, 0 );           /* clear TxPDO2 */
>>>     ecrt_slave_config_sdo32( sc, 0x1A02, 1, 0x60410010 ); // Status word
>>>     ecrt_slave_config_sdo32( sc, 0x1A02, 2, 0x60640020 ); // Position actual value, per encoder
>>>     ecrt_slave_config_sdo8( sc, 0x1A02, 0, 2 );           /* pdo entries */
>>>
>>>     ecrt_slave_config_sdo8( sc, 0x1A03, 0, 0 );           /* clear TxPDO3 */
>>>     ecrt_slave_config_sdo32( sc, 0x1A03, 1, 0x60410010 ); // Status word
>>>     ecrt_slave_config_sdo32( sc, 0x1A03, 2, 0x60640020 ); // Position actual value, per encoder
>>>     ecrt_slave_config_sdo32( sc, 0x1A03, 3, 0x60770010 ); // Torque, actual value
>>>     ecrt_slave_config_sdo8( sc, 0x1A03, 0, 3 );           /* pdo entries */
>>>
>>>     ecrt_slave_config_sdo8( sc, 0x1600, 0, 0 );           /* clear entries */
>>>     ecrt_slave_config_sdo32( sc, 0x1600, 1, 0x60400010 ); /* control word */
>>>     ecrt_slave_config_sdo32( sc, 0x1600, 2, 0x607A0020 ); /* target position */
>>>     ecrt_slave_config_sdo32( sc, 0x1600, 3, 0x60FF0020 ); /* target velocity */
>>>     ecrt_slave_config_sdo32( sc, 0x1600, 4, 0x60710010 ); /* target torque */
>>>     ecrt_slave_config_sdo32( sc, 0x1600, 5, 0x60720010 ); /* max torque */
>>>     ecrt_slave_config_sdo32( sc, 0x1600, 6, 0x60600008 ); /* modes of operation */
>>>     ecrt_slave_config_sdo32( sc, 0x1600, 7, 0x00000008 ); /* gap */
>>>     ecrt_slave_config_sdo32( sc, 0x1600, 8, 0x60B80010 ); /* touch probe function */
>>>     ecrt_slave_config_sdo8( sc, 0x1600, 0, 8 );           /* pdo entries */
>>>
>>>     ecrt_slave_config_sdo8( sc, 0x1601, 0, 0 );           /* clear entries */
>>>     ecrt_slave_config_sdo32( sc, 0x1601, 1, 0x60400010 ); /* control word */
>>>     ecrt_slave_config_sdo32( sc, 0x1601, 2, 0x607A0020 ); /* target position */
>>>     ecrt_slave_config_sdo8( sc, 0x1601, 0, 2 );           /* pdo entries */
>>>
>>>     ecrt_slave_config_sdo8( sc, 0x1602, 0, 0 );           /* clear entries */
>>>     ecrt_slave_config_sdo32( sc, 0x1602, 1, 0x60400010 ); /* control word */
>>>     ecrt_slave_config_sdo32( sc, 0x1602, 2, 0x60FF0020 ); /* target velocity */
>>>     ecrt_slave_config_sdo8( sc, 0x1602, 0, 2 );           /* pdo entries */
>>>
>>>     ecrt_slave_config_sdo8( sc, 0x1603, 0, 0 );           /* clear entries */
>>>     ecrt_slave_config_sdo32( sc, 0x1603, 1, 0x60400010 ); /* control word */
>>>     ecrt_slave_config_sdo32( sc, 0x1603, 2, 0x60710020 ); /* target torque */
>>>     ecrt_slave_config_sdo8( sc, 0x1603, 0, 2 );           /* pdo entries */
>>>
>>>     ecrt_slave_config_sdo16( sc, 0x1C12, 1, 0x1601 ); /* download pdo 1C12 index */
>>>     ecrt_slave_config_sdo8( sc, 0x1C12, 0, 1 );       /* set number of RxPDO */
>>>     ecrt_slave_config_sdo16( sc, 0x1C13, 1, 0x1A01 ); /* download pdo 1C13 index */
>>>     ecrt_slave_config_sdo8( sc, 0x1C13, 0, 1 );       /* set number of TxPDO */
>>>
>>>     // OPMODE
>>>     // Yaskawa recommends 8
>>>     ecrt_slave_config_sdo8( sc, 0x6060, 0, 8 );
>>>
>>>     unsigned char interpolationTime = 0xFF;
>>>     unsigned char cycleExponent = 0xFA; // -6 as int8: units of 10^-6 s (microseconds)
>>>
>>>     // globalScanRate_us equals either 250 or 125.
>>>     unsigned int us = globalScanRate_us;
>>>     size_t i;
>>>     for ( i = 0; i < 6 && us > 0xFF; ++i )
>>>     {
>>>         us /= 10;
>>>         cycleExponent += 1;
>>>     }
>>>     interpolationTime = us;
>>>
>>>     ecrt_slave_config_sdo8( akd->sc_akd, 0x60C2, 1, interpolationTime ); /* Interpolation time */
>>>     ecrt_slave_config_sdo8( akd->sc_akd, 0x60C2, 2, cycleExponent );     /* Cycle exponent */
>>>
>>>          PRINT("Configuring PDOs...\n");
>>>
>>>          if (ecrt_slave_config_pdos(sc, EC_END, slave_syncs))
>>>
>>>     {
>>>
>>>     PRINT("Failed to configure Yaskawa Sigma PDOs.\n");
>>>
>>>              return FALSE;
>>>
>>>          }
>>>
>>>     struct timespec cur_time;
>>>     clock_gettime(CLOCK_REALTIME, &cur_time);
>>>
>>>     size_t loop_period = globalScanRate_us * 1000;
>>>     if ( loop_period == 0 ) loop_period = 1;
>>>     size_t loop_shift = loop_period - (cur_time.tv_nsec % loop_period);
>>>
>>>     ecrt_slave_config_dc(
>>>         sc,
>>>         0x0300,
>>>         loop_period,
>>>         loop_shift,
>>>         0,
>>>         0);
>>>
>>>     For the EL3356, would I then do the same?
>>>
>>>     KL3356StrainGauge* sg = (KL3356StrainGauge*)slave->instance;
>>>
>>>     printf( "Begin kl3356_ecConfigure...\n");
>>>
>>>     //
>>>     // Create the slave configuration
>>>     //
>>>     if (!(sg->sc = ecrt_master_slave_config(
>>>                        master,
>>>                        slavePosDomain, slavePosIndex, // Bus position
>>>                        vendorId, productCode          // Slave type
>>>                        )))
>>>     {
>>>         printf(
>>>             "kl3356_ecConfigure -- Failed to get slave configuration.\n");
>>>         return FALSE;
>>>     }
>>>
>>>     //
>>>     // Register startup configuration for the hardware
>>>     //
>>>     ecrt_slave_config_sdo8( sg->sc, 0x1C12, 0, 0 );       /* clear sm pdo 0x1c12 */
>>>     ecrt_slave_config_sdo8( sg->sc, 0x1C13, 0, 0 );       /* clear sm pdo 0x1c13 */
>>>     ecrt_slave_config_sdo16( sg->sc, 0x1C12, 1, 0x1600 ); /* download pdo 1C12 index */
>>>     ecrt_slave_config_sdo8( sg->sc, 0x1C12, 0, 1 );       /* set number of RxPDO */
>>>     ecrt_slave_config_sdo16( sg->sc, 0x1C13, 1, 0x1A00 ); /* download pdo 1C13 index */
>>>     ecrt_slave_config_sdo16( sg->sc, 0x1C13, 2, 0x1A02 ); /* download pdo 1C13 index */
>>>     ecrt_slave_config_sdo8( sg->sc, 0x1C13, 0, 2 );       /* set number of TxPDO */
>>>
>>>     //
>>>     // Configure the hardware's PDOs
>>>     //
>>>     if (ecrt_slave_config_pdos(sg->sc, EC_END, kl3356_syncs))
>>>     {
>>>         printf(
>>>             "kl3356_ecConfigure -- Failed to configure PDOs.\n");
>>>         return FALSE;
>>>     }
>>>
>>>     struct timespec cur_time;
>>>     clock_gettime(CLOCK_REALTIME, &cur_time);
>>>
>>>     size_t loop_period = globalScanRate_us * 1000;
>>>     if ( loop_period == 0 ) loop_period = 1;
>>>     size_t loop_shift = loop_period - (cur_time.tv_nsec % loop_period);
>>>
>>>     ecrt_slave_config_dc(
>>>         sg->sc,
>>>         0x0300,
>>>         loop_period,
>>>         loop_shift,
>>>         0,
>>>         0);
>>>
>>>     Thanks!
>>>
>>>     Thomas C. Bitsky Jr. | Lead Developer
>>>
>>>     ADC | automateddesign.com <http://automateddesign.com/>
>>>
>>>     P: 630-783-1150 F: 630-783-1159 M: 630-632-6679
>>>
>>>     Follow ADC news and media:
>>>
>>>     Facebook <https://facebook.com/automateddesigncorp> | Twitter
>>>     <https://twitter.com/ADCSportsLogic> | YouTube
>>>     <https://www.youtube.com/user/ADCSportsLogic>
>>>
>>>     *From: *Graeme Foot <Graeme.Foot at touchcut.com
>>>     <mailto:Graeme.Foot at touchcut.com>>
>>>     *Date: *Friday, February 19, 2016 at 7:24 PM
>>>     *To: *Thomas Bitsky <tbj at automateddesign.com
>>>     <mailto:tbj at automateddesign.com>>, "etherlab-users at etherlab.org
>>>     <mailto:etherlab-users at etherlab.org>" <etherlab-users at etherlab.org
>>>     <mailto:etherlab-users at etherlab.org>>
>>>     *Subject: *RE: Distributed Clocks
>>>
>>>     Hi,
>>>
>>>     I don’t see in your code calls to ecrt_slave_config_dc().
>>>
>>>     For the yaskawa drive, during the config stage, I use the following
>>>     calls:
>>>
>>>     // set interpolation time period (free run mode)
>>>     // where 0x60C2 is time in seconds = (0x60C2, 0x01) x 10^(0x60C2, 0x02)
>>>     // eg period of 1ms:
>>>     //   (0x60C2, 0x01) = 1
>>>     //   (0x60C2, 0x02) = -3
>>>     // => 1 x 10^(-3) = 0.001s
>>>     ecrt_slave_config_sdo8(dev->slaveConfig, 0x60C2, 0x01,
>>>                            (uint8_t)g_app.scanTimeMS);
>>>     ecrt_slave_config_sdo8(dev->slaveConfig, 0x60C2, 0x02,
>>>                            (int8_t)(-3));
>>>
>>>     // set up the distributed clock
>>>     // 0x0000 = free run, 0x0300 = dc
>>>     // (Supported DC cycle: 125us to 4ms (every 125us cycle))
>>>     ecrt_slave_config_dc(dev->slaveConfig, 0x0300,
>>>                          g_app.scanTimeNS, 500000, 0, 0);
>>>
>>>     0x60C2 shouldn’t be necessary for dc mode, but I used it before I
>>>     had dc mode working and have never tried without it; it doesn’t
>>>     harm anything having it in.
>>>
>>>     The second value passed to the ecrt_slave_config_dc() method is
>>>     written to the ESC register 0x980.  The Yaskawa SGDV doco says this
>>>     value should be 0x0000 for free run mode and 0x0300 for dc mode.
>>>     Other ESCs may require different values.
>>>
>>>     I’ve never been able to get the EL7041 stepper modules to work in dc
>>>     mode.
>>>
>>>     Graeme.
>>>
>>>     *From:*etherlab-users [mailto:etherlab-users-bounces at etherlab.org]
>>>     *On Behalf Of *Thomas Bitsky Jr
>>>     *Sent:* Saturday, 20 February 2016 1:09 p.m.
>>>     *To:* etherlab-users at etherlab.org <mailto:etherlab-users at etherlab.org>
>>>     *Subject:* [etherlab-users] Distributed Clocks
>>>
>>>     Hello.
>>>
>>>     I’ve been using the EtherCAT master for years to great success, but
>>>     I’m stuck on a problem I can’t figure out, one that I think several
>>>     people here have solved. I can’t implement distributed
>>>     clocks in my application.
>>>
>>>     I am having the same problem on two systems I have up and running:
>>>
>>>     SYSTEM ONE:
>>>
>>>     EtherLab Master 1.5.2, E1000E driver, scan rate 4 kHz, Ubuntu Server
>>>     14.04 LTS, RT-PREEMPT 3.12.50-rt68
>>>
>>>     alias=0, position=0, device=EK1100
>>>
>>>     alias=0, position=1, device=EL1104
>>>
>>>     alias=0, position=2, device=EL2004
>>>
>>>     alias=0, position=3, device=EL9510
>>>
>>>     alias=0, position=4, device=EL3356
>>>
>>>     alias=0, position=5, device=Kollmorgen AKD
>>>
>>>     alias=0, position=6, device=MTS LVDT
>>>
>>>     SYSTEM TWO:
>>>
>>>     EtherLab Master 1.5.2, E1000E driver, scan rate 8 kHz, Ubuntu Server
>>>     14.04 LTS, RT-PREEMPT 3.12.50-rt68
>>>
>>>     alias=0, position=0, device=EK1100
>>>
>>>     alias=0, position=1, device=EL3001
>>>
>>>     alias=0, position=2, device=EL1104
>>>
>>>     alias=0, position=3, device=EL1104
>>>
>>>     alias=0, position=4, device=EL1104
>>>
>>>     alias=0, position=5, device=EL2004
>>>
>>>     alias=0, position=6, device=EL2004
>>>
>>>     alias=0, position=7, device=EL9505
>>>
>>>     alias=0, position=8, device=EL7041
>>>
>>>     alias=0, position=9, device=EL7041
>>>
>>>     alias=0, position=10, device=EL7041
>>>
>>>     alias=0, position=11, device=EL7041
>>>
>>>     alias=0, position=12, device=EL7041
>>>
>>>     alias=0, position=13, device=EL7041
>>>
>>>     alias=0, position=14, device=EK1110
>>>
>>>     alias=1, position=0, device=SIGMA5-05
>>>
>>>     alias=2, position=0, device=Yaskawa SIGMA5-05
>>>
>>>     alias=3, position=0, device=Yaskawa SIGMA5-05
>>>
>>>     Both of the systems are fully operational. However, for various
>>>     reasons, I need to implement distributed clocks on these systems.
>>>     I’ve never been able to get this to work.
>>>
>>>     What follows is the code I used on both systems to try this. I read
>>>     through posts on the mailing list, plus the examples, but I’m not
>>>     seeing what I’m doing wrong. The result is always the same: all the
>>>     fieldbus cards go into operation, but the servo drives won’t because
>>>     of “bad configuration.” Take out the distributed clock code, and
>>>     they work fine. I’m getting away with it for now, but I do need
>>>     better clock resolution.
>>>
>>>     The systems have an LRW domain, then a separate read domain and
>>>     write domain for the servo drive(s), for a total of three domains
>>>     (LRW, read, write). The yaskawa drives necessitate this. The scan
>>>     rate is usually 4 kHz, but I have tried it at both 1 kHz and 8 kHz and
>>>     gotten the same results. Everything about the implementation is
>>>     fairly straightforward. There’s just something fundamental about
>>>     the DC configuration that I’m not understanding.
>>>
>>>     Almost all the code below is taken right from the examples or the
>>>     message boards (thanks, everybody!). If anyone could tell me where
>>>     I’m going wrong or offer any insights, it’s greatly appreciated. For
>>>     brevity, I tried to narrow it down to the relevant parts, but please
>>>     let me know any additional information or code I can provide.
>>>
>>>     Thank you in advance,
>>>
>>>     Tom
>>>
>>>     **********************************************************
>>>
>>>     // EtherCAT distributed clock variables
>>>     #define DC_FILTER_CNT          1024
>>>     #define SYNC_MASTER_TO_REF        1
>>>
>>>     static uint64_t dc_start_time_ns = 0LL;
>>>     static uint64_t dc_time_ns = 0;
>>>     static uint8_t  dc_started = 0;
>>>     static int32_t  dc_diff_ns = 0;
>>>     static int32_t  prev_dc_diff_ns = 0;
>>>     static int64_t  dc_diff_total_ns = 0LL;
>>>     static int64_t  dc_delta_total_ns = 0LL;
>>>     static int      dc_filter_idx = 0;
>>>     static int64_t  dc_adjust_ns;
>>>     static int64_t  system_time_base = 0LL;
>>>     static uint64_t wakeup_time = 0LL;
>>>     static uint64_t overruns = 0LL;
>>>
>>>     /** Get the time in ns for the current cpu, adjusted by
>>>       * system_time_base.
>>>       *
>>>       * \attention Rather than calling rt_get_time_ns() directly, all
>>>       * application time calls should use this method instead.
>>>       *
>>>       * \ret The time in ns.
>>>       */
>>>     uint64_t system_time_ns(void)
>>>     {
>>>         struct timespec ts;
>>>         clock_gettime(GLOBAL_CLOCK_TO_USE, &ts);
>>>         return TIMESPEC2NS(ts);
>>>     }
>>>
>>>     static void sync_distributed_clocks(void)
>>>     {
>>>         uint32_t ref_time = 0;
>>>         uint64_t prev_app_time = dc_time_ns;
>>>
>>>         dc_time_ns = system_time_ns();
>>>
>>>         // set master time in nano-seconds
>>>         ecrt_master_application_time(master_, dc_time_ns);
>>>
>>>         // get reference clock time to synchronize master cycle
>>>         ecrt_master_reference_clock_time(master_, &ref_time);
>>>         dc_diff_ns = (uint32_t) prev_app_time - ref_time;
>>>
>>>         // call to sync slaves to ref slave
>>>         ecrt_master_sync_slave_clocks(master_);
>>>     }
>>>
>>>     /** Return the sign of a number
>>>       *
>>>       * ie -1 for -ve value, 0 for 0, +1 for +ve value
>>>       *
>>>       * \retval the sign of the value
>>>       */
>>>     #define sign(val) \
>>>         ({ typeof (val) _val = (val); \
>>>         ((_val > 0) - (_val < 0)); })
>>>
>>>     /*****************************************************************************/
>>>
>>>     /** Update the master time based on the ref slave's time diff
>>>       *
>>>       * called after the ethercat frame is sent to avoid time jitter in
>>>       * sync_distributed_clocks()
>>>       */
>>>     static unsigned int cycle_ns = 1000000;  // 1 millisecond
>>>
>>>     void update_master_clock(void)
>>>     {
>>>         // calc drift (via un-normalised time diff)
>>>         int32_t delta = dc_diff_ns - prev_dc_diff_ns;
>>>         prev_dc_diff_ns = dc_diff_ns;
>>>
>>>         // normalise the time diff
>>>         dc_diff_ns =
>>>             ((dc_diff_ns + (cycle_ns / 2)) % cycle_ns) - (cycle_ns / 2);
>>>
>>>         // only update if primary master
>>>         if (dc_started) {
>>>
>>>             // add to totals
>>>             dc_diff_total_ns += dc_diff_ns;
>>>             dc_delta_total_ns += delta;
>>>             dc_filter_idx++;
>>>
>>>             if (dc_filter_idx >= DC_FILTER_CNT) {
>>>
>>>                 // add rounded delta average
>>>                 dc_adjust_ns +=
>>>                     ((dc_delta_total_ns + (DC_FILTER_CNT / 2)) / DC_FILTER_CNT);
>>>
>>>                 // and add adjustment for general diff (to pull in drift)
>>>                 dc_adjust_ns += sign(dc_diff_total_ns / DC_FILTER_CNT);
>>>
>>>                 // limit crazy numbers (0.1% of std cycle time)
>>>                 if (dc_adjust_ns < -1000) {
>>>                     dc_adjust_ns = -1000;
>>>                 }
>>>                 if (dc_adjust_ns > 1000) {
>>>                     dc_adjust_ns =  1000;
>>>                 }
>>>
>>>                 // reset
>>>                 dc_diff_total_ns = 0LL;
>>>                 dc_delta_total_ns = 0LL;
>>>                 dc_filter_idx = 0;
>>>             }
>>>
>>>             // add cycles adjustment to time base (including a spot adjustment)
>>>             system_time_base += dc_adjust_ns + sign(dc_diff_ns);
>>>         }
>>>         else {
>>>             dc_started = (dc_diff_ns != 0);
>>>             if (dc_started)
>>>             {
>>>                 // record the time of this initial cycle
>>>                 dc_start_time_ns = dc_time_ns;
>>>             }
>>>         }
>>>     }
>>>
>>>     struct timespec dcTime_;
>>>
>>>     int
>>>     ecatMain_process(void* lp)
>>>     {
>>>         ecrt_master_receive(master_);
>>>
>>>         clock_gettime(CLOCK_REALTIME, &dcTime_);
>>>         ecrt_master_application_time(master_, TIMESPEC2NS(dcTime_));
>>>         ecrt_master_sync_reference_clock(master_);
>>>         ecrt_master_sync_slave_clocks(master_);
>>>
>>>         ecrt_domain_process(lrwDomainMgr_.domain);
>>>         ecrt_domain_process(noLrwWriteDomainMgr_.domain);
>>>         ecrt_domain_process(noLrwReadDomainMgr_.domain);
>>>
>>>         … // handle my business
>>>
>>>         // write application time to master
>>>         clock_gettime(CLOCK_REALTIME, &dcTime_);
>>>         ecrt_master_application_time(master_, TIMESPEC2NS(dcTime_));
>>>
>>>         if (sync_ref_counter_)
>>>         {
>>>             sync_ref_counter_--;
>>>         }
>>>         else
>>>         {
>>>             sync_ref_counter_ = 1; // sync every cycle
>>>             ecrt_master_sync_reference_clock(master_);
>>>         }
>>>
>>>         // send process data
>>>         ecrt_domain_queue(lrwDomainMgr_.domain);
>>>         ecrt_domain_queue(noLrwWriteDomainMgr_.domain);
>>>         ecrt_domain_queue(noLrwReadDomainMgr_.domain);
>>>
>>>         // sync distributed clock just before master_send to set
>>>         // most accurate master clock time
>>>         sync_distributed_clocks();
>>>
>>>         // send EtherCAT data
>>>         ecrt_master_send(master_);
>>>
>>>         // update the master clock
>>>         // Note: called after ecrt_master_send() to reduce time
>>>         // jitter in the sync_distributed_clocks() call
>>>         update_master_clock();
>>>
>>>         return 1;
>>>     }
>>>
>>>     int
>>>     ecatMain_start(void* lp)
>>>     {
>>>         //
>>>         // domain regs must end in a null entry
>>>         //
>>>         lrwDomainMgr_.domainRegs = realloc(
>>>             lrwDomainMgr_.domainRegs,
>>>             sizeof(ec_pdo_entry_reg_t) * (lrwDomainMgr_.size + 1) );
>>>         memset(
>>>             &(lrwDomainMgr_.domainRegs[lrwDomainMgr_.size]),
>>>             0,
>>>             sizeof(ec_pdo_entry_reg_t) );
>>>
>>>         noLrwReadDomainMgr_.domainRegs = realloc(
>>>             noLrwReadDomainMgr_.domainRegs,
>>>             sizeof(ec_pdo_entry_reg_t) * (noLrwReadDomainMgr_.size + 1) );
>>>         memset(
>>>             &(noLrwReadDomainMgr_.domainRegs[noLrwReadDomainMgr_.size]),
>>>             0,
>>>             sizeof(ec_pdo_entry_reg_t) );
>>>
>>>         noLrwWriteDomainMgr_.domainRegs = realloc(
>>>             noLrwWriteDomainMgr_.domainRegs,
>>>             sizeof(ec_pdo_entry_reg_t) * (noLrwWriteDomainMgr_.size + 1) );
>>>         memset(
>>>             &(noLrwWriteDomainMgr_.domainRegs[noLrwWriteDomainMgr_.size]),
>>>             0,
>>>             sizeof(ec_pdo_entry_reg_t) );
>>>
>>>         //
>>>         // NOTE: the Output Domain must be registered with
>>>         // ecrt_domain_reg_pdo_entry_list before the Input Domain, otherwise
>>>         // you will not have any data exchanged even though the drive goes
>>>         // into OP mode.
>>>         //
>>>         PRINT("\nAttempting to register PDOs on WRITE ONLY domain...\n");
>>>         if (ecrt_domain_reg_pdo_entry_list(
>>>                 noLrwWriteDomainMgr_.domain, noLrwWriteDomainMgr_.domainRegs))
>>>         {
>>>             PRINT("WRITE ONLY PDO entry registration failed!\n");
>>>             return FALSE;
>>>         }
>>>
>>>         PRINT("\nAttempting to register PDOs on READ ONLY domain...\n");
>>>         if (ecrt_domain_reg_pdo_entry_list(
>>>                 noLrwReadDomainMgr_.domain, noLrwReadDomainMgr_.domainRegs))
>>>         {
>>>             PRINT("READ ONLY PDO entry registration failed!\n");
>>>             return FALSE;
>>>         }
>>>
>>>         //
>>>         // And now we register the bi-directional domain.
>>>         //
>>>         PRINT("\nAttempting to register PDOs on LRW domain...\n");
>>>         if (ecrt_domain_reg_pdo_entry_list(
>>>                 lrwDomainMgr_.domain, lrwDomainMgr_.domainRegs))
>>>         {
>>>             PRINT("LRW PDO entry registration failed!\n");
>>>             return FALSE;
>>>         }
>>>
>>>         /*
>>>          * Finishes the configuration phase and prepares for cyclic
>>>          * operation. This function tells the master that the configuration
>>>          * phase is finished and the realtime operation will begin.
>>>          * The function allocates internal memory for the domains and
>>>          * calculates the logical FMMU addresses for domain members.
>>>          * It tells the master state machine that the bus configuration is
>>>          * now to be applied.
>>>          */
>>>         PRINT("\nAttempting to activate ECAT master...\n");
>>>         if (ecrt_master_activate(master_))
>>>         {
>>>             PRINT(
>>>                 "%s Failed to activate master!\n",
>>>                 __FUNCTION__ );
>>>             return FALSE;
>>>         }
>>>
>>>         /*
>>>          * Returns the domain's process data.
>>>          */
>>>         PRINT( "%s getting LRW process data from master.\n", __FUNCTION__ );
>>>         if (!(lrwDomainMgr_.processData
>>>                   = ecrt_domain_data(lrwDomainMgr_.domain)))
>>>         {
>>>             PRINT(
>>>                 "%s set ecProcessData -- domain data is NULL!\n",
>>>                 __FUNCTION__ );
>>>             return FALSE;
>>>         }
>>>
>>>         if (!(noLrwReadDomainMgr_.processData
>>>                   = ecrt_domain_data(noLrwReadDomainMgr_.domain)))
>>>         {
>>>             PRINT(
>>>                 "%s set read ProcessData -- domain data is NULL!\n",
>>>                 __FUNCTION__ );
>>>             return FALSE;
>>>         }
>>>
>>>         if (!(noLrwWriteDomainMgr_.processData
>>>                   = ecrt_domain_data(noLrwWriteDomainMgr_.domain)))
>>>         {
>>>             PRINT(
>>>                 "%s set write ProcessData -- domain data is NULL!\n",
>>>                 __FUNCTION__ );
>>>             return FALSE;
>>>         }
>>>
>>>         … // blah blah blah
>>>
>>>         doScan_ = TRUE;
>>>         PRINT( "%s completed successfully.\n", __FUNCTION__ );
>>>         return TRUE;
>>>     }
>>>
>>>
>>>
>>> _______________________________________________
>>> etherlab-users mailing list
>>> etherlab-users at etherlab.org
>>> http://lists.etherlab.org/mailman/listinfo/etherlab-users
>>>
>>_______________________________________________
>>etherlab-users mailing list
>>etherlab-users at etherlab.org
>>http://lists.etherlab.org/mailman/listinfo/etherlab-users

