[etherlab-users] Distributed Clock with Yaskawa SGDV drives

George Broz GBroz at moog.com
Wed Feb 15 02:47:32 CET 2012


Hello - 

When I applied this patch I was able to sync and run slaves in 
an application, but the ethercat reg_write and reg_read commands 
(and possibly others) stopped working reliably.

For example, I would clear the System Time Offset register (0x920) and then
use reg_read to check it. On subsequent calls it would return, somewhat
randomly, either the expected zero value or a garbage value. This happened for me
while the master was in the IDLE phase (i.e. no program running). 

If I issue reg_read enough times I get a kernel oops. This happens
very quickly if I am running a program (master in 'Operation' phase).

Here is a typical dump from one of the kernel oopses:

Feb 14 16:57:15 nxtgenhd kernel: [ 2219.244105] EtherCAT DEBUG 0-1: Register request successful.
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.256056] EtherCAT DEBUG 0-0: Processing register request, offset 0x0990, length 4...
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.260070] BUG: unable to handle kernel paging request at 1a48c474
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.260270] IP: [<c01f98e7>] __kmalloc+0xc7/0x200
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.260387] *pde = 00000000
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.260449] Oops: 0000 [#1] SMP
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.260520] last sysfs file: /sys/devices/pci0000:00/0000:00:1c.4/0000:04:00.0/local_cpus
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.260665] Modules linked in: ec_rtdm ec_e1000e ec_master i915 binfmt_misc drm_kms_helper drm xeno_native xeno_posix xeno_rtdm i2c_algo_bit i2c_core xeno_nucleus ppdev intel_agp parport_pc video lp intel_gtt output parport psmouse serio_raw agpgart usbhid hid ata_piix libata [last unloaded: e1000e]
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.261294]
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.261327] Pid: 1248, comm: EtherCAT-IDLE Not tainted 2.6.37.6.010512-xen2.5.6 #7 Intel Corporation Lobster Rock/To be filled by O.E.M.
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.261571] EIP: 0060:[<c01f98e7>] EFLAGS: 00010006 CPU: 0
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.261672] EIP is at __kmalloc+0xc7/0x200
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.261748] EAX: f6c103b0 EBX: 00000004 ECX: 1a48c474 EDX: 00000000
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.261860] ESI: f6002200 EDI: 000000d0 EBP: f45edf50 ESP: f45edf24
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.261972]  DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.262070] Process EtherCAT-IDLE (pid: 1248, ti=f45ec000 task=eeeb2b20 task.ti=f45ec000)
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.262213] I-pipe domain Linux
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.262270] Stack:
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.262309]  eeef83c0 eee2b790 f45edf34 f8810591 f45edf50 c01f8ab7 1a48c474 00000200
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.262491]  f4163ebc eecfe97c eecfebe0 f45edf74 f8810591 f45edf60 c01977cd f45edf68
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.262671]  eecfe800 eecfe800 eecfee24 00000001 f45edf7c f880e9af f45edfb8 f881ac5a
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.262853] Call Trace:
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.264022]  [<f8810591>] ? ec_fsm_master_state_reg_request+0xb1/0x180 [ec_master]
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.264022]  [<c01f8ab7>] ? kfree+0xb7/0x140
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.264022]  [<f8810591>] ? ec_fsm_master_state_reg_request+0xb1/0x180 [ec_master]
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.264022]  [<c01977cd>] ? __ipipe_restore_root+0x2d/0x40
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.264022]  [<f880e9af>] ? ec_fsm_master_exec+0x1f/0x30 [ec_master]
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.264022]  [<f881ac5a>] ? ec_master_idle_thread+0x9a/0x1b0 [ec_master]
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.264022]  [<c01977cd>] ? __ipipe_restore_root+0x2d/0x40
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.264022]  [<f881abc0>] ? ec_master_idle_thread+0x0/0x1b0 [ec_master]
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.264022]  [<c01592c4>] ? kthread+0x74/0x80
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.264022]  [<c0159250>] ? kthread+0x0/0x80
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.264022]  [<c01034fe>] ? kernel_thread_helper+0x6/0x10
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.264022] Code: 80 69 c0 8d 80 a0 91 69 c0 f0 0f ba 28 00 9d 8b 06 64 03 05 9c 80 69 c0 8b 10 85 d2 89 55 ec 0f 84 1d 01 00 00 8b 56 10 8b 4d ec <8b> 14 11 89 10 8b 4d f0 85 c9 74 55 31 c0 e8 a6 de f9 ff 8b 55
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.264022] EIP: [<c01f98e7>] __kmalloc+0xc7/0x200 SS:ESP 0068:f45edf24
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.264022] CR2: 000000001a48c474
Feb 14 16:57:15 nxtgenhd kernel: [ 2219.264022] ---[ end trace e02d6ab9367f84d0 ]---



I'm running Linux 2.6.37.6 with Xenomai 2.5.6 on Intel Atom D510. Do you see the same 
behavior? What is your system?

I was using changeset 2271 (default branch 1798bcdaa8d0) previously. I was going
to diff against that to see if anything popped out, but was hoping you had some ideas...


Thanks,
--George Broz
Moog Inc. Industrial Group




-----<etherlab-users-bounces at etherlab.org> wrote: -----
To: <etherlab-users at etherlab.org>
From: Graeme Foot 
Sent by: 
Date: 02/06/2012 07:46PM
Cc: George Broz <GBroz at moog.com>
Subject: Re: [etherlab-users] Distributed Clock with Yaskawa SGDV drives

Hi,

I think I've got my system stable now.  I've made two main changes since the last post:

1) I'm now setting the axis slave to Operation Mode 7 (Interpolated Position) when it is disabled and Mode 8 (Cyclic Sync Position) when it is enabled.  This is primarily to avoid Sync Errors on shutdown due to the amp's Sync Error Count Limit (0x10F1:2).

2) I've added a new ecrt function, "ecrt_master_deactivate_slaves".  It is called prior to "ecrt_master_deactivate".  It deactivates each slave's dc config settings and then puts all of the slaves into PREOP (except EoE slaves).  The slaves are put into PREOP to clear the dc activation register.

I then wait for the slaves to attain the PREOP state (monitored with masterState.al_states, which is updated in my realtime thread), or for 5 seconds to elapse (to allow for EoE slaves).


I am continuing to send the pdo data while the slaves are deactivating.  I stop sending pdo data just before calling "ecrt_master_deactivate" (as the pdo memory will be released).
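
To make that concrete, here is a minimal C sketch of the shutdown sequence (a sketch only, not the patch code; g_masterState is a placeholder for the ec_master_state_t that my realtime thread refreshes each cycle):

  /* Runs in the non-realtime worker thread once the realtime loop has
   * requested deactivation; the realtime loop keeps cycling meanwhile. */
  ecrt_master_deactivate_slaves(master);   /* new call from the patch */

  /* Wait for all slaves to report PREOP, or give up after ~5 s (EoE
   * slaves never get there, hence the timeout).  al_states is a bit
   * mask of AL states present on the bus; 0x02 means PREOP only. */
  for (int i = 0; i < 500; i++) {
      if (g_masterState.al_states == 0x02)
          break;
      usleep(10000);                       /* poll every 10 ms */
  }

  /* The realtime loop must stop exchanging pdo data before this call,
   * because the pdo memory is released here. */
  ecrt_master_deactivate(master);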


My new command flow is (a condensed C sketch of one realtime cycle follows the listing):

configure master
  ecrt_request_master
  ecrt_master_create_domain
configure slaves (pdos / sdos / dc)
  ecrt_master_slave_config
  ecrt_slave_config_pdos
  ecrt_slave_config_reg_pdo_entry
  ecrt_slave_config_sdo8
  ecrt_slave_config_dc
connect to rtdm
  rt_dev_open(rtdm)
  ecrt_domain_index
  ecrt_rtdm_master_attach
setup domain memory
  ecrt_master_setup_domain_memory
  ecrt_domain_data
  ecrt_domain_size

create master_activate thread -------+
                                     |
start realtime                       |
  request master_activate     -->    | ecrt_master_activate
  ecrt_rtdm_master_recieve           | (after first sync dc)
  sync dc                            |
  ecrt_rtdm_master_send              |
                              <--    | master activated
  ecrt_rtdm_master_recieve           |
  ecrt_rtdm_domain_process_all       |
  check domain state                 |
  check master state                 |
  update domain data                 |
  sync dc                            |
  ecrt_rtdm_domain_queue_all         |
  ecrt_rtdm_master_send              |
                                     |
  request master deactivate   -->    | ecrt_master_deactivate_slaves
  ecrt_rtdm_master_recieve           | (wait for PREOP or 5s)
  ecrt_rtdm_domain_process_all       |
  check domain state                 |
  check master state                 |
  update domain data                 |
  sync dc                            |
  ecrt_rtdm_domain_queue_all         |
  ecrt_rtdm_master_send              |
                              <--    | master deactivating
  ecrt_rtdm_master_recieve           | ecrt_master_deactivate
  sync dc                            |
  ecrt_rtdm_master_send              |
                              <--    | master deactivated
stop realtime                        |
stop master_activate thread          +

cleanup
  rt_dev_close(rtdm)
  ecrt_release_master
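
For reference, one realtime cycle from the flow above, condensed into a C sketch (the ecrt_rtdm_* names are the wrappers from the rtdm patch; the single fd argument and the check/update/sync helper names are placeholders, not the exact patch signatures):

  ecrt_rtdm_master_recieve(fd);          /* fetch received datagrams */
  ecrt_rtdm_domain_process_all(fd);      /* evaluate working counters */

  check_domain_state();                  /* placeholder: domain state / logging */
  check_master_state();                  /* placeholder: refresh master state (al_states etc.) */

  update_domain_data();                  /* placeholder: read inputs, write outputs */

  sync_dc();                             /* placeholder: application time + ref/slave clock sync */

  ecrt_rtdm_domain_queue_all(fd);        /* queue the process data datagrams */
  ecrt_rtdm_master_send(fd);             /* send the frame */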


I have attached a patch for the changes to rtdm and dc.  It has been made against EtherCAT master 1.5.0 ec8e1151b8a7 (2266).


Regards,
Graeme.


-----<etherlab-users-bounces at etherlab.org> wrote: -----
To: <etherlab-users at etherlab.org>
From: Graeme Foot 
Sent by: 
Date: 02/02/2012 05:04PM
Subject: Re: [etherlab-users] Distributed Clock with Yaskawa SGDV drives

I think I'm on track to getting it sorted.


First off, I didn't understand the distributed clock system well enough.

I was originally getting "Slave did not sync after 5000 ms" error messages.  To avoid this error I moved the ecrt_slave_config_dc call to the realtime section (making an rtdm version) and waited for all of the slaves to be in OP mode.  This is wrong, because ecrt_slave_config_dc needs to be called during the config stage, as the dc settings are applied as the slave goes from PREOP to OP.

Instead, to avoid the "Slave did not sync after 5000 ms" error, I have changed the "EC_SYSTEM_TIME_TOLERANCE_NS" value to 10000 in fsm_master.c.  This is the value used to decide whether the slave's System Time Offset should be updated or not.  My slaves (when they are working) take a long time to settle, and the default of 100000000 ns (100 ms) was way too large.
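
In code terms the change is just this one define in fsm_master.c (a sketch of the edit; the comment paraphrases the behaviour described above):

  /* fsm_master.c: tolerance used to decide whether a slave's System Time
   * Offset gets rewritten.  The default here was 100000000 ns (100 ms);
   * with 10000 ns, badly-off slaves get a hard offset correction and can
   * sync inside the 5000 ms window. */
  #define EC_SYSTEM_TIME_TOLERANCE_NS 10000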


Another thing I was doing wrong is that I was activating the master too early.  As part of my configuration setup I need to get the domain data addresses.  To get these in user space the master has to be activated, at which point the domain data is allocated.  But my SGDV slaves need regular communication once the master is activated, and my realtime thread didn't start up until some time later.

So to sort this out I have added a user space method, "ecrt_master_setup_domain_memory".  This sets up the domain data memory.  It needs to be called after all PDO entries have been registered and before activating the master.  This method is optional; if it has not been called, ecrt_master_activate will set up the memory as usual.

I have also added a user space version of ecrt_domain_size.
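
So the configuration side now looks roughly like this in C (a sketch; ecrt_master_setup_domain_memory and the user-space ecrt_domain_size are the additions from my patch and their signatures here are only indicative, the rest is the stock ecrt API):

  ec_master_t *master = ecrt_request_master(0);
  ec_domain_t *domain = ecrt_master_create_domain(master);

  /* ... ecrt_master_slave_config(), ecrt_slave_config_pdos(),
   *     ecrt_slave_config_reg_pdo_entry() for every slave ... */

  /* added by the patch: allocate the domain process data before
   * activation, so the addresses are known up front */
  ecrt_master_setup_domain_memory(master);

  uint8_t *process_data = ecrt_domain_data(domain);  /* already valid here */
  size_t   domain_size  = ecrt_domain_size(domain);  /* user-space version from the patch */

  /* ecrt_master_activate() is deliberately NOT called yet -- it happens
   * later, from the worker thread, once the realtime loop is running */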


The last piece of the puzzle is that I need to call ecrt_master_activate once my realtime loop has been activated.  This can't be an rtdm call because 1) it blocks for a few milliseconds and 2) it needs to be called in the context of the character device (especially if the domain memory needs to be set up).  Note: ecrt_rtdm_master_application_time should be called before the master is activated, otherwise the master will skip updating slave System Time Offsets.

So to be able to call ecrt_master_activate I have created a non-realtime worker thread that I use to make the ecrt_master_activate and ecrt_master_deactivate calls when appropriate.
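
A minimal sketch of that worker thread, assuming a plain pthread and a couple of flags shared with the realtime loop (the flag names and the polling are placeholders, not the actual code):

  #include <pthread.h>
  #include <unistd.h>
  #include "ecrt.h"

  static volatile int g_activate_req;     /* set by the realtime loop */
  static volatile int g_deactivate_req;   /* set by the realtime loop */

  /* Non-realtime worker: performs the blocking ecrt calls in the context
   * of the character device, on request from the realtime loop. */
  static void *master_worker(void *arg)
  {
      ec_master_t *master = arg;

      while (!g_activate_req)
          usleep(1000);
      ecrt_master_activate(master);       /* blocks for a few ms -- fine here */

      while (!g_deactivate_req)
          usleep(1000);
      ecrt_master_deactivate(master);     /* releases the pdo memory */
      return NULL;
  }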


So the general flow of commands is:

configure master
  ecrt_request_master
  ecrt_master_create_domain
configure slaves (pdos)
  ecrt_master_slave_config
  ecrt_slave_config_pdos
  ecrt_slave_config_reg_pdo_entry
connect to rtdm
  rt_dev_open(rtdm)
  ecrt_domain_index
  ecrt_rtdm_master_attach
setup domain memory
  ecrt_master_setup_domain_memory
  ecrt_domain_data
  ecrt_domain_size
configure slaves (sdos / dc)
  ecrt_slave_config_sdo8
  ecrt_slave_config_dc

create master_activate thread -------+
                                     |
start realtime                       |
  request master_activate     -->    | ecrt_master_activate
  ecrt_rtdm_master_recieve           | (after first sync dc)
  sync dc                            |
  ecrt_rtdm_master_send              |
                              <--    | master activated
  ecrt_rtdm_master_recieve           |
  ecrt_rtdm_domain_process_all       |
  check domain state                 |
  check master state                 |
  update domain data                 |
  sync dc                            |
  ecrt_rtdm_domain_queue_all         |
  ecrt_rtdm_master_send              |
                                     |
  request master deactivate   -->    | ecrt_master_deactivate
  ecrt_rtdm_master_recieve           |
  ecrt_rtdm_domain_process_all       |
  check domain state                 |
  check master state                 |
  update domain data                 |
  sync dc                            |
  ecrt_rtdm_domain_queue_all         |
  ecrt_rtdm_master_send              |
                              <--    | master deactivated
stop realtime                        |
stop master_activate thread          +

cleanup
  rt_dev_close(rtdm)
  ecrt_release_master



Unfortunately my slaves still do not always behave.
- Once I've had synchronization errors I need to power off the slave before it will start behaving again.
- The Speed Counter Diff (0x0932:0x0933) value stays zero, so I can't tell how much drift is occurring.
- I'm getting the odd synchronization error on shutdown.  I think I need to wait until all slaves are in the PREOP state before I stop sending the pdo data.


Another thing I've found is that the Speed Counter Diff value seems to be an int16, whereas the System Time Diff (0x092C:0x092F) value is an sm32.  Is there any documentation as to which values are sm (sign and magnitude) vs int?
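
For reference, decoding a sign-and-magnitude value is just a matter of splitting off bit 31; a minimal helper (my own sketch, consistent with the 0x800009d8 = -2520 reading quoted further down):

  #include <stdint.h>

  /* Decode an ESC sign-and-magnitude 32-bit value (e.g. System Time Diff,
   * 0x092C:0x092F): bit 31 is the sign, bits 30..0 the magnitude. */
  static int32_t sm32_to_int(uint32_t raw)
  {
      int32_t mag = (int32_t)(raw & 0x7fffffffu);
      return (raw & 0x80000000u) ? -mag : mag;
  }

  /* sm32_to_int(0x800009d8) == -2520, sm32_to_int(0x00000003) == 3 */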


Hopefully the above all made sense and was not too rambling.

Regards,
Graeme.


-----Original Message-----
From: etherlab-users-bounces at etherlab.org [mailto:etherlab-users-bounces at etherlab.org] On Behalf Of Graeme Foot
Sent: Wednesday, 1 February 2012 20:28
To: etherlab-users at etherlab.org
Cc: George Broz
Subject: Re: [etherlab-users] Distributed Clock with Yaskawa SGDV drives


Hi,

After a bit more testing (still some to go):

- I have solved the problem where the System Time Delay values are wrong for the Yaskawa SGDVs.  At the end of slave 8 we were using an EL9010 terminal.  The master then detected that port 1 was open on slave 8, but there were no slaves to communicate with.  (The SGDVs are hanging off slave 2, the EK1100.)  The drift still occurs.


- I have found that the Speed Counter Diff (0x0932:0x0933) value on the SGDV slaves remains zero, whereas the rest of the slaves seem to be doing their jobs.

George, the Speed Counter Diff values you have below should be 16 bit, so your values will be more like:
  slave 1: Speed Counter Diff   0x0932:0x0933: 0xf1d6 -470
  slave 2: Speed Counter Diff   0x0932:0x0933: 0xf247 -583
  slave 3: Speed Counter Diff   0x0932:0x0933: 0xf1fa -506


- All slaves' System Time Diff (0x092C:0x092F) values are bouncing around in the +/-50 range.  Sometimes the SGDV values start to take off, but the drift is there whether they look stable or are taking off.


- My DC reference clock slave is slave 0 (the CX1100-0004).  This is also a 32-bit dc.  The master is only distributing the first 4 bytes from this ref slave to the rest of the slaves, which looks correct.


- I still need to look into why the ecrt_slave_config_dc call is failing.



At the moment I suspect a problem with the drives' firmware (which is 3.1).  I'll contact Yaskawa and see if they have any ideas.


Regards,
Graeme.


-----Original Message-----
From: George Broz [mailto:GBroz at moog.com] 
Sent: Wednesday, 1 February 2012 08:45
To: Graeme Foot
Cc: etherlab-users at etherlab.org
Subject: Re: [etherlab-users] Distributed Clock with Yaskawa SGDV drives

Hello,

Regarding Speed Counter Diff (0x0932:0x0933) - I haven't paid too
much attention to it, but for us all of the slaves generally
settle on approximately the same large number when in sync - for example:

slave 1: Speed Counter Diff   0x0932:0x0933: 0x0c04f1d6 201650646
slave 2: Speed Counter Diff   0x0932:0x0933: 0x0c04f247 201650759
slave 3: Speed Counter Diff   0x0932:0x0933: 0x0c04f1fa 201650682



Regarding the place in the code where the reference slave 
is determined, see:

ec_master_find_dc_ref_clock()

This runs at the same time as the propagation delay is calculated
(driver insertion, ethercat rescan, or a bus change that causes a rescan).

It is also in this function that the sync datagram is set up, which
uses a FRMW (read, multiple write) instruction: read 0x910 from the
reference slave's address and write it to the other slaves.
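
Conceptually the FRMW datagram amounts to the following (just an illustration with made-up helpers - esc_read_u32, esc_write_u32 and distribute_ref_time are not real master code, and the ESCs do all of this in hardware within a single pass of the frame):

  #include <stdint.h>

  uint32_t esc_read_u32(int slave_pos, uint16_t reg);              /* hypothetical */
  void     esc_write_u32(int slave_pos, uint16_t reg, uint32_t v); /* hypothetical */

  void distribute_ref_time(int ref_slave, const int *dc_slaves, int n)
  {
      /* read System Time (0x0910) from the reference slave ... */
      uint32_t ref_time = esc_read_u32(ref_slave, 0x0910);

      /* ... and write that same value to every other DC slave */
      for (int i = 0; i < n; i++)
          if (dc_slaves[i] != ref_slave)
              esc_write_u32(dc_slaves[i], 0x0910, ref_time);
  }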


I'm not certain whether the slave with the reference clock needs to have its 
distributed clock enabled (with ecrt_slave_config_dc). From the 
EtherCAT spec it seems that DC synchronization and SYNC0/1 
generation are independent functions. The slave synchronization 
depends on distributing the 0x910 register, which continues to update 
even when the slave is not "Activated". That's a good question.


Best regards,

--George Broz
Moog, Inc. Industrial Group

-----<etherlab-users-bounces at etherlab.org> wrote: -----
To: <etherlab-users at etherlab.org>
From: Graeme Foot 
Sent by: 
Date: 01/30/2012 11:04PM
Subject: Re: [etherlab-users] Distributed Clock with Yaskawa SGDV drives

Hi,

I have been playing with various options but I'm still getting a drift on my slave relative to the master.


What I have found so far:

1) It looks like my slave requires no other frames on the ring when first setting up the distributed clocks (thanks for pointing that out George).
- I now run only the distributed clock functions for 1 second before adding in the polling pdos.
- This seems to make the System Time Diff (0x092C:0x092F) value remain small, but the Speed Counter Diff (0x0932:0x0933) remains at zero.


2) The System Time Delay (0x0928:0x092B) is a negative number.  The reported slave delays are:
  Slave 0  (CX1100-0004)  0 ns
  Slave 1  (EK1110)       140 ns
  Slave 2  (EK1100)       2670 ns
  Slave 3  (EL2602)       2815 ns
  Slave 4  (EL2008)       2960 ns
  Slave 5  (EL1008)       3100 ns
  Slave 6  (EL1018)       3240 ns
  Slave 7  (EL3162)       3385 ns
  Slave 8  (EL4132)       3530 ns
  Slave 9  (Yaskawa SGDV) 2147486168 ns  (-2520)
  Slave 10 (Yaskawa SGDV) 2147487253 ns  (-3605)

The last two are the ones with the problems.  I still need to look into how the calculations work with 32-bit dc.


3) The System Time Diff (0x092C:0x092F) is jumping around between small negative and positive numbers, but the Speed Counter Diff (0x0932:0x0933) remains at zero.  If there were drift in the local clock I would have expected the System Time Diff to get bigger, or the Speed Counter Diff to become non-zero to compensate for the drift.  Some example values:
  Receive Time Port 0 (0x0900:0x0903):  0x6c18cae8 1813564136
  Receive Time Port 1 (0x0904:0x0907):  0x6c18d362 1813566306
  Receive EPU         (0x0918:0x091F):  0x000000006c18cae8 1813564136
  System Time         (0x0910:0x0917):  0x00000000e7ce6469 3889063017
  System Time Offset  (0x0920:0x0927):  0x0000000048df3bfd 1222589437
  System Time Delay   (0x0928:0x092B):  0x800009d8 -2520
  System Time Diff    (0x092C:0x092F):  0x00000003 3
  Speed Counter Start (0x0930:0x0931):  0x1001 4097
  Speed Counter Diff  (0x0932:0x0933):  0x0000 0


4) In the examples there are two methods used to pass the time to ecrt_master_application_time:
a) in the kernel space example it is just adding the scan time to the current time, eg:
  dcTime += 1000000;
  ecrt_master_application_time(master, dcTime);
b) in the user space example it gets the time every cycle, eg:
  clock_gettime(CLOCK_TO_USE, &time);
  ecrt_master_application_time(master, TIMESPEC2NS(time));

I'm using RTAI LXRT with rtdm.  I have confirmed that my cycle times are consistent, and when using rt_get_cpu_time_ns (with method b) I am not getting drift at the master.  I've also tried method a with no difference in the result.  I still need to compare the slave time against the master time, but I'll need to read the register (via rtdm) to do that.
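
For completeness, the RTAI flavour of method b that I'm using looks roughly like this each cycle (a sketch; ecrt_rtdm_master_application_time is the wrapper from my rtdm patch, and the single fd argument here is only indicative):

  /* once per realtime cycle, before the dc sync / queue / send calls */
  long long now_ns = rt_get_cpu_time_ns();               /* RTAI time source, in ns */
  ecrt_rtdm_master_application_time(fd, (uint64_t)now_ns);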


5) I'm calling ecrt_slave_config_dc on the SGDV drives after the system is up and running, but the sync Cyclic Unit Control and Activation (0x0980:0x0981) bytes remain zero, as does the SYNC 0 Cycle Time (0x09A0:0x09A3).  e.g.:
  ecrt_slave_config_dc(slave->sc, 0x0300, g_app.scanTimeNS, 500000, 0, 0);

Results in:
  SYNC 0 Start Time   (0x0990:0x0997):  0x0000000048df3ad1 1222589137
  SYNC 0 Pulse Length (0x0982:0x0983):  0x03e8 1000
  SYNC 0 Cycle Time   (0x09A0:0x09A3):  0x00000000 0
  Sync activation     (0x0980:0x0981):  0x0000 0

I still need to look at the code as to why this might be happening.

If I fill in these values manually, the start time starts ticking away as it should, but I still get drift.  (i.e. setting SYNC 0 Cycle Time (0x09A0:0x09A3) to 1000000 and Sync activation (0x0980:0x0981) to 0x0300.)



Apart from a couple of things mentioned above that I still need to look into, does anyone have anything else I should consider?

Also a couple of questions:
- The first slave with a distributed clock becomes the reference slave.  Is there any way to find out which slave that is?  (I haven't looked into the code yet.)
- Does the slave that is being used as the reference clock need to have its distributed clock enabled (with ecrt_slave_config_dc)?



I am using the latest 1.5.0 stable master:
  # ethercat version
  IgH EtherCAT master 1.5.0 ec8e1151b8a7+



Regards,
Graeme.





-----Original Message-----
From: etherlab-users-bounces at etherlab.org [mailto:etherlab-users-bounces at etherlab.org] On Behalf Of George Broz
Sent: Saturday, 28 January 2012 14:36
To: Graeme Foot
Cc: etherlab-users at etherlab.org
Subject: Re: [etherlab-users] Distributed Clock with Yaskawa SGDV drives

Hi,

It looks like the propagation (transmission) delay determination 
is failing. That algorithm runs on driver insertion, if there is 
a bus change, or if you use the command line tool and issue 
'ethercat rescan'. You can see the result (its register is 
0x0928) using:

ethercat -p $1 -t uint32 reg_read 2344 4
(where $1 is your slave position.)

The number returned is in nanoseconds.

It should return something that's a couple of 
hundred or a few thousand nanoseconds. This is then
used with the System Time Offset register (0x920) to
produce a System Time from the slave's local time.

For starters, I'd try to write directly to the 
0x928 register supplying something reasonable. For example:

ethercat -p $1 -t uint32 reg_write 2344 1300

That value will stay in there as long as your bus configuration
does not change or is not scanned. That's just to make sure
the slave takes a value. 

If that works, you can take a quick look at the receive times 
for each port. These will change whenever the propagation delay
algorithm is run (e.g. ethercat rescan). These are what make 
the propagation measurement possible. Only pay attention to the 
ones where you have a physically connected slave (unconnected 
ports return a number - something that looks uninitialized, 
perhaps down at the slave?).

echo "Receive Time Port 0 - 0x0900:0x0903:" `ethercat -p $1 -t uint32 reg_read 2304 4`
echo "Receive Time Port 1 - 0x0904:0x0907:" `ethercat -p $1 -t uint32 reg_read 2308 4`
echo "Receive Time Port 2 - 0x0908:0x090B:" `ethercat -p $1 -t uint32 reg_read 2312 4`
echo "Receive Time Port 4 - 0x090C:0x090F:" `ethercat -p $1 -t uint32 reg_read 2316 4`
(where $1 is the slave position).

The difference between the numbers returned for connected ports
should be small (thousands of nanoseconds). 

There is a note in the EtherCAT spec that states that for some 
slaves the ring has to be empty of all other frames before the 
broadcast write packet associated with this measurement can work 
(see Beckhoff EtherCAT ET1100 datasheet section 9.1.2, just above 
table 26.) That would require support from the master. Perhaps 
your Yaskawa drives fall into that category. 

I have no experience with 32 bit DC slaves. Maybe there is a 
bug there?


BTW - if you want to view 0x92c (or others like it) decoded as 
sign-and-magnitude rather than two's complement, use -

ethercat -p $1 -t sm32 reg_read 2348 4


Best regards,

--George Broz
Moog, Inc. Industrial Group

-----<etherlab-users-bounces at etherlab.org> wrote: -----
To: <etherlab-users at etherlab.org>
From: Graeme Foot 
Sent by: 
Date: 01/26/2012 06:38PM
Subject: [etherlab-users] Distributed Clock with Yaskawa SGDV drives


Hi,
 
I'm having a problem with my Yaskawa SGDV drives.  I am using them in Cyclic Synchronous Position mode.  The problem is that while moving they frequently get an unstable position error.  The unstable position errors last for about a quarter of a second and occur regularly, approximately every 4 seconds.

The position error is always less than the cycle delta position.  I am setting up the drives to use the distributed clock, but I suspect that something is not set up correctly and the position error is occurring due to a drift in the cycle relative to the master.
 
 The "DC system time transmission delay" (from the "ethercat slaves -v" command) for the drives are:
drive 1 (slave 9): 2147486148 ns
drive 2 (slave 10): 2147487218 ns
They are being reported as 32bit distributed clocks.
 
 
When doing the "ethercat reg_read -p<slave> -tint32 0x092C" command on each of the slaves then the slaves prior to the yaskawa drives get numbers such as:
0x00000010 16
0x00000024 36
0x80000011 -2147483631
0x80000004 -2147483644
 
Where the low order bytes contain low numbers but the sign bit may be on or off.
 
However, the Yaskawa slaves are more like:
0x66a32ed9 1721970393
0x669f8483 1721730179
0x669d0223 1721565731
0x669b553f 1721455935
 
 
If I disconnect the drives then the rest of the slaves still behave the same (where the sign bit may be on or off).
 
 
Does anyone have any ideas what I should be looking at next?
 
 
Regards,
Graeme Foot
_______________________________________________
etherlab-users mailing list
etherlab-users at etherlab.org
http://lists.etherlab.org/mailman/listinfo/etherlab-users



[attachment "etherlabmaster-1.5-2266-a_rtdm_dc.patch" removed by George Broz/ber/us/moog]


