[etherlab-users] CoE mailbox problem: "Received unknown response while uploading SDO"

Jun Yuan j.yuan at rtleaders.com
Thu May 29 14:01:26 CEST 2014


Hello,

Gavin has mentioned the problem he has with 0x1C1x:00, which makes me
believe I may not be the only victim of this. BTW, has anyone tried the
EtherLab master on Ubuntu 14.04? When I set the debug level of the
EtherLab master to 1, the crash-report program Apport always pops up,
claiming it observed a crash. Weird, because there is no crash, only
several DEBUG messages. The problem goes away when I misspell DEBUG as
DEBUH in the master source code. I think Apport parses the kernel log
messages and mistakes DEBUG for a BUG.

The CoE mailbox communication between the master and slave works like the
following:
1) the master sends a request datagram to the slave and receives the
acknowledgement from the slave;
2) the master sends a mailbox check datagram to the slave, asking if the
slave has a reply prepared for him;
3) if the slave says no, go back to 2) and send another check datagram;
otherwise, go to 4);
4) the master sends a fetch-response datagram to fetch the reply from
the slave.
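The four steps above can be sketched as a tiny simulation. SimSlave and
upload_sdo are purely illustrative stand-ins, not the real EtherLab master
code:

```python
# Toy model of the four-step CoE mailbox exchange (hypothetical names).

class SimSlave:
    def __init__(self, delay):
        self.delay = delay        # check cycles until the reply is ready
        self.reply = None

    def request(self, sdo):
        # Step 1: slave acknowledges and starts preparing the reply.
        self._pending = sdo
        self._countdown = self.delay

    def check(self):
        # Steps 2/3: mailbox-check datagram; True once a reply is waiting.
        if self._countdown > 0:
            self._countdown -= 1
            return False
        self.reply = ("reply", self._pending)
        return True

    def fetch(self):
        # Step 4: the fetch-response datagram empties the mailbox.
        r, self.reply = self.reply, None
        return r

def upload_sdo(slave, sdo, max_checks=10):
    slave.request(sdo)             # 1) request + acknowledge
    for _ in range(max_checks):    # 2)/3) poll until a reply is ready
        if slave.check():
            return slave.fetch()   # 4) fetch the reply
    raise TimeoutError("no reply")

print(upload_sdo(SimSlave(delay=3), "0x1C12:00"))
# -> ('reply', '0x1C12:00')
```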

The problem I discovered with the EtherLab master is that when the bus
topology changes and a bus rescan is required, the state machine of the CoE
mailbox communication is interrupted/stopped and later reset to the START
state. This causes the problem.

For example, the master sends a request A to the slave; the slave
acknowledges and prepares reply A, which takes a while. At that moment we
attach a new slave to the bus, the bus topology changes, and this stops the
CoE mailbox communication. Now the master rescans the bus and begins to use
the CoE mailbox to fetch SDO 1C12:00 for SM2 of the slave. He sends the
request, then checks the mailbox and finds a reply waiting. He fetches the
mail from the slave, opens it, and it is reply A, not the reply for SDO
1C12:00 which he is waiting for.

At this point the master gets angry, saying "Received unknown response
while uploading SDO". He throws reply A away and never asks again for SM2.
That is the bug.
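The failure can be shown with a toy mailbox modelled as a queue (names are
illustrative only, not the master's actual data structures):

```python
from collections import deque

# Toy model of the failure: the slave's mailbox still holds the reply to
# the old request A when the master, freshly reset by a bus rescan,
# issues a new request.

mailbox = deque()

def slave_handle(request):
    mailbox.append(("reply-to", request))   # slave queues one reply per request

slave_handle("A")            # old request, acknowledged before the rescan
# ... bus topology changes, master state machine resets to START;
# reply A is never fetched and stays in the mailbox ...

slave_handle("0x1C12:00")    # master's first request after the rescan
stale = mailbox.popleft()    # master fetches -> gets the stale reply to A
print(stale)                 # ('reply-to', 'A') -> "unknown response"
```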

Then he sends the request for SM3, SDO 1C13:00. And no surprise, he
receives the reply for SM2. Only this time he says "Received upload
response for wrong SDO" and rechecks the mailbox. Sure enough, there is
another reply there, the correct one for SM3.

The consequence: "Failed to read number of assigned PDOs for SM2", and the
slave cannot be brought up without a PDO assignment.

I don't know why, but this phenomenon seems to happen more often in a star
topology than in a line topology.

Whose fault is this? Is it the slave's, for storing an outdated reply in
the mailbox? Or the master's, for forgetting his last pending request A and
then complaining when he receives reply A during the next request? What if
the master gets rebooted, so there is no way for him to remember the last
request?

Is there any command in the EtherCAT standard for the master to ask the
slaves to clear their mailboxes? Why doesn't the mail carry any ID, so the
master can tell which reply belongs to which request? Are simultaneous
multiple requests allowed?

What should the master do when he receives a wrong reply from the slave?
How can he tell whether it is a wrong reply or an outdated reply to the
last request?

Another thing: why does the master not zero the mail data? Is it only for
performance reasons? It is strange to read the leftover garbage data in the
mail request datagram.

Even without answers to the questions above, I see the following possible
solutions:
1) Before sending the request datagram, first send a check datagram to see
if there is any outdated response, and discard it if there is. But what if
the slave finishes the old response after the check datagram?

2) If the master receives a wrong reply, instead of giving up and returning
an error, he keeps checking until timeout. That is what I did in my patch,
but are there any side effects?
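Solution 2) can be sketched roughly like this. fetch_reply and the reply
format are hypothetical stand-ins for the master's internals, not the
actual patch:

```python
import time

# Sketch of solution 2): on a mismatched reply, discard it and keep
# checking until the expected reply arrives or a timeout expires.

def upload_with_retry(fetch_reply, expected_sdo, timeout_s=1.0):
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        reply = fetch_reply()
        if reply is None:              # mailbox empty: keep polling
            continue
        if reply[0] == expected_sdo:
            return reply[1]            # the reply we were waiting for
        # stale reply from before the rescan: drop it and check again
    raise TimeoutError(f"no reply for SDO {expected_sdo}")

# Usage: a mailbox that first yields a stale reply, then the right one.
replies = iter([("A", b"old"), ("0x1C12:00", b"\x02")])
value = upload_with_retry(lambda: next(replies, None), "0x1C12:00")
print(value)   # b'\x02'
```

One side effect to watch for: a genuinely wrong reply (not just a stale
one) is now silently discarded, so the only symptom left is the timeout.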

Regards,
Jun