[etherlab-dev] Support for multiple mailbox protocols

Jun Yuan j.yuan at rtleaders.com
Mon Jun 30 09:41:22 CEST 2014

Hi Knud,

I thought the same, until I observed the problem in a pure CoE application;
see my bug report
http://lists.etherlab.org/pipermail/etherlab-dev/2014/000377.html, in which
I described the CoE problem in 3 scenarios. Scenarios 2 and 3 describe
situations where concurrent CoE requests happen.

The slave's CoE FSM instance is not controlled by the master FSM, but by
the slave FSM itself. The slave FSM is responsible for the CoE requests in
ec_slave_t issued directly by the user via the functions
ecrt_master_sdo_download, ecrt_master_sdo_download_complete, and
ecrt_master_sdo_upload. These functions can be executed either in the user
application before the master's activation, or from the terminal, in whatever
state the master is.
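To illustrate the first path, here is a minimal sketch (not from the original mail) of the blocking SDO transfer functions named above, as declared in ecrt.h of the IgH EtherCAT master. The slave position 0 and the object indices 0x2000:00 and 0x1018:01 are arbitrary example values; the program requires a running EtherCAT master to do anything useful.

```c
#include <stdio.h>
#include <stdint.h>
#include <ecrt.h>

int main(void)
{
    /* Request master instance 0. */
    ec_master_t *master = ecrt_request_master(0);
    if (!master)
        return 1;

    uint32_t abort_code;

    /* Blocking SDO download: write 4 bytes to the (hypothetical)
     * object 0x2000:00 of the slave at ring position 0. */
    uint8_t data[4] = {0x01, 0x00, 0x00, 0x00};
    if (ecrt_master_sdo_download(master, 0, 0x2000, 0x00,
                                 data, sizeof(data), &abort_code))
        fprintf(stderr, "download failed, abort code 0x%08x\n", abort_code);

    /* Blocking SDO upload: read the vendor ID (0x1018:01) back. */
    uint8_t target[4];
    size_t result_size;
    if (ecrt_master_sdo_upload(master, 0, 0x1018, 0x01,
                               target, sizeof(target), &result_size,
                               &abort_code) == 0)
        printf("vendor ID read, %zu bytes\n", result_size);

    ecrt_release_master(master);
    return 0;
}
```

These calls block until the slave FSM has finished the transfer, which is why they can be used both before activation and from the command-line tool.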

The master's CoE FSM instance, on the other hand, is responsible for the
CoE requests that are executed in the background by the master FSM, such
as automatically fetching the slaves' CoE dictionaries while the master is
idle, and those in slave->config->sdo_requests, which can be issued either
via the ecrt_slave_config_sdo functions (which should be configured before
the master's activation) or via the function
ecrt_slave_config_create_sdo_request (whose requests can be issued
afterwards in the user application's RT thread).
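The second path can be sketched as follows (again not from the original mail): an SDO request object is created before activation and then driven from the cyclic RT task by the master's CoE FSM. The alias/position, vendor and product codes, object 0x2000:00, and sizes are arbitrary example values, and a live master is assumed.

```c
#include <stdio.h>
#include <ecrt.h>

static ec_sdo_request_t *req;

/* Called once, before ecrt_master_activate(). */
void configure(ec_master_t *master)
{
    ec_slave_config_t *sc =
        ecrt_master_slave_config(master, 0 /* alias */, 0 /* position */,
                                 0x00000002, 0x044c2c52 /* vendor/product */);

    /* Create a 4-byte request for the (hypothetical) object 0x2000:00. */
    req = ecrt_slave_config_create_sdo_request(sc, 0x2000, 0x00, 4);
    ecrt_sdo_request_timeout(req, 500 /* ms */);
}

/* Called from the cyclic RT task; non-blocking. */
void cyclic_task(void)
{
    switch (ecrt_sdo_request_state(req)) {
    case EC_REQUEST_UNUSED:   /* never used: trigger a first read */
        ecrt_sdo_request_read(req);
        break;
    case EC_REQUEST_BUSY:     /* transfer still running in the master FSM */
        break;
    case EC_REQUEST_SUCCESS:  /* data valid: consume it, start the next read */
        printf("value: 0x%08x\n", EC_READ_U32(ecrt_sdo_request_data(req)));
        ecrt_sdo_request_read(req);
        break;
    case EC_REQUEST_ERROR:    /* transfer failed: retry */
        ecrt_sdo_request_read(req);
        break;
    }
}
```

Because both this path and the blocking path above can be active at the same time for the same slave, the two CoE FSM instances can end up talking to one slave mailbox concurrently.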

This led me to believe that slaves could support concurrent CoE requests,
but Gavin corrected me: this actually depends on the slave's
implementation. So this seems to be another problem we need to fix on the
master's side.


On 30 June 2014 08:22, Knud Baastrup <kba at deif.com> wrote:

> Hi Gavin, Jun
> I am still not able to identify a scenario where the EtherCAT master uses
> concurrent CoE requests towards the same slave. I know that both the master
> and slave FSMs have CoE FSM instances, but their usage is controlled by the
> master FSM in dedicated states that should prevent concurrent access. Do
> either of you have a concrete scenario where this can happen?
> I believe it is fair to say that an EtherCAT slave should only support a
> mailbox queue if it wants to support multiple mailbox protocols, and that
> the master should always ensure only one running/active instance of a
> CoE/EoE/SoE/VoE/FoE FSM per slave. It would be very difficult/impossible
> to support e.g. multiple instances of an EoE FSM for the same slave, as it
> would be impossible to dispatch the asynchronous replies to the right EoE
> FSM.
> If it is possible to identify a scenario with concurrent CoE requests
> towards the same slave, I hope it is a bug that can be fixed within the
> current design of the EtherCAT master.
> BR, Knud
> -----Original Message-----
> From: Gavin Lambert [mailto:gavinl at compacsort.com]
> Sent: 30. juni 2014 02:11
> To: 'Jun Yuan'
> Cc: Knud Baastrup; etherlab-dev at etherlab.org
> Subject: RE: [etherlab-dev] Support for multiple mailbox protocols
> On 28 June 2014, quoth Jun Yuan:
> > Are you sure that some slaves will choke on multiple CoE requests?
> > Do these slaves then support simultaneous mailbox requests in
> > different protocols, i.e. CoE and EoE, or CoE and SoE in parallel?
> > Do we always need to wait until the slave has a response mail for the
> > last mailbox request before another mailbox request can be sent?
> > I haven't found the answer in the EtherCAT documentation yet.
> As far as I am aware, all of those things are unspecified and left up to
> the actual slave implementation, so the answers may vary.
> But for what it's worth, the example slave code that I've seen lets the
> vendor choose whether to implement internal send/receive queues (such that
> the mailbox accepts subsequent requests before finishing processing, and
> will *typically* respond in order -- but asynchronous replies such as CoE
> emergencies and EoE packets can be injected at any time), or whether to
> save memory and skip the queues (in which case only one thing can be
> processed at a time).
> Really it mostly depends on whether the slaves internally process mailbox
> requests synchronously or asynchronously, which is also left up to the
> vendor to decide.  And again, looking at slave example code suggests that
> most commonly processing is synchronous, but in some cases there is some
> asynchronous plumbing that supports only one pending conversation per
> protocol.
> The master can sort of tell the difference between these cases; a slave
> without queues will clear the send mailbox (allowing a subsequent send to
> succeed) while it is processing a request, but until it finishes and
> fetches the next request any further attempts to send will fail (as the
> send mailbox is still full).  Conversely a slave that implements a queue is
> likely to clear the send mailbox fairly quickly and repeatedly (even if
> it's still synchronously processing only the first request), and might
> eventually either stop pulling requests from the send mailbox or pull
> them anyway and send error responses (out of order relative to
> processing), if the master is sending requests faster than they can be
> queued.
> I *believe* that in general it is only safe to rely on having one
> in-progress conversation per protocol, such that the protocol can be used
> to determine which conversation thread the reply applies to, but that the
> master must be prepared to accept specific protocols in a different order
> to when they were sent (and again, asynchronous replies at any time).
> It's likely that many slaves will be able to cope with receiving multiple
> CoE conversations in parallel (even when internal processing is only
> synchronous), but then the master logic required to route the replies to
> the appropriate FSM becomes more complex (in case internal processing is
> asynchronous).  And I'm dubious that this should be relied on by the
> master, which is why I'm uncomfortable with the way this patch would work
> -- it would try to send both requests and then *hope* that the mailbox read
> lock is acquired by the one that the slave responds to first; if it gets it
> wrong, it will treat both replies as failures because they were routed
> incorrectly.
> But no, I can't point to anything concrete in the standards.  This is
> mostly just a feeling, partly based on slave example code.
> Regards,
> Gavin Lambert