[etherlab-dev] Possible Realtime Issues with Ethercat Master and RT Preempt Kernel

Tillman, Scott Scott.Tillman at bhemail.com
Thu Feb 4 00:33:15 CET 2016


> On February 03, 2016 5:48, Gavin Lambert replied:
> 
> On 3 February 2016 21:02, quoth Tillman, Scott:
> > Since you brought up the typical process cycle: I have been using a
> > process similar to the second one you describe.
[...]
> > As it is the double buffering is the same
> > idea, but causes an extra memcpy just prior to sending the domain data.
> 
> The expectation is that you'll use the EC_WRITE_* macros to insert values
> into the domain memory; this takes care of byte-swapping to little-endian for
> you if you happen to be running on a big-endian machine.  You can usually
> only get away with a blanket memcpy if you know your master code will only
> ever run on little-endian machines.

In my generic I/O framework, byte ordering is performed (if needed) during the write into, or read out of, the data exchange arena.  If the outbound frame and the inbound frame were not overlaid there wouldn't be any extra copies at all, since the exchange arena is shared memory visible to all applications.  We have yet to actually *use* it on a big-endian system, so that swapping is all just optimized away anyway.
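
For reference, the distinction looks roughly like the sketch below (the variable names and offsets are hypothetical; in practice the offsets come from PDO entry registration and the process data pointer from ecrt_domain_data()).  The EC_WRITE_* path is endian-safe on its own, while a blanket memcpy is only safe if the source buffer is already laid out in wire order:

#include <string.h>
#include <stdint.h>
#include <ecrt.h>

/* Hypothetical layout: domain_pd comes from ecrt_domain_data() after
 * activation, and the offset would normally be filled in by
 * ecrt_domain_reg_pdo_entry_list(). */
static uint8_t *domain_pd;
static unsigned int off_velocity;   /* byte offset of a 16-bit output PDO entry */

/* Endian-safe: EC_WRITE_U16 stores little-endian regardless of host order. */
static void write_outputs_portable(uint16_t velocity)
{
    EC_WRITE_U16(domain_pd + off_velocity, velocity);
}

/* Blanket copy from a shared exchange arena: only safe if the arena is
 * already in wire (little-endian) order, e.g. because the application
 * byte-swapped when it wrote into the arena. */
static void write_outputs_from_arena(const uint8_t *arena, size_t len)
{
    memcpy(domain_pd, arena, len);
}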

> > More problematic is the absence of any way to block (in user-space)
> > waiting for the domain's return packet.  As it is I am setting up my
> > clock at 0.5ms to handle a 1ms frame time:
> [...]
> > Are these two things there somewhere and I've just missed them, or is
> > there a good reason they haven't been implemented?  It seems like
> > these two items would minimize the overhead and maximize the
> > processing time available for most applications.
> 
> There isn't really a way to do that; it's a fundamental design choice of the
> master.  The EtherCAT-custom drivers disable interrupts and operate purely
> in polled mode in order to reduce the latency of handling an interrupt and
> subsequent context-switching to a kernel thread and then a user thread.
> What gets sacrificed along the way is any ability to wake up a thread when
> the packet arrives, since nothing actually knows that the packet has arrived
> until polled.
[...]

This was my understanding from perusing the mailing list, but those conversations were years old and possibly out of date.  As I mentioned, I have two kinds of RT workloads: computations that don't require immediate input data, and computations that do rely on immediate inputs.  The former are usually the more complex ones, so the split cycle works out for me.  It might be of more concern were I dealing with shorter cycle times or a higher proportion of reactive computations.
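
Roughly, the 0.5ms-clock-on-a-1ms-frame arrangement I described is the sketch below (the constants and task structure are illustrative only; the master and domain are assumed to be configured and activated elsewhere).  The point is that ecrt_master_receive() is a poll, so the receive has to be scheduled at a fixed offset into the frame rather than woken by the returning packet:

#include <time.h>
#include <ecrt.h>

#define CYCLE_NS      1000000L   /* 1 ms frame time */
#define HALF_CYCLE_NS  500000L   /* receive/process point at 0.5 ms */

/* Assumed to be set up elsewhere: ecrt_request_master(), domain creation,
 * PDO registration, ecrt_master_activate(). */
extern ec_master_t *master;
extern ec_domain_t *domain;

static void add_ns(struct timespec *t, long ns)
{
    t->tv_nsec += ns;
    while (t->tv_nsec >= 1000000000L) {
        t->tv_nsec -= 1000000000L;
        t->tv_sec++;
    }
}

void cyclic_task(void)
{
    struct timespec wake;
    clock_gettime(CLOCK_MONOTONIC, &wake);

    for (;;) {
        /* Start of frame: queue the domain datagram and put it on the wire. */
        ecrt_domain_queue(domain);
        ecrt_master_send(master);

        /* Nothing can block on the returned frame, so sleep half a cycle
         * and then poll for it. */
        add_ns(&wake, HALF_CYCLE_NS);
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &wake, NULL);
        ecrt_master_receive(master);
        ecrt_domain_process(domain);

        /* The rest of the frame is available for computations that need
         * the fresh inputs, before the next send. */
        add_ns(&wake, CYCLE_NS - HALF_CYCLE_NS);
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &wake, NULL);
    }
}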
 
-Scott Tillman
