[etherlab-users] Knowing when the packet has finished cycle

Shahbaz Yousefi shabbyx at gmail.com
Wed Dec 14 15:43:02 CET 2011


Hi Jun,

There is a fundamental problem here: there is no way to find out that the
value is 5 ms! Yes, with our formula, our guesses and some measurements, we
think it should be 5 ms, but we can't be sure. If we could be sure, then of
course we would wait 6 ms and everything would be perfect. Another reason we
cannot hardcode a value (as if not knowing the value weren't enough!) is that
the user of the skin (yes, we are building a robot skin) might simply add
another patch of skin to the network, increasing the delay. The delay
increases, the skin fails, there is no tactile data, and the robot might just
crush your hand while shaking it.

I did indeed think about extending the master with this feature, but there
is one thing holding me back. In fact, I would be happy if you (or Richard)
would answer this question. Imagine you have a domain so big that each
*ecrt_master_send* has to split its data into several packets. Now, if, by
sheer bad luck, you do an *ecrt_master_receive* while only half of the
packets have arrived, what happens? The fact that the master then thinks
something went wrong is something I can fix in the master code, but would
receiving again after that make everything correct? Or would that half of
the packets be thrown away, so that another read would again cause an error?

I'm not sure I'm being clear, so let me give a simple example. Imagine your
domain is so big that each *ecrt_master_send* needs to send 10 packets, and
the delay is 2 ms. Exactly 2 ms later, you issue an *ecrt_master_receive*.
At this moment, 4 packets have arrived, the 5th is being read and 5 more are
on the way, so the master gives you an error. Fine. Now, if you see that
error and wait 1 ms more so that the remaining packets arrive, would
*ecrt_master_receive* then work correctly?
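
In code, the scenario I am asking about is this (the timings are just the
ones from the example above):

    ecrt_master_send(master);      // the domain is split into 10 packets
    usleep(2000);                  // our estimated delay: 2 ms
    ecrt_master_receive(master);   // too soon: only 4 of 10 packets are back
    usleep(1000);                  // wait 1 ms; the remaining packets arrive
    ecrt_master_receive(master);   // the question: does this one now succeed?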

Regards,
Shahbaz

On Wed, Dec 14, 2011 at 12:19 PM, Jun Yuan <j.yuan at rtleaders.com> wrote:

> Hello Shahbaz,
>
> Could you tell me how small the delay has to be in your application, then?
> You said it takes about 5 ms to gather the data, so if I set the master
> cycle to 6 ms, surely I won't get errors from ecrt_master_receive, will I?
> Then you've already reduced your delay from 40 ms to 6 ms. And you still
> want more? You're greedy :D
>
> It's interesting to see that the EtherLab master doesn't limit your
> thinking; you approach it differently. But first, let's discuss my last
> version further, within the framework of EtherLab. I made a mistake with
> *if (answer_is_there)*, as I don't actually need it; see the reason below.
> So my master runs with a constant 6 ms cycle time, and there are two ways
> to speed it up further:
> 1) As I wrote in my last email, try not to let the "all domains requiring
> data" case happen. For example, you can limit the number of domains in the
> sending list. If more domains need an update, put them in a waiting list
> first and send those domain requests in the next cycle. Or, maybe better,
> schedule your domain requests so that they happen one after another (a
> sketch of this follows below).
> 2) Reduce your network. You've talked about redundancy. It would be safer
> to split your sensors into two EtherCAT buses with two EtherCAT masters.
> Having 100 slaves on a single EtherCAT bus means that if one of them goes
> down, the whole bus stops working.
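
For reference, this is how I picture your suggestion 1), one domain request
per cycle (the *domains* array and *NUM_DOMAINS* are just names I made up
for this sketch, not part of the ecrt API):

    static ec_domain_t *domains[NUM_DOMAINS]; // hypothetical list of domains
    static unsigned int current = 0;

    void cyclic_task(void)
    {
        ecrt_master_receive(master);
        ecrt_domain_process(domains[current]); // the domain sent last cycle

        current = (current + 1) % NUM_DOMAINS; // round-robin to the next one
        ecrt_domain_queue(domains[current]);   // queue only this one domain
        ecrt_master_send(master);
    }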
>
> On Tue, Dec 13, 2011 at 3:34 PM, Shahbaz Yousefi <shabbyx at gmail.com> wrote:
>
>> Hello Jun,
>>
>> Thanks for not giving up on this matter. You got it all right, and your
>> suggested solution is more or less what we had come up with as well. There
>> is no need to split the PDOs into multiple domains, and indeed EtherCAT
>
>
> You don't need to gather all the PDOs at once in each cycle, right? As
> Emanuele said to me, "It is a waste of bandwidth." Therefore I would
> split it into different domains, and gather data
>


By the way, it's only a waste of bandwidth if they have different cycle
times. So if one type of sensor produces data at 1000 Hz and another at
100 Hz, then we do split them in two, but all the 1000 Hz ones go in one
domain and all the 100 Hz ones in the other.
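
For example, with a 1 kHz cyclic task, a minimal sketch of this could be
(the names *domain_fast* and *domain_slow* are made up for illustration):

    static unsigned int cycle = 0;

    void cyclic_task(void)                    // called at 1000 Hz
    {
        ecrt_master_receive(master);
        ecrt_domain_process(domain_fast);     // 1000 Hz data, every cycle
        if (cycle % 10 == 0)
            ecrt_domain_process(domain_slow); // 100 Hz data, every 10th cycle

        ecrt_domain_queue(domain_fast);
        if (cycle % 10 == 0)
            ecrt_domain_queue(domain_slow);
        ecrt_master_send(master);

        cycle++;
    }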



>
>
>> proves to be fast enough to handle them. The problem, which was our
>> original question, is: how do you implement this line?
>>
>>     if (answer_is_there)
>>
>
> You're right. I'll delete this line now :D  I would make sure that
> *ecrt_master_receive* never comes too soon, though; in your case, using
> 6 ms as the master cycle would be fine. So I don't actually need to check
> *if (answer_is_there)*.
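
If I understand you correctly, the resulting cycle would look roughly like
this (a sketch only; 6 ms being the value we discussed):

    while (1) {
        ecrt_master_receive(master);   // frames sent 6 ms ago are back by now
        ecrt_domain_process(domain);

        // ... use the received data, prepare the next outputs ...

        ecrt_domain_queue(domain);
        ecrt_master_send(master);
        usleep(6000);                  // constant 6 ms cycle time
    }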
>
>
>> (Note: I would like to remind you that, as of the current version, if you
>> *ecrt_master_receive* too soon, the master decides that something has
>> gone horribly wrong, outputs some log messages and treats the situation
>> as an error. So in your algorithm, if none of the answers *are there*
>> (that is, you received too soon), the master decides it's an error.)
>>
>
> Right, we must make sure that we don't receive too soon.
>
>
>> Emanuele can give you more precise numbers if you are interested, but in
>> our case it takes EtherCAT about 5 ms to gather the data from all those
>> PDOs. Now, the thing is, the EtherCAT master provides no facility that
>> implements the aforementioned *if*. So the only way to realize it would
>> be to *make sure enough time has passed since the packet asking for this
>> domain's data was sent*.
>
>
> Exactly
>
>> How would you know this time? That is our question.
>>
>
> Trial and error, looking for the worst case.
>
>> I saw in *master.c*, in *ec_master_idle_thread*, a sleep of
>> *sent_bytes * EC_BYTE_TRANSMISSION_TIME_NS*. This suggests that we could
>> follow the same formula and guess our delay. We are, however, not sure
>> whether that is a well-tested, always-correct formula, or just something
>> that has worked for you and was found by trial and error.
>>
>
> I'm not among the developers of EtherLab. After being sent out by the
> master, the datagram needs to go through all the slaves on the bus, forth
> and back. So if such a formula exists, it has to take into account things
> like the number of slaves, the EtherCAT datagram processing time on each
> slave, or even the cable length, etc. It could be complicated, and none of
> these parameters is constant across applications.
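
Just to make that concrete, a back-of-the-envelope estimate along those
lines could look like this (every constant below is an assumption for
illustration, not a measured value):

    /* Rough round-trip estimate; all constants are assumptions. */
    unsigned long estimate_delay_ns(unsigned long bytes,
                                    unsigned int n_slaves,
                                    unsigned int cable_m)
    {
        return bytes * 80       /* ~80 ns per byte on 100 Mbit/s Ethernet */
             + n_slaves * 1000  /* assumed ~1 us forwarding delay per slave */
             + cable_m * 5;     /* ~5 ns per meter of signal propagation */
    }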
>
>
>> Emanuele has done some latency calculations and has indeed come to the
>> conclusion that, all things considered, the better estimate is indeed
>> higher (I'm not sure, but say by around 10%).
>>
>
> Great, so you've got an empirical formula of your own. It's reasonable
> for you to have a higher value, because your network is bigger. As I said,
> this value cannot be constant across applications. If it works for your
> case, use it in your application. And I recommend changing the macro
> *EC_BYTE_TRANSMISSION_TIME_NS* to 90 ns or higher in your EtherLab too;
> I think that would speed up your transition from PREOP to OP.
>
>
>> A better way to do it, although I am not sure of the complexity of its
>> implementation, would be for the EtherCAT master to keep a flag for each
>> domain indicating whether the last packet sent containing this domain has
>> arrived yet. That flag would, of course, be updated after a call to
>> *ecrt_master_receive*.
>>
>> So in the end the code would look something like this:
>>
>> while (1)
>> {
>>     ecrt_domain_queue(domain);
>>     ecrt_master_send(master);
>>
>>     sleep(a_lower_bound);            // a lower bound on the delay
>>     ecrt_master_receive(master);
>>     while (!domain->has_returned)    // the proposed per-domain flag
>>     {
>>         sleep(a_little);
>>         ecrt_master_receive(master); // try receiving again
>>     }
>>
>>     ecrt_domain_process(domain);
>>
>>     // process the data
>>     wait_period();
>> }
>>
>> Do you think you could implement such a thing? If so, we would most
>> certainly appreciate this feature. We would even send you a delicious
>> chocolate cake as a thank-you.
>>
>
> Thanks for the tempting cake offer, but I've got my own things to do. And
> I agree with Richard on this point: there are alternatives that reduce
> your delay within the framework of EtherLab, so why bother changing
> EtherLab? Personally, I don't like the repeated polling; it would be
> better to have an interrupt when a datagram arrives, but that's against
> the basic idea of EtherLab.
>
> If you like the polling so much that you want to implement this feature
> yourself, well, I've had a look at the source code; I think it would work
> in theory, and it seems the effort is not that huge. So good luck! And
> tell me how it goes ;)
>
> Best Regards,
> Jun YUAN
>