[etherlab-users] Knowing when the packet has finished cycle

Jordi Blanch Carles jordi_blanch at encopim.com
Wed Oct 19 10:52:03 CEST 2011


First of all, I don't want to enter into any type of discussion, and I
wouldn't like this thread to become much longer than it already is, but
in our case, knowing when a packet has been received after being sent
would also be very useful, because we want to close a real-time control
loop (e.g. a PID) in the EtherCAT master, that is, our PC.

In our case, the whole EtherCAT loop would be:

ask_for_inputs() <----- ecrt_domain_queue + ecrt_master_send
wait_for_packet_received()
process_inputs() <----- ecrt_master_receive + ecrt_domain_process

make_calculations()

send_outputs() <----- ecrt_domain_queue + ecrt_master_send

wait_for_next_period()

So, can somebody give us a clue on how to implement this
wait_for_packet_received() method, either as an event-based
implementation or as a polling one?
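
To make the question more concrete, here is a rough, untested sketch in C of
the polling variant we have in mind. It assumes that ecrt_domain_state() and
its wc_state field can be used for this purpose; the retry count and sleep
interval are only placeholders, and (as Richard notes below) every
receive/process call made before the frame is back may be reported as a lost
datagram:

    #include <time.h>
    #include <ecrt.h>

    /* Poll until the cyclic frame has come back, or give up after a
     * number of tries.  Returns 0 when the domain's working counter is
     * complete, -1 on timeout.  Calling ecrt_master_receive() /
     * ecrt_domain_process() too early makes the master log warnings,
     * because it cannot distinguish "too early" from "lost". */
    static int wait_for_packet_received(ec_master_t *master, ec_domain_t *domain)
    {
        ec_domain_state_t state;
        const struct timespec pause = { 0, 50 * 1000 };  /* 50 us, placeholder */
        int tries;

        for (tries = 0; tries < 200; tries++) {          /* ~10 ms max, placeholder */
            nanosleep(&pause, NULL);
            ecrt_master_receive(master);
            ecrt_domain_process(domain);
            ecrt_domain_state(domain, &state);
            if (state.wc_state == EC_WC_COMPLETE)
                return 0;
        }
        return -1;
    }

With this variant the receive/process pair already happens inside the wait,
so process_inputs() in the loop above would only need to read the domain data.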

Thank you very much


Quoting Richard Hacker <ha at igh-essen.com>:

> Hello Shahbaz,
>
> what started off as a reasonably small question has become a rather large
> support case. I am helping you both in my spare time and my working time.
> Please bear this in mind when my answers are sometimes shorter than you
> would like and I miss some of your specific questions.
>
> On the other hand, I am a minimalist. I tell you basic information from which
> you should be able to deduce the rest.
>
> Now to your case:
> 1) No, there is no way to find out whether a packet has arrived. The master
> can be seen as a library that is controlled entirely by your application.
> There is no separate engine running that does timing or processes events or
> interrupts. So all processing is done with the 4 calls that I explained to
> you last time. Hence, you can deduce for yourself that if you call
> master_receive() too early, i.e. before the Ethernet packet has arrived, the
> master will complain that the packet is lost, and rightly so! How could the
> master deduce, "Oh, I was called too early, the packet is still in transit, I
> should wait a little longer"? It cannot. That is why you should wait for the
> next wakeup of your program to handle your inputs, when you can be sure that
> the Ethernet packet has arrived.
> As I explained last time, master_receive() goes out to the network card and
> transfers the Ethernet packet to main memory. domain_process() is the one
> that complains if data was lost.
>
> 2) Using different threads is not an option for me. If one thread stops for
> some reason, then find out why it stopped and fix that. If it ran for one
> hour in a test setup, why should you be worried that it would suddenly stop
> when you are really sampling??? On the other hand, threads run in the same
> memory context. That means that if one thread SEGVs, the other is kicked out
> by the kernel as well (I assume you are running in user space - if you were
> running in kernel context, your computer would not be very useful thereafter
> either).
>
> 3) The master should be handled by one and only one process, as I said
> earlier. I have no idea how you would like to handle master_send() and
> master_receive() from different threads and/or processes. If the main
> process, i.e. the fastest process, tends to stop working, fix that - after
> all, it is your only option.
>
> 4) Sampling data is exactly the metier of EtherCAT. It has the "distributed
> clocks" feature, which synchronizes a bunch of slaves to sample data all at
> the same time with nanosecond accuracy. When your relatively slow and
> jittery program picks up the data from the slaves is pretty irrelevant. Even
> if your data only gets processed on the next wakeup, it makes no difference,
> because you know when the data was sampled, i.e. the last time master_send()
> was called. Simply collecting data as in your case and attaching a timestamp
> to it is one of the easiest tasks for EtherCAT! Why you are having doubts in
> your specific case still eludes me.
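
For instance, a minimal sketch of such timestamping, following the statement
above; the cycle() function, the static variable and the use of
clock_gettime() are illustrative, not part of the master API:

    #include <time.h>
    #include <ecrt.h>

    static struct timespec last_send_time;   /* when the previous frame was sent */

    void cycle(ec_master_t *master, ec_domain_t *domain)
    {
        struct timespec sample_time;

        ecrt_master_receive(master);
        ecrt_domain_process(domain);
        sample_time = last_send_time;         /* the inputs fetched now were
                                                 sampled around this instant */
        /* ... store/process the inputs together with sample_time ... */

        ecrt_domain_queue(domain);
        clock_gettime(CLOCK_MONOTONIC, &last_send_time);
        ecrt_master_send(master);
    }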
>
> 5) For me, having 2 tasks, one at 40ms and one at 30ms, is a rather
> hypothetical case and as such does not really fit the programming model of
> one task, i.e. the fastest task, controlling master_send() and
> master_receive(). Having 20ms and 40ms makes more sense, which fits the
> model again.
>
> 6) If you are really concerned about data redundancy as you pointed out that
> you sample the same physical data with more than one sensor, then you really
> should consider making everything redundant, that means at least 2 network
> cards with 2 physically different EtherCAT networks running on 2 separate
> computers in different locations with different programmers writing them,
> running off different power supplies from different power stations etc...
> Otherwise fix your program so that it does not crash ;)
>
> I would like to close this case on this ironic note. I do not see that there
> is a basic problem with EtherCAT for which the mailing list is the right
> place for discussion.
>
> If you still need support, because your application does sound quite
> exciting, then do not hesitate to contact us directly. We are quite willing
> to help you.
>
> - Regards
> Richard
>
> On Friday 14 October 2011 19:10:55 Shahbaz Yousefi wrote:
>> Hi again,
>>
>> That certainly shed some light on the situation. We finally came to the
>> conclusion that we will put each thread's data in its own domain, even if
>> the threads are working at the same rate.
>>
>> (This email is quite long, so I have divided it into sections that you can
>> respond to without having to read to the end of the email.)
>>
>> ___________________________________________________________________________
>>
>> Let me explain to you our situation and then ask my question about
>> master_send and master_receive.
>>
>> We are working on a huge network of sensors and (not that many) actuators.
>> By huge I mean about 30,000 PDO entries, so let's say 100 slaves. We didn't
>> want to create many domains, as that introduces considerable delay, but
>> that seems inevitable.
>>
>> However, the reason for splitting the data across different threads is not
>> the amount of data. The first reason is that the sensors provide data at
>> different rates, and therefore they are in different domains and are read
>> at different rates (hence different threads with different cycle times).
>> There is a more important reason though: we need *high reliability*. In
>> fact, the different sensor types all sense the same physical value and are
>> overlaid, so if one type fails, the other type provides that value.
>>
>> Consequently, we cannot rely on *one* thread to do the job. That is why we
>> have one thread per domain, so that if one thread fails, the system is only
>> partially down.
>>
>> By the same argument, *we cannot rely on only one thread calling
>> master_send and master_receive*. That is why I was trying to figure out a
>> way to prevent unnecessary sends and receives while each thread
>> independently tries sending and receiving. (I wonder why you didn't answer
>> this question though: *Is there a way to know whether a sent frame has
>> arrived back at the master yet or not?*)
>>
>> ___________________________________________________________________________
>>
>> Now we were wondering about one thing. If each thread independently calls
>> master_send and master_receive, what happens? Here are the situations that
>> may arise and might cause problems, and I would like to know whether they
>> are properly handled in EtherLab.
>>
>> 1. Thread 1 queues domain and sends the frame.
>>     Thread 2 immediately after queues another domain and sends the frame.
>>
>> In this situation, the two frames would be traversing the network one right
>> after the other. Can the slaves handle that? Do they have queues for many
>> frames arriving faster than they can process and forward them? Or does the
>> master know that it shouldn't send the frames too fast?
>>
>> 2. Thread 1 queues domain and sends the frame
>>     Thread 2 queues domain and sends the frame
>>     In the network, both frames finish the cycle and return back to the
>> master
>>     Thread 1 wakes up and receives (then processes the domain)
>>     Thread 2 wakes up and receives (then processes the domain)
>>
>> In this situation, does the master_receive called by Thread 1 exchange data
>> that has arrived from both frames, or only from one? In the former case,
>> the call to master_receive by Thread 2 would find that there are no new
>> packets in the Ethernet card. Can it handle that, or does it assume there
>> will always be a packet there?
>>
>> 3. Thread 1 queues domain and sends the frame
>>     Thread 2 queues domain and sends the frame
>>     Thread 1 dies!
>>     Thread 2 wakes up and receives (then processes the domain)
>>
>> This is somewhat answered by the answer to the previous situation, but I
>> just wanted to emphasize it. In such a case, would there be a residual
>> packet in the Ethernet card (because Thread 2's master_receive call took
>> only one frame), or would it properly exchange data from both arriving
>> frames?
>>
>> ___________________________________________________________________________
>>
>> Let me emphasize again why we can't have master_send and master_receive
>> only in one thread (the fastest thread). One reason is reliability: if the
>> fastest thread dies, we don't want the program to halt. The second reason
>> is delay. Imagine these two threads:
>>
>> Thread 1 working at a period of 30ms
>> Thread 2 working at a period of 40ms
>>
>> According to you, we should have Thread 1 (the faster one) do the
>> master_send and master_receive. Now consider this scenario:
>>
>> 1. Thread 1 queues domain and sends frame
>> 2. Immediately after, Thread 2 queues another domain (but it will not be
>> sent, because Thread 2 came too late)
>> 3. 30ms after (30ms after step 1), Thread 1 wakes up, exchanges data,
>> calculates something, queues domain and sends frame (This time both domains
>> are included in the frame)
>> 4. 10ms after (40ms after step 2), Thread 2 wakes up, but there is no new
>> data. It queues its domain again and sleeps
>> 5. 20ms after (30ms after step 3), Thread 1 wakes up, exchanges data etc
>> 6. 20ms after (40ms after step 4), Thread 2 wakes up and finally gets the
>> new data for its domain
>>
>> As you can see, it took Thread 2 80ms to get its data, which is twice its
>> period. The delay could have been reduced to a few milliseconds, or even
>> hundreds of microseconds, if something like this were done:
>>
>> Each thread queues domain and sends
>> loop while data has not arrived
>>       sleep in the loop so it's not really busy waiting
>> The thread receives and processes
>>
>> According to our measurements and calculations, this value can be less than
>> 4ms for the huge network I mentioned at the beginning. With your suggestion
>> we would have 20 times the delay we could have had (if only it were
>> possible to check whether a new packet has arrived in the network card - is
>> it possible?).
>>
>> ___________________________________________________________________________
>>
>> I really appreciate your time and effort and hope our use of EtherLab would
>> also provide useful feedback for you,
>> Shahbaz
>>
>> On Fri, Oct 14, 2011 at 5:11 PM, Richard Hacker <ha at igh-essen.com> wrote:
>> > Hello
>> >
>> > I think you do not quite understand what happens when:
>> >
>> > EtherCAT has 4 essential functions in cyclic mode:
>> > ecrt_master_receive(master_ptr): Fetches ethernet (yes etherNET!) data
>> > from the card. This ethernet packet contains all your input domains.
>> > ecrt_domain_process(domain_ptr): Processes the ethernet packet for domain
>> > ecrt_domain_queue(domain_ptr): Puts domain in a linked list to be sent
>> > ecrt_master_send(master_ptr): Transfers an ethernet packet made up of
>> > your input and output domains to the card
>> >
>> > Only the fastest thread should handle the pair ecrt_master_receive() and
>> > ecrt_master_send()! Different threads each handle the pair
>> > ecrt_domain_process() and ecrt_domain_queue(). Note: you must ensure that
>> > only one thread calls ecrt_domain_process() and ecrt_domain_queue() -
>> > protect these functions with semaphores.
>> >
>> > You should not have two threads running on the same domain. Open new
>> > domains for this purpose; after all, that is what domains are for!
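
For illustration, a minimal sketch in C of the split described above, assuming
the public ecrt_* API; the pthread mutex, the function names and the thread
layout are placeholders, not something prescribed by the master library:

    #include <pthread.h>
    #include <ecrt.h>

    static pthread_mutex_t io_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Fastest cyclic thread: the only one calling master_receive()/master_send();
     * it also processes and queues its own domain. */
    void fast_cycle(ec_master_t *master, ec_domain_t *fast_domain)
    {
        pthread_mutex_lock(&io_lock);
        ecrt_master_receive(master);       /* fetch the Ethernet frame            */
        ecrt_domain_process(fast_domain);  /* evaluate this thread's domain       */
        /* ... read inputs / write outputs of fast_domain ... */
        ecrt_domain_queue(fast_domain);    /* queue its datagrams again           */
        ecrt_master_send(master);          /* send a frame with everything queued */
        pthread_mutex_unlock(&io_lock);
    }

    /* Slower thread: only process/queue on its own domain, never
     * master_receive()/master_send(); its datagrams travel with the
     * frames sent by the fast thread. */
    void slow_cycle(ec_domain_t *slow_domain)
    {
        pthread_mutex_lock(&io_lock);
        ecrt_domain_process(slow_domain);
        /* ... read inputs / write outputs of slow_domain ... */
        ecrt_domain_queue(slow_domain);
        pthread_mutex_unlock(&io_lock);
    }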
>> >
>> > I cannot figure out from your description whether your threads have
>> > different frequencies. Having two threads running at the same rate will
>> > present you with some trouble, such as waking up or queueing just a
>> > little too late or too early.
>> >
>> > If you have so much data that you want to split things up into different
>> > threads, you should consider using 2 masters.
>> >
>> > - Richard
>> >
>> > On Friday 14 October 2011 15:54:04 Shahbaz Yousefi wrote:
>> > > Richard,
>> > >
>> > > Thanks for the prompt reply. I understood what you mean and where my
>> > > idea was wrong. There is, however, a question that still needs to be
>> > > answered.
>> > >
>> > > Imagine you have one domain that contains two types of PDO entries. You
>> > > have two threads that each work on this same domain, although on
>> > > different sections of it. (The reason the two are in one domain is that,
>> > > if I understood correctly, having more domains means more overhead on
>> > > the network, so we are grouping entries that have the same data rate
>> > > into one domain.)
>> > >
>> > > So, the threads go: calculate; exchange I/O; wait (therefore, get
>> > > input; calculate; write output; wait)
>> > >
>> > > Now, imagine this sequence:
>> > > 1. Thread 1 finishes calculation and writes output
>> > > 2. Slightly after, Thread 2 finishes calculation and writes output on
>> > > the SAME domain
>> > > 3. After Thread 1 wakes up, it receives the input
>> > > 4. Slightly after, Thread 2 wakes up and wants to receive input from
>> > > the SAME domain.
>> > >
>> > > The first question is: would this work with EtherLab? (That is, does
>> > > EtherLab itself understand that at step 2 the data exchange for this
>> > > domain is in progress, and would it automatically ignore this step?)
>> > >
>> > > I am assuming this won't work. Even if it does though, a solution such
>> > > as the following makes more sense to me:
>> > >
>> > > function: ethercat manager - write output
>> > >   if domain data exchange in progress
>> > >     ignore (because data is already being exchanged)
>> > >   else
>> > >     domain_queue
>> > >     master_send
>> > >
>> > > function: ethercat manager - read input
>> > >   if last write already read
>> > >     ignore (and use the latest available data)
>> > >   else if domain data exchange in progress
>> > >     ignore (if data exchange in progress, it means the other thread has
>> > > issued data exchange recently)
>> > >   else
>> > >     master_receive
>> > >     domain_process
>> > >
>> > > Now each thread goes: ethercat manager - read input; calculate;
>> > > ethercat manager - write output; wait
>> > >
>> > > This way the previous scenario would go like this:
>> > > 1. Thread 1 finishes calculation and writes output
>> > > 2. Slightly after, Thread 2 finishes calculation and its write output is
>> > > ignored by the ethercat manager
>> > > 3. After Thread 1 wakes up, it receives the input
>> > > 4. Slightly after, Thread 2 wakes up and uses the same results
>> > >
>> > > Now, however, another problem arises. What if Thread 1's read is
>> > > ignored because data is still being exchanged, and then, while Thread 1
>> > > is performing "calculate", Thread 2 performs a successful read and
>> > > changes the data, ruining Thread 1's work?
>> > >
>> > > The second question is: regardless of the solution to the problem just
>> > > mentioned, how do you check whether "domain data exchange is in
>> > > progress" in order to implement the ethercat manager functions in the
>> > > first place? (That is, when you issue a write output, how can you be
>> > > sure that when the read input is issued, the data HAS actually finished
>> > > exchanging?)
>> > >
>> > > Thank you very much for your attention,
>> > > Shahbaz
>> > >
>> > > P.S. The delay of one cycle is not a problem. That had been a
>> > > misunderstanding on my side.
>> > >
>> > > P.S.2. I have been considering separating the PDO entries into more
>> > > domains so that different threads won't share domains. That, however,
>> > > would be the last resort. If it is possible to have two threads working
>> > > with the same domain, I would be happier with that solution. If it is
>> > > impossible, tell me and I'll simply make sure no two threads use data
>> > > from the same domain.
>> >
>> > > On Fri, Oct 14, 2011 at 10:01 AM, Richard Hacker <ha at igh-essen.com> wrote:
>> > > > Hi,
>> > > >
>> > > > I am not sure why you want to go through all this trouble. Of course,
>> > > > if your calculation is so long that there is no time to exchange IO,
>> > > > you have trouble looming anyway!
>> > > >
>> > > > So what do you want to do with the data if you receive it in the same
>> > > > cycle instead of waiting until your next call? For me, there is no
>> > > > point in busily waiting until your packet arrives, instead of relaxing
>> > > > and receiving the packet next cycle.
>> > > >
>> > > > The normal run of a control program is:
>> > > > calculate; exchange io; wait; calculate; exchange io; wait; etc.,
>> > > > where exchange io means: write outputs and get new inputs. Master
>> > > > receive and domain process simply fetch and process the data that was
>> > > > transmitted with master_send at the end of your pseudo code examples.
>> > > >
>> > > > Now, exchange io is done in the background by the network card. This
>> > > > means that you could call exchange io right at the start of your cycle
>> > > > and subsequently calculate. In this case your calculation and the io
>> > > > exchange run in parallel. This is useful when your calculation is long
>> > > > and you have a lot of data to exchange, i.e.:
>> > > > exchange io; calculate; wait; exchange io; calculate; wait; etc.
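
A small sketch of this ordering, with calculate() and wait_period() standing
in for the application's own code:

    while (running) {
        ecrt_master_receive(master);   /* fetch the frame sent last cycle        */
        ecrt_domain_process(domain);   /* new inputs become available            */
        ecrt_domain_queue(domain);
        ecrt_master_send(master);      /* frame travels while we calculate       */

        calculate();                   /* outputs written here go out next cycle */
        wait_period();                 /* hence 2 cycles from input to output    */
    }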
>> > > >
>> > > > The drawback is that your propagation time from input change to
>> > > > output reaction is 2 cycles, instead of only 1. That is the price to
>> > > > pay if you have lots of data and a long calculation - there is no free
>> > > > lunch!!!
>> > > >
>> > > > I do not think that you have a problem. Draw your operations on a
>> > > > timeline and convince yourself that once you are in the loop, you have
>> > > > at most 1 cycle of delay from input to output. If that is too long for
>> > > > you, then EtherCAT is not your solution. Then you need direct IO like
>> > > > that of microcontrollers and the like.
>> > > >
>> > > > - Richard
>> > > >
>> > > > On Thursday 13 October 2011 17:12:46 Shahbaz Yousefi wrote:
>> > > > > Hi,
>> > > > >
>> > > > > I have been working with EtherLab recently and got EtherCAT up and
>> > > > > running, and everything is fine.
>> > > > >
>> > > > > There is a delay issue, however, that I'm concerned about. As seen
>> > > > > in the examples, the way you read from the network is like this
>> > > > > (imagine you are interested in reading sensor values):
>> > > > >
>> > > > > while (running)
>> > > > > {
>> > > > >   master receive
>> > > > >   domain process
>> > > > >
>> > > > >   read data
>> > > > >
>> > > > >   domain queue
>> > > > >   master send
>> > > > >
>> > > > >   rt wait period
>> > > > > }
>> > > > >
>> > > > > in which case you assume that the task period is long enough to be
>> > > > > sure that the packet sent at the bottom of the loop has returned
>> > > > > when the loop starts again, so you can receive the data.
>> > > > >
>> > > > > However, I was wondering whether it is possible, instead of taking
>> > > > > an upper bound on the time, to simply check whether the data has
>> > > > > arrived or not.
>> > > > >
>> > > > > After some research, I got to this code:
>> > > > >
>> > > > > while (running)
>> > > > > {
>> > > > >   domain queue
>> > > > >   master send
>> > > > >
>> > > > >   do
>> > > > >   {
>> > > > >     sleep a little
>> > > > >
>> > > > >     master receive
>> > > > >     domain process
>> > > > >
>> > > > >     ecrt_domain_state(domain, &state);
>> > > > >
>> > > > >   } while (state.wc_state != EC_WC_COMPLETE && !timeout)
>> > > > >
>> > > > >   read data
>> > > > >
>> > > > >   rt wait period
>> > > > > }
>> > > > >
>> > > > > This way, after sending the packet, you would read the data as soon
>> > > > > as it arrives.
>> > > > >
>> > > > > The problem with this was that, besides the fact that early calls
>> > > > > to master_receive (or domain_process) generated a huge number of
>> > > > > warnings about the working counter changing to 0/1 and back to 1/1
>> > > > > again, at some point the kernel started crashing repeatedly.
>> > > > >
>> > > > > I would like to know: how can I detect when the packet has arrived,
>> > > > > without knowing an upper bound for it and blindly waiting that long?
>> > > > >
>> > > > > Note: This is most useful for me for this reason:
>> > > > >
>> > > > > I may have different threads requesting data from a domain which
>> > > > > includes different sensors. Each type of sensor produces data at a
>> > > > > different rate, and I would like to read the data at different rates
>> > > > > too. I don't want to have different threads requesting data from the
>> > > > > same domain (and I don't think it is even possible), because they
>> > > > > may send the packet while the one for the previous thread hasn't
>> > > > > arrived yet. So what I want is this:
>> > > > >
>> > > > > ethercat coordinator:
>> > > > >
>> > > > > domain_updating = no
>> > > > >
>> > > > > send_request_for_domain
>> > > > > {
>> > > > >   if (domain_updating == no)
>> > > > >   {
>> > > > >     domain_updating = yes
>> > > > >     domain queue
>> > > > >     master send
>> > > > >   }
>> > > > >   // else, do nothing, it is being updated!
>> > > > > }
>> > > > >
>> > > > > receive_from_domain()
>> > > > > {
>> > > > >   while (ecrt_domain_data_not_yet_received) // this is the function I need
>> > > > >     wait
>> > > > >   domain_updating = no
>> > > > >   // data available
>> > > > > }
>> > > > >
>> > > > > and then, each thread that wants something from the domain would
>> > > > > look like this:
>> > > > >
>> > > > > thread:
>> > > > >   send_request_for_domain
>> > > > >   receive_from_domain
>> > > > >   read data
>> > > > >
>> > > > > This way, if two threads call send_request_for_domain at the same
>> > > > > time, only one of them would actually do the
>> > > > >
>> > > > > domain queue
>> > > > > master send
>> > > > >
>> > > > > and both of them use the result.
>> > > > >
>> > > > > I would appreciate it if you could shed some light on this matter.
>> > > > > Shahbaz
>> > > >
>>
>
> Mit freundlichem Gruß
>
> Richard Hacker
>
> --
> ------------------------------------------------------------------------
>
> Richard Hacker M.Sc.
> richard.hacker at igh-essen.com
> Tel.: +49 201 / 36014-16
>
> Ingenieurgemeinschaft IgH
> Gesellschaft für Ingenieurleistungen mbH
> Heinz-Bäcker-Str. 34
> D-45356 Essen
> Amtsgericht Essen HRB 11500
> USt-Id.-Nr.: DE 174 626 722
> Geschäftsführung:
> - Dr.-Ing. S. Rotthäuser,
> - Dr.-Ing. T. Finke,
> - Dr.-Ing. W. Hagemeister
> Tel.: +49 201 / 360-14-0
> http://www.igh-essen.com
>
> ------------------------------------------------------------------------
>



Jordi Blanch Carles
Unidad de Ensayo y Control

ENCOPIM S.L.
C/. del Parc, 5 (nave 13)
P.I. Els Pinetons
E-08291 RIPOLLET (Barcelona)
Tel: (+34) 935 94 23 47
Fax: (+34) 935 94 64 15



