[ofa-general] Question on IB RDMA read timing.
Gleb Natapov
glebn at voltaire.com
Wed Oct 17 00:56:31 PDT 2007
On Wed, Oct 17, 2007 at 09:44:04AM +0200, Dotan Barak wrote:
> Hi.
>
> Bharath Ramesh wrote:
>> I wrote a simple test program to measure the actual time it takes to
>> perform an RDMA read over IB. I find a huge discrepancy in the numbers
>> returned by the timing, and I was wondering if someone could help me
>> find what I might be doing wrong in the way I am measuring the time.
>>
>> The steps I take for timing are as follows.
>>
>> 1) Create the send WR for RDMA Read.
>> 2) call gettimeofday ()
>> 3) ibv_post_send () the WR
>> 4) Loop around ibv_poll_cq () till I get the completion event.
>> 5) call gettimeofday ();
>>
>> The difference between the two times gives me the time it takes to
>> perform the RDMA read over IB. I consistently get around 35 microsecs,
>> which seems really large considering the latency of IB. I am measuring
>> the time to transfer 4K bytes of data. If anyone wants, I can send the
>> code I have written. I am not subscribed to the list, so please cc me
>> in your reply.
>>
>
> I'm not familiar with the implementation of gettimeofday, but I believe
> that this function does a context switch
> (and/or spends some time inside the function filling the struct that
> you supply to it).
>
Here:

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval tv_s, tv_e;
    gettimeofday(&tv_s, NULL);
    gettimeofday(&tv_e, NULL);
    printf("%ld\n", (long)(tv_e.tv_usec - tv_s.tv_usec));
    return 0;
}

Compile and run it. The overhead of two calls to gettimeofday is at most
1 microsecond.
--
Gleb.