[ofa-general] Re: low bi bw with qperf for hca loopback test

Or Gerlitz ogerlitz at voltaire.com
Mon Aug 11 23:29:21 PDT 2008


Johann George wrote:
> Interesting results.  I assume in the loopback case, you are running both instances of qperf on the same machine?  I'm wondering if you are somehow CPU limited?  If you give qperf the -v option, it will print out percentage CPU utilization. Note that in its printout, a cpu is a core so 150% utilization indicates 1.5 cores.
Yes, I am running both instances on the same machine.
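
For reference, the loopback setup is along these lines (a minimal 
sketch; the exact command lines of my runs may differ slightly):

  # terminal 1: qperf server instance
  qperf

  # terminal 2: qperf client instance, -v prints CPU utilization
  qperf localhost -v rc_bi_bw
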
>
> I'm also wondering (although most unlikely) if somehow both instances of qperf are somehow stuck on the same CPU.  You can use the -la and -ra options to set the processor affinities of the client and server.
I used the -la and -ra options to place each instance on a different 
CPU. It's a dual-CPU, quad-core machine, so I placed the client on cpu 0 
and the server on cpu 4. It didn't help. I also tried polling (-P 1), 
which indeed increased the CPU utilization dramatically, but it didn't 
help either. I have a feeling that for some reason the CPU is the 
bottleneck in this test. Just to make sure, I also made two more runs, 
one with -ar 0 and one with -ar 1; the latter caused a BW reduction from 
2.5 GB/s to 1.5 GB/s, so I concluded that in all the runs I made before, 
where I didn't specify any "access-receive" directive, the data was not 
touched.
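
To be concrete, those runs looked roughly like this (a sketch, using 
only the options mentioned above; cpu numbering as described):

  # client pinned to cpu 0, server pinned to cpu 4
  qperf localhost -la 0 -ra 4 rc_bi_bw

  # same placement, with polling instead of interrupts
  qperf localhost -la 0 -ra 4 -P 1 rc_bi_bw

  # forcing the receive buffers to be touched (-ar 1) or not (-ar 0)
  qperf localhost -la 0 -ra 4 -ar 1 rc_bi_bw
  qperf localhost -la 0 -ra 4 -ar 0 rc_bi_bw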

The 2.5 GB/s is suspiciously close to the CPU frequency of this system, 
which is 2.66 GHz. Also, if I run one instance of the STREAM RAM 
benchmark, I get 2.6 GB/s of BW, so that's more evidence that qperf 
rc_bi_bw, when run in a loopback config, has some bottleneck which is 
not related to the HCA or the PCI bus / bridge...

> Finally, rc_bw and rc_bi_bw determine bandwidth using Send/Receives.  I believe that the ib_rdma_bw utility uses RDMA Writes.  For the equivalent test, use the qperf rc_rdma_write_bw test.  
mmm, it's not really possible... since qperf doesn't have an 
rdma_write_bi_bw test and ib_send_bw doesn't really work with the -b 
directive.
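
A possible workaround (just a sketch, not something I have verified, 
and I'm assuming the -lp/--listen_port option here to keep the two 
pairs apart) would be to run two independent rc_rdma_write_bw 
client/server pairs on the same machine at the same time and add up 
the two numbers:

  # two server instances on different listen ports
  qperf -lp 19765 &
  qperf -lp 19766 &

  # two concurrent clients, one against each server
  qperf localhost -lp 19765 rc_rdma_write_bw &
  qperf localhost -lp 19766 rc_rdma_write_bw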

Or.
