[openib-general] How about ib_send_page() ?

Felix Marti felix at chelsio.com
Wed May 18 20:00:15 PDT 2005


Hi Roland,
 
Define SMP :) At these rates, system architecture comes into play. For example, on Opteron (NUMA) platforms I get 7.5G (with TOE) when rx/tx runs on cpu0 (on the Tyan motherboard, the PCI-X tunnel is connected to cpu0). When I run the same test on cpu1, goodput degrades to 7.3G. Similarly, running netperf RR tests, I get just below 10us user-space latency when running on cpu0 and about 12us when running on cpu1. Xeon platforms don't show this difference (SMP), but throughput and latency are always a bit worse.
 
Yes, cacheline ping-ponging can really affect performance, but we try to avoid it as much as we can in the driver. Once I have a bit more time, I'm going to do more performance work and benchmarks on MP, including a more detailed look at NAPI.
 
Felix

________________________________

From: Roland Dreier [mailto:roland at topspin.com]
Sent: Wed 5/18/2005 1:42 PM
To: Felix Marti
Cc: Jeff Carr; openib-general at openib.org
Subject: Re: [openib-general] How about ib_send_page() ?



    Felix> I get just above 5G on RX (goodput, as reported by netperf)
    Felix> on a single opteron 248 (100%) using standard ethernet MTU
    Felix> (1500).

    Felix> TX performance is higher (close to 7G), but it is probably
    Felix> not the kind of comparison that you're interested in, since
    Felix> TSO removes the dependency on the wire MTU.

    Felix> Similarly, the TOE completely shields the host from the
    Felix> wire MTU (in addition of removing ACK traffic) and I'm
    Felix> getting 7.5G RX and TX with about 50% CPU utilization
    Felix> (user/kernel space memory copy!)

    Felix> These numbers are without NAPI.

Thanks very much for posting these numbers.  It's interesting to me
that you are using a single CPU for benchmarking.  How do the numbers
compare if you run on an SMP system?  Obviously there is more CPU
available but there are SMP losses due to cacheline pingpong, "lock"
prefix and other locking overhead, etc.

Thanks,
  Roland





