[openib-general] [PATCH] FMR support in mthca
Libor Michalek
libor at topspin.com
Mon Mar 28 17:03:51 PST 2005
On Sun, Mar 27, 2005 at 05:31:13PM +0200, Michael S. Tsirkin wrote:
> OK, here's an updated version of the patch. This passed basic
> tests: allocate/free, map/remap/unmap.
>
> For Tavor, MTTs for FMR are separate from regular MTTs, and are reserved
> at driver initialization. This is done to limit the amount of
> virtual memory needed to map the MTTs.
> For Arbel, there's no such limitation, and all MTTs and MPTs may be used
> for FMR or for regular MR.
> It would be easy to remove this limitation for Tavor on 64-bit systems, where
> it's feasible to ioremap the whole MTT table. Let me know if this is
> of interest.
>
> Please comment.
I haven't looked closely at the code yet, but I did try it out
with SDP/AIO on a pair of x86 systems with Tavors and a pair of
x86_64 systems with Arbels. With a small change to core/fmr_pool.c
and enabling pool creation in SDP it worked as expected. Here are
throughput results:
                      x86         x86_64
                    --------     --------
  SDP sync          610 MB/s     710 MB/s
  SDP async (hit)   740 MB/s     910 MB/s
  SDP async (miss)  590 MB/s     910 MB/s
For sync sockets I used 81600-byte buffers. For async sockets I kept
20 96K buffers in flight. For the FMR pool cache hit async results I
used only 20 different buffers. For the FMR pool cache miss async
results I used 1000 different buffers, of which only 20 were in flight
at a time.
-Libor
Here is the change I made to core/fmr_pool.c:
Index: fmr_pool.c
===================================================================
--- fmr_pool.c (revision 2055)
+++ fmr_pool.c (working copy)
@@ -105,7 +105,7 @@
{
return jhash_2words((u32) first_page,
(u32) (first_page >> 32),
- 0);
+ 0) & (IB_FMR_HASH_SIZE - 1);
}
/* Caller must hold pool_lock */