[nvmewin] IO queue memory

Luse, Paul E paul.e.luse at intel.com
Tue Jul 24 13:29:58 PDT 2012


Took a week for this to make it out to the list... odd.  We talked about this already, and I'm still investigating (time has not permitted) the scenario where it appears we're getting paged pool memory - I have confirmed with a Microsoft contact that we shouldn't be.

From: nvmewin-bounces at lists.openfabrics.org [mailto:nvmewin-bounces at lists.openfabrics.org] On Behalf Of Luse, Paul E
Sent: Wednesday, July 18, 2012 5:44 AM
To: nvmewin at lists.openfabrics.org
Subject: [nvmewin] IO queue memory

Discussion point I wanted to get some input on:

Memory type:  When we designed this, we chose cached memory for our IO queues because we no longer have to worry about DMA coherency on IA.  The implication, however, is that our queues can now be paged out, which I don't think we want for performance reasons.  Also, even if we decide not to switch to non-paged for that reason, we need to rework (minor) our shutdown code, which is touching IO queue memory at DIRQL - something you can't do with pageable memory, of course.  I think for the paging reason alone we should consider non-cached allocations for the IO queues.  Other thoughts?
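
To be concrete about the alternative, here's a rough sketch only - the names, sizes, and device-extension layout are illustrative, not our actual allocation path.  StorPortAllocateContiguousMemorySpecifyCacheNode hands back physically contiguous, non-pageable memory, and passing MmNonCached for the cache type would address both the page-out concern and the DIRQL access in the shutdown path:

#include <storport.h>

#define QUEUE_ENTRY_SIZE   64      /* illustrative: NVMe SQ entry size */
#define QUEUE_DEPTH        1024    /* illustrative queue depth         */

ULONG AllocIoQueue(PVOID devExt, ULONG node, PVOID *queueBuf)
{
    PHYSICAL_ADDRESS low      = { 0 };
    PHYSICAL_ADDRESS boundary = { 0 };
    PHYSICAL_ADDRESS high;
    ULONG status;

    high.QuadPart = (LONGLONG)-1;   /* no upper address restriction */

    status = StorPortAllocateContiguousMemorySpecifyCacheNode(
                 devExt,
                 QUEUE_ENTRY_SIZE * QUEUE_DEPTH,
                 low,
                 high,
                 boundary,
                 MmNonCached,       /* non-cached, non-paged: safe to touch at DIRQL */
                 node,              /* prefer the NUMA node the queue's CPU lives on */
                 queueBuf);

    return status;                  /* STOR_STATUS_SUCCESS on success */
}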

We may also want to think about a different strategy for IO queue sizing if we switch to non-cached, so that we're a little more accurate/conservative with how much memory we use based on the current config.  Right now, for example, on a 32-core system we'll use 2MB of memory just for IO queues.
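
For context on where a number like that comes from, here's back-of-the-envelope arithmetic - the queue depth and the one-queue-pair-per-core layout are assumptions on my part for illustration, not necessarily what the driver does today:

#include <stdio.h>
#include <stddef.h>

/* NVMe submission queue entries are 64 bytes, completion entries 16. */
#define NVME_SQ_ENTRY_SIZE  64
#define NVME_CQ_ENTRY_SIZE  16

static size_t io_queue_memory_bytes(unsigned cores, unsigned depth)
{
    return (size_t)cores * depth * (NVME_SQ_ENTRY_SIZE + NVME_CQ_ENTRY_SIZE);
}

int main(void)
{
    /* 32 cores at depth 1024: 32 * 1024 * 80 = 2.5MB total,
     * 2MB of which is submission queues alone.              */
    printf("depth 1024: %zu KB\n", io_queue_memory_bytes(32, 1024) / 1024);

    /* Scaling the depth down to what the config actually needs,
     * e.g. 256, cuts this to 640KB.                             */
    printf("depth  256: %zu KB\n", io_queue_memory_bytes(32, 256) / 1024);
    return 0;
}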

Thx
Paul
____________________________________
Paul Luse
Sr. Staff Engineer
PCG Server Software Engineering
Desk: 480.554.3688, Mobile: 480.334.4630
