[nvmewin] IO queue memory
Chang, Alex
Alex.Chang at idt.com
Tue Jul 24 15:58:30 PDT 2012
Hi Paul,
Thanks for the explanation.
That said, I am not sure it is really necessary to look for pending commands in a "normal" shutdown case.
Alex
________________________________
From: Luse, Paul E [mailto:paul.e.luse at intel.com]
Sent: Tuesday, July 24, 2012 3:45 PM
To: Chang, Alex; nvmewin at lists.openfabrics.org
Subject: RE: IO queue memory
Hi Alex,
I'm aware of what the docs say, but I have a BSOD (with Verifier on) that claims an address in the range of our large chunk of queue memory is paged memory. It's accessed when we look for pending commands at shutdown via the adapterControl entry, which runs at DIRQL. With Verifier on, it's supposed to throw regardless of whether the memory is actually paged out; the check is based simply on whether it's capable of being paged out. I have a request from Microsoft to send them the DMP, which I plan on doing this week. Will keep you all posted.
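For concreteness, the pattern Verifier flags looks roughly like the sketch below. This is illustrative only, not the driver's actual code: the device-extension fields and the ScanQueueForPendingCmds helper are invented stand-ins.

#include <storport.h>

/* Illustrative only: the ScsiStopAdapter path runs at DIRQL, so every
 * byte it touches must be nonpaged. If the queue memory is pageable
 * (or Verifier merely considers it capable of being paged), Verifier
 * bugchecks here even when the pages happen to be resident. */
SCSI_ADAPTER_CONTROL_STATUS
HwAdapterControl(
    PVOID DeviceExtension,
    SCSI_ADAPTER_CONTROL_TYPE ControlType,
    PVOID Parameters
    )
{
    PHW_DEVICE_EXTENSION pDevExt = (PHW_DEVICE_EXTENSION)DeviceExtension;

    UNREFERENCED_PARAMETER(Parameters);

    if (ControlType == ScsiStopAdapter) {
        for (ULONG q = 0; q < pDevExt->NumIoQueues; q++) {
            /* Dereferences the large queue-memory chunk at DIRQL. */
            ScanQueueForPendingCmds(&pDevExt->IoQueues[q]);
        }
    }

    return ScsiAdapterControlSuccess;
}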
Thx
Paul
From: Chang, Alex [mailto:Alex.Chang at idt.com]
Sent: Tuesday, July 24, 2012 3:39 PM
To: Luse, Paul E; nvmewin at lists.openfabrics.org
Subject: RE: IO queue memory
Hi Paul,
Have you confirmed that the IO queue memory the driver allocates can actually be paged out? You brought the issue up last week and nobody could confirm it. According to the link below, StorPortAllocateContiguousMemorySpecifyCacheNode allocates a range of physically contiguous, noncached, nonpaged memory.
http://msdn.microsoft.com/en-us/library/ff567027(v=vs.85)
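For reference, the call in question looks roughly like this (a minimal sketch based on that MSDN page; the variable names and size are illustrative). Note that the CacheType parameter only selects caching behavior; per the page's description the allocation is physically contiguous and nonpaged either way.

ULONG status;
PVOID pQueueMem = NULL;
PHYSICAL_ADDRESS lowest;
PHYSICAL_ADDRESS highest;
PHYSICAL_ADDRESS boundary;

lowest.QuadPart = 0;
highest.QuadPart = (LONGLONG)-1;  /* no upper address restriction */
boundary.QuadPart = 0;            /* no boundary requirement */

status = StorPortAllocateContiguousMemorySpecifyCacheNode(
             pDevExt,           /* HwDeviceExtension */
             queueSizeInBytes,  /* illustrative size */
             lowest,
             highest,
             boundary,
             MmCached,          /* the choice under discussion; MmNonCached is the alternative */
             MM_ANY_NODE_OK,
             &pQueueMem);

if (status != STOR_STATUS_SUCCESS || pQueueMem == NULL) {
    /* handle allocation failure */
}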
Thanks,
Alex
________________________________
From: nvmewin-bounces at lists.openfabrics.org [mailto:nvmewin-bounces at lists.openfabrics.org] On Behalf Of Luse, Paul E
Sent: Wednesday, July 18, 2012 5:44 AM
To: nvmewin at lists.openfabrics.org
Subject: [nvmewin] IO queue memory
Discussion point I wanted to get some input on:
Memory type: When we designed this, we chose cached memory for our IO queues because we no longer have to worry about DMA coherency on IA. However, the implication is that our queues can now be paged out, which I don't think we want for performance reasons. Also, if we decide not to switch away from paged memory for that reason, we need to rework (minor) our shutdown code, which is touching IO queue memory at DIRQL, which, of course, you can't do. For the paging reason alone, I think we should consider non-cached allocations for the IO queues. Other thoughts?
We may also want to think about a different strategy for IO queue sizing if we switch to non-cached, to be a little more accurate/conservative about how much memory we're using based on the current config. Right now, for example, on a 32-core system we'll use 2 MB of memory just for IO queues.
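For anyone sanity-checking the 2 MB figure, rough math, assuming one SQ/CQ pair per core and a queue depth of 1024 (the depth is an assumption for illustration): NVMe submission queue entries are 64 bytes and completion queue entries 16 bytes, so a 1024-deep SQ is 64 KB per core, or 2 MB of SQ memory across 32 cores, with the CQs adding another 16 KB per core.

#include <stddef.h>

#define NVME_SQE_SIZE 64u  /* bytes per submission queue entry (NVMe spec) */
#define NVME_CQE_SIZE 16u  /* bytes per completion queue entry (NVMe spec) */

/* Total IO queue memory with one SQ/CQ pair per core; depth assumed. */
static size_t IoQueueBytes(size_t numCores, size_t queueDepth)
{
    return numCores * queueDepth * (NVME_SQE_SIZE + NVME_CQE_SIZE);
}

/* IoQueueBytes(32, 1024) = 2,621,440 bytes: 2 MB of SQs + 512 KB of CQs. */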
Thx
Paul
____________________________________
Paul Luse
Sr. Staff Engineer
PCG Server Software Engineering
Desk: 480.554.3688, Mobile: 480.334.4630