[nvmewin] ***UNCHECKED*** MSID 0 share w/IO queue patch
Luse, Paul E
paul.e.luse at intel.com
Mon Oct 1 16:02:39 PDT 2012
All-
Here is the final patch for admin queue sharing and a few other misc cleanup items. It is fully tested, and I also reviewed it with Alex and Kwok F2F last week, but I will let Alex approve for himself. The PW is intel123.
Please let me know if there are any questions.
Thx
Paul
nvmeInit.c:
- changes in loops and conditions to account for the number of cores now being the same as the number of vectors requested
- removed the unused CoreNum element from the MSI message table (referenced as pMsiMsgTbl->CoreNum / pMMT->CoreNum)
- in NVMeMsiMapCores(), now init the MsgId in the core table to the CQ number minus 1 (same as the core #)
- in DBG mode, track when learning is complete to support an assertion to make sure learning is always working
- for the identify commands issued during the init state machine, we were DMA'ing directly into elements within devExt structures with no assurance of alignment. To address this, I changed the target address of the transfer to the driver state machine data buffer and copy the data into the devExt on completion
- in NVMeAllocIoQueues(), fixed the code to use an index that counts up through both the NUMA and core loops (what Alex saw) instead of just the inner loop counter. This value is used to index into the coreTable; the queue ID value continues to behave as before and wraps at the number of queues (see the sketch after this list)
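For the NVMeAllocIoQueues() change, here is roughly the shape of the new indexing. This is only a sketch: every name below (AssignQueuesSketch, CORE_TBL_SKETCH, coresPerNode, etc.) is made up for illustration and assumes the usual WDK integer types, not the actual driver definitions.

/* Illustrative types only -- not the real core table layout. */
typedef struct _CORE_TBL_SKETCH {
    USHORT SubQueue;   /* submission queue ID assigned to this core */
    USHORT CplQueue;   /* completion queue ID assigned to this core */
} CORE_TBL_SKETCH;

static VOID AssignQueuesSketch(
    CORE_TBL_SKETCH *pCoreTable,   /* one entry per core, across all nodes */
    ULONG numNumaNodes,
    const ULONG *coresPerNode,
    USHORT numIoQueues)
{
    ULONG coreTableIndex = 0;   /* the fix: counts up through BOTH loops */
    USHORT queueId = 1;         /* IO queues are 1-based; 0 is the admin queue */
    ULONG node, core;

    for (node = 0; node < numNumaNodes; node++) {
        for (core = 0; core < coresPerNode[node]; core++) {
            pCoreTable[coreTableIndex].SubQueue = queueId;
            pCoreTable[coreTableIndex].CplQueue = queueId;
            coreTableIndex++;

            /* the queue ID behaves as before and wraps at the number of
             * IO queues actually created */
            if (++queueId > numIoQueues) {
                queueId = 1;
            }
        }
    }
}

The point is simply that the core table is indexed with the flat counter (which never resets) rather than the inner loop variable.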
nvmeIo.c:
- two changes, both wrapped in DBG: one to print PRP details and one to init the core # element in the srbExt, which is used to make sure learning continues to do its job by comparing the submitting core with the completing core (see the sketch below)
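To illustrate the DBG-only check, the idea is along these lines. This is a standalone sketch with made-up names (SKETCH_SRB_EXT, SubmitCoreNum, learningComplete); the real driver may read the current core through a Storport helper rather than KeGetCurrentProcessorNumberEx.

#include <ntddk.h>

/* Illustrative SRB extension fragment -- not the real layout. */
typedef struct _SKETCH_SRB_EXT {
#if DBG
    ULONG SubmitCoreNum;   /* core the request was issued on */
#endif
    ULONG Reserved;        /* ...the real srbExt fields... */
} SKETCH_SRB_EXT;

#if DBG
/* Submission path: remember which core issued the IO. */
static VOID SketchRecordSubmitCore(SKETCH_SRB_EXT *pSrbExt)
{
    pSrbExt->SubmitCoreNum = KeGetCurrentProcessorNumberEx(NULL);
}

/* Completion path: once learning has finished, completions should land on
 * the same core that submitted the request. */
static VOID SketchCheckCompletionCore(const SKETCH_SRB_EXT *pSrbExt,
                                      BOOLEAN learningComplete)
{
    if (learningComplete) {
        ASSERT(KeGetCurrentProcessorNumberEx(NULL) == pSrbExt->SubmitCoreNum);
    }
}
#endif /* DBG */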
nvmeStat.c:
- changes in loops and conditions to account for the number of cores now being the same as the number of vectors requested (a quick arithmetic sketch of what that means follows this list)
- init of the debug var used for learning mode
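Just to spell out the sizing arithmetic that the cores == vectors change implies (the names and the example value below are made up):

/* Sizing math only, with made-up names. */
ULONG numberOfCores    = 8;                /* example value */
ULONG vectorsRequested = numberOfCores;    /* was numberOfCores + 1 before the admin queue shared MSI-X 0 */
ULONG msgTableEntries  = vectorsRequested; /* hence "alloc one fewer entry for the MsgTable" below */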
nvmeStd.c:
- alloc one fewer entry for the MsgTable
- in the completion routines (either DPC or ISR):
-- switched the logic to check for the common case first (shared == FALSE)
-- got rid of the learningMode var; we now detect the mode based on startState as follows:
--- if we're done with the init state machine: use the msg table to figure out which queue to look in
--- else, if we're in learning mode: use the msgId + 1. Recall that when we alloc'd the queues we set up the CQs such that QP 1 would use MSID 0, QP 2 MSID 1, etc. Learning mode loops through all of the QPs by walking the core table from 0 to the number of cores. If there are fewer QPs than cores because of an HBA limitation, this still works; we just learn each queue more than once, which does not hurt anything. Clearly things will not be optimal in that case (they can't be without enough QPs), but we'll still fully utilize all of the available queue pairs
- the loop has changed because the previous for loop didn't have the flexibility to check two queues that aren't back-to-back; the QP that shares MSIX0 could be any of the other queues. I reworked the loop into two do-while loops: for an actual admin queue request we just check the admin queue, but for the shared IO queue we always have to check the admin queue as well. This logic is at the bottom of the loop and is fairly straightforward (see the sketch after this list)
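For reviewers who want the shape of the new completion-side selection without opening the diff, here is a rough sketch. Every name in it (SketchHandleMsixMessage, msgIdToCq, ProcessCq, initComplete) is a placeholder, and it only shows the queue selection plus the shared-vector check; the real routine uses the two do-while loops described above and also handles the shared == TRUE resource case.

/* Assumes the usual WDK types; all names are illustrative. */
static VOID SketchHandleMsixMessage(
    ULONG msgId,             /* MSI-X message that fired */
    BOOLEAN initComplete,    /* init state machine finished? */
    const USHORT *msgIdToCq  /* learned msgId -> CQ mapping (msg table) */)
{
    USHORT cq;

    if (initComplete) {
        /* Normal runtime: the msg table tells us which CQ this message serves. */
        cq = msgIdToCq[msgId];
    } else {
        /* Learning mode: CQs were created so QP 1 uses MSID 0, QP 2 uses
         * MSID 1, and so on, so the CQ is simply msgId + 1. */
        cq = (USHORT)(msgId + 1);
    }

    (void)cq;   /* only needed because the drain call below is a placeholder */

    /* Drain the selected IO completion queue (placeholder). */
    /* ProcessCq(cq); */

    if (msgId == 0) {
        /* MSI-X message 0 is shared between the admin queue and one IO
         * queue, so after the IO CQ we always check the admin queue too.
         * (An actual admin-only request just checks the admin queue.) */
        /* ProcessCq(ADMIN_QUEUE_ID); */
    }
}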
nvmeStd.h:
- a few supporting changes - obvious
Also made a few changes near the end following review w/IDT:
- replaced Rtl copy calls with Storport copy calls (a quick before/after example follows this list)
- replaced Rtl zero mem calls with memset
- added print at the end of learning mode to see updated mappings (initial mappings still print)
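In case it helps anyone grepping the diff, the replacements look like this (buffer names and lengths are placeholders):

/* Before (Rtl calls): */
RtlCopyMemory(pDst, pSrc, length);        /* destination, source, byte count */
RtlZeroMemory(pBuffer, bufferLength);

/* After (per the IDT review): */
StorPortCopyMemory(pDst, pSrc, length);   /* same destination, source, byte count order */
memset(pBuffer, 0, bufferLength);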
____________________________________
Paul Luse
Sr. Staff Engineer
PCG Server Software Engineering
Desk: 480.554.3688, Mobile: 480.334.4630
-------------- next part --------------
A non-text attachment was scrubbed...
Name: msix0.zip
Type: application/x-zip-compressed
Size: 169883 bytes
Desc: msix0.zip
URL: <http://lists.openfabrics.org/pipermail/nvmewin/attachments/20121001/8732050c/attachment.bin>