[nvmewin] patch reminder
Chang, Alex
Alex.Chang at idt.com
Wed Feb 15 15:00:16 PST 2012
Hi Paul and Ray,
I merged the patch and have been testing it for a week as well: drive formatting, busTRACE, Iometer, SCSI Compliance, SDStress, etc. It works well. The only thing I have not tested is interrupt routing in logical mode. I wonder whether Paul has tested that.
Thanks,
Alex
________________________________
From: nvmewin-bounces at lists.openfabrics.org [mailto:nvmewin-bounces at lists.openfabrics.org] On Behalf Of Luse, Paul E
Sent: Tuesday, February 14, 2012 9:19 AM
To: nvmewin at lists.openfabrics.org
Subject: [nvmewin] patch reminder
All-
Although we're not on a strict timeline, I'd like to make sure patches don't sit for too long. This one is close to two weeks old. IDT and LSI, can you guys take a few minutes to review and let Ray know if it's good to go, or comment otherwise?
Note that it's been running on a test machine (full speed with Chatham HW and busTRACE 32-thread data integrity) for over a week now with no issues.
Thx
Paul
From: nvmewin-bounces at lists.openfabrics.org [mailto:nvmewin-bounces at lists.openfabrics.org] On Behalf Of Luse, Paul E
Sent: Friday, February 03, 2012 11:19 AM
To: nvmewin at lists.openfabrics.org
Subject: [nvmewin] ***UNCHECKED*** REVIEW REQUEST: patch for perf fix and other misc updates
Password for zip is nvme1234
Main changes: (based on tags\format_nvm_pt_ioctl)
1) Fixed DPC watchdog timeout under heavy IO by adding a call to set the per-LUN storport queue depth following INQUIRY
2) Added support for optional (compile switch) DPC or ISR completions. Defaulting to DPC, as this is the 'standard' recommended method
3) Updated mode block descriptor creation to return all F's for the number of blocks if the namespace is too big to fit in the field (per SPC)
4) Changed logical mode to map cores 1:1 to MSI-X vectors. Not optimal for vector matching, but better than sending all IO through one core, and we're covered in any scenario w.r.t. protection on submit/complete
5) Pile of Chatham-only changes
6) Changed passiveInit to wait for the state machine to complete, based on lots of issues where we missed enumeration because we weren't ready and storport doesn't retry the early enumeration commands. Ran into this at Msft as well as at UNH when using the Chatham in various platforms. Ray also hit it with QEMU on his HW (different speed than mine)
Tested (2008-R2 with Chatham and Win7-64 with QEMU, with and without driver verifier):
- Sdstress
- SCSI compliance (write 10 fails; not clear why, as the trace shows no issue. It fails with the baseline code also, so it is not related to these changes)
- BusTRACE scsi compliance
- BusTRACE data integrity
- Iometer, all access specs, queue depth 32, 8 workers
Changes:
Nvme.inf:
- Updated version
Nvmeinit.c
- Misc asserts added, some braces added here and there for readability
- NVMeMsixMapCores(): changes to support logical mode using all cores/all vectors 1:1 mapped
- Misc chatham changes
- Compile switch for DPC or ISR
Nvmeio.c
- New assert
nvmePwrMgmt.c
- Chatham only changes
nvmeSnti.c
- SntiTranslateCommand(): added adapter extension parameter for use by the API that sets the per-LUN queue depth; also set the queue depth post-INQUIRY
- Bunch of chatham changes
- SntiCreateModeParameterDescBlock(): added code to correctly fill in the number of blocks when it overflows the field
nvmeSnti.h
- Defines used by Q depth setting, function proto changes
nvmeStd.c
- DPC vs ISR compile switches
- PassiveInit waits on init state machine now
- Removed storport perf opt; it has no effect given our mapping
- Changed assert checking on vector/proc mapping so it doesn't affect the admin queue and is ignored for QEMU and for logical mode
- NVMeIsrMsix: fixed issue where shared mode would cause BSOD
- Added ISR completion support
- Chatham changes
nvmeStd.h
- Supporting struct changes
Sources
- New compile switches for ISR vs DPC and for QEMU
____________________________________
Paul Luse
Sr. Staff Engineer
PCG Server Software Engineering
Desk: 480.554.3688, Mobile: 480.334.4630