[nvmewin] HGST's review of patch for namespace management
Foster, Carolyn D
carolyn.d.foster at intel.com
Fri Jan 29 08:04:22 PST 2016
Hi Tom, thank you for the feedback. I will review your comments and make appropriate changes. I will try to get the updates out in the next two weeks.
Carolyn
From: Thomas Freeman [mailto:thomas.freeman at hgst.com]
Sent: Friday, January 29, 2016 8:41 AM
To: Foster, Carolyn D <carolyn.d.foster at intel.com>; nvmewin at lists.openfabrics.org
Subject: RE: HGST's review of patch for namespace management
Hi Carolyn,
During my testing, I saw a few more items that should be addressed. I know a few of these are not changes you made, but they are issues I saw while stepping through the code.
nvmeInit.c
NVMeSetFeaturesCompletion:line 1459
Should set pAE->DriverState.NumQueuesSet = TRUE;
NVMeSetFeaturesCompletion:line 1496
The check should validate that the SCT is GENERIC_COMMAND_STATUS.
NVMeSetFeaturesCompletion:line 1500
Also need to check for SC == INVALID_NAMESPACE_OR_FORMAT. In my testing, that is the value I saw going down this path.
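For illustration, a sketch of the shape that check might take (the completion-entry field names are assumed from the public driver source, and any status codes the existing check already accepts are omitted):
if ((pCplEntry->DW3.SF.SCT == GENERIC_COMMAND_STATUS) &&
    (pCplEntry->DW3.SF.SC == INVALID_NAMESPACE_OR_FORMAT)) {
    /* treat the namespace as one that does not support LBA Range Type and continue */
}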
NVMeSetFeaturesCompletion:line 1513
For the case where NS Mgmt is not supported, nsStatus should be set to ATTACHED
if (!pAE->controllerIdentifyData.OACS.SupportsNamespaceMgmtAndAttachment) {
pLunExt->nsStatus = ATTACHED;
}
NVMeSetFeaturesCompletion:line 1518
In the case where NS mgmt is not supported, nsStatus will be "INVALID"
} else if ((INACTIVE == pLunExt->nsStatus) || (INVALID == pLunExt->nsStatus))
NVMeAccessLbaRangeEntry:line 2361
The size should be a full page, not the sizeof() of an LBA Range entry.
nvmeStd.c
NVMeIoctlSetGetFeatures:line 3197
dataBufferSize should be set to a full page (LBA Range).
Also, this buffer is required to be contiguous memory; I don't think we can guarantee that with the IOCTL data buffer.
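As a minimal illustration of the size change (PAGE_SIZE stands in for the 4KB page constant from the WDK headers; it does not address the contiguity concern above):
dataBufferSize = PAGE_SIZE;    /* full page for the LBA Range Type data */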
During testing (with devices that do and do not support NS Mgmt), I ran into issues when I had 0 namespaces or a single detached namespace. Below are some suggestions on possible ways to address these issues.
nvmeInit.c
NVMeSetFeaturesCompletion:line 1490
When there are no namespaces (NumKnownNamespaces is 0), after setting interrupt coalescing and the number of queues, there is no need to get LBA range types; go directly to NVMeWaitOnSetupQueues.
if (pAE->DriverState.TtlLbaRangeExamined < pAE->DriverState.NumKnownNamespaces) {
pAE->DriverState.NextDriverState = NVMeWaitOnSetFeatures;
} else {
/* There are no valid luns, so skip set features steps that */
/* are issued to namespaces */
pAE->DriverState.NextDriverState = NVMeWaitOnSetupQueues;
}
NVMeInitCallback:line 1808
In my testing, going down this path left the controller disabled. Instead of calling FatalError, go to NVMeWaitOnSetFeatures.
if (pAE->controllerIdentifyData.NN == 0) {
pAE->DriverState.NextDriverState = NVMeWaitOnSetFeatures;
pAE->visibleLuns = 0;
}
NVMeInitCallback:line 1884
After processing the attached and existing NS lists, if there are no namespaces, skip the Identify Namespace step and go directly to NVMeWaitOnSetFeatures.
if (pAE->DriverState.NumKnownNamespaces == 0) {
pAE->DriverState.NextDriverState = NVMeWaitOnSetFeatures;
} else {
pAE->DriverState.NextDriverState = NVMeWaitOnIdentifyNS;
}
Tom Freeman
Software Engineer, Device Manager and Driver Development
HGST, a Western Digital company
thomas.freeman at hgst.com
507-322-2311
3605 Hwy 52 N
Rochester, MN 55901
www.hgst.com
From: nvmewin-bounces at lists.openfabrics.org [mailto:nvmewin-bounces at lists.openfabrics.org] On Behalf Of Thomas Freeman
Sent: Wednesday, January 27, 2016 12:53 PM
To: Foster, Carolyn D <carolyn.d.foster at intel.com>; nvmewin at lists.openfabrics.org
Subject: [nvmewin] HGST's review of patch for namespace management
Hi Carolyn,
I have some additional comments
nvmeInit.c:
NVMeInitCallback: line 1808:
NN is unsigned so the "<=" should be replaced with "=="
if (pAE->controllerIdentifyData.NN == 0) {
nvmeStd.c:
NVMeIoctlIdentify: line 3060
According to the NVMe 1.2 spec, when CNS is 2, "a list of 1024 namespace IDs is returned," so dataBufferSize should be set to 4KB.
NVMeIoctlIdentify: line 3060
dataBufferSize needs to be set for other values of CNS (e.g. 0x10), or the call to NVMePreparePRPs will fail.
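For illustration, a hedged sketch of per-CNS buffer sizing (the switch and the pIdentify->CNS name are illustrative, not the patch's code; the structure names are the ones used elsewhere in this thread):
switch (pIdentify->CNS) {
case 0:        /* Identify Namespace structure */
    dataBufferSize = sizeof(ADMIN_IDENTIFY_NAMESPACE);     /* 4096 bytes */
    break;
case 1:        /* Identify Controller structure */
    dataBufferSize = sizeof(ADMIN_IDENTIFY_CONTROLLER);    /* 4096 bytes */
    break;
case 2:        /* Active Namespace ID list */
case 0x10:     /* Allocated Namespace ID list (NVMe 1.2) */
    dataBufferSize = 1024 * sizeof(ULONG);                  /* 4096 bytes */
    break;
default:
    dataBufferSize = PAGE_SIZE;
    break;
}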
NVMeCompletionNsAttachment: line 4061;
The code works as is, but for correctness sizeof(ADMIN_IDENTIFY_NAMESPACE) should be specified.
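For example, something along these lines (the destination and source names are placeholders, not the actual variables at line 4061):
StorPortCopyMemory(&pLunExt->identifyData, pNsIdentifyData, sizeof(ADMIN_IDENTIFY_NAMESPACE));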
Also, I'm seeing some incorrect behavior with SCSI Report Luns.
Here are the steps and the results I'm seeing; a sketch of the LUN list length handling I would expect follows the list.
1. Create Namespaces 1,2,3,4,5. Attach only NSID 1.
2. Disable/re-enable the device
3. Attach NSID's 3 & 5 (now 1,2,3,4,5 are existing NSIDs, 1,3,5 are attached NSIDs)
4. Report LUNs results: Length 0x38, Lun List 0, 2, 4, 0, 0, 0, 0 - Should be 0x18/0, 2, 4
5. Disable/re-enable the device
6. Report LUNs results: Length 0x28 Lun list 0,1,2,0,0 - Should be 0x18/0,1,2
7. Namespace delete of NSID 1.
8. Report LUNs results: Length 0x20 Lun list 1,2,0,0 - Invalid. First LUN must be 0. From the SNTL 1.5 spec "The list shall contain logical unit numbers corresponding to namespaces present on the device with a Namespace Capacity (NCAP) field of the Identify Namespace Structure set to greater than 0h. Logical unit numbers shall begin with 0 and have a maximum value of NN-1, where NN is the Number of Namespaces field within Identify Controller Data Structure. "
9. Disable/re-enable the device
10. Report LUNs results: Length 0x20 Lun list 0,1,0,0 - Should be 0x10/0,1
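For reference, a rough sketch of the length calculation I would expect (identifiers are illustrative, not the driver's actual Report Luns code):
ULONG entryCount = 0;
ULONG lunListLength = 0;
ULONG lun;
for (lun = 0; lun < MAX_NAMESPACES; lun++) {
    /* only attached namespaces should contribute a LUN entry */
    if (pAE->pLunExtensionTable[lun]->nsStatus == ATTACHED) {
        entryCount++;
    }
}
/* LUN LIST LENGTH counts 8 bytes per reported LUN, e.g. 3 LUNs -> 0x18 */
lunListLength = entryCount * 8;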
Thanks,
Tom Freeman
Software Engineer, Device Manager and Driver Development
HGST, a Western Digital company
thomas.freeman at hgst.com
507-322-2311
3605 Hwy 52 N
Rochester, MN 55901
www.hgst.com
From: nvmewin-bounces at lists.openfabrics.org [mailto:nvmewin-bounces at lists.openfabrics.org] On Behalf Of Foster, Carolyn D
Sent: Friday, January 15, 2016 5:57 PM
To: nvmewin at lists.openfabrics.org
Subject: [nvmewin] Patch with changes for Namespace Management
Hi all,
This patch includes changes to support Namespace Management updates from the NVMe 1.2 specification. It implements fixes for handling non-contiguous namespace IDs, adds handling for attached and detached namespaces, and implements new IOCTLs to create, delete, attach and detach namespaces.
I have made a detailed overview of the changes in the text file in the attached zip file, the contents are also copied here below.
Password is intelnvme
Please let me know if you have any questions.
Carolyn Foster
This patch includes changes to support Namespace management, including create, delete,
attach and detach namespace operations. The new functionality in this patch was tested
using proprietary tools. We tested on Server 2008 R2, Server 2012 R2 and Windows 8.1
******************
Design Assumptions
******************
1. The numbering of namespaces need not be consecutive.
2. The namespace ID can be any number between 1 and 2^32.
3. A namespace is considered "active" only when it is created and attached to this controller.
4. A detached namespace, or one that has just been created but not yet attached, is considered "inactive".
5. A non-existent, or deleted namespace is considered "invalid".
6. An active namespace will result in one (and only one) "Online" LUN.
7. We assume a single-host, single-controller NVMe system.
*********************
Architecture Overview
*********************
There are four new IOCTLs for namespace management: Create, Delete, Attach and Detach. An attached
namespace will result in a visible LUN to the Windows OS. The LUN extension table has been updated
to have a Namespace status:
typedef enum _NS_STATUS
{
INVALID = 0, //Namespace ID does not exist (not known to controller).
INACTIVE, //Namespace is created, but not attached to controller.
ATTACHED //Namespace is created and attached to controller.
} NS_STATUS;
In order to properly build the LUN extension table during initialization, we made changes to identify
all namespaces, and to determine each namespace's status. These changes include new states in the
Init State Machine, NVMeRunningWaitOnListAttachedNs and NVMeRunningWaitOnListExistingNs. The updated
state machine steps are as follows:
1. Send an Identify command with CNS set to 02h. This should return a list of all active (created and attached) namespaces.
2. Go through the list and update LUN extension entries accordingly, one entry for each namespace. Set all LUN status to online.
3. Send an Identify command with CNS set to 10h. This should return a list of all existing namespaces in the system, active and inactive.
4. Go through the list.
5. If a corresponding LUN entry already exists, skip that namespace, since it must be an active namespace covered in the previous steps; otherwise, record it as an inactive (created but not attached) namespace (see the sketch below).
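A rough sketch of that second pass over the CNS 10h list (the loop variables and helper names here are hypothetical, not functions in the patch):
ULONG i, nsid;
for (i = 0; i < 1024; i++) {
    nsid = pNsIdList[i];
    if (nsid == 0)
        break;                              /* the namespace ID list is zero-terminated */
    if (FindLunEntryByNsid(pAE, nsid) != NULL)
        continue;                           /* already covered by the CNS 02h pass */
    /* created but not attached: record it so it can be attached later */
    AddLunEntry(pAE, nsid, INACTIVE);       /* hypothetical helper */
}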
LUN extension entries are populated as follows (a short sketch of the attach transition follows this list):
When a namespace is created:
- namespaceId is set.
- nsStatus is set to "INACTIVE"
- slotStatus is set to "FREE"
- identifyData is partially set
When a namespace is attached:
- Identify Namespace data is retrieved from the drive
- identifyData is set accordingly
- nsStatus is set to "ATTACHED"
- slotStatus is set to "ONLINE"
- ReadOnly is set to FALSE
When a namespace is detached:
- nsStatus is set to "INACTIVE"
- slotStatus is set to "FREE"
- ReadOnly is set to TRUE
When a namespace is deleted:
- The entire LUN extension entry is set to 0.
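As a rough illustration of the attach transition (field names follow this overview; the surrounding completion logic and the pNsIdentifyData source are assumptions):
/* on a successful Namespace Attach of namespaceId */
pLunExt->namespaceId = namespaceId;
/* Identify Namespace data has already been fetched from the drive */
StorPortCopyMemory(&pLunExt->identifyData, pNsIdentifyData,
                   sizeof(pLunExt->identifyData));
pLunExt->nsStatus = ATTACHED;
pLunExt->slotStatus = ONLINE;
pLunExt->ReadOnly = FALSE;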
There are also new reasons for the LUN to be offline:
typedef enum _LUN_OFFLINE_REASON
{
NOT_OFFLINE,
FORMAT_IN_PROGRESS,
DETACH_IN_PROGRESS,
DELETE_IN_PROGRESS
// Add more as needed
} LUN_OFFLINE_REASON;
When delete or detach requests are made, the driver will call StorPortDeviceBusy to pause incoming requests,
and the LUN is marked as offline with the appropriate reason. When a user tries to delete an attached namespace,
the driver will first send a detach command, and then the delete command.
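A minimal sketch of that sequence (the offlineReason field name, the helper names, and the request count passed to StorPortDeviceBusy are assumptions, not the patch's actual code):
/* pause incoming requests and mark the LUN offline with a reason */
pLunExt->offlineReason = DELETE_IN_PROGRESS;
StorPortDeviceBusy(pAE, pSrb->PathId, pSrb->TargetId, pSrb->Lun, 10);
/* an attached namespace must be detached before it can be deleted */
if (pLunExt->nsStatus == ATTACHED) {
    IssueNsDetach(pAE, pLunExt->namespaceId);   /* hypothetical helper */
}
IssueNsDelete(pAE, pLunExt->namespaceId);       /* hypothetical helper */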
*****************
Known Limitations
*****************
1. If no namespaces are present, the driver will fail to load.
2. If an error happens on any one namespace during initialization, the driver will fail to load.
The handling for these two scenarios could be strengthened and improved, which we did not get to in this patch.
Western Digital Corporation (and its subsidiaries) E-mail Confidentiality Notice & Disclaimer:
This e-mail and any files transmitted with it may contain confidential or legally privileged information of WDC and/or its affiliates, and are intended solely for the use of the individual or entity to which they are addressed. If you are not the intended recipient, any disclosure, copying, distribution or any action taken or omitted to be taken in reliance on it, is prohibited. If you have received this e-mail in error, please notify the sender immediately and delete the e-mail in its entirety from your system.