[nvmewin] nvmewin Digest, Vol 38, Issue 9
Robles, Raymond C
raymond.c.robles at intel.com
Tue Feb 17 16:17:23 PST 2015
Hi,
The code below simply rounds the allocated memory up to the next page boundary. The NVMe controller expects this queue memory to be host page aligned. The queue size will still be within the limits the controller communicates.
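For illustration, here is a minimal sketch of what rounding an allocation up to a page boundary looks like in byte terms (the helper name and the power-of-two page size are assumptions for this sketch, not the driver's actual code):

    #include <stddef.h>

    /* Round a byte count up to the next host page boundary. PageSize
     * must be a power of two. Note this grows only the allocation in
     * bytes; it says nothing about the entry count reported to the
     * controller. */
    static size_t RoundUpToPage(size_t Bytes, size_t PageSize)
    {
        return (Bytes + PageSize - 1) & ~(PageSize - 1);
    }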
Thanks,
Ray
From: nvmewin-bounces at lists.openfabrics.org [mailto:nvmewin-bounces at lists.openfabrics.org] On Behalf Of Wenqian Wu
Sent: Sunday, February 15, 2015 3:43 PM
To: nvmewin at lists.openfabrics.org
Subject: Re: [nvmewin] nvmewin Digest, Vol 38, Issue 9
Hi Ray,
Thanks for your reply. I understand your point that, before NVMeAllocQueues is called, CAP.MQES has been checked and the result passed in as the third parameter (QEntries). But if you look further into the NVMeAllocQueues function, QEntries is modified:
    if ((QEntries % SysPageSizeInSubEntries) != 0)
        QEntries = (QEntries + SysPageSizeInSubEntries) &
                   ~(SysPageSizeInSubEntries - 1);
This rounded-up QEntries is then used to create the queue, but after the round-up QEntries may already be bigger than CAP.MQES.
[Inline image 1: image001.png (94 KB), archived at http://lists.openfabrics.org/pipermail/nvmewin/attachments/20150218/07d9e350/attachment.png]
A simple example: the controller reports CAP.MQES as 32 and the host memory page size is 4K. The round-up procedure changes QEntries to 64 (4 * 1024 / 64 = 64 entries per page), and that value is used to create the queue. The controller will return Invalid Queue Size and fail the command.
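This arithmetic can be reproduced standalone (a minimal sketch, not driver code; the 64-byte submission queue entry size comes from the NVMe spec, and the variable names mirror the snippet quoted above):

    #include <stdio.h>

    int main(void)
    {
        unsigned int PageSize  = 4096;  /* host memory page size        */
        unsigned int EntrySize = 64;    /* NVMe SQ entry size in bytes  */
        unsigned int SysPageSizeInSubEntries = PageSize / EntrySize;  /* 64 */
        unsigned int QEntries  = 32;    /* the depth CAP.MQES allows    */

        /* The round-up quoted above: */
        if ((QEntries % SysPageSizeInSubEntries) != 0)
            QEntries = (QEntries + SysPageSizeInSubEntries) &
                       ~(SysPageSizeInSubEntries - 1);

        printf("QEntries after round-up: %u\n", QEntries);  /* prints 64 */
        return 0;
    }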
In summary, I think the alignment requirement applies to the host memory only. The driver can still allocate 4K (page aligned) for 32 entries in the above example, simply leaving the second half of the page unused. But when creating the queue, the entry count should be 32, not 64; see the sketch below.
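A minimal sketch of that alternative (illustrative values and names, not a patch against the driver): round only the byte allocation up to a page multiple, and leave the entry count capped.

    #include <stdio.h>

    int main(void)
    {
        unsigned int QEntries   = 32;    /* capped by CAP.MQES, not rounded up */
        unsigned int EntrySize  = 64;    /* NVMe SQ entry size in bytes        */
        unsigned int PageSize   = 4096;  /* host memory page size              */

        /* Round only the byte allocation up to a page multiple; the
         * unused second half of the page is simply left idle. */
        unsigned int AllocBytes = ((QEntries * EntrySize) + PageSize - 1)
                                  & ~(PageSize - 1);

        printf("allocate %u bytes, create the queue with %u entries\n",
               AllocBytes, QEntries);  /* 4096 bytes, 32 entries */
        return 0;
    }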
Please let me know if this makes sense.
Thanks,
Wenqian
On Fri, Feb 13, 2015 at 12:00 PM, <nvmewin-request at lists.openfabrics.org> wrote:
Today's Topics:
1. Re: NVMe Queue entry number (Robles, Raymond C)
----------------------------------------------------------------------
Message: 1
Date: Fri, 13 Feb 2015 04:42:37 +0000
From: "Robles, Raymond C" <raymond.c.robles at intel.com<mailto:raymond.c.robles at intel.com>>
To: 'Wenqian Wu' <wuwq85 at gmail.com<mailto:wuwq85 at gmail.com>>, "nvmewin at lists.openfabrics.org<mailto:nvmewin at lists.openfabrics.org>"
<nvmewin at lists.openfabrics.org<mailto:nvmewin at lists.openfabrics.org>>
Subject: Re: [nvmewin] NVMe Queue entry number
Message-ID:
<49158E750348AA499168FD41D88983606B5DA191 at fmsmsx117.amr.corp.intel.com>
Content-Type: text/plain; charset="utf-8"
Hi Wenqian,
Thank you for your inquiry. In the FindAdapter routine (in nvmeStd.c), the driver checks CAP.MQES and saves the value in pAE->InitInfo.IoQEntries (if the default or the registry override is smaller than CAP.MQES, that smaller value is saved instead). Within the function NVMeAllocIoQueues, there is a loop that iterates to create queue pairs based on the number of CPU cores.
In this loop, a call to NVMeAllocQueues is made. Just prior to this call, the value saved in pAE->InitInfo.IoQEntries is retrieved (into the stack variable QEntries) and passed in as the third parameter. So, by the time the allocation you mention below takes place, the third parameter has already been checked against CAP.MQES. Also, per NVMe spec sections 5.3 and 5.4, Figures 33 and 37 (PRP Entry 1 for Create I/O Completion Queue and Create I/O Submission Queue), all queue memory must be "physically contiguous and memory page aligned".
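As a sketch of that clamping step (the names are illustrative, not the driver's; note also that CAP.MQES is a 0-based field in the spec, so the usable maximum is MQES + 1):

    /* Clamp a requested I/O queue depth to what the controller
     * reports. mqes is the raw CAP.MQES value, which is 0-based. */
    static unsigned short ClampQueueDepth(unsigned short requested,
                                          unsigned short mqes)
    {
        unsigned short max = (unsigned short)(mqes + 1);
        return (requested > max) ? max : requested;
    }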
Thanks
Ray
From: nvmewin-bounces at lists.openfabrics.org [mailto:nvmewin-bounces at lists.openfabrics.org] On Behalf Of Wenqian Wu
Sent: Wednesday, February 11, 2015 4:58 PM
To: nvmewin at lists.openfabrics.org
Subject: [nvmewin] NVMe Queue entry number
Hi OFA driver member,
I have a question regarding the queue entry count. The driver allocates a number of entries rounded up to a memory page multiple (line 877, nvmeInit.c) rather than the actual queue size the controller supports (CAP.MQES). The controller can return an error if the host requests more entries than the controller's capability allows. I think that as long as the base address is page aligned, there is no reason to round the entry count up to a page boundary. Can this be considered a driver bug, or is there a particular consideration behind it?
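For context, the size handed to the controller by Create I/O Queue is expressed in entries, and the field is 0-based per the spec, so the rounding quoted earlier in this thread directly inflates the value checked against CAP.MQES. A rough sketch of how that field is built (the variable names are made up for this sketch):

    #include <stdio.h>

    int main(void)
    {
        unsigned int qid     = 1;   /* queue identifier    */
        unsigned int entries = 32;  /* desired queue depth */

        /* Create I/O SQ/CQ command Dword 10: bits 15:0 carry the
         * queue ID, bits 31:16 the queue size minus one (0-based).
         * If entries were rounded from 32 up to 64 while the
         * controller supports only 32, this value exceeds CAP.MQES
         * and the command fails with Invalid Queue Size. */
        unsigned int cdw10 = ((entries - 1) << 16) | (qid & 0xFFFF);

        printf("CDW10 = 0x%08X\n", cdw10);  /* 0x001F0001 for 32 entries */
        return 0;
    }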
Thanks,
Wenqian