<div dir="ltr">Hi Ray,<div><br></div><div>Thanks for your reply. I understand your saying that before calling <span style="font-size:12.8000001907349px">NVMeAllocQueues, the CAP.MQES has been checked and passed in as 3rd parameter(QEntries). But if you look further in NVMeAllocQueues function, QEntires will be modified.</span></div><div><span style="font-size:12.8000001907349px"><br></span></div><div><div style><span style="font-size:12.8000001907349px"> if ((QEntries % SysPageSizeInSubEntries) != 0)</span></div><div style><span style="font-size:12.8000001907349px"> QEntries = (QEntries + SysPageSizeInSubEntries) &</span></div><div style><span style="font-size:12.8000001907349px"> ~(SysPageSizeInSubEntries - 1);</span></div><div style="font-size:12.8000001907349px"><br></div></div><div style="font-size:12.8000001907349px">And finally QEntires will be used to create the Queue. But QEntries may already bigger the CAP.MQES after the above round up.</div><div style="font-size:12.8000001907349px"><br></div><div style="font-size:12.8000001907349px"><img src="cid:ii_14b8f621be303570" alt="Inline image 1" width="472" height="150"><br></div><div style="font-size:12.8000001907349px"><br></div><div style="font-size:12.8000001907349px">A simple example is if controller reports CAP.MQES is 32, host memory page size is 4K. The round up procedure will change QEntries to 64 (4 * 1024 / 64), and used to create the queue. Controller will returen Invalid Queue Size and failed the command.</div><div style="font-size:12.8000001907349px"><br></div><div style="font-size:12.8000001907349px">In summary, I think the alignment requirement is for host memory only. Driver can still allocate 4K(page aligned) for 32 entries for the above example, just keeping the second half unused. But when creating the queue, the entries should be 32, instead of 64.</div><div style="font-size:12.8000001907349px"><br></div><div style="font-size:12.8000001907349px">Please let me know if this makes sense.</div><div style="font-size:12.8000001907349px"><br></div><div style="font-size:12.8000001907349px">Thanks,</div><div style="font-size:12.8000001907349px">Wenqian</div><div style="font-size:12.8000001907349px"><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Feb 13, 2015 at 12:00 PM, <span dir="ltr"><<a href="mailto:nvmewin-request@lists.openfabrics.org" target="_blank">nvmewin-request@lists.openfabrics.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Send nvmewin mailing list submissions to<br>
<a href="mailto:nvmewin@lists.openfabrics.org">nvmewin@lists.openfabrics.org</a><br>
<br>
To subscribe or unsubscribe via the World Wide Web, visit<br>
<a href="http://lists.openfabrics.org/mailman/listinfo/nvmewin" target="_blank">http://lists.openfabrics.org/mailman/listinfo/nvmewin</a><br>
or, via email, send a message with subject or body 'help' to<br>
<a href="mailto:nvmewin-request@lists.openfabrics.org">nvmewin-request@lists.openfabrics.org</a><br>
<br>
You can reach the person managing the list at<br>
<a href="mailto:nvmewin-owner@lists.openfabrics.org">nvmewin-owner@lists.openfabrics.org</a><br>
<br>
When replying, please edit your Subject line so it is more specific<br>
than "Re: Contents of nvmewin digest..."<br>
<br>
<br>
Today's Topics:<br>
<br>
1. Re: NVMe Queue entry number (Robles, Raymond C)<br>
<br>
<br>
----------------------------------------------------------------------<br>
<br>
Message: 1<br>
Date: Fri, 13 Feb 2015 04:42:37 +0000<br>
From: "Robles, Raymond C" <<a href="mailto:raymond.c.robles@intel.com">raymond.c.robles@intel.com</a>><br>
To: 'Wenqian Wu' <<a href="mailto:wuwq85@gmail.com">wuwq85@gmail.com</a>>, "<a href="mailto:nvmewin@lists.openfabrics.org">nvmewin@lists.openfabrics.org</a>"<br>
<<a href="mailto:nvmewin@lists.openfabrics.org">nvmewin@lists.openfabrics.org</a>><br>
Subject: Re: [nvmewin] NVMe Queue entry number<br>
Message-ID:<br>
<<a href="mailto:49158E750348AA499168FD41D88983606B5DA191@fmsmsx117.amr.corp.intel.com">49158E750348AA499168FD41D88983606B5DA191@fmsmsx117.amr.corp.intel.com</a>><br>
<br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
Hi Wenqian,

Thank you for your inquiry. In the FindAdapter routine (in nvmeStd.c), the driver checks CAP.MQES and saves the value in pAE->InitInfo.IoQEntries (if the default or registry override is smaller than CAP.MQES, that smaller value is saved instead). Within the function NVMeAllocIoQueues, there is a loop that iterates to create queue pairs based on the number of CPU cores.

In this loop, a call to NVMeAllocQueues is made. Just prior to this call, the value saved in pAE->InitInfo.IoQEntries is retrieved (stack variable 'QEntries') and passed in as the 3rd parameter. So, by the time the allocation you mention below takes place, the 3rd parameter of the function has already been checked against CAP.MQES. Also, per NVMe spec sections 5.3 and 5.4, Figures 33 and 37 (Create Completion Queue and Create Submission Queue, PRP Entry 1), all queue memory must be "physically contiguous and memory page aligned".

Thanks
Ray

From: nvmewin-bounces@lists.openfabrics.org [mailto:nvmewin-bounces@lists.openfabrics.org] On Behalf Of Wenqian Wu
Sent: Wednesday, February 11, 2015 4:58 PM
To: nvmewin@lists.openfabrics.org
Subject: [nvmewin] NVMe Queue entry number

Hi OFA driver members,

I have one question regarding the queue entry number. The driver allocates a number of entries rounded up to the memory page size (line 877, nvmeInit.c), instead of the actual queue size the controller supports (CAP.MQES). The controller can return an error if the host requests more entries than the controller's capability allows. I think as long as the base address is page aligned, there is no reason to round the number of entries up to a page boundary. Can this be considered a driver bug, or is there a particular consideration behind it?

Thanks,
Wenqian

------------------------------

_______________________________________________
nvmewin mailing list
nvmewin@lists.openfabrics.org
http://lists.openfabrics.org/mailman/listinfo/nvmewin


End of nvmewin Digest, Vol 38, Issue 9
**************************************
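
P.S. Below is a minimal standalone sketch (plain C, not the driver code itself) of the behavior I described and the change I have in mind. The names PAGE_SIZE_IN_BYTES, SQ_ENTRY_SIZE_IN_BYTES, CapMqes and AllocSizeInBytes are illustrative placeholders using the values from my example above; only the round-up itself is taken from NVMeAllocQueues.

    #include <stdio.h>

    #define PAGE_SIZE_IN_BYTES       4096   /* host memory page size (example)          */
    #define SQ_ENTRY_SIZE_IN_BYTES     64   /* NVMe submission queue entry size         */

    int main(void)
    {
        /* 4096 / 64 = 64 submission entries fit in one memory page */
        unsigned int SysPageSizeInSubEntries = PAGE_SIZE_IN_BYTES / SQ_ENTRY_SIZE_IN_BYTES;
        unsigned int CapMqes          = 32;        /* queue size the controller supports */
        unsigned int QEntries         = CapMqes;   /* value passed into NVMeAllocQueues  */
        unsigned int AllocSizeInBytes = 0;

        /* Existing round-up in NVMeAllocQueues: 32 becomes 64 here, exceeding CAP.MQES */
        if ((QEntries % SysPageSizeInSubEntries) != 0)
            QEntries = (QEntries + SysPageSizeInSubEntries) &
                       ~(SysPageSizeInSubEntries - 1);

        /* Keep using the rounded value to size the page-aligned allocation (4096 bytes) */
        AllocSizeInBytes = QEntries * SQ_ENTRY_SIZE_IN_BYTES;

        /* ...but cap the entry count that goes into the Create I/O Queue command */
        if (QEntries > CapMqes)
            QEntries = CapMqes;

        printf("allocate %u bytes, create the queue with %u entries\n",
               AllocSizeInBytes, QEntries);
        return 0;
    }

In the real driver the allocation would of course still go through the existing buffer allocation path; the only functional change I am suggesting is the final cap before the entry count is programmed into the Create I/O Queue command.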