From raymond.c.robles at intel.com Wed Mar 1 15:26:07 2017
From: raymond.c.robles at intel.com (Robles, Raymond C)
Date: Wed, 1 Mar 2017 23:26:07 +0000
Subject: [nvmewin] Happy New Year... looking to 2017 and beyond...
In-Reply-To: <49158E750348AA499168FD41D88983607C6EE588@fmsmsx117.amr.corp.intel.com>
References: <49158E750348AA499168FD41D88983607C6E3EAD@fmsmsx117.amr.corp.intel.com>
	<49158E750348AA499168FD41D88983607C6E753A@fmsmsx117.amr.corp.intel.com>
	<49158E750348AA499168FD41D88983607C6EE588@fmsmsx117.amr.corp.intel.com>
Message-ID: <49158E750348AA499168FD41D88983607C6FAE0F@fmsmsx117.amr.corp.intel.com>

Hello OFA Community,

The deadline for the call for nominations has passed, and I did not receive any additional nominees. Therefore, Uma Parepalli will assume the role of OFA NVMe Windows chair. Please join me in welcoming Uma to his new role. Over the next couple of weeks, I will work with Uma to transition the chair role from myself to him. If you have any outstanding issues, new questions, or just any general comments, please feel free to ask. I've enjoyed my time as part of this community, and moving forward I will continue to observe and be involved for consultation.

I would like to thank all the OFA members who have helped make the OFA driver a successful open source Windows NVMe driver!

Best Wishes!
Ray Robles

From: nvmewin [mailto:nvmewin-bounces at lists.openfabrics.org] On Behalf Of Robles, Raymond C
Sent: Friday, February 10, 2017 12:09 PM
To: nvmewin at lists.openfabrics.org
Cc: Huffman, Amber; 'Uma.parepalli at viavisolutions.com'; Olsson, Claes
Subject: Re: [nvmewin] Happy New Year... looking to 2017 and beyond...

Hello... Final reminder: today is the deadline for nominees. I will call for a vote next week.

Thank you...
Ray

From: Robles, Raymond C
Sent: Tuesday, January 31, 2017 9:16 AM
To: Robles, Raymond C; nvmewin at lists.openfabrics.org
Cc: Huffman, Amber; Olsson, Claes; 'Uma.parepalli at viavisolutions.com'
Subject: RE: Happy New Year... looking to 2017 and beyond...

Hello... Friendly reminder on nominations for the new OFA NVMe Windows chair. The deadline for nominations is Feb. 10, 2017.

Thank you...
Ray

From: nvmewin [mailto:nvmewin-bounces at lists.openfabrics.org] On Behalf Of Robles, Raymond C
Sent: Wednesday, January 25, 2017 4:18 PM
To: nvmewin at lists.openfabrics.org
Cc: Huffman, Amber; Olsson, Claes
Subject: [nvmewin] Happy New Year... looking to 2017 and beyond...

Happy new year, OFA community family!

As we closed out 2016, we finally released revision 1.5 of the OFA driver. This was a great achievement, and I was very happy to be a part of it. As we look forward to 2017 and beyond, we are in a good position to keep providing a robust, performant, and reliable Windows NVMe reference driver.

With that in mind, I believe it is time for me to step down as acting chair of the OFA NVMe Windows community. I've been in the chair position for about two years, and this was my second stint. As with the original intent, the chair/maintainer position was designed to rotate between companies in much the same way the NVMe promoters group works. As I prepare to step down, I have talked to potential candidates for a replacement. As always, I believe the OFA community should nominate and vote on its new chair.

In that spirit, I would like to nominate Uma Parepalli. Uma has been a strong advocate of the OFA NVMe Windows driver (and of all open source NVMe drivers) and has led technical talks at previous Flash Memory Summits. I've personally spoken with Uma, and I believe he would make a great chair.

At this time, I would like to call for any other nominations for the OFA NVMe Windows chair. Please submit any nominations before Feb. 10th, 2017...
at which point, I will call for a vote.

Thanks...
Ray

Raymond C. Robles
NSG ISE Host Storage Software
Intel Corporation
Office: 480-554-2600
Mobile: 480-399-0645
raymond.c.robles at intel.com

From raymond.c.robles at intel.com Thu Mar 2 11:55:01 2017
From: raymond.c.robles at intel.com (Robles, Raymond C)
Date: Thu, 2 Mar 2017 19:55:01 +0000
Subject: [nvmewin] Opensource NVME driver
Message-ID: <49158E750348AA499168FD41D88983607C6FBB07@fmsmsx117.amr.corp.intel.com>

Hi David,

Thanks for your question. First off, your email was blocked on this distribution list because you are not subscribed (I had to manually approve it). Please go to the following link to register/subscribe; you will then be free to send and receive emails on this distribution.

http://lists.openfabrics.org/mailman/listinfo/nvmewin

As for your question on signing: we test sign the driver only. This is because the OFA NVMe Windows driver is an open source driver (under the FreeBSD license), and the source is available for anyone to see and use. Signing a driver with a production digital certificate requires owning that certificate. This driver/community is about enabling NVMe SSD vendors, so we do not deliver a fully signed driver; each user of the driver must determine the best path for signing.

As for your second question on performance: there are several differences between the OFA NVMe Windows driver and Microsoft's inbox NVMe driver. Our driver is NVMe 1.2 compliant, with several additional enhancements, while MSFT's inbox driver is 1.0e compliant. Performance is always dependent on the platform and other software components, so we do not make any claims about the OFA driver vs.
the MSFT inbox driver. However, the OFA driver implements all of the documented performance enhancements described on MSDN. You are free to view the source code at the following link:

https://svn.openfabrics.org/svnrepo/nvmewin/

Thanks...
Ray

From: nvmewin [mailto:nvmewin-bounces at lists.openfabrics.org] On Behalf Of david moheban
Sent: Friday, February 24, 2017 7:58 AM
To: nvmewin at lists.openfabrics.org
Subject: [nvmewin] Opensource NVME driver

Hi,

I stumbled across your driver by accident, but the readme states you have to disable Test Signing. Isn't there a signed driver out there already where you don't have to do that? Second, are there performance benefits to your driver vs. the standard Windows NVMe driver?

Thank you

From mqudsi at neosmart.net Fri Mar 24 14:50:02 2017
From: mqudsi at neosmart.net (Mahmoud Al-Qudsi)
Date: Fri, 24 Mar 2017 16:50:02 -0500
Subject: [nvmewin] Full system lockup w/ all NVMe drivers
Message-ID: <58d5948a.d90c6b0a.1ade5.85bc@mx.google.com>

Hello list,

I'm writing here after attempting to use the OpenFabrics nvmewin driver on two different 7th-gen Intel machines (CM238 chipsets) under multiple clean installations of Windows 10. In each case, I end up at a point where all disk access locks up and the machine slowly grinds to a halt (without a BSOD) as requests for unpaged data from the disk pile up.

Generous use of !storagekd.* in WinDbg reveals that the last of the pending requests to the disk is a RESET LUN SRB; previous commands failed with the non-descript SRB failure code 0x04, indicating a generic HBA failure without a specific error code. The sense data for the failed requests is all zeros. The same physical disk (a Samsung 960 Pro) works just fine in a different machine (6th-gen Xeon, CM236 chipset). The hang occurs without fail under random write stress testing, but it also happens when the machine is left unattended for a few hours.
I've attempted to disable PCIe link power management, automatic shutdown of the hard disk, etc. in the power savings options, but all to no avail. I'm really not sure what to try next. The testing has been primarily under Windows 10 RS2 betas; a clean install of build 15011 did not trigger the failure case, but perhaps it was not tested long enough.

This system lockup occurs with the Microsoft, Samsung, Intel, and now OFA nvmewin drivers. No 3rd-party upper/lower filters are installed. I've attempted to track down the problem by logging all storport.sys/miniport commands via Performance Monitor, but unfortunately it absolutely refuses to use unbuffered writes (the smallest buffer size option is 1 KB, and the most frequent flush interval that can be configured is every 1 second). I can visibly see the USB drive it is logging to blink as writes are flushed after disk access locks up, but still, the resulting ETL does not contain the very last requests to the disk: it reveals no timeouts, retries, or the final LUN reset. The error occurs even in safe mode.

I am genuinely at my wits' end with this one. I initially thought it was a very odd hardware-related error, but that seems to be ruled out by the fact that it occurs on multiple devices with the same drive, yet that same drive works flawlessly in other machines. I'd appreciate any insight or suggestions anyone has.

Thank you,
Mahmoud Al-Qudsi
NeoSmart Technologies

From Ankit.Patodiya at dell.com Tue Mar 21 03:29:45 2017
From: Ankit.Patodiya at dell.com (Patodiya, Ankit)
Date: Tue, 21 Mar 2017 10:29:45 +0000
Subject: [nvmewin] Create Namespace with full NVM capacity
Message-ID: <3F7D8571748BFC4A9FFA6A93A3223BAC449FFEBB@MX201CL04.corp.emc.com>

As per the NVMe 1.2.1 spec:

"NVM Capacity (NVMCAP): This field indicates the total size of the NVM allocated to this namespace. The value is in bytes.
This field shall be supported if Namespace Management and Namespace Attachment commands are supported. Note: This field may not correspond to the logical block size multiplied by the Namespace Size field. Due to thin provisioning or other settings (e.g., endurance), this field may be larger or smaller than the Namespace Size reported.

Namespace Size (NSZE): This field indicates the total size of the namespace in logical blocks. A namespace of size n consists of LBA 0 through (n - 1). The number of logical blocks is based on the formatted LBA size. This field is undefined prior to the namespace being formatted. The size of a namespace is based on the number of logical blocks requested in a create operation, the format of the namespace, and any characteristics (e.g., endurance). The controller determines the NVM capacity allocated for that namespace. Namespaces may be created with different usage characteristics (e.g., endurance) that utilize differing amounts of NVM capacity. Namespace characteristics and the mapping of these characteristics to NVM capacity usage are outside the scope of this specification."

I am interested in creating a single namespace that uses the entire NVM capacity, so that I am not left with any unallocated NVM capacity. However, the namespace creation command takes the number of logical blocks, not the size in bytes. What is the exact relation between NVM capacity in bytes and Namespace Size in blocks? How do I account for factors like endurance to arrive at a block count such that the entire NVM capacity is used?

Thanks,
Ankit
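A minimal sketch of the byte-to-block conversion the question asks about, assuming the host reads the unallocated capacity from the Identify Controller UNVMCAP field (bytes) and the formatted LBA size from the LBADS field (log2 of the LBA data size) of the active LBA format, both defined in NVMe 1.2. The function name and example values are illustrative only; per the spec text quoted above, the controller may reserve additional capacity for endurance or thin provisioning, so there is no host-side formula that guarantees an exact fit:

```python
# Hedged sketch: estimate the NSZE (logical blocks) to request when creating
# a namespace intended to consume all unallocated NVM capacity.
# UNVMCAP comes from Identify Controller (bytes); LBADS from the chosen
# LBA format (log2 of the LBA data size). Names here are illustrative.

def blocks_for_full_capacity(unvmcap_bytes: int, lbads: int) -> int:
    """Largest whole number of logical blocks that fits in the unallocated
    NVM capacity. Treat this as an upper bound: the controller decides the
    NVM capacity actually allocated (endurance, thin provisioning), so a
    create with this NSZE may need to be retried with a smaller value."""
    lba_size = 1 << lbads              # formatted LBA data size in bytes
    return unvmcap_bytes // lba_size   # round down to whole blocks

# Example: 512 GiB unallocated, 4 KiB formatted LBA size (LBADS = 12)
nsze = blocks_for_full_capacity(512 * 2**30, 12)
print(nsze)  # -> 134217728
```

In practice a host would issue Namespace Management (create) with this NSZE, check the resulting namespace's NVMCAP, and shrink the request if the controller rejects it for insufficient capacity.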