From Alex.Chang at idt.com Mon Nov 5 10:51:52 2012
From: Alex.Chang at idt.com (Chang, Alex)
Date: Mon, 5 Nov 2012 18:51:52 +0000
Subject: [nvmewin] Bug Fix Patch - Review Request
Message-ID: <548C5470AAD9DA4A85D259B663190D361FFBFFE7@corpmail1.na.ads.idt.com>

Hi Paul,

Last Friday, when I ran IOMeter with 4K random reads on the latest patch, the performance numbers (e.g., the number of IOs) dropped by half. After debugging it, the cause is the call to StorPortGetUncachedExtension. I am not quite sure why. However, if the assertion only happens on checked-build Windows 8, I think we should add the call for that specific case only. Have you guys ever seen the performance drop?

Thanks,
Alex

________________________________
From: Luse, Paul E [mailto:paul.e.luse at intel.com]
Sent: Thursday, October 25, 2012 10:53 AM
To: Chang, Alex; Murray, Kris R; nvmewin at lists.openfabrics.org
Subject: RE: Bug Fix Patch - Review Request

I think the answer is that we don't care. Msft suggested that we avoid causing assertions in the checked OS; that's all this does. We don't use the memory (and don't have to free it), so the only issue if the allocation failed would be that someone who happened to be running a checked OS at the time would get an assert in Storport just after find-adapter; they could ignore it and move on without any further problems.

From: nvmewin-bounces at lists.openfabrics.org [mailto:nvmewin-bounces at lists.openfabrics.org] On Behalf Of Chang, Alex
Sent: Thursday, October 25, 2012 10:32 AM
To: Murray, Kris R; nvmewin at lists.openfabrics.org
Subject: Re: [nvmewin] Bug Fix Patch - Review Request

My concern is: what if the allocation fails by any chance?

Alex

________________________________
From: Murray, Kris R [mailto:kris.r.murray at intel.com]
Sent: Thursday, October 25, 2012 10:23 AM
To: Chang, Alex; nvmewin at lists.openfabrics.org
Subject: RE: Bug Fix Patch - Review Request

Alex,
Since we don't use the returned pointer, I believe there is no need to validate it. The goal is to cause Storport to allocate a DMA adapter object. See the attached email from James Harris for more info.
~Kris

From: Chang, Alex [mailto:Alex.Chang at idt.com]
Sent: Thursday, October 25, 2012 10:11 AM
To: Murray, Kris R; nvmewin at lists.openfabrics.org
Subject: RE: Bug Fix Patch - Review Request

Hi Kris,
I have a quick question regarding StorPortGetUncachedExtension: the routine returns a pointer to the allocated buffer; should we validate the pointer before proceeding?

Thanks,
Alex

________________________________
From: nvmewin-bounces at lists.openfabrics.org [mailto:nvmewin-bounces at lists.openfabrics.org] On Behalf Of Murray, Kris R
Sent: Tuesday, October 16, 2012 10:07 AM
To: nvmewin at lists.openfabrics.org
Subject: [nvmewin] Bug Fix Patch - Review Request

Hi all,
The attached NVMe.zip file changes include the fixes below:
* nvmeStd.c
  o Added a call to StorPortGetUncachedExtension to fix a checked OS assertion
* nvmeSnti.c
  o Fixed the SntiTranslateRead6 function to use the Read mask for the LBA instead of the Write mask
  o Fixed the SntiTranslateWrite6 function to use the correct macro for getting 24 bits from the CDB using the correct offset
* nvmeSntiTypes.h
  o Updated the READ_6_CDB_LBA_MASK definition to match the one for Write
  o Fixed WRITE_6_CDB_LBA_OFFSET from 0 to 1

The attached Results.zip file contains results from the test matrix below:
Operating Systems:
* Windows 7
* Windows 8
* Windows Server 2008
* Windows Server 2012
Tests:
* IOMeter
* SCSI Compliance
* PCMark

Please review the changes, and feel free to send me comments and questions.

Thanks,
~Kris Murray
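For context on the Read6/Write6 items above: in the SCSI 6-byte CDB the opcode is byte 0, so the LBA field starts at byte 1 and only the low five bits of that byte belong to the LBA. A minimal sketch of the extraction the mask/offset fixes imply; the macro and function names below are illustrative, not the ones in nvmeSntiTypes.h:

    /* Illustrative only, not the OFA driver source. */
    #define CDB6_LBA_OFFSET   1       /* the offset fix: LBA starts at byte 1, not byte 0 */
    #define CDB6_LBA_HI_MASK  0x1Fu   /* only bits 4:0 of byte 1 carry LBA bits           */

    static unsigned int GetCdb6Lba(const unsigned char *cdb)
    {
        return ((unsigned int)(cdb[CDB6_LBA_OFFSET] & CDB6_LBA_HI_MASK) << 16) |
               ((unsigned int)cdb[CDB6_LBA_OFFSET + 1] << 8) |
                (unsigned int)cdb[CDB6_LBA_OFFSET + 2];
    }

Using the Write mask for a Read (or an offset of 0) pulls the opcode byte into the LBA, which is exactly the class of bug the patch describes.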
From paul.e.luse at intel.com Mon Nov 5 11:17:20 2012
From: paul.e.luse at intel.com (Luse, Paul E)
Date: Mon, 5 Nov 2012 19:17:20 +0000
Subject: [nvmewin] Bug Fix Patch - Review Request

Well, that's odd. Let me see if I can reproduce it here; if so, I'll ping the Storport guys, and if not, we can compare configs.

Sent from my iPhone
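For readers following the thread, the call in question is normally made from the miniport's find-adapter path. A minimal sketch of the workaround as described above; the helper name and the one-page size are assumptions, not the driver's actual code:

    #include <storport.h>

    /* Sketch only. One StorPortGetUncachedExtension() call during
     * find-adapter causes Storport to build its DMA adapter object,
     * which is what silences the checked-build assert. The buffer is
     * never used and Storport owns its lifetime, so the return value
     * is deliberately ignored. */
    #define DUMMY_UNCACHED_EXT_SIZE 4096    /* any non-zero size; assumed value */

    static VOID ForceDmaAdapterAllocation(
        PVOID pDevExt,
        PPORT_CONFIGURATION_INFORMATION pConfigInfo)
    {
        (VOID) StorPortGetUncachedExtension(pDevExt,
                                            pConfigInfo,
                                            DUMMY_UNCACHED_EXT_SIZE);
    }

Per the thread above, a NULL return is tolerable here: the worst case is the very assert the call is meant to avoid, and only on a checked OS.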
From paul.e.luse at intel.com Wed Nov 7 08:29:15 2012
From: paul.e.luse at intel.com (Luse, Paul E)
Date: Wed, 7 Nov 2012 16:29:15 +0000
Subject: [nvmewin] Bug Fix Patch - Review Request
Message-ID: <82C9F782B054C94B9FC04A331649C77A07BCE2B4@FMSMSX106.amr.corp.intel.com>

Hi Alex-

I tried this on the following config and didn't see the performance drop. Can you please send details of your config (including your IOMeter ICF file) so I can try again?

System: 8-core i7-2600 desktop (I can switch to a 2 NUMA node, 32-core server if needed)
Mem: 8GB
OS: 2008-R2 SP1
Iometer: built from source, 2008-16-18-RC2
Config: 8 workers, 32 OIO per worker, 100% read, 100% random, 4K
Results: pretty steady at 187K with or without the change

I also tried the previous patch, msix0 shared, and one that I'm about ready to submit that deals mostly with error handling (just to make sure), and they all behave the same.
Could be that my use of a desktop is masking what you're seeing, but before I mess with the big loud server I figured I'd check with you first :)

Paul
From paul.e.luse at intel.com Wed Nov 7 15:21:13 2012
From: paul.e.luse at intel.com (Luse, Paul E)
Date: Wed, 7 Nov 2012 23:21:13 +0000
Subject: [nvmewin] Bug Fix Patch - Review Request
Message-ID: <82C9F782B054C94B9FC04A331649C77A07BCED7A@FMSMSX106.amr.corp.intel.com>

Hi Alex-

I re-ran on a 2 NUMA node, 32-core machine and now see a big difference: not quite as big as yours (20% vs. 50%), but it's clearly that one line of code affecting what Storport does. I have a new patch, the one I mentioned before, that I'll be sending out in the next few days, and I'll go ahead and remove this change for now until we figure out why it's causing the issue. Getting rid of that checked-OS assert was a suggestion, not a requirement, so there's no point in keeping the call unless it does no other harm. I'll let you all know what comes of looking into it further, if anything. Thanks for catching that!

Thx
Paul
From Alex.Chang at idt.com Wed Nov 7 15:27:30 2012
From: Alex.Chang at idt.com (Chang, Alex)
Date: Wed, 7 Nov 2012 23:27:30 +0000
Subject: [nvmewin] Bug Fix Patch - Review Request
Message-ID: <548C5470AAD9DA4A85D259B663190D361FFC032B@corpmail1.na.ads.idt.com>

Thanks a lot, Paul, for verifying this issue.

Alex
From paul.e.luse at intel.com Thu Nov 8 13:12:13 2012
From: paul.e.luse at intel.com (Luse, Paul E)
Date: Thu, 8 Nov 2012 21:12:13 +0000
Subject: [nvmewin] next patch
Message-ID: <82C9F782B054C94B9FC04A331649C77A07BCFCE9@FMSMSX106.amr.corp.intel.com>

All-

Please review; I'll be looking for feedback here over the next few weeks, let's say by Mon Nov 19 if possible (just before the holidays). Let me know if you have any questions or if anyone would like to schedule a call to walk through any of this.

Thx
Paul

Main Changes:
- fix reset code, which was using the wrong criteria for identifying outstanding IOs on a timeout/reset
- misc cleanup of commands and some prints
- removal of all code related to the driver issuing AERs
- fix read_cap translation off-by-one error
- add support for checking the max data transfer size reported by HW to make sure init can continue (we don't find out what it is until after we've reported to Storport)
- removed the portion of the last patch that addressed a Storport assert; Alex discovered a perf side effect (confirmed here as well)

Testing:
- the majority of testing was around error handling; I modified QEMU to drop an IO every 10K or so IOs to simulate an IO timeout in hardware. Verified this with and without load; with load, used 4 threads of data integrity testing that ran 48 hrs with continual resets/recoveries and no adverse effects

Detail:

nvmeInit.c
- removed excess prints, added a few on important non-frequent activities
- replaced all StorPortMoveMemory with StorPortCopyMemory and memcpy with StorPortCopyMemory
- added support for checking MDTS. If we find that the card doesn't support the transfer size we already reported (i.e., it's too small), then we have no choice but to fail the init state machine at this point in time, or we'll get transfers that the HW can't handle.
- removed all code having to do with AER. This was an initial design choice to include issuing AERs as part of the init state machine; however, it makes little sense for the driver to do this. A mgmt app should be doing this via PT IOCTL so that it can properly log the response. The driver can do very little with the response unless someone adds additional code to pass it up to a mgmt app, in which case there's no value add in the driver being in the middle of it.
Nvmeinit.h
- removed AER function proto

Nvmeio.c
- replaced all memcpy with StorPortCopyMemory
- updated NVMeDetectPendingCmds() so it can be used by the reset DPC to clean up pending commands. What we were doing before was cleaning up commands that were on the SQ but hadn't been picked up by FW yet, which was simply wrong and will always be zero since we submit one command at a time. The correct set of commands that we need to send back following a reset are those detected by NVMeDetectPendingCmds(), so I added a parm so it can serve that purpose as well
- changed the prints in NVMeDetectPendingCmds() so they print by default in a free build of the driver. Implementations can change this if they want, but even on a free build you'd generally like to be able to see if anything timed out and what was sent back if so

Nvmeio.h
- supporting func header change

Nvmepwrmgmt.c
- new parm for the call to NVMeDetectPendingCmds()

Nvmesnti.c
- replaced all StorPortMoveMemory with StorPortCopyMemory and memcpy with StorPortCopyMemory
- fix for read_cap translations; need to subtract one from the translated value of NSZE as it's not zero-based

Nvmestat.c
- removed AER code

Nvmestd.c
- removed the call to StorPortGetUncachedExtension(), which was causing performance issues. We'll add it back after we fully understand the correct implementation that avoids the Storport assert and has no side effects
- removed AER code
- added debug print
- reworked RecoveryDpcRoutine() to use NVMeDetectPendingCmds() for returning commands to Storport
- replaced all StorPortMoveMemory with StorPortCopyMemory

Nvmestd.h
- fix typo in enum
- add new init state machine failure code for max xfer mismatch
- remove AER code

Nvme.h
- pragma for SMART data to be properly formatted

Nvmeioctl.h
- new ioctl status code for max AER (even though we don't issue them from the driver, we can still track how many are issued)

____________________________________
Paul Luse
Sr. Staff Engineer
PCG Server Software Engineering
Desk: 480.554.3688, Mobile: 480.334.4630

-------------- next part --------------
A non-text attachment was scrubbed...
Name: OFA.zip
Type: application/x-zip-compressed
Size: 167467 bytes
Desc: OFA.zip
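On the MDTS item under nvmeInit.c: MDTS in the Identify Controller data is a power of two in units of the controller's minimum memory page size, 2^(12 + CAP.MPSMIN), with 0 meaning no limit. A small sketch of the kind of check described, under the assumption that reportedMaxXferBytes stands for the MaximumTransferLength already given to Storport in find-adapter; the names are illustrative, not the OFA driver's:

    /* Sketch only; not the driver's actual init state machine code. */
    static int MdtsAllowsReportedXferSize(
        unsigned char      mdts,                 /* Identify Controller MDTS   */
        unsigned char      mpsMin,               /* CAP.MPSMIN                 */
        unsigned long long reportedMaxXferBytes) /* already reported to Storport */
    {
        unsigned long long ctrlMaxBytes;

        if (mdts == 0)
            return 1;                            /* controller imposes no limit */

        /* Max transfer = 2^MDTS pages of 2^(12 + MPSMIN) bytes each. */
        ctrlMaxBytes = 1ULL << (mdts + 12 + mpsMin);

        /* If the hardware limit is below what Storport was already told,
         * init has to fail rather than let oversized transfers through. */
        return ctrlMaxBytes >= reportedMaxXferBytes;
    }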
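The read_cap note under Nvmesnti.c is the usual count-versus-last-address conversion: NSZE in the Identify Namespace data is a block count, while READ CAPACITY reports the address of the last logical block, which is zero-based. A one-function sketch of the rule, not the driver's translation code:

    /* A namespace with NSZE == 100 blocks has LBAs 0..99, so the last-LBA
     * value returned for READ CAPACITY is NSZE - 1. */
    static unsigned long long ReadCapLastLbaFromNsze(unsigned long long nsze)
    {
        return (nsze == 0) ? 0 : nsze - 1;
    }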
From paul.e.luse at intel.com Mon Nov 12 17:11:38 2012
From: paul.e.luse at intel.com (Luse, Paul E)
Date: Tue, 13 Nov 2012 01:11:38 +0000
Subject: [nvmewin] next patch
Message-ID: <82C9F782B054C94B9FC04A331649C77A07BD2808@FMSMSX106.amr.corp.intel.com>

FYI, I had a request to leave the AER stuff in with a compile switch for anyone who wants the driver to manage the AER responses. I'm totally fine with that, and given it was in there before, I'm assuming nobody will have issues leaving it in, especially since I'll wrap it in a #define. I'll try to get that done tomorrow and send an updated patch out.

Thx
Paul
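A minimal sketch of the kind of compile switch being described; the symbol and helper names below are made up for illustration and are not from the OFA source:

    #include <storport.h>

    /* Build-time choice between driver-managed AERs and leaving AERs to a
     * management application via the pass-through IOCTL path. */
    /* #define DRIVER_MANAGED_AERS */

    #ifdef DRIVER_MANAGED_AERS
    VOID IssueDriverAers(PVOID pDevExt);   /* hypothetical helper */
    #endif

    static VOID StartAsyncEventHandling(PVOID pDevExt)
    {
    #ifdef DRIVER_MANAGED_AERS
        /* Driver issues Async Event Requests itself during init and
         * handles/logs the completions. */
        IssueDriverAers(pDevExt);
    #else
        /* Default: a mgmt app issues AERs through the PT IOCTL so it can
         * log the responses properly; the driver stays out of the middle. */
        (VOID) pDevExt;
    #endif
    }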
From paul.e.luse at intel.com Wed Nov 14 10:32:32 2012
From: paul.e.luse at intel.com (Luse, Paul E)
Date: Wed, 14 Nov 2012 18:32:32 +0000
Subject: [nvmewin] next patch - quick update
Message-ID: <82C9F782B054C94B9FC04A331649C77A07BDF8C5@FMSMSX106.amr.corp.intel.com>

I had almost forgotten that the AER code in the OFA repo was broken to begin with (commented out in the state machine since day 1), so I'll need to merge the fixed code in from an internal branch (not just re-include what was there before with #define wraps). It will take a few more days, as I'll need to fully test the driver-issued AER function in the OFA base.

Thx
Paul
From paul.e.luse at intel.com Fri Nov 16 09:44:34 2012
From: paul.e.luse at intel.com (Luse, Paul E)
Date: Fri, 16 Nov 2012 17:44:34 +0000
Subject: [nvmewin] next patch - quick update
Message-ID: <82C9F782B054C94B9FC04A331649C77A07BE393B@FMSMSX106.amr.corp.intel.com>

OK, I'd like to add the driver-initiated AER support in as its own patch, for two reasons: (1) what I've already sent out for consideration is fully tested and ready to go, and (2) adding the AER code doesn't just mean adding back what was there before; it also means fixing it, which we have done, but there's a bit of merge effort to get it in there correctly. When we merge it, we'll have the compile switch select either driver-initiated AERs or IOCTL-managed AERs (what is in there now).
It will be next on the list, right after Thanksgiving. Let me know if anyone has an issue with that, and if not, please reply with your comments/concerns on the outstanding patch "as is".

Thx
Paul
From paul.e.luse at intel.com Mon Nov 26 06:45:36 2012
From: paul.e.luse at intel.com (Luse, Paul E)
Date: Mon, 26 Nov 2012 14:45:36 +0000
Subject: [nvmewin] next patch
Message-ID: <82C9F782B054C94B9FC04A331649C77A07BE8B62@FMSMSX106.amr.corp.intel.com>

Reminder: this patch is still out for review. Please try to have feedback in before the end of this week (nothing has changed since the original patch was sent out). Note that we will follow this patch with a working compile-time option for driver-driven AER support.

Thx
Paul
-------------- next part --------------
A non-text attachment was scrubbed...
Name: OFA.zip
Type: application/x-zip-compressed
Size: 166267 bytes
Desc: OFA.zip