[ofiwg] Next steps on NVM PM Remote Access for HA

Hefty, Sean sean.hefty at intel.com
Mon Aug 6 17:06:34 PDT 2018

FWIW, I read through this document:

> https://www.snia.org/sites/default/files/technical_work/final/NVM_PM_Remote_Access_for_High_Availability_v1.0.pdf

made several notes, mostly aimed at identifying API requirements, and then promptly forgot what I was thinking when I made each note.  :)  I tried to enumerate the implied requirements; many are already handled by OFI.

General comments:
3.4. An example where the NIC accesses the NVM directly (i.e., bypassing the CPU caches) would be useful, including access to disaggregated memory.  It might also be useful to show accelerators with access to NVM as part of the consideration.

3.5. Does this model exist?  This section seems out of place and doesn't appear to be addressed.

4.2. The Fred and Barney analogy is odd and unnecessary.  The mention of MS VSS is also odd; it would be better to describe the abstract concept rather than a specific product.  I'm struggling to extract specific requirements from this section.

API requirements:
 4.1  copy data from local memory/nvm to remote memory/nvm
 4.3  support out of order completions
 4.4  need explicit definition of data placement order
 4.5  QoS guarantee for bandwidth and/or latency
 6.2  register mmapped persistent memory
      possible register device/uncached pmem
 6.3  write with durability semantics
 6.4  data/completion ordering semantics defined -- prior to flushing data
      remote read of pmem
 6.5  support different CPU architectures
 7.2  need MR keys for protection
      limit allowed access to regions
*7.3  maybe support encryption from fabric services (IPSec?)
 8    report errors to app
*8.2  maybe support replication capability (from source and/or target NICs)
*8.3  data corruption detection

I marked the areas that I see as gaps with *.  These are encryption/IPSec, data replication, and data corruption detection.  Note that I'm not suggesting that OFI necessarily address these gaps, but they might be areas for further discussion.

- Sean
