[Ofmfwg] [EXTERNAL] RE: Sunfish Redfish 2023 demonstration
Aguilar, Michael James
mjaguil at sandia.gov
Mon Sep 25 11:11:17 PDT 2023
Guys
I started working toward getting the Memory Chunks and Storage Volumes composed. I’ll have a minimized version of the Composer set-up ready to send out very soon.
Mike
From: Herrell, Russ W (Senior System Architect) <russ.herrell at hpe.com>
Date: Monday, September 25, 2023 at 12:08 PM
To: CHRISTIAN PINTO <Christian.Pinto at ibm.com>, ofmfwg at lists.openfabrics.org <ofmfwg at lists.openfabrics.org>
Cc: Aguilar, Michael James <mjaguil at sandia.gov>, Cayton, Phil <phil.cayton at intel.com>, Doug Ledford <dledford at redhat.com>, Ahlvers, Richelle <richelle.ahlvers at intel.com>
Subject: [EXTERNAL] RE: Sunfish Redfish 2023 demonstration
I agree with what I think is the purpose of the demo as described in the outline, which is to show:
1. Sunfish aggregation of multiple Agent inventories (A CXL fabric Agent, and a Swordfish NVMe JBOD? Agent)
2. The ability to query Sunfish to locate composable resources (systems, storage and CXL FAM)
3. The ability to allocate MemoryChunks and storage volumes from the composable resources using Sunfish API (Redfish / Swordfish calls)
4. The ability to bind hosts (systems) to MemoryChunks and storage volumes via Redfish Connections
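As a rough sketch of what point 2 above could look like from the client side, here is a toy Python model of querying the aggregated Sunfish tree. The dict stands in for a live Redfish service, and every URI, chassis name, and member ID is an invented mockup, not the demo's real inventory:

```python
# Hypothetical sketch of querying Sunfish for composable resources.
# MOCK_TREE stands in for the aggregated Redfish tree; all URIs and
# resource names below are illustrative, not the actual demo inventory.

MOCK_TREE = {
    "/redfish/v1/Systems": {
        "Members": [
            {"@odata.id": "/redfish/v1/Systems/host1"},
            {"@odata.id": "/redfish/v1/Systems/host2"},
        ]
    },
    "/redfish/v1/Chassis/CXLChassis/MemoryDomains": {
        "Members": [
            {"@odata.id": "/redfish/v1/Chassis/CXLChassis/MemoryDomains/FAM0"},
        ]
    },
    "/redfish/v1/Storage/NVMeJBOD/Volumes": {
        "Members": []  # no volumes composed yet at demo start
    },
}

def rf_get(path):
    """Stand-in for an HTTP GET against the Sunfish aggregated tree."""
    return MOCK_TREE[path]

def member_ids(collection_path):
    """Return the @odata.id of each member of a Redfish collection."""
    return [m["@odata.id"] for m in rf_get(collection_path)["Members"]]

# Enumerate the composable hosts and the CXL FAM pools.
systems = member_ids("/redfish/v1/Systems")
fam_domains = member_ids("/redfish/v1/Chassis/CXLChassis/MemoryDomains")
print(systems)
print(fam_domains)
```

In a real client the `rf_get` stub would be an authenticated HTTP GET to the Sunfish endpoint; the collection-walking logic is the same either way.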
To do the above, we need to break out Step 3 into two steps, as I don’t propose we start the demo with predefined MemoryChunks and Volumes:
3.1) Retrieve the MemoryDomains from the CXL fabric tree, create one or two MemoryChunks out of these ‘memory pools’, and do the same with a storage pool
3.2) Retrieve the list of Systems from the CXL fabric tree, create a connection between one and a new MemoryChunk or storage volume
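To make steps 3.1 and 3.2 concrete, here is a hedged sketch of the request bodies a client might POST through the Sunfish API: one to carve a MemoryChunk out of a memory domain, one to bind a system to it with a Redfish Connection. The property names follow the published Redfish MemoryChunks and Connection schemas, but the URIs, sizes, and IDs are invented for illustration:

```python
# Hypothetical Redfish request bodies for steps 3.1 and 3.2.
# Property names follow the MemoryChunks and Connection schemas; the
# URIs, chunk size, and names are made up for this sketch.

import json

# 3.1: POST to .../MemoryDomains/FAM0/MemoryChunks to allocate a chunk
# from the FAM pool.
memory_chunk_req = {
    "Name": "DemoChunk0",
    "AddressRangeType": "Volatile",
    "MemoryChunkSizeMiB": 4096,
}

# 3.2: POST to /redfish/v1/Fabrics/CXL/Connections to bind host1 to the
# new chunk.
connection_req = {
    "Name": "host1-DemoChunk0",
    "ConnectionType": "Memory",
    "MemoryChunkInfo": [
        {
            "MemoryChunk": {
                "@odata.id": "/redfish/v1/Chassis/CXLChassis"
                             "/MemoryDomains/FAM0/MemoryChunks/DemoChunk0"
            }
        }
    ],
    "Links": {
        "InitiatorEndpoints": [
            {"@odata.id": "/redfish/v1/Fabrics/CXL/Endpoints/host1"}
        ]
    },
}

# Both bodies serialize to the JSON a Redfish service would receive.
print(json.dumps(memory_chunk_req, indent=2))
print(json.dumps(connection_req, indent=2))
```

A storage volume would follow the same pattern with a Swordfish Volume POST and a Connection of `ConnectionType: "Storage"`.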
If we wish to demonstrate binding a single host to a new storage volume and to a MemoryChunk, we are missing one more ‘ability’ in the Sunfish reference code: We need the ability for Sunfish to notice that systems of the NVMeoF fabric and systems of the CXL fabric are the SAME systems. I propose we hide the need to resolve the multiple names for the same host by just making the names the same from both Agents in the mockups. (If anyone asks, we just acknowledge that this reconciliation of multiple IDs is functionality which is required, but not ready for demonstration yet.)
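A toy illustration of the workaround above: if the CXL Agent and the NVMe-oF Agent mock up the same identifier for the same host, a naive merge keyed on system Id already unifies them. All names here are invented, and real cross-Agent ID reconciliation remains the missing functionality:

```python
# Toy sketch of the naming workaround: both Agents report the same host
# under the same Id, so merging by Id treats them as one system. The
# inventories and field names are invented for illustration.

cxl_agent_systems = [{"Id": "host1", "Fabric": "CXL"}]
nvmeof_agent_systems = [{"Id": "host1", "Fabric": "NVMeoF"}]

def merge_by_id(*inventories):
    """Fold per-Agent system lists into one dict keyed by system Id."""
    merged = {}
    for inventory in inventories:
        for system in inventory:
            merged.setdefault(system["Id"], []).append(system["Fabric"])
    return merged

merged = merge_by_id(cxl_agent_systems, nvmeof_agent_systems)
print(merged)  # host1 appears once, reachable via both fabrics
```

If the Agents used different IDs for the same physical host, `merged` would hold two entries, which is exactly the reconciliation problem being deferred.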
So, we are missing the discovery of resource pools and the creation of explicit sub-sets of them in step 3.1 and the accompanying functionality in the code stacks.
We also do not have the correct mockups for the two Agents, which is another item that needs to be added to the ‘missing’ list. Everything else looks good.
I suggest we work through the demo topology this Friday, and then create specific mockups that would be the proper models for the demo resources. Once we have the demo topology fixed, we can talk through how the GUI can most easily display this inventory and enable the GUI user to manipulate the components to demo the capabilities we want to show off.
Thoughts?
Russ
From: CHRISTIAN PINTO <Christian.Pinto at ibm.com>
Sent: Monday, September 25, 2023 8:15 AM
To: ofmfwg at lists.openfabrics.org
Cc: Aguilar, Michael J. <mjaguil at sandia.gov>; Cayton, Phil <phil.cayton at intel.com>; Herrell, Russ W (Senior System Architect) <russ.herrell at hpe.com>; Doug Ledford <dledford at redhat.com>; Ahlvers, Richelle <richelle.ahlvers at intel.com>
Subject: Sunfish Redfish 2023 demonstration
Hi All,
I have started working on a “script” for our demonstrator, mostly to identify what we have and what is missing. What I have so far is attached to this email.
It appears the two main pieces we are missing are a GUI and a rudimentary composition service. On Friday we should discuss who does what to make sure we arrive at SC with a demo.
Any comments or additions to the document are more than welcome.
Christian
Christian Pinto, Ph.D.
Research Scientist
IBM Research Europe - Ireland