[libfabric-users] Suggestions needed for improved performance
Biddiscombe, John A.
john.biddiscombe at cscs.ch
Fri Jun 10 10:44:49 PDT 2022
As is usual in these cases, I found a nasty bug in my code after I posted my message to the list. It turns out that I was sending messages that were bigger than I thought they were: due to the way memory was allocated, buffers were rounded up to the next power of 2, so a 100,000 byte message was actually 131,072 bytes - roughly 30% larger than expected - which accounts for the 30% bandwidth difference on large messages. On smaller messages the difference was not such a big deal and was masked by latencies, but for the larger sizes it hammered my benchmark numbers.
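To make the mistake concrete, the allocator was effectively doing the following (an illustrative helper, not the real allocator code):

    #include <cstdint>
    #include <cstdio>

    // Round a requested size up to the next power of two - what the pool
    // allocator was silently doing for every buffer.
    static std::uint64_t round_up_pow2(std::uint64_t n)
    {
        std::uint64_t p = 1;
        while (p < n)
            p <<= 1;
        return p;
    }

    int main()
    {
        std::uint64_t requested = 100000;
        std::uint64_t actual = round_up_pow2(requested); // 131072
        std::printf("requested %llu bytes, sent %llu bytes (%.0f%% larger)\n",
                    (unsigned long long)requested, (unsigned long long)actual,
                    100.0 * (double)(actual - requested) / (double)requested); // ~31%
        return 0;
    }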
Apologies for the noise.
JB
________________________________
From: Libfabric-users <libfabric-users-bounces at lists.openfabrics.org> on behalf of Biddiscombe, John A. <john.biddiscombe at cscs.ch>
Sent: 09 June 2022 14:48:43
To: libfabric-users at lists.openfabrics.org
Subject: [libfabric-users] Suggestions needed for improved performance
Dear list,
I'm looking for suggestions on things to try. One of our benchmarks that uses libfabric performs well enough with small messages. The benchmark is written in such a way that we can swap the backend between a native MPI implementation and a libfabric implementation and compare performance. The test uses tagged sends and receives between two nodes and simply does lots of them, with a certain number of messages allowed to be 'in flight' per thread at any moment.
On Piz Daint, the Cray machine at CSCS (8 threads, 10 messages in flight per thread in every case):

message size     libfabric      mpi            speedup
1 byte           0.80 MB/s      0.38 MB/s      2x
100 bytes        85 MB/s        37 MB/s        2x
10,000 bytes     3600 MB/s      2000 MB/s      1.5x
100,000 bytes    10800 MB/s     13900 MB/s     ~0.8x

At the largest message size we are now lagging well behind MPI, which is reaching the approximate bandwidth of the system (as expected, and similar to the OSU benchmark).
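Each benchmark thread essentially keeps a window of tagged sends outstanding and retires them by polling the completion queue. A stripped-down sketch of that pattern (the endpoint, CQ, buffer, descriptor and context arrays here are placeholders rather than our actual wrapper code, and the CQ is assumed to use FI_CQ_FORMAT_TAGGED):

    #include <rdma/fabric.h>
    #include <rdma/fi_domain.h>
    #include <rdma/fi_endpoint.h>
    #include <rdma/fi_eq.h>
    #include <rdma/fi_errno.h>
    #include <rdma/fi_tagged.h>

    // Keep 'window' tagged sends in flight on one endpoint. Every buffer was
    // registered up front, so descs[i] is the value returned by fi_mr_desc().
    void send_loop(struct fid_ep *ep, struct fid_cq *cq, fi_addr_t dst_addr,
                   void **bufs, void **descs, struct fi_context *ctxs,
                   size_t msg_size, uint64_t tag, size_t window, size_t total_msgs)
    {
        size_t posted = 0, completed = 0;
        struct fi_cq_tagged_entry comp[16];

        while (completed < total_msgs) {
            // Top the window back up to its limit.
            while (posted < total_msgs && posted - completed < window) {
                size_t i = posted % window;
                ssize_t ret = fi_tsend(ep, bufs[i], msg_size, descs[i],
                                       dst_addr, tag, &ctxs[i]);
                if (ret != 0)
                    break;          // -FI_EAGAIN: queues full, go poll instead
                ++posted;
            }
            // Drain completions in batches rather than one at a time.
            ssize_t n = fi_cq_read(cq, comp, 16);
            if (n > 0)
                completed += (size_t)n;
            else if (n != -FI_EAGAIN)
                break;              // real error: inspect with fi_cq_readerr
        }
    }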
The benchmark uses message buffer objects that come from a custom allocator; all memory from this allocator is pinned using fi_mr_reg (we use FI_MR_BASIC mode). So there is no pinning of memory during the benchmark run - everything is pinned in advance when the memory buffers are created at startup. The messages are sent using tagged sends, and each buffer has its memory descriptor supplied:
execute_fi_function(fi_tsend, "fi_tsend",
                    m_tx_endpoint.get_ep(), send_region.get_address(), send_region.get_size(),
                    send_region.get_local_key(), dst_addr_, tag_, ctxt);
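For reference, the registration that happens when a buffer is created is just the standard fi_mr_reg / fi_mr_desc pattern. Roughly (the struct and function names below are illustrative, not our allocator's real internals):

    #include <rdma/fabric.h>
    #include <rdma/fi_domain.h>

    // Register a pool buffer once, at allocation time, and keep the local
    // memory descriptor so that every later fi_tsend/fi_trecv can pass it
    // instead of triggering registration inside the provider.
    struct registered_region
    {
        void          *addr;
        size_t         size;
        struct fid_mr *mr;
        void          *desc;   // the descriptor that get_local_key() supplies above
    };

    int register_region(struct fid_domain *domain, void *addr, size_t size,
                        registered_region *out)
    {
        int ret = fi_mr_reg(domain, addr, size,
                            FI_SEND | FI_RECV,            // access for tagged send/recv
                            0 /*offset*/, 0 /*requested_key*/, 0 /*flags*/,
                            &out->mr, /*context*/ nullptr);
        if (ret != 0)
            return ret;
        out->addr = addr;
        out->size = size;
        out->desc = fi_mr_desc(out->mr); // valid for the lifetime of the registration
        return 0;
    }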
So the question is: what could be going wrong for the libfabric backend that causes such a significant drop in relative performance with larger messages? I've experimented with different threading settings (FI_THREAD_SAFE etc.) and with removing/adding locks around the injection and polling code, but since we perform well with small messages I do not think there is anything wrong with the basic framework around the send/recv and polling functions. It appears to be a message-size issue. Is libfabric assuming that the buffers are not pinned and wasting time trying to pin them again?
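For what it is worth, the hints we request look roughly like this (a simplified sketch; the exact capability and mode bits in our wrapper differ):

    #include <rdma/fabric.h>
    #include <rdma/fi_domain.h>
    #include <rdma/fi_endpoint.h>

    // Ask for a provider configuration matching what the benchmark assumes:
    // tagged messaging over a reliable datagram endpoint, the legacy
    // FI_MR_BASIC registration mode (so our pre-registered descriptors are
    // honoured) and thread-safe access from the 8 benchmark threads.
    struct fi_info *get_benchmark_info()
    {
        struct fi_info *hints = fi_allocinfo();
        hints->caps                   = FI_TAGGED | FI_MSG;
        hints->mode                   = FI_CONTEXT;       // per-op fi_context objects
        hints->ep_attr->type          = FI_EP_RDM;
        hints->domain_attr->mr_mode   = FI_MR_BASIC;
        hints->domain_attr->threading = FI_THREAD_SAFE;

        struct fi_info *info = nullptr;
        // Requesting the pre-1.5 API version keeps the legacy FI_MR_BASIC semantics.
        int ret = fi_getinfo(FI_VERSION(1, 4), nullptr, nullptr, 0, hints, &info);
        fi_freeinfo(hints);
        return ret == 0 ? info : nullptr;
    }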
One caveat: the benchmark uses MPI to initialize, so the libfabric tests coexist with MPI in the same executable (and use the GNI backend). I was running tests on LUMI (verbs backend) and saw similar speed drops (although on LUMI the MPI uses the libfabric backend too), but I cannot access that machine until maintenance is over.
On Daint, I launch with MPICH_GNI_NDREG_ENTRIES=1024, set the memory registration cache to udreg and lazy deregistration to true (not that GNI should be registering much, since we've done it already).
I welcome any suggestions about what MPI might be doing better, or what we might be doing wrong. (I tried profiling and saw no obvious hotspots in our code; the major time hog was polling the receive queues.)
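On the receive side, the polling loop is essentially a ring of pre-posted tagged receives drained from the CQ in batches. A simplified sketch (again with placeholder names, and assuming a CQ opened with FI_CQ_FORMAT_TAGGED):

    #include <rdma/fabric.h>
    #include <rdma/fi_domain.h>
    #include <rdma/fi_endpoint.h>
    #include <rdma/fi_eq.h>
    #include <rdma/fi_errno.h>
    #include <rdma/fi_tagged.h>

    // Pre-post a ring of tagged receives and re-post each slot as soon as its
    // completion has been drained from the CQ.
    void recv_loop(struct fid_ep *ep, struct fid_cq *cq,
                   void **bufs, void **descs, struct fi_context *ctxs,
                   size_t msg_size, uint64_t tag, size_t ring, size_t total_msgs)
    {
        for (size_t i = 0; i < ring; ++i)
            fi_trecv(ep, bufs[i], msg_size, descs[i],
                     FI_ADDR_UNSPEC, tag, /*ignore*/ 0, &ctxs[i]);

        size_t completed = 0;
        struct fi_cq_tagged_entry comp[16];
        while (completed < total_msgs) {
            ssize_t n = fi_cq_read(cq, comp, 16);   // read completions in batches
            if (n == -FI_EAGAIN)
                continue;
            if (n < 0)
                break;                              // real error: use fi_cq_readerr
            for (ssize_t j = 0; j < n; ++j) {
                // op_context tells us which ring slot finished; re-post it.
                size_t i = (size_t)((struct fi_context *)comp[j].op_context - ctxs);
                fi_trecv(ep, bufs[i], msg_size, descs[i],
                         FI_ADDR_UNSPEC, tag, 0, &ctxs[i]);
            }
            completed += (size_t)n;
            // (the trailing re-posts are cancelled at benchmark teardown)
        }
    }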
Many thanks
JB