[openib-general] OpenSM (again)

Eitan Zahavi eitan at mellanox.co.il
Mon Apr 11 22:07:11 PDT 2005


Hi Roland,

If the case is reproducible, please run "opensm -V" and send us the osm.log.

Thanks

Eitan Zahavi
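
[Editorial note: the verbose run Eitan asks for might look like the sketch below. The -V flag enables verbose debug logging; the log path is an assumption, since the default location varies by build and era.]

```shell
# Run opensm with verbose debug logging (-V), writing the log to a
# file we can send back. The log path here is only an example.
opensm -V -f /var/log/osm.log
```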

> -----Original Message-----
> From: Roland Fehrenbacher [mailto:rf at q-leap.de]
> Sent: Monday, April 11, 2005 7:28 PM
> To: openib-general at openib.org
> Subject: [openib-general] OpenSM (again)
> 
> Hi,
> 
> I got gen2 opensm running fine now (there was a problem with a wrong
> include file), and managed to get IP running on a network of
> currently 40 machines (final size will be 144). Performance is pretty
> impressive (initial tests with a simple netpipe): I got a latency of
> 18 microseconds, and a maximum throughput of approx. 400 MB/s at a
> packet size of approx. 1 MB, which then levels off at about 340 MB/s
> for larger packets.
> 
> One problem and two questions:
> 
> Problem: When I reboot all 40 nodes (apart from the one opensm is
> running on), the network is non-functional (no pings go through, even
> though ports show status "Active") for quite a while (more than 10
> minutes) after all the nodes have come up. It then recovers without
> intervention. Is this normal? Single-node reboots don't affect
> network operation. The osm log file is appended.
> 
> Question 1: Can I run opensm in a master/slave configuration? I noticed
> that there is a priority command-line option, but I am not sure how to
> apply it.
> 
> Question 2: I plan to run the gen1/Mellanox IBGD drivers on the
> compute nodes (need fast MPI), and gen2 on the control/storage nodes
> (need only IP) with gen2 opensm running on the control nodes. Is there
> any reason why this should not work reliably?
> 
> Roland
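
[Editorial note: regarding Question 1, a master/standby pair can be sketched with the priority option. This assumes the option behaves as in later OpenSM releases, where priority is a 0-15 value and the SM with the higher priority wins the master election; the standby takes over if the master disappears.]

```shell
# Sketch of a master/standby OpenSM pair via SM priority.
# Assumes -p takes a 0-15 value, higher priority preferred as master.

# On the intended master node:
opensm -p 15

# On the standby node (becomes master only if the above SM goes away):
opensm -p 1
```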
