[openfabrics-ewg] Default and per-user MPI selections
Jeff Squyres
jsquyres at cisco.com
Tue Aug 29 09:35:03 PDT 2006
On 8/29/06 12:19 PM, "Scott Weitzenkamp (sweitzen)" <sweitzen at cisco.com>
wrote:
> (Jeff and I have had this discussion a few times offline...)
:-)
> I don't disagree with you about PATH. But LD_LIBRARY_PATH is an Open
> MPI issue, as MVAPICH does not have the problem.
This is one of those fine-line "feature" vs. "bug" discussions.
:-)
The fact is that if we're setting PATH, it's trivial to also set
LD_LIBRARY_PATH. So to me, this is a moot point.
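Concretely, wherever the installer (or admin) already emits a line to set
PATH, the LD_LIBRARY_PATH line is the same one-line edit (the prefix below
is just a placeholder, not the real OFED layout):

    # /path/to/mpi is a placeholder for the actual install prefix
    export PATH=/path/to/mpi/bin:$PATH
    export LD_LIBRARY_PATH=/path/to/mpi/lib:$LD_LIBRARY_PATH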
> For your hypothetical chemical engineer, can't the person who
> installed/verified OFED on their cluster also set up their account with
> the right PATH?
Possibly. But what if the person who set up the cluster is that same
chemical engineer? Isn't OFED designed to make IB so simple that anyone
with a root password can do it? Consider, too, that the chemical engineer
in question is frequently a grad student with little or no sysadmin
training who has been tasked with setting up the cluster and getting it
going.
It just seems weird to me that we do everything *except* set up a
system-wide default MPI. Consider some common cases:
1. Experienced sysadmin installs OFED on cluster. Then the experienced
sysadmin picks a system-wide default and makes a distributed change to put
it in everyone's path (e.g., adding /etc/profile.d scripts, modifying
/etc/profile | /etc/bashrc | /etc/cshrc, or via some other mechanism --
see the sketch after this list).
2. Inexperienced sysadmin installs OFED on cluster, but does not set a
system-wide default. Users all have to modify their shell startup files to
select an MPI.
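As a sketch of what case 1 amounts to, the whole "distributed change" can
be a single dropped-in script (the install prefix here is illustrative,
not necessarily where OFED actually puts its MPIs):

    # /etc/profile.d/mpi-default.sh -- pick the system-wide default MPI
    # NOTE: the prefix below is hypothetical; point it at the real install
    MPI_HOME=/usr/mpi/gcc/openmpi
    export PATH=$MPI_HOME/bin:$PATH
    export LD_LIBRARY_PATH=$MPI_HOME/lib:$LD_LIBRARY_PATH

(csh-family users would need a matching mpi-default.csh using setenv.)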
Both of these scenarios require *someone* to do something *after* the
OFED installer completes before the cluster can be used the way they want.
Why shouldn't users be able to install OFED
and "mpicc ... ; mpirun ..." right out of the box? That just seems silly to
me (particularly if you choose the "HPC" software set -- in this case we
*know* that they're going to be using MPI).
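That is, right after installation a user should be able to do something
like this (hello.c being any trivial MPI program; -np 4 is the standard
"run 4 processes" option):

    $ mpicc hello.c -o hello
    $ mpirun -np 4 ./hello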
--
Jeff Squyres
Server Virtualization Business Unit
Cisco Systems