[openib-general] Re: [mvapich-discuss] RE: [openfabrics-ewg] Current OFED kernel snapshot
Abhinav Vishnu
vishnu at cse.ohio-state.edu
Thu May 4 09:27:27 PDT 2006
Hi Roland,
> Abhinav> The DDR flag is expected to be enabled for DDR mellanox
> Abhinav> HCAs, similarly PCI_EX is expected to enabled for
> Abhinav> PCI-Express Based HCAs. Also, starting MVAPICH-0.9.7, for
> Abhinav> scalability to ultra-scale clusters, we have defined
> Abhinav> MEMORY_SCALE flag, which is a combination of SRQ and
> Abhinav> ADAPTIVE_RDMA_FAST_PATH. However, AFAIK, SRQ is not
> Abhinav> available for PPC64.
>
> Why would SRQ not be available for ppc64? The low-level drivers are
> identical.
>
> And why would DDR and/or PCI Express not be available for ppc64?
>
> - R.
>
By "the PPC64 architecture" I meant the IBM HCAs (4x/12x) running on the
GX/GX+ bus. To the best of my knowledge, these HCAs do not support the
features mentioned above.
Thanks,
-- Abhinav
*** Forgot to CC this mail to everyone in the initial thread ***
Hello All,
Thanks for reporting the MVAPICH compilation problem on PPC64.
I looked at Bugzilla entry #49 at openib.org.
The CFLAGS that were used for compilation are shown below.
----
-D_DDR_ -DCH_GEN2 -DMEMORY_SCALE -D_AFFINITY_ -g -Wall -D_PCI_EX_
-D_SMALL_CLUSTER -D_SMP_ -D_SMP_RNDV_ -DVIADEV_RPUT_SUPPORT
-DEARLY_SEND_COMPLETION -DLAZY_MEM_UNREGISTER -D_IA64_
----
(The flags marked in the original report were -D_DDR_, -DMEMORY_SCALE,
-D_PCI_EX_, and -D_IA64_.)
The DDR flag is expected to be enabled for DDR Mellanox HCAs; similarly,
PCI_EX is expected to be enabled for PCI-Express-based HCAs. Also, starting
with MVAPICH-0.9.7, for scalability to ultra-scale clusters, we have defined
the MEMORY_SCALE flag, which is a combination of SRQ and
ADAPTIVE_RDMA_FAST_PATH. However, AFAIK, SRQ is not available for PPC64.
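For illustration only, the fragment below sketches one way a combined flag
such as MEMORY_SCALE could imply both underlying features at preprocessing
time; this is an assumption about the mechanism, not the actual MVAPICH
headers:

    /* Illustrative sketch, not MVAPICH source: a combined flag such as
     * MEMORY_SCALE could simply imply both underlying feature macros. */
    #ifdef MEMORY_SCALE
    #  ifndef SRQ
    #    define SRQ 1                      /* shared receive queue support */
    #  endif
    #  ifndef ADAPTIVE_RDMA_FAST_PATH
    #    define ADAPTIVE_RDMA_FAST_PATH 1  /* adaptive RDMA fast path */
    #  endif
    #endif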
We would also recommend using -D_PPC64_ as the CFLAG for the architecture.
To get optimal performance, we provide unified build scripts for different
architectures/platforms in the top-level MVAPICH directory:
make.mvapich.gen2 and make.mvapich.gen2_multirail. As an example, the flags
generated by the script for PPC64 would be:
-D_PPC64_ -DEARLY_SEND_COMPLETION -DMEMORY_SCALE
-DVIADEV_RPUT_SUPPORT -DLAZY_MEM_UNREGISTER -DCH_GEN2 -D_SMP_ -D_SMP_RNDV_
-D_PCI_X_ -D_SDR_
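As a sanity check, a small compile-time guard like the hypothetical sketch
below (not part of MVAPICH; it relies on gcc's predefined __powerpc64__
macro) can catch a mismatched architecture flag such as -D_IA64_ on a
ppc64 build:

    /* Hypothetical guard, not MVAPICH code: fail early if the architecture
     * flag passed in CFLAGS does not match the compiler target. */
    #if defined(_IA64_) && defined(_PPC64_)
    #error "Define only one of _IA64_ / _PPC64_"
    #endif
    #if defined(_PPC64_) && !defined(__powerpc64__)
    #error "_PPC64_ requested, but the compiler is not targeting ppc64"
    #endif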
We strongly encourage using this script for compilation on PPC64.
In addition, there seems to be an assembler problem, which could possibly be
a gcc configuration issue:
/tmp/ccTRXdQu.s: Assembler messages:
/tmp/ccTRXdQu.s:127: Error: Unrecognized opcode: `mf'
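For what it is worth, `mf' is an IA64 memory-fence opcode, so one plausible
reading is that IA64-specific inline assembly was pulled in by the -D_IA64_
flag and then rejected by the ppc64 assembler. The fragment below is only an
illustrative sketch of that kind of architecture-conditional barrier, not the
actual MVAPICH code:

    /* Illustrative sketch, not MVAPICH source: architecture-conditional
     * memory barriers. "mf" exists only on IA64; the usual ppc64 full
     * barrier is "sync". */
    #if defined(_IA64_)
    #define MEMORY_BARRIER() __asm__ __volatile__("mf" ::: "memory")
    #elif defined(_PPC64_)
    #define MEMORY_BARRIER() __asm__ __volatile__("sync" ::: "memory")
    #endif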
Please let us know if the problem persists after using the top-level make
script.
Thanks,
-- Abhinav
On Wed, 3 May 2006, Scott Weitzenkamp (sweitzen) wrote:
> > > > Known issues:
> > > > 1. ipath installation fails on 2.6.9 - 2.6.11* kernels
> > > > 2. OSU MPI compilation fails on SLES10, PPC64
> > > > 3. SRP is not supported on 2.6.9 - 2.6.13* kernels - Ishai
> > > will follow up with details
> > > > 4. Open MPI RPM build process fails - Jeff, will you be
> > > able to send us fixes by Wed?
> >
> > Do we have any progress on the MPI and SRP issues?
>
> I opened bug #49 regarding OSU MPI not compiling on PPC64, it's assigned
> to the default owner huanwei at cse.ohio-state.edu.
>
> http://openib.org/bugzilla/show_bug.cgi?id=49
>
> Scott Weitzenkamp
> SQA and Release Manager
> Server Virtualization Business Unit
> Cisco Systems
>
> _______________________________________________
> mvapich-discuss mailing list
> mvapich-discuss at cse.ohio-state.edu
> http://mail.cse.ohio-state.edu/mailman/listinfo/mvapich-discuss
>