[openib-general] Configuration of sdp
Sean Hubbell
shubbell at dbresearch.net
Wed Nov 29 06:33:55 PST 2006
This is what I got in /var/log/messages:
Nov 29 08:32:14 neptune kernel: sdp_sock(32768:0): rdma_resolve_addr
failed: -99
Nov 29 08:32:14 neptune iperf: iperf[28484] libsdp Error connect: failed
with code <-1> for SDP fd:5
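
As far as I can tell, rdma_resolve_addr() returns a negative errno, and
errno 99 on Linux is EADDRNOTAVAIL ("Cannot assign requested address"),
which for address resolution usually means the address did not map to an
IB interface. A minimal sketch for decoding such codes, assuming perl is
available:

perl -MPOSIX -e 'print strerror(99), "\n"'
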
Where can I find out about the error messages? Do I still need to go
through the code to find these?
Sean
Eitan Zahavi wrote:
> Hi Sean
>
> Regarding libsdp.conf:
>
> The simplest config that will catch all applications/ports and will
> NOT break your Eth-based connections is:
>
> log min-level 7 destination syslog
> use both server * *:*
> use both client * *:*
>
> Note that level=7 on the "log" directive means that you will get a line
> in /var/log/messages for every listen/accept/connect
> with info that should let you know whether the connection was made
> through SDP or TCP.
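>
> To match a single program instead of everything, the same "use"
> directive takes a program name and an address:port pattern. A minimal
> sketch, assuming the stock libsdp.conf match syntax of
> use <sdp|tcp|both> <server|client> <program> <address>:<port>
> and iperf as the program name:
>
> use sdp server iperf *:*
> use sdp client iperf *:*
>
> After a test run, "grep libsdp /var/log/messages | tail" should show
> whether each connection went over SDP or fell back to TCP.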
>
> Sean Hubbell wrote:
>> Yes, I tried that in our testing when I loaded ib_sdp (I just used
>> the defaults). I was wondering if the numbers that we are seeing are
>> consistent with everyone else's... and at what point will the sdp
>> module increase my bandwidth when using a legacy / third-party
>> network library?
>>
>> Sean
>>
>> Karun Sharma wrote:
>>
>>> Hi Sean:
>>>
>>> If you are using the OFED 1.1 release, you will find a "README.txt"
>>> file (in the /usr/local/ofed/ directory). The SDP configuration
>>> steps are described in this file:
>>> 1. modprobe ib_sdp
>>> 2. export LD_PRELOAD=/usr/local/ofed/lib/libsdp.so
>>> 3. export LIBSDP_CONFIG_FILE=/usr/local/ofed/etc/libsdp.conf
>>> 4. Run <application name>.
>>>
>>> You may need to modify libsdp.conf before step 3.
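>>>
>>> Equivalently, for a one-off test you can set both variables on the
>>> command line instead of exporting them (a sketch, using the paths
>>> from the steps above and iperf as the application):
>>>
>>> LD_PRELOAD=/usr/local/ofed/lib/libsdp.so \
>>> LIBSDP_CONFIG_FILE=/usr/local/ofed/etc/libsdp.conf \
>>> iperf -s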
>>>
>>> Thanks
>>> Karun
>>>
>>>
>>> ------------------------------------------------------------------------
>>>
>>> From: openib-general-bounces at openib.org on behalf of Sean Hubbell
>>> Sent: Tue 11/28/2006 10:34 PM
>>> To: openib-general at openib.org
>>> Subject: [openib-general] Configuration of sdp
>>>
>>> Hello,
>>>
>>> I have a question... we have tested our network on our cluster using
>>> iperf v2.0.2.
>>>
>>> Our first test was using the Gig-E network and the results were 941 Mbps.
>>> (i.e. iperf -c <ipaddress> and iperf -s)
>>>
>>> Our second test was using the Gig-E network and the results were
>>> 1.05 Mbps using UDP.
>>> (i.e. iperf -c <ipaddress> -u and iperf -s -u)
>>>
>>> Our third test was using the IBv4 network and the results were a max
>>> of 3.8 Mbps using TCP.
>>> (i.e. iperf -c <ipaddress> -P[1-16] and iperf -s)
>>>
>>> Loaded the ib_sdp module and then tested:
>>> Our fourth test was using the IBv4 network and the results were a
>>> max of 4.16 Mbps using UDP.
>>> (i.e. iperf -c <ipaddress> -b1000M -P[1-16] and iperf -s -u)
>>>
>>> UNLOADED the ib_sdp module and then tested:
>>> Our fifth test was using the IBv4 network and the results were a
>>> max of 4.16 Mbps using UDP.
>>> (i.e. iperf -c <ipaddress> -b1000M -P[1-16] and iperf -s -u)
>>>
>>> My question is: what impact would configuring SDP have on our
>>> network, and is there an example of how to configure SDP somewhere
>>> on the wiki?
>>>
>>> Thanks,
>>>
>>> Sean