Hi Jason/All,<div>Thank you for your response. Do you mean link-layer flow control (VLs), or the end-to-end flow control credits of the transport layer? How do I set the end-to-end flow control credits? I looked at the driver source code, and the file ipath_qp.c caught my interest. There, the credits are calculated from the difference between the head and tail pointers of the QP's receive queue (see drivers/infiniband/hw/ipath). Should I change the size of these queues? Am I even looking at the right file?</div>
<div><br></div><div>regards,</div><div>Ashwath.<br><br><div class="gmail_quote">On Thu, Aug 6, 2009 at 5:12 PM, Jason Gunthorpe <span dir="ltr"><<a href="mailto:jgunthorpe@obsidianresearch.com">jgunthorpe@obsidianresearch.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;"><div class="im">On Wed, Aug 05, 2009 at 08:03:04PM -0400, Ashwath Narasimhan wrote:<br>
<br>
> The reason why I need such small rates is because I interface the<br>
> Infiniband HCA to an FPGA via an Infiniband physical link. Imagine<br>
> the FPGA as a simple repeater that simply forwards the infiniband<br>
> signals to the Target HCA. The FPGA cannot handle such a high data<br>
> rate and neither do I have as much memory as required to buffer it<br>
> on the FPGA (I might drop packets if the buffer becomes full). Hence<br>
> I wish to limit the rate to say 100Mbps instead of 2.5Gbps.<br>
<br>
</div>The correct thing to do is manage the flow control credits you are<br>
giving to the IB network so you don't lose packets.<br>
<font color="#888888"><br>
Jason<br>
</font></blockquote></div><br><br clear="all"><br>-- <br>regards,<br>Ashwath<br>
</div>