[openib-general] [PATCH] CMA: allow/require bind before connect

Tom Tucker tom at opengridcomputing.com
Sun Mar 26 07:07:59 PST 2006


On Sun, 2006-03-26 at 16:59 +0200, Michael S. Tsirkin wrote:
> Sean, CMA currently bails out if I bind to ANY address before connect.
> I think in TCP you bind before connect 

It is not typical to use bind on the active side before connect. When it
is done, it is because the application wants to control which local port
and/or interface to use when connecting to a remote peer. bind(2) is
typically used on the passive side before listen.


> and if not it autobinds for you
> (see our discussion for legal port range for autobind).
> The reason it fails is here:
> 
> 
>         id_priv = container_of(id, struct rdma_id_private, id);
>         if (id_priv->cma_dev) {
>                 expected_state = CMA_ADDR_BOUND;
>                 src_addr = &id->route.addr.src_addr;
>         } else
>                 expected_state = CMA_IDLE;
> 
>         if (!cma_comp_exch(id_priv, expected_state, CMA_ADDR_QUERY))
>                 return -EINVAL;
> 
> So apparently, state must be idle unless cma_dev is set, but it's
> not set probably because I bind to ANY address.

Why are you binding to ANY? Are you specifying a particular local port? 

> 
> Not sure what the proper fix would be. Maybe it makes sense to *require*
> binding before rdma_resolve_addr, like in TCP?
> 

IMO, it should be possible to reuse a cma_id after it is associated 
with a device, and that capability requires changes in exactly this
area; however, I'm not sure what it has to do with bind.

Can you explain what it is you're trying to do and why you're 
calling bind before connect?

> --
> 
> Require bind before connect (could be ANY port).
> 
> Signed-off-by: Michael S. Tsirkin <mst at mellanox.co.il>
> 
> Index: linux-2.6.16/drivers/infiniband/core/cma.c
> ===================================================================
> --- linux-2.6.16/drivers/infiniband/core/cma.c	(revision 6012)
> +++ linux-2.6.16/drivers/infiniband/core/cma.c	(working copy)
> @@ -1285,23 +1292,20 @@ int rdma_resolve_addr(struct rdma_cm_id 
>  		      struct sockaddr *dst_addr, int timeout_ms)
>  {
>  	struct rdma_id_private *id_priv;
> -	enum cma_state expected_state;
>  	int ret;
>  
>  	id_priv = container_of(id, struct rdma_id_private, id);
>  	if (id_priv->cma_dev) {
> -		expected_state = CMA_ADDR_BOUND;
>  		src_addr = &id->route.addr.src_addr;
> -	} else
> -		expected_state = CMA_IDLE;
> +	}
>  
> -	if (!cma_comp_exch(id_priv, expected_state, CMA_ADDR_QUERY))
> +	if (!cma_comp_exch(id_priv, CMA_ADDR_BOUND, CMA_ADDR_QUERY))
>  		return -EINVAL;
>  
>  	atomic_inc(&id_priv->refcount);
>  	memcpy(&id->route.addr.dst_addr, dst_addr, ip_addr_size(dst_addr));
>  	if (cma_loopback_addr(dst_addr))
> -		ret = cma_resolve_loopback(id_priv, src_addr, expected_state);
> +		ret = cma_resolve_loopback(id_priv, src_addr, CMA_ADDR_BOUND);
>  	else
>  		ret = rdma_resolve_ip(src_addr, dst_addr,
>  				      &id->route.addr.dev_addr,
> @@ -1311,7 +1315,7 @@ int rdma_resolve_addr(struct rdma_cm_id 
>  
>  	return 0;
>  err:
> -	cma_comp_exch(id_priv, CMA_ADDR_QUERY, expected_state);
> +	cma_comp_exch(id_priv, CMA_ADDR_QUERY, CMA_ADDR_BOUND);
>  	cma_deref_id(id_priv);
>  	return ret;
>  }
> 
> 
