[openib-general] [PATCH] RFC: AMSO1100 iWARP Driver
Tom Tucker
tom at opengridcomputing.com
Tue Jan 24 05:11:38 PST 2006
Thanks for the review. Good catch on the free(pd).
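For anyone following along, the fix is simply to free the object that was
actually allocated on that path. Roughly (a sketch of the corrected error
path, not the literal diff; the helper names are meant to be illustrative):

        qp = kmalloc(sizeof(*qp), GFP_KERNEL);
        if (!qp)
                return ERR_PTR(-ENOMEM);

        err = c2_alloc_qp(to_c2dev(pd->device), to_c2pd(pd), init_attr, qp);
        if (err) {
                kfree(qp);              /* was mistakenly kfree(pd) */
                return ERR_PTR(err);
        }
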
On Tue, 2006-01-24 at 16:12 +0530, Krishna Kumar2 wrote:
> Hi Tom,
>
> - c2_create_qp() should kfree(qp) on error and not pd.
>
> Some very (very) MINOR nits:
>
> - Shouldn't c2_pd_alloc() be called c2_pd_id_alloc()? And why is
> might_sleep() required for this and c2_pd_free()? Shouldn't that be
> in c2_alloc_pd(), before the kmalloc()?
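[ To illustrate the placement being suggested above -- this is only a sketch,
not code from the patch, and the body shown is hypothetical: ]

        static struct ib_pd *c2_alloc_pd(struct ib_device *ibdev,
                                         struct ib_ucontext *context,
                                         struct ib_udata *udata)
        {
                struct c2_pd *pd;
                int err;

                might_sleep();  /* moved up front, before the kmalloc() */

                pd = kmalloc(sizeof(*pd), GFP_KERNEL);
                if (!pd)
                        return ERR_PTR(-ENOMEM);

                /* the pd-id allocator itself then no longer needs might_sleep() */
                err = c2_pd_alloc(to_c2dev(ibdev), pd);
                if (err) {
                        kfree(pd);
                        return ERR_PTR(err);
                }

                return &pd->ibpd;
        }
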
>
> - netevent_notifier: why is it using KERN_ERR and not KERN_INFO?
>
> - c2_mq_init() does a return at the end of the routine, which can be removed:
>
> + return;
>
> - Remove typecasts of void *, e.g.:
>
> + reply_vq = (struct c2_mq *)c2dev->qptr_array[mq_index];
>
> - Change (for consistency and clarity):
> + rx_ring->start = kmalloc(sizeof(*elem) * rx_ring->count, GFP_KERNEL);
> to
> + rx_ring->start = kmalloc(sizeof(*rx_ring->start) * rx_ring->count, GFP_KERNEL);
>
> - In c2_tx_clean, you can do:
>
> + if (netif_queue_stopped(c2_port->netdev) && c2_port->tx_avail > MAX_SKB_FRAGS + 1)
> + netif_wake_queue(c2_port->netdev);
>
> - Lots of
> + if (err) {
> + break;
> + }
>
> (braces around a single line; not a big deal, but they can be removed)
>
> - c2_init_qp_table() can be written as:
> + if (err)
> + c2_alloc_cleanup(&c2dev->qp_table.alloc);
> + return err;
>
> removing some redundant returns.
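[ Spelled out, the shape being suggested is to let the error and success cases
share one return -- again just a sketch; the name of the second init step is a
placeholder, and the cleanup call is the one quoted above: ]

        err = c2_qp_table_array_init(c2dev);    /* placeholder for the later init call */
        if (err)
                c2_alloc_cleanup(&c2dev->qp_table.alloc);
        return err;                             /* 0 on success, error code otherwise */
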
>
> Thanks,
>
> - KK
>
>
> openib-general-bounces at openib.org wrote on 01/24/2006 10:45:52 AM:
>
> >
> >
> > Given some of the discussion re: support for the AMSO1100, enclosed is a
> > patch for an OpenIB provider in support of the AMSO1100. While we use
> > these devices extensively for testing of iWARP support at OGC, the
> > driver has not seen anywhere near the kind of attention that the mthca
> > driver has.
> >
> > This patch requires the previously submitted iWARP core support and CMA
> > patch.
> >
> > Please review and offer suggestions as to what we can do to improve it.
> > There are some known issues with ULPs that do not filter based on node
> > type; they can become confused and crash when loading and unloading this
> > driver.
> >
> > Patches are available for these ULP add_one and remove_one handlers, but
> > these are trivial and can be considered separately.
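[ For reference, the ULP-side change amounts to an early node-type check in
each add_one handler. Sketch only: IB_NODE_RNIC is the node type introduced by
the iWARP core patch, and ipoib is used here purely as an example: ]

        static void ipoib_add_one(struct ib_device *device)
        {
                /* iWARP RNICs are not IB HCAs; bail out early so the ULP does
                 * not try to bring up IB-specific state on this device. */
                if (device->node_type == IB_NODE_RNIC)
                        return;

                /* ... existing IPoIB per-device setup continues unchanged ... */
        }
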
> >
> > Index: Kconfig
> > ===================================================================
> > --- Kconfig (revision 5098)
> > +++ Kconfig (working copy)
> > @@ -32,6 +32,8 @@
> >
> > source "drivers/infiniband/hw/mthca/Kconfig"
> >
> > +source "drivers/infiniband/hw/amso1100/Kconfig"
> > +
> > source "drivers/infiniband/hw/ehca/Kconfig"
> >
> > source "drivers/infiniband/ulp/ipoib/Kconfig"
> > Index: Makefile
> > ===================================================================
> > --- Makefile (revision 5098)
> > +++ Makefile (working copy)
> > @@ -1,6 +1,7 @@
> > obj-$(CONFIG_INFINIBAND) += core/
> > obj-$(CONFIG_IPATH_CORE) += hw/ipath/
> > obj-$(CONFIG_INFINIBAND_MTHCA) += hw/mthca/
> > +obj-$(CONFIG_INFINIBAND_AMSO1100) += hw/amso1100/
> > obj-$(CONFIG_INFINIBAND_IPOIB) += ulp/ipoib/
> > obj-$(CONFIG_INFINIBAND_SDP) += ulp/sdp/
> > obj-$(CONFIG_INFINIBAND_SRP) += ulp/srp/
> > Index: hw/amso1100/cc_ae.h
> > ===================================================================
> > --- hw/amso1100/cc_ae.h (revision 0)
> > +++ hw/amso1100/cc_ae.h (revision 0)
> > @@ -0,0 +1,108 @@
> > +/*
> > + * Copyright (c) 2005 Ammasso, Inc. All rights reserved.
> > + * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
> > + *
> > + * This software is available to you under a choice of one of two
> > + * licenses. You may choose to be licensed under the terms of the GNU
> > + * General Public License (GPL) Version 2, available from the file
> > + * COPYING in the main directory of this source tree, or the
> > + * OpenIB.org BSD license below:
> > + *
> > + * Redistribution and use in source and binary forms, with or
> > + * without modification, are permitted provided that the following
> > + * conditions are met:
> > + *
> > + * - Redistributions of source code must retain the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer.
> > + *
> > + * - Redistributions in binary form must reproduce the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer in the documentation and/or other materials
> > + * provided with the distribution.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> > + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> > + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
> > + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
> > + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
> > + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
> > + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> > + * SOFTWARE.
> > + */
> > +#ifndef _CC_AE_H_
> > +#define _CC_AE_H_
> > +
> > +/*
> > + * WARNING: If you change this file, also bump CC_IVN_BASE
> > + * in common/include/clustercore/cc_ivn.h.
> > + */
> > +
> > +/*
> > + * Asynchronous Event Identifiers
> > + *
> > + * These start at 0x80 only so it's obvious from inspection that
> > + * they are not work-request statuses. This isn't critical.
> > + *
> > + * NOTE: these event id's must fit in eight bits.
> > + */
> > +typedef enum {
> > + CCAE_REMOTE_SHUTDOWN = 0x80,
> > + CCAE_ACTIVE_CONNECT_RESULTS,
> > + CCAE_CONNECTION_REQUEST,
> > + CCAE_LLP_CLOSE_COMPLETE,
> > + CCAE_TERMINATE_MESSAGE_RECEIVED,
> > + CCAE_LLP_CONNECTION_RESET,
> > + CCAE_LLP_CONNECTION_LOST,
> > + CCAE_LLP_SEGMENT_SIZE_INVALID,
> > + CCAE_LLP_INVALID_CRC,
> > + CCAE_LLP_BAD_FPDU,
> > + CCAE_INVALID_DDP_VERSION,
> > + CCAE_INVALID_RDMA_VERSION,
> > + CCAE_UNEXPECTED_OPCODE,
> > + CCAE_INVALID_DDP_QUEUE_NUMBER,
> > + CCAE_RDMA_READ_NOT_ENABLED,
> > + CCAE_RDMA_WRITE_NOT_ENABLED,
> > + CCAE_RDMA_READ_TOO_SMALL,
> > + CCAE_NO_L_BIT,
> > + CCAE_TAGGED_INVALID_STAG,
> > + CCAE_TAGGED_BASE_BOUNDS_VIOLATION,
> > + CCAE_TAGGED_ACCESS_RIGHTS_VIOLATION,
> > + CCAE_TAGGED_INVALID_PD,
> > + CCAE_WRAP_ERROR,
> > + CCAE_BAD_CLOSE,
> > + CCAE_BAD_LLP_CLOSE,
> > + CCAE_INVALID_MSN_RANGE,
> > + CCAE_INVALID_MSN_GAP,
> > + CCAE_IRRQ_OVERFLOW,
> > + CCAE_IRRQ_MSN_GAP,
> > + CCAE_IRRQ_MSN_RANGE,
> > + CCAE_IRRQ_INVALID_STAG,
> > + CCAE_IRRQ_BASE_BOUNDS_VIOLATION,
> > + CCAE_IRRQ_ACCESS_RIGHTS_VIOLATION,
> > + CCAE_IRRQ_INVALID_PD,
> > + CCAE_IRRQ_WRAP_ERROR,
> > + CCAE_CQ_SQ_COMPLETION_OVERFLOW,
> > + CCAE_CQ_RQ_COMPLETION_ERROR,
> > + CCAE_QP_SRQ_WQE_ERROR,
> > + CCAE_QP_LOCAL_CATASTROPHIC_ERROR,
> > + CCAE_CQ_OVERFLOW,
> > + CCAE_CQ_OPERATION_ERROR,
> > + CCAE_SRQ_LIMIT_REACHED,
> > + CCAE_QP_RQ_LIMIT_REACHED,
> > + CCAE_SRQ_CATASTROPHIC_ERROR,
> > + CCAE_RNIC_CATASTROPHIC_ERROR
> > + /* WARNING If you add more id's, make sure their values fit in eight bits. */
> > +} cc_event_id_t;
> > +
> > +/*
> > + * Resource Indicators and Identifiers
> > + */
> > +typedef enum {
> > + CC_RES_IND_QP = 1,
> > + CC_RES_IND_EP,
> > + CC_RES_IND_CQ,
> > + CC_RES_IND_SRQ,
> > +} cc_resource_indicator_t;
> > +
> > +#endif /* _CC_AE_H_ */
> > Index: hw/amso1100/Kconfig
> > ===================================================================
> > --- hw/amso1100/Kconfig (revision 0)
> > +++ hw/amso1100/Kconfig (revision 0)
> > @@ -0,0 +1,15 @@
> > +config INFINIBAND_AMSO1100
> > + tristate "Ammasso 1100 HCA support"
> > + depends on PCI && INFINIBAND
> > + ---help---
> > + This is a low-level driver for the Ammasso 1100 host
> > + channel adapter (HCA).
> > +
> > +config INFINIBAND_AMSO1100_DEBUG
> > + bool "Verbose debugging output"
> > + depends on INFINIBAND_AMSO1100
> > + default n
> > + ---help---
> > + This option causes the amso1100 driver to produce a bunch of
> > + debug messages. Select this if you are developing the driver
> > + or trying to diagnose a problem.
> > Index: hw/amso1100/c2_intr.c
> > ===================================================================
> > --- hw/amso1100/c2_intr.c (revision 0)
> > +++ hw/amso1100/c2_intr.c (revision 0)
> > @@ -0,0 +1,177 @@
> > +/*
> > + * Copyright (c) 2005 Ammasso, Inc. All rights reserved.
> > + * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
> > + *
> > + * This software is available to you under a choice of one of two
> > + * licenses. You may choose to be licensed under the terms of the GNU
> > + * General Public License (GPL) Version 2, available from the file
> > + * COPYING in the main directory of this source tree, or the
> > + * OpenIB.org BSD license below:
> > + *
> > + * Redistribution and use in source and binary forms, with or
> > + * without modification, are permitted provided that the following
> > + * conditions are met:
> > + *
> > + * - Redistributions of source code must retain the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer.
> > + *
> > + * - Redistributions in binary form must reproduce the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer in the documentation and/or other materials
> > + * provided with the distribution.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> > + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> > + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
> > + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
> > + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
> > + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
> > + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> > + * SOFTWARE.
> > + */
> > +#include "c2.h"
> > +#include "c2_vq.h"
> > +
> > +static void handle_mq(struct c2_dev *c2dev, u32 index);
> > +static void handle_vq(struct c2_dev *c2dev, u32 mq_index);
> > +
> > +/*
> > + * Handle RNIC interrupts
> > + */
> > +void
> > +c2_rnic_interrupt(struct c2_dev *c2dev)
> > +{
> > + unsigned int mq_index;
> > +
> > + while (c2dev->hints_read != be16_to_cpu(c2dev->hint_count)) {
> > + mq_index = c2_read32(c2dev->regs + PCI_BAR0_HOST_HINT);
> > + if (mq_index & 0x80000000) {
> > + break;
> > + }
> > +
> > + c2dev->hints_read++;
> > + handle_mq(c2dev, mq_index);
> > + }
> > +
> > +}
> > +
> > +/*
> > + * Top level MQ handler
> > + */
> > +static void
> > +handle_mq(struct c2_dev *c2dev, u32 mq_index)
> > +{
> > + if (c2dev->qptr_array[mq_index] == NULL) {
> > + dprintk(KERN_INFO "handle_mq: stray activity for mq_index=%d\n",
> mq_index);
> > + return;
> > + }
> > +
> > + switch (mq_index) {
> > + case (0):
> > + /*
> > + * An index of 0 in the activity queue
> > + * indicates the req vq now has messages
> > + * available...
> > + *
> > + * Wake up any waiters waiting on req VQ
> > + * message availability.
> > + */
> > + wake_up(&c2dev->req_vq_wo);
> > + break;
> > + case (1):
> > + handle_vq(c2dev, mq_index);
> > + break;
> > + case (2):
> > + spin_lock(&c2dev->aeq_lock);
> > + c2_ae_event(c2dev, mq_index);
> > + spin_unlock(&c2dev->aeq_lock);
> > + break;
> > + default:
> > + c2_cq_event(c2dev, mq_index);
> > + break;
> > + }
> > +
> > + return;
> > +}
> > +
> > +/*
> > + * Handles verbs WR replies.
> > + */
> > +static void
> > +handle_vq(struct c2_dev *c2dev, u32 mq_index)
> > +{
> > + void *adapter_msg, *reply_msg;
> > + ccwr_hdr_t *host_msg;
> > + ccwr_hdr_t tmp;
> > + struct c2_mq *reply_vq;
> > + struct c2_vq_req* req;
> > +
> > + reply_vq = (struct c2_mq *)c2dev->qptr_array[mq_index];
> > +
> > + {
> > +
> > + /*
> > + * get next msg from mq_index into adapter_msg.
> > + * don't free it yet.
> > + */
> > + adapter_msg = c2_mq_consume(reply_vq);
> > + dprintk("handle_vq: adapter_msg=%p\n", adapter_msg);
> > + if (adapter_msg == NULL) {
> > + return;
> > + }
> > +
> > + host_msg = vq_repbuf_alloc(c2dev);
> > +
> > + /*
> > + * If we can't get a host buffer, then we'll still
> > + * wakeup the waiter, we just won't give him the msg.
> > + * It is assumed the waiter will deal with this...
> > + */
> > + if (!host_msg) {
> > + dprintk("handle_vq: no repbufs!\n");
> > +
> > + /*
> > + * just copy the WR header into a local variable.
> > + * this allows us to still demux on the context
> > + */
> > + host_msg = &tmp;
> > + memcpy(host_msg, adapter_msg, sizeof(tmp));
> > + reply_msg = NULL;
> > + } else {
> > + memcpy(host_msg, adapter_msg, reply_vq->msg_size);
> > + reply_msg = host_msg;
> > + }
> > +
> > + /*
> > + * consume the msg from the MQ
> > + */
> > + c2_mq_free(reply_vq);
> > +
> > + /*
> > + * wakeup the waiter.
> > + */
> > + req = (struct c2_vq_req *)(unsigned long)host_msg->context;
> > + if (req == NULL) {
> > + /*
> > + * We should never get here, as the adapter should
> > + * never send us a reply that we're not expecting.
> > + */
> > + vq_repbuf_free(c2dev, host_msg);
> > + dprintk("handle_vq: UNEXPECTEDLY got NULL req\n");
> > + return;
> > + }
> > + req->reply_msg = (u64)(unsigned long)(reply_msg);
> > + atomic_set(&req->reply_ready, 1);
> > + dprintk("handle_vq: wakeup req %p\n", req);
> > + wake_up(&req->wait_object);
> > +
> > + /*
> > + * If the request was cancelled, then this put will
> > + * free the vq_req memory...and reply_msg!!!
> > + */
> > + vq_req_put(c2dev, req);
> > + }
> > +
> > +}
> > +
> > Index: hw/amso1100/c2_mq.c
> > ===================================================================
> > --- hw/amso1100/c2_mq.c (revision 0)
> > +++ hw/amso1100/c2_mq.c (revision 0)
> > @@ -0,0 +1,182 @@
> > +/*
> > + * Copyright (c) 2005 Ammasso, Inc. All rights reserved.
> > + * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
> > + *
> > + * This software is available to you under a choice of one of two
> > + * licenses. You may choose to be licensed under the terms of the GNU
> > + * General Public License (GPL) Version 2, available from the file
> > + * COPYING in the main directory of this source tree, or the
> > + * OpenIB.org BSD license below:
> > + *
> > + * Redistribution and use in source and binary forms, with or
> > + * without modification, are permitted provided that the following
> > + * conditions are met:
> > + *
> > + * - Redistributions of source code must retain the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer.
> > + *
> > + * - Redistributions in binary form must reproduce the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer in the documentation and/or other materials
> > + * provided with the distribution.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> > + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> > + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
> > + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
> > + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
> > + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
> > + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> > + * SOFTWARE.
> > + */
> > +#include "c2.h"
> > +#include "c2_mq.h"
> > +
> > +#define BUMP(q,p) (p) = ((p)+1) % (q)->q_size
> > +#define BUMP_SHARED(q,p) (p) = cpu_to_be16((be16_to_cpu(p)+1) % (q)->q_size)
> > +
> > +void *
> > +c2_mq_alloc(struct c2_mq *q)
> > +{
> > + assert(q);
> > + assert(q->magic == C2_MQ_MAGIC);
> > + assert(q->type == C2_MQ_ADAPTER_TARGET);
> > +
> > + if (c2_mq_full(q)) {
> > + return NULL;
> > + } else {
> > +#ifdef C2_DEBUG
> > + ccwr_hdr_t *m = (ccwr_hdr_t*)(q->msg_pool + q->priv * q->msg_size);
> > +#ifdef CCMSGMAGIC
> > + assert(m->magic == be32_to_cpu(~CCWR_MAGIC));
> > + m->magic = cpu_to_be32(CCWR_MAGIC);
> > +#endif
> > + dprintk("c2_mq_alloc %p\n", m);
> > + return m;
> > +#else
> > + return q->msg_pool + q->priv * q->msg_size;
> > +#endif
> > + }
> > +}
> > +
> > +void
> > +c2_mq_produce(struct c2_mq *q)
> > +{
> > + assert(q);
> > + assert(q->magic == C2_MQ_MAGIC);
> > + assert(q->type == C2_MQ_ADAPTER_TARGET);
> > +
> > + if (!c2_mq_full(q)) {
> > + BUMP(q, q->priv);
> > + q->hint_count++;
> > + /* Update peer's offset. */
> > + q->peer->shared = cpu_to_be16(q->priv);
> > + }
> > +}
> > +
> > +void *
> > +c2_mq_consume(struct c2_mq *q)
> > +{
> > + assert(q);
> > + assert(q->magic == C2_MQ_MAGIC);
> > + assert(q->type == C2_MQ_HOST_TARGET);
> > +
> > + if (c2_mq_empty(q)) {
> > + return NULL;
> > + } else {
> > +#ifdef C2_DEBUG
> > + ccwr_hdr_t *m = (ccwr_hdr_t*)
> > + (q->msg_pool + q->priv * q->msg_size);
> > +#ifdef CCMSGMAGIC
> > + assert(m->magic == be32_to_cpu(CCWR_MAGIC));
> > +#endif
> > + dprintk("c2_mq_consume %p\n", m);
> > + return m;
> > +#else
> > + return q->msg_pool + q->priv * q->msg_size;
> > +#endif
> > + }
> > +}
> > +
> > +void
> > +c2_mq_free(struct c2_mq *q)
> > +{
> > + assert(q);
> > + assert(q->magic == C2_MQ_MAGIC);
> > + assert(q->type == C2_MQ_HOST_TARGET);
> > +
> > + if (!c2_mq_empty(q)) {
> > +#ifdef C2_DEBUG
> > +{
> > + dprintk("c2_mq_free %p\n", (ccwr_hdr_t*)(q->msg_pool + q->priv *
> q->msg_size));
> > +}
> > +#endif
> > +
> > +#ifdef CCMSGMAGIC
> > +{
> > + ccwr_hdr_t *m = (ccwr_hdr_t*)
> > + (q->msg_pool + q->priv * q->msg_size);
> > + m->magic = cpu_to_be32(~CCWR_MAGIC);
> > +}
> > +#endif
> > + BUMP(q, q->priv);
> > + /* Update peer's offset. */
> > + q->peer->shared = cpu_to_be16(q->priv);
> > + }
> > +}
> > +
> > +
> > +void
> > +c2_mq_lconsume(struct c2_mq *q, u32 wqe_count)
> > +{
> > + assert(q);
> > + assert(q->magic == C2_MQ_MAGIC);
> > + assert(q->type == C2_MQ_ADAPTER_TARGET);
> > +
> > + while (wqe_count--) {
> > + assert(!c2_mq_empty(q));
> > + BUMP_SHARED(q, *q->shared);
> > + }
> > +}
> > +
> > +
> > +u32
> > +c2_mq_count(struct c2_mq *q)
> > +{
> > + s32 count;
> > +
> > + assert(q);
> > + if (q->type == C2_MQ_HOST_TARGET) {
> > + count = be16_to_cpu(*q->shared) - q->priv;
> > + } else {
> > + count = q->priv - be16_to_cpu(*q->shared);
> > + }
> > +
> > + if (count < 0) {
> > + count += q->q_size;
> > + }
> > +
> > + return (u32)count;
> > +}
> > +
> > +void
> > +c2_mq_init(struct c2_mq *q, u32 index, u32 q_size,
> > + u32 msg_size, u8 *pool_start, u16 *peer,
> > + u32 type)
> > +{
> > + assert(q->shared);
> > +
> > + /* This code assumes the byte swapping has already been done! */
> > + q->index = index;
> > + q->q_size = q_size;
> > + q->msg_size = msg_size;
> > + q->msg_pool = pool_start;
> > + q->peer = (struct c2_mq_shared *)peer;
> > + q->magic = C2_MQ_MAGIC;
> > + q->type = type;
> > + q->priv = 0;
> > + q->hint_count = 0;
> > + return;
> > +}
> > +
> > Index: hw/amso1100/cc_wr.h
> > ===================================================================
> > --- hw/amso1100/cc_wr.h (revision 0)
> > +++ hw/amso1100/cc_wr.h (revision 0)
> > @@ -0,0 +1,1340 @@
> > +/*
> > + * Copyright (c) 2005 Ammasso, Inc. All rights reserved.
> > + * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
> > + *
> > + * This software is available to you under a choice of one of two
> > + * licenses. You may choose to be licensed under the terms of the GNU
> > + * General Public License (GPL) Version 2, available from the file
> > + * COPYING in the main directory of this source tree, or the
> > + * OpenIB.org BSD license below:
> > + *
> > + * Redistribution and use in source and binary forms, with or
> > + * without modification, are permitted provided that the following
> > + * conditions are met:
> > + *
> > + * - Redistributions of source code must retain the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer.
> > + *
> > + * - Redistributions in binary form must reproduce the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer in the documentation and/or other materials
> > + * provided with the distribution.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> > + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> > + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
> > + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
> > + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
> > + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
> > + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> > + * SOFTWARE.
> > + */
> > +#ifndef _CC_WR_H_
> > +#define _CC_WR_H_
> > +#include "cc_types.h"
> > +/*
> > + * WARNING: If you change this file, also bump CC_IVN_BASE
> > + * in common/include/clustercore/cc_ivn.h.
> > + */
> > +
> > +#ifdef CCDEBUG
> > +#define CCWR_MAGIC 0xb07700b0
> > +#endif
> > +
> > +#define WR_BUILD_STR_LEN 64
> > +
> > +#ifdef _MSC_VER
> > +#define PACKED
> > +#pragma pack(push)
> > +#pragma pack(1)
> > +#define __inline__ __inline
> > +#else
> > +#define PACKED __attribute__ ((packed))
> > +#endif
> > +
> > +/*
> > + * WARNING: All of these structs need to align any 64bit types on
> > + * 64 bit boundaries! 64bit types include u64.
> > + */
> > +
> > +/*
> > + * Clustercore Work Request Header. Be sensitive to field layout
> > + * and alignment.
> > + */
> > +typedef struct {
> > + /* wqe_count is part of the cqe. It is put here so the
> > + * adapter can write to it while the wr is pending without
> > + * clobbering part of the wr. This word need not be dma'd
> > + * from the host to adapter by libccil, but we copy it anyway
> > + * to make the memcpy to the adapter better aligned.
> > + */
> > + u32 wqe_count;
> > +
> > + /* Put these fields next so that later 32- and 64-bit
> > + * quantities are naturally aligned.
> > + */
> > + u8 id;
> > + u8 result; /* adapter -> host */
> > + u8 sge_count; /* host -> adapter */
> > + u8 flags; /* host -> adapter */
> > +
> > + u64 context;
> > +#ifdef CCMSGMAGIC
> > + u32 magic;
> > + u32 pad;
> > +#endif
> > +} PACKED ccwr_hdr_t;
> > +
> > +/*
> > + *------------------------ RNIC ------------------------
> > + */
> > +
> > +/*
> > + * WR_RNIC_OPEN
> > + */
> > +
> > +/*
> > + * Flags for the RNIC WRs
> > + */
> > +typedef enum {
> > + RNIC_IRD_STATIC = 0x0001,
> > + RNIC_ORD_STATIC = 0x0002,
> > + RNIC_QP_STATIC = 0x0004,
> > + RNIC_SRQ_SUPPORTED = 0x0008,
> > + RNIC_PBL_BLOCK_MODE = 0x0010,
> > + RNIC_SRQ_MODEL_ARRIVAL = 0x0020,
> > + RNIC_CQ_OVF_DETECTED = 0x0040,
> > + RNIC_PRIV_MODE = 0x0080
> > +} PACKED cc_rnic_flags_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u64 user_context;
> > + u16 flags; /* See cc_rnic_flags_t */
> > + u16 port_num;
> > +} PACKED ccwr_rnic_open_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > +} PACKED ccwr_rnic_open_rep_t;
> > +
> > +typedef union {
> > + ccwr_rnic_open_req_t req;
> > + ccwr_rnic_open_rep_t rep;
> > +} PACKED ccwr_rnic_open_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > +} PACKED ccwr_rnic_query_req_t;
> > +
> > +/*
> > + * WR_RNIC_QUERY
> > + */
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u64 user_context;
> > + u32 vendor_id;
> > + u32 part_number;
> > + u32 hw_version;
> > + u32 fw_ver_major;
> > + u32 fw_ver_minor;
> > + u32 fw_ver_patch;
> > + char fw_ver_build_str[WR_BUILD_STR_LEN];
> > + u32 max_qps;
> > + u32 max_qp_depth;
> > + u32 max_srq_depth;
> > + u32 max_send_sgl_depth;
> > + u32 max_rdma_sgl_depth;
> > + u32 max_cqs;
> > + u32 max_cq_depth;
> > + u32 max_cq_event_handlers;
> > + u32 max_mrs;
> > + u32 max_pbl_depth;
> > + u32 max_pds;
> > + u32 max_global_ird;
> > + u32 max_global_ord;
> > + u32 max_qp_ird;
> > + u32 max_qp_ord;
> > + u32 flags; /* See cc_rnic_flags_t */
> > + u32 max_mws;
> > + u32 pbe_range_low;
> > + u32 pbe_range_high;
> > + u32 max_srqs;
> > + u32 page_size;
> > +} PACKED ccwr_rnic_query_rep_t;
> > +
> > +typedef union {
> > + ccwr_rnic_query_req_t req;
> > + ccwr_rnic_query_rep_t rep;
> > +} PACKED ccwr_rnic_query_t;
> > +
> > +/*
> > + * WR_RNIC_GETCONFIG
> > + */
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 option; /* see cc_getconfig_cmd_t */
> > + u64 reply_buf;
> > + u32 reply_buf_len;
> > +} PACKED ccwr_rnic_getconfig_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 option; /* see cc_getconfig_cmd_t */
> > + u32 count_len; /* length of the number of addresses configured */
> > +} PACKED ccwr_rnic_getconfig_rep_t;
> > +
> > +typedef union {
> > + ccwr_rnic_getconfig_req_t req;
> > + ccwr_rnic_getconfig_rep_t rep;
> > +} PACKED ccwr_rnic_getconfig_t;
> > +
> > +/*
> > + * WR_RNIC_SETCONFIG
> > + */
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 option; /* See cc_setconfig_cmd_t */
> > + /* variable data and pad See cc_netaddr_t and
> > + * cc_route_t
> > + */
> > + u8 data[0];
> > +} PACKED ccwr_rnic_setconfig_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > +} PACKED ccwr_rnic_setconfig_rep_t;
> > +
> > +typedef union {
> > + ccwr_rnic_setconfig_req_t req;
> > + ccwr_rnic_setconfig_rep_t rep;
> > +} PACKED ccwr_rnic_setconfig_t;
> > +
> > +/*
> > + * WR_RNIC_CLOSE
> > + */
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > +} PACKED ccwr_rnic_close_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > +} PACKED ccwr_rnic_close_rep_t;
> > +
> > +typedef union {
> > + ccwr_rnic_close_req_t req;
> > + ccwr_rnic_close_rep_t rep;
> > +} PACKED ccwr_rnic_close_t;
> > +
> > +/*
> > + *------------------------ CQ ------------------------
> > + */
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u64 shared_ht;
> > + u64 user_context;
> > + u64 msg_pool;
> > + u32 rnic_handle;
> > + u32 msg_size;
> > + u32 depth;
> > +} PACKED ccwr_cq_create_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 mq_index;
> > + u32 adapter_shared;
> > + u32 cq_handle;
> > +} PACKED ccwr_cq_create_rep_t;
> > +
> > +typedef union {
> > + ccwr_cq_create_req_t req;
> > + ccwr_cq_create_rep_t rep;
> > +} PACKED ccwr_cq_create_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 cq_handle;
> > + u32 new_depth;
> > + u64 new_msg_pool;
> > +} PACKED ccwr_cq_modify_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > +} PACKED ccwr_cq_modify_rep_t;
> > +
> > +typedef union {
> > + ccwr_cq_modify_req_t req;
> > + ccwr_cq_modify_rep_t rep;
> > +} PACKED ccwr_cq_modify_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 cq_handle;
> > +} PACKED ccwr_cq_destroy_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > +} PACKED ccwr_cq_destroy_rep_t;
> > +
> > +typedef union {
> > + ccwr_cq_destroy_req_t req;
> > + ccwr_cq_destroy_rep_t rep;
> > +} PACKED ccwr_cq_destroy_t;
> > +
> > +/*
> > + *------------------------ PD ------------------------
> > + */
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 pd_id;
> > +} PACKED ccwr_pd_alloc_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > +} PACKED ccwr_pd_alloc_rep_t;
> > +
> > +typedef union {
> > + ccwr_pd_alloc_req_t req;
> > + ccwr_pd_alloc_rep_t rep;
> > +} PACKED ccwr_pd_alloc_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 pd_id;
> > +} PACKED ccwr_pd_dealloc_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > +} PACKED ccwr_pd_dealloc_rep_t;
> > +
> > +typedef union {
> > + ccwr_pd_dealloc_req_t req;
> > + ccwr_pd_dealloc_rep_t rep;
> > +} PACKED ccwr_pd_dealloc_t;
> > +
> > +/*
> > + *------------------------ SRQ ------------------------
> > + */
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u64 shared_ht;
> > + u64 user_context;
> > + u32 rnic_handle;
> > + u32 srq_depth;
> > + u32 srq_limit;
> > + u32 sgl_depth;
> > + u32 pd_id;
> > +} PACKED ccwr_srq_create_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 srq_depth;
> > + u32 sgl_depth;
> > + u32 msg_size;
> > + u32 mq_index;
> > + u32 mq_start;
> > + u32 srq_handle;
> > +} PACKED ccwr_srq_create_rep_t;
> > +
> > +typedef union {
> > + ccwr_srq_create_req_t req;
> > + ccwr_srq_create_rep_t rep;
> > +} PACKED ccwr_srq_create_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 srq_handle;
> > +} PACKED ccwr_srq_destroy_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > +} PACKED ccwr_srq_destroy_rep_t;
> > +
> > +typedef union {
> > + ccwr_srq_destroy_req_t req;
> > + ccwr_srq_destroy_rep_t rep;
> > +} PACKED ccwr_srq_destroy_t;
> > +
> > +/*
> > + *------------------------ QP ------------------------
> > + */
> > +typedef enum {
> > + QP_RDMA_READ = 0x00000001, /* RDMA read enabled? */
> > + QP_RDMA_WRITE = 0x00000002, /* RDMA write enabled? */
> > + QP_MW_BIND = 0x00000004, /* MWs enabled */
> > + QP_ZERO_STAG = 0x00000008, /* enabled? */
> > + QP_REMOTE_TERMINATION = 0x00000010, /* remote end terminated
> */
> > + QP_RDMA_READ_RESPONSE = 0x00000020 /* Remote RDMA read */
> > + /* enabled? */
> > +} PACKED ccwr_qp_flags_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u64 shared_sq_ht;
> > + u64 shared_rq_ht;
> > + u64 user_context;
> > + u32 rnic_handle;
> > + u32 sq_cq_handle;
> > + u32 rq_cq_handle;
> > + u32 sq_depth;
> > + u32 rq_depth;
> > + u32 srq_handle;
> > + u32 srq_limit;
> > + u32 flags; /* see ccwr_qp_flags_t */
> > + u32 send_sgl_depth;
> > + u32 recv_sgl_depth;
> > + u32 rdma_write_sgl_depth;
> > + u32 ord;
> > + u32 ird;
> > + u32 pd_id;
> > +} PACKED ccwr_qp_create_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 sq_depth;
> > + u32 rq_depth;
> > + u32 send_sgl_depth;
> > + u32 recv_sgl_depth;
> > + u32 rdma_write_sgl_depth;
> > + u32 ord;
> > + u32 ird;
> > + u32 sq_msg_size;
> > + u32 sq_mq_index;
> > + u32 sq_mq_start;
> > + u32 rq_msg_size;
> > + u32 rq_mq_index;
> > + u32 rq_mq_start;
> > + u32 qp_handle;
> > +} PACKED ccwr_qp_create_rep_t;
> > +
> > +typedef union {
> > + ccwr_qp_create_req_t req;
> > + ccwr_qp_create_rep_t rep;
> > +} PACKED ccwr_qp_create_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 qp_handle;
> > +} PACKED ccwr_qp_query_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u64 user_context;
> > + u32 rnic_handle;
> > + u32 sq_depth;
> > + u32 rq_depth;
> > + u32 send_sgl_depth;
> > + u32 rdma_write_sgl_depth;
> > + u32 recv_sgl_depth;
> > + u32 ord;
> > + u32 ird;
> > + u16 qp_state;
> > + u16 flags; /* see ccwr_qp_flags_t */
> > + u32 qp_id;
> > + u32 local_addr;
> > + u32 remote_addr;
> > + u16 local_port;
> > + u16 remote_port;
> > + u32 terminate_msg_length; /* 0 if not present */
> > + u8 data[0];
> > + /* Terminate Message in-line here. */
> > +} PACKED ccwr_qp_query_rep_t;
> > +
> > +typedef union {
> > + ccwr_qp_query_req_t req;
> > + ccwr_qp_query_rep_t rep;
> > +} PACKED ccwr_qp_query_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u64 stream_msg;
> > + u32 stream_msg_length;
> > + u32 rnic_handle;
> > + u32 qp_handle;
> > + u32 next_qp_state;
> > + u32 ord;
> > + u32 ird;
> > + u32 sq_depth;
> > + u32 rq_depth;
> > + u32 llp_ep_handle;
> > +} PACKED ccwr_qp_modify_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 ord;
> > + u32 ird;
> > + u32 sq_depth;
> > + u32 rq_depth;
> > + u32 sq_msg_size;
> > + u32 sq_mq_index;
> > + u32 sq_mq_start;
> > + u32 rq_msg_size;
> > + u32 rq_mq_index;
> > + u32 rq_mq_start;
> > +} PACKED ccwr_qp_modify_rep_t;
> > +
> > +typedef union {
> > + ccwr_qp_modify_req_t req;
> > + ccwr_qp_modify_rep_t rep;
> > +} PACKED ccwr_qp_modify_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 qp_handle;
> > +} PACKED ccwr_qp_destroy_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > +} PACKED ccwr_qp_destroy_rep_t;
> > +
> > +typedef union {
> > + ccwr_qp_destroy_req_t req;
> > + ccwr_qp_destroy_rep_t rep;
> > +} PACKED ccwr_qp_destroy_t;
> > +
> > +/*
> > + * The CCWR_QP_CONNECT msg is posted on the verbs request queue. It can
> > + * only be posted when a QP is in IDLE state. After the connect request is
> > + * submitted to the LLP, the adapter moves the QP to CONNECT_PENDING state.
> > + * No synchronous reply from adapter to this WR. The results of
> > + * connection are passed back in an async event CCAE_ACTIVE_CONNECT_RESULTS.
> > + * See ccwr_ae_active_connect_results_t
> > + */
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 qp_handle;
> > + u32 remote_addr;
> > + u16 remote_port;
> > + u16 pad;
> > + u32 private_data_length;
> > + u8 private_data[0]; /* Private data in-line. */
> > +} PACKED ccwr_qp_connect_req_t;
> > +
> > +typedef struct {
> > + ccwr_qp_connect_req_t req;
> > + /* no synchronous reply. */
> > +} PACKED ccwr_qp_connect_t;
> > +
> > +
> > +/*
> > + *------------------------ MM ------------------------
> > + */
> > +
> > +typedef cc_mm_flags_t ccwr_mr_flags_t; /* cc_types.h */
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 pbl_depth;
> > + u32 pd_id;
> > + u32 flags; /* See ccwr_mr_flags_t */
> > +} PACKED ccwr_nsmr_stag_alloc_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 pbl_depth;
> > + u32 stag_index;
> > +} PACKED ccwr_nsmr_stag_alloc_rep_t;
> > +
> > +typedef union {
> > + ccwr_nsmr_stag_alloc_req_t req;
> > + ccwr_nsmr_stag_alloc_rep_t rep;
> > +} PACKED ccwr_nsmr_stag_alloc_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u64 va;
> > + u32 rnic_handle;
> > + u16 flags; /* See ccwr_mr_flags_t */
> > + u8 stag_key;
> > + u8 pad;
> > + u32 pd_id;
> > + u32 pbl_depth;
> > + u32 pbe_size;
> > + u32 fbo;
> > + u32 length;
> > + u32 addrs_length;
> > + /* array of paddrs (must be aligned on a 64bit boundary) */
> > + u64 paddrs[0];
> > +} PACKED ccwr_nsmr_register_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 pbl_depth;
> > + u32 stag_index;
> > +} PACKED ccwr_nsmr_register_rep_t;
> > +
> > +typedef union {
> > + ccwr_nsmr_register_req_t req;
> > + ccwr_nsmr_register_rep_t rep;
> > +} PACKED ccwr_nsmr_register_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 flags; /* See ccwr_mr_flags_t */
> > + u32 stag_index;
> > + u32 addrs_length;
> > + /* array of paddrs (must be aligned on a 64bit boundary) */
> > + u64 paddrs[0];
> > +} PACKED ccwr_nsmr_pbl_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > +} PACKED ccwr_nsmr_pbl_rep_t;
> > +
> > +typedef union {
> > + ccwr_nsmr_pbl_req_t req;
> > + ccwr_nsmr_pbl_rep_t rep;
> > +} PACKED ccwr_nsmr_pbl_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 stag_index;
> > +} PACKED ccwr_mr_query_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u8 stag_key;
> > + u8 pad[3];
> > + u32 pd_id;
> > + u32 flags; /* See ccwr_mr_flags_t */
> > + u32 pbl_depth;
> > +} PACKED ccwr_mr_query_rep_t;
> > +
> > +typedef union {
> > + ccwr_mr_query_req_t req;
> > + ccwr_mr_query_rep_t rep;
> > +} PACKED ccwr_mr_query_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 stag_index;
> > +} PACKED ccwr_mw_query_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u8 stag_key;
> > + u8 pad[3];
> > + u32 pd_id;
> > + u32 flags; /* See ccwr_mr_flags_t */
> > +} PACKED ccwr_mw_query_rep_t;
> > +
> > +typedef union {
> > + ccwr_mw_query_req_t req;
> > + ccwr_mw_query_rep_t rep;
> > +} PACKED ccwr_mw_query_t;
> > +
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 stag_index;
> > +} PACKED ccwr_stag_dealloc_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > +} PACKED ccwr_stag_dealloc_rep_t;
> > +
> > +typedef union {
> > + ccwr_stag_dealloc_req_t req;
> > + ccwr_stag_dealloc_rep_t rep;
> > +} PACKED ccwr_stag_dealloc_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u64 va;
> > + u32 rnic_handle;
> > + u16 flags; /* See ccwr_mr_flags_t */
> > + u8 stag_key;
> > + u8 pad;
> > + u32 stag_index;
> > + u32 pd_id;
> > + u32 pbl_depth;
> > + u32 pbe_size;
> > + u32 fbo;
> > + u32 length;
> > + u32 addrs_length;
> > + u32 pad1;
> > + /* array of paddrs (must be aligned on a 64bit boundary) */
> > + u64 paddrs[0];
> > +} PACKED ccwr_nsmr_reregister_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 pbl_depth;
> > + u32 stag_index;
> > +} PACKED ccwr_nsmr_reregister_rep_t;
> > +
> > +typedef union {
> > + ccwr_nsmr_reregister_req_t req;
> > + ccwr_nsmr_reregister_rep_t rep;
> > +} PACKED ccwr_nsmr_reregister_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u64 va;
> > + u32 rnic_handle;
> > + u16 flags; /* See ccwr_mr_flags_t */
> > + u8 stag_key;
> > + u8 pad;
> > + u32 stag_index;
> > + u32 pd_id;
> > +} PACKED ccwr_smr_register_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 stag_index;
> > +} PACKED ccwr_smr_register_rep_t;
> > +
> > +typedef union {
> > + ccwr_smr_register_req_t req;
> > + ccwr_smr_register_rep_t rep;
> > +} PACKED ccwr_smr_register_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 pd_id;
> > +} PACKED ccwr_mw_alloc_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 stag_index;
> > +} PACKED ccwr_mw_alloc_rep_t;
> > +
> > +typedef union {
> > + ccwr_mw_alloc_req_t req;
> > + ccwr_mw_alloc_rep_t rep;
> > +} PACKED ccwr_mw_alloc_t;
> > +
> > +/*
> > + *------------------------ WRs -----------------------
> > + */
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr; /* Has status and WR Type */
> > +} PACKED ccwr_user_hdr_t;
> > +
> > +/* Completion queue entry. */
> > +typedef struct {
> > + ccwr_hdr_t hdr; /* Has status and WR Type */
> > + u64 qp_user_context;/* cc_user_qp_t * */
> > + u32 qp_state; /* Current QP State */
> > + u32 handle; /* QPID or EP Handle */
> > + u32 bytes_rcvd; /* valid for RECV WCs */
> > + u32 stag;
> > +} PACKED ccwr_ce_t;
> > +
> > +
> > +/*
> > + * Flags used for all post-sq WRs. These must fit in the flags
> > + * field of the ccwr_hdr_t (eight bits).
> > + */
> > +typedef enum {
> > + SQ_SIGNALED = 0x01,
> > + SQ_READ_FENCE = 0x02,
> > + SQ_FENCE = 0x04,
> > +} PACKED cc_sq_flags_t;
> > +
> > +/*
> > + * Common fields for all post-sq WRs. Namely the standard header and a
> > + * secondary header with fields common to all post-sq WRs.
> > + */
> > +typedef struct {
> > + ccwr_user_hdr_t user_hdr;
> > +} PACKED cc_sq_hdr_t;
> > +
> > +/*
> > + * Same as above but for post-rq WRs.
> > + */
> > +typedef struct {
> > + ccwr_user_hdr_t user_hdr;
> > +} PACKED cc_rq_hdr_t;
> > +
> > +/*
> > + * use the same struct for all sends.
> > + */
> > +typedef struct {
> > + cc_sq_hdr_t sq_hdr;
> > + u32 sge_len;
> > + u32 remote_stag;
> > + u8 data[0]; /* SGE array */
> > +} PACKED ccwr_send_req_t, ccwr_send_se_req_t, ccwr_send_inv_req_t, ccwr_send_se_inv_req_t;
> > +
> > +typedef ccwr_ce_t ccwr_send_rep_t;
> > +
> > +typedef union {
> > + ccwr_send_req_t req;
> > + ccwr_send_rep_t rep;
> > +} PACKED ccwr_send_t, ccwr_send_se_t, ccwr_send_inv_t, ccwr_send_se_inv_t;
> > +
> > +typedef struct {
> > + cc_sq_hdr_t sq_hdr;
> > + u64 remote_to;
> > + u32 remote_stag;
> > + u32 sge_len;
> > + u8 data[0]; /* SGE array */
> > +} PACKED ccwr_rdma_write_req_t;
> > +
> > +typedef ccwr_ce_t ccwr_rdma_write_rep_t;
> > +
> > +typedef union {
> > + ccwr_rdma_write_req_t req;
> > + ccwr_rdma_write_rep_t rep;
> > +} PACKED ccwr_rdma_write_t;
> > +
> > +typedef struct {
> > + cc_sq_hdr_t sq_hdr;
> > + u64 local_to;
> > + u64 remote_to;
> > + u32 local_stag;
> > + u32 remote_stag;
> > + u32 length;
> > +} PACKED ccwr_rdma_read_req_t,ccwr_rdma_read_inv_req_t;
> > +
> > +typedef ccwr_ce_t ccwr_rdma_read_rep_t;
> > +
> > +typedef union {
> > + ccwr_rdma_read_req_t req;
> > + ccwr_rdma_read_rep_t rep;
> > +} PACKED ccwr_rdma_read_t, ccwr_rdma_read_inv_t;
> > +
> > +typedef struct {
> > + cc_sq_hdr_t sq_hdr;
> > + u64 va;
> > + u8 stag_key;
> > + u8 pad[3];
> > + u32 mw_stag_index;
> > + u32 mr_stag_index;
> > + u32 length;
> > + u32 flags; /* see ccwr_mr_flags_t; */
> > +} PACKED ccwr_mw_bind_req_t;
> > +
> > +typedef ccwr_ce_t ccwr_mw_bind_rep_t;
> > +
> > +typedef union {
> > + ccwr_mw_bind_req_t req;
> > + ccwr_mw_bind_rep_t rep;
> > +} PACKED ccwr_mw_bind_t;
> > +
> > +typedef struct {
> > + cc_sq_hdr_t sq_hdr;
> > + u64 va;
> > + u8 stag_key;
> > + u8 pad[3];
> > + u32 stag_index;
> > + u32 pbe_size;
> > + u32 fbo;
> > + u32 length;
> > + u32 addrs_length;
> > + /* array of paddrs (must be aligned on a 64bit boundary) */
> > + u64 paddrs[0];
> > +} PACKED ccwr_nsmr_fastreg_req_t;
> > +
> > +typedef ccwr_ce_t ccwr_nsmr_fastreg_rep_t;
> > +
> > +typedef union {
> > + ccwr_nsmr_fastreg_req_t req;
> > + ccwr_nsmr_fastreg_rep_t rep;
> > +} PACKED ccwr_nsmr_fastreg_t;
> > +
> > +typedef struct {
> > + cc_sq_hdr_t sq_hdr;
> > + u8 stag_key;
> > + u8 pad[3];
> > + u32 stag_index;
> > +} PACKED ccwr_stag_invalidate_req_t;
> > +
> > +typedef ccwr_ce_t ccwr_stag_invalidate_rep_t;
> > +
> > +typedef union {
> > + ccwr_stag_invalidate_req_t req;
> > + ccwr_stag_invalidate_rep_t rep;
> > +} PACKED ccwr_stag_invalidate_t;
> > +
> > +typedef union {
> > + cc_sq_hdr_t sq_hdr;
> > + ccwr_send_req_t send;
> > + ccwr_send_se_req_t send_se;
> > + ccwr_send_inv_req_t send_inv;
> > + ccwr_send_se_inv_req_t send_se_inv;
> > + ccwr_rdma_write_req_t rdma_write;
> > + ccwr_rdma_read_req_t rdma_read;
> > + ccwr_mw_bind_req_t mw_bind;
> > + ccwr_nsmr_fastreg_req_t nsmr_fastreg;
> > + ccwr_stag_invalidate_req_t stag_inv;
> > +} PACKED ccwr_sqwr_t;
> > +
> > +
> > +/*
> > + * RQ WRs
> > + */
> > +typedef struct {
> > + cc_rq_hdr_t rq_hdr;
> > + u8 data[0]; /* array of SGEs */
> > +} PACKED ccwr_rqwr_t, ccwr_recv_req_t;
> > +
> > +typedef ccwr_ce_t ccwr_recv_rep_t;
> > +
> > +typedef union {
> > + ccwr_recv_req_t req;
> > + ccwr_recv_rep_t rep;
> > +} PACKED ccwr_recv_t;
> > +
> > +/*
> > + * All AEs start with this header. Most AEs only need to convey the
> > + * information in the header. Some, like LLP connection events, need
> > + * more info. The union typedef ccwr_ae_t has all the possible AEs.
> > + *
> > + * hdr.context is the user_context from the rnic_open WR. NULL If this
> > + * is not affiliated with an rnic
> > + *
> > + * hdr.id is the AE identifier (eg; CCAE_REMOTE_SHUTDOWN,
> > + * CCAE_LLP_CLOSE_COMPLETE)
> > + *
> > + * resource_type is one of: CC_RES_IND_QP, CC_RES_IND_CQ, CC_RES_IND_SRQ
> > + *
> > + * user_context is the context passed down when the host created the resource.
> > + */
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u64 user_context; /* user context for this res. */
> > + u32 resource_type; /* see cc_resource_indicator_t */
> > + u32 resource; /* handle for resource */
> > + u32 qp_state; /* current QP State */
> > +} PACKED ccwr_ae_hdr_t;
> > +
> > +/*
> > + * After submitting the CCAE_ACTIVE_CONNECT_RESULTS message on the AEQ,
> > + * the adapter moves the QP into RTS state
> > + */
> > +typedef struct {
> > + ccwr_ae_hdr_t ae_hdr;
> > + u32 laddr;
> > + u32 raddr;
> > + u16 lport;
> > + u16 rport;
> > + u32 private_data_length;
> > + u8 private_data[0]; /* data is in-line in the msg. */
> > +} PACKED ccwr_ae_active_connect_results_t;
> > +
> > +/*
> > + * When connections are established by the stack (and the private data
> > + * MPA frame is received), the adapter will generate an event to the host.
> > + * The details of the connection, any private data, and the new connection
> > + * request handle is passed up via the CCAE_CONNECTION_REQUEST msg on the
> > + * AE queue:
> > + */
> > +typedef struct {
> > + ccwr_ae_hdr_t ae_hdr;
> > + u32 cr_handle; /* connreq handle (sock ptr) */
> > + u32 laddr;
> > + u32 raddr;
> > + u16 lport;
> > + u16 rport;
> > + u32 private_data_length;
> > + u8 private_data[0]; /* data is in-line in the msg. */
> > +} PACKED ccwr_ae_connection_request_t;
> > +
> > +typedef union {
> > + ccwr_ae_hdr_t ae_generic;
> > + ccwr_ae_active_connect_results_t ae_active_connect_results;
> > + ccwr_ae_connection_request_t ae_connection_request;
> > +} PACKED ccwr_ae_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u64 hint_count;
> > + u64 q0_host_shared;
> > + u64 q1_host_shared;
> > + u64 q1_host_msg_pool;
> > + u64 q2_host_shared;
> > + u64 q2_host_msg_pool;
> > +} PACKED ccwr_init_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > +} PACKED ccwr_init_rep_t;
> > +
> > +typedef union {
> > + ccwr_init_req_t req;
> > + ccwr_init_rep_t rep;
> > +} PACKED ccwr_init_t;
> > +
> > +/*
> > + * For upgrading flash.
> > + */
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > +} PACKED ccwr_flash_init_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 adapter_flash_buf_offset;
> > + u32 adapter_flash_len;
> > +} PACKED ccwr_flash_init_rep_t;
> > +
> > +typedef union {
> > + ccwr_flash_init_req_t req;
> > + ccwr_flash_init_rep_t rep;
> > +} PACKED ccwr_flash_init_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 len;
> > +} PACKED ccwr_flash_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 status;
> > +} PACKED ccwr_flash_rep_t;
> > +
> > +typedef union {
> > + ccwr_flash_req_t req;
> > + ccwr_flash_rep_t rep;
> > +} PACKED ccwr_flash_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 size;
> > +} PACKED ccwr_buf_alloc_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 offset; /* 0 if mem not available */
> > + u32 size; /* 0 if mem not available */
> > +} PACKED ccwr_buf_alloc_rep_t;
> > +
> > +typedef union {
> > + ccwr_buf_alloc_req_t req;
> > + ccwr_buf_alloc_rep_t rep;
> > +} PACKED ccwr_buf_alloc_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 offset; /* Must match value from alloc */
> > + u32 size; /* Must match value from alloc */
> > +} PACKED ccwr_buf_free_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > +} PACKED ccwr_buf_free_rep_t;
> > +
> > +typedef union {
> > + ccwr_buf_free_req_t req;
> > + ccwr_buf_free_rep_t rep;
> > +} PACKED ccwr_buf_free_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 offset;
> > + u32 size;
> > + u32 type;
> > + u32 flags;
> > +} PACKED ccwr_flash_write_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 status;
> > +} PACKED ccwr_flash_write_rep_t;
> > +
> > +typedef union {
> > + ccwr_flash_write_req_t req;
> > + ccwr_flash_write_rep_t rep;
> > +} PACKED ccwr_flash_write_t;
> > +
> > +/*
> > + * Messages for LLP connection setup.
> > + */
> > +
> > +/*
> > + * Listen Request. This allocates a listening endpoint to allow passive
> > + * connection setup. Newly established LLP connections are passed up
> > + * via an AE. See ccwr_ae_connection_request_t
> > + */
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u64 user_context; /* returned in AEs. */
> > + u32 rnic_handle;
> > + u32 local_addr; /* local addr, or 0 */
> > + u16 local_port; /* 0 means "pick one" */
> > + u16 pad;
> > + u32 backlog; /* traditional tcp listen backlog */
> > +} PACKED ccwr_ep_listen_create_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 ep_handle; /* handle to new listening ep */
> > + u16 local_port; /* resulting port... */
> > + u16 pad;
> > +} PACKED ccwr_ep_listen_create_rep_t;
> > +
> > +typedef union {
> > + ccwr_ep_listen_create_req_t req;
> > + ccwr_ep_listen_create_rep_t rep;
> > +} PACKED ccwr_ep_listen_create_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 ep_handle;
> > +} PACKED ccwr_ep_listen_destroy_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > +} PACKED ccwr_ep_listen_destroy_rep_t;
> > +
> > +typedef union {
> > + ccwr_ep_listen_destroy_req_t req;
> > + ccwr_ep_listen_destroy_rep_t rep;
> > +} PACKED ccwr_ep_listen_destroy_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 ep_handle;
> > +} PACKED ccwr_ep_query_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 local_addr;
> > + u32 remote_addr;
> > + u16 local_port;
> > + u16 remote_port;
> > +} PACKED ccwr_ep_query_rep_t;
> > +
> > +typedef union {
> > + ccwr_ep_query_req_t req;
> > + ccwr_ep_query_rep_t rep;
> > +} PACKED ccwr_ep_query_t;
> > +
> > +
> > +/*
> > + * The host passes this down to indicate acceptance of a pending iWARP
> > + * connection. The cr_handle was obtained from the CONNECTION_REQUEST
> > + * AE passed up by the adapter. See ccwr_ae_connection_request_t.
> > + */
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 qp_handle; /* QP to bind to this LLP conn */
> > + u32 ep_handle; /* LLP handle to accept */
> > + u32 private_data_length;
> > + u8 private_data[0]; /* data in-line in msg. */
> > +} PACKED ccwr_cr_accept_req_t;
> > +
> > +/*
> > + * adapter sends reply when private data is successfully submitted to
> > + * the LLP.
> > + */
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > +} PACKED ccwr_cr_accept_rep_t;
> > +
> > +typedef union {
> > + ccwr_cr_accept_req_t req;
> > + ccwr_cr_accept_rep_t rep;
> > +} PACKED ccwr_cr_accept_t;
> > +
> > +/*
> > + * The host sends this down if a given iWARP connection request was
> > + * rejected by the consumer. The cr_handle was obtained from a
> > + * previous ccwr_ae_connection_request_t AE sent by the adapter.
> > + */
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 ep_handle; /* LLP handle to reject */
> > +} PACKED ccwr_cr_reject_req_t;
> > +
> > +/*
> > + * Dunno if this is needed, but we'll add it for now. The adapter will
> > + * send the reject_reply after the LLP endpoint has been destroyed.
> > + */
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > +} PACKED ccwr_cr_reject_rep_t;
> > +
> > +typedef union {
> > + ccwr_cr_reject_req_t req;
> > + ccwr_cr_reject_rep_t rep;
> > +} PACKED ccwr_cr_reject_t;
> > +
> > +/*
> > + * console command. Used to implement a debug console over the verbs
> > + * request and reply queues.
> > + */
> > +
> > +/*
> > + * Console request message. It contains:
> > + * - message hdr with id = CCWR_CONSOLE
> > + * - the physaddr/len of host memory to be used for the reply.
> > + * - the command string. eg: "netstat -s" or "zoneinfo"
> > + */
> > +typedef struct {
> > + ccwr_hdr_t hdr; /* id = CCWR_CONSOLE */
> > + u64 reply_buf; /* pinned host buf for reply */
> > + u32 reply_buf_len; /* length of reply buffer */
> > + u8 command[0]; /* NUL terminated ascii string */
> > + /* containing the command req */
> > +} PACKED ccwr_console_req_t;
> > +
> > +/*
> > + * flags used in the console reply.
> > + */
> > +typedef enum {
> > + CONS_REPLY_TRUNCATED = 0x00000001 /* reply was truncated */
> > +} PACKED cc_console_flags_t;
> > +
> > +/*
> > + * Console reply message.
> > + * hdr.result contains the cc_status_t error if the reply was _not_ generated,
> > + * or CC_OK if the reply was generated.
> > + */
> > +typedef struct {
> > + ccwr_hdr_t hdr; /* id = CCWR_CONSOLE */
> > + u32 flags; /* see cc_console_flags_t */
> > +} PACKED ccwr_console_rep_t;
> > +
> > +typedef union {
> > + ccwr_console_req_t req;
> > + ccwr_console_rep_t rep;
> > +} PACKED ccwr_console_t;
> > +
> > +
> > +/*
> > + * Giant union with all WRs. Makes life easier...
> > + */
> > +typedef union {
> > + ccwr_hdr_t hdr;
> > + ccwr_user_hdr_t user_hdr;
> > + ccwr_rnic_open_t rnic_open;
> > + ccwr_rnic_query_t rnic_query;
> > + ccwr_rnic_getconfig_t rnic_getconfig;
> > + ccwr_rnic_setconfig_t rnic_setconfig;
> > + ccwr_rnic_close_t rnic_close;
> > + ccwr_cq_create_t cq_create;
> > + ccwr_cq_modify_t cq_modify;
> > + ccwr_cq_destroy_t cq_destroy;
> > + ccwr_pd_alloc_t pd_alloc;
> > + ccwr_pd_dealloc_t pd_dealloc;
> > + ccwr_srq_create_t srq_create;
> > + ccwr_srq_destroy_t srq_destroy;
> > + ccwr_qp_create_t qp_create;
> > + ccwr_qp_query_t qp_query;
> > + ccwr_qp_modify_t qp_modify;
> > + ccwr_qp_destroy_t qp_destroy;
> > + ccwr_qp_connect_t qp_connect;
> > + ccwr_nsmr_stag_alloc_t nsmr_stag_alloc;
> > + ccwr_nsmr_register_t nsmr_register;
> > + ccwr_nsmr_pbl_t nsmr_pbl;
> > + ccwr_mr_query_t mr_query;
> > + ccwr_mw_query_t mw_query;
> > + ccwr_stag_dealloc_t stag_dealloc;
> > + ccwr_sqwr_t sqwr;
> > + ccwr_rqwr_t rqwr;
> > + ccwr_ce_t ce;
> > + ccwr_ae_t ae;
> > + ccwr_init_t init;
> > + ccwr_ep_listen_create_t ep_listen_create;
> > + ccwr_ep_listen_destroy_t ep_listen_destroy;
> > + ccwr_cr_accept_t cr_accept;
> > + ccwr_cr_reject_t cr_reject;
> > + ccwr_console_t console;
> > + ccwr_flash_init_t flash_init;
> > + ccwr_flash_t flash;
> > + ccwr_buf_alloc_t buf_alloc;
> > + ccwr_buf_free_t buf_free;
> > + ccwr_flash_write_t flash_write;
> > +} PACKED ccwr_t;
> > +
> > +
> > +/*
> > + * Accessors for the wr fields that are packed together tightly to
> > + * reduce the wr message size. The wr arguments are void* so that
> > + * either a ccwr_t*, a ccwr_hdr_t*, or a pointer to any of the types
> > + * in the ccwr_t union can be passed in.
> > + */
> > +static __inline__ u8
> > +cc_wr_get_id(void *wr)
> > +{
> > + return ((ccwr_hdr_t *)wr)->id;
> > +}
> > +static __inline__ void
> > +c2_wr_set_id(void *wr, u8 id)
> > +{
> > + ((ccwr_hdr_t *)wr)->id = id;
> > +}
> > +static __inline__ u8
> > +cc_wr_get_result(void *wr)
> > +{
> > + return ((ccwr_hdr_t *)wr)->result;
> > +}
> > +static __inline__ void
> > +cc_wr_set_result(void *wr, u8 result)
> > +{
> > + ((ccwr_hdr_t *)wr)->result = result;
> > +}
> > +static __inline__ u8
> > +cc_wr_get_flags(void *wr)
> > +{
> > + return ((ccwr_hdr_t *)wr)->flags;
> > +}
> > +static __inline__ void
> > +cc_wr_set_flags(void *wr, u8 flags)
> > +{
> > + ((ccwr_hdr_t *)wr)->flags = flags;
> > +}
> > +static __inline__ u8
> > +cc_wr_get_sge_count(void *wr)
> > +{
> > + return ((ccwr_hdr_t *)wr)->sge_count;
> > +}
> > +static __inline__ void
> > +cc_wr_set_sge_count(void *wr, u8 sge_count)
> > +{
> > + ((ccwr_hdr_t *)wr)->sge_count = sge_count;
> > +}
> > +static __inline__ u32
> > +cc_wr_get_wqe_count(void *wr)
> > +{
> > + return ((ccwr_hdr_t *)wr)->wqe_count;
> > +}
> > +static __inline__ void
> > +cc_wr_set_wqe_count(void *wr, u32 wqe_count)
> > +{
> > + ((ccwr_hdr_t *)wr)->wqe_count = wqe_count;
> > +}
> > +
> > +#undef PACKED
> > +
> > +#ifdef _MSC_VER
> > +#pragma pack(pop)
> > +#endif
> > +
> > +#endif /* _CC_WR_H_ */
> > Index: hw/amso1100/c2.c
> > ===================================================================
> > --- hw/amso1100/c2.c (revision 0)
> > +++ hw/amso1100/c2.c (revision 0)
> > @@ -0,0 +1,1221 @@
> > +/*
> > + * Copyright (c) 2005 Ammasso, Inc. All rights reserved.
> > + * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
> > + *
> > + * This software is available to you under a choice of one of two
> > + * licenses. You may choose to be licensed under the terms of the GNU
> > + * General Public License (GPL) Version 2, available from the file
> > + * COPYING in the main directory of this source tree, or the
> > + * OpenIB.org BSD license below:
> > + *
> > + * Redistribution and use in source and binary forms, with or
> > + * without modification, are permitted provided that the following
> > + * conditions are met:
> > + *
> > + * - Redistributions of source code must retain the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer.
> > + *
> > + * - Redistributions in binary form must reproduce the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer in the documentation and/or other materials
> > + * provided with the distribution.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> > + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> > + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
> > + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
> > + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
> > + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
> > + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> > + * SOFTWARE.
> > + */
> > +#include <linux/module.h>
> > +#include <linux/moduleparam.h>
> > +#include <linux/pci.h>
> > +#include <linux/netdevice.h>
> > +#include <linux/etherdevice.h>
> > +#include <linux/delay.h>
> > +#include <linux/ethtool.h>
> > +#include <linux/mii.h>
> > +#include <linux/if_vlan.h>
> > +#include <linux/crc32.h>
> > +#include <linux/in.h>
> > +#include <linux/ip.h>
> > +#include <linux/tcp.h>
> > +#include <linux/init.h>
> > +#include <linux/dma-mapping.h>
> > +
> > +#include <asm/io.h>
> > +#include <asm/irq.h>
> > +#include <asm/byteorder.h>
> > +
> > +#include <rdma/ib_smi.h>
> > +#include "c2.h"
> > +#include "c2_provider.h"
> > +
> > +MODULE_AUTHOR("Tom Tucker <tom at ammasso.com>");
> > +MODULE_DESCRIPTION("Ammasso AMSO1100 Low-level iWARP Driver");
> > +MODULE_LICENSE("Dual BSD/GPL");
> > +MODULE_VERSION(DRV_VERSION);
> > +
> > +static const u32 default_msg = NETIF_MSG_DRV | NETIF_MSG_PROBE | NETIF_MSG_LINK
> > + | NETIF_MSG_IFUP | NETIF_MSG_IFDOWN;
> > +
> > +static int debug = -1; /* defaults above */
> > +module_param(debug, int, 0);
> > +MODULE_PARM_DESC(debug, "Debug level (0=none,...,16=all)");
> > +
> > +char *rnic_ip_addr = "192.168.69.169";
> > +module_param(rnic_ip_addr, charp, S_IRUGO);
> > +MODULE_PARM_DESC(rnic_ip_addr, "IP Address for the AMSO1100 Adapter");
> > +
> > +static int c2_up(struct net_device *netdev);
> > +static int c2_down(struct net_device *netdev);
> > +static int c2_xmit_frame(struct sk_buff *skb, struct net_device *netdev);
> > +static void c2_tx_interrupt(struct net_device *netdev);
> > +static void c2_rx_interrupt(struct net_device *netdev);
> > +static irqreturn_t c2_interrupt(int irq, void *dev_id, struct pt_regs *regs);
> > +static void c2_tx_timeout(struct net_device *netdev);
> > +static int c2_change_mtu(struct net_device *netdev, int new_mtu);
> > +static void c2_reset(struct c2_port *c2_port);
> > +static struct net_device_stats* c2_get_stats(struct net_device *netdev);
> > +
> > +extern void c2_rnic_interrupt(struct c2_dev *c2dev);
> > +
> > +static struct pci_device_id c2_pci_table[] = {
> > + { 0x18b8, 0xb001, PCI_ANY_ID, PCI_ANY_ID },
> > + { 0 }
> > +};
> > +
> > +MODULE_DEVICE_TABLE(pci, c2_pci_table);
> > +
> > +static void c2_print_macaddr(struct net_device *netdev)
> > +{
> > + dprintk(KERN_INFO PFX "%s: MAC %02X:%02X:%02X:%02X:%02X:%02X, "
> > + "IRQ %u\n", netdev->name,
> > + netdev->dev_addr[0], netdev->dev_addr[1], netdev->dev_addr[2],
> > + netdev->dev_addr[3], netdev->dev_addr[4], netdev->dev_addr[5],
> > + netdev->irq);
> > +}
> > +
> > +static void c2_set_rxbufsize(struct c2_port *c2_port)
> > +{
> > + struct net_device *netdev = c2_port->netdev;
> > +
> > + assert(netdev != NULL);
> > +
> > + if (netdev->mtu > RX_BUF_SIZE)
> > + c2_port->rx_buf_size = netdev->mtu + ETH_HLEN + sizeof(struct c2_rxp_hdr) + NET_IP_ALIGN;
> > + else
> > + c2_port->rx_buf_size = sizeof(struct c2_rxp_hdr) + RX_BUF_SIZE;
> > +}
> > +
> > +/*
> > + * Allocate TX ring elements and chain them together.
> > + * One-to-one association of adapter descriptors with ring elements.
> > + */
> > +static int c2_tx_ring_alloc(struct c2_ring *tx_ring, void *vaddr, dma_addr_t base,
> > + void __iomem *mmio_txp_ring)
> > +{
> > + struct c2_tx_desc *tx_desc;
> > + struct c2_txp_desc *txp_desc;
> > + struct c2_element *elem;
> > + int i;
> > +
> > + tx_ring->start = kmalloc(sizeof(*elem)*tx_ring->count, GFP_KERNEL);
> > + if (!tx_ring->start)
> > + return -ENOMEM;
> > +
> > + for (i = 0, elem = tx_ring->start, tx_desc = vaddr, txp_desc = mmio_txp_ring;
> > + i < tx_ring->count; i++, elem++, tx_desc++, txp_desc++)
> > + {
> > + tx_desc->len = 0;
> > + tx_desc->status = 0;
> > +
> > + /* Set TXP_HTXD_UNINIT */
> > + c2_write64((void *)txp_desc + C2_TXP_ADDR, cpu_to_be64(0x1122334455667788ULL));
> > + c2_write16((void *)txp_desc + C2_TXP_LEN, cpu_to_be16(0));
> > + c2_write16((void *)txp_desc + C2_TXP_FLAGS, cpu_to_be16(TXP_HTXD_UNINIT));
> > +
> > + elem->skb = NULL;
> > + elem->ht_desc = tx_desc;
> > + elem->hw_desc = txp_desc;
> > +
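> > + /* Chain elements into a ring: the last entry points back to the head */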
> > + if (i == tx_ring->count - 1) {
> > + elem->next = tx_ring->start;
> > + tx_desc->next_offset = base;
> > + } else {
> > + elem->next = elem + 1;
> > + tx_desc->next_offset = base + (i + 1) * sizeof(*tx_desc);
> > + }
> > + }
> > +
> > + tx_ring->to_use = tx_ring->to_clean = tx_ring->start;
> > +
> > + return 0;
> > +}
> > +
> > +/*
> > + * Allocate RX ring elements and chain them together.
> > + * One-to-one association of adapter descriptors with ring elements.
> > + */
> > +static int c2_rx_ring_alloc(struct c2_ring *rx_ring, void *vaddr, dma_addr_t base,
> > + void __iomem *mmio_rxp_ring)
> > +{
> > + struct c2_rx_desc *rx_desc;
> > + struct c2_rxp_desc *rxp_desc;
> > + struct c2_element *elem;
> > + int i;
> > +
> > + rx_ring->start = kmalloc(sizeof(*elem) * rx_ring->count, GFP_KERNEL);
> > + if (!rx_ring->start)
> > + return -ENOMEM;
> > +
> > + for (i = 0, elem = rx_ring->start, rx_desc = vaddr, rxp_desc = mmio_rxp_ring;
> > + i < rx_ring->count; i++, elem++, rx_desc++, rxp_desc++)
> > + {
> > + rx_desc->len = 0;
> > + rx_desc->status = 0;
> > +
> > + /* Set RXP_HRXD_UNINIT */
> > + c2_write16((void *)rxp_desc + C2_RXP_STATUS, cpu_to_be16(RXP_HRXD_OK));
> > + c2_write16((void *)rxp_desc + C2_RXP_COUNT, cpu_to_be16(0));
> > + c2_write16((void *)rxp_desc + C2_RXP_LEN, cpu_to_be16(0));
> > + c2_write64((void *)rxp_desc + C2_RXP_ADDR, cpu_to_be64(0x99aabbccddeeffULL));
> > + c2_write16((void *)rxp_desc + C2_RXP_FLAGS, cpu_to_be16(RXP_HRXD_UNINIT));
> > +
> > + elem->skb = NULL;
> > + elem->ht_desc = rx_desc;
> > + elem->hw_desc = rxp_desc;
> > +
> > + if (i == rx_ring->count - 1) {
> > + elem->next = rx_ring->start;
> > + rx_desc->next_offset = base;
> > + } else {
> > + elem->next = elem + 1;
> > + rx_desc->next_offset = base + (i + 1) * sizeof(*rx_desc);
> > + }
> > + }
> > +
> > + rx_ring->to_use = rx_ring->to_clean = rx_ring->start;
> > +
> > + return 0;
> > +}
> > +
> > +/* Setup buffer for receiving */
> > +static inline int c2_rx_alloc(struct c2_port *c2_port, struct c2_element *elem)
> > +{
> > + struct c2_dev *c2dev = c2_port->c2dev;
> > + struct c2_rx_desc *rx_desc = elem->ht_desc;
> > + struct sk_buff *skb;
> > + dma_addr_t mapaddr;
> > + u32 maplen;
> > + struct c2_rxp_hdr *rxp_hdr;
> > +
> > + skb = dev_alloc_skb(c2_port->rx_buf_size);
> > + if (unlikely(!skb)) {
> > + dprintk(KERN_ERR PFX "%s: out of memory for receive\n",
> > + c2_port->netdev->name);
> > + return -ENOMEM;
> > + }
> > +
> > + /* Zero out the rxp hdr in the sk_buff */
> > + memset(skb->data, 0, sizeof(*rxp_hdr));
> > +
> > + skb->dev = c2_port->netdev;
> > +
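> > + /* Map the whole buffer, including the leading c2_rxp_hdr, for DMA from the adapter */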
> > + maplen = c2_port->rx_buf_size;
> > + mapaddr = pci_map_single(c2dev->pcidev, skb->data, maplen, PCI_DMA_FROMDEVICE);
> > +
> > + /* Set the sk_buff RXP_header to RXP_HRXD_READY */
> > + rxp_hdr = (struct c2_rxp_hdr *) skb->data;
> > + rxp_hdr->flags = RXP_HRXD_READY;
> > +
> > + /* c2_write16(elem->hw_desc + C2_RXP_COUNT, cpu_to_be16(0)); */
> > + c2_write16(elem->hw_desc + C2_RXP_STATUS, cpu_to_be16(0));
> > + c2_write16(elem->hw_desc + C2_RXP_LEN, cpu_to_be16((u16)maplen - sizeof(*rxp_hdr)));
> > + c2_write64(elem->hw_desc + C2_RXP_ADDR, cpu_to_be64(mapaddr));
> > + c2_write16(elem->hw_desc + C2_RXP_FLAGS, cpu_to_be16(RXP_HRXD_READY));
> > +
> > + elem->skb = skb;
> > + elem->mapaddr = mapaddr;
> > + elem->maplen = maplen;
> > + rx_desc->len = maplen;
> > +
> > + return 0;
> > +}
> > +
> > +/*
> > + * Allocate buffers for the Rx ring
> > + * For receive: rx_ring.to_clean is next received frame
> > + */
> > +static int c2_rx_fill(struct c2_port *c2_port)
> > +{
> > + struct c2_ring *rx_ring = &c2_port->rx_ring;
> > + struct c2_element *elem;
> > + int ret = 0;
> > +
> > + elem = rx_ring->start;
> > + do {
> > + if (c2_rx_alloc(c2_port, elem)) {
> > + ret = 1;
> > + break;
> > + }
> > + } while ((elem = elem->next) != rx_ring->start);
> > +
> > + rx_ring->to_clean = rx_ring->start;
> > + return ret;
> > +}
> > +
> > +/* Free all buffers in RX ring, assumes receiver stopped */
> > +static void c2_rx_clean(struct c2_port *c2_port)
> > +{
> > + struct c2_dev *c2dev = c2_port->c2dev;
> > + struct c2_ring *rx_ring = &c2_port->rx_ring;
> > + struct c2_element *elem;
> > + struct c2_rx_desc *rx_desc;
> > +
> > + elem = rx_ring->start;
> > + do {
> > + rx_desc = elem->ht_desc;
> > + rx_desc->len = 0;
> > +
> > + c2_write16(elem->hw_desc + C2_RXP_STATUS, cpu_to_be16(0));
> > + c2_write16(elem->hw_desc + C2_RXP_COUNT, cpu_to_be16(0));
> > + c2_write16(elem->hw_desc + C2_RXP_LEN, cpu_to_be16(0));
> > + c2_write64(elem->hw_desc + C2_RXP_ADDR, cpu_to_be64(0x99aabbccddeeffULL));
> > + c2_write16(elem->hw_desc + C2_RXP_FLAGS, cpu_to_be16(RXP_HRXD_UNINIT));
> > +
> > + if (elem->skb) {
> > + pci_unmap_single(c2dev->pcidev, elem->mapaddr, elem->maplen,
> > + PCI_DMA_FROMDEVICE);
> > + dev_kfree_skb(elem->skb);
> > + elem->skb = NULL;
> > + }
> > + } while ((elem = elem->next) != rx_ring->start);
> > +}
> > +
> > +static inline int c2_tx_free(struct c2_dev *c2dev, struct c2_element *elem)
> > +{
> > + struct c2_tx_desc *tx_desc = elem->ht_desc;
> > +
> > + tx_desc->len = 0;
> > +
> > + pci_unmap_single(c2dev->pcidev, elem->mapaddr, elem->maplen, PCI_DMA_TODEVICE);
> > +
> > + if (elem->skb) {
> > + dev_kfree_skb_any(elem->skb);
> > + elem->skb = NULL;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +/* Free all buffers in TX ring, assumes transmitter stopped */
> > +static void c2_tx_clean(struct c2_port *c2_port)
> > +{
> > + struct c2_ring *tx_ring = &c2_port->tx_ring;
> > + struct c2_element *elem;
> > + struct c2_txp_desc txp_htxd;
> > + int retry;
> > + unsigned long flags;
> > +
> > + spin_lock_irqsave(&c2_port->tx_lock, flags);
> > +
> > + elem = tx_ring->start;
> > +
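> > + /*
> > + * Reclaim every descriptor in the ring. A descriptor still marked
> > + * READY is forced to DONE and counted as dropped, and the scan is
> > + * retried so it gets freed on the next pass.
> > + */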
> > + do {
> > + retry = 0;
> > + do {
> > + txp_htxd.flags = c2_read16(elem->hw_desc + C2_TXP_FLAGS);
> > +
> > + if (txp_htxd.flags == TXP_HTXD_READY) {
> > + retry = 1;
> > + c2_write16(elem->hw_desc + C2_TXP_LEN, cpu_to_be16(0));
> > + c2_write64(elem->hw_desc + C2_TXP_ADDR, cpu_to_be64(0));
> > + c2_write16(elem->hw_desc + C2_TXP_FLAGS, cpu_to_be16(TXP_HTXD_DONE));
> > + c2_port->netstats.tx_dropped++;
> > + break;
> > + } else {
> > + c2_write16(elem->hw_desc + C2_TXP_LEN, cpu_to_be16(0));
> > + c2_write64(elem->hw_desc + C2_TXP_ADDR, cpu_to_be64(0x1122334455667788ULL));
> > + c2_write16(elem->hw_desc + C2_TXP_FLAGS, cpu_to_be16(TXP_HTXD_UNINIT));
> > + }
> > +
> > + c2_tx_free(c2_port->c2dev, elem);
> > +
> > + } while ((elem = elem->next) != tx_ring->start);
> > + } while (retry);
> > +
> > + c2_port->tx_avail = c2_port->tx_ring.count - 1;
> > + c2_port->c2dev->cur_tx = tx_ring->to_use - tx_ring->start;
> > +
> > + if (c2_port->tx_avail > MAX_SKB_FRAGS + 1)
> > + netif_wake_queue(c2_port->netdev);
> > +
> > + spin_unlock_irqrestore(&c2_port->tx_lock, flags);
> > +}
> > +
> > +/*
> > + * Process transmit descriptors marked 'DONE' by the firmware,
> > + * freeing up their unneeded sk_buffs.
> > + */
> > +static void c2_tx_interrupt(struct net_device *netdev)
> > +{
> > + struct c2_port *c2_port = netdev_priv(netdev);
> > + struct c2_dev *c2dev = c2_port->c2dev;
> > + struct c2_ring *tx_ring = &c2_port->tx_ring;
> > + struct c2_element *elem;
> > + struct c2_txp_desc txp_htxd;
> > +
> > + spin_lock(&c2_port->tx_lock);
> > +
> > + for(elem = tx_ring->to_clean; elem != tx_ring->to_use; elem = elem->next)
> > + {
> > + txp_htxd.flags = be16_to_cpu(c2_read16(elem->hw_desc + C2_TXP_FLAGS));
> > +
> > + if (txp_htxd.flags != TXP_HTXD_DONE)
> > + break;
> > +
> > + if (netif_msg_tx_done(c2_port)) {
> > + /* PCI reads are expensive in fast path */
> > + //txp_htxd.addr = be64_to_cpu(c2_read64(elem->hw_desc + C2_TXP_ADDR));
> > + txp_htxd.len = be16_to_cpu(c2_read16(elem->hw_desc + C2_TXP_LEN));
> > + dprintk(KERN_INFO PFX
> > + "%s: tx done slot %3Zu status 0x%x len %5u bytes\n",
> > + netdev->name, elem - tx_ring->start,
> > + txp_htxd.flags, txp_htxd.len);
> > + }
> > +
> > + c2_tx_free(c2dev, elem);
> > + ++(c2_port->tx_avail);
> > + }
> > +
> > + tx_ring->to_clean = elem;
> > +
> > + if (netif_queue_stopped(netdev) && c2_port->tx_avail > MAX_SKB_FRAGS + 1)
> > + netif_wake_queue(netdev);
> > +
> > + spin_unlock(&c2_port->tx_lock);
> > +}
> > +
> > +static void c2_rx_error(struct c2_port *c2_port, struct c2_element *elem)
> > +{
> > + struct c2_rx_desc *rx_desc = elem->ht_desc;
> > + struct c2_rxp_hdr *rxp_hdr = (struct c2_rxp_hdr *)elem->skb->data;
> > +
> > + if (rxp_hdr->status != RXP_HRXD_OK ||
> > + rxp_hdr->len > (rx_desc->len - sizeof(*rxp_hdr))) {
> > + dprintk(KERN_ERR PFX "BAD RXP_HRXD\n");
> > + dprintk(KERN_ERR PFX " rx_desc : %p\n", rx_desc);
> > + dprintk(KERN_ERR PFX " index : %Zu\n", elem -
> c2_port->rx_ring.start);
> > + dprintk(KERN_ERR PFX " len : %u\n", rx_desc->len);
> > + dprintk(KERN_ERR PFX " rxp_hdr : %p [PA %p]\n", rxp_hdr,
> > + (void *)__pa((unsigned long)rxp_hdr));
> > + dprintk(KERN_ERR PFX " flags : 0x%x\n", rxp_hdr->flags);
> > + dprintk(KERN_ERR PFX " status: 0x%x\n", rxp_hdr->status);
> > + dprintk(KERN_ERR PFX " len : %u\n", rxp_hdr->len);
> > + dprintk(KERN_ERR PFX " rsvd : 0x%x\n", rxp_hdr->rsvd);
> > + }
> > +
> > + /* Setup the skb for reuse since we're dropping this pkt */
> > + elem->skb->tail = elem->skb->data = elem->skb->head;
> > +
> > + /* Zero out the rxp hdr in the sk_buff */
> > + memset(elem->skb->data, 0, sizeof(*rxp_hdr));
> > +
> > + /* Write the descriptor to the adapter's rx ring */
> > + c2_write16(elem->hw_desc + C2_RXP_STATUS, cpu_to_be16(0));
> > + c2_write16(elem->hw_desc + C2_RXP_COUNT, cpu_to_be16(0));
> > + c2_write16(elem->hw_desc + C2_RXP_LEN, cpu_to_be16((u16)elem->maplen - sizeof(*rxp_hdr)));
> > + c2_write64(elem->hw_desc + C2_RXP_ADDR, cpu_to_be64(elem->mapaddr));
> > + c2_write16(elem->hw_desc + C2_RXP_FLAGS, cpu_to_be16(RXP_HRXD_READY));
> > +
> > + dprintk(KERN_INFO PFX "packet dropped\n");
> > + c2_port->netstats.rx_dropped++;
> > +}
> > +
> > +static void c2_rx_interrupt(struct net_device *netdev)
> > +{
> > + struct c2_port *c2_port = netdev_priv(netdev);
> > + struct c2_dev *c2dev = c2_port->c2dev;
> > + struct c2_ring *rx_ring = &c2_port->rx_ring;
> > + struct c2_element *elem;
> > + struct c2_rx_desc *rx_desc;
> > + struct c2_rxp_hdr *rxp_hdr;
> > + struct sk_buff *skb;
> > + dma_addr_t mapaddr;
> > + u32 maplen, buflen;
> > + unsigned long flags;
> > +
> > + spin_lock_irqsave(&c2dev->lock, flags);
> > +
> > + /* Begin where we left off */
> > + rx_ring->to_clean = rx_ring->start + c2dev->cur_rx;
> > +
> > + for(elem = rx_ring->to_clean; elem->next != rx_ring->to_clean; elem = elem->next)
> > + {
> > + rx_desc = elem->ht_desc;
> > + mapaddr = elem->mapaddr;
> > + maplen = elem->maplen;
> > + skb = elem->skb;
> > + rxp_hdr = (struct c2_rxp_hdr *)skb->data;
> > +
> > + if (rxp_hdr->flags != RXP_HRXD_DONE)
> > + break;
> > +
> > + if (netif_msg_rx_status(c2_port))
> > + dprintk(KERN_INFO PFX "%s: rx done slot %3Zu status 0x%x len
> %5u bytes\n",
> > + netdev->name, elem - rx_ring->start,
> > + rxp_hdr->flags, rxp_hdr->len);
> > +
> > + buflen = rxp_hdr->len;
> > +
> > + /* Sanity check the RXP header */
> > + if (rxp_hdr->status != RXP_HRXD_OK ||
> > + buflen > (rx_desc->len - sizeof(*rxp_hdr))) {
> > + c2_rx_error(c2_port, elem);
> > + continue;
> > + }
> > +
> > + /* Allocate and map a new skb for replenishing the host RX desc */
> > + if (c2_rx_alloc(c2_port, elem)) {
> > + c2_rx_error(c2_port, elem);
> > + continue;
> > + }
> > +
> > + /* Unmap the old skb */
> > + pci_unmap_single(c2dev->pcidev, mapaddr, maplen, PCI_DMA_FROMDEVICE);
> > +
> > + /*
> > + * Skip past the leading 8 bytes comprising the
> > + * "struct c2_rxp_hdr", prepended by the adapter
> > + * to the usual Ethernet header ("struct ethhdr"),
> > + * to the start of the raw Ethernet packet.
> > + *
> > + * Fix up the various fields in the sk_buff before
> > + * passing it up to netif_rx(). The transfer size
> > + * (in bytes) specified by the adapter len field of
> > + * the "struct rxp_hdr_t" does NOT include the
> > + * "sizeof(struct c2_rxp_hdr)".
> > + */
> > + skb->data += sizeof(*rxp_hdr);
> > + skb->tail = skb->data + buflen;
> > + skb->len = buflen;
> > + skb->dev = netdev;
> > + skb->protocol = eth_type_trans(skb, netdev);
> > +
> > + netif_rx(skb);
> > +
> > + netdev->last_rx = jiffies;
> > + c2_port->netstats.rx_packets++;
> > + c2_port->netstats.rx_bytes += buflen;
> > + }
> > +
> > + /* Save where we left off */
> > + rx_ring->to_clean = elem;
> > + c2dev->cur_rx = elem - rx_ring->start;
> > + C2_SET_CUR_RX(c2dev, c2dev->cur_rx);
> > +
> > + spin_unlock_irqrestore(&c2dev->lock, flags);
> > +}
> > +
> > +/*
> > + * Handle netisr0 TX & RX interrupts.
> > + */
> > +static irqreturn_t c2_interrupt(int irq, void *dev_id, struct pt_regs *regs)
> > +{
> > + unsigned int netisr0, dmaisr;
> > + int handled = 0;
> > + struct c2_dev *c2dev = (struct c2_dev *)dev_id;
> > +
> > + assert(c2dev != NULL);
> > +
> > + /* Process CCILNET interrupts */
> > + netisr0 = c2_read32(c2dev->regs + C2_NISR0);
> > + if (netisr0) {
> > +
> > + /*
> > + * There is an issue with the firmware that always
> > + * provides the status of RX for both TX & RX
> > + * interrupts. So process both queues here.
> > + */
> > + c2_rx_interrupt(c2dev->netdev);
> > + c2_tx_interrupt(c2dev->netdev);
> > +
> > + /* Clear the interrupt */
> > + c2_write32(c2dev->regs + C2_NISR0, netisr0);
> > + handled++;
> > + }
> > +
> > + /* Process RNIC interrupts */
> > + dmaisr = c2_read32(c2dev->regs + C2_DISR);
> > + if (dmaisr) {
> > + c2_write32(c2dev->regs + C2_DISR, dmaisr);
> > + c2_rnic_interrupt(c2dev);
> > + handled++;
> > + }
> > +
> > + if (handled) {
> > + return IRQ_HANDLED;
> > + } else {
> > + return IRQ_NONE;
> > + }
> > +}
> > +
> > +static int c2_up(struct net_device *netdev)
> > +{
> > + struct c2_port *c2_port = netdev_priv(netdev);
> > + struct c2_dev *c2dev = c2_port->c2dev;
> > + struct c2_element *elem;
> > + struct c2_rxp_hdr *rxp_hdr;
> > + size_t rx_size, tx_size;
> > + int ret, i;
> > + unsigned int netimr0;
> > +
> > + assert(c2dev != NULL);
> > +
> > + if (netif_msg_ifup(c2_port))
> > + dprintk(KERN_INFO PFX "%s: enabling interface\n", netdev->name);
> > +
> > + /* Set the Rx buffer size based on MTU */
> > + c2_set_rxbufsize(c2_port);
> > +
> > + /* Allocate DMA'able memory for Tx/Rx host descriptor rings */
> > + rx_size = c2_port->rx_ring.count * sizeof(struct c2_rx_desc);
> > + tx_size = c2_port->tx_ring.count * sizeof(struct c2_tx_desc);
> > +
> > + c2_port->mem_size = tx_size + rx_size;
> > + c2_port->mem = pci_alloc_consistent(c2dev->pcidev, c2_port->mem_size,
> > + &c2_port->dma);
> > + if (c2_port->mem == NULL) {
> > + dprintk(KERN_ERR PFX "Unable to allocate memory for host
> descriptor rings\n");
> > + return -ENOMEM;
> > + }
> > +
> > + memset(c2_port->mem, 0, c2_port->mem_size);
> > +
> > + /* Create the Rx host descriptor ring */
> > + if ((ret = c2_rx_ring_alloc(&c2_port->rx_ring, c2_port->mem, c2_port->dma,
> > + c2dev->mmio_rxp_ring))) {
> > + dprintk(KERN_ERR PFX "Unable to create RX ring\n");
> > + goto bail0;
> > + }
> > +
> > + /* Allocate Rx buffers for the host descriptor ring */
> > + if (c2_rx_fill(c2_port)) {
> > + dprintk(KERN_ERR PFX "Unable to fill RX ring\n");
> > + goto bail1;
> > + }
> > +
> > + /* Create the Tx host descriptor ring */
> > + if ((ret = c2_tx_ring_alloc(&c2_port->tx_ring, c2_port->mem + rx_size,
> > + c2_port->dma + rx_size, c2dev->mmio_txp_ring))) {
> > + dprintk(KERN_ERR PFX "Unable to create TX ring\n");
> > + goto bail1;
> > + }
> > +
> > + /* Set the TX pointer to where we left off */
> > + c2_port->tx_avail = c2_port->tx_ring.count - 1;
> > + c2_port->tx_ring.to_use = c2_port->tx_ring.to_clean = c2_port->tx_ring.start + c2dev->cur_tx;
> > +
> > + /* missing: Initialize MAC */
> > +
> > + BUG_ON(c2_port->tx_ring.to_use != c2_port->tx_ring.to_clean);
> > +
> > + /* Reset the adapter, ensures the driver is in sync with the RXP */
> > + c2_reset(c2_port);
> > +
> > + /* Reset the READY bit in the sk_buff RXP headers & adapter HRXDQ */
> > + for(i = 0, elem = c2_port->rx_ring.start; i < c2_port->rx_ring.count;
> > + i++, elem++)
> > + {
> > + rxp_hdr = (struct c2_rxp_hdr *)elem->skb->data;
> > + rxp_hdr->flags = 0;
> > + c2_write16(elem->hw_desc + C2_RXP_FLAGS, cpu_to_be16(RXP_HRXD_READY));
> > + }
> > +
> > + /* Enable network packets */
> > + netif_start_queue(netdev);
> > +
> > + /* Enable IRQ */
> > + c2_write32(c2dev->regs + C2_IDIS, 0);
> > + netimr0 = c2_read32(c2dev->regs + C2_NIMR0);
> > + netimr0 &= ~(C2_PCI_HTX_INT | C2_PCI_HRX_INT);
> > + c2_write32(c2dev->regs + C2_NIMR0, netimr0);
> > +
> > + return 0;
> > +
> > + bail1:
> > + c2_rx_clean(c2_port);
> > + kfree(c2_port->rx_ring.start);
> > +
> > + bail0:
> > + pci_free_consistent(c2dev->pcidev, c2_port->mem_size, c2_port->mem, c2_port->dma);
> > +
> > + return ret;
> > +}
> > +
> > +static int c2_down(struct net_device *netdev)
> > +{
> > + struct c2_port *c2_port = netdev_priv(netdev);
> > + struct c2_dev *c2dev = c2_port->c2dev;
> > +
> > + if (netif_msg_ifdown(c2_port))
> > + dprintk(KERN_INFO PFX "%s: disabling interface\n", netdev->name);
> > +
> > + /* Wait for all the queued packets to get sent */
> > + c2_tx_interrupt(netdev);
> > +
> > + /* Disable network packets */
> > + netif_stop_queue(netdev);
> > +
> > + /* Disable IRQs by clearing the interrupt mask */
> > + c2_write32(c2dev->regs + C2_IDIS, 1);
> > + c2_write32(c2dev->regs + C2_NIMR0, 0);
> > +
> > + /* missing: Stop transmitter */
> > +
> > + /* missing: Stop receiver */
> > +
> > + /* Reset the adapter, ensures the driver is in sync with the RXP */
> > + c2_reset(c2_port);
> > +
> > + /* missing: Turn off LEDs here */
> > +
> > + /* Free all buffers in the host descriptor rings */
> > + c2_tx_clean(c2_port);
> > + c2_rx_clean(c2_port);
> > +
> > + /* Free the host descriptor rings */
> > + kfree(c2_port->rx_ring.start);
> > + kfree(c2_port->tx_ring.start);
> > + pci_free_consistent(c2dev->pcidev, c2_port->mem_size, c2_port->mem, c2_port->dma);
> > +
> > + return 0;
> > +}
> > +
> > +static void c2_reset(struct c2_port *c2_port)
> > +{
> > + struct c2_dev *c2dev = c2_port->c2dev;
> > + unsigned int cur_rx = c2dev->cur_rx;
> > +
> > + /* Tell the hardware to quiesce */
> > + C2_SET_CUR_RX(c2dev, cur_rx|C2_PCI_HRX_QUI);
> > +
> > + /*
> > + * The hardware will reset the C2_PCI_HRX_QUI bit once
> > + * the RXP is quiesced. Wait 2 seconds for this.
> > + */
> > + ssleep(2);
> > +
> > + cur_rx = C2_GET_CUR_RX(c2dev);
> > +
> > + if (cur_rx & C2_PCI_HRX_QUI)
> > + dprintk(KERN_ERR PFX "c2_reset: failed to quiesce the
> hardware!\n");
> > +
> > + cur_rx &= ~C2_PCI_HRX_QUI;
> > +
> > + c2dev->cur_rx = cur_rx;
> > +
> > + dprintk("Current RX: %u\n", c2dev->cur_rx);
> > +}
> > +
> > +static int c2_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
> > +{
> > + struct c2_port *c2_port = netdev_priv(netdev);
> > + struct c2_dev *c2dev = c2_port->c2dev;
> > + struct c2_ring *tx_ring = &c2_port->tx_ring;
> > + struct c2_element *elem;
> > + dma_addr_t mapaddr;
> > + u32 maplen;
> > + unsigned long flags;
> > + unsigned int i;
> > +
> > + spin_lock_irqsave(&c2_port->tx_lock, flags);
> > +
> > + if (unlikely(c2_port->tx_avail < (skb_shinfo(skb)->nr_frags + 1))) {
> > + netif_stop_queue(netdev);
> > + spin_unlock_irqrestore(&c2_port->tx_lock, flags);
> > +
> > + dprintk(KERN_WARNING PFX "%s: Tx ring full when queue awake!\n",
> > + netdev->name);
> > + return NETDEV_TX_BUSY;
> > + }
> > +
> > + maplen = skb_headlen(skb);
> > + mapaddr = pci_map_single(c2dev->pcidev, skb->data, maplen, PCI_DMA_TODEVICE);
> > +
> > + elem = tx_ring->to_use;
> > + elem->skb = skb;
> > + elem->mapaddr = mapaddr;
> > + elem->maplen = maplen;
> > +
> > + /* Tell HW to xmit */
> > + c2_write64(elem->hw_desc + C2_TXP_ADDR, cpu_to_be64(mapaddr));
> > + c2_write16(elem->hw_desc + C2_TXP_LEN, cpu_to_be16(maplen));
> > + c2_write16(elem->hw_desc + C2_TXP_FLAGS, cpu_to_be16(TXP_HTXD_READY));
> > +
> > + c2_port->netstats.tx_packets++;
> > + c2_port->netstats.tx_bytes += maplen;
> > +
> > + /* Loop thru additional data fragments and queue them */
> > + if (skb_shinfo(skb)->nr_frags) {
> > + for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
> > + {
> > + skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
> > + maplen = frag->size;
> > + mapaddr = pci_map_page(c2dev->pcidev, frag->page, frag->page_offset,
> > + maplen, PCI_DMA_TODEVICE);
> > +
> > + elem = elem->next;
> > + elem->skb = NULL;
> > + elem->mapaddr = mapaddr;
> > + elem->maplen = maplen;
> > +
> > + /* Tell HW to xmit */
> > + c2_write64(elem->hw_desc + C2_TXP_ADDR, cpu_to_be64(mapaddr));
> > + c2_write16(elem->hw_desc + C2_TXP_LEN, cpu_to_be16(maplen));
> > + c2_write16(elem->hw_desc + C2_TXP_FLAGS, cpu_to_be16(TXP_HTXD_READY));
> > +
> > + c2_port->netstats.tx_packets++;
> > + c2_port->netstats.tx_bytes += maplen;
> > + }
> > + }
> > +
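> > + /* Advance the ring and charge the head fragment plus any page fragments against tx_avail */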
> > + tx_ring->to_use = elem->next;
> > + c2_port->tx_avail -= (skb_shinfo(skb)->nr_frags + 1);
> > +
> > + if (netif_msg_tx_queued(c2_port))
> > + dprintk(KERN_DEBUG PFX "%s: tx queued, slot %3Zu, len %5u bytes,
> avail = %u\n",
> > + netdev->name, elem - tx_ring->start, maplen,
> c2_port->tx_avail);
> > +
> > + if (c2_port->tx_avail <= MAX_SKB_FRAGS + 1) {
> > + netif_stop_queue(netdev);
> > + if (netif_msg_tx_queued(c2_port))
> > + dprintk(KERN_INFO PFX "%s: transmit queue full\n",
> netdev->name);
> > + }
> > +
> > + spin_unlock_irqrestore(&c2_port->tx_lock, flags);
> > +
> > + netdev->trans_start = jiffies;
> > +
> > + return NETDEV_TX_OK;
> > +}
> > +
> > +static struct net_device_stats *c2_get_stats(struct net_device *netdev)
> > +{
> > + struct c2_port *c2_port = netdev_priv(netdev);
> > +
> > + return &c2_port->netstats;
> > +}
> > +
> > +static int c2_set_mac_address(struct net_device *netdev, void *p)
> > +{
> > + return -1;
> > +}
> > +
> > +static void c2_tx_timeout(struct net_device *netdev)
> > +{
> > + struct c2_port *c2_port = netdev_priv(netdev);
> > +
> > + if (netif_msg_timer(c2_port))
> > + dprintk(KERN_DEBUG PFX "%s: tx timeout\n", netdev->name);
> > +
> > + c2_tx_clean(c2_port);
> > +}
> > +
> > +static int c2_change_mtu(struct net_device *netdev, int new_mtu)
> > +{
> > + int ret = 0;
> > +
> > + if (new_mtu < ETH_ZLEN || new_mtu > ETH_JUMBO_MTU)
> > + return -EINVAL;
> > +
> > + netdev->mtu = new_mtu;
> > +
> > + if (netif_running(netdev)) {
> > + c2_down(netdev);
> > +
> > + c2_up(netdev);
> > + }
> > +
> > + return ret;
> > +}
> > +
> > +/* Initialize network device */
> > +static struct net_device *c2_devinit(struct c2_dev *c2dev, void __iomem *mmio_addr)
> > +{
> > + struct c2_port *c2_port = NULL;
> > + struct net_device *netdev = alloc_etherdev(sizeof(*c2_port));
> > +
> > + if (!netdev) {
> > + dprintk(KERN_ERR PFX "c2_port etherdev alloc failed");
> > + return NULL;
> > + }
> > +
> > + SET_MODULE_OWNER(netdev);
> > + SET_NETDEV_DEV(netdev, &c2dev->pcidev->dev);
> > +
> > + netdev->open = c2_up;
> > + netdev->stop = c2_down;
> > + netdev->hard_start_xmit = c2_xmit_frame;
> > + netdev->get_stats = c2_get_stats;
> > + netdev->tx_timeout = c2_tx_timeout;
> > + netdev->set_mac_address = c2_set_mac_address;
> > + netdev->change_mtu = c2_change_mtu;
> > + netdev->watchdog_timeo = C2_TX_TIMEOUT;
> > + netdev->irq = c2dev->pcidev->irq;
> > +
> > + c2_port = netdev_priv(netdev);
> > + c2_port->netdev = netdev;
> > + c2_port->c2dev = c2dev;
> > + c2_port->msg_enable = netif_msg_init(debug, default_msg);
> > + c2_port->tx_ring.count = C2_NUM_TX_DESC;
> > + c2_port->rx_ring.count = C2_NUM_RX_DESC;
> > +
> > + spin_lock_init(&c2_port->tx_lock);
> > +
> > + /* Copy our 48-bit ethernet hardware address */
> > +#if 1
> > + memcpy_fromio(netdev->dev_addr, mmio_addr + C2_REGS_ENADDR, 6);
> > +#else
> > + memcpy_fromio(netdev->dev_addr, mmio_addr + C2_REGS_RDMA_ENADDR, 6);
> > +#endif
> > + /* Validate the MAC address */
> > + if(!is_valid_ether_addr(netdev->dev_addr)) {
> > + dprintk(KERN_ERR PFX "Invalid MAC Address\n");
> > + c2_print_macaddr(netdev);
> > + free_netdev(netdev);
> > + return NULL;
> > + }
> > +
> > + c2dev->netdev = netdev;
> > +
> > + return netdev;
> > +}
> > +
> > +static int __devinit c2_probe(struct pci_dev *pcidev, const struct pci_device_id *ent)
> > +{
> > + int ret = 0, i;
> > + unsigned long reg0_start, reg0_flags, reg0_len;
> > + unsigned long reg2_start, reg2_flags, reg2_len;
> > + unsigned long reg4_start, reg4_flags, reg4_len;
> > + unsigned kva_map_size;
> > + struct net_device *netdev = NULL;
> > + struct c2_dev *c2dev = NULL;
> > + void __iomem *mmio_regs = NULL;
> > +
> > + assert(pcidev != NULL);
> > + assert(ent != NULL);
> > +
> > + dprintk(KERN_INFO PFX "AMSO1100 Gigabit Ethernet driver v%s
> loaded\n",
> > + DRV_VERSION);
> > +
> > + /* Enable PCI device */
> > + ret = pci_enable_device(pcidev);
> > + if (ret) {
> > + dprintk(KERN_ERR PFX "%s: Unable to enable PCI device\n",
> pci_name(pcidev));
> > + goto bail0;
> > + }
> > +
> > + reg0_start = pci_resource_start(pcidev, BAR_0);
> > + reg0_len = pci_resource_len(pcidev, BAR_0);
> > + reg0_flags = pci_resource_flags(pcidev, BAR_0);
> > +
> > + reg2_start = pci_resource_start(pcidev, BAR_2);
> > + reg2_len = pci_resource_len(pcidev, BAR_2);
> > + reg2_flags = pci_resource_flags(pcidev, BAR_2);
> > +
> > + reg4_start = pci_resource_start(pcidev, BAR_4);
> > + reg4_len = pci_resource_len(pcidev, BAR_4);
> > + reg4_flags = pci_resource_flags(pcidev, BAR_4);
> > +
> > + dprintk(KERN_INFO PFX "BAR0 size = 0x%lX bytes\n", reg0_len);
> > + dprintk(KERN_INFO PFX "BAR2 size = 0x%lX bytes\n", reg2_len);
> > + dprintk(KERN_INFO PFX "BAR4 size = 0x%lX bytes\n", reg4_len);
> > +
> > + /* Make sure PCI base addr are MMIO */
> > + if (!(reg0_flags & IORESOURCE_MEM) ||
> > + !(reg2_flags & IORESOURCE_MEM) ||
> > + !(reg4_flags & IORESOURCE_MEM)) {
> > + dprintk (KERN_ERR PFX "PCI regions not an MMIO resource\n");
> > + ret = -ENODEV;
> > + goto bail1;
> > + }
> > +
> > + /* Check for weird/broken PCI region reporting */
> > + if ((reg0_len < C2_REG0_SIZE) ||
> > + (reg2_len < C2_REG2_SIZE) ||
> > + (reg4_len < C2_REG4_SIZE)) {
> > + dprintk (KERN_ERR PFX "Invalid PCI region sizes\n");
> > + ret = -ENODEV;
> > + goto bail1;
> > + }
> > +
> > + /* Reserve PCI I/O and memory resources */
> > + ret = pci_request_regions(pcidev, DRV_NAME);
> > + if (ret) {
> > + dprintk(KERN_ERR PFX "%s: Unable to request regions\n",
> pci_name(pcidev));
> > + goto bail1;
> > + }
> > +
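> > + /* Use 64-bit DMA addressing when dma_addr_t is wide enough, else fall back to a 32-bit mask */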
> > + if ((sizeof(dma_addr_t) > 4)) {
> > + ret = pci_set_dma_mask(pcidev, DMA_64BIT_MASK);
> > + if (ret < 0) {
> > + dprintk(KERN_ERR PFX "64b DMA configuration failed\n");
> > + goto bail2;
> > + }
> > + } else {
> > + ret = pci_set_dma_mask(pcidev, DMA_32BIT_MASK);
> > + if (ret < 0) {
> > + dprintk(KERN_ERR PFX "32b DMA configuration failed\n");
> > + goto bail2;
> > + }
> > + }
> > +
> > + /* Enables bus-mastering on the device */
> > + pci_set_master(pcidev);
> > +
> > + /* Remap the adapter PCI registers in BAR4 */
> > + mmio_regs = ioremap_nocache(reg4_start + C2_PCI_REGS_OFFSET,
> > + sizeof(struct c2_adapter_pci_regs));
> > + if (mmio_regs == 0UL) {
> > + dprintk(KERN_ERR PFX "Unable to remap adapter PCI registers in
> BAR4\n");
> > + ret = -EIO;
> > + goto bail2;
> > + }
> > +
> > + /* Validate PCI regs magic */
> > + for (i = 0; i < sizeof(c2_magic); i++)
> > + {
> > + if (c2_magic[i] != c2_read8(mmio_regs + C2_REGS_MAGIC + i)) {
> > + dprintk(KERN_ERR PFX
> > + "Invalid PCI regs magic [%d/%Zd: got 0x%x, exp 0x%x]\n",
> > + i + 1, sizeof(c2_magic),
> > + c2_read8(mmio_regs + C2_REGS_MAGIC + i), c2_magic[i]);
> > + dprintk(KERN_ERR PFX "Adapter not claimed\n");
> > + iounmap(mmio_regs);
> > + ret = -EIO;
> > + goto bail2;
> > + }
> > + }
> > +
> > + /* Validate the adapter version */
> > + if (be32_to_cpu(c2_read32(mmio_regs + C2_REGS_VERS)) != C2_VERSION) {
> > + dprintk(KERN_ERR PFX "Version mismatch [fw=%u, c2=%u], Adapter
> not claimed\n",
> > + be32_to_cpu(c2_read32(mmio_regs + C2_REGS_VERS)), C2_VERSION);
> > + ret = -EINVAL;
> > + iounmap(mmio_regs);
> > + goto bail2;
> > + }
> > +
> > + /* Validate the adapter IVN */
> > + if (be32_to_cpu(c2_read32(mmio_regs + C2_REGS_IVN)) != C2_IVN) {
> > + dprintk(KERN_ERR PFX "IVN mismatch [fw=0x%x, c2=0x%x], Adapter
> not claimed\n",
> > + be32_to_cpu(c2_read32(mmio_regs + C2_REGS_IVN)), C2_IVN);
> > + ret = -EINVAL;
> > + iounmap(mmio_regs);
> > + goto bail2;
> > + }
> > +
> > + /* Allocate hardware structure */
> > + c2dev = (struct c2_dev*)ib_alloc_device(sizeof *c2dev);
> > + if (!c2dev) {
> > + dprintk(KERN_ERR PFX "%s: Unable to alloc hardware struct\n",
> > + pci_name(pcidev));
> > + ret = -ENOMEM;
> > + iounmap(mmio_regs);
> > + goto bail2;
> > + }
> > +
> > + memset(c2dev, 0, sizeof(*c2dev));
> > + spin_lock_init(&c2dev->lock);
> > + c2dev->pcidev = pcidev;
> > + c2dev->cur_tx = 0;
> > +
> > + /* Get the last RX index */
> > + c2dev->cur_rx = (be32_to_cpu(c2_read32(mmio_regs + C2_REGS_HRX_CUR)) - 0xffffc000) / sizeof(struct c2_rxp_desc);
> > +
> > + /* Request an interrupt line for the driver */
> > + ret = request_irq(pcidev->irq, c2_interrupt, SA_SHIRQ, DRV_NAME, c2dev);
> > + if (ret) {
> > + dprintk(KERN_ERR PFX "%s: requested IRQ %u is busy\n",
> > + pci_name(pcidev), pcidev->irq);
> > + iounmap(mmio_regs);
> > + goto bail3;
> > + }
> > +
> > + /* Set driver specific data */
> > + pci_set_drvdata(pcidev, c2dev);
> > +
> > + /* Initialize network device */
> > + if ((netdev = c2_devinit(c2dev, mmio_regs)) == NULL) {
> > + ret = -ENODEV;
> > + iounmap(mmio_regs);
> > + goto bail4;
> > + }
> > +
> > + /* Save off the actual size prior to unmapping mmio_regs */
> > + kva_map_size = be32_to_cpu(c2_read32(mmio_regs + C2_REGS_PCI_WINSIZE));
> > +
> > + /* Unmap the adapter PCI registers in BAR4 */
> > + iounmap(mmio_regs);
> > +
> > + /* Register network device */
> > + ret = register_netdev(netdev);
> > + if (ret) {
> > + dprintk(KERN_ERR PFX "Unable to register netdev, ret = %d\n",
> ret);
> > + goto bail5;
> > + }
> > +
> > + /* Disable network packets */
> > + netif_stop_queue(netdev);
> > +
> > + /* Remap the adapter HRXDQ PA space to kernel VA space */
> > + c2dev->mmio_rxp_ring = ioremap_nocache(reg4_start + C2_RXP_HRXDQ_OFFSET,
> > + C2_RXP_HRXDQ_SIZE);
> > + if (c2dev->mmio_rxp_ring == 0UL) {
> > + dprintk(KERN_ERR PFX "Unable to remap MMIO HRXDQ region\n");
> > + ret = -EIO;
> > + goto bail6;
> > + }
> > +
> > + /* Remap the adapter HTXDQ PA space to kernel VA space */
> > + c2dev->mmio_txp_ring = ioremap_nocache(reg4_start + C2_TXP_HTXDQ_OFFSET,
> > + C2_TXP_HTXDQ_SIZE);
> > + if (c2dev->mmio_txp_ring == 0UL) {
> > + dprintk(KERN_ERR PFX "Unable to remap MMIO HTXDQ region\n");
> > + ret = -EIO;
> > + goto bail7;
> > + }
> > +
> > + /* Save off the current RX index in the last 4 bytes of the TXP Ring */
> > + C2_SET_CUR_RX(c2dev, c2dev->cur_rx);
> > +
> > + /* Remap the PCI registers in adapter BAR0 to kernel VA space */
> > + c2dev->regs = ioremap_nocache(reg0_start, reg0_len);
> > + if (c2dev->regs == 0UL) {
> > + dprintk(KERN_ERR PFX "Unable to remap BAR0\n");
> > + ret = -EIO;
> > + goto bail8;
> > + }
> > +
> > + /* Remap the PCI registers in adapter BAR4 to kernel VA space */
> > + c2dev->pa = (void *)(reg4_start + C2_PCI_REGS_OFFSET);
> > + c2dev->kva = ioremap_nocache(reg4_start + C2_PCI_REGS_OFFSET, kva_map_size);
> > + if (c2dev->kva == 0UL) {
> > + dprintk(KERN_ERR PFX "Unable to remap BAR4\n");
> > + ret = -EIO;
> > + goto bail9;
> > + }
> > +
> > + /* Print out the MAC address */
> > + c2_print_macaddr(netdev);
> > +
> > + ret = c2_rnic_init(c2dev);
> > + if (ret) {
> > + dprintk(KERN_ERR PFX "c2_rnic_init failed: %d\n", ret);
> > + goto bail10;
> > + }
> > +
> > + c2_register_device(c2dev);
> > +
> > + return 0;
> > +
> > + bail10:
> > + iounmap(c2dev->kva);
> > +
> > + bail9:
> > + iounmap(c2dev->regs);
> > +
> > + bail8:
> > + iounmap(c2dev->mmio_txp_ring);
> > +
> > + bail7:
> > + iounmap(c2dev->mmio_rxp_ring);
> > +
> > + bail6:
> > + unregister_netdev(netdev);
> > +
> > + bail5:
> > + free_netdev(netdev);
> > +
> > + bail4:
> > + free_irq(pcidev->irq, c2dev);
> > +
> > + bail3:
> > + ib_dealloc_device(&c2dev->ibdev);
> > +
> > + bail2:
> > + pci_release_regions(pcidev);
> > +
> > + bail1:
> > + pci_disable_device(pcidev);
> > +
> > + bail0:
> > + return ret;
> > +}
> > +
> > +static void __devexit c2_remove(struct pci_dev *pcidev)
> > +{
> > + struct c2_dev *c2dev = pci_get_drvdata(pcidev);
> > + struct net_device *netdev = c2dev->netdev;
> > +
> > + assert(netdev != NULL);
> > +
> > + /* Unregister with OpenIB */
> > + ib_unregister_device(&c2dev->ibdev);
> > +
> > + /* Clean up the RNIC resources */
> > + c2_rnic_term(c2dev);
> > +
> > + /* Remove network device from the kernel */
> > + unregister_netdev(netdev);
> > +
> > + /* Free network device */
> > + free_netdev(netdev);
> > +
> > + /* Free the interrupt line */
> > + free_irq(pcidev->irq, c2dev);
> > +
> > + /* missing: Turn LEDs off here */
> > +
> > + /* Unmap adapter PA space */
> > + iounmap(c2dev->kva);
> > + iounmap(c2dev->regs);
> > + iounmap(c2dev->mmio_txp_ring);
> > + iounmap(c2dev->mmio_rxp_ring);
> > +
> > + /* Free the hardware structure */
> > + ib_dealloc_device(&c2dev->ibdev);
> > +
> > + /* Release reserved PCI I/O and memory resources */
> > + pci_release_regions(pcidev);
> > +
> > + /* Disable PCI device */
> > + pci_disable_device(pcidev);
> > +
> > + /* Clear driver specific data */
> > + pci_set_drvdata(pcidev, NULL);
> > +}
> > +
> > +static struct pci_driver c2_pci_driver = {
> > + .name = DRV_NAME,
> > + .id_table = c2_pci_table,
> > + .probe = c2_probe,
> > + .remove = __devexit_p(c2_remove),
> > +};
> > +
> > +static int __init c2_init_module(void)
> > +{
> > + return pci_module_init(&c2_pci_driver);
> > +}
> > +
> > +static void __exit c2_exit_module(void)
> > +{
> > + pci_unregister_driver(&c2_pci_driver);
> > +}
> > +
> > +module_init(c2_init_module);
> > +module_exit(c2_exit_module);
> > Index: hw/amso1100/c2_qp.c
> > ===================================================================
> > --- hw/amso1100/c2_qp.c (revision 0)
> > +++ hw/amso1100/c2_qp.c (revision 0)
> > @@ -0,0 +1,840 @@
> > +/*
> > + * Copyright (c) 2004 Topspin Communications. All rights reserved.
> > + * Copyright (c) 2005 Cisco Systems. All rights reserved.
> > + * Copyright (c) 2005 Mellanox Technologies. All rights reserved.
> > + * Copyright (c) 2004 Voltaire, Inc. All rights reserved.
> > + * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
> > + *
> > + * This software is available to you under a choice of one of two
> > + * licenses. You may choose to be licensed under the terms of the GNU
> > + * General Public License (GPL) Version 2, available from the file
> > + * COPYING in the main directory of this source tree, or the
> > + * OpenIB.org BSD license below:
> > + *
> > + * Redistribution and use in source and binary forms, with or
> > + * without modification, are permitted provided that the following
> > + * conditions are met:
> > + *
> > + * - Redistributions of source code must retain the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer.
> > + *
> > + * - Redistributions in binary form must reproduce the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer in the documentation and/or other materials
> > + * provided with the distribution.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> > + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> > + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
> > + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
> > + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
> > + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
> > + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> > + * SOFTWARE.
> > + *
> > + */
> > +
> > +#include "c2.h"
> > +#include "c2_vq.h"
> > +#include "cc_status.h"
> > +
> > +#define C2_MAX_ORD_PER_QP 128
> > +#define C2_MAX_IRD_PER_QP 128
> > +
> > +#define CC_HINT_MAKE(q_index, hint_count) (((q_index) << 16) | hint_count)
> > +#define CC_HINT_GET_INDEX(hint) (((hint) & 0x7FFF0000) >> 16)
> > +#define CC_HINT_GET_COUNT(hint) ((hint) & 0x0000FFFF)
> > +
> > +enum c2_qp_state {
> > + C2_QP_STATE_IDLE = 0x01,
> > + C2_QP_STATE_CONNECTING = 0x02,
> > + C2_QP_STATE_RTS = 0x04,
> > + C2_QP_STATE_CLOSING = 0x08,
> > + C2_QP_STATE_TERMINATE = 0x10,
> > + C2_QP_STATE_ERROR = 0x20,
> > +};
> > +
> > +#define NO_SUPPORT -1
> > +static const u8 c2_opcode[] = {
> > + [IB_WR_SEND] = CC_WR_TYPE_SEND,
> > + [IB_WR_SEND_WITH_IMM] = NO_SUPPORT,
> > + [IB_WR_RDMA_WRITE] = CC_WR_TYPE_RDMA_WRITE,
> > + [IB_WR_RDMA_WRITE_WITH_IMM] = NO_SUPPORT,
> > + [IB_WR_RDMA_READ] = CC_WR_TYPE_RDMA_READ,
> > + [IB_WR_ATOMIC_CMP_AND_SWP] = NO_SUPPORT,
> > + [IB_WR_ATOMIC_FETCH_AND_ADD] = NO_SUPPORT,
> > +};
> > +
> > +void c2_qp_event(struct c2_dev *c2dev, u32 qpn,
> > + enum ib_event_type event_type)
> > +{
> > + struct c2_qp *qp;
> > + struct ib_event event;
> > +
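> > + /* Look up the QP and hold a reference so it cannot be freed while the event is delivered */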
> > + spin_lock(&c2dev->qp_table.lock);
> > + qp = c2_array_get(&c2dev->qp_table.qp, qpn & (c2dev->max_qp - 1));
> > + if (qp)
> > + atomic_inc(&qp->refcount);
> > + spin_unlock(&c2dev->qp_table.lock);
> > +
> > + if (!qp) {
> > + dprintk("Async event for bogus QP %08x\n", qpn);
> > + return;
> > + }
> > +
> > + event.device = &c2dev->ibdev;
> > + event.event = event_type;
> > + event.element.qp = &qp->ibqp;
> > + if (qp->ibqp.event_handler)
> > + qp->ibqp.event_handler(&event, qp->ibqp.qp_context);
> > +
> > + if (atomic_dec_and_test(&qp->refcount))
> > + wake_up(&qp->wait);
> > +}
> > +
> > +static int to_c2_state(enum ib_qp_state ib_state)
> > +{
> > + switch (ib_state) {
> > + case IB_QPS_RESET: return C2_QP_STATE_IDLE;
> > + case IB_QPS_RTS: return C2_QP_STATE_RTS;
> > + case IB_QPS_SQD: return C2_QP_STATE_CLOSING;
> > + case IB_QPS_SQE: return C2_QP_STATE_CLOSING;
> > + case IB_QPS_ERR: return C2_QP_STATE_ERROR;
> > + default: return -1;
> > + }
> > +}
> > +
> > +#define C2_QP_NO_ATTR_CHANGE 0xFFFFFFFF
> > +
> > +int c2_qp_modify(struct c2_dev *c2dev, struct c2_qp *qp,
> > + struct ib_qp_attr *attr, int attr_mask)
> > +{
> > + ccwr_qp_modify_req_t wr;
> > + ccwr_qp_modify_rep_t *reply;
> > + struct c2_vq_req *vq_req;
> > + int err;
> > +
> > + vq_req = vq_req_alloc(c2dev);
> > + if (!vq_req)
> > + return -ENOMEM;
> > +
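> > + /* Build the modify WR; ord/ird and queue depths default to C2_QP_NO_ATTR_CHANGE */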
> > + c2_wr_set_id(&wr, CCWR_QP_MODIFY);
> > + wr.hdr.context = (unsigned long)vq_req;
> > + wr.rnic_handle = c2dev->adapter_handle;
> > + wr.qp_handle = qp->adapter_handle;
> > + wr.ord = cpu_to_be32(C2_QP_NO_ATTR_CHANGE);
> > + wr.ird = cpu_to_be32(C2_QP_NO_ATTR_CHANGE);
> > + wr.sq_depth = cpu_to_be32(C2_QP_NO_ATTR_CHANGE);
> > + wr.rq_depth = cpu_to_be32(C2_QP_NO_ATTR_CHANGE);
> > +
> > + if (attr_mask & IB_QP_STATE) {
> > +
> > + /* Ensure the state is valid */
> > + if (attr->qp_state < 0 || attr->qp_state > IB_QPS_ERR) {
> > + err = -EINVAL;
> > + goto bail0;
> > + }
> > +
> > + wr.next_qp_state = cpu_to_be32(to_c2_state(attr->qp_state));
> > +
> > + } else if (attr_mask & IB_QP_CUR_STATE) {
> > +
> > + if (attr->cur_qp_state != IB_QPS_RTR &&
> > + attr->cur_qp_state != IB_QPS_RTS &&
> > + attr->cur_qp_state != IB_QPS_SQD &&
> > + attr->cur_qp_state != IB_QPS_SQE) {
> > + err = -EINVAL;
> > + goto bail0;
> > + } else
> > + wr.next_qp_state = cpu_to_be32(to_c2_state(attr->cur_qp_state));
> > + } else {
> > + err = 0;
> > + goto bail0;
> > + }
> > +
> > + /* reference the request struct */
> > + vq_req_get(c2dev, vq_req);
> > +
> > + err = vq_send_wr(c2dev, (ccwr_t *)&wr);
> > + if (err) {
> > + vq_req_put(c2dev, vq_req);
> > + goto bail0;
> > + }
> > +
> > + err = vq_wait_for_reply(c2dev, vq_req);
> > + if (err)
> > + goto bail0;
> > +
> > + reply = (ccwr_qp_modify_rep_t *)(unsigned long)vq_req->reply_msg;
> > + if (!reply) {
> > + err = -ENOMEM;
> > + goto bail0;
> > + }
> > +
> > + err = c2_errno(reply);
> > +
> > + vq_repbuf_free(c2dev, reply);
> > +bail0:
> > + vq_req_free(c2dev, vq_req);
> > + return err;
> > +}
> > +
> > +static int destroy_qp(struct c2_dev *c2dev,
> > + struct c2_qp *qp)
> > +{
> > + struct c2_vq_req *vq_req;
> > + ccwr_qp_destroy_req_t wr;
> > + ccwr_qp_destroy_rep_t *reply;
> > + int err;
> > +
> > + /*
> > + * Allocate a verb request message
> > + */
> > + vq_req = vq_req_alloc(c2dev);
> > + if (!vq_req) {
> > + return -ENOMEM;
> > + }
> > +
> > + /*
> > + * Initialize the WR
> > + */
> > + c2_wr_set_id(&wr, CCWR_QP_DESTROY);
> > + wr.hdr.context = (unsigned long)vq_req;
> > + wr.rnic_handle = c2dev->adapter_handle;
> > + wr.qp_handle = qp->adapter_handle;
> > +
> > + /*
> > + * reference the request struct. dereferenced in the int handler.
> > + */
> > + vq_req_get(c2dev, vq_req);
> > +
> > + /*
> > + * Send WR to adapter
> > + */
> > + err = vq_send_wr(c2dev, (ccwr_t*)&wr);
> > + if (err) {
> > + vq_req_put(c2dev, vq_req);
> > + goto bail0;
> > + }
> > +
> > + /*
> > + * Wait for reply from adapter
> > + */
> > + err = vq_wait_for_reply(c2dev, vq_req);
> > + if (err) {
> > + goto bail0;
> > + }
> > +
> > + /*
> > + * Process reply
> > + */
> > + reply = (ccwr_qp_destroy_rep_t*)(unsigned long)(vq_req->reply_msg);
> > + if (!reply) {
> > + err = -ENOMEM;
> > + goto bail0;
> > + }
> > +
> > + if ( (err = c2_errno(reply)) != 0) {
> > + // XXX print error
> > + }
> > +
> > + vq_repbuf_free(c2dev, reply);
> > +bail0:
> > + vq_req_free(c2dev, vq_req);
> > + return err;
> > +}
> > +
> > +int c2_alloc_qp(struct c2_dev *c2dev,
> > + struct c2_pd *pd,
> > + struct ib_qp_init_attr *qp_attrs,
> > + struct c2_qp *qp)
> > +{
> > + ccwr_qp_create_req_t wr;
> > + ccwr_qp_create_rep_t *reply;
> > + struct c2_vq_req *vq_req;
> > + struct c2_cq *send_cq = to_c2cq(qp_attrs->send_cq);
> > + struct c2_cq *recv_cq = to_c2cq(qp_attrs->recv_cq);
> > + unsigned long peer_pa;
> > + u32 q_size, msg_size, mmap_size;
> > + void *mmap;
> > + int err;
> > +
> > + qp->qpn = c2_alloc(&c2dev->qp_table.alloc);
> > + if (qp->qpn == -1)
> > + return -ENOMEM;
> > +
> > + /* Allocate the SQ and RQ shared pointers */
> > + qp->sq_mq.shared = c2_alloc_mqsp(c2dev->kern_mqsp_pool);
> > + if (!qp->sq_mq.shared) {
> > + err = -ENOMEM;
> > + goto bail0;
> > + }
> > +
> > + qp->rq_mq.shared = c2_alloc_mqsp(c2dev->kern_mqsp_pool);
> > + if (!qp->rq_mq.shared) {
> > + err = -ENOMEM;
> > + goto bail1;
> > + }
> > +
> > + /* Allocate the verbs request */
> > + vq_req = vq_req_alloc(c2dev);
> > + if (vq_req == NULL) {
> > + err = -ENOMEM;
> > + goto bail2;
> > + }
> > +
> > + /* Initialize the work request */
> > + memset(&wr, 0, sizeof(wr));
> > + c2_wr_set_id(&wr, CCWR_QP_CREATE);
> > + wr.hdr.context = (unsigned long)vq_req;
> > + wr.rnic_handle = c2dev->adapter_handle;
> > + wr.sq_cq_handle = send_cq->adapter_handle;
> > + wr.rq_cq_handle = recv_cq->adapter_handle;
> > + wr.sq_depth = cpu_to_be32(qp_attrs->cap.max_send_wr+1);
> > + wr.rq_depth = cpu_to_be32(qp_attrs->cap.max_recv_wr+1);
> > + wr.srq_handle = 0;
> > + wr.flags = cpu_to_be32(QP_RDMA_READ | QP_RDMA_WRITE | QP_MW_BIND |
> > + QP_ZERO_STAG | QP_RDMA_READ_RESPONSE);
> > + wr.send_sgl_depth = cpu_to_be32(qp_attrs->cap.max_send_sge);
> > + wr.recv_sgl_depth = cpu_to_be32(qp_attrs->cap.max_recv_sge);
> > + wr.rdma_write_sgl_depth = cpu_to_be32(qp_attrs->cap.max_send_sge); // XXX no write depth?
> > + wr.shared_sq_ht = cpu_to_be64(__pa(qp->sq_mq.shared));
> > + wr.shared_rq_ht = cpu_to_be64(__pa(qp->rq_mq.shared));
> > + wr.ord = cpu_to_be32(C2_MAX_ORD_PER_QP);
> > + wr.ird = cpu_to_be32(C2_MAX_IRD_PER_QP);
> > + wr.pd_id = pd->pd_id;
> > + wr.user_context = (unsigned long)qp;
> > +
> > + vq_req_get(c2dev, vq_req);
> > +
> > + /* Send the WR to the adapter */
> > + err = vq_send_wr(c2dev, (ccwr_t*)&wr);
> > + if (err) {
> > + vq_req_put(c2dev, vq_req);
> > + goto bail3;
> > + }
> > +
> > + /* Wait for the verb reply */
> > + err = vq_wait_for_reply(c2dev, vq_req);
> > + if (err) {
> > + goto bail3;
> > + }
> > +
> > + /* Process the reply */
> > + reply = (ccwr_qp_create_rep_t*)(unsigned long)(vq_req->reply_msg);
> > + if (!reply) {
> > + err = -ENOMEM;
> > + goto bail3;
> > + }
> > +
> > + if ( (err = c2_wr_get_result(reply)) != 0) {
> > + goto bail4;
> > + }
> > +
> > + /* Fill in the kernel QP struct */
> > + atomic_set(&qp->refcount, 1);
> > + qp->adapter_handle = reply->qp_handle;
> > + qp->state = IB_QPS_RESET;
> > + qp->send_sgl_depth = qp_attrs->cap.max_send_sge;
> > + qp->rdma_write_sgl_depth = qp_attrs->cap.max_send_sge;
> > + qp->recv_sgl_depth = qp_attrs->cap.max_recv_sge;
> > +
> > + /* Initialize the SQ MQ */
> > + q_size = be32_to_cpu(reply->sq_depth);
> > + msg_size = be32_to_cpu(reply->sq_msg_size);
> > + peer_pa = (unsigned long)(c2dev->pa + be32_to_cpu(reply->sq_mq_start));
> > + mmap_size = PAGE_ALIGN(sizeof(struct c2_mq_shared) + msg_size * q_size);
> > + mmap = ioremap_nocache(peer_pa, mmap_size);
> > + if (!mmap) {
> > + err = -ENOMEM;
> > + goto bail5;
> > + }
> > +
> > + c2_mq_init(&qp->sq_mq,
> > + be32_to_cpu(reply->sq_mq_index),
> > + q_size,
> > + msg_size,
> > + mmap + sizeof(struct c2_mq_shared), /* pool start */
> > + mmap, /* peer */
> > + C2_MQ_ADAPTER_TARGET);
> > +
> > + /* Initialize the RQ mq */
> > + q_size = be32_to_cpu(reply->rq_depth);
> > + msg_size = be32_to_cpu(reply->rq_msg_size);
> > + peer_pa = (unsigned long)(c2dev->pa + be32_to_cpu(reply->rq_mq_start));
> > + mmap_size = PAGE_ALIGN(sizeof(struct c2_mq_shared) + msg_size * q_size);
> > + mmap = ioremap_nocache(peer_pa, mmap_size);
> > + if (!mmap) {
> > + err = -ENOMEM;
> > + goto bail6;
> > + }
> > +
> > + c2_mq_init(&qp->rq_mq,
> > + be32_to_cpu(reply->rq_mq_index),
> > + q_size,
> > + msg_size,
> > + mmap + sizeof(struct c2_mq_shared), /* pool start */
> > + mmap, /* peer */
> > + C2_MQ_ADAPTER_TARGET);
> > +
> > + vq_repbuf_free(c2dev, reply);
> > + vq_req_free(c2dev, vq_req);
> > +
> > + spin_lock_irq(&c2dev->qp_table.lock);
> > + c2_array_set(&c2dev->qp_table.qp,
> > + qp->qpn & (c2dev->max_qp - 1), qp);
> > + spin_unlock_irq(&c2dev->qp_table.lock);
> > +
> > + return 0;
> > +
> > +bail6:
> > + iounmap(qp->sq_mq.peer);
> > +bail5:
> > + destroy_qp(c2dev, qp);
> > +bail4:
> > + vq_repbuf_free(c2dev, reply);
> > +bail3:
> > + vq_req_free(c2dev, vq_req);
> > +bail2:
> > + c2_free_mqsp(qp->rq_mq.shared);
> > +bail1:
> > + c2_free_mqsp(qp->sq_mq.shared);
> > +bail0:
> > + c2_free(&c2dev->qp_table.alloc, qp->qpn);
> > + return err;
> > +}
> > +
> > +void c2_free_qp(struct c2_dev *c2dev,
> > + struct c2_qp *qp)
> > +{
> > + struct c2_cq *send_cq;
> > + struct c2_cq *recv_cq;
> > +
> > + send_cq = to_c2cq(qp->ibqp.send_cq);
> > + recv_cq = to_c2cq(qp->ibqp.recv_cq);
> > +
> > + /*
> > + * Lock CQs here, so that CQ polling code can do QP lookup
> > + * without taking a lock.
> > + */
> > + spin_lock_irq(&send_cq->lock);
> > + if (send_cq != recv_cq)
> > + spin_lock(&recv_cq->lock);
> > +
> > + spin_lock(&c2dev->qp_table.lock);
> > + c2_array_clear(&c2dev->qp_table.qp,
> > + qp->qpn & (c2dev->max_qp - 1));
> > + spin_unlock(&c2dev->qp_table.lock);
> > +
> > + if (send_cq != recv_cq)
> > + spin_unlock(&recv_cq->lock);
> > + spin_unlock_irq(&send_cq->lock);
> > +
> > + atomic_dec(&qp->refcount);
> > + wait_event(qp->wait, !atomic_read(&qp->refcount));
> > +
> > + /*
> > + * Destroy the qp in the rnic...
> > + */
> > + destroy_qp(c2dev, qp);
> > +
> > + /*
> > + * Mark any unreaped CQEs as null and void.
> > + */
> > + c2_cq_clean(c2dev, qp, send_cq->cqn);
> > + if (send_cq != recv_cq)
> > + c2_cq_clean(c2dev, qp, recv_cq->cqn);
> > + /*
> > + * Unmap the MQs and return the shared pointers
> > + * to the message pool.
> > + */
> > + iounmap(qp->sq_mq.peer);
> > + iounmap(qp->rq_mq.peer);
> > + c2_free_mqsp(qp->sq_mq.shared);
> > + c2_free_mqsp(qp->rq_mq.shared);
> > +
> > + c2_free(&c2dev->qp_table.alloc, qp->qpn);
> > +}
> > +
> > +/*
> > + * Function: move_sgl
> > + *
> > + * Description:
> > + * Move an SGL from the user's work request struct into a CCIL Work Request
> > + * message, swapping to WR byte order and ensuring the total length doesn't
> > + * overflow.
> > + *
> > + * IN:
> > + * dst - ptr to CCIL Work Request message SGL memory.
> > + * src - ptr to the consumer's SGL memory.
> > + *
> > + * OUT: none
> > + *
> > + * Return:
> > + * CCIL status codes.
> > + */
> > +static int
> > +move_sgl(cc_data_addr_t *dst, struct ib_sge *src, int count, u32 *p_len, u8 *actual_count)
> > +{
> > + u32 tot = 0; /* running total */
> > + u8 acount = 0; /* running total non-0 len sge's */
> > +
> > + while (count > 0) {
> > + /*
> > + * If the addition of this SGE causes the
> > + * total SGL length to exceed 2^32-1, then
> > + * fail-n-bail.
> > + *
> > + * If the current total plus the next element length
> > + * wraps, then it will go negative and be less than the
> > + * current total...
> > + */
> > + if ((tot+src->length) < tot) {
> > + return -EINVAL;
> > + }
> > + /*
> > + * Bug: 1456 (as well as 1498 & 1643)
> > + * Skip over any sge's supplied with len=0
> > + */
> > + if (src->length) {
> > + tot += src->length;
> > + dst->stag = cpu_to_be32(src->lkey);
> > + dst->to = cpu_to_be64(src->addr);
> > + dst->length = cpu_to_be32(src->length);
> > + dst++;
> > + acount++;
> > + }
> > + src++;
> > + count--;
> > + }
> > +
> > + if (acount == 0) {
> > + /*
> > + * Bug: 1476 (as well as 1498, 1456 and 1643)
> > + * Setup the SGL in the WR to make it easier for the RNIC.
> > + * This way, the FW doesn't have to deal with special cases.
> > + * Setting length=0 should be sufficient.
> > + */
> > + dst->stag = 0;
> > + dst->to = 0;
> > + dst->length = 0;
> > + }
> > +
> > + *p_len = tot;
> > + *actual_count = acount;
> > + return 0;
> > +}
> > +
> > +/*
> > + * Function: c2_activity (private function)
> > + *
> > + * Description:
> > + * Post an mq index to the host->adapter activity fifo.
> > + *
> > + * IN:
> > + * c2dev - ptr to c2dev structure
> > + * mq_index - mq index to post
> > + * shared - value most recently written to shared
> > + *
> > + * OUT:
> > + *
> > + * Return:
> > + * none
> > + */
> > +static inline void
> > +c2_activity(struct c2_dev *c2dev, u32 mq_index, u16 shared)
> > +{
> > + /*
> > + * First read the register to see if the FIFO is full, and if so,
> > + * spin until it's not. This isn't perfect -- there is no
> > + * synchronization among the clients of the register, but in
> > + * practice it prevents multiple CPU from hammering the bus
> > + * with PCI RETRY. Note that when this does happen, the card
> > + * cannot get on the bus and the card and system hang in a
> > + * deadlock -- thus the need for this code. [TOT]
> > + */
> > + while (c2_read32(c2dev->regs + PCI_BAR0_ADAPTER_HINT) & 0x80000000) {
> > + set_current_state(TASK_UNINTERRUPTIBLE);
> > + schedule_timeout(0);
> > + }
> > +
> > + c2_write32(c2dev->regs + PCI_BAR0_ADAPTER_HINT, CC_HINT_MAKE(mq_index, shared));
> > +}
> > +
> > +/*
> > + * Function: qp_wr_post
> > + *
> > + * Description:
> > + * This in-line function allocates a MQ msg, then moves the host-copy of
> > + * the completed WR into msg. Then it posts the message.
> > + *
> > + * IN:
> > + * q - ptr to user MQ.
> > + * wr - ptr to host-copy of the WR.
> > + * qp - ptr to user qp
> > + * size - Number of bytes to post. Assumed to be divisible by 4.
> > + *
> > + * OUT: none
> > + *
> > + * Return:
> > + * CCIL status codes.
> > + */
> > +static int
> > +qp_wr_post(struct c2_mq *q, ccwr_t *wr, struct c2_qp *qp, u32 size)
> > +{
> > + ccwr_t *msg;
> > +
> > + msg = c2_mq_alloc(q);
> > + if (msg == NULL) {
> > + return -EINVAL;
> > + }
> > +
> > +#ifdef CCMSGMAGIC
> > + ((ccwr_hdr_t *)wr)->magic = cpu_to_be32(CCWR_MAGIC);
> > +#endif
> > +
> > + /*
> > + * Since all header fields in the WR are the same as the
> > + * CQE, set the following so the adapter need not.
> > + */
> > + c2_wr_set_result(wr, CCERR_PENDING);
> > +
> > + /*
> > + * Copy the wr down to the adapter
> > + */
> > + memcpy((void *)msg, (void *)wr, size);
> > +
> > + c2_mq_produce(q);
> > + return 0;
> > +}
> > +
> > +
> > +int c2_post_send(struct ib_qp *ibqp, struct ib_send_wr *ib_wr,
> > + struct ib_send_wr **bad_wr)
> > +{
> > + struct c2_dev *c2dev = to_c2dev(ibqp->device);
> > + struct c2_qp *qp = to_c2qp(ibqp);
> > + ccwr_t wr;
> > + int err = 0;
> > +
> > + u32 flags;
> > + u32 tot_len;
> > + u8 actual_sge_count;
> > + u32 msg_size;
> > +
> > + if (qp->state > IB_QPS_RTS)
> > + return -EINVAL;
> > +
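> > + /* Translate each chained WR into a CCIL WR and post it to the send MQ */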
> > + while (ib_wr) {
> > +
> > + flags = 0;
> > + wr.sqwr.sq_hdr.user_hdr.hdr.context = ib_wr->wr_id;
> > + if (ib_wr->send_flags & IB_SEND_SIGNALED) {
> > + flags |= SQ_SIGNALED;
> > + }
> > +
> > + switch (ib_wr->opcode) {
> > + case IB_WR_SEND:
> > + if (ib_wr->send_flags & IB_SEND_SOLICITED) {
> > + c2_wr_set_id(&wr, CC_WR_TYPE_SEND_SE);
> > + msg_size = sizeof(ccwr_send_se_req_t);
> > + } else {
> > + c2_wr_set_id(&wr, CC_WR_TYPE_SEND);
> > + msg_size = sizeof(ccwr_send_req_t);
> > + }
> > +
> > + wr.sqwr.send.remote_stag = 0;
> > + msg_size += sizeof(cc_data_addr_t) * ib_wr->num_sge;
> > + if (ib_wr->num_sge > qp->send_sgl_depth) {
> > + err = -EINVAL;
> > + break;
> > + }
> > + if (ib_wr->send_flags & IB_SEND_FENCE) {
> > + flags |= SQ_READ_FENCE;
> > + }
> > + err = move_sgl((cc_data_addr_t*)&(wr.sqwr.send.data),
> > + ib_wr->sg_list,
> > + ib_wr->num_sge,
> > + &tot_len,
> > + &actual_sge_count);
> > + wr.sqwr.send.sge_len = cpu_to_be32(tot_len);
> > + c2_wr_set_sge_count(&wr, actual_sge_count);
> > + break;
> > + case IB_WR_RDMA_WRITE:
> > + c2_wr_set_id(&wr, CC_WR_TYPE_RDMA_WRITE);
> > + msg_size = sizeof(ccwr_rdma_write_req_t) +
> > + (sizeof(cc_data_addr_t) * ib_wr->num_sge);
> > + if (ib_wr->num_sge > qp->rdma_write_sgl_depth) {
> > + err = -EINVAL;
> > + break;
> > + }
> > + if (ib_wr->send_flags & IB_SEND_FENCE) {
> > + flags |= SQ_READ_FENCE;
> > + }
> > + wr.sqwr.rdma_write.remote_stag = cpu_to_be32(ib_wr->wr.rdma.rkey);
> > + wr.sqwr.rdma_write.remote_to = cpu_to_be64(ib_wr->wr.rdma.remote_addr);
> > + err = move_sgl((cc_data_addr_t*)
> > + &(wr.sqwr.rdma_write.data),
> > + ib_wr->sg_list,
> > + ib_wr->num_sge,
> > + &tot_len,
> > + &actual_sge_count);
> > + wr.sqwr.rdma_write.sge_len = cpu_to_be32(tot_len);
> > + c2_wr_set_sge_count(&wr, actual_sge_count);
> > + break;
> > + case IB_WR_RDMA_READ:
> > + c2_wr_set_id(&wr, CC_WR_TYPE_RDMA_READ);
> > + msg_size = sizeof(ccwr_rdma_read_req_t);
> > +
> > + /* iWARP only supports 1 sge for RDMA reads */
> > + if (ib_wr->num_sge > 1) {
> > + err = -EINVAL;
> > + break;
> > + }
> > +
> > + /*
> > + * Move the local and remote stag/to/len into the WR.
> > + */
> > + wr.sqwr.rdma_read.local_stag =
> > + cpu_to_be32(ib_wr->sg_list->lkey);
> > + wr.sqwr.rdma_read.local_to =
> > + cpu_to_be64(ib_wr->sg_list->addr);
> > + wr.sqwr.rdma_read.remote_stag =
> > + cpu_to_be32(ib_wr->wr.rdma.rkey);
> > + wr.sqwr.rdma_read.remote_to =
> > + cpu_to_be64(ib_wr->wr.rdma.remote_addr);
> > + wr.sqwr.rdma_read.length =
> > + cpu_to_be32(ib_wr->sg_list->length);
> > + break;
> > + default:
> > + /* error */
> > + msg_size = 0;
> > + err = -EINVAL;
> > + break;
> > + }
> > +
> > + /*
> > + * If we had an error on the last wr build, then
> > + * break out. Possible errors include bogus WR
> > + * type, and a bogus SGL length...
> > + */
> > + if (err) {
> > + break;
> > + }
> > +
> > + /*
> > + * Store flags
> > + */
> > + c2_wr_set_flags(&wr, flags);
> > +
> > + /*
> > + * Post the puppy!
> > + */
> > + err = qp_wr_post(&qp->sq_mq, &wr, qp, msg_size);
> > + if (err) {
> > + break;
> > + }
> > +
> > + /*
> > + * Enqueue mq index to activity FIFO.
> > + */
> > + c2_activity(c2dev, qp->sq_mq.index, qp->sq_mq.hint_count);
> > +
> > + ib_wr = ib_wr->next;
> > + }
> > +
> > + if (err)
> > + *bad_wr = ib_wr;
> > + return err;
> > +}
> > +
> > +int c2_post_receive(struct ib_qp *ibqp, struct ib_recv_wr *ib_wr,
> > + struct ib_recv_wr **bad_wr)
> > +{
> > + struct c2_dev *c2dev = to_c2dev(ibqp->device);
> > + struct c2_qp *qp = to_c2qp(ibqp);
> > + ccwr_t wr;
> > + int err = 0;
> > +
> > + if (qp->state > IB_QPS_RTS)
> > + return -EINVAL;
> > +
> > + /*
> > + * Try and post each work request
> > + */
> > + while (ib_wr) {
> > + u32 tot_len;
> > + u8 actual_sge_count;
> > +
> > + if (ib_wr->num_sge > qp->recv_sgl_depth) {
> > + err = -EINVAL;
> > + break;
> > + }
> > +
> > + /*
> > + * Create local host-copy of the WR
> > + */
> > + wr.rqwr.rq_hdr.user_hdr.hdr.context = ib_wr->wr_id;
> > + c2_wr_set_id(&wr, CCWR_RECV);
> > + c2_wr_set_flags(&wr, 0);
> > +
> > + /* sge_count is limited to eight bits. */
> > + assert(ib_wr->num_sge < 256);
> > + err = move_sgl((cc_data_addr_t*)&(wr.rqwr.data),
> > + ib_wr->sg_list,
> > + ib_wr->num_sge,
> > + &tot_len,
> > + &actual_sge_count);
> > + c2_wr_set_sge_count(&wr, actual_sge_count);
> > +
> > + /*
> > + * If we had an error on the last wr build, then
> > + * break out. Possible errors include bogus WR
> > + * type, and a bogus SGL length...
> > + */
> > + if (err) {
> > + break;
> > + }
> > +
> > + err = qp_wr_post(&qp->rq_mq, &wr, qp, qp->rq_mq.msg_size);
> > + if (err) {
> > + break;
> > + }
> > +
> > + /*
> > + * Enqueue mq index to activity FIFO
> > + */
> > + c2_activity(c2dev, qp->rq_mq.index, qp->rq_mq.hint_count);
> > +
> > + ib_wr = ib_wr->next;
> > + }
> > +
> > + if (err)
> > + *bad_wr = ib_wr;
> > + return err;
> > +}
> > +
> > +int __devinit c2_init_qp_table(struct c2_dev *c2dev)
> > +{
> > + int err;
> > +
> > + spin_lock_init(&c2dev->qp_table.lock);
> > +
> > + err = c2_alloc_init(&c2dev->qp_table.alloc,
> > + c2dev->max_qp,
> > + 0);
> > + if (err)
> > + return err;
> > +
> > + err = c2_array_init(&c2dev->qp_table.qp,
> > + c2dev->max_qp);
> > + if (err) {
> > + c2_alloc_cleanup(&c2dev->qp_table.alloc);
> > + return err;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +void __devexit c2_cleanup_qp_table(struct c2_dev *c2dev)
> > +{
> > + c2_alloc_cleanup(&c2dev->qp_table.alloc);
> > +}
> > Index: hw/amso1100/cc_ivn.h
> > ===================================================================
> > --- hw/amso1100/cc_ivn.h (revision 0)
> > +++ hw/amso1100/cc_ivn.h (revision 0)
> > @@ -0,0 +1,57 @@
> > +/*
> > + * Copyright (c) 2005 Ammasso, Inc. All rights reserved.
> > + * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
> > + *
> > + * This software is available to you under a choice of one of two
> > + * licenses. You may choose to be licensed under the terms of the GNU
> > + * General Public License (GPL) Version 2, available from the file
> > + * COPYING in the main directory of this source tree, or the
> > + * OpenIB.org BSD license below:
> > + *
> > + * Redistribution and use in source and binary forms, with or
> > + * without modification, are permitted provided that the following
> > + * conditions are met:
> > + *
> > + * - Redistributions of source code must retain the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer.
> > + *
> > + * - Redistributions in binary form must reproduce the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer in the documentation and/or other materials
> > + * provided with the distribution.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> > + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> > + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
> > + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
> > + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
> > + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
> > + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> > + * SOFTWARE.
> > + */
> > +#ifndef _CC_IVN_H_
> > +#define _CC_IVN_H_
> > +
> > +/*
> > + * The following value must be incremented each time structures shared
> > + * between the firmware and host drivers are changed. This includes
> > + * structures, types, and the maximum number of queue pairs.
> > + */
> > +#define CC_IVN_BASE 18
> > +
> > +/* Used to mask off the CCMSGMAGIC bit */
> > +#define CC_IVN_MASK 0x7fffffff
> > +
> > +
> > +/*
> > + * The high order bit indicates a CCMSGMAGIC build, which changes the
> > + * adapter<->host message formats.
> > + */
> > +#ifdef CCMSGMAGIC
> > +#define CC_IVN (CC_IVN_BASE | 0x80000000)
> > +#else
> > +#define CC_IVN (CC_IVN_BASE & 0x7fffffff)
> > +#endif
> > +
> > +#endif /* _CC_IVN_H_ */
> > Index: hw/amso1100/c2_mq.h
> > ===================================================================
> > --- hw/amso1100/c2_mq.h (revision 0)
> > +++ hw/amso1100/c2_mq.h (revision 0)
> > @@ -0,0 +1,104 @@
> > +/*
> > + * Copyright (c) 2005 Ammasso, Inc. All rights reserved.
> > + * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
> > + *
> > + * This software is available to you under a choice of one of two
> > + * licenses. You may choose to be licensed under the terms of the GNU
> > + * General Public License (GPL) Version 2, available from the file
> > + * COPYING in the main directory of this source tree, or the
> > + * OpenIB.org BSD license below:
> > + *
> > + * Redistribution and use in source and binary forms, with or
> > + * without modification, are permitted provided that the following
> > + * conditions are met:
> > + *
> > + * - Redistributions of source code must retain the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer.
> > + *
> > + * - Redistributions in binary form must reproduce the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer in the documentation and/or other materials
> > + * provided with the distribution.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> > + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> > + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
> > + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
> > + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
> > + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
> > + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> > + * SOFTWARE.
> > + */
> > +
> > +#ifndef _C2_MQ_H_
> > +#define _C2_MQ_H_
> > +#include <linux/kernel.h>
> > +#include "c2_wr.h"
> > +
> > +enum c2_shared_regs {
> > +
> > + C2_SHARED_ARMED = 0x10,
> > + C2_SHARED_NOTIFY = 0x18,
> > + C2_SHARED_SHARED = 0x40,
> > +};
> > +
> > +struct c2_mq_shared {
> > + u16 unused1;
> > + u8 armed;
> > + u8 notification_type;
> > + u32 unused2;
> > + u16 shared;
> > + /* Pad to 64 bytes. */
> > + u8 pad[64-sizeof(u16)-2*sizeof(u8)-sizeof(u32)-sizeof(u16)];
> > +};
> > +
> > +enum c2_mq_type {
> > + C2_MQ_HOST_TARGET = 1,
> > + C2_MQ_ADAPTER_TARGET = 2,
> > +};
> > +
> > +/*
> > + * c2_mq_t is for kernel-mode MQs like the VQs and the AEQ.
> > + * c2_user_mq_t (which is the same format) is for user-mode MQs...
> > + */
> > +#define C2_MQ_MAGIC 0x4d512020 /* 'MQ ' */
> > +struct c2_mq {
> > + u32 magic;
> > + u8* msg_pool;
> > + u16 hint_count;
> > + u16 priv;
> > + struct c2_mq_shared *peer;
> > + u16* shared;
> > + u32 q_size;
> > + u32 msg_size;
> > + u32 index;
> > + enum c2_mq_type type;
> > +};
> > +
> > +#define BUMP(q,p) (p) = ((p)+1) % (q)->q_size
> > +#define BUMP_SHARED(q,p) (p) = cpu_to_be16((be16_to_cpu(p)+1) % (q)->q_size)
> > +
> > +static __inline__ int
> > +c2_mq_empty(struct c2_mq *q)
> > +{
> > + return q->priv == be16_to_cpu(*q->shared);
> > +}
> > +
> > +static __inline__ int
> > +c2_mq_full(struct c2_mq *q)
> > +{
> > + return q->priv == (be16_to_cpu(*q->shared) + q->q_size-1) % q->q_size;
> > +}
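> > +
> > +/*
> > + * Worked example (illustrative): with q_size == 4 and *shared == 1, the
> > + * queue is empty when priv == 1 and full when priv == (1 + 4 - 1) % 4 == 0,
> > + * i.e. the producer may BUMP() priv through at most three slots -- one
> > + * slot is always left unused to distinguish full from empty.
> > + */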
> > +
> > +extern void c2_mq_lconsume(struct c2_mq *q, u32 wqe_count);
> > +extern void * c2_mq_alloc(struct c2_mq *q);
> > +extern void c2_mq_produce(struct c2_mq *q);
> > +extern void * c2_mq_consume(struct c2_mq *q);
> > +extern void c2_mq_free(struct c2_mq *q);
> > +extern u32 c2_mq_count(struct c2_mq *q);
> > +extern void c2_mq_init(struct c2_mq *q, u32 index, u32 q_size,
> > + u32 msg_size, u8 *pool_start,
> > + u16 *peer, u32 type);
> > +
> > +#endif /* _C2_MQ_H_ */
> > Index: hw/amso1100/c2_user.h
> > ===================================================================
> > --- hw/amso1100/c2_user.h (revision 0)
> > +++ hw/amso1100/c2_user.h (revision 0)
> > @@ -0,0 +1,82 @@
> > +/*
> > + * Copyright (c) 2005 Topspin Communications. All rights reserved.
> > + * Copyright (c) 2005 Cisco Systems. All rights reserved.
> > + * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
> > + *
> > + * This software is available to you under a choice of one of two
> > + * licenses. You may choose to be licensed under the terms of the GNU
> > + * General Public License (GPL) Version 2, available from the file
> > + * COPYING in the main directory of this source tree, or the
> > + * OpenIB.org BSD license below:
> > + *
> > + * Redistribution and use in source and binary forms, with or
> > + * without modification, are permitted provided that the following
> > + * conditions are met:
> > + *
> > + * - Redistributions of source code must retain the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer.
> > + *
> > + * - Redistributions in binary form must reproduce the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer in the documentation and/or other materials
> > + * provided with the distribution.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> > + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> > + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
> > + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
> > + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
> > + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
> > + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> > + * SOFTWARE.
> > + *
> > + */
> > +
> > +#ifndef C2_USER_H
> > +#define C2_USER_H
> > +
> > +#include <linux/types.h>
> > +
> > +/*
> > + * Make sure that all structs defined in this file remain laid out so
> > + * that they pack the same way on 32-bit and 64-bit architectures (to
> > + * avoid incompatibility between 32-bit userspace and 64-bit kernels).
> > + * In particular do not use pointer types -- pass pointers in __u64
> > + * instead.
> > + */
> > +
> > +struct c2_alloc_ucontext_resp {
> > + __u32 qp_tab_size;
> > + __u32 uarc_size;
> > +};
> > +
> > +struct c2_alloc_pd_resp {
> > + __u32 pdn;
> > + __u32 reserved;
> > +};
> > +
> > +struct c2_create_cq {
> > + __u32 lkey;
> > + __u32 pdn;
> > + __u64 arm_db_page;
> > + __u64 set_db_page;
> > + __u32 arm_db_index;
> > + __u32 set_db_index;
> > +};
> > +
> > +struct c2_create_cq_resp {
> > + __u32 cqn;
> > + __u32 reserved;
> > +};
> > +
> > +struct c2_create_qp {
> > + __u32 lkey;
> > + __u32 reserved;
> > + __u64 sq_db_page;
> > + __u64 rq_db_page;
> > + __u32 sq_db_index;
> > + __u32 rq_db_index;
> > +};
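> > +
> > +/*
> > + * For example (illustrative only; the variable names are not part of this
> > + * header), a userspace library passes a doorbell page address through a
> > + * __u64 field rather than through a pointer type:
> > + *
> > + * cmd.sq_db_page = (__u64) (unsigned long) sq_db_ptr;
> > + */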
> > +
> > +#endif /* C2_USER_H */
> > Index: hw/amso1100/c2_ae.c
> > ===================================================================
> > --- hw/amso1100/c2_ae.c (revision 0)
> > +++ hw/amso1100/c2_ae.c (revision 0)
> > @@ -0,0 +1,216 @@
> > +/*
> > + * Copyright (c) 2005 Ammasso, Inc. All rights reserved.
> > + * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
> > + *
> > + * This software is available to you under a choice of one of two
> > + * licenses. You may choose to be licensed under the terms of the GNU
> > + * General Public License (GPL) Version 2, available from the file
> > + * COPYING in the main directory of this source tree, or the
> > + * OpenIB.org BSD license below:
> > + *
> > + * Redistribution and use in source and binary forms, with or
> > + * without modification, are permitted provided that the following
> > + * conditions are met:
> > + *
> > + * - Redistributions of source code must retain the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer.
> > + *
> > + * - Redistributions in binary form must reproduce the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer in the documentation and/or other materials
> > + * provided with the distribution.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> > + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> > + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
> > + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
> > + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
> > + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
> > + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> > + * SOFTWARE.
> > + */
> > +#include "c2.h"
> > +#include <rdma/iw_cm.h>
> > +#include "cc_status.h"
> > +#include "cc_ae.h"
> > +
> > +static int c2_convert_cm_status(u32 cc_status)
> > +{
> > + switch (cc_status) {
> > + case CC_CONN_STATUS_SUCCESS:
> > + return 0;
> > + case CC_CONN_STATUS_REJECTED:
> > + return -ENETRESET;
> > + case CC_CONN_STATUS_REFUSED:
> > + return -ECONNREFUSED;
> > + case CC_CONN_STATUS_TIMEDOUT:
> > + return -ETIMEDOUT;
> > + case CC_CONN_STATUS_NETUNREACH:
> > + return -ENETUNREACH;
> > + case CC_CONN_STATUS_HOSTUNREACH:
> > + return -EHOSTUNREACH;
> > + case CC_CONN_STATUS_INVALID_RNIC:
> > + return -EINVAL;
> > + case CC_CONN_STATUS_INVALID_QP:
> > + return -EINVAL;
> > + case CC_CONN_STATUS_INVALID_QP_STATE:
> > + return -EINVAL;
> > + default:
> > + panic("Unable to convert CM status: %d\n", cc_status);
> > + break;
> > + }
> > +}
> > +
> > +void c2_ae_event(struct c2_dev *c2dev, u32 mq_index)
> > +{
> > + struct c2_mq *mq = c2dev->qptr_array[mq_index];
> > + ccwr_t *wr;
> > + void *resource_user_context;
> > + struct iw_cm_event cm_event;
> > + struct ib_event ib_event;
> > + cc_resource_indicator_t resource_indicator;
> > + cc_event_id_t event_id;
> > + u8 *pdata = NULL;
> > +
> > + /*
> > + * retrieve the message
> > + */
> > + wr = c2_mq_consume(mq);
> > + if (!wr)
> > + return;
> > +
> > + memset(&cm_event, 0, sizeof(cm_event));
> > +
> > + event_id = c2_wr_get_id(wr);
> > + resource_indicator = be32_to_cpu(wr->ae.ae_generic.resource_type);
> > + resource_user_context = (void *)(unsigned long)wr->ae.ae_generic.user_context;
> > +
> > + cm_event.status = c2_convert_cm_status(c2_wr_get_result(wr));
> > +
> > + switch (resource_indicator) {
> > + case CC_RES_IND_QP: {
> > +
> > + struct c2_qp *qp = (struct c2_qp *)resource_user_context;
> > +
> > + switch (event_id) {
> > + case CCAE_ACTIVE_CONNECT_RESULTS:
> > + cm_event.event = IW_CM_EVENT_CONNECT_REPLY;
> > + cm_event.local_addr.sin_addr.s_addr =
> > + wr->ae.ae_active_connect_results.laddr;
> > + cm_event.remote_addr.sin_addr.s_addr =
> > + wr->ae.ae_active_connect_results.raddr;
> > + cm_event.local_addr.sin_port =
> > + wr->ae.ae_active_connect_results.lport;
> > + cm_event.remote_addr.sin_port =
> > + wr->ae.ae_active_connect_results.rport;
> > + cm_event.private_data_len =
> > + be32_to_cpu(wr->ae.ae_active_connect_results.private_data_length);
> > +
> > + if (cm_event.private_data_len) {
> > + /* XXX */
> > + pdata = kmalloc(cm_event.private_data_len, GFP_ATOMIC);
> > + if (!pdata) {
> > + /* Ignore the request, maybe the remote peer
> > + * will retry */
> > + dprintk("Ignored connect request -- no memory for pdata"
> > + "private_data_len=%d\n", cm_event.private_data_len);
> > + goto ignore_it;
> > + }
> > +
> > + memcpy(pdata,
> > + wr->ae.ae_active_connect_results.private_data,
> > + cm_event.private_data_len);
> > +
> > + cm_event.private_data = pdata;
> > + }
> > + if (qp->cm_id->event_handler)
> > + qp->cm_id->event_handler(qp->cm_id, &cm_event);
> > +
> > + break;
> > +
> > + case CCAE_TERMINATE_MESSAGE_RECEIVED:
> > + case CCAE_CQ_SQ_COMPLETION_OVERFLOW:
> > + ib_event.device = &c2dev->ibdev;
> > + ib_event.element.qp = &qp->ibqp;
> > + ib_event.event = IB_EVENT_QP_REQ_ERR;
> > +
> > + if(qp->ibqp.event_handler)
> > + (*qp->ibqp.event_handler)(&ib_event,
> > + qp->ibqp.qp_context);
> > + case CCAE_BAD_CLOSE:
> > + case CCAE_LLP_CLOSE_COMPLETE:
> > + case CCAE_LLP_CONNECTION_RESET:
> > + case CCAE_LLP_CONNECTION_LOST:
> > + default:
> > + cm_event.event = IW_CM_EVENT_CLOSE;
> > + if (qp->cm_id->event_handler)
> > + qp->cm_id->event_handler(qp->cm_id, &cm_event);
> > +
> > + }
> > + break;
> > + }
> > +
> > + case CC_RES_IND_EP: {
> > +
> > + struct iw_cm_id* cm_id = (struct iw_cm_id*)resource_user_context;
> > +
> > + dprintk("CC_RES_IND_EP event_id=%d\n", event_id);
> > + if (event_id != CCAE_CONNECTION_REQUEST) {
> > + dprintk("%s: Invalid event_id: %d\n", __FUNCTION__, event_id);
> > + break;
> > + }
> > +
> > + cm_event.event = IW_CM_EVENT_CONNECT_REQUEST;
> > + cm_event.provider_id =
> > + wr->ae.ae_connection_request.cr_handle;
> > + cm_event.local_addr.sin_addr.s_addr =
> > + wr->ae.ae_connection_request.laddr;
> > + cm_event.remote_addr.sin_addr.s_addr =
> > + wr->ae.ae_connection_request.raddr;
> > + cm_event.local_addr.sin_port =
> > + wr->ae.ae_connection_request.lport;
> > + cm_event.remote_addr.sin_port =
> > + wr->ae.ae_connection_request.rport;
> > + cm_event.private_data_len =
> > + be32_to_cpu(wr->ae.ae_connection_request.private_data_length);
> > +
> > + if (cm_event.private_data_len) {
> > + pdata = kmalloc(cm_event.private_data_len, GFP_ATOMIC);
> > + if (!pdata) {
> > + /* Ignore the request, maybe the remote peer
> > + * will retry */
> > + dprintk("Ignored connect request -- no memory for pdata"
> > + "private_data_len=%d\n", cm_event.private_data_len);
> > + goto ignore_it;
> > + }
> > + memcpy(pdata,
> > + wr->ae.ae_connection_request.private_data,
> > + cm_event.private_data_len);
> > +
> > + cm_event.private_data = pdata;
> > + }
> > + if (cm_id->event_handler)
> > + cm_id->event_handler(cm_id, &cm_event);
> > + break;
> > + }
> > +
> > + case CC_RES_IND_CQ: {
> > + struct c2_cq *cq = (struct c2_cq *)resource_user_context;
> > +
> > + dprintk("IB_EVENT_CQ_ERR\n");
> > + ib_event.device = &c2dev->ibdev;
> > + ib_event.element.cq = &cq->ibcq;
> > + ib_event.event = IB_EVENT_CQ_ERR;
> > +
> > + if (cq->ibcq.event_handler)
> > + cq->ibcq.event_handler(&ib_event, cq->ibcq.cq_context);
> > + }
> > +
> > + default:
> > + break;
> > + }
> > +
> > + ignore_it:
> > + c2_mq_free(mq);
> > +}
> > Index: hw/amso1100/c2.h
> > ===================================================================
> > --- hw/amso1100/c2.h (revision 0)
> > +++ hw/amso1100/c2.h (revision 0)
> > @@ -0,0 +1,617 @@
> > +/*
> > + * Copyright (c) 2005 Ammasso, Inc. All rights reserved.
> > + * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
> > + *
> > + * This software is available to you under a choice of one of two
> > + * licenses. You may choose to be licensed under the terms of the GNU
> > + * General Public License (GPL) Version 2, available from the file
> > + * COPYING in the main directory of this source tree, or the
> > + * OpenIB.org BSD license below:
> > + *
> > + * Redistribution and use in source and binary forms, with or
> > + * without modification, are permitted provided that the following
> > + * conditions are met:
> > + *
> > + * - Redistributions of source code must retain the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer.
> > + *
> > + * - Redistributions in binary form must reproduce the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer in the documentation and/or other materials
> > + * provided with the distribution.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> > + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> > + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
> > + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
> > + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
> > + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
> > + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> > + * SOFTWARE.
> > + */
> > +
> > +#ifndef __C2_H
> > +#define __C2_H
> > +
> > +#include <linux/netdevice.h>
> > +#include <linux/spinlock.h>
> > +#include <linux/kernel.h>
> > +#include <linux/pci.h>
> > +#include <linux/dma-mapping.h>
> > +#include <asm/semaphore.h>
> > +
> > +#include "c2_provider.h"
> > +#include "c2_mq.h"
> > +#include "cc_status.h"
> > +
> > +#define DRV_NAME "c2"
> > +#define DRV_VERSION "1.1"
> > +#define PFX DRV_NAME ": "
> > +
> > +#ifdef C2_DEBUG
> > +#define assert(expr) \
> > + if(!(expr)) { \
> > + printk(KERN_ERR PFX "Assertion failed! %s, %s, %s, line %d\n",\
> > + #expr, __FILE__, __FUNCTION__, __LINE__); \
> > + }
> > +#define dprintk(fmt, args...) do {printk(KERN_INFO PFX fmt, ##args);} while (0)
> > +#else
> > +#define assert(expr) do {} while (0)
> > +#define dprintk(fmt, args...) do {} while (0)
> > +#endif /* C2_DEBUG */
> > +
> > +#define BAR_0 0
> > +#define BAR_2 2
> > +#define BAR_4 4
> > +
> > +#define RX_BUF_SIZE (1536 + 8)
> > +#define ETH_JUMBO_MTU 9000
> > +#define C2_MAGIC "CEPHEUS"
> > +#define C2_VERSION 4
> > +#define C2_IVN (18 & 0x7fffffff)
> > +
> > +#define C2_REG0_SIZE (16 * 1024)
> > +#define C2_REG2_SIZE (2 * 1024 * 1024)
> > +#define C2_REG4_SIZE (256 * 1024 * 1024)
> > +#define C2_NUM_TX_DESC 341
> > +#define C2_NUM_RX_DESC 256
> > +#define C2_PCI_REGS_OFFSET (0x10000)
> > +#define C2_RXP_HRXDQ_OFFSET (((C2_REG4_SIZE)/2))
> > +#define C2_RXP_HRXDQ_SIZE (4096)
> > +#define C2_TXP_HTXDQ_OFFSET (((C2_REG4_SIZE)/2) + C2_RXP_HRXDQ_SIZE)
> > +#define C2_TXP_HTXDQ_SIZE (4096)
> > +#define C2_TX_TIMEOUT (6*HZ)
> > +
> > +/* CEPHEUS */
> > +static const u8 c2_magic[] = {
> > + 0x43, 0x45, 0x50, 0x48, 0x45, 0x55, 0x53
> > + };
> > +
> > +enum adapter_pci_regs {
> > + C2_REGS_MAGIC = 0x0000,
> > + C2_REGS_VERS = 0x0008,
> > + C2_REGS_IVN = 0x000C,
> > + C2_REGS_PCI_WINSIZE = 0x0010,
> > + C2_REGS_Q0_QSIZE = 0x0014,
> > + C2_REGS_Q0_MSGSIZE = 0x0018,
> > + C2_REGS_Q0_POOLSTART = 0x001C,
> > + C2_REGS_Q0_SHARED = 0x0020,
> > + C2_REGS_Q1_QSIZE = 0x0024,
> > + C2_REGS_Q1_MSGSIZE = 0x0028,
> > + C2_REGS_Q1_SHARED = 0x0030,
> > + C2_REGS_Q2_QSIZE = 0x0034,
> > + C2_REGS_Q2_MSGSIZE = 0x0038,
> > + C2_REGS_Q2_SHARED = 0x0040,
> > + C2_REGS_ENADDR = 0x004C,
> > + C2_REGS_RDMA_ENADDR = 0x0054,
> > + C2_REGS_HRX_CUR = 0x006C,
> > +};
> > +
> > +struct c2_adapter_pci_regs {
> > + char reg_magic[8];
> > + u32 version;
> > + u32 ivn;
> > + u32 pci_window_size;
> > + u32 q0_q_size;
> > + u32 q0_msg_size;
> > + u32 q0_pool_start;
> > + u32 q0_shared;
> > + u32 q1_q_size;
> > + u32 q1_msg_size;
> > + u32 q1_pool_start;
> > + u32 q1_shared;
> > + u32 q2_q_size;
> > + u32 q2_msg_size;
> > + u32 q2_pool_start;
> > + u32 q2_shared;
> > + u32 log_start;
> > + u32 log_size;
> > + u8 host_enaddr[8];
> > + u8 rdma_enaddr[8];
> > + u32 crash_entry;
> > + u32 crash_ready[2];
> > + u32 fw_txd_cur;
> > + u32 fw_hrxd_cur;
> > + u32 fw_rxd_cur;
> > +};
> > +
> > +enum pci_regs {
> > + C2_HISR = 0x0000,
> > + C2_DISR = 0x0004,
> > + C2_HIMR = 0x0008,
> > + C2_DIMR = 0x000C,
> > + C2_NISR0 = 0x0010,
> > + C2_NISR1 = 0x0014,
> > + C2_NIMR0 = 0x0018,
> > + C2_NIMR1 = 0x001C,
> > + C2_IDIS = 0x0020,
> > +};
> > +
> > +enum {
> > + C2_PCI_HRX_INT = 1<<8,
> > + C2_PCI_HTX_INT = 1<<17,
> > + C2_PCI_HRX_QUI = 1<<31,
> > +};
> > +
> > +/*
> > + * Cepheus registers in BAR0.
> > + */
> > +struct c2_pci_regs {
> > + u32 hostisr;
> > + u32 dmaisr;
> > + u32 hostimr;
> > + u32 dmaimr;
> > + u32 netisr0;
> > + u32 netisr1;
> > + u32 netimr0;
> > + u32 netimr1;
> > + u32 int_disable;
> > +};
> > +
> > +/* TXP flags */
> > +enum c2_txp_flags {
> > + TXP_HTXD_DONE = 0,
> > + TXP_HTXD_READY = 1<<0,
> > + TXP_HTXD_UNINIT = 1<<1,
> > +};
> > +
> > +/* RXP flags */
> > +enum c2_rxp_flags {
> > + RXP_HRXD_UNINIT = 0,
> > + RXP_HRXD_READY = 1<<0,
> > + RXP_HRXD_DONE = 1<<1,
> > +};
> > +
> > +/* RXP status */
> > +enum c2_rxp_status {
> > + RXP_HRXD_ZERO = 0,
> > + RXP_HRXD_OK = 1<<0,
> > + RXP_HRXD_BUF_OV = 1<<1,
> > +};
> > +
> > +/* TXP descriptor fields */
> > +enum txp_desc {
> > + C2_TXP_FLAGS = 0x0000,
> > + C2_TXP_LEN = 0x0002,
> > + C2_TXP_ADDR = 0x0004,
> > +};
> > +
> > +/* RXP descriptor fields */
> > +enum rxp_desc {
> > + C2_RXP_FLAGS = 0x0000,
> > + C2_RXP_STATUS = 0x0002,
> > + C2_RXP_COUNT = 0x0004,
> > + C2_RXP_LEN = 0x0006,
> > + C2_RXP_ADDR = 0x0008,
> > +};
> > +
> > +struct c2_txp_desc {
> > + u16 flags;
> > + u16 len;
> > + u64 addr;
> > +} __attribute__ ((packed));
> > +
> > +struct c2_rxp_desc {
> > + u16 flags;
> > + u16 status;
> > + u16 count;
> > + u16 len;
> > + u64 addr;
> > +} __attribute__ ((packed));
> > +
> > +struct c2_rxp_hdr {
> > + u16 flags;
> > + u16 status;
> > + u16 len;
> > + u16 rsvd;
> > +} __attribute__ ((packed));
> > +
> > +struct c2_tx_desc {
> > + u32 len;
> > + u32 status;
> > + dma_addr_t next_offset;
> > +};
> > +
> > +struct c2_rx_desc {
> > + u32 len;
> > + u32 status;
> > + dma_addr_t next_offset;
> > +};
> > +
> > +struct c2_alloc {
> > + u32 last;
> > + u32 max;
> > + spinlock_t lock;
> > + unsigned long *table;
> > +};
> > +
> > +struct c2_array {
> > + struct {
> > + void **page;
> > + int used;
> > + } *page_list;
> > +};
> > +
> > +/*
> > + * The MQ shared pointer pool is organized as a linked list of
> > + * chunks. Each chunk contains a linked list of free shared pointers
> > + * that can be allocated to a given user mode client.
> > + *
> > + */
> > +struct sp_chunk {
> > + struct sp_chunk* next;
> > + u32 gfp_mask;
> > + u16 head;
> > + u16 shared_ptr[0];
> > +};
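> > +
> > +/*
> > + * Sketch of the intended use, based on the allocator declarations later in
> > + * this header (error handling omitted):
> > + *
> > + * err = c2_init_mqsp_pool(GFP_KERNEL, &c2dev->kern_mqsp_pool);
> > + * ...
> > + * u16 *shared = c2_alloc_mqsp(c2dev->kern_mqsp_pool);
> > + * ...
> > + * c2_free_mqsp(shared);
> > + * c2_free_mqsp_pool(c2dev->kern_mqsp_pool);
> > + */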
> > +
> > +struct c2_pd_table {
> > + struct c2_alloc alloc;
> > + struct c2_array pd;
> > +};
> > +
> > +struct c2_qp_table {
> > + struct c2_alloc alloc;
> > + u32 rdb_base;
> > + int rdb_shift;
> > + int sqp_start;
> > + spinlock_t lock;
> > + struct c2_array qp;
> > + struct c2_icm_table *qp_table;
> > + struct c2_icm_table *eqp_table;
> > + struct c2_icm_table *rdb_table;
> > +};
> > +
> > +struct c2_element {
> > + struct c2_element *next;
> > + void *ht_desc; /* host descriptor */
> > + void *hw_desc; /* hardware descriptor */
> > + struct sk_buff *skb;
> > + dma_addr_t mapaddr;
> > + u32 maplen;
> > +};
> > +
> > +struct c2_ring {
> > + struct c2_element *to_clean;
> > + struct c2_element *to_use;
> > + struct c2_element *start;
> > + unsigned long count;
> > +};
> > +
> > +struct c2_dev {
> > + struct ib_device ibdev;
> > + void __iomem *regs;
> > + void __iomem *mmio_txp_ring; /* remapped adapter memory for hw rings */
> > + void __iomem *mmio_rxp_ring;
> > + spinlock_t lock;
> > + struct pci_dev *pcidev;
> > + struct net_device *netdev;
> > + unsigned int cur_tx;
> > + unsigned int cur_rx;
> > + u64 fw_ver;
> > + u32 adapter_handle;
> > + u32 hw_rev;
> > + u32 device_cap_flags;
> > + u32 vendor_id;
> > + u32 vendor_part_id;
> > + void __iomem *kva; /* KVA device memory */
> > + void __iomem *pa; /* PA device memory */
> > + void **qptr_array;
> > +
> > + kmem_cache_t* host_msg_cache;
> > + //kmem_cache_t* ae_msg_cache;
> > +
> > + struct list_head cca_link; /* adapter list */
> > + struct list_head eh_wakeup_list; /* event wakeup list */
> > + wait_queue_head_t req_vq_wo;
> > +
> > + /* RNIC Limits */
> > + u32 max_mr;
> > + u32 max_mr_size;
> > + u32 max_qp;
> > + u32 max_qp_wr;
> > + u32 max_sge;
> > + u32 max_cq;
> > + u32 max_cqe;
> > + u32 max_pd;
> > +
> > + struct c2_pd_table pd_table;
> > + struct c2_qp_table qp_table;
> > +#if 0
> > + struct c2_mr_table mr_table;
> > +#endif
> > + int ports; /* num of GigE ports */
> > + int devnum;
> > + spinlock_t vqlock; /* sync vbs req MQ */
> > +
> > + /* Verbs Queues */
> > + struct c2_mq req_vq; /* Verbs Request MQ */
> > + struct c2_mq rep_vq; /* Verbs Reply MQ */
> > + struct c2_mq aeq; /* Async Events MQ */
> > +
> > + /* Kernel client MQs */
> > + struct sp_chunk* kern_mqsp_pool;
> > +
> > + /* Device updates these values when posting messages to a host
> > + * target queue */
> > + u16 req_vq_shared;
> > + u16 rep_vq_shared;
> > + u16 aeq_shared;
> > + u16 irq_claimed;
> > +
> > + /*
> > + * Shared host target pages for user-accessible MQs.
> > + */
> > + int hthead; /* index of first free entry */
> > + void* htpages; /* kernel vaddr */
> > + int htlen; /* length of htpages memory */
> > + void* htuva; /* user mapped vaddr */
> > + spinlock_t htlock; /* serialize allocation */
> > +
> > + u64 adapter_hint_uva; /* access to the activity FIFO */
> > +
> > + spinlock_t aeq_lock;
> > + spinlock_t rnic_lock;
> > +
> > +
> > + u16 hint_count;
> > + u16 hints_read;
> > +
> > + int init; /* TRUE if it's ready */
> > + char ae_cache_name[16];
> > + char vq_cache_name[16];
> > +};
> > +
> > +struct c2_port {
> > + u32 msg_enable;
> > + struct c2_dev *c2dev;
> > + struct net_device *netdev;
> > +
> > + spinlock_t tx_lock;
> > + u32 tx_avail;
> > + struct c2_ring tx_ring;
> > + struct c2_ring rx_ring;
> > +
> > + void *mem; /* PCI memory for host rings */
> > + dma_addr_t dma;
> > + unsigned long mem_size;
> > +
> > + u32 rx_buf_size;
> > +
> > + struct net_device_stats netstats;
> > +};
> > +
> > +/*
> > + * Activity FIFO registers in BAR0.
> > + */
> > +#define PCI_BAR0_HOST_HINT 0x100
> > +#define PCI_BAR0_ADAPTER_HINT 0x2000
> > +
> > +/*
> > + * Ammasso PCI vendor id and Cepheus PCI device id.
> > + */
> > +#define CQ_ARMED 0x01
> > +#define CQ_WAIT_FOR_DMA 0x80
> > +
> > +/*
> > + * The format of a hint is as follows:
> > + * Lower 16 bits are the count of hints for the queue.
> > + * Next 15 bits are the qp_index
> > + * The uppermost bit depends on who reads it:
> > + * If read by producer, then it means Full (1) or Not-Full (0)
> > + * If read by consumer, then it means Empty (1) or Not-Empty (0)
> > + */
> > +#define C2_HINT_MAKE(q_index, hint_count) (((q_index) << 16) | hint_count)
> > +#define C2_HINT_GET_INDEX(hint) (((hint) & 0x7FFF0000) >> 16)
> > +#define C2_HINT_GET_COUNT(hint) ((hint) & 0x0000FFFF)
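> > +
> > +/*
> > + * For example, C2_HINT_MAKE(3, 2) packs to 0x00030002 (queue index 3 with a
> > + * hint count of 2), and C2_HINT_GET_INDEX()/C2_HINT_GET_COUNT() recover the
> > + * two fields.
> > + */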
> > +
> > +
> > +/*
> > + * The following defines the offset in SDRAM for the cc_adapter_pci_regs_t
> > + * struct.
> > + */
> > +#define C2_ADAPTER_PCI_REGS_OFFSET 0x10000
> > +
> > +#ifndef readq
> > +static inline u64 readq(const void __iomem *addr)
> > +{
> > + u64 ret = readl(addr + 4);
> > + ret <<= 32;
> > + ret |= readl(addr);
> > +
> > + return ret;
> > +}
> > +#endif
> > +
> > +#ifndef writeq
> > +static inline void writeq(u64 val, void __iomem *addr)
> > +{
> > + writel((u32) (val), addr);
> > + writel((u32) (val >> 32), (addr + 4));
> > +}
> > +#endif
> > +
> > +/* Read from memory-mapped device */
> > +static inline u64 c2_read64(const void __iomem *addr)
> > +{
> > + return readq(addr);
> > +}
> > +
> > +static inline u32 c2_read32(const void __iomem *addr)
> > +{
> > + return readl(addr);
> > +}
> > +
> > +static inline u16 c2_read16(const void __iomem *addr)
> > +{
> > + return readw(addr);
> > +}
> > +
> > +static inline u8 c2_read8(const void __iomem *addr)
> > +{
> > + return readb(addr);
> > +}
> > +
> > +/* Write to memory-mapped device */
> > +static inline void c2_write64(void __iomem *addr, u64 val)
> > +{
> > + writeq(val, addr);
> > +}
> > +
> > +static inline void c2_write32(void __iomem *addr, u32 val)
> > +{
> > + writel(val, addr);
> > +}
> > +
> > +static inline void c2_write16(void __iomem *addr, u16 val)
> > +{
> > + writew(val, addr);
> > +}
> > +
> > +static inline void c2_write8(void __iomem *addr, u8 val)
> > +{
> > + writeb(val, addr);
> > +}
> > +
> > +#define C2_SET_CUR_RX(c2dev, cur_rx) \
> > + c2_write32(c2dev->mmio_txp_ring + 4092, cpu_to_be32(cur_rx))
> > +
> > +#define C2_GET_CUR_RX(c2dev) \
> > + be32_to_cpu(c2_read32(c2dev->mmio_txp_ring + 4092))
> > +
> > +static inline struct c2_dev *to_c2dev(struct ib_device* ibdev)
> > +{
> > + return container_of(ibdev, struct c2_dev, ibdev);
> > +}
> > +
> > +static inline int c2_errno(void *reply)
> > +{
> > + switch(c2_wr_get_result(reply)) {
> > + case CC_OK:
> > + return 0;
> > + case CCERR_NO_BUFS:
> > + case CCERR_INSUFFICIENT_RESOURCES:
> > + case CCERR_ZERO_RDMA_READ_RESOURCES:
> > + return -ENOMEM;
> > + case CCERR_MR_IN_USE:
> > + case CCERR_QP_IN_USE:
> > + return -EBUSY;
> > + case CCERR_ADDR_IN_USE:
> > + return -EADDRINUSE;
> > + case CCERR_ADDR_NOT_AVAIL:
> > + return -EADDRNOTAVAIL;
> > + case CCERR_CONN_RESET:
> > + return -ECONNRESET;
> > + case CCERR_NOT_IMPLEMENTED:
> > + case CCERR_INVALID_WQE:
> > + return -ENOSYS;
> > + case CCERR_QP_NOT_PRIVILEGED:
> > + return -EPERM;
> > + case CCERR_STACK_ERROR:
> > + return -EPROTO;
> > + case CCERR_ACCESS_VIOLATION:
> > + case CCERR_BASE_AND_BOUNDS_VIOLATION:
> > + return -EFAULT;
> > + case CCERR_STAG_STATE_NOT_INVALID:
> > + case CCERR_INVALID_ADDRESS:
> > + case CCERR_INVALID_CQ:
> > + case CCERR_INVALID_EP:
> > + case CCERR_INVALID_MODIFIER:
> > + case CCERR_INVALID_MTU:
> > + case CCERR_INVALID_PD_ID:
> > + case CCERR_INVALID_QP:
> > + case CCERR_INVALID_RNIC:
> > + case CCERR_INVALID_STAG:
> > + return -EINVAL;
> > + default:
> > + return -EAGAIN;
> > + }
> > +}
> > +
> > +/* Device */
> > +extern int c2_register_device(struct c2_dev *c2dev);
> > +extern void c2_unregister_device(struct c2_dev *c2dev);
> > +extern int c2_rnic_init(struct c2_dev* c2dev);
> > +extern void c2_rnic_term(struct c2_dev* c2dev);
> > +
> > +/* QPs */
> > +extern int c2_alloc_qp(struct c2_dev *c2dev, struct c2_pd *pd,
> > + struct ib_qp_init_attr *qp_attrs, struct c2_qp *qp);
> > +extern void c2_free_qp(struct c2_dev *c2dev, struct c2_qp *qp);
> > +extern int c2_qp_modify(struct c2_dev *c2dev, struct c2_qp *qp,
> > + struct ib_qp_attr *attr, int attr_mask);
> > +extern int c2_post_send(struct ib_qp *ibqp, struct ib_send_wr *ib_wr,
> > + struct ib_send_wr **bad_wr);
> > +extern int c2_post_receive(struct ib_qp *ibqp, struct ib_recv_wr *ib_wr,
> > + struct ib_recv_wr **bad_wr);
> > +extern int __devinit c2_init_qp_table(struct c2_dev *c2dev);
> > +extern void __devexit c2_cleanup_qp_table(struct c2_dev *c2dev);
> > +
> > +/* PDs */
> > +extern int c2_pd_alloc(struct c2_dev *c2dev, int privileged, struct c2_pd *pd);
> > +extern void c2_pd_free(struct c2_dev *c2dev, struct c2_pd *pd);
> > +extern int __devinit c2_init_pd_table(struct c2_dev *c2dev);
> > +extern void __devexit c2_cleanup_pd_table(struct c2_dev *c2dev);
> > +
> > +/* CQs */
> > +extern int c2_init_cq(struct c2_dev *c2dev, int entries, struct c2_ucontext *ctx,
> > + struct c2_cq *cq);
> > +extern void c2_free_cq(struct c2_dev *c2dev, struct c2_cq *cq);
> > +extern void c2_cq_event(struct c2_dev *c2dev, u32 mq_index);
> > +extern void c2_cq_clean(struct c2_dev *c2dev, struct c2_qp *qp, u32 mq_index);
> > +extern int c2_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *entry);
> > +extern int c2_arm_cq(struct ib_cq *ibcq, enum ib_cq_notify notify);
> > +
> > +/* CM */
> > +extern int c2_llp_connect(struct iw_cm_id* cm_id, const void* pdata, u8 pdata_len);
> > +extern int c2_llp_accept(struct iw_cm_id* cm_id, const void* pdata, u8 pdata_len);
> > +extern int c2_llp_reject(struct iw_cm_id* cm_id, const void* pdata, u8 pdata_len);
> > +extern int c2_llp_service_create(struct iw_cm_id* cm_id, int backlog);
> > +extern int c2_llp_service_destroy(struct iw_cm_id* cm_id);
> > +
> > +/* MM */
> > +extern int c2_nsmr_register_phys_kern(struct c2_dev *c2dev, u64 **addr_list,
> > + int pbl_depth, u32 length, u64 *va,
> > + cc_acf_t acf, struct c2_mr *mr);
> > +extern int c2_stag_dealloc(struct c2_dev *c2dev, u32 stag_index);
> > +
> > +/* AE */
> > +extern void c2_ae_event(struct c2_dev *c2dev, u32 mq_index);
> > +
> > +/* Allocators */
> > +extern u32 c2_alloc(struct c2_alloc *alloc);
> > +extern void c2_free(struct c2_alloc *alloc, u32 obj);
> > +extern int c2_alloc_init(struct c2_alloc *alloc, u32 num, u32 reserved);
> > +extern void c2_alloc_cleanup(struct c2_alloc *alloc);
> > +extern int c2_init_mqsp_pool(unsigned int gfp_mask, struct sp_chunk** root);
> > +extern void c2_free_mqsp_pool(struct sp_chunk* root);
> > +extern u16* c2_alloc_mqsp(struct sp_chunk* head);
> > +extern void c2_free_mqsp(u16* mqsp);
> > +extern int c2_array_init(struct c2_array *array, int nent);
> > +extern void c2_array_clear(struct c2_array *array, int index);
> > +extern int c2_array_set(struct c2_array *array, int index, void *value);
> > +extern void *c2_array_get(struct c2_array *array, int index);
> > +
> > +#endif
> > +
> > Index: hw/amso1100/c2_vq.c
> > ===================================================================
> > --- hw/amso1100/c2_vq.c (revision 0)
> > +++ hw/amso1100/c2_vq.c (revision 0)
> > @@ -0,0 +1,272 @@
> > +/*
> > + * Copyright (c) 2005 Ammasso, Inc. All rights reserved.
> > + * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
> > + *
> > + * This software is available to you under a choice of one of two
> > + * licenses. You may choose to be licensed under the terms of the GNU
> > + * General Public License (GPL) Version 2, available from the file
> > + * COPYING in the main directory of this source tree, or the
> > + * OpenIB.org BSD license below:
> > + *
> > + * Redistribution and use in source and binary forms, with or
> > + * without modification, are permitted provided that the following
> > + * conditions are met:
> > + *
> > + * - Redistributions of source code must retain the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer.
> > + *
> > + * - Redistributions in binary form must reproduce the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer in the documentation and/or other materials
> > + * provided with the distribution.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> > + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> > + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
> > + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
> > + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
> > + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
> > + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> > + * SOFTWARE.
> > + */
> > +#include <linux/slab.h>
> > +#include <linux/spinlock.h>
> > +
> > +#include "c2_vq.h"
> > +
> > +/*
> > + * Verbs Request Objects:
> > + *
> > + * VQ Request Objects are allocated by the kernel verbs handlers.
> > + * They contain a wait object, a refcnt, an atomic bool indicating that the
> > + * adapter has replied, and a copy of the verb reply work request.
> > + * A pointer to the VQ Request Object is passed down in the context
> > + * field of the work request message, and reflected back by the adapter
> > + * in the verbs reply message. The function handle_vq() in the interrupt
> > + * path will use this pointer to:
> > + * 1) append a copy of the verbs reply message
> > + * 2) mark that the reply is ready
> > + * 3) wake up the kernel verbs handler blocked awaiting the reply.
> > + *
> > + *
> > + * The kernel verbs handlers do a "get" to put a 2nd reference on the
> > + * VQ Request object. If the kernel verbs handler exits before the adapter
> > + * can respond, this extra reference will keep the VQ Request object around
> > + * until the adapter's reply can be processed. The reason we need this is
> > + * because a pointer to this object is stuffed into the context field of
> > + * the verbs work request message, and reflected back in the reply message.
> > + * It is used in the interrupt handler (handle_vq()) to wake up the
> > + * appropriate kernel verb handler that is blocked awaiting the verb reply.
> > + * So handle_vq() will do a "put" on the object when it's done accessing it.
> > + * NOTE: If we guarantee that the kernel verb handler will never bail before
> > + * getting the reply, then we don't need these refcnts.
> > + *
> > + *
> > + * VQ Request objects are freed by the kernel verbs handlers only
> > + * after the verb has been processed, or when the adapter fails and
> > + * does not reply.
> > + *
> > + *
> > + * Verbs Reply Buffers:
> > + *
> > + * VQ Reply bufs are local host memory copies of an outstanding Verb Request
> > + * reply message. They are always allocated by the kernel verbs handlers, and
> > + * _may_ be freed by either the kernel verbs handler -or- the interrupt
> > + * handler. The kernel verbs handler _must_ free the repbuf, then free the
> > + * vq request object, in that order.
> > + */
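> > +
> > +/*
> > + * Sketch of the calling convention described above, as seen from a kernel
> > + * verbs handler (error handling omitted; the exact spelling of the WR
> > + * context field varies per verb):
> > + *
> > + * vq_req = vq_req_alloc(c2dev);
> > + * (stuff vq_req into the WR's context field)
> > + * vq_req_get(c2dev, vq_req);            extra ref in case we bail early
> > + * err = vq_send_wr(c2dev, wr);
> > + * if (!err)
> > + *         err = vq_wait_for_reply(c2dev, vq_req);
> > + * reply = (void *) (unsigned long) vq_req->reply_msg;
> > + * vq_repbuf_free(c2dev, reply);         free the reply buffer first,
> > + * vq_req_free(c2dev, vq_req);           then the request object
> > + */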
> > +
> > +int
> > +vq_init(struct c2_dev* c2dev)
> > +{
> > + sprintf(c2dev->vq_cache_name, "c2-vq:dev%c", (char) ('0' + c2dev->devnum));
> > + c2dev->host_msg_cache = kmem_cache_create(c2dev->vq_cache_name,
> > + c2dev->rep_vq.msg_size, 0,
> > + SLAB_HWCACHE_ALIGN, NULL, NULL);
> > + if (c2dev->host_msg_cache == NULL) {
> > + return -ENOMEM;
> > + }
> > + return 0;
> > +}
> > +
> > +void
> > +vq_term(struct c2_dev* c2dev)
> > +{
> > + kmem_cache_destroy(c2dev->host_msg_cache);
> > +}
> > +
> > +/* vq_req_alloc - allocate a VQ Request Object and initialize it.
> > + * The refcnt is set to 1.
> > + */
> > +struct c2_vq_req *
> > +vq_req_alloc(struct c2_dev *c2dev)
> > +{
> > + struct c2_vq_req *r;
> > +
> > + r = (struct c2_vq_req *)kmalloc(sizeof(struct c2_vq_req), GFP_KERNEL);
> > + if (r) {
> > + init_waitqueue_head(&r->wait_object);
> > + r->reply_msg = (u64)NULL;
> > + atomic_set(&r->refcnt, 1);
> > + atomic_set(&r->reply_ready, 0);
> > + }
> > + return r;
> > +}
> > +
> > +
> > +/* vq_req_free - free the VQ Request Object. It is assumed the verbs handler
> > + * has already freed the VQ Reply Buffer if it existed.
> > + */
> > +void
> > +vq_req_free(struct c2_dev *c2dev, struct c2_vq_req *r)
> > +{
> > + r->reply_msg = (u64)NULL;
> > + if (atomic_dec_and_test(&r->refcnt)) {
> > + kfree(r);
> > + }
> > +}
> > +
> > +/* vq_req_get - reference a VQ Request Object. Done
> > + * only in the kernel verbs handlers.
> > + */
> > +void
> > +vq_req_get(struct c2_dev *c2dev, struct c2_vq_req *r)
> > +{
> > + atomic_inc(&r->refcnt);
> > +}
> > +
> > +
> > +/* vq_req_put - dereference and potentially free a VQ Request Object.
> > + *
> > + * This is only called by handle_vq() on the interrupt path when it is done
> > + * processing a verb reply message. If the associated kernel verbs handler
> > + * has already bailed, then this put will actually free the VQ Request object
> > + * _and_ the VQ Reply Buffer if it exists.
> > + */
> > +void
> > +vq_req_put(struct c2_dev *c2dev, struct c2_vq_req *r)
> > +{
> > + if (atomic_dec_and_test(&r->refcnt)) {
> > + if (r->reply_msg != (u64)NULL)
> > + vq_repbuf_free(c2dev, (void *)(unsigned long)r->reply_msg);
> > + kfree(r);
> > + }
> > +}
> > +
> > +
> > +/*
> > + * vq_repbuf_alloc - allocate a VQ Reply Buffer.
> > + */
> > +void *
> > +vq_repbuf_alloc(struct c2_dev *c2dev)
> > +{
> > + return kmem_cache_alloc(c2dev->host_msg_cache, SLAB_ATOMIC);
> > +}
> > +
> > +/*
> > + * vq_send_wr - post a verbs request message to the Verbs Request Queue.
> > + * If a message is not available in the MQ, then block until one is available.
> > + * NOTE: handle_mq() on the interrupt context will wake up threads blocked here.
> > + * When the adapter drains the Verbs Request Queue, it inserts MQ index 0 into
> > + * the adapter->host activity fifo and interrupts the host.
> > + */
> > +int
> > +vq_send_wr(struct c2_dev *c2dev, ccwr_t *wr)
> > +{
> > + void *msg;
> > + wait_queue_t __wait;
> > +
> > + /*
> > + * grab adapter vq lock
> > + */
> > + spin_lock(&c2dev->vqlock);
> > +
> > + /*
> > + * allocate msg
> > + */
> > + msg = c2_mq_alloc(&c2dev->req_vq);
> > +
> > + /*
> > + * If we cannot get a msg, then we'll wait.
> > + * When a message becomes available, the int handler will wake_up()
> > + * any waiters.
> > + */
> > + while (msg == NULL) {
> > + init_waitqueue_entry(&__wait, current);
> > + add_wait_queue(&c2dev->req_vq_wo, &__wait);
> > + spin_unlock(&c2dev->vqlock);
> > + for (;;) {
> > + set_current_state(TASK_INTERRUPTIBLE);
> > + if (!c2_mq_full(&c2dev->req_vq)) {
> > + break;
> > + }
> > + if (!signal_pending(current)) {
> > + schedule_timeout(1*HZ); /* 1 second... */
> > + continue;
> > + }
> > + set_current_state(TASK_RUNNING);
> > + remove_wait_queue(&c2dev->req_vq_wo, &__wait);
> > + return -EINTR;
> > + }
> > + set_current_state(TASK_RUNNING);
> > + remove_wait_queue(&c2dev->req_vq_wo, &__wait);
> > + spin_lock(&c2dev->vqlock);
> > + msg = c2_mq_alloc(&c2dev->req_vq);
> > + }
> > +
> > + /*
> > + * copy wr into adapter msg
> > + */
> > + memcpy(msg, wr, c2dev->req_vq.msg_size);
> > +
> > + /*
> > + * post msg
> > + */
> > + c2_mq_produce(&c2dev->req_vq);
> > +
> > + /*
> > + * release adapter vq lock
> > + */
> > + spin_unlock(&c2dev->vqlock);
> > + return 0;
> > +}
> > +
> > +
> > +/*
> > + * vq_wait_for_reply - block until the adapter posts a Verb Reply Message.
> > + */
> > +int
> > +vq_wait_for_reply(struct c2_dev *c2dev, struct c2_vq_req *req)
> > +{
> > + wait_queue_t __wait;
> > + int rc = 0;
> > +
> > + /*
> > + * Add this request to the wait queue.
> > + */
> > + init_waitqueue_entry(&__wait, current);
> > + add_wait_queue(&req->wait_object, &__wait);
> > + for (;;) {
> > + set_current_state(TASK_UNINTERRUPTIBLE);
> > + if (atomic_read(&req->reply_ready)) {
> > + break;
> > + }
> > + if (schedule_timeout(60*HZ) == 0) {
> > + rc = -ETIMEDOUT;
> > + break;
> > + }
> > + }
> > + set_current_state(TASK_RUNNING);
> > + remove_wait_queue(&req->wait_object, &__wait);
> > + return rc;
> > +}
> > +
> > +/*
> > + * vq_repbuf_free - Free a Verbs Reply Buffer.
> > + */
> > +void
> > +vq_repbuf_free(struct c2_dev *c2dev, void *reply)
> > +{
> > + kmem_cache_free(c2dev->host_msg_cache, reply);
> > +}
> > Index: hw/amso1100/README
> > ===================================================================
> > --- hw/amso1100/README (revision 0)
> > +++ hw/amso1100/README (revision 0)
> > @@ -0,0 +1,11 @@
> > +
> > +This is the OpenIB iWARP driver from Open Grid Computing for the
> > +AMSO1100, a 1Gb RDMA-capable PCI-X RNIC.
> > +
> > +The driver implements an iWARP CM Provider and OpenIB verbs
> > +provider. The company that created the device (Ammasso, Inc.)
> > +is no longer in business; however, limited quantities of the cards
> > +are available for development purposes from Open Grid Computing.
> > +
> > +Please contact 512-343-9196 x 108 or e-mail tom at opengridcomputing.com
> > +for more information.
> > Index: hw/amso1100/c2_provider.c
> > ===================================================================
> > --- hw/amso1100/c2_provider.c (revision 0)
> > +++ hw/amso1100/c2_provider.c (revision 0)
> > @@ -0,0 +1,704 @@
> > +/*
> > + * Copyright (c) 2005 Ammasso, Inc. All rights reserved.
> > + * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
> > + *
> > + * This software is available to you under a choice of one of two
> > + * licenses. You may choose to be licensed under the terms of the GNU
> > + * General Public License (GPL) Version 2, available from the file
> > + * COPYING in the main directory of this source tree, or the
> > + * OpenIB.org BSD license below:
> > + *
> > + * Redistribution and use in source and binary forms, with or
> > + * without modification, are permitted provided that the following
> > + * conditions are met:
> > + *
> > + * - Redistributions of source code must retain the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer.
> > + *
> > + * - Redistributions in binary form must reproduce the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer in the documentation and/or other materials
> > + * provided with the distribution.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> > + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> > + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
> > + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
> > + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
> > + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
> > + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> > + * SOFTWARE.
> > + *
> > + */
> > +
> > +#include <linux/module.h>
> > +#include <linux/moduleparam.h>
> > +#include <linux/pci.h>
> > +#include <linux/netdevice.h>
> > +#include <linux/etherdevice.h>
> > +#include <linux/delay.h>
> > +#include <linux/ethtool.h>
> > +#include <linux/mii.h>
> > +#include <linux/if_vlan.h>
> > +#include <linux/crc32.h>
> > +#include <linux/in.h>
> > +#include <linux/ip.h>
> > +#include <linux/tcp.h>
> > +#include <linux/init.h>
> > +#include <linux/dma-mapping.h>
> > +
> > +#include <asm/io.h>
> > +#include <asm/irq.h>
> > +#include <asm/byteorder.h>
> > +
> > +#include <rdma/ib_smi.h>
> > +#include "c2.h"
> > +#include "c2_provider.h"
> > +#include "c2_user.h"
> > +
> > +static int c2_query_device(struct ib_device *ibdev,
> > + struct ib_device_attr *props)
> > +{
> > + struct c2_dev* c2dev = to_c2dev(ibdev);
> > +
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > +
> > + memset(props, 0, sizeof *props);
> > +
> > + memcpy(&props->sys_image_guid, c2dev->netdev->dev_addr, 6);
> > + memcpy(&props->node_guid, c2dev->netdev->dev_addr, 6);
> > +
> > + props->fw_ver = c2dev->fw_ver;
> > + props->device_cap_flags = c2dev->device_cap_flags;
> > + props->vendor_id = c2dev->vendor_id;
> > + props->vendor_part_id = c2dev->vendor_part_id;
> > + props->hw_ver = c2dev->hw_rev;
> > + props->max_mr_size = ~0ull;
> > + props->max_qp = c2dev->max_qp;
> > + props->max_qp_wr = c2dev->max_qp_wr;
> > + props->max_sge = c2dev->max_sge;
> > + props->max_cq = c2dev->max_cq;
> > + props->max_cqe = c2dev->max_cqe;
> > + props->max_mr = c2dev->max_mr;
> > + props->max_pd = c2dev->max_pd;
> > + props->max_qp_rd_atom = 0;
> > + props->max_qp_init_rd_atom = 0;
> > + props->local_ca_ack_delay = 0;
> > +
> > + return 0;
> > +}
> > +
> > +static int c2_query_port(struct ib_device *ibdev,
> > + u8 port, struct ib_port_attr *props)
> > +{
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > +
> > + props->max_mtu = IB_MTU_4096;
> > + props->lid = 0;
> > + props->lmc = 0;
> > + props->sm_lid = 0;
> > + props->sm_sl = 0;
> > + props->state = IB_PORT_ACTIVE;
> > + props->phys_state = 0;
> > + props->port_cap_flags =
> > + IB_PORT_CM_SUP |
> > + IB_PORT_SNMP_TUNNEL_SUP |
> > + IB_PORT_REINIT_SUP |
> > + IB_PORT_DEVICE_MGMT_SUP |
> > + IB_PORT_VENDOR_CLASS_SUP|
> > + IB_PORT_BOOT_MGMT_SUP;
> > + props->gid_tbl_len = 128;
> > + props->pkey_tbl_len = 1;
> > + props->qkey_viol_cntr = 0;
> > + props->active_width = 1;
> > + props->active_speed = 1;
> > +
> > + return 0;
> > +}
> > +
> > +static int c2_modify_port(struct ib_device *ibdev,
> > + u8 port, int port_modify_mask,
> > + struct ib_port_modify *props)
> > +{
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > + return 0;
> > +}
> > +
> > +static int c2_query_pkey(struct ib_device *ibdev,
> > + u8 port, u16 index, u16 *pkey)
> > +{
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > + *pkey = 0;
> > + return 0;
> > +}
> > +
> > +static int c2_query_gid(struct ib_device *ibdev, u8 port,
> > + int index, union ib_gid *gid)
> > +{
> > + struct c2_dev* c2dev = to_c2dev(ibdev);
> > +
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > + memset(&(gid->raw[0]), 0, sizeof(gid->raw));
> > + memcpy(&(gid->raw[0]), c2dev->netdev->dev_addr, 6);
> > +
> > + return 0;
> > +}
> > +
> > +/* Allocate the user context data structure. This keeps track
> > + * of all objects associated with a particular user-mode client.
> > + */
> > +static struct ib_ucontext *c2_alloc_ucontext(struct ib_device *ibdev,
> > + struct ib_udata *udata)
> > +{
> > + struct c2_alloc_ucontext_resp uresp;
> > + struct c2_ucontext *context;
> > +
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > + memset(&uresp, 0, sizeof uresp);
> > +
> > + uresp.qp_tab_size = to_c2dev(ibdev)->max_qp;
> > +
> > + context = kmalloc(sizeof *context, GFP_KERNEL);
> > + if (!context)
> > + return ERR_PTR(-ENOMEM);
> > +
> > + /* The OpenIB user context is logically similar to the RNIC
> > + * Instance of our existing driver
> > + */
> > + /* context->rnic_p = rnic_open */
> > +
> > + if (ib_copy_to_udata(udata, &uresp, sizeof uresp)) {
> > + kfree(context);
> > + return ERR_PTR(-EFAULT);
> > + }
> > +
> > + return &context->ibucontext;
> > +}
> > +
> > +static int c2_dealloc_ucontext(struct ib_ucontext *context)
> > +{
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > + return -ENOSYS;
> > +}
> > +
> > +static int c2_mmap_uar(struct ib_ucontext *context,
> > + struct vm_area_struct *vma)
> > +{
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > + return -ENOSYS;
> > +}
> > +
> > +static struct ib_pd *c2_alloc_pd(struct ib_device *ibdev,
> > + struct ib_ucontext *context,
> > + struct ib_udata *udata)
> > +{
> > + struct c2_pd* pd;
> > + int err;
> > +
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > +
> > + pd = kmalloc(sizeof *pd, GFP_KERNEL);
> > + if (!pd)
> > + return ERR_PTR(-ENOMEM);
> > +
> > + err = c2_pd_alloc(to_c2dev(ibdev), !context, pd);
> > + if (err) {
> > + kfree(pd);
> > + return ERR_PTR(err);
> > + }
> > +
> > + if (context) {
> > + if (ib_copy_to_udata(udata, &pd->pd_id, sizeof (__u32))) {
> > + c2_pd_free(to_c2dev(ibdev), pd);
> > + kfree(pd);
> > + return ERR_PTR(-EFAULT);
> > + }
> > + }
> > +
> > + return &pd->ibpd;
> > +}
> > +
> > +static int c2_dealloc_pd(struct ib_pd *pd)
> > +{
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > + c2_pd_free(to_c2dev(pd->device), to_c2pd(pd));
> > + kfree(pd);
> > +
> > + return 0;
> > +}
> > +
> > +static struct ib_ah *c2_ah_create(struct ib_pd *pd,
> > + struct ib_ah_attr *ah_attr)
> > +{
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > + return ERR_PTR(-ENOSYS);
> > +}
> > +
> > +static int c2_ah_destroy(struct ib_ah *ah)
> > +{
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > + return -ENOSYS;
> > +}
> > +
> > +static struct ib_qp *c2_create_qp(struct ib_pd *pd,
> > + struct ib_qp_init_attr *init_attr,
> > + struct ib_udata *udata)
> > +{
> > + struct c2_qp *qp;
> > + int err;
> > +
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > +
> > + switch(init_attr->qp_type) {
> > + case IB_QPT_RC:
> > + qp = kmalloc(sizeof(*qp), GFP_KERNEL);
> > + if (!qp) {
> > + dprintk("%s: Unable to allocate QP\n", __FUNCTION__);
> > + return ERR_PTR(-ENOMEM);
> > + }
> > +
> > + if (pd->uobject) {
> > + /* XXX userspace specific */
> > + }
> > +
> > + err = c2_alloc_qp(to_c2dev(pd->device),
> > + to_c2pd(pd),
> > + init_attr,
> > + qp);
> > + if (err && pd->uobject) {
> > + /* XXX userspace specific */
> > + }
> > +
> > + break;
> > + default:
> > + dprintk("%s: Invalid QP type: %d\n", __FUNCTION__,
> init_attr->qp_type);
> > + return ERR_PTR(-EINVAL);
> > + break;
> > + }
> > +
> > + if (err) {
> > + kfree(pd);
> > + return ERR_PTR(err);
> > + }
> > +
> > + return &qp->ibqp;
> > +}
> > +
> > +static int c2_destroy_qp(struct ib_qp *ib_qp)
> > +{
> > + struct c2_qp *qp = to_c2qp(ib_qp);
> > +
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > +
> > + c2_free_qp(to_c2dev(ib_qp->device), qp);
> > + kfree(qp);
> > +
> > + return 0;
> > +}
> > +
> > +static struct ib_cq *c2_create_cq(struct ib_device *ibdev, int entries,
> > + struct ib_ucontext *context,
> > + struct ib_udata *udata)
> > +{
> > + struct c2_cq *cq;
> > + int err;
> > +
> > + cq = kmalloc(sizeof(*cq), GFP_KERNEL);
> > + if (!cq) {
> > + dprintk("%s: Unable to allocate CQ\n", __FUNCTION__);
> > + return ERR_PTR(-ENOMEM);
> > + }
> > +
> > + err = c2_init_cq(to_c2dev(ibdev), entries, NULL, cq);
> > + if (err) {
> > + dprintk("%s: error initializing CQ\n", __FUNCTION__);
> > + kfree(cq);
> > + return ERR_PTR(err);
> > + }
> > +
> > + return &cq->ibcq;
> > +}
> > +
> > +static int c2_destroy_cq(struct ib_cq *ib_cq)
> > +{
> > + struct c2_cq *cq = to_c2cq(ib_cq);
> > +
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > +
> > + c2_free_cq(to_c2dev(ib_cq->device), cq);
> > + kfree(cq);
> > +
> > + return 0;
> > +}
> > +
> > +static inline u32 c2_convert_access(int acc)
> > +{
> > + return (acc & IB_ACCESS_REMOTE_WRITE ? CC_ACF_REMOTE_WRITE : 0) |
> > + (acc & IB_ACCESS_REMOTE_READ ? CC_ACF_REMOTE_READ : 0) |
> > + (acc & IB_ACCESS_LOCAL_WRITE ? CC_ACF_LOCAL_WRITE : 0) |
> > + CC_ACF_LOCAL_READ | CC_ACF_WINDOW_BIND;
> > +}
> > +
> > +static struct ib_mr *c2_reg_phys_mr(struct ib_pd *ib_pd,
> > + struct ib_phys_buf *buffer_list,
> > + int num_phys_buf,
> > + int acc,
> > + u64 *iova_start)
> > +{
> > + struct c2_mr *mr;
> > + u64 **page_list;
> > + u32 total_len;
> > + int err, i, j, k, pbl_depth;
> > +
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > +
> > + pbl_depth = 0;
> > + total_len = 0;
> > +
> > + for (i = 0; i < num_phys_buf; i++) {
> > +
> > + int size;
> > +
> > + if (buffer_list[i].addr & ~PAGE_MASK) {
> > + dprintk("Unaligned Memory Buffer: 0x%x\n",
> > + (unsigned int)buffer_list[i].addr);
> > + return ERR_PTR(-EINVAL);
> > + }
> > +
> > + if (!buffer_list[i].size) {
> > + dprintk("Invalid Buffer Size\n");
> > + return ERR_PTR(-EINVAL);
> > + }
> > +
> > + size = buffer_list[i].size;
> > + total_len += size;
> > + while (size) {
> > + pbl_depth++;
> > + size -= PAGE_SIZE;
> > + }
> > + }
> > +
> > + page_list = kmalloc(sizeof(u64 *) * pbl_depth, GFP_KERNEL);
> > + if (!page_list)
> > + return ERR_PTR(-ENOMEM);
> > +
> > + for (i = 0, j = 0; i < num_phys_buf; i++) {
> > +
> > + int naddrs;
> > +
> > + naddrs = (u32)buffer_list[i].size % ~PAGE_MASK;
> > + for (k = 0; k < naddrs; k++)
> > + page_list[j++] =
> > + (u64 *)(unsigned long)(buffer_list[i].addr + (k <<
> PAGE_SHIFT));
> > + }
> > +
> > + mr = kmalloc(sizeof(*mr), GFP_KERNEL);
> > + if (!mr)
> > + return ERR_PTR(-ENOMEM);
> > +
> > + mr->pd = to_c2pd(ib_pd);
> > +
> > + err = c2_nsmr_register_phys_kern(to_c2dev(ib_pd->device), page_list,
> > + pbl_depth, total_len, iova_start,
> > + c2_convert_access(acc), mr);
> > + kfree(page_list);
> > + if (err) {
> > + kfree(mr);
> > + return ERR_PTR(err);
> > + }
> > +
> > + return &mr->ibmr;
> > +}
> > +
> > +static struct ib_mr *c2_get_dma_mr(struct ib_pd *pd, int acc)
> > +{
> > + struct ib_phys_buf bl;
> > + u64 kva;
> > +
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > +
> > + bl.size = 4096;
> > + kva = (u64)(unsigned long)kmalloc(bl.size, GFP_KERNEL);
> > + if (!kva)
> > + return ERR_PTR(-ENOMEM);
> > +
> > + bl.addr = __pa(kva);
> > + return c2_reg_phys_mr(pd, &bl, 1, acc, &kva);
> > +}
> > +
> > +static struct ib_mr *c2_reg_user_mr(struct ib_pd *pd, struct ib_umem
> *region,
> > + int acc, struct ib_udata *udata)
> > +{
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > + return ERR_PTR(-ENOSYS);
> > +}
> > +
> > +static int c2_dereg_mr(struct ib_mr *ib_mr)
> > +{
> > + struct c2_mr *mr = to_c2mr(ib_mr);
> > + int err;
> > +
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > +
> > + err = c2_stag_dealloc(to_c2dev(ib_mr->device), ib_mr->lkey);
> > + if (err)
> > + dprintk("c2_stag_dealloc failed: %d\n", err);
> > + else
> > + kfree(mr);
> > +
> > + return err;
> > +}
> > +
> > +static ssize_t show_rev(struct class_device *cdev, char *buf)
> > +{
> > + struct c2_dev *dev = container_of(cdev, struct c2_dev,
> ibdev.class_dev);
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > + return sprintf(buf, "%x\n", dev->hw_rev);
> > +}
> > +
> > +static ssize_t show_fw_ver(struct class_device *cdev, char *buf)
> > +{
> > + struct c2_dev *dev = container_of(cdev, struct c2_dev,
> ibdev.class_dev);
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > + return sprintf(buf, "%x.%x.%x\n",
> > + (int)(dev->fw_ver >> 32),
> > + (int)(dev->fw_ver >> 16) & 0xffff,
> > + (int)(dev->fw_ver & 0xffff));
> > +}
> > +
> > +static ssize_t show_hca(struct class_device *cdev, char *buf)
> > +{
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > + return sprintf(buf, "AMSO1100\n");
> > +}
> > +
> > +static ssize_t show_board(struct class_device *cdev, char *buf)
> > +{
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > + return sprintf(buf, "%.*s\n", 32, "AMSO1100 Board ID");
> > +}
> > +
> > +static CLASS_DEVICE_ATTR(hw_rev, S_IRUGO, show_rev, NULL);
> > +static CLASS_DEVICE_ATTR(fw_ver, S_IRUGO, show_fw_ver, NULL);
> > +static CLASS_DEVICE_ATTR(hca_type, S_IRUGO, show_hca, NULL);
> > +static CLASS_DEVICE_ATTR(board_id, S_IRUGO, show_board, NULL);
> > +
> > +static struct class_device_attribute *c2_class_attributes[] = {
> > + &class_device_attr_hw_rev,
> > + &class_device_attr_fw_ver,
> > + &class_device_attr_hca_type,
> > + &class_device_attr_board_id
> > +};
> > +
> > +static int c2_modify_qp(struct ib_qp *ibqp, struct ib_qp_attr *attr,
> int attr_mask)
> > +{
> > + int err;
> > +
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > +
> > + err = c2_qp_modify(to_c2dev(ibqp->device), to_c2qp(ibqp), attr,
> attr_mask);
> > +
> > + return err;
> > +}
> > +
> > +static int c2_multicast_attach(struct ib_qp *ibqp, union ib_gid *gid,
> u16 lid)
> > +{
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > + return -ENOSYS;
> > +}
> > +
> > +static int c2_multicast_detach(struct ib_qp *ibqp, union ib_gid *gid,
> u16 lid)
> > +{
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > + return -ENOSYS;
> > +}
> > +
> > +static int c2_process_mad(struct ib_device *ibdev,
> > + int mad_flags,
> > + u8 port_num,
> > + struct ib_wc *in_wc,
> > + struct ib_grh *in_grh,
> > + struct ib_mad *in_mad,
> > + struct ib_mad *out_mad)
> > +{
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > + return -ENOSYS;
> > +}
> > +
> > +static int c2_connect(struct iw_cm_id* cm_id,
> > + const void* pdata, u8 pdata_len)
> > +{
> > + int err;
> > + struct c2_qp* qp = container_of(cm_id->qp, struct c2_qp, ibqp);
> > +
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > +
> > + if (cm_id->qp == NULL)
> > + return -EINVAL;
> > +
> > + /* Cache the cm_id in the qp */
> > + qp->cm_id = cm_id;
> > +
> > + err = c2_llp_connect(cm_id, pdata, pdata_len);
> > +
> > + return err;
> > +}
> > +
> > +static int c2_disconnect(struct iw_cm_id* cm_id, int abrupt)
> > +{
> > + struct ib_qp_attr attr;
> > + struct ib_qp *ib_qp = cm_id->qp;
> > + int err;
> > +
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > +
> > + if (ib_qp == 0)
> > + /* If this is a listening endpoint, there is no QP */
> > + return 0;
> > +
> > + memset(&attr, 0, sizeof(struct ib_qp_attr));
> > + if (abrupt)
> > + attr.qp_state = IB_QPS_ERR;
> > + else
> > + attr.qp_state = IB_QPS_SQD;
> > +
> > + err = c2_modify_qp(ib_qp, &attr, IB_QP_STATE);
> > + return err;
> > +}
> > +
> > +static int c2_accept(struct iw_cm_id* cm_id, const void *pdata, u8
> pdata_len)
> > +{
> > + int err;
> > + struct c2_qp* qp = container_of(cm_id->qp, struct c2_qp, ibqp);
> > +
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > +
> > + /* Cache the cm_id in the qp */
> > + qp->cm_id = cm_id;
> > +
> > + err = c2_llp_accept(cm_id, pdata, pdata_len);
> > +
> > + return err;
> > +}
> > +
> > +static int c2_reject(struct iw_cm_id* cm_id, const void* pdata, u8
> pdata_len)
> > +{
> > + int err;
> > +
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > +
> > + err = c2_llp_reject(cm_id, pdata, pdata_len);
> > + return err;
> > +}
> > +
> > +static int c2_getpeername(struct iw_cm_id* cm_id,
> > + struct sockaddr_in* local_addr,
> > + struct sockaddr_in* remote_addr )
> > +{
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > +
> > + *local_addr = cm_id->local_addr;
> > + *remote_addr = cm_id->remote_addr;
> > + return 0;
> > +}
> > +
> > +static int c2_service_create(struct iw_cm_id* cm_id, int backlog)
> > +{
> > + int err;
> > +
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > + err = c2_llp_service_create(cm_id, backlog);
> > + return err;
> > +}
> > +
> > +static int c2_service_destroy(struct iw_cm_id* cm_id)
> > +{
> > + int err;
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > +
> > + err = c2_llp_service_destroy(cm_id);
> > +
> > + return err;
> > +}
> > +
> > +int c2_register_device(struct c2_dev *dev)
> > +{
> > + int ret;
> > + int i;
> > +
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > + strlcpy(dev->ibdev.name, "amso%d", IB_DEVICE_NAME_MAX);
> > + dev->ibdev.owner = THIS_MODULE;
> > +
> > + dev->ibdev.node_type = IB_NODE_RNIC;
> > + memset(&dev->ibdev.node_guid, 0, sizeof(dev->ibdev.node_guid));
> > + memcpy(&dev->ibdev.node_guid, dev->netdev->dev_addr, 6);
> > + dev->ibdev.phys_port_cnt = 1;
> > + dev->ibdev.dma_device = &dev->pcidev->dev;
> > + dev->ibdev.class_dev.dev = &dev->pcidev->dev;
> > + dev->ibdev.query_device = c2_query_device;
> > + dev->ibdev.query_port = c2_query_port;
> > + dev->ibdev.modify_port = c2_modify_port;
> > + dev->ibdev.query_pkey = c2_query_pkey;
> > + dev->ibdev.query_gid = c2_query_gid;
> > + dev->ibdev.alloc_ucontext = c2_alloc_ucontext;
> > + dev->ibdev.dealloc_ucontext = c2_dealloc_ucontext;
> > + dev->ibdev.mmap = c2_mmap_uar;
> > + dev->ibdev.alloc_pd = c2_alloc_pd;
> > + dev->ibdev.dealloc_pd = c2_dealloc_pd;
> > + dev->ibdev.create_ah = c2_ah_create;
> > + dev->ibdev.destroy_ah = c2_ah_destroy;
> > + dev->ibdev.create_qp = c2_create_qp;
> > + dev->ibdev.modify_qp = c2_modify_qp;
> > + dev->ibdev.destroy_qp = c2_destroy_qp;
> > + dev->ibdev.create_cq = c2_create_cq;
> > + dev->ibdev.destroy_cq = c2_destroy_cq;
> > + dev->ibdev.poll_cq = c2_poll_cq;
> > + dev->ibdev.get_dma_mr = c2_get_dma_mr;
> > + dev->ibdev.reg_phys_mr = c2_reg_phys_mr;
> > + dev->ibdev.reg_user_mr = c2_reg_user_mr;
> > + dev->ibdev.dereg_mr = c2_dereg_mr;
> > +
> > + dev->ibdev.alloc_fmr = 0;
> > + dev->ibdev.unmap_fmr = 0;
> > + dev->ibdev.dealloc_fmr = 0;
> > + dev->ibdev.map_phys_fmr = 0;
> > +
> > + dev->ibdev.attach_mcast = c2_multicast_attach;
> > + dev->ibdev.detach_mcast = c2_multicast_detach;
> > + dev->ibdev.process_mad = c2_process_mad;
> > +
> > + dev->ibdev.req_notify_cq = c2_arm_cq;
> > + dev->ibdev.post_send = c2_post_send;
> > + dev->ibdev.post_recv = c2_post_receive;
> > +
> > + dev->ibdev.iwcm = kmalloc(sizeof(*dev->ibdev.iwcm),
> GFP_KERNEL);
> > + dev->ibdev.iwcm->connect = c2_connect;
> > + dev->ibdev.iwcm->disconnect = c2_disconnect;
> > + dev->ibdev.iwcm->accept = c2_accept;
> > + dev->ibdev.iwcm->reject = c2_reject;
> > + dev->ibdev.iwcm->getpeername = c2_getpeername;
> > + dev->ibdev.iwcm->create_listen = c2_service_create;
> > + dev->ibdev.iwcm->destroy_listen = c2_service_destroy;
> > +
> > + ret = ib_register_device(&dev->ibdev);
> > + if (ret)
> > + return ret;
> > +
> > + for (i = 0; i < ARRAY_SIZE(c2_class_attributes); ++i) {
> > + ret = class_device_create_file(&dev->ibdev.class_dev,
> > + c2_class_attributes[i]);
> > + if (ret) {
> > + ib_unregister_device(&dev->ibdev);
> > + return ret;
> > + }
> > + }
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > + return 0;
> > +}
> > +
> > +void c2_unregister_device(struct c2_dev *dev)
> > +{
> > + dprintk("%s:%s:%u\n", __FILE__, __FUNCTION__, __LINE__);
> > + ib_unregister_device(&dev->ibdev);
> > +}
> > Index: hw/amso1100/c2_alloc.c
> > ===================================================================
> > --- hw/amso1100/c2_alloc.c (revision 0)
> > +++ hw/amso1100/c2_alloc.c (revision 0)
> > @@ -0,0 +1,255 @@
> > +/*
> > + * Copyright (c) 2004 Topspin Communications. All rights reserved.
> > + * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
> > + *
> > + * This software is available to you under a choice of one of two
> > + * licenses. You may choose to be licensed under the terms of the GNU
> > + * General Public License (GPL) Version 2, available from the file
> > + * COPYING in the main directory of this source tree, or the
> > + * OpenIB.org BSD license below:
> > + *
> > + * Redistribution and use in source and binary forms, with or
> > + * without modification, are permitted provided that the following
> > + * conditions are met:
> > + *
> > + * - Redistributions of source code must retain the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer.
> > + *
> > + * - Redistributions in binary form must reproduce the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer in the documentation and/or other materials
> > + * provided with the distribution.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> > + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> > + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
> > + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
> > + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
> > + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
> > + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> > + * SOFTWARE.
> > + */
> > +
> > +#include <linux/errno.h>
> > +#include <linux/slab.h>
> > +#include <linux/bitmap.h>
> > +
> > +#include "c2.h"
> > +
> > +/* Trivial bitmap-based allocator */
> > +u32 c2_alloc(struct c2_alloc *alloc)
> > +{
> > + u32 obj;
> > +
> > + spin_lock(&alloc->lock);
> > + obj = find_next_zero_bit(alloc->table, alloc->max, alloc->last);
> > + if (obj < alloc->max) {
> > + set_bit(obj, alloc->table);
> > + alloc->last = obj;
> > + } else
> > + obj = -1;
> > +
> > + spin_unlock(&alloc->lock);
> > +
> > + return obj;
> > +}
> > +
> > +void c2_free(struct c2_alloc *alloc, u32 obj)
> > +{
> > + spin_lock(&alloc->lock);
> > + clear_bit(obj, alloc->table);
> > + alloc->last = min(alloc->last, obj);
> > + spin_unlock(&alloc->lock);
> > +}
> > +
> > +int c2_alloc_init(struct c2_alloc *alloc, u32 num, u32 reserved)
> > +{
> > + int i;
> > +
> > + alloc->last = 0;
> > + alloc->max = num;
> > + spin_lock_init(&alloc->lock);
> > + alloc->table = kmalloc(BITS_TO_LONGS(num) * sizeof (long),
> > + GFP_KERNEL);
> > + if (!alloc->table)
> > + return -ENOMEM;
> > +
> > + bitmap_zero(alloc->table, num);
> > + for (i = 0; i < reserved; ++i)
> > + set_bit(i, alloc->table);
> > +
> > + return 0;
> > +}
> > +
> > +void c2_alloc_cleanup(struct c2_alloc *alloc)
> > +{
> > + kfree(alloc->table);
> > +}
> > +
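> > +/* Usage sketch for the allocator above (illustrative only, error
> > + * handling omitted): a pool of 16 IDs with ID 0 reserved.
> > + *
> > + *	struct c2_alloc pool;
> > + *	u32 id;
> > + *
> > + *	c2_alloc_init(&pool, 16, 1);
> > + *	id = c2_alloc(&pool);		1..15, or (u32)-1 when exhausted
> > + *	c2_free(&pool, id);
> > + *	c2_alloc_cleanup(&pool);
> > + */
> > +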
> > +/*
> > + * Array of pointers with lazy allocation of leaf pages. Callers of
> > + * _get, _set and _clear methods must use a lock or otherwise
> > + * serialize access to the array.
> > + */
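> > +/* Usage sketch (illustrative only; qps, some_lock, qpn and max_qp are
> > + * hypothetical names): mapping a QP number to its struct c2_qp pointer,
> > + * with the caller providing the serialization required above.
> > + *
> > + *	struct c2_array qps;
> > + *
> > + *	c2_array_init(&qps, max_qp);
> > + *	spin_lock(&some_lock);
> > + *	c2_array_set(&qps, qpn, qp);	may allocate a leaf page (GFP_ATOMIC)
> > + *	qp = c2_array_get(&qps, qpn);
> > + *	c2_array_clear(&qps, qpn);	frees the leaf page when unused
> > + *	spin_unlock(&some_lock);
> > + *	c2_array_cleanup(&qps, max_qp);
> > + */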
> > +
> > +void *c2_array_get(struct c2_array *array, int index)
> > +{
> > + int p = (index * sizeof (void *)) >> PAGE_SHIFT;
> > +
> > + if (array->page_list[p].page) {
> > + int i = index & (PAGE_SIZE / sizeof (void *) - 1);
> > + return array->page_list[p].page[i];
> > + } else
> > + return NULL;
> > +}
> > +
> > +int c2_array_set(struct c2_array *array, int index, void *value)
> > +{
> > + int p = (index * sizeof (void *)) >> PAGE_SHIFT;
> > +
> > + /* Allocate with GFP_ATOMIC because we'll be called with locks held.
> */
> > + if (!array->page_list[p].page)
> > + array->page_list[p].page = (void **) get_zeroed_page(GFP_ATOMIC);
> > +
> > + if (!array->page_list[p].page)
> > + return -ENOMEM;
> > +
> > + array->page_list[p].page[index & (PAGE_SIZE / sizeof (void *) - 1)]
> =
> > + value;
> > + ++array->page_list[p].used;
> > +
> > + return 0;
> > +}
> > +
> > +void c2_array_clear(struct c2_array *array, int index)
> > +{
> > + int p = (index * sizeof (void *)) >> PAGE_SHIFT;
> > +
> > + if (--array->page_list[p].used == 0) {
> > + free_page((unsigned long) array->page_list[p].page);
> > + array->page_list[p].page = NULL;
> > + }
> > +
> > + if (array->page_list[p].used < 0)
> > + pr_debug("Array %p index %d page %d with ref count %d < 0\n",
> > + array, index, p, array->page_list[p].used);
> > +}
> > +
> > +int c2_array_init(struct c2_array *array, int nent)
> > +{
> > + int npage = (nent * sizeof (void *) + PAGE_SIZE - 1) / PAGE_SIZE;
> > + int i;
> > +
> > + array->page_list = kmalloc(npage * sizeof *array->page_list,
> GFP_KERNEL);
> > + if (!array->page_list)
> > + return -ENOMEM;
> > +
> > + for (i = 0; i < npage; ++i) {
> > + array->page_list[i].page = NULL;
> > + array->page_list[i].used = 0;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +void c2_array_cleanup(struct c2_array *array, int nent)
> > +{
> > + int i;
> > +
> > + for (i = 0; i < (nent * sizeof (void *) + PAGE_SIZE - 1) /
> PAGE_SIZE; ++i)
> > + free_page((unsigned long) array->page_list[i].page);
> > +
> > + kfree(array->page_list);
> > +}
> > +
> > +static int c2_alloc_mqsp_chunk(unsigned int gfp_mask, struct sp_chunk**
> head)
> > +{
> > + int i;
> > + struct sp_chunk* new_head;
> > +
> > + new_head = (struct sp_chunk*)__get_free_page(gfp_mask|GFP_DMA);
> > + if (new_head == NULL)
> > + return -ENOMEM;
> > +
> > + new_head->next = NULL;
> > + new_head->head = 0;
> > + new_head->gfp_mask = gfp_mask;
> > +
> > + /* build list where each index is the next free slot */
> > + for (i = 0;
> > + i < (PAGE_SIZE-sizeof(struct sp_chunk*)-sizeof(u16)) /
> sizeof(u16)-1;
> > + i++) {
> > + new_head->shared_ptr[i] = i+1;
> > + }
> > + /* terminate list */
> > + new_head->shared_ptr[i] = 0xFFFF;
> > +
> > + *head = new_head;
> > + return 0;
> > +}
> > +
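> > +/* Layout of the chunk built above (descriptive note): each chunk is one
> > + * page (allocated with GFP_DMA) whose header (next, head, gfp_mask) is
> > + * followed by an array of u16 slots. The slots double as a free list:
> > + * head holds the index of the first free slot and each free slot holds
> > + * the index of the next one, with 0xFFFF terminating the list.
> > + * c2_alloc_mqsp() pops from this list, growing the pool by another chunk
> > + * when the current one is full; c2_free_mqsp() pushes the slot back.
> > + */
> > +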
> > +int c2_init_mqsp_pool(unsigned int gfp_mask, struct sp_chunk** root) {
> > + return c2_alloc_mqsp_chunk(gfp_mask, root);
> > +}
> > +
> > +void c2_free_mqsp_pool(struct sp_chunk* root)
> > +{
> > + struct sp_chunk* next;
> > +
> > + while (root) {
> > + next = root->next;
> > + __free_page((struct page*)root);
> > + root = next;
> > + }
> > +}
> > +
> > +u16* c2_alloc_mqsp(struct sp_chunk* head)
> > +{
> > + u16 mqsp;
> > +
> > + while (head) {
> > + mqsp = head->head;
> > + if (mqsp != 0xFFFF) {
> > + head->head = head->shared_ptr[mqsp];
> > + break;
> > + } else if (head->next == NULL) {
> > + if (c2_alloc_mqsp_chunk(head->gfp_mask, &head->next) == 0) {
> > + head = head->next;
> > + mqsp = head->head;
> > + head->head =
> > + head->shared_ptr[mqsp];
> > + break;
> > + }
> > + else
> > + return 0;
> > + }
> > + else
> > + head = head->next;
> > + }
> > + if (head)
> > + return &(head->shared_ptr[mqsp]);
> > + return 0;
> > +}
> > +
> > +void c2_free_mqsp(u16* mqsp)
> > +{
> > + struct sp_chunk* head;
> > + u16 idx;
> > +
> > + /* The chunk containing this ptr begins at the page boundary */
> > + head = (struct sp_chunk*)((unsigned long)mqsp & PAGE_MASK);
> > +
> > + /* Link head to new mqsp */
> > + *mqsp = head->head;
> > +
> > + /* Compute the shared_ptr index */
> > + idx = ((unsigned long)mqsp & ~PAGE_MASK) >> 1;
> > + idx -= (unsigned long)&(((struct sp_chunk*)0)->shared_ptr[0]) >> 1;
> > +
> > + /* Point this index at the head */
> > + head->shared_ptr[idx] = head->head;
> > +
> > + /* Point head at this index */
> > + head->head = idx;
> > +}
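> > +
> > +/* Worked example of the index math above (illustrative only): if
> > + * shared_ptr[] starts at byte offset OFF within the chunk page and mqsp
> > + * points at byte offset P within that page, then
> > + *	idx = (P >> 1) - (OFF >> 1) = (P - OFF) / sizeof(u16),
> > + * i.e. the slot's position within shared_ptr[] (both offsets are
> > + * u16-aligned here).
> > + */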
> > Index: hw/amso1100/cc_types.h
> > ===================================================================
> > --- hw/amso1100/cc_types.h (revision 0)
> > +++ hw/amso1100/cc_types.h (revision 0)
> > @@ -0,0 +1,297 @@
> > +/*
> > + * Copyright (c) 2005 Ammasso, Inc. All rights reserved.
> > + * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
> > + *
> > + * This software is available to you under a choice of one of two
> > + * licenses. You may choose to be licensed under the terms of the GNU
> > + * General Public License (GPL) Version 2, available from the file
> > + * COPYING in the main directory of this source tree, or the
> > + * OpenIB.org BSD license below:
> > + *
> > + * Redistribution and use in source and binary forms, with or
> > + * without modification, are permitted provided that the following
> > + * conditions are met:
> > + *
> > + * - Redistributions of source code must retain the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer.
> > + *
> > + * - Redistributions in binary form must reproduce the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer in the documentation and/or other materials
> > + * provided with the distribution.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> > + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> > + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
> > + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
> > + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
> > + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
> > + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> > + * SOFTWARE.
> > + */
> > +#ifndef _CC_TYPES_H_
> > +#define _CC_TYPES_H_
> > +
> > +#include <linux/types.h>
> > +
> > +#ifndef NULL
> > +#define NULL 0
> > +#endif
> > +#ifndef TRUE
> > +#define TRUE 1
> > +#endif
> > +#ifndef FALSE
> > +#define FALSE 0
> > +#endif
> > +
> > +#define PTR_TO_CTX(p) (u64)(u32)(p)
> > +
> > +#define CC_PTR_TO_64(p) (u64)(u32)(p)
> > +#define CC_64_TO_PTR(c) (void*)(u32)(c)
> > +
> > +
> > +
> > +/*
> > + * Not really a "type", but this needs to be
> > + * common between the adapter and host, so this
> > + * is the best place to put it.
> > + */
> > +#define CC_QP_NO_ATTR_CHANGE 0xFFFFFFFF
> > +
> > +/* Maximum allowed size in bytes of private_data exchange
> > + * on connect.
> > + */
> > +#define CC_MAX_PRIVATE_DATA_SIZE 200
> > +
> > +/*
> > + * These types are shared among the adapter, host, and CCIL consumer.
> Thus
> > + * they are placed here since everyone includes cc_types.h...
> > + */
> > +typedef enum {
> > + CC_CQ_NOTIFICATION_TYPE_NONE = 1,
> > + CC_CQ_NOTIFICATION_TYPE_NEXT,
> > + CC_CQ_NOTIFICATION_TYPE_NEXT_SE
> > +} cc_cq_notification_type_t;
> > +
> > +typedef enum {
> > + CC_CFG_ADD_ADDR = 1,
> > + CC_CFG_DEL_ADDR = 2,
> > + CC_CFG_ADD_ROUTE = 3,
> > + CC_CFG_DEL_ROUTE = 4
> > +} cc_setconfig_cmd_t;
> > +
> > +typedef enum {
> > + CC_GETCONFIG_ROUTES = 1,
> > + CC_GETCONFIG_ADDRS
> > +} cc_getconfig_cmd_t;
> > +
> > +/*
> > + * CCIL Work Request Identifiers
> > + */
> > +typedef enum {
> > + CCWR_RNIC_OPEN = 1,
> > + CCWR_RNIC_QUERY,
> > + CCWR_RNIC_SETCONFIG,
> > + CCWR_RNIC_GETCONFIG,
> > + CCWR_RNIC_CLOSE,
> > + CCWR_CQ_CREATE,
> > + CCWR_CQ_QUERY,
> > + CCWR_CQ_MODIFY,
> > + CCWR_CQ_DESTROY,
> > + CCWR_QP_CONNECT,
> > + CCWR_PD_ALLOC,
> > + CCWR_PD_DEALLOC,
> > + CCWR_SRQ_CREATE,
> > + CCWR_SRQ_QUERY,
> > + CCWR_SRQ_MODIFY,
> > + CCWR_SRQ_DESTROY,
> > + CCWR_QP_CREATE,
> > + CCWR_QP_QUERY,
> > + CCWR_QP_MODIFY,
> > + CCWR_QP_DESTROY,
> > + CCWR_NSMR_STAG_ALLOC,
> > + CCWR_NSMR_REGISTER,
> > + CCWR_NSMR_PBL,
> > + CCWR_STAG_DEALLOC,
> > + CCWR_NSMR_REREGISTER,
> > + CCWR_SMR_REGISTER,
> > + CCWR_MR_QUERY,
> > + CCWR_MW_ALLOC,
> > + CCWR_MW_QUERY,
> > + CCWR_EP_CREATE,
> > + CCWR_EP_GETOPT,
> > + CCWR_EP_SETOPT,
> > + CCWR_EP_DESTROY,
> > + CCWR_EP_BIND,
> > + CCWR_EP_CONNECT,
> > + CCWR_EP_LISTEN,
> > + CCWR_EP_SHUTDOWN,
> > + CCWR_EP_LISTEN_CREATE,
> > + CCWR_EP_LISTEN_DESTROY,
> > + CCWR_EP_QUERY,
> > + CCWR_CR_ACCEPT,
> > + CCWR_CR_REJECT,
> > + CCWR_CONSOLE,
> > + CCWR_TERM,
> > + CCWR_FLASH_INIT,
> > + CCWR_FLASH,
> > + CCWR_BUF_ALLOC,
> > + CCWR_BUF_FREE,
> > + CCWR_FLASH_WRITE,
> > + CCWR_INIT, /* WARNING: Don't move this ever again! */
> > +
> > +
> > +
> > + /* Add new IDs here */
> > +
> > +
> > +
> > + /*
> > + * WARNING: CCWR_LAST must always be the last verbs id defined!
> > + * All the preceding IDs are fixed, and must not change.
> > + * You can add new IDs, but must not remove or reorder
> > + * any IDs. If you do, YOU will ruin any hope of
> > + * compatibility between versions.
> > + */
> > + CCWR_LAST,
> > +
> > + /*
> > + * Start over at 1 so that arrays indexed by user wr id's
> > + * begin at 1. This is OK since the verbs and user wr id's
> > + * are always used on disjoint sets of queues.
> > + */
> > +#if 0
> > + CCWR_SEND = 1,
> > + CCWR_SEND_SE,
> > + CCWR_SEND_INV,
> > + CCWR_SEND_SE_INV,
> > +#else
> > + /*
> > + * The order of the CCWR_SEND_XX verbs must
> > + * match the order of the RDMA_OPs
> > + */
> > + CCWR_SEND = 1,
> > + CCWR_SEND_INV,
> > + CCWR_SEND_SE,
> > + CCWR_SEND_SE_INV,
> > +#endif
> > + CCWR_RDMA_WRITE,
> > + CCWR_RDMA_READ,
> > + CCWR_RDMA_READ_INV,
> > + CCWR_MW_BIND,
> > + CCWR_NSMR_FASTREG,
> > + CCWR_STAG_INVALIDATE,
> > + CCWR_RECV,
> > + CCWR_NOP,
> > + CCWR_UNIMPL, /* WARNING: This must always be the last user wr
> id defined! */
> > +} ccwr_ids_t;
> > +#define RDMA_SEND_OPCODE_FROM_WR_ID(x) (x+2)
> > +
> > +/*
> > + * SQ/RQ Work Request Types
> > + */
> > +typedef enum {
> > + CC_WR_TYPE_SEND = CCWR_SEND,
> > + CC_WR_TYPE_SEND_SE = CCWR_SEND_SE,
> > + CC_WR_TYPE_SEND_INV = CCWR_SEND_INV,
> > + CC_WR_TYPE_SEND_SE_INV = CCWR_SEND_SE_INV,
> > + CC_WR_TYPE_RDMA_WRITE = CCWR_RDMA_WRITE,
> > + CC_WR_TYPE_RDMA_READ = CCWR_RDMA_READ,
> > + CC_WR_TYPE_RDMA_READ_INV_STAG = CCWR_RDMA_READ_INV,
> > + CC_WR_TYPE_BIND_MW = CCWR_MW_BIND,
> > + CC_WR_TYPE_FASTREG_NSMR = CCWR_NSMR_FASTREG,
> > + CC_WR_TYPE_INV_STAG = CCWR_STAG_INVALIDATE,
> > + CC_WR_TYPE_RECV = CCWR_RECV,
> > + CC_WR_TYPE_NOP = CCWR_NOP,
> > +} cc_wr_type_t;
> > +
> > +/*
> > + * These are used as bitfields for efficient comparison of multiple
> possible
> > + * states.
> > + */
> > +typedef enum {
> > + CC_QP_STATE_IDLE = 0x01, /* initial state */
> > + CC_QP_STATE_CONNECTING = 0x02, /* LLP is connecting */
> > + CC_QP_STATE_RTS = 0x04, /* RDDP/RDMAP enabled */
> > + CC_QP_STATE_CLOSING = 0x08, /* LLP is shutting down */
> > + CC_QP_STATE_TERMINATE = 0x10, /* Connection
> Terminat[ing|ed] */
> > + CC_QP_STATE_ERROR = 0x20, /* Error state to flush
> everything */
> > +} cc_qp_state_t;
> > +
> > +typedef struct _cc_netaddr_s {
> > + u32 ip_addr;
> > + u32 netmask;
> > + u32 mtu;
> > +} cc_netaddr_t;
> > +
> > +typedef struct _cc_route_s {
> > + u32 ip_addr; /* 0 indicates the default route */
> > + u32 netmask; /* netmask associated with dst */
> > + u32 flags;
> > + union {
> > + u32 ipaddr; /* address of the nexthop interface */
> > + u8 enaddr[6];
> > + } nexthop;
> > +} cc_route_t;
> > +
> > +/*
> > + * A Scatter Gather Entry.
> > + */
> > +typedef u32 cc_stag_t;
> > +
> > +typedef struct {
> > + cc_stag_t stag;
> > + u32 length;
> > + u64 to;
> > +} cc_data_addr_t;
> > +
> > +/*
> > + * MR and MW flags used by the consumer, RI, and RNIC.
> > + */
> > +typedef enum {
> > + MEM_REMOTE = 0x0001, /* allow mw binds with remote access. */
> > + MEM_VA_BASED = 0x0002, /* Not Zero-based */
> > + MEM_PBL_COMPLETE = 0x0004, /* PBL array is complete in this msg
> */
> > + MEM_LOCAL_READ = 0x0008, /* allow local reads */
> > + MEM_LOCAL_WRITE = 0x0010, /* allow local writes */
> > + MEM_REMOTE_READ = 0x0020, /* allow remote reads */
> > + MEM_REMOTE_WRITE = 0x0040, /* allow remote writes */
> > + MEM_WINDOW_BIND = 0x0080, /* binds allowed */
> > + MEM_SHARED = 0x0100, /* set if MR is shared */
> > + MEM_STAG_VALID = 0x0200 /* set if STAG is in valid state */
> > +} cc_mm_flags_t;
> > +
> > +/*
> > + * CCIL API ACF flags defined in terms of the low level mem flags.
> > + * This minimizes translation needed in the user API
> > + */
> > +typedef enum {
> > + CC_ACF_LOCAL_READ = MEM_LOCAL_READ,
> > + CC_ACF_LOCAL_WRITE = MEM_LOCAL_WRITE,
> > + CC_ACF_REMOTE_READ = MEM_REMOTE_READ,
> > + CC_ACF_REMOTE_WRITE = MEM_REMOTE_WRITE,
> > + CC_ACF_WINDOW_BIND = MEM_WINDOW_BIND
> > +} cc_acf_t;
> > +
> > +/*
> > + * Image types of objects written to flash
> > + */
> > +#define CC_FLASH_IMG_BITFILE 1
> > +#define CC_FLASH_IMG_OPTION_ROM 2
> > +#define CC_FLASH_IMG_VPD 3
> > +
> > +/*
> > + * to fix bug 1815 we define the max size allowable of the
> > + * terminate message (per the IETF spec). Refer to the IETF
> > + * protocol specification, section 12.1.6, page 64.
> > + * The message is prefixed by 20 bytes of DDP info.
> > + *
> > + * Then the message has 6 bytes for the terminate control
> > + * and DDP segment length info plus a DDP header (either
> > + * 14 or 18 bytes) plus 28 bytes for the RDMA header.
> > + * Thus the max size is:
> > + * 20 + (6 + 18 + 28) = 72
> > + */
> > +#define CC_MAX_TERMINATE_MESSAGE_SIZE (72)
> > +#endif
> > Index: hw/amso1100/c2_rnic.c
> > ===================================================================
> > --- hw/amso1100/c2_rnic.c (revision 0)
> > +++ hw/amso1100/c2_rnic.c (revision 0)
> > @@ -0,0 +1,581 @@
> > +/*
> > + * Copyright (c) 2005 Ammasso, Inc. All rights reserved.
> > + * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
> > + *
> > + * This software is available to you under a choice of one of two
> > + * licenses. You may choose to be licensed under the terms of the GNU
> > + * General Public License (GPL) Version 2, available from the file
> > + * COPYING in the main directory of this source tree, or the
> > + * OpenIB.org BSD license below:
> > + *
> > + * Redistribution and use in source and binary forms, with or
> > + * without modification, are permitted provided that the following
> > + * conditions are met:
> > + *
> > + * - Redistributions of source code must retain the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer.
> > + *
> > + * - Redistributions in binary form must reproduce the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer in the documentation and/or other materials
> > + * provided with the distribution.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> > + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> > + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
> > + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
> > + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
> > + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
> > + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> > + * SOFTWARE.
> > + *
> > + */
> > +
> > +
> > +#include <linux/module.h>
> > +#include <linux/moduleparam.h>
> > +#include <linux/pci.h>
> > +#include <linux/netdevice.h>
> > +#include <linux/etherdevice.h>
> > +#include <linux/delay.h>
> > +#include <linux/ethtool.h>
> > +#include <linux/mii.h>
> > +#include <linux/if_vlan.h>
> > +#include <linux/crc32.h>
> > +#include <linux/in.h>
> > +#include <linux/ip.h>
> > +#include <linux/tcp.h>
> > +#include <linux/init.h>
> > +#include <linux/dma-mapping.h>
> > +#include <linux/mm.h>
> > +#include <linux/inet.h>
> > +
> > +#include <linux/route.h>
> > +#ifdef NETEVENT_NOTIFIER
> > +#include <net/netevent.h>
> > +#include <net/neighbour.h>
> > +#include <net/ip_fib.h>
> > +#endif
> > +
> > +
> > +#include <asm/io.h>
> > +#include <asm/irq.h>
> > +#include <asm/byteorder.h>
> > +#include <rdma/ib_smi.h>
> > +#include "c2.h"
> > +#include "c2_vq.h"
> > +
> > +#define C2_MAX_MRS 32768
> > +#define C2_MAX_QPS 16000
> > +#define C2_MAX_WQE_SZ 256
> > +#define C2_MAX_QP_WR ((128*1024)/C2_MAX_WQE_SZ)
> > +#define C2_MAX_SGES 4
> > +#define C2_MAX_CQS 32768
> > +#define C2_MAX_CQES 4096
> > +#define C2_MAX_PDS 16384
> > +
> > +/*
> > + * Send the adapter INIT message to the amso1100
> > + */
> > +static int c2_adapter_init(struct c2_dev *c2dev)
> > +{
> > + ccwr_init_req_t wr;
> > + int err;
> > +
> > + memset(&wr, 0, sizeof(wr));
> > + c2_wr_set_id(&wr, CCWR_INIT);
> > + wr.hdr.context = 0;
> > + wr.hint_count = cpu_to_be64(__pa(&c2dev->hint_count));
> > + wr.q0_host_shared =
> > + cpu_to_be64(__pa(c2dev->req_vq.shared));
> > + wr.q1_host_shared =
> > + cpu_to_be64(__pa(c2dev->rep_vq.shared));
> > + wr.q1_host_msg_pool =
> > + cpu_to_be64(__pa(c2dev->rep_vq.msg_pool));
> > + wr.q2_host_shared =
> > + cpu_to_be64(__pa(c2dev->aeq.shared));
> > + wr.q2_host_msg_pool =
> > + cpu_to_be64(__pa(c2dev->aeq.msg_pool));
> > +
> > + /* Post the init message */
> > + err = vq_send_wr(c2dev, (ccwr_t *)&wr);
> > +
> > + return err;
> > +}
> > +
> > +/*
> > + * Send the adapter TERM message to the amso1100
> > + */
> > +static void c2_adapter_term(struct c2_dev *c2dev)
> > +{
> > + ccwr_init_req_t wr;
> > +
> > + memset(&wr, 0, sizeof(wr));
> > + c2_wr_set_id(&wr, CCWR_TERM);
> > + wr.hdr.context = 0;
> > +
> > + /* Post the TERM message */
> > + vq_send_wr(c2dev, (ccwr_t *)&wr);
> > + c2dev->init = 0;
> > +
> > + return;
> > +}
> > +
> > +/*
> > + * Hack to hard code an ip address
> > + */
> > +extern char *rnic_ip_addr;
> > +static int c2_setconfig_hack(struct c2_dev *c2dev)
> > +{
> > + struct c2_vq_req *vq_req;
> > + ccwr_rnic_setconfig_req_t *wr;
> > + ccwr_rnic_setconfig_rep_t *reply;
> > + cc_netaddr_t netaddr;
> > + int err, len;
> > +
> > + vq_req = vq_req_alloc(c2dev);
> > + if (!vq_req)
> > + return -ENOMEM;
> > +
> > + len = sizeof(cc_netaddr_t);
> > + wr = kmalloc(sizeof(*wr) + len, GFP_KERNEL);
> > + if (!wr) {
> > + err = -ENOMEM;
> > + goto bail0;
> > + }
> > +
> > + c2_wr_set_id(wr, CCWR_RNIC_SETCONFIG);
> > + wr->hdr.context = (unsigned long)vq_req;
> > + wr->rnic_handle = c2dev->adapter_handle;
> > + wr->option = cpu_to_be32(CC_CFG_ADD_ADDR);
> > +
> > + netaddr.ip_addr = in_aton(rnic_ip_addr);
> > + netaddr.netmask = htonl(0xFFFFFF00);
> > + netaddr.mtu = 0;
> > +
> > + memcpy(wr->data, &netaddr, len);
> > +
> > + vq_req_get(c2dev, vq_req);
> > +
> > + err = vq_send_wr(c2dev, (ccwr_t *)wr);
> > + if (err) {
> > + vq_req_put(c2dev, vq_req);
> > + goto bail1;
> > + }
> > +
> > + err = vq_wait_for_reply(c2dev, vq_req);
> > + if (err)
> > + goto bail1;
> > +
> > + reply = (ccwr_rnic_setconfig_rep_t *)(unsigned
> long)(vq_req->reply_msg);
> > + if (!reply) {
> > + err = -ENOMEM;
> > + goto bail1;
> > + }
> > +
> > + err = c2_errno(reply);
> > + vq_repbuf_free(c2dev, reply);
> > +
> > +bail1:
> > + kfree(wr);
> > +bail0:
> > + vq_req_free(c2dev, vq_req);
> > + return err;
> > +}
> > +
> > +/*
> > + * Open a single RNIC instance to use with all
> > + * low level openib calls
> > + */
> > +static int c2_rnic_open(struct c2_dev *c2dev)
> > +{
> > + struct c2_vq_req *vq_req;
> > + ccwr_t wr;
> > + ccwr_rnic_open_rep_t* reply;
> > + int err;
> > +
> > + vq_req = vq_req_alloc(c2dev);
> > + if (vq_req == NULL) {
> > + return -ENOMEM;
> > + }
> > +
> > + memset(&wr, 0, sizeof(wr));
> > + c2_wr_set_id(&wr, CCWR_RNIC_OPEN);
> > + wr.rnic_open.req.hdr.context = (unsigned long)(vq_req);
> > + wr.rnic_open.req.flags = cpu_to_be16(RNIC_PRIV_MODE);
> > + wr.rnic_open.req.port_num = cpu_to_be16(0);
> > + wr.rnic_open.req.user_context = (unsigned long)c2dev;
> > +
> > + vq_req_get(c2dev, vq_req);
> > +
> > + err = vq_send_wr(c2dev, &wr);
> > + if (err) {
> > + vq_req_put(c2dev, vq_req);
> > + goto bail0;
> > + }
> > +
> > + err = vq_wait_for_reply(c2dev, vq_req);
> > + if (err) {
> > + goto bail0;
> > + }
> > +
> > + reply = (ccwr_rnic_open_rep_t*)(unsigned long)(vq_req->reply_msg);
> > + if (!reply) {
> > + err = -ENOMEM;
> > + goto bail0;
> > + }
> > +
> > + if ( (err = c2_errno(reply)) != 0) {
> > + goto bail1;
> > + }
> > +
> > + c2dev->adapter_handle = reply->rnic_handle;
> > +
> > +bail1:
> > + vq_repbuf_free(c2dev, reply);
> > +bail0:
> > + vq_req_free(c2dev, vq_req);
> > + return err;
> > +}
> > +
> > +/*
> > + * Close the RNIC instance
> > + */
> > +static int c2_rnic_close(struct c2_dev *c2dev)
> > +{
> > + struct c2_vq_req *vq_req;
> > + ccwr_t wr;
> > + ccwr_rnic_close_rep_t *reply;
> > + int err;
> > +
> > + vq_req = vq_req_alloc(c2dev);
> > + if (vq_req == NULL) {
> > + return -ENOMEM;
> > + }
> > +
> > + memset(&wr, 0, sizeof(wr));
> > + c2_wr_set_id(&wr, CCWR_RNIC_CLOSE);
> > + wr.rnic_close.req.hdr.context = (unsigned long)vq_req;
> > + wr.rnic_close.req.rnic_handle = c2dev->adapter_handle;
> > +
> > + vq_req_get(c2dev, vq_req);
> > +
> > + err = vq_send_wr(c2dev, &wr);
> > + if (err) {
> > + vq_req_put(c2dev, vq_req);
> > + goto bail0;
> > + }
> > +
> > + err = vq_wait_for_reply(c2dev, vq_req);
> > + if (err) {
> > + goto bail0;
> > + }
> > +
> > + reply = (ccwr_rnic_close_rep_t*)(unsigned long)(vq_req->reply_msg);
> > + if (!reply) {
> > + err = -ENOMEM;
> > + goto bail0;
> > + }
> > +
> > + if ( (err = c2_errno(reply)) != 0) {
> > + goto bail1;
> > + }
> > +
> > + c2dev->adapter_handle = 0;
> > +
> > +bail1:
> > + vq_repbuf_free(c2dev, reply);
> > +bail0:
> > + vq_req_free(c2dev, vq_req);
> > + return err;
> > +}
> > +#ifdef NETEVENT_NOTIFIER
> > +static int netevent_notifier(struct notifier_block *self, unsigned long
>
> > event, void* data)
> > +{
> > + int i;
> > + u8* ha;
> > + struct neighbour* neigh = data;
> > + struct netevent_redirect* redir = data;
> > + struct netevent_route_change* rev = data;
> > +
> > + switch (event) {
> > + case NETEVENT_ROUTE_UPDATE:
> > + printk(KERN_ERR "NETEVENT_ROUTE_UPDATE:\n");
> > + printk(KERN_ERR "fib_flags : %d\n",
> > + rev->fib_info->fib_flags);
> > + printk(KERN_ERR "fib_protocol : %d\n",
> > + rev->fib_info->fib_protocol);
> > + printk(KERN_ERR "fib_prefsrc : %08x\n",
> > + rev->fib_info->fib_prefsrc);
> > + printk(KERN_ERR "fib_priority : %d\n",
> > + rev->fib_info->fib_priority);
> > + break;
> > +
> > + case NETEVENT_NEIGH_UPDATE:
> > + printk(KERN_ERR "NETEVENT_NEIGH_UPDATE:\n");
> > + printk(KERN_ERR "nud_state : %d\n", neigh->nud_state);
> > + printk(KERN_ERR "refcnt : %d\n", neigh->refcnt);
> > + printk(KERN_ERR "used : %d\n", neigh->used);
> > + printk(KERN_ERR "confirmed : %d\n", neigh->confirmed);
> > + printk(KERN_ERR " ha: ");
> > + for (i=0; i < neigh->dev->addr_len; i+=4) {
> > + ha = &neigh->ha[i];
> > + printk("%02x:%02x:%02x:%02x:", ha[0], ha[1], ha[2], ha[3]);
> > + }
> > + printk("\n");
> > +
> > + printk(KERN_ERR "%8s: ", neigh->dev->name);
> > + for (i=0; i < neigh->dev->addr_len; i+=4) {
> > + ha = &neigh->ha[i];
> > + printk("%02x:%02x:%02x:%02x:", ha[0], ha[1], ha[2], ha[3]);
> > + }
> > + printk("\n");
> > + break;
> > +
> > + case NETEVENT_REDIRECT:
> > + printk(KERN_ERR "NETEVENT_REDIRECT:\n");
> > + printk(KERN_ERR "old: ");
> > + for (i=0; i < redir->old->neighbour->dev->addr_len; i+=4) {
> > + ha = &redir->old->neighbour->ha[i];
> > + printk("%02x:%02x:%02x:%02x:", ha[0], ha[1], ha[2], ha[3]);
> > + }
> > + printk("\n");
> > +
> > + printk(KERN_ERR "new: ");
> > + for (i=0; i < redir->new->neighbour->dev->addr_len; i+=4) {
> > + ha = &redir->new->neighbour->ha[i];
> > + printk("%02x:%02x:%02x:%02x:", ha[0], ha[1], ha[2], ha[3]);
> > + }
> > + printk("\n");
> > + break;
> > +
> > + default:
> > + printk(KERN_ERR "NETEVENT_WTFO:\n");
> > + }
> > +
> > + return NOTIFY_DONE;
> > +}
> > +
> > +static struct notifier_block nb = {
> > + .notifier_call = netevent_notifier,
> > +};
> > +#endif
> > +/*
> > + * Called by c2_probe to initialize the RNIC. This principally
> > + * involves initializing the various limits and resource pools that
> > + * comprise the RNIC instance.
> > + */
> > +int c2_rnic_init(struct c2_dev* c2dev)
> > +{
> > + int err;
> > + u32 qsize, msgsize;
> > + void *q1_pages;
> > + void *q2_pages;
> > + void __iomem *mmio_regs;
> > +
> > + /* Initialize the adapter limits */
> > + c2dev->max_mr = C2_MAX_MRS;
> > + c2dev->max_mr_size = ~0;
> > + c2dev->max_qp = C2_MAX_QPS;
> > + c2dev->max_qp_wr = C2_MAX_QP_WR;
> > + c2dev->max_sge = C2_MAX_SGES;
> > + c2dev->max_cq = C2_MAX_CQS;
> > + c2dev->max_cqe = C2_MAX_CQES;
> > + c2dev->max_pd = C2_MAX_PDS;
> > +
> > + /* Device capabilities */
> > + c2dev->device_cap_flags =
> > + (
> > + IB_DEVICE_RESIZE_MAX_WR |
> > + IB_DEVICE_CURR_QP_STATE_MOD |
> > + IB_DEVICE_SYS_IMAGE_GUID |
> > + IB_DEVICE_ZERO_STAG |
> > + IB_DEVICE_SEND_W_INV |
> > + IB_DEVICE_MW |
> > + IB_DEVICE_ARP
> > + );
> > +
> > + /* Allocate the qptr_array */
> > + c2dev->qptr_array = vmalloc(C2_MAX_CQS*sizeof(void *));
> > + if (!c2dev->qptr_array) {
> > + return -ENOMEM;
> > + }
> > +
> > + /* Initialize the qptr_array */
> > + memset(c2dev->qptr_array, 0, C2_MAX_CQS*sizeof(void *));
> > + c2dev->qptr_array[0] = (void *)&c2dev->req_vq;
> > + c2dev->qptr_array[1] = (void *)&c2dev->rep_vq;
> > + c2dev->qptr_array[2] = (void *)&c2dev->aeq;
> > +
> > + /* Initialize data structures */
> > + init_waitqueue_head(&c2dev->req_vq_wo);
> > + spin_lock_init(&c2dev->vqlock);
> > + spin_lock_init(&c2dev->aeq_lock);
> > +
> > +
> > + /* Allocate MQ shared pointer pool for kernel clients. User
> > + * mode client pools are hung off the user context
> > + */
> > + err = c2_init_mqsp_pool(GFP_KERNEL, &c2dev->kern_mqsp_pool);
> > + if (err) {
> > + goto bail0;
> > + }
> > +
> > + /* Allocate shared pointers for Q0, Q1, and Q2 from
> > + * the shared pointer pool.
> > + */
> > + c2dev->req_vq.shared = c2_alloc_mqsp(c2dev->kern_mqsp_pool);
> > + c2dev->rep_vq.shared = c2_alloc_mqsp(c2dev->kern_mqsp_pool);
> > + c2dev->aeq.shared = c2_alloc_mqsp(c2dev->kern_mqsp_pool);
> > + if (!c2dev->req_vq.shared ||
> > + !c2dev->rep_vq.shared ||
> > + !c2dev->aeq.shared) {
> > + err = -ENOMEM;
> > + goto bail1;
> > + }
> > +
> > + mmio_regs = c2dev->kva;
> > + /* Initialize the Verbs Request Queue */
> > + c2_mq_init(&c2dev->req_vq, 0,
> > + be32_to_cpu(c2_read32(mmio_regs + C2_REGS_Q0_QSIZE)),
> > + be32_to_cpu(c2_read32(mmio_regs + C2_REGS_Q0_MSGSIZE)),
> > + mmio_regs + be32_to_cpu(c2_read32(mmio_regs +
> C2_REGS_Q0_POOLSTART)),
> > + mmio_regs + be32_to_cpu(c2_read32(mmio_regs +
> C2_REGS_Q0_SHARED)),
> > + C2_MQ_ADAPTER_TARGET);
> > +
> > + /* Initialize the Verbs Reply Queue */
> > + qsize = be32_to_cpu(c2_read32(mmio_regs + C2_REGS_Q1_QSIZE));
> > + msgsize = be32_to_cpu(c2_read32(mmio_regs + C2_REGS_Q1_MSGSIZE));
> > + q1_pages = kmalloc(qsize * msgsize, GFP_KERNEL);
> > + if (!q1_pages) {
> > + err = -ENOMEM;
> > + goto bail1;
> > + }
> > + c2_mq_init(&c2dev->rep_vq,
> > + 1,
> > + qsize,
> > + msgsize,
> > + q1_pages,
> > + mmio_regs + be32_to_cpu(c2_read32(mmio_regs +
> C2_REGS_Q1_SHARED)),
> > + C2_MQ_HOST_TARGET);
> > +
> > + /* Initialize the Asynchronous Event Queue */
> > + qsize = be32_to_cpu(c2_read32(mmio_regs + C2_REGS_Q2_QSIZE));
> > + msgsize = be32_to_cpu(c2_read32(mmio_regs + C2_REGS_Q2_MSGSIZE));
> > + q2_pages = kmalloc(qsize * msgsize, GFP_KERNEL);
> > + if (!q2_pages) {
> > + err = -ENOMEM;
> > + goto bail2;
> > + }
> > + c2_mq_init(&c2dev->aeq,
> > + 2,
> > + qsize,
> > + msgsize,
> > + q2_pages,
> > + mmio_regs + be32_to_cpu(c2_read32(mmio_regs +
> C2_REGS_Q2_SHARED)),
> > + C2_MQ_HOST_TARGET);
> > +
> > + /* Initialize the verbs request allocator */
> > + err = vq_init(c2dev);
> > + if (err) {
> > + goto bail3;
> > + }
> > +
> > + /* Enable interrupts on the adapter */
> > + c2_write32(c2dev->regs + C2_IDIS, 0);
> > +
> > + /* create the WR init message */
> > + err = c2_adapter_init(c2dev);
> > + if (err) {
> > + goto bail4;
> > + }
> > + c2dev->init++;
> > +
> > + /* open an adapter instance */
> > + err = c2_rnic_open(c2dev);
> > + if (err) {
> > + goto bail4;
> > + }
> > +
> > + /* Initialize the PD pool */
> > + err = c2_init_pd_table(c2dev);
> > + if (err)
> > + goto bail5;
> > +
> > + /* Initialize the QP pool */
> > + err = c2_init_qp_table(c2dev);
> > + if (err)
> > + goto bail6;
> > +
> > + /* XXX hardcode an address */
> > + err = c2_setconfig_hack(c2dev);
> > + if (err)
> > + goto bail7;
> > +
> > +#ifdef NETEVENT_NOTIFIER
> > + register_netevent_notifier(&nb);
> > +#endif
> > + return 0;
> > +
> > +bail7:
> > + c2_cleanup_qp_table(c2dev);
> > +bail6:
> > + c2_cleanup_pd_table(c2dev);
> > +bail5:
> > + c2_rnic_close(c2dev);
> > +bail4:
> > + vq_term(c2dev);
> > +bail3:
> > + kfree(q2_pages);
> > +bail2:
> > + kfree(q1_pages);
> > +bail1:
> > + c2_free_mqsp_pool(c2dev->kern_mqsp_pool);
> > +bail0:
> > + vfree(c2dev->qptr_array);
> > +
> > + return err;
> > +}
> > +
> > +/*
> > + * Called by c2_remove to cleanup the RNIC resources.
> > + */
> > +void c2_rnic_term(struct c2_dev* c2dev)
> > +{
> > +#ifdef NETEVENT_NOTIFIER
> > + unregister_netevent_notifier(&nb);
> > +#endif
> > +
> > + /* Close the open adapter instance */
> > + c2_rnic_close(c2dev);
> > +
> > + /* Send the TERM message to the adapter */
> > + c2_adapter_term(c2dev);
> > +
> > + /* Disable interrupts on the adapter */
> > + c2_write32(c2dev->regs + C2_IDIS, 1);
> > +
> > + /* Free the QP pool */
> > + c2_cleanup_qp_table(c2dev);
> > +
> > + /* Free the PD pool */
> > + c2_cleanup_pd_table(c2dev);
> > +
> > + /* Free the verbs request allocator */
> > + vq_term(c2dev);
> > +
> > + /* Free the asynchronous event queue */
> > + kfree(c2dev->aeq.msg_pool);
> > +
> > + /* Free the verbs reply queue */
> > + kfree(c2dev->rep_vq.msg_pool);
> > +
> > + /* Free the MQ shared pointer pool */
> > + c2_free_mqsp_pool(c2dev->kern_mqsp_pool);
> > +
> > + /* Free the qptr_array */
> > + vfree(c2dev->qptr_array);
> > +
> > + return;
> > +}
> > Index: hw/amso1100/c2_vq.h
> > ===================================================================
> > --- hw/amso1100/c2_vq.h (revision 0)
> > +++ hw/amso1100/c2_vq.h (revision 0)
> > @@ -0,0 +1,60 @@
> > +/*
> > + * Copyright (c) 2005 Ammasso, Inc. All rights reserved.
> > + * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
> > + *
> > + * This software is available to you under a choice of one of two
> > + * licenses. You may choose to be licensed under the terms of the GNU
> > + * General Public License (GPL) Version 2, available from the file
> > + * COPYING in the main directory of this source tree, or the
> > + * OpenIB.org BSD license below:
> > + *
> > + * Redistribution and use in source and binary forms, with or
> > + * without modification, are permitted provided that the following
> > + * conditions are met:
> > + *
> > + * - Redistributions of source code must retain the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer.
> > + *
> > + * - Redistributions in binary form must reproduce the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer in the documentation and/or other materials
> > + * provided with the distribution.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> > + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> > + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
> > + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
> > + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
> > + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
> > + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> > + * SOFTWARE.
> > + */
> > +#ifndef _C2_VQ_H_
> > +#define _C2_VQ_H_
> > +#include <linux/sched.h>
> > +
> > +#include "c2.h"
> > +#include "c2_wr.h"
> > +
> > +struct c2_vq_req{
> > + u64 reply_msg; /* ptr to reply msg */
> > + wait_queue_head_t wait_object; /* wait object for vq reqs */
> > + atomic_t reply_ready; /* set when reply is ready */
> > + atomic_t refcnt; /* used to cancel WRs... */
> > +};
> > +
> > +extern int vq_init(struct c2_dev* c2dev);
> > +extern void vq_term(struct c2_dev* c2dev);
> > +
> > +extern struct c2_vq_req* vq_req_alloc(struct c2_dev *c2dev);
> > +extern void vq_req_free(struct c2_dev *c2dev, struct c2_vq_req *req);
> > +extern void vq_req_get(struct c2_dev *c2dev, struct c2_vq_req *req);
> > +extern void vq_req_put(struct c2_dev *c2dev, struct c2_vq_req *req);
> > +extern int vq_send_wr(struct c2_dev *c2dev, ccwr_t *wr);
> > +
> > +extern void* vq_repbuf_alloc(struct c2_dev *c2dev);
> > +extern void vq_repbuf_free(struct c2_dev *c2dev, void *reply);
> > +
> > +extern int vq_wait_for_reply(struct c2_dev *c2dev, struct c2_vq_req
> *req);
> > +#endif /* _C2_VQ_H_ */
> > Index: hw/amso1100/c2_wr.h
> > ===================================================================
> > --- hw/amso1100/c2_wr.h (revision 0)
> > +++ hw/amso1100/c2_wr.h (revision 0)
> > @@ -0,0 +1,1343 @@
> > +/*
> > + * Copyright (c) 2005 Ammasso, Inc. All rights reserved.
> > + * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
> > + *
> > + * This software is available to you under a choice of one of two
> > + * licenses. You may choose to be licensed under the terms of the GNU
> > + * General Public License (GPL) Version 2, available from the file
> > + * COPYING in the main directory of this source tree, or the
> > + * OpenIB.org BSD license below:
> > + *
> > + * Redistribution and use in source and binary forms, with or
> > + * without modification, are permitted provided that the following
> > + * conditions are met:
> > + *
> > + * - Redistributions of source code must retain the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer.
> > + *
> > + * - Redistributions in binary form must reproduce the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer in the documentation and/or other materials
> > + * provided with the distribution.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> > + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> > + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
> > + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
> > + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
> > + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
> > + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> > + * SOFTWARE.
> > + */
> > +#ifndef _CC_WR_H_
> > +#define _CC_WR_H_
> > +#include "cc_types.h"
> > +/*
> > + * WARNING: If you change this file, also bump CC_IVN_BASE
> > + * in common/include/clustercore/cc_ivn.h.
> > + */
> > +
> > +#ifdef CCDEBUG
> > +#define CCWR_MAGIC 0xb07700b0
> > +#endif
> > +
> > +/*
> > + * Build String Length. It must be the same as CC_BUILD_STR_LEN in
> ccil_api.h
> > + */
> > +#define WR_BUILD_STR_LEN 64
> > +
> > +#ifdef _MSC_VER
> > +#define PACKED
> > +#pragma pack(push)
> > +#pragma pack(1)
> > +#define __inline__ __inline
> > +#else
> > +#define PACKED __attribute__ ((packed))
> > +#endif
> > +
> > +/*
> > + * WARNING: All of these structs need to align any 64-bit types on
> > + * 64-bit boundaries! 64-bit types include u64.
> > + */
> > +
> > +/*
> > + * Clustercore Work Request Header. Be sensitive to field layout
> > + * and alignment.
> > + */
> > +typedef struct {
> > + /* wqe_count is part of the cqe. It is put here so the
> > + * adapter can write to it while the wr is pending without
> > + * clobbering part of the wr. This word need not be dma'd
> > + * from the host to adapter by libccil, but we copy it anyway
> > + * to make the memcpy to the adapter better aligned.
> > + */
> > + u32 wqe_count;
> > +
> > + /* Put these fields next so that later 32- and 64-bit
> > + * quantities are naturally aligned.
> > + */
> > + u8 id;
> > + u8 result; /* adapter -> host */
> > + u8 sge_count; /* host -> adapter */
> > + u8 flags; /* host -> adapter */
> > +
> > + u64 context;
> > +#ifdef CCMSGMAGIC
> > + u32 magic;
> > + u32 pad;
> > +#endif
> > +} PACKED ccwr_hdr_t;
> > +
> > +/*
> > + *------------------------ RNIC ------------------------
> > + */
> > +
> > +/*
> > + * WR_RNIC_OPEN
> > + */
> > +
> > +/*
> > + * Flags for the RNIC WRs
> > + */
> > +typedef enum {
> > + RNIC_IRD_STATIC = 0x0001,
> > + RNIC_ORD_STATIC = 0x0002,
> > + RNIC_QP_STATIC = 0x0004,
> > + RNIC_SRQ_SUPPORTED = 0x0008,
> > + RNIC_PBL_BLOCK_MODE = 0x0010,
> > + RNIC_SRQ_MODEL_ARRIVAL = 0x0020,
> > + RNIC_CQ_OVF_DETECTED = 0x0040,
> > + RNIC_PRIV_MODE = 0x0080
> > +} PACKED cc_rnic_flags_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u64 user_context;
> > + u16 flags; /* See cc_rnic_flags_t */
> > + u16 port_num;
> > +} PACKED ccwr_rnic_open_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > +} PACKED ccwr_rnic_open_rep_t;
> > +
> > +typedef union {
> > + ccwr_rnic_open_req_t req;
> > + ccwr_rnic_open_rep_t rep;
> > +} PACKED ccwr_rnic_open_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > +} PACKED ccwr_rnic_query_req_t;
> > +
> > +/*
> > + * WR_RNIC_QUERY
> > + */
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u64 user_context;
> > + u32 vendor_id;
> > + u32 part_number;
> > + u32 hw_version;
> > + u32 fw_ver_major;
> > + u32 fw_ver_minor;
> > + u32 fw_ver_patch;
> > + char fw_ver_build_str[WR_BUILD_STR_LEN];
> > + u32 max_qps;
> > + u32 max_qp_depth;
> > + u32 max_srq_depth;
> > + u32 max_send_sgl_depth;
> > + u32 max_rdma_sgl_depth;
> > + u32 max_cqs;
> > + u32 max_cq_depth;
> > + u32 max_cq_event_handlers;
> > + u32 max_mrs;
> > + u32 max_pbl_depth;
> > + u32 max_pds;
> > + u32 max_global_ird;
> > + u32 max_global_ord;
> > + u32 max_qp_ird;
> > + u32 max_qp_ord;
> > + u32 flags; /* See cc_rnic_flags_t */
> > + u32 max_mws;
> > + u32 pbe_range_low;
> > + u32 pbe_range_high;
> > + u32 max_srqs;
> > + u32 page_size;
> > +} PACKED ccwr_rnic_query_rep_t;
> > +
> > +typedef union {
> > + ccwr_rnic_query_req_t req;
> > + ccwr_rnic_query_rep_t rep;
> > +} PACKED ccwr_rnic_query_t;
> > +
> > +/*
> > + * WR_RNIC_GETCONFIG
> > + */
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 option; /* see cc_getconfig_cmd_t */
> > + u64 reply_buf;
> > + u32 reply_buf_len;
> > +} PACKED ccwr_rnic_getconfig_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 option; /* see cc_getconfig_cmd_t */
> > + u32 count_len; /* length of the number of addresses configured */
> > +} PACKED ccwr_rnic_getconfig_rep_t;
> > +
> > +typedef union {
> > + ccwr_rnic_getconfig_req_t req;
> > + ccwr_rnic_getconfig_rep_t rep;
> > +} PACKED ccwr_rnic_getconfig_t;
> > +
> > +/*
> > + * WR_RNIC_SETCONFIG
> > + */
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 option; /* See cc_setconfig_cmd_t */
> > + /* variable data and pad See cc_netaddr_t and
> > + * cc_route_t
> > + */
> > + u8 data[0];
> > +} PACKED ccwr_rnic_setconfig_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > +} PACKED ccwr_rnic_setconfig_rep_t;
> > +
> > +typedef union {
> > + ccwr_rnic_setconfig_req_t req;
> > + ccwr_rnic_setconfig_rep_t rep;
> > +} PACKED ccwr_rnic_setconfig_t;
> > +
> > +/*
> > + * WR_RNIC_CLOSE
> > + */
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > +} PACKED ccwr_rnic_close_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > +} PACKED ccwr_rnic_close_rep_t;
> > +
> > +typedef union {
> > + ccwr_rnic_close_req_t req;
> > + ccwr_rnic_close_rep_t rep;
> > +} PACKED ccwr_rnic_close_t;
> > +
> > +/*
> > + *------------------------ CQ ------------------------
> > + */
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u64 shared_ht;
> > + u64 user_context;
> > + u64 msg_pool;
> > + u32 rnic_handle;
> > + u32 msg_size;
> > + u32 depth;
> > +} PACKED ccwr_cq_create_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 mq_index;
> > + u32 adapter_shared;
> > + u32 cq_handle;
> > +} PACKED ccwr_cq_create_rep_t;
> > +
> > +typedef union {
> > + ccwr_cq_create_req_t req;
> > + ccwr_cq_create_rep_t rep;
> > +} PACKED ccwr_cq_create_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 cq_handle;
> > + u32 new_depth;
> > + u64 new_msg_pool;
> > +} PACKED ccwr_cq_modify_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > +} PACKED ccwr_cq_modify_rep_t;
> > +
> > +typedef union {
> > + ccwr_cq_modify_req_t req;
> > + ccwr_cq_modify_rep_t rep;
> > +} PACKED ccwr_cq_modify_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 cq_handle;
> > +} PACKED ccwr_cq_destroy_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > +} PACKED ccwr_cq_destroy_rep_t;
> > +
> > +typedef union {
> > + ccwr_cq_destroy_req_t req;
> > + ccwr_cq_destroy_rep_t rep;
> > +} PACKED ccwr_cq_destroy_t;
> > +
> > +/*
> > + *------------------------ PD ------------------------
> > + */
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 pd_id;
> > +} PACKED ccwr_pd_alloc_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > +} PACKED ccwr_pd_alloc_rep_t;
> > +
> > +typedef union {
> > + ccwr_pd_alloc_req_t req;
> > + ccwr_pd_alloc_rep_t rep;
> > +} PACKED ccwr_pd_alloc_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 pd_id;
> > +} PACKED ccwr_pd_dealloc_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > +} PACKED ccwr_pd_dealloc_rep_t;
> > +
> > +typedef union {
> > + ccwr_pd_dealloc_req_t req;
> > + ccwr_pd_dealloc_rep_t rep;
> > +} PACKED ccwr_pd_dealloc_t;
> > +
> > +/*
> > + *------------------------ SRQ ------------------------
> > + */
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u64 shared_ht;
> > + u64 user_context;
> > + u32 rnic_handle;
> > + u32 srq_depth;
> > + u32 srq_limit;
> > + u32 sgl_depth;
> > + u32 pd_id;
> > +} PACKED ccwr_srq_create_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 srq_depth;
> > + u32 sgl_depth;
> > + u32 msg_size;
> > + u32 mq_index;
> > + u32 mq_start;
> > + u32 srq_handle;
> > +} PACKED ccwr_srq_create_rep_t;
> > +
> > +typedef union {
> > + ccwr_srq_create_req_t req;
> > + ccwr_srq_create_rep_t rep;
> > +} PACKED ccwr_srq_create_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 srq_handle;
> > +} PACKED ccwr_srq_destroy_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > +} PACKED ccwr_srq_destroy_rep_t;
> > +
> > +typedef union {
> > + ccwr_srq_destroy_req_t req;
> > + ccwr_srq_destroy_rep_t rep;
> > +} PACKED ccwr_srq_destroy_t;
> > +
> > +/*
> > + *------------------------ QP ------------------------
> > + */
> > +typedef enum {
> > + QP_RDMA_READ = 0x00000001, /* RDMA read enabled? */
> > + QP_RDMA_WRITE = 0x00000002, /* RDMA write enabled? */
> > + QP_MW_BIND = 0x00000004, /* MWs enabled */
> > + QP_ZERO_STAG = 0x00000008, /* enabled? */
> > + QP_REMOTE_TERMINATION = 0x00000010, /* remote end terminated */
> > + QP_RDMA_READ_RESPONSE = 0x00000020 /* Remote RDMA read */
> > + /* enabled? */
> > +} PACKED ccwr_qp_flags_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u64 shared_sq_ht;
> > + u64 shared_rq_ht;
> > + u64 user_context;
> > + u32 rnic_handle;
> > + u32 sq_cq_handle;
> > + u32 rq_cq_handle;
> > + u32 sq_depth;
> > + u32 rq_depth;
> > + u32 srq_handle;
> > + u32 srq_limit;
> > + u32 flags; /* see ccwr_qp_flags_t */
> > + u32 send_sgl_depth;
> > + u32 recv_sgl_depth;
> > + u32 rdma_write_sgl_depth;
> > + u32 ord;
> > + u32 ird;
> > + u32 pd_id;
> > +} PACKED ccwr_qp_create_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 sq_depth;
> > + u32 rq_depth;
> > + u32 send_sgl_depth;
> > + u32 recv_sgl_depth;
> > + u32 rdma_write_sgl_depth;
> > + u32 ord;
> > + u32 ird;
> > + u32 sq_msg_size;
> > + u32 sq_mq_index;
> > + u32 sq_mq_start;
> > + u32 rq_msg_size;
> > + u32 rq_mq_index;
> > + u32 rq_mq_start;
> > + u32 qp_handle;
> > +} PACKED ccwr_qp_create_rep_t;
> > +
> > +typedef union {
> > + ccwr_qp_create_req_t req;
> > + ccwr_qp_create_rep_t rep;
> > +} PACKED ccwr_qp_create_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 qp_handle;
> > +} PACKED ccwr_qp_query_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u64 user_context;
> > + u32 rnic_handle;
> > + u32 sq_depth;
> > + u32 rq_depth;
> > + u32 send_sgl_depth;
> > + u32 rdma_write_sgl_depth;
> > + u32 recv_sgl_depth;
> > + u32 ord;
> > + u32 ird;
> > + u16 qp_state;
> > + u16 flags; /* see ccwr_qp_flags_t */
> > + u32 qp_id;
> > + u32 local_addr;
> > + u32 remote_addr;
> > + u16 local_port;
> > + u16 remote_port;
> > + u32 terminate_msg_length; /* 0 if not present */
> > + u8 data[0];
> > + /* Terminate Message in-line here. */
> > +} PACKED ccwr_qp_query_rep_t;
> > +
> > +typedef union {
> > + ccwr_qp_query_req_t req;
> > + ccwr_qp_query_rep_t rep;
> > +} PACKED ccwr_qp_query_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u64 stream_msg;
> > + u32 stream_msg_length;
> > + u32 rnic_handle;
> > + u32 qp_handle;
> > + u32 next_qp_state;
> > + u32 ord;
> > + u32 ird;
> > + u32 sq_depth;
> > + u32 rq_depth;
> > + u32 llp_ep_handle;
> > +} PACKED ccwr_qp_modify_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 ord;
> > + u32 ird;
> > + u32 sq_depth;
> > + u32 rq_depth;
> > + u32 sq_msg_size;
> > + u32 sq_mq_index;
> > + u32 sq_mq_start;
> > + u32 rq_msg_size;
> > + u32 rq_mq_index;
> > + u32 rq_mq_start;
> > +} PACKED ccwr_qp_modify_rep_t;
> > +
> > +typedef union {
> > + ccwr_qp_modify_req_t req;
> > + ccwr_qp_modify_rep_t rep;
> > +} PACKED ccwr_qp_modify_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 qp_handle;
> > +} PACKED ccwr_qp_destroy_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > +} PACKED ccwr_qp_destroy_rep_t;
> > +
> > +typedef union {
> > + ccwr_qp_destroy_req_t req;
> > + ccwr_qp_destroy_rep_t rep;
> > +} PACKED ccwr_qp_destroy_t;
> > +
> > +/*
> > + * The CCWR_QP_CONNECT msg is posted on the verbs request queue. It can
> > + * only be posted when a QP is in IDLE state. After the connect request is
> > + * submitted to the LLP, the adapter moves the QP to CONNECT_PENDING state.
> > + * No synchronous reply from adapter to this WR. The results of the
> > + * connection are passed back in an async event CCAE_ACTIVE_CONNECT_RESULTS.
> > + * See ccwr_ae_active_connect_results_t.
> > + */
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 qp_handle;
> > + u32 remote_addr;
> > + u16 remote_port;
> > + u16 pad;
> > + u32 private_data_length;
> > + u8 private_data[0]; /* Private data in-line. */
> > +} PACKED ccwr_qp_connect_req_t;
> > +
> > +typedef struct {
> > + ccwr_qp_connect_req_t req;
> > + /* no synchronous reply. */
> > +} PACKED ccwr_qp_connect_t;
> > +
> > +
> > +/*
> > + *------------------------ MM ------------------------
> > + */
> > +
> > +typedef cc_mm_flags_t ccwr_mr_flags_t; /* cc_types.h */
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 pbl_depth;
> > + u32 pd_id;
> > + u32 flags; /* See ccwr_mr_flags_t */
> > +} PACKED ccwr_nsmr_stag_alloc_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 pbl_depth;
> > + u32 stag_index;
> > +} PACKED ccwr_nsmr_stag_alloc_rep_t;
> > +
> > +typedef union {
> > + ccwr_nsmr_stag_alloc_req_t req;
> > + ccwr_nsmr_stag_alloc_rep_t rep;
> > +} PACKED ccwr_nsmr_stag_alloc_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u64 va;
> > + u32 rnic_handle;
> > + u16 flags; /* See ccwr_mr_flags_t */
> > + u8 stag_key;
> > + u8 pad;
> > + u32 pd_id;
> > + u32 pbl_depth;
> > + u32 pbe_size;
> > + u32 fbo;
> > + u32 length;
> > + u32 addrs_length;
> > + /* array of paddrs (must be aligned on a 64bit boundary) */
> > + u64 paddrs[0];
> > +} PACKED ccwr_nsmr_register_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 pbl_depth;
> > + u32 stag_index;
> > +} PACKED ccwr_nsmr_register_rep_t;
> > +
> > +typedef union {
> > + ccwr_nsmr_register_req_t req;
> > + ccwr_nsmr_register_rep_t rep;
> > +} PACKED ccwr_nsmr_register_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 flags; /* See ccwr_mr_flags_t */
> > + u32 stag_index;
> > + u32 addrs_length;
> > + /* array of paddrs (must be aligned on a 64bit boundary) */
> > + u64 paddrs[0];
> > +} PACKED ccwr_nsmr_pbl_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > +} PACKED ccwr_nsmr_pbl_rep_t;
> > +
> > +typedef union {
> > + ccwr_nsmr_pbl_req_t req;
> > + ccwr_nsmr_pbl_rep_t rep;
> > +} PACKED ccwr_nsmr_pbl_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 stag_index;
> > +} PACKED ccwr_mr_query_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u8 stag_key;
> > + u8 pad[3];
> > + u32 pd_id;
> > + u32 flags; /* See ccwr_mr_flags_t */
> > + u32 pbl_depth;
> > +} PACKED ccwr_mr_query_rep_t;
> > +
> > +typedef union {
> > + ccwr_mr_query_req_t req;
> > + ccwr_mr_query_rep_t rep;
> > +} PACKED ccwr_mr_query_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 stag_index;
> > +} PACKED ccwr_mw_query_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u8 stag_key;
> > + u8 pad[3];
> > + u32 pd_id;
> > + u32 flags; /* See ccwr_mr_flags_t */
> > +} PACKED ccwr_mw_query_rep_t;
> > +
> > +typedef union {
> > + ccwr_mw_query_req_t req;
> > + ccwr_mw_query_rep_t rep;
> > +} PACKED ccwr_mw_query_t;
> > +
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 stag_index;
> > +} PACKED ccwr_stag_dealloc_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > +} PACKED ccwr_stag_dealloc_rep_t;
> > +
> > +typedef union {
> > + ccwr_stag_dealloc_req_t req;
> > + ccwr_stag_dealloc_rep_t rep;
> > +} PACKED ccwr_stag_dealloc_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u64 va;
> > + u32 rnic_handle;
> > + u16 flags; /* See ccwr_mr_flags_t */
> > + u8 stag_key;
> > + u8 pad;
> > + u32 stag_index;
> > + u32 pd_id;
> > + u32 pbl_depth;
> > + u32 pbe_size;
> > + u32 fbo;
> > + u32 length;
> > + u32 addrs_length;
> > + u32 pad1;
> > + /* array of paddrs (must be aligned on a 64bit boundary) */
> > + u64 paddrs[0];
> > +} PACKED ccwr_nsmr_reregister_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 pbl_depth;
> > + u32 stag_index;
> > +} PACKED ccwr_nsmr_reregister_rep_t;
> > +
> > +typedef union {
> > + ccwr_nsmr_reregister_req_t req;
> > + ccwr_nsmr_reregister_rep_t rep;
> > +} PACKED ccwr_nsmr_reregister_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u64 va;
> > + u32 rnic_handle;
> > + u16 flags; /* See ccwr_mr_flags_t */
> > + u8 stag_key;
> > + u8 pad;
> > + u32 stag_index;
> > + u32 pd_id;
> > +} PACKED ccwr_smr_register_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 stag_index;
> > +} PACKED ccwr_smr_register_rep_t;
> > +
> > +typedef union {
> > + ccwr_smr_register_req_t req;
> > + ccwr_smr_register_rep_t rep;
> > +} PACKED ccwr_smr_register_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 pd_id;
> > +} PACKED ccwr_mw_alloc_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 stag_index;
> > +} PACKED ccwr_mw_alloc_rep_t;
> > +
> > +typedef union {
> > + ccwr_mw_alloc_req_t req;
> > + ccwr_mw_alloc_rep_t rep;
> > +} PACKED ccwr_mw_alloc_t;
> > +
> > +/*
> > + *------------------------ WRs -----------------------
> > + */
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr; /* Has status and WR Type */
> > +} PACKED ccwr_user_hdr_t;
> > +
> > +/* Completion queue entry. */
> > +typedef struct {
> > + ccwr_hdr_t hdr; /* Has status and WR Type */
> > + u64 qp_user_context;/* cc_user_qp_t * */
> > + u32 qp_state; /* Current QP State */
> > + u32 handle; /* QPID or EP Handle */
> > + u32 bytes_rcvd; /* valid for RECV WCs */
> > + u32 stag;
> > +} PACKED ccwr_ce_t;
> > +
> > +
> > +/*
> > + * Flags used for all post-sq WRs. These must fit in the flags
> > + * field of the ccwr_hdr_t (eight bits).
> > + */
> > +typedef enum {
> > + SQ_SIGNALED = 0x01,
> > + SQ_READ_FENCE = 0x02,
> > + SQ_FENCE = 0x04,
> > +} PACKED cc_sq_flags_t;
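
For what it's worth, a signaled, read-fenced SQ WR would set both bits in the
single-byte header field using the accessor defined near the end of this file
(the wr pointer here is just illustrative):

    c2_wr_set_flags(wr, SQ_SIGNALED | SQ_READ_FENCE);  /* fits in the u8 hdr.flags */
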
> > +
> > +/*
> > + * Common fields for all post-sq WRs. Namely the standard header and a
> > + * secondary header with fields common to all post-sq WRs.
> > + */
> > +typedef struct {
> > + ccwr_user_hdr_t user_hdr;
> > +} PACKED cc_sq_hdr_t;
> > +
> > +/*
> > + * Same as above but for post-rq WRs.
> > + */
> > +typedef struct {
> > + ccwr_user_hdr_t user_hdr;
> > +} PACKED cc_rq_hdr_t;
> > +
> > +/*
> > + * use the same struct for all sends.
> > + */
> > +typedef struct {
> > + cc_sq_hdr_t sq_hdr;
> > + u32 sge_len;
> > + u32 remote_stag;
> > + u8 data[0]; /* SGE array */
> > +} PACKED ccwr_send_req_t, ccwr_send_se_req_t, ccwr_send_inv_req_t, ccwr_send_se_inv_req_t;
> > +
> > +typedef ccwr_ce_t ccwr_send_rep_t;
> > +
> > +typedef union {
> > + ccwr_send_req_t req;
> > + ccwr_send_rep_t rep;
> > +} PACKED ccwr_send_t, ccwr_send_se_t, ccwr_send_inv_t, ccwr_send_se_inv_t;
> > +
> > +typedef struct {
> > + cc_sq_hdr_t sq_hdr;
> > + u64 remote_to;
> > + u32 remote_stag;
> > + u32 sge_len;
> > + u8 data[0]; /* SGE array */
> > +} PACKED ccwr_rdma_write_req_t;
> > +
> > +typedef ccwr_ce_t ccwr_rdma_write_rep_t;
> > +
> > +typedef union {
> > + ccwr_rdma_write_req_t req;
> > + ccwr_rdma_write_rep_t rep;
> > +} PACKED ccwr_rdma_write_t;
> > +
> > +typedef struct {
> > + cc_sq_hdr_t sq_hdr;
> > + u64 local_to;
> > + u64 remote_to;
> > + u32 local_stag;
> > + u32 remote_stag;
> > + u32 length;
> > +} PACKED ccwr_rdma_read_req_t,ccwr_rdma_read_inv_req_t;
> > +
> > +typedef ccwr_ce_t ccwr_rdma_read_rep_t;
> > +
> > +typedef union {
> > + ccwr_rdma_read_req_t req;
> > + ccwr_rdma_read_rep_t rep;
> > +} PACKED ccwr_rdma_read_t, ccwr_rdma_read_inv_t;
> > +
> > +typedef struct {
> > + cc_sq_hdr_t sq_hdr;
> > + u64 va;
> > + u8 stag_key;
> > + u8 pad[3];
> > + u32 mw_stag_index;
> > + u32 mr_stag_index;
> > + u32 length;
> > + u32 flags; /* see ccwr_mr_flags_t; */
> > +} PACKED ccwr_mw_bind_req_t;
> > +
> > +typedef ccwr_ce_t ccwr_mw_bind_rep_t;
> > +
> > +typedef union {
> > + ccwr_mw_bind_req_t req;
> > + ccwr_mw_bind_rep_t rep;
> > +} PACKED ccwr_mw_bind_t;
> > +
> > +typedef struct {
> > + cc_sq_hdr_t sq_hdr;
> > + u64 va;
> > + u8 stag_key;
> > + u8 pad[3];
> > + u32 stag_index;
> > + u32 pbe_size;
> > + u32 fbo;
> > + u32 length;
> > + u32 addrs_length;
> > + /* array of paddrs (must be aligned on a 64bit boundary) */
> > + u64 paddrs[0];
> > +} PACKED ccwr_nsmr_fastreg_req_t;
> > +
> > +typedef ccwr_ce_t ccwr_nsmr_fastreg_rep_t;
> > +
> > +typedef union {
> > + ccwr_nsmr_fastreg_req_t req;
> > + ccwr_nsmr_fastreg_rep_t rep;
> > +} PACKED ccwr_nsmr_fastreg_t;
> > +
> > +typedef struct {
> > + cc_sq_hdr_t sq_hdr;
> > + u8 stag_key;
> > + u8 pad[3];
> > + u32 stag_index;
> > +} PACKED ccwr_stag_invalidate_req_t;
> > +
> > +typedef ccwr_ce_t ccwr_stag_invalidate_rep_t;
> > +
> > +typedef union {
> > + ccwr_stag_invalidate_req_t req;
> > + ccwr_stag_invalidate_rep_t rep;
> > +} PACKED ccwr_stag_invalidate_t;
> > +
> > +typedef union {
> > + cc_sq_hdr_t sq_hdr;
> > + ccwr_send_req_t send;
> > + ccwr_send_se_req_t send_se;
> > + ccwr_send_inv_req_t send_inv;
> > + ccwr_send_se_inv_req_t send_se_inv;
> > + ccwr_rdma_write_req_t rdma_write;
> > + ccwr_rdma_read_req_t rdma_read;
> > + ccwr_mw_bind_req_t mw_bind;
> > + ccwr_nsmr_fastreg_req_t nsmr_fastreg;
> > + ccwr_stag_invalidate_req_t stag_inv;
> > +} PACKED ccwr_sqwr_t;
> > +
> > +
> > +/*
> > + * RQ WRs
> > + */
> > +typedef struct {
> > + cc_rq_hdr_t rq_hdr;
> > + u8 data[0]; /* array of SGEs */
> > +} PACKED ccwr_rqwr_t, ccwr_recv_req_t;
> > +
> > +typedef ccwr_ce_t ccwr_recv_rep_t;
> > +
> > +typedef union {
> > + ccwr_recv_req_t req;
> > + ccwr_recv_rep_t rep;
> > +} PACKED ccwr_recv_t;
> > +
> > +/*
> > + * All AEs start with this header. Most AEs only need to convey the
> > + * information in the header. Some, like LLP connection events, need
> > + * more info. The union typedef ccwr_ae_t has all the possible AEs.
> > + *
> > + * hdr.context is the user_context from the rnic_open WR. NULL if this
> > + * AE is not affiliated with an rnic.
> > + *
> > + * hdr.id is the AE identifier (e.g. CCAE_REMOTE_SHUTDOWN,
> > + * CCAE_LLP_CLOSE_COMPLETE).
> > + *
> > + * resource_type is one of: CC_RES_IND_QP, CC_RES_IND_CQ, CC_RES_IND_SRQ.
> > + *
> > + * user_context is the context passed down when the host created the
> > + * resource.
> > + */
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u64 user_context; /* user context for this res. */
> > + u32 resource_type; /* see cc_resource_indicator_t */
> > + u32 resource; /* handle for resource */
> > + u32 qp_state; /* current QP State */
> > +} PACKED ccwr_ae_hdr_t;
> > +
> > +/*
> > + * After submitting the CCAE_ACTIVE_CONNECT_RESULTS message on the AEQ,
> > + * the adapter moves the QP into RTS state.
> > + */
> > +typedef struct {
> > + ccwr_ae_hdr_t ae_hdr;
> > + u32 laddr;
> > + u32 raddr;
> > + u16 lport;
> > + u16 rport;
> > + u32 private_data_length;
> > + u8 private_data[0]; /* data is in-line in the msg. */
> > +} PACKED ccwr_ae_active_connect_results_t;
> > +
> > +/*
> > + * When connections are established by the stack (and the private data
> > + * MPA frame is received), the adapter will generate an event to the host.
> > + * The details of the connection, any private data, and the new connection
> > + * request handle are passed up via the CCAE_CONNECTION_REQUEST msg on the
> > + * AE queue:
> > + */
> > +typedef struct {
> > + ccwr_ae_hdr_t ae_hdr;
> > + u32 cr_handle; /* connreq handle (sock ptr) */
> > + u32 laddr;
> > + u32 raddr;
> > + u16 lport;
> > + u16 rport;
> > + u32 private_data_length;
> > + u8 private_data[0]; /* data is in-line in the msg. */
> > +} PACKED ccwr_ae_connection_request_t;
> > +
> > +typedef union {
> > + ccwr_ae_hdr_t ae_generic;
> > + ccwr_ae_active_connect_results_t ae_active_connect_results;
> > + ccwr_ae_connection_request_t ae_connection_request;
> > +} PACKED ccwr_ae_t;
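
For reference, here is a rough sketch of how a consumer of these AEs can use the
header fields described above to find the affected object. This is only a sketch
(the real handling lives in c2_ae.c, which is not shown in this hunk); the byte
swapping and the CC_RES_IND_* cases are assumptions based on cc_types.h:

    static void dispatch_ae(struct c2_dev *c2dev, ccwr_ae_t *ae)
    {
            ccwr_ae_hdr_t *hdr = &ae->ae_generic;

            switch (be32_to_cpu(hdr->resource_type)) {
            case CC_RES_IND_QP: {
                    /* user_context is whatever the host passed at create time */
                    struct c2_qp *qp =
                            (struct c2_qp *)(unsigned long)hdr->user_context;
                    /* ... update qp->state from hdr->qp_state, notify the CM ... */
                    break;
            }
            case CC_RES_IND_CQ:
            case CC_RES_IND_SRQ:
            default:
                    break;
            }
    }
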
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u64 hint_count;
> > + u64 q0_host_shared;
> > + u64 q1_host_shared;
> > + u64 q1_host_msg_pool;
> > + u64 q2_host_shared;
> > + u64 q2_host_msg_pool;
> > +} PACKED ccwr_init_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > +} PACKED ccwr_init_rep_t;
> > +
> > +typedef union {
> > + ccwr_init_req_t req;
> > + ccwr_init_rep_t rep;
> > +} PACKED ccwr_init_t;
> > +
> > +/*
> > + * For upgrading flash.
> > + */
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > +} PACKED ccwr_flash_init_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 adapter_flash_buf_offset;
> > + u32 adapter_flash_len;
> > +} PACKED ccwr_flash_init_rep_t;
> > +
> > +typedef union {
> > + ccwr_flash_init_req_t req;
> > + ccwr_flash_init_rep_t rep;
> > +} PACKED ccwr_flash_init_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 len;
> > +} PACKED ccwr_flash_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 status;
> > +} PACKED ccwr_flash_rep_t;
> > +
> > +typedef union {
> > + ccwr_flash_req_t req;
> > + ccwr_flash_rep_t rep;
> > +} PACKED ccwr_flash_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 size;
> > +} PACKED ccwr_buf_alloc_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 offset; /* 0 if mem not available */
> > + u32 size; /* 0 if mem not available */
> > +} PACKED ccwr_buf_alloc_rep_t;
> > +
> > +typedef union {
> > + ccwr_buf_alloc_req_t req;
> > + ccwr_buf_alloc_rep_t rep;
> > +} PACKED ccwr_buf_alloc_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 offset; /* Must match value from alloc */
> > + u32 size; /* Must match value from alloc */
> > +} PACKED ccwr_buf_free_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > +} PACKED ccwr_buf_free_rep_t;
> > +
> > +typedef union {
> > + ccwr_buf_free_req_t req;
> > + ccwr_buf_free_rep_t rep;
> > +} PACKED ccwr_buf_free_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 offset;
> > + u32 size;
> > + u32 type;
> > + u32 flags;
> > +} PACKED ccwr_flash_write_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 status;
> > +} PACKED ccwr_flash_write_rep_t;
> > +
> > +typedef union {
> > + ccwr_flash_write_req_t req;
> > + ccwr_flash_write_rep_t rep;
> > +} PACKED ccwr_flash_write_t;
> > +
> > +/*
> > + * Messages for LLP connection setup.
> > + */
> > +
> > +/*
> > + * Listen Request. This allocates a listening endpoint to allow passive
> > + * connection setup. Newly established LLP connections are passed up
> > + * via an AE. See ccwr_ae_connection_request_t.
> > + */
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u64 user_context; /* returned in AEs. */
> > + u32 rnic_handle;
> > + u32 local_addr; /* local addr, or 0 */
> > + u16 local_port; /* 0 means "pick one" */
> > + u16 pad;
> > + u32 backlog; /* traditional TCP listen backlog */
> > +} PACKED ccwr_ep_listen_create_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 ep_handle; /* handle to new listening ep */
> > + u16 local_port; /* resulting port... */
> > + u16 pad;
> > +} PACKED ccwr_ep_listen_create_rep_t;
> > +
> > +typedef union {
> > + ccwr_ep_listen_create_req_t req;
> > + ccwr_ep_listen_create_rep_t rep;
> > +} PACKED ccwr_ep_listen_create_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 ep_handle;
> > +} PACKED ccwr_ep_listen_destroy_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > +} PACKED ccwr_ep_listen_destroy_rep_t;
> > +
> > +typedef union {
> > + ccwr_ep_listen_destroy_req_t req;
> > + ccwr_ep_listen_destroy_rep_t rep;
> > +} PACKED ccwr_ep_listen_destroy_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 ep_handle;
> > +} PACKED ccwr_ep_query_req_t;
> > +
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 local_addr;
> > + u32 remote_addr;
> > + u16 local_port;
> > + u16 remote_port;
> > +} PACKED ccwr_ep_query_rep_t;
> > +
> > +typedef union {
> > + ccwr_ep_query_req_t req;
> > + ccwr_ep_query_rep_t rep;
> > +} PACKED ccwr_ep_query_t;
> > +
> > +
> > +/*
> > + * The host passes this down to indicate acceptance of a pending iWARP
> > + * connection. The cr_handle was obtained from the CONNECTION_REQUEST
> > + * AE passed up by the adapter. See ccwr_ae_connection_request_t.
> > + */
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 qp_handle; /* QP to bind to this LLP conn */
> > + u32 ep_handle; /* LLP handle to accept */
> > + u32 private_data_length;
> > + u8 private_data[0]; /* data in-line in msg. */
> > +} PACKED ccwr_cr_accept_req_t;
> > +
> > +/*
> > + * adapter sends reply when private data is successfully submitted to
> > + * the LLP.
> > + */
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > +} PACKED ccwr_cr_accept_rep_t;
> > +
> > +typedef union {
> > + ccwr_cr_accept_req_t req;
> > + ccwr_cr_accept_rep_t rep;
> > +} PACKED ccwr_cr_accept_t;
> > +
> > +/*
> > + * The host sends this down if a given iWARP connection request was
> > + * rejected by the consumer. The cr_handle was obtained from a
> > + * previous ccwr_ae_connection_request_t AE sent by the adapter.
> > + */
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > + u32 rnic_handle;
> > + u32 ep_handle; /* LLP handle to reject */
> > +} PACKED ccwr_cr_reject_req_t;
> > +
> > +/*
> > + * Dunno if this is needed, but we'll add it for now. The adapter will
> > + * send the reject_reply after the LLP endpoint has been destroyed.
> > + */
> > +typedef struct {
> > + ccwr_hdr_t hdr;
> > +} PACKED ccwr_cr_reject_rep_t;
> > +
> > +typedef union {
> > + ccwr_cr_reject_req_t req;
> > + ccwr_cr_reject_rep_t rep;
> > +} PACKED ccwr_cr_reject_t;
> > +
> > +/*
> > + * console command. Used to implement a debug console over the verbs
> > + * request and reply queues.
> > + */
> > +
> > +/*
> > + * Console request message. It contains:
> > + * - message hdr with id = CCWR_CONSOLE
> > + * - the physaddr/len of host memory to be used for the reply.
> > + * - the command string, e.g. "netstat -s" or "zoneinfo"
> > + */
> > +typedef struct {
> > + ccwr_hdr_t hdr; /* id = CCWR_CONSOLE */
> > + u64 reply_buf; /* pinned host buf for reply */
> > + u32 reply_buf_len; /* length of reply buffer */
> > + u8 command[0]; /* NUL terminated ascii string */
> > + /* containing the command req */
> > +} PACKED ccwr_console_req_t;
> > +
> > +/*
> > + * flags used in the console reply.
> > + */
> > +typedef enum {
> > + CONS_REPLY_TRUNCATED = 0x00000001 /* reply was truncated */
> > +} PACKED cc_console_flags_t;
> > +
> > +/*
> > + * Console reply message.
> > + * hdr.result contains the cc_status_t error if the reply was _not_ generated,
> > + * or CC_OK if the reply was generated.
> > + */
> > +typedef struct {
> > + ccwr_hdr_t hdr; /* id = CCWR_CONSOLE */
> > + u32 flags; /* see cc_console_flags_t */
> > +} PACKED ccwr_console_rep_t;
> > +
> > +typedef union {
> > + ccwr_console_req_t req;
> > + ccwr_console_rep_t rep;
> > +} PACKED ccwr_console_t;
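
To make the intended usage concrete, a minimal sketch of issuing a console
command could look like the following; the helper name is made up, and a
complete version would allocate a vq_req and wait for the reply exactly as
c2_llp_service_create() does further down:

    /* Sketch only; not part of this patch. */
    static int c2_console_cmd(struct c2_dev *c2dev, const char *cmd,
                              void *reply_buf, u32 reply_buf_len)
    {
            int len = strlen(cmd) + 1;      /* include the NUL terminator */
            ccwr_console_req_t *wr;
            int err;

            wr = kmalloc(sizeof(*wr) + len, GFP_KERNEL);
            if (!wr)
                    return -ENOMEM;

            c2_wr_set_id(wr, CCWR_CONSOLE);
            wr->reply_buf = cpu_to_be64(__pa(reply_buf));   /* pinned host memory */
            wr->reply_buf_len = cpu_to_be32(reply_buf_len);
            memcpy(wr->command, cmd, len);

            err = vq_send_wr(c2dev, (ccwr_t *)wr);  /* post on the verbs request queue */
            kfree(wr);
            return err;
    }
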
> > +
> > +
> > +/*
> > + * Giant union with all WRs. Makes life easier...
> > + */
> > +typedef union {
> > + ccwr_hdr_t hdr;
> > + ccwr_user_hdr_t user_hdr;
> > + ccwr_rnic_open_t rnic_open;
> > + ccwr_rnic_query_t rnic_query;
> > + ccwr_rnic_getconfig_t rnic_getconfig;
> > + ccwr_rnic_setconfig_t rnic_setconfig;
> > + ccwr_rnic_close_t rnic_close;
> > + ccwr_cq_create_t cq_create;
> > + ccwr_cq_modify_t cq_modify;
> > + ccwr_cq_destroy_t cq_destroy;
> > + ccwr_pd_alloc_t pd_alloc;
> > + ccwr_pd_dealloc_t pd_dealloc;
> > + ccwr_srq_create_t srq_create;
> > + ccwr_srq_destroy_t srq_destroy;
> > + ccwr_qp_create_t qp_create;
> > + ccwr_qp_query_t qp_query;
> > + ccwr_qp_modify_t qp_modify;
> > + ccwr_qp_destroy_t qp_destroy;
> > + ccwr_qp_connect_t qp_connect;
> > + ccwr_nsmr_stag_alloc_t nsmr_stag_alloc;
> > + ccwr_nsmr_register_t nsmr_register;
> > + ccwr_nsmr_pbl_t nsmr_pbl;
> > + ccwr_mr_query_t mr_query;
> > + ccwr_mw_query_t mw_query;
> > + ccwr_stag_dealloc_t stag_dealloc;
> > + ccwr_sqwr_t sqwr;
> > + ccwr_rqwr_t rqwr;
> > + ccwr_ce_t ce;
> > + ccwr_ae_t ae;
> > + ccwr_init_t init;
> > + ccwr_ep_listen_create_t ep_listen_create;
> > + ccwr_ep_listen_destroy_t ep_listen_destroy;
> > + ccwr_cr_accept_t cr_accept;
> > + ccwr_cr_reject_t cr_reject;
> > + ccwr_console_t console;
> > + ccwr_flash_init_t flash_init;
> > + ccwr_flash_t flash;
> > + ccwr_buf_alloc_t buf_alloc;
> > + ccwr_buf_free_t buf_free;
> > + ccwr_flash_write_t flash_write;
> > +} PACKED ccwr_t;
> > +
> > +
> > +/*
> > + * Accessors for the wr fields that are packed together tightly to
> > + * reduce the wr message size. The wr arguments are void* so that
> > + * either a ccwr_t*, a ccwr_hdr_t*, or a pointer to any of the types
> > + * in the ccwr_t union can be passed in.
> > + */
> > +static __inline__ u8
> > +c2_wr_get_id(void *wr)
> > +{
> > + return ((ccwr_hdr_t *)wr)->id;
> > +}
> > +static __inline__ void
> > +c2_wr_set_id(void *wr, u8 id)
> > +{
> > + ((ccwr_hdr_t *)wr)->id = id;
> > +}
> > +static __inline__ u8
> > +c2_wr_get_result(void *wr)
> > +{
> > + return ((ccwr_hdr_t *)wr)->result;
> > +}
> > +static __inline__ void
> > +c2_wr_set_result(void *wr, u8 result)
> > +{
> > + ((ccwr_hdr_t *)wr)->result = result;
> > +}
> > +static __inline__ u8
> > +c2_wr_get_flags(void *wr)
> > +{
> > + return ((ccwr_hdr_t *)wr)->flags;
> > +}
> > +static __inline__ void
> > +c2_wr_set_flags(void *wr, u8 flags)
> > +{
> > + ((ccwr_hdr_t *)wr)->flags = flags;
> > +}
> > +static __inline__ u8
> > +c2_wr_get_sge_count(void *wr)
> > +{
> > + return ((ccwr_hdr_t *)wr)->sge_count;
> > +}
> > +static __inline__ void
> > +c2_wr_set_sge_count(void *wr, u8 sge_count)
> > +{
> > + ((ccwr_hdr_t *)wr)->sge_count = sge_count;
> > +}
> > +static __inline__ u32
> > +c2_wr_get_wqe_count(void *wr)
> > +{
> > + return ((ccwr_hdr_t *)wr)->wqe_count;
> > +}
> > +static __inline__ void
> > +c2_wr_set_wqe_count(void *wr, u32 wqe_count)
> > +{
> > + ((ccwr_hdr_t *)wr)->wqe_count = wqe_count;
> > +}
> > +
> > +#undef PACKED
> > +
> > +#ifdef _MSC_VER
> > +#pragma pack(pop)
> > +#endif
> > +
> > +#endif /* _CC_WR_H_ */
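
As a quick illustration of how the packed-header accessors at the end of
cc_wr.h get used by the provider code below (the surrounding c2dev, vq_req,
cm_id and err are assumed to be set up as in c2_llp_service_destroy()):

    ccwr_ep_listen_destroy_req_t wr;

    c2_wr_set_id(&wr, CCWR_EP_LISTEN_DESTROY);  /* packs hdr.id (u8)    */
    c2_wr_set_flags(&wr, 0);                    /* packs hdr.flags (u8) */
    wr.hdr.context = (unsigned long)vq_req;     /* echoed back in the reply */
    wr.rnic_handle = c2dev->adapter_handle;
    wr.ep_handle = cm_id->provider_id;
    err = vq_send_wr(c2dev, (ccwr_t *)&wr);
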
> > Index: hw/amso1100/c2_cm.c
> > ===================================================================
> > --- hw/amso1100/c2_cm.c (revision 0)
> > +++ hw/amso1100/c2_cm.c (revision 0)
> > @@ -0,0 +1,415 @@
> > +/*
> > + * Copyright (c) 2005 Ammasso, Inc. All rights reserved.
> > + * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
> > + *
> > + * This software is available to you under a choice of one of two
> > + * licenses. You may choose to be licensed under the terms of the GNU
> > + * General Public License (GPL) Version 2, available from the file
> > + * COPYING in the main directory of this source tree, or the
> > + * OpenIB.org BSD license below:
> > + *
> > + * Redistribution and use in source and binary forms, with or
> > + * without modification, are permitted provided that the following
> > + * conditions are met:
> > + *
> > + * - Redistributions of source code must retain the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer.
> > + *
> > + * - Redistributions in binary form must reproduce the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer in the documentation and/or other materials
> > + * provided with the distribution.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> > + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> > + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
> > + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
> > + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
> > + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
> > + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> > + * SOFTWARE.
> > + *
> > + */
> > +#include "c2.h"
> > +#include "c2_vq.h"
> > +#include <rdma/iw_cm.h>
> > +
> > +int c2_llp_connect(struct iw_cm_id* cm_id, const void* pdata, u8 pdata_len)
> > +{
> > + struct c2_dev *c2dev = to_c2dev(cm_id->device);
> > + struct c2_qp *qp = to_c2qp(cm_id->qp);
> > + ccwr_qp_connect_req_t *wr; /* variable size needs a malloc. */
> > + struct c2_vq_req *vq_req;
> > + int err;
> > +
> > + /*
> > + * only support the max private_data length
> > + */
> > + if (pdata_len > CC_MAX_PRIVATE_DATA_SIZE) {
> > + return -EINVAL;
> > + }
> > +
> > + /*
> > + * Create and send a WR_QP_CONNECT...
> > + */
> > + wr = kmalloc(sizeof(*wr) + pdata_len, GFP_KERNEL);
> > + if (!wr) {
> > + return -ENOMEM;
> > + }
> > +
> > + vq_req = vq_req_alloc(c2dev);
> > + if (!vq_req) {
> > + err = -ENOMEM;
> > + goto bail0;
> > + }
> > +
> > + c2_wr_set_id(wr, CCWR_QP_CONNECT);
> > + wr->hdr.context = 0;
> > + wr->rnic_handle = c2dev->adapter_handle;
> > + wr->qp_handle = qp->adapter_handle;
> > +
> > + wr->remote_addr = cm_id->remote_addr.sin_addr.s_addr;
> > + wr->remote_port = cm_id->remote_addr.sin_port;
> > +
> > + /*
> > + * Move any private data from the caller's buf into
> > + * the WR.
> > + */
> > + if (pdata) {
> > + wr->private_data_length = cpu_to_be32(pdata_len);
> > + memcpy(&wr->private_data[0], pdata, pdata_len);
> > + } else {
> > + wr->private_data_length = 0;
> > + }
> > +
> > + /*
> > + * Send WR to adapter. NOTE: There is no synch reply from
> > + * the adapter.
> > + */
> > + err = vq_send_wr(c2dev, (ccwr_t*)wr);
> > + vq_req_free(c2dev, vq_req);
> > +bail0:
> > + kfree(wr);
> > + return err;
> > +}
> > +
> > +int
> > +c2_llp_service_create(struct iw_cm_id* cm_id, int backlog)
> > +{
> > + struct c2_dev *c2dev;
> > + ccwr_ep_listen_create_req_t wr;
> > + ccwr_ep_listen_create_rep_t *reply;
> > + struct c2_vq_req *vq_req;
> > + int err;
> > +
> > + c2dev = to_c2dev(cm_id->device);
> > + if (c2dev == NULL)
> > + return -EINVAL;
> > +
> > + /*
> > + * Allocate verbs request.
> > + */
> > + vq_req = vq_req_alloc(c2dev);
> > + if (!vq_req)
> > + return -ENOMEM;
> > +
> > + /*
> > + * Build the WR
> > + */
> > + c2_wr_set_id(&wr, CCWR_EP_LISTEN_CREATE);
> > + wr.hdr.context = (u64)(unsigned long)vq_req;
> > + wr.rnic_handle = c2dev->adapter_handle;
> > + wr.local_addr = cm_id->local_addr.sin_addr.s_addr;
> > + wr.local_port = cm_id->local_addr.sin_port;
> > + wr.backlog = cpu_to_be32(backlog);
> > + wr.user_context = (u64)(unsigned long)cm_id;
> > +
> > + /*
> > + * Reference the request struct. Dereferenced in the int handler.
> > + */
> > + vq_req_get(c2dev, vq_req);
> > +
> > + /*
> > + * Send WR to adapter
> > + */
> > + err = vq_send_wr(c2dev, (ccwr_t*)&wr);
> > + if (err) {
> > + vq_req_put(c2dev, vq_req);
> > + goto bail0;
> > + }
> > +
> > + /*
> > + * Wait for reply from adapter
> > + */
> > + err = vq_wait_for_reply(c2dev, vq_req);
> > + if (err) {
> > + goto bail0;
> > + }
> > +
> > + /*
> > + * Process reply
> > + */
> > + reply = (ccwr_ep_listen_create_rep_t*)(unsigned long)vq_req->reply_msg;
> > + if (!reply) {
> > + err = -ENOMEM;
> > + goto bail1;
> > + }
> > +
> > + if ( (err = c2_errno(reply)) != 0) {
> > + goto bail1;
> > + }
> > +
> > + /*
> > + * get the adapter handle
> > + */
> > + cm_id->provider_id = reply->ep_handle;
> > +
> > + /*
> > + * free vq stuff
> > + */
> > + vq_repbuf_free(c2dev, reply);
> > + vq_req_free(c2dev, vq_req);
> > +
> > + return 0;
> > +
> > +bail1:
> > + vq_repbuf_free(c2dev, reply);
> > +bail0:
> > + vq_req_free(c2dev, vq_req);
> > + return err;
> > +}
> > +
> > +
> > +int
> > +c2_llp_service_destroy(struct iw_cm_id* cm_id)
> > +{
> > +
> > + struct c2_dev *c2dev;
> > + ccwr_ep_listen_destroy_req_t wr;
> > + ccwr_ep_listen_destroy_rep_t *reply;
> > + struct c2_vq_req *vq_req;
> > + int err;
> > +
> > + c2dev = to_c2dev(cm_id->device);
> > + if (c2dev == NULL)
> > + return -EINVAL;
> > +
> > + /*
> > + * Allocate verbs request.
> > + */
> > + vq_req = vq_req_alloc(c2dev);
> > + if (!vq_req) {
> > + return -ENOMEM;
> > + }
> > +
> > + /*
> > + * Build the WR
> > + */
> > + c2_wr_set_id(&wr, CCWR_EP_LISTEN_DESTROY);
> > + wr.hdr.context = (unsigned long)vq_req;
> > + wr.rnic_handle = c2dev->adapter_handle;
> > + wr.ep_handle = cm_id->provider_id;
> > +
> > + /*
> > + * reference the request struct. dereferenced in the int handler.
> > + */
> > + vq_req_get(c2dev, vq_req);
> > +
> > + /*
> > + * Send WR to adapter
> > + */
> > + err = vq_send_wr(c2dev, (ccwr_t*)&wr);
> > + if (err) {
> > + vq_req_put(c2dev, vq_req);
> > + goto bail0;
> > + }
> > +
> > + /*
> > + * Wait for reply from adapter
> > + */
> > + err = vq_wait_for_reply(c2dev, vq_req);
> > + if (err) {
> > + goto bail0;
> > + }
> > +
> > + /*
> > + * Process reply
> > + */
> > + reply = (ccwr_ep_listen_destroy_rep_t*)(unsigned long)vq_req->reply_msg;
> > + if (!reply) {
> > + err = -ENOMEM;
> > + goto bail0;
> > + }
> > + if ( (err = c2_errno(reply)) != 0) {
> > + goto bail1;
> > + }
> > +
> > +bail1:
> > + vq_repbuf_free(c2dev, reply);
> > +bail0:
> > + vq_req_free(c2dev, vq_req);
> > + return err;
> > +}
> > +
> > +
> > +int
> > +c2_llp_accept(struct iw_cm_id* cm_id, const void* pdata, u8 pdata_len)
> > +{
> > + struct c2_dev *c2dev = to_c2dev(cm_id->device);
> > + struct c2_qp *qp = to_c2qp(cm_id->qp);
> > + ccwr_cr_accept_req_t *wr; /* variable length WR */
> > + struct c2_vq_req *vq_req;
> > + ccwr_cr_accept_rep_t *reply; /* VQ Reply msg ptr. */
> > + int err;
> > +
> > + /* Make sure there's a bound QP */
> > + if (qp == 0)
> > + return -EINVAL;
> > +
> > + /*
> > + * only support the max private_data length
> > + */
> > + if (pdata_len > CC_MAX_PRIVATE_DATA_SIZE) {
> > + return -EINVAL;
> > + }
> > +
> > + /*
> > + * Allocate verbs request.
> > + */
> > + vq_req = vq_req_alloc(c2dev);
> > + if (!vq_req) {
> > + return -ENOMEM;
> > + }
> > +
> > + wr = kmalloc(sizeof(*wr) + pdata_len, GFP_KERNEL);
> > + if (!wr) {
> > + err = -ENOMEM;
> > + goto bail0;
> > + }
> > +
> > + /*
> > + * Build the WR
> > + */
> > + c2_wr_set_id(wr, CCWR_CR_ACCEPT);
> > + wr->hdr.context = (unsigned long)vq_req;
> > + wr->rnic_handle = c2dev->adapter_handle;
> > + wr->ep_handle = (u32)cm_id->provider_id;
> > + wr->qp_handle = qp->adapter_handle;
> > + if (pdata) {
> > + wr->private_data_length = cpu_to_be32(pdata_len);
> > + memcpy(&wr->private_data[0], pdata, pdata_len);
> > + } else {
> > + wr->private_data_length = 0;
> > + }
> > +
> > + /*
> > + * reference the request struct. dereferenced in the int handler.
> > + */
> > + vq_req_get(c2dev, vq_req);
> > +
> > + /*
> > + * Send WR to adapter
> > + */
> > + err = vq_send_wr(c2dev, (ccwr_t*)wr);
> > + if (err) {
> > + vq_req_put(c2dev, vq_req);
> > + goto bail1;
> > + }
> > +
> > + /*
> > + * Wait for reply from adapter
> > + */
> > + err = vq_wait_for_reply(c2dev, vq_req);
> > + if (err) {
> > + goto bail1;
> > + }
> > +
> > + /*
> > + * Process reply
> > + */
> > + reply = (ccwr_cr_accept_rep_t*)(unsigned long)vq_req->reply_msg;
> > + if (!reply) {
> > + err = -ENOMEM;
> > + goto bail1;
> > + }
> > +
> > + err = c2_errno(reply);
> > + vq_repbuf_free(c2dev, reply);
> > +
> > +bail1:
> > + kfree(wr);
> > +bail0:
> > + vq_req_free(c2dev, vq_req);
> > + return err;
> > +}
> > +
> > +int
> > +c2_llp_reject(struct iw_cm_id* cm_id, const void* pdata, u8 pdata_len)
> > +{
> > + struct c2_dev *c2dev;
> > + ccwr_cr_reject_req_t wr;
> > + struct c2_vq_req *vq_req;
> > + ccwr_cr_reject_rep_t *reply;
> > + int err;
> > +
> > + c2dev = to_c2dev(cm_id->device);
> > +
> > + /*
> > + * Allocate verbs request.
> > + */
> > + vq_req = vq_req_alloc(c2dev);
> > + if (!vq_req) {
> > + return -ENOMEM;
> > + }
> > +
> > + /*
> > + * Build the WR
> > + */
> > + c2_wr_set_id(&wr, CCWR_CR_REJECT);
> > + wr.hdr.context = (unsigned long)vq_req;
> > + wr.rnic_handle = c2dev->adapter_handle;
> > + wr.ep_handle = (u32)cm_id->provider_id;
> > +
> > + /*
> > + * reference the request struct. dereferenced in the int handler.
> > + */
> > + vq_req_get(c2dev, vq_req);
> > +
> > + /*
> > + * Send WR to adapter
> > + */
> > + err = vq_send_wr(c2dev, (ccwr_t*)&wr);
> > + if (err) {
> > + vq_req_put(c2dev, vq_req);
> > + goto bail0;
> > + }
> > +
> > + /*
> > + * Wait for reply from adapter
> > + */
> > + err = vq_wait_for_reply(c2dev, vq_req);
> > + if (err) {
> > + goto bail0;
> > + }
> > +
> > + /*
> > + * Process reply
> > + */
> > + reply = (ccwr_cr_reject_rep_t*)(unsigned long)vq_req->reply_msg;
> > + if (!reply) {
> > + err = -ENOMEM;
> > + goto bail0;
> > + }
> > + err = c2_errno(reply);
> > +
> > + /*
> > + * free vq stuff
> > + */
> > + vq_repbuf_free(c2dev, reply);
> > +
> > +bail0:
> > + vq_req_free(c2dev, vq_req);
> > + return err;
> > +}
> > +
> > Index: hw/amso1100/c2_provider.h
> > ===================================================================
> > --- hw/amso1100/c2_provider.h (revision 0)
> > +++ hw/amso1100/c2_provider.h (revision 0)
> > @@ -0,0 +1,174 @@
> > +/*
> > + * Copyright (c) 2005 Ammasso, Inc. All rights reserved.
> > + * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
> > + *
> > + * This software is available to you under a choice of one of two
> > + * licenses. You may choose to be licensed under the terms of the GNU
> > + * General Public License (GPL) Version 2, available from the file
> > + * COPYING in the main directory of this source tree, or the
> > + * OpenIB.org BSD license below:
> > + *
> > + * Redistribution and use in source and binary forms, with or
> > + * without modification, are permitted provided that the following
> > + * conditions are met:
> > + *
> > + * - Redistributions of source code must retain the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer.
> > + *
> > + * - Redistributions in binary form must reproduce the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer in the documentation and/or other materials
> > + * provided with the distribution.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> > + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> > + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
> > + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
> > + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
> > + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
> > + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> > + * SOFTWARE.
> > + *
> > + */
> > +
> > +#ifndef C2_PROVIDER_H
> > +#define C2_PROVIDER_H
> > +
> > +#include <rdma/ib_verbs.h>
> > +#include <rdma/ib_pack.h>
> > +
> > +#include "c2_mq.h"
> > +#include <rdma/iw_cm.h>
> > +
> > +#define C2_MPT_FLAG_ATOMIC (1 << 14)
> > +#define C2_MPT_FLAG_REMOTE_WRITE (1 << 13)
> > +#define C2_MPT_FLAG_REMOTE_READ (1 << 12)
> > +#define C2_MPT_FLAG_LOCAL_WRITE (1 << 11)
> > +#define C2_MPT_FLAG_LOCAL_READ (1 << 10)
> > +
> > +struct c2_buf_list {
> > + void *buf;
> > + DECLARE_PCI_UNMAP_ADDR(mapping)
> > +};
> > +
> > +
> > +/* The user context keeps track of objects allocated for a
> > + * particular user-mode client. */
> > +struct c2_ucontext {
> > + struct ib_ucontext ibucontext;
> > +
> > + int index; /* rnic index (minor) */
> > + int port; /* Which GigE port */
> > +
> > + /*
> > + * Shared HT pages for user-accessible MQs.
> > + */
> > + int hthead; /* index of first free entry */
> > + void* htpages; /* kernel vaddr */
> > + int htlen; /* length of htpages memory */
> > + void* htuva; /* user mapped vaddr */
> > + spinlock_t htlock; /* serialize allocation */
> > + u64 adapter_hint_uva; /* Activity FIFO */
> > +};
> > +
> > +struct c2_mtt;
> > +
> > +/* All objects associated with a PD are kept in the
> > + * associated user context if present.
> > + */
> > +struct c2_pd {
> > + struct ib_pd ibpd;
> > + u32 pd_id;
> > + atomic_t sqp_count;
> > +};
> > +
> > +struct c2_mr {
> > + struct ib_mr ibmr;
> > + struct c2_pd *pd;
> > +};
> > +
> > +struct c2_av;
> > +
> > +enum c2_ah_type {
> > + C2_AH_ON_HCA,
> > + C2_AH_PCI_POOL,
> > + C2_AH_KMALLOC
> > +};
> > +
> > +struct c2_ah {
> > + struct ib_ah ibah;
> > +};
> > +
> > +struct c2_cq {
> > + struct ib_cq ibcq;
> > + spinlock_t lock;
> > + atomic_t refcount;
> > + int cqn;
> > + int is_kernel;
> > + wait_queue_head_t wait;
> > +
> > + u32 adapter_handle;
> > + struct c2_mq mq;
> > +};
> > +
> > +struct c2_wq {
> > + spinlock_t lock;
> > +};
> > +struct iw_cm_id;
> > +struct c2_qp {
> > + struct ib_qp ibqp;
> > + struct iw_cm_id* cm_id;
> > + spinlock_t lock;
> > + atomic_t refcount;
> > + wait_queue_head_t wait;
> > + int qpn;
> > +
> > + u32 adapter_handle;
> > + u32 send_sgl_depth;
> > + u32 recv_sgl_depth;
> > + u32 rdma_write_sgl_depth;
> > + u8 state;
> > +
> > + struct c2_mq sq_mq;
> > + struct c2_mq rq_mq;
> > +};
> > +
> > +struct c2_cr_query_attrs {
> > + u32 local_addr;
> > + u32 remote_addr;
> > + u16 local_port;
> > + u16 remote_port;
> > +};
> > +
> > +static inline struct c2_pd *to_c2pd(struct ib_pd *ibpd)
> > +{
> > + return container_of(ibpd, struct c2_pd, ibpd);
> > +}
> > +
> > +static inline struct c2_ucontext *to_c2ucontext(struct ib_ucontext *ibucontext)
> > +{
> > + return container_of(ibucontext, struct c2_ucontext, ibucontext);
> > +}
> > +
> > +static inline struct c2_mr *to_c2mr(struct ib_mr *ibmr)
> > +{
> > + return container_of(ibmr, struct c2_mr, ibmr);
> > +}
> > +
> > +
> > +static inline struct c2_ah *to_c2ah(struct ib_ah *ibah)
> > +{
> > + return container_of(ibah, struct c2_ah, ibah);
> > +}
> > +
> > +static inline struct c2_cq *to_c2cq(struct ib_cq *ibcq)
> > +{
> > + return container_of(ibcq, struct c2_cq, ibcq);
> > +}
> > +
> > +static inline struct c2_qp *to_c2qp(struct ib_qp *ibqp)
> > +{
> > + return container_of(ibqp, struct c2_qp, ibqp);
> > +}
> > +#endif /* C2_PROVIDER_H */
> > Index: hw/amso1100/c2_pd.c
> > ===================================================================
> > --- hw/amso1100/c2_pd.c (revision 0)
> > +++ hw/amso1100/c2_pd.c (revision 0)
> > @@ -0,0 +1,73 @@
> > +/*
> > + * Copyright (c) 2004 Topspin Communications. All rights reserved.
> > + * Copyright (c) 2005 Cisco Systems. All rights reserved.
> > + * Copyright (c) 2005 Mellanox Technologies. All rights reserved.
> > + * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
> > + *
> > + * This software is available to you under a choice of one of two
> > + * licenses. You may choose to be licensed under the terms of the GNU
> > + * General Public License (GPL) Version 2, available from the file
> > + * COPYING in the main directory of this source tree, or the
> > + * OpenIB.org BSD license below:
> > + *
> > + * Redistribution and use in source and binary forms, with or
> > + * without modification, are permitted provided that the following
> > + * conditions are met:
> > + *
> > + * - Redistributions of source code must retain the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer.
> > + *
> > + * - Redistributions in binary form must reproduce the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer in the documentation and/or other materials
> > + * provided with the distribution.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> > + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> > + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
> > + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
> > + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
> > + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
> > + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> > + * SOFTWARE.
> > + */
> > +
> > +#include <linux/init.h>
> > +#include <linux/errno.h>
> > +
> > +#include "c2.h"
> > +#include "c2_provider.h"
> > +
> > +int c2_pd_alloc(struct c2_dev *dev, int privileged, struct c2_pd *pd)
> > +{
> > + int err = 0;
> > +
> > + might_sleep();
> > +
> > + atomic_set(&pd->sqp_count, 0);
> > + pd->pd_id = c2_alloc(&dev->pd_table.alloc);
> > + if (pd->pd_id == -1)
> > + return -ENOMEM;
> > +
> > + return err;
> > +}
> > +
> > +void c2_pd_free(struct c2_dev *dev, struct c2_pd *pd)
> > +{
> > + might_sleep();
> > + c2_free(&dev->pd_table.alloc, pd->pd_id);
> > +}
> > +
> > +int __devinit c2_init_pd_table(struct c2_dev *dev)
> > +{
> > + return c2_alloc_init(&dev->pd_table.alloc,
> > + dev->max_pd,
> > + 0);
> > +}
> > +
> > +void __devexit c2_cleanup_pd_table(struct c2_dev *dev)
> > +{
> > + /* XXX check if any PDs are still allocated? */
> > + c2_alloc_cleanup(&dev->pd_table.alloc);
> > +}
> > Index: hw/amso1100/c2_cq.c
> > ===================================================================
> > --- hw/amso1100/c2_cq.c (revision 0)
> > +++ hw/amso1100/c2_cq.c (revision 0)
> > @@ -0,0 +1,401 @@
> > +/*
> > + * Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
> > + * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
> > + * Copyright (c) 2005 Cisco Systems, Inc. All rights reserved.
> > + * Copyright (c) 2005 Mellanox Technologies. All rights reserved.
> > + * Copyright (c) 2004 Voltaire, Inc. All rights reserved.
> > + * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
> > + *
> > + * This software is available to you under a choice of one of two
> > + * licenses. You may choose to be licensed under the terms of the GNU
> > + * General Public License (GPL) Version 2, available from the file
> > + * COPYING in the main directory of this source tree, or the
> > + * OpenIB.org BSD license below:
> > + *
> > + * Redistribution and use in source and binary forms, with or
> > + * without modification, are permitted provided that the following
> > + * conditions are met:
> > + *
> > + * - Redistributions of source code must retain the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer.
> > + *
> > + * - Redistributions in binary form must reproduce the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer in the documentation and/or other materials
> > + * provided with the distribution.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> > + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> > + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
> > + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
> > + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
> > + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
> > + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> > + * SOFTWARE.
> > + *
> > + */
> > +#include "c2.h"
> > +#include "c2_vq.h"
> > +#include "cc_status.h"
> > +
> > +#define C2_CQ_MSG_SIZE ((sizeof(ccwr_ce_t) + 32-1) & ~(32-1))
> > +
> > +void c2_cq_event(struct c2_dev *c2dev, u32 mq_index)
> > +{
> > + struct c2_cq *cq;
> > +
> > + cq = c2dev->qptr_array[mq_index];
> > +
> > + if (!cq) {
> > + dprintk("Completion event for bogus CQ %08x\n", mq_index);
> > + return;
> > + }
> > +
> > + assert(cq->ibcq.comp_handler);
> > + (*cq->ibcq.comp_handler)(&cq->ibcq, cq->ibcq.cq_context);
> > +}
> > +
> > +void c2_cq_clean(struct c2_dev *c2dev, struct c2_qp *qp, u32 mq_index)
> > +{
> > + struct c2_cq *cq;
> > + struct c2_mq *q;
> > +
> > + cq = c2dev->qptr_array[mq_index];
> > + if (!cq)
> > + return;
> > +
> > + spin_lock_irq(&cq->lock);
> > +
> > + q = &cq->mq;
> > + if (q && !c2_mq_empty(q)) {
> > + u16 priv = q->priv;
> > + ccwr_ce_t *msg;
> > +
> > + while (priv != cpu_to_be16(*q->shared)) {
> > + msg = (ccwr_ce_t *)(q->msg_pool + priv * q->msg_size);
> > + if (msg->qp_user_context == (u64)(unsigned long)qp) {
> > + msg->qp_user_context = (u64)0;
> > + }
> > + BUMP(q, priv);
> > + }
> > + }
> > +
> > + spin_unlock_irq(&cq->lock);
> > +}
> > +
> > +static inline enum ib_wc_status c2_cqe_status_to_openib(u8 status)
> > +{
> > + switch (status) {
> > + case CC_OK: return IB_WC_SUCCESS;
> > + case CCERR_FLUSHED: return IB_WC_WR_FLUSH_ERR;
> > + case CCERR_BASE_AND_BOUNDS_VIOLATION: return IB_WC_LOC_PROT_ERR;
> > + case CCERR_ACCESS_VIOLATION: return IB_WC_LOC_ACCESS_ERR;
> > + case CCERR_TOTAL_LENGTH_TOO_BIG: return IB_WC_LOC_LEN_ERR;
> > + case CCERR_INVALID_WINDOW: return IB_WC_MW_BIND_ERR;
> > + default: return IB_WC_GENERAL_ERR;
> > + }
> > +}
> > +
> > +
> > +static inline int c2_poll_one(struct c2_dev *c2dev,
> > + struct c2_cq *cq,
> > + struct ib_wc *entry)
> > +{
> > + ccwr_ce_t *ce;
> > + struct c2_qp *qp;
> > + int is_recv = 0;
> > +
> > + ce = (ccwr_ce_t *)c2_mq_consume(&cq->mq);
> > + if (!ce) {
> > + return -EAGAIN;
> > + }
> > +
> > + /*
> > + * if the qp returned is null then this qp has already
> > + * been freed and we are unable to process the completion.
> > + * try pulling the next message
> > + */
> > + while ( (qp = (struct c2_qp *)(unsigned long)ce->qp_user_context) == NULL) {
> > + c2_mq_free(&cq->mq);
> > + ce = (ccwr_ce_t *)c2_mq_consume(&cq->mq);
> > + if (!ce)
> > + return -EAGAIN;
> > + }
> > +
> > + entry->status = c2_cqe_status_to_openib(c2_wr_get_result(ce));
> > + entry->wr_id = ce->hdr.context;
> > + entry->qp_num = ce->handle;
> > + entry->wc_flags = 0;
> > + entry->slid = 0;
> > + entry->sl = 0;
> > + entry->src_qp = 0;
> > + entry->dlid_path_bits = 0;
> > + entry->pkey_index = 0;
> > +
> > + switch (c2_wr_get_id(ce)) {
> > + case CC_WR_TYPE_SEND:
> > + entry->opcode = IB_WC_SEND;
> > + break;
> > + case CC_WR_TYPE_RDMA_WRITE:
> > + entry->opcode = IB_WC_RDMA_WRITE;
> > + break;
> > + case CC_WR_TYPE_RDMA_READ:
> > + entry->opcode = IB_WC_RDMA_READ;
> > + break;
> > + case CC_WR_TYPE_BIND_MW:
> > + entry->opcode = IB_WC_BIND_MW;
> > + break;
> > + case CC_WR_TYPE_RECV:
> > + entry->byte_len = be32_to_cpu(ce->bytes_rcvd);
> > + entry->opcode = IB_WC_RECV;
> > + is_recv = 1;
> > + break;
> > + default:
> > + break;
> > + }
> > +
> > + /* consume the WQEs */
> > + if (is_recv)
> > + c2_mq_lconsume(&qp->rq_mq, 1);
> > + else
> > + c2_mq_lconsume(&qp->sq_mq, be32_to_cpu(c2_wr_get_wqe_count(ce))+1);
> > +
> > + /* free the message */
> > + c2_mq_free(&cq->mq);
> > +
> > + return 0;
> > +}
> > +
> > +int c2_poll_cq(struct ib_cq *ibcq, int num_entries,
> > + struct ib_wc *entry)
> > +{
> > + struct c2_dev *c2dev = to_c2dev(ibcq->device);
> > + struct c2_cq *cq = to_c2cq(ibcq);
> > + unsigned long flags;
> > + int npolled, err;
> > +
> > + spin_lock_irqsave(&cq->lock, flags);
> > +
> > + for (npolled = 0; npolled < num_entries; ++npolled) {
> > +
> > + err = c2_poll_one(c2dev, cq, entry + npolled);
> > + if (err)
> > + break;
> > + }
> > +
> > + spin_unlock_irqrestore(&cq->lock, flags);
> > +
> > + return npolled;
> > +}
> > +
> > +int c2_arm_cq(struct ib_cq *ibcq, enum ib_cq_notify notify)
> > +{
> > + struct c2_mq_shared volatile *shared;
> > + struct c2_cq *cq;
> > +
> > + cq = to_c2cq(ibcq);
> > + shared = cq->mq.peer;
> > +
> > + if (notify == IB_CQ_NEXT_COMP)
> > + shared->notification_type = CC_CQ_NOTIFICATION_TYPE_NEXT;
> > + else if (notify == IB_CQ_SOLICITED)
> > + shared->notification_type = CC_CQ_NOTIFICATION_TYPE_NEXT_SE;
> > + else
> > + return -EINVAL;
> > +
> > + shared->armed = CQ_WAIT_FOR_DMA|CQ_ARMED;
> > +
> > + /*
> > + * Now read back shared->armed to make the PCI
> > + * write synchronous. This is necessary for
> > + * correct cq notification semantics.
> > + */
> > + {
> > + volatile char c;
> > + c = shared->armed;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +static void c2_free_cq_buf(struct c2_mq *mq)
> > +{
> > + int order;
> > +
> > + order = get_order(mq->q_size * mq->msg_size);
> > + free_pages((unsigned long)mq->msg_pool, order);
> > +}
> > +
> > +static int c2_alloc_cq_buf(struct c2_mq *mq, int q_size, int msg_size)
> > +{
> > + unsigned long pool_start;
> > + int order;
> > +
> > + /* __get_free_pages()/free_pages() take an allocation order, not a page count */
> > + order = get_order(q_size * msg_size);
> > +
> > + pool_start = __get_free_pages(GFP_KERNEL, order);
> > + if (!pool_start)
> > + return -ENOMEM;
> > +
> > + c2_mq_init(mq,
> > + 0, /* index (currently unknown) */
> > + q_size,
> > + msg_size,
> > + (u8 *)pool_start,
> > + 0, /* peer (currently unknown) */
> > + C2_MQ_HOST_TARGET);
> > +
> > + return 0;
> > +}
> > +
> > +int c2_init_cq(struct c2_dev *c2dev, int entries,
> > + struct c2_ucontext *ctx, struct c2_cq *cq)
> > +{
> > + ccwr_cq_create_req_t wr;
> > + ccwr_cq_create_rep_t* reply;
> > + unsigned long peer_pa;
> > + struct c2_vq_req *vq_req;
> > + int err;
> > +
> > + might_sleep();
> > +
> > + cq->ibcq.cqe = entries - 1;
> > + cq->is_kernel = !ctx;
> > +
> > + /* Allocate a shared pointer */
> > + cq->mq.shared = c2_alloc_mqsp(c2dev->kern_mqsp_pool);
> > + if (!cq->mq.shared)
> > + return -ENOMEM;
> > +
> > + /* Allocate pages for the message pool */
> > + err = c2_alloc_cq_buf(&cq->mq, entries+1, C2_CQ_MSG_SIZE);
> > + if (err)
> > + goto bail0;
> > +
> > + vq_req = vq_req_alloc(c2dev);
> > + if (!vq_req) {
> > + err = -ENOMEM;
> > + goto bail1;
> > + }
> > +
> > + memset(&wr, 0, sizeof(wr));
> > + c2_wr_set_id(&wr, CCWR_CQ_CREATE);
> > + wr.hdr.context = (unsigned long)vq_req;
> > + wr.rnic_handle = c2dev->adapter_handle;
> > + wr.msg_size = cpu_to_be32(cq->mq.msg_size);
> > + wr.depth = cpu_to_be32(cq->mq.q_size);
> > + wr.shared_ht = cpu_to_be64(__pa(cq->mq.shared));
> > + wr.msg_pool = cpu_to_be64(__pa(cq->mq.msg_pool));
> > + wr.user_context = (u64)(unsigned long)(cq);
> > +
> > + vq_req_get(c2dev, vq_req);
> > +
> > + err = vq_send_wr(c2dev, (ccwr_t*)&wr);
> > + if (err) {
> > + vq_req_put(c2dev, vq_req);
> > + goto bail2;
> > + }
> > +
> > + err = vq_wait_for_reply(c2dev, vq_req);
> > + if (err)
> > + goto bail2;
> > +
> > + reply = (ccwr_cq_create_rep_t*)(unsigned long)(vq_req->reply_msg);
> > + if (!reply) {
> > + err = -ENOMEM;
> > + goto bail2;
> > + }
> > +
> > + if ( (err = c2_errno(reply)) != 0)
> > + goto bail3;
> > +
> > + cq->adapter_handle = reply->cq_handle;
> > + cq->mq.index = be32_to_cpu(reply->mq_index);
> > +
> > + peer_pa = (unsigned long)(c2dev->pa + be32_to_cpu(reply->adapter_shared));
> > + cq->mq.peer = ioremap_nocache(peer_pa, PAGE_SIZE);
> > + if (!cq->mq.peer) {
> > + err = -ENOMEM;
> > + goto bail3;
> > + }
> > +
> > + vq_repbuf_free(c2dev, reply);
> > + vq_req_free(c2dev, vq_req);
> > +
> > + spin_lock_init(&cq->lock);
> > + atomic_set(&cq->refcount, 1);
> > + init_waitqueue_head(&cq->wait);
> > +
> > + /*
> > + * Use the MQ index allocated by the adapter to
> > + * store the CQ in the qptr_array
> > + */
> > + /* XXX qptr_array lock? */
> > + cq->cqn = cq->mq.index;
> > + c2dev->qptr_array[cq->cqn] = cq;
> > +
> > + return 0;
> > +
> > +bail3:
> > + vq_repbuf_free(c2dev, reply);
> > +bail2:
> > + vq_req_free(c2dev, vq_req);
> > +bail1:
> > + c2_free_cq_buf(&cq->mq);
> > +bail0:
> > + c2_free_mqsp(cq->mq.shared);
> > +
> > + return err;
> > +}
> > +
> > +void c2_free_cq(struct c2_dev *c2dev,
> > + struct c2_cq *cq)
> > +{
> > + int err;
> > + struct c2_vq_req *vq_req;
> > + ccwr_cq_destroy_req_t wr;
> > + ccwr_cq_destroy_rep_t *reply;
> > +
> > + might_sleep();
> > +
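> > + /*
> > + * Drop the reference taken in c2_init_cq() and wait for any
> > + * outstanding references to be released before destroying the CQ.
> > + */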
> > + atomic_dec(&cq->refcount);
> > + wait_event(cq->wait, !atomic_read(&cq->refcount));
> > +
> > + vq_req = vq_req_alloc(c2dev);
> > + if (!vq_req) {
> > + goto bail0;
> > + }
> > +
> > + memset(&wr, 0, sizeof(wr));
> > + c2_wr_set_id(&wr, CCWR_CQ_DESTROY);
> > + wr.hdr.context = (unsigned long)vq_req;
> > + wr.rnic_handle = c2dev->adapter_handle;
> > + wr.cq_handle = cq->adapter_handle;
> > +
> > + vq_req_get(c2dev, vq_req);
> > +
> > + err = vq_send_wr(c2dev, (ccwr_t*)&wr);
> > + if (err) {
> > + vq_req_put(c2dev, vq_req);
> > + goto bail1;
> > + }
> > +
> > + err = vq_wait_for_reply(c2dev, vq_req);
> > + if (err)
> > + goto bail1;
> > +
> > + reply = (ccwr_cq_destroy_rep_t*)(unsigned long)(vq_req->reply_msg);
> > + if (reply)
> > + vq_repbuf_free(c2dev, reply);
> > +bail1:
> > + vq_req_free(c2dev, vq_req);
> > +bail0:
> > + if (cq->is_kernel)
> > + c2_free_cq_buf(&cq->mq);
> > +}
> > +
> > Index: hw/amso1100/Makefile
> > ===================================================================
> > --- hw/amso1100/Makefile (revision 0)
> > +++ hw/amso1100/Makefile (revision 0)
> > @@ -0,0 +1,22 @@
> > +EXTRA_CFLAGS += -Idrivers/infiniband/include
> > +
> > +ifdef CONFIG_INFINIBAND_AMSO1100_DEBUG
> > +EXTRA_CFLAGS += -DC2_DEBUG
> > +endif
> > +
> > +obj-$(CONFIG_INFINIBAND_AMSO1100) += iw_c2.o
> > +
> > +iw_c2-y := \
> > + c2.o \
> > + c2_provider.o \
> > + c2_rnic.o \
> > + c2_alloc.o \
> > + c2_mq.o \
> > + c2_ae.o \
> > + c2_vq.o \
> > + c2_intr.o \
> > + c2_cq.o \
> > + c2_qp.o \
> > + c2_cm.o \
> > + c2_mm.o \
> > + c2_pd.o
> > Index: hw/amso1100/c2_mm.c
> > ===================================================================
> > --- hw/amso1100/c2_mm.c (revision 0)
> > +++ hw/amso1100/c2_mm.c (revision 0)
> > @@ -0,0 +1,376 @@
> > +/*
> > + * Copyright (c) 2005 Ammasso, Inc. All rights reserved.
> > + * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
> > + *
> > + * This software is available to you under a choice of one of two
> > + * licenses. You may choose to be licensed under the terms of the GNU
> > + * General Public License (GPL) Version 2, available from the file
> > + * COPYING in the main directory of this source tree, or the
> > + * OpenIB.org BSD license below:
> > + *
> > + * Redistribution and use in source and binary forms, with or
> > + * without modification, are permitted provided that the following
> > + * conditions are met:
> > + *
> > + * - Redistributions of source code must retain the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer.
> > + *
> > + * - Redistributions in binary form must reproduce the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer in the documentation and/or other materials
> > + * provided with the distribution.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> > + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> > + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
> > + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
> > + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
> > + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
> > + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> > + * SOFTWARE.
> > + */
> > +#include "c2.h"
> > +#include "c2_vq.h"
> > +
> > +#define PBL_VIRT 1
> > +#define PBL_PHYS 2
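> > +/*
> > + * PBL_VIRT: va is one virtually contiguous buffer whose pages must be
> > + * translated to physical addresses.
> > + * PBL_PHYS: va points to an array of physical page addresses that are
> > + * copied into the WR as-is.
> > + */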
> > +
> > +/*
> > + * Send all the PBL messages to convey the remainder of the PBL.
> > + * Wait for the adapter's reply on the last one.
> > + * This is indicated by setting MEM_PBL_COMPLETE in the flags.
> > + *
> > + * NOTE: vq_req is _not_ freed by this function. The VQ Host
> > + * Reply buffer _is_ freed by this function.
> > + */
> > +static int
> > +send_pbl_messages(struct c2_dev *c2dev, u32 stag_index,
> > + unsigned long va, u32 pbl_depth,
> > + struct c2_vq_req *vq_req, int pbl_type)
> > +{
> > + u32 pbe_count; /* amt that fits in a PBL msg */
> > + u32 count; /* amt in this PBL MSG. */
> > + ccwr_nsmr_pbl_req_t *wr; /* PBL WR ptr */
> > + ccwr_nsmr_pbl_rep_t *reply; /* reply ptr */
> > + int err, pbl_virt, i;
> > +
> > + switch (pbl_type) {
> > + case PBL_VIRT:
> > + pbl_virt = 1;
> > + break;
> > + case PBL_PHYS:
> > + pbl_virt = 0;
> > + break;
> > + default:
> > + return -EINVAL;
> > + }
> > +
> > + pbe_count = (c2dev->req_vq.msg_size -
> > + sizeof(ccwr_nsmr_pbl_req_t)) / sizeof(u64);
> > + wr = kmalloc(c2dev->req_vq.msg_size, GFP_KERNEL);
> > + if (!wr) {
> > + return -ENOMEM;
> > + }
> > + c2_wr_set_id(wr, CCWR_NSMR_PBL);
> > +
> > + /*
> > + * Only the last PBL message will generate a reply from the verbs,
> > + * so we set the context to 0 indicating there is no kernel verbs
> > + * handler blocked awaiting this reply.
> > + */
> > + wr->hdr.context = 0;
> > + wr->rnic_handle = c2dev->adapter_handle;
> > + wr->stag_index = stag_index; /* already swapped */
> > + wr->flags = 0;
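> > + /*
> > + * Send the PBL in chunks of at most pbe_count entries per message.
> > + * Only the final chunk sets MEM_PBL_COMPLETE and carries our vq_req
> > + * as the context, so the adapter's reply targets it.
> > + */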
> > + while (pbl_depth) {
> > + count = min(pbe_count, pbl_depth);
> > + wr->addrs_length = cpu_to_be32(count);
> > +
> > + /*
> > + * If this is the last message, then reference the
> > + * vq request struct because we are going to wait for a reply.
> > + * Also mark this PBL msg as the last one.
> > + */
> > + if (count == pbl_depth) {
> > + /*
> > + * reference the request struct. dereferenced in the
> > + * int handler.
> > + */
> > + vq_req_get(c2dev, vq_req);
> > + wr->flags = cpu_to_be32(MEM_PBL_COMPLETE);
> > +
> > + /*
> > + * This is the last PBL message.
> > + * Set the context to our VQ Request Object so we can
> > + * wait for the reply.
> > + */
> > + wr->hdr.context = (unsigned long)vq_req;
> > + }
> > +
> > + /*
> > + * If pbl_virt is set, then va is a virtual address that describes a
> > + * virtually contiguous memory allocation. The wr needs the start of
> > + * each virtual page to be converted to the corresponding physical
> > + * address of the page.
> > + *
> > + * If pbl_virt is not set, then va is an array of physical addresses
> > + * and there is no conversion to do. Just fill in the wr with what is
> > + * in the array.
> > + */
> > + for (i = 0; i < count; i++) {
> > + if (pbl_virt) {
> > + /* XXX */ //wr->paddrs[i] = cpu_to_be64(user_virt_to_phys(va));
> > + va += PAGE_SIZE;
> > + } else {
> > + wr->paddrs[i] = cpu_to_be64((u64)(unsigned long)((void **)va)[i]);
> > + }
> > + }
> > +
> > + /*
> > + * Send WR to adapter
> > + */
> > + err = vq_send_wr(c2dev, (ccwr_t*)wr);
> > + if (err) {
> > + if (count <= pbe_count) {
> > + vq_req_put(c2dev, vq_req);
> > + }
> > + goto bail0;
> > + }
> > + pbl_depth -= count;
> > + }
> > +
> > + /*
> > + * Now wait for the reply...
> > + */
> > + err = vq_wait_for_reply(c2dev, vq_req);
> > + if (err) {
> > + goto bail0;
> > + }
> > +
> > + /*
> > + * Process reply
> > + */
> > + reply = (ccwr_nsmr_pbl_rep_t*)(unsigned long)vq_req->reply_msg;
> > + if (!reply) {
> > + err = -ENOMEM;
> > + goto bail0;
> > + }
> > +
> > + err = c2_errno(reply);
> > +
> > + vq_repbuf_free(c2dev, reply);
> > +bail0:
> > + kfree(wr);
> > + return err;
> > +}
> > +
> > +#define CC_PBL_MAX_DEPTH 131072
> > +int
> > +c2_nsmr_register_phys_kern(struct c2_dev *c2dev, u64 **addr_list,
> > + int pbl_depth, u32 length, u64 *va,
> > + cc_acf_t acf, struct c2_mr *mr)
> > +{
> > + struct c2_vq_req *vq_req;
> > + ccwr_nsmr_register_req_t *wr;
> > + ccwr_nsmr_register_rep_t *reply;
> > + u16 flags;
> > + int i, pbe_count, count;
> > + int err;
> > +
> > + if (!va || !length || !addr_list || !pbl_depth)
> > + return -EINTR;
> > +
> > + /*
> > + * Verify PBL depth is within rnic max
> > + */
> > + if (pbl_depth > CC_PBL_MAX_DEPTH) {
> > + return -EINTR;
> > + }
> > +
> > + /*
> > + * allocate verbs request object
> > + */
> > + vq_req = vq_req_alloc(c2dev);
> > + if (!vq_req)
> > + return -ENOMEM;
> > +
> > + wr = kmalloc(c2dev->req_vq.msg_size, GFP_KERNEL);
> > + if (!wr) {
> > + err = -ENOMEM;
> > + goto bail0;
> > + }
> > +
> > + /*
> > + * build the WR
> > + */
> > + c2_wr_set_id(wr, CCWR_NSMR_REGISTER);
> > + wr->hdr.context = (unsigned long)vq_req;
> > + wr->rnic_handle = c2dev->adapter_handle;
> > +
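> > + /*
> > + * Request VA-based addressing and remote access (MEM_VA_BASED |
> > + * MEM_REMOTE) in addition to the caller's access flags (acf).
> > + */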
> > + flags = (acf | MEM_VA_BASED | MEM_REMOTE);
> > +
> > + /*
> > + * compute how many pbes can fit in the message
> > + */
> > + pbe_count = (c2dev->req_vq.msg_size -
> > + sizeof(ccwr_nsmr_register_req_t)) /
> > + sizeof(u64);
> > +
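> > + /*
> > + * The first min(pbl_depth, pbe_count) PBEs are carried inline in
> > + * this register request; any remainder is streamed to the adapter
> > + * afterwards via send_pbl_messages().
> > + */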
> > + if (pbl_depth <= pbe_count) {
> > + flags |= MEM_PBL_COMPLETE;
> > + }
> > + wr->flags = cpu_to_be16(flags);
> > + wr->stag_key = 0; //stag_key;
> > + wr->va = cpu_to_be64(*va);
> > + wr->pd_id = mr->pd->pd_id;
> > + wr->pbe_size = cpu_to_be32(PAGE_SIZE);
> > + wr->length = cpu_to_be32(length);
> > + wr->pbl_depth = cpu_to_be32(pbl_depth);
> > + wr->fbo = cpu_to_be32(0);
> > + count = min(pbl_depth, pbe_count);
> > + wr->addrs_length = cpu_to_be32(count);
> > +
> > + /*
> > + * fill out the PBL for this message
> > + */
> > + for (i = 0; i < count; i++) {
> > + wr->paddrs[i] = cpu_to_be64((u64)(unsigned long)addr_list[i]);
> > + }
> > +
> > + /*
> > + * reference the request struct
> > + */
> > + vq_req_get(c2dev, vq_req);
> > +
> > + /*
> > + * send the WR to the adapter
> > + */
> > + err = vq_send_wr(c2dev, (ccwr_t *)wr);
> > + if (err) {
> > + vq_req_put(c2dev, vq_req);
> > + goto bail1;
> > + }
> > +
> > + /*
> > + * wait for reply from adapter
> > + */
> > + err = vq_wait_for_reply(c2dev, vq_req);
> > + if (err) {
> > + goto bail1;
> > + }
> > +
> > + /*
> > + * process reply
> > + */
> > + reply = (ccwr_nsmr_register_rep_t *)(unsigned long)(vq_req->reply_msg);
> > + if (!reply) {
> > + err = -ENOMEM;
> > + goto bail1;
> > + }
> > + if ( (err = c2_errno(reply))) {
> > + goto bail2;
> > + }
> > + //*p_pb_entries = be32_to_cpu(reply->pbl_depth);
> > + mr->ibmr.lkey = mr->ibmr.rkey = be32_to_cpu(reply->stag_index);
> > + vq_repbuf_free(c2dev, reply);
> > +
> > + /*
> > + * if there are still more PBEs we need to send them to
> > + * the adapter and wait for a reply on the final one.
> > + * reuse vq_req for this purpose.
> > + */
> > + pbl_depth -= count;
> > + if (pbl_depth) {
> > +
> > + vq_req->reply_msg = (unsigned long)NULL;
> > + atomic_set(&vq_req->reply_ready, 0);
> > + err = send_pbl_messages(c2dev,
> > + cpu_to_be32(mr->ibmr.lkey),
> > + (unsigned long)&addr_list[i],
> > + pbl_depth, vq_req, PBL_PHYS);
> > + if (err) {
> > + goto bail1;
> > + }
> > + }
> > +
> > + vq_req_free(c2dev, vq_req);
> > + kfree(wr);
> > +
> > + return err;
> > +
> > +bail2:
> > + vq_repbuf_free(c2dev, reply);
> > +bail1:
> > + kfree(wr);
> > +bail0:
> > + vq_req_free(c2dev, vq_req);
> > + return err;
> > +}
> > +
> > +int
> > +c2_stag_dealloc(struct c2_dev *c2dev, u32 stag_index)
> > +{
> > + struct c2_vq_req *vq_req; /* verbs request object */
> > + ccwr_stag_dealloc_req_t wr; /* work request */
> > + ccwr_stag_dealloc_rep_t *reply; /* WR reply */
> > + int err;
> > +
> > +
> > + /*
> > + * allocate verbs request object
> > + */
> > + vq_req = vq_req_alloc(c2dev);
> > + if (!vq_req) {
> > + return -ENOMEM;
> > + }
> > +
> > + /*
> > + * Build the WR
> > + */
> > + c2_wr_set_id(&wr, CCWR_STAG_DEALLOC);
> > + wr.hdr.context = (u64)(unsigned long)vq_req;
> > + wr.rnic_handle = c2dev->adapter_handle;
> > + wr.stag_index = cpu_to_be32(stag_index);
> > +
> > + /*
> > + * reference the request struct. dereferenced in the int handler.
> > + */
> > + vq_req_get(c2dev, vq_req);
> > +
> > + /*
> > + * Send WR to adapter
> > + */
> > + err = vq_send_wr(c2dev, (ccwr_t*)&wr);
> > + if (err) {
> > + vq_req_put(c2dev, vq_req);
> > + goto bail0;
> > + }
> > +
> > + /*
> > + * Wait for reply from adapter
> > + */
> > + err = vq_wait_for_reply(c2dev, vq_req);
> > + if (err) {
> > + goto bail0;
> > + }
> > +
> > + /*
> > + * Process reply
> > + */
> > + reply = (ccwr_stag_dealloc_rep_t*)(unsigned long)vq_req->reply_msg;
> > + if (!reply) {
> > + err = -ENOMEM;
> > + goto bail0;
> > + }
> > +
> > + err = c2_errno(reply);
> > +
> > + vq_repbuf_free(c2dev, reply);
> > +bail0:
> > + vq_req_free(c2dev, vq_req);
> > + return err;
> > +}
> > +
> > +
> > Index: hw/amso1100/cc_status.h
> > ===================================================================
> > --- hw/amso1100/cc_status.h (revision 0)
> > +++ hw/amso1100/cc_status.h (revision 0)
> > @@ -0,0 +1,163 @@
> > +/*
> > + * Copyright (c) 2005 Ammasso, Inc. All rights reserved.
> > + * Copyright (c) 2005 Open Grid Computing, Inc. All rights reserved.
> > + *
> > + * This software is available to you under a choice of one of two
> > + * licenses. You may choose to be licensed under the terms of the GNU
> > + * General Public License (GPL) Version 2, available from the file
> > + * COPYING in the main directory of this source tree, or the
> > + * OpenIB.org BSD license below:
> > + *
> > + * Redistribution and use in source and binary forms, with or
> > + * without modification, are permitted provided that the following
> > + * conditions are met:
> > + *
> > + * - Redistributions of source code must retain the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer.
> > + *
> > + * - Redistributions in binary form must reproduce the above
> > + * copyright notice, this list of conditions and the following
> > + * disclaimer in the documentation and/or other materials
> > + * provided with the distribution.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
> > + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> > + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
> > + * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
> > + * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
> > + * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
> > + * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> > + * SOFTWARE.
> > + */
> > +#ifndef _CC_STATUS_H_
> > +#define _CC_STATUS_H_
> > +
> > +/*
> > + * Verbs Status Codes
> > + */
> > +typedef enum {
> > + CC_OK = 0, /* This must be zero */
> > + CCERR_INSUFFICIENT_RESOURCES = 1,
> > + CCERR_INVALID_MODIFIER = 2,
> > + CCERR_INVALID_MODE = 3,
> > + CCERR_IN_USE = 4,
> > + CCERR_INVALID_RNIC = 5,
> > + CCERR_INTERRUPTED_OPERATION = 6,
> > + CCERR_INVALID_EH = 7,
> > + CCERR_INVALID_CQ = 8,
> > + CCERR_CQ_EMPTY = 9,
> > + CCERR_NOT_IMPLEMENTED = 10,
> > + CCERR_CQ_DEPTH_TOO_SMALL = 11,
> > + CCERR_PD_IN_USE = 12,
> > + CCERR_INVALID_PD = 13,
> > + CCERR_INVALID_SRQ = 14,
> > + CCERR_INVALID_ADDRESS = 15,
> > + CCERR_INVALID_NETMASK = 16,
> > + CCERR_INVALID_QP = 17,
> > + CCERR_INVALID_QP_STATE = 18,
> > + CCERR_TOO_MANY_WRS_POSTED = 19,
> > + CCERR_INVALID_WR_TYPE = 20,
> > + CCERR_INVALID_SGL_LENGTH = 21,
> > + CCERR_INVALID_SQ_DEPTH = 22,
> > + CCERR_INVALID_RQ_DEPTH = 23,
> > + CCERR_INVALID_ORD = 24,
> > + CCERR_INVALID_IRD = 25,
> > + CCERR_QP_ATTR_CANNOT_CHANGE = 26,
> > + CCERR_INVALID_STAG = 27,
> > + CCERR_QP_IN_USE = 28,
> > + CCERR_OUTSTANDING_WRS = 29,
> > + CCERR_STAG_IN_USE = 30,
> > + CCERR_INVALID_STAG_INDEX = 31,
> > + CCERR_INVALID_SGL_FORMAT = 32,
> > + CCERR_ADAPTER_TIMEOUT = 33,
> > + CCERR_INVALID_CQ_DEPTH = 34,
> > + CCERR_INVALID_PRIVATE_DATA_LENGTH = 35,
> > + CCERR_INVALID_EP = 36,
> > + CCERR_MR_IN_USE = CCERR_STAG_IN_USE,
> > + CCERR_FLUSHED = 38,
> > + CCERR_INVALID_WQE = 39,
> > + CCERR_LOCAL_QP_CATASTROPHIC_ERROR = 40,
> > + CCERR_REMOTE_TERMINATION_ERROR = 41,
> > + CCERR_BASE_AND_BOUNDS_VIOLATION = 42,
> > + CCERR_ACCESS_VIOLATION = 43,
> > + CCERR_INVALID_PD_ID = 44,
> > + CCERR_WRAP_ERROR = 45,
> > + CCERR_INV_STAG_ACCESS_ERROR = 46,
> > + CCERR_ZERO_RDMA_READ_RESOURCES = 47,
> > + CCERR_QP_NOT_PRIVILEGED = 48,
> > + CCERR_STAG_STATE_NOT_INVALID = 49,
> > + CCERR_INVALID_PAGE_SIZE = 50,
> > + CCERR_INVALID_BUFFER_SIZE = 51,
> > + CCERR_INVALID_PBE = 52,
> > + CCERR_INVALID_FBO = 53,
> > + CCERR_INVALID_LENGTH = 54,
> > + CCERR_INVALID_ACCESS_RIGHTS = 55,
> > + CCERR_PBL_TOO_BIG = 56,
> > + CCERR_INVALID_VA = 57,
> > + CCERR_INVALID_REGION = 58,
> > + CCERR_INVALID_WINDOW = 59,
> > + CCERR_TOTAL_LENGTH_TOO_BIG = 60,
> > + CCERR_INVALID_QP_ID = 61,
> > + CCERR_ADDR_IN_USE = 62,
> > + CCERR_ADDR_NOT_AVAIL = 63,
> > + CCERR_NET_DOWN = 64,
> > + CCERR_NET_UNREACHABLE = 65,
> > + CCERR_CONN_ABORTED = 66,
> > + CCERR_CONN_RESET = 67,
> > + CCERR_NO_BUFS = 68,
> > + CCERR_CONN_TIMEDOUT = 69,
> > + CCERR_CONN_REFUSED = 70,
> > + CCERR_HOST_UNREACHABLE = 71,
> > + CCERR_INVALID_SEND_SGL_DEPTH = 72,
> > + CCERR_INVALID_RECV_SGL_DEPTH = 73,
> > + CCERR_INVALID_RDMA_WRITE_SGL_DEPTH = 74,
> > + CCERR_INSUFFICIENT_PRIVILEGES = 75,
> > + CCERR_STACK_ERROR = 76,
> > + CCERR_INVALID_VERSION = 77,
> > + CCERR_INVALID_MTU = 78,
> > + CCERR_INVALID_IMAGE = 79,
> > + CCERR_PENDING = 98, /* not an error; used internally by adapter */
> > + CCERR_DEFER = 99, /* not an error; used internally by adapter */
> > + CCERR_FAILED_WRITE = 100,
> > + CCERR_FAILED_ERASE = 101,
> > + CCERR_FAILED_VERIFICATION = 102,
> > + CCERR_NOT_FOUND = 103,
> > +
> > +} cc_status_t;
> > +
> > +/*
> > + * Verbs and Completion Status Code types...
> > + */
> > +typedef cc_status_t cc_verbs_status_t;
> > +typedef cc_status_t cc_wc_status_t;
> > +
> > +/*
> > + * CCAE_ACTIVE_CONNECT_RESULTS status result codes.
> > + */
> > +typedef enum {
> > + CC_CONN_STATUS_SUCCESS = CC_OK,
> > + CC_CONN_STATUS_NO_MEM = CCERR_INSUFFICIENT_RESOURCES,
> > + CC_CONN_STATUS_TIMEDOUT = CCERR_CONN_TIMEDOUT,
> > + CC_CONN_STATUS_REFUSED = CCERR_CONN_REFUSED,
> > + CC_CONN_STATUS_NETUNREACH = CCERR_NET_UNREACHABLE,
> > + CC_CONN_STATUS_HOSTUNREACH = CCERR_HOST_UNREACHABLE,
> > + CC_CONN_STATUS_INVALID_RNIC = CCERR_INVALID_RNIC,
> > + CC_CONN_STATUS_INVALID_QP = CCERR_INVALID_QP,
> > + CC_CONN_STATUS_INVALID_QP_STATE = CCERR_INVALID_QP_STATE,
> > + CC_CONN_STATUS_REJECTED = CCERR_CONN_RESET,
> > +} cc_connect_status_t;
> > +
> > +/*
> > + * Flash programming status codes.
> > + */
> > +typedef enum {
> > + CC_FLASH_STATUS_SUCCESS = 0x0000,
> > + CC_FLASH_STATUS_VERIFY_ERR = 0x0002,
> > + CC_FLASH_STATUS_IMAGE_ERR = 0x0004,
> > + CC_FLASH_STATUS_ECLBS = 0x0400,
> > + CC_FLASH_STATUS_PSLBS = 0x0800,
> > + CC_FLASH_STATUS_VPENS = 0x1000,
> > +} cc_flash_status_t;
> > +
> > +#endif /* _CC_STATUS_H_ */