[openib-general] Proposed device enumeration & async event APIs
Roland Dreier
roland at topspin.com
Sun Sep 12 20:35:33 PDT 2004
Fabian> Do you plan on having an allocation per client, or having
Fabian> an array of allocations that you grow dynamically as
Fabian> needed? I would think the array would be beneficial since
Fabian> you end up with fewer allocations. It does require
Fabian> synchronization and a counter to keep track of the array
Fabian> size. Resize would then just do a malloc, copy, and free.
I allocate per client per device. I just wanted to do the dumbest,
simplest possible thing, given that the number of clients and devices
is probably going to be in the single digits on almost all machines.
However, now that we have an API defined, we can later make whatever
implementation changes are required, painlessly.
If you're curious, here's the actual implementation:
void *ib_get_client_data(struct ib_device *device, struct ib_client *client)
{
	struct ib_client_data *context;
	void *ret = NULL;
	unsigned long flags;

	spin_lock_irqsave(&device->client_data_lock, flags);
	list_for_each_entry(context, &device->client_data_list, list)
		if (context->client == client) {
			ret = context->data;
			break;
		}
	spin_unlock_irqrestore(&device->client_data_lock, flags);

	return ret;
}
int ib_set_client_data(struct ib_device *device, struct ib_client *client,
		       void *data)
{
	struct ib_client_data *context, *new_context;
	unsigned long flags;

	/*
	 * Allocate up front: we can't call kmalloc(GFP_KERNEL) while
	 * holding the spinlock, and dropping the lock between the
	 * search and the list_add() would let two concurrent callers
	 * insert duplicate entries for the same client.
	 */
	new_context = kmalloc(sizeof *new_context, GFP_KERNEL);
	if (!new_context)
		return -ENOMEM;

	new_context->client = client;
	new_context->data   = data;

	spin_lock_irqsave(&device->client_data_lock, flags);
	list_for_each_entry(context, &device->client_data_list, list)
		if (context->client == client) {
			context->data = data;
			spin_unlock_irqrestore(&device->client_data_lock, flags);
			kfree(new_context);
			return 0;
		}
	list_add(&new_context->list, &device->client_data_list);
	spin_unlock_irqrestore(&device->client_data_lock, flags);

	return 0;
}