[openib-general] [Andrew Morton] inappropriate use of in_atomic()
Hal Rosenstock
halr at voltaire.com
Fri Mar 11 05:17:15 PST 2005
On Thu, 2005-03-10 at 23:50, Roland Dreier wrote:
> drivers/infiniband/core/mad.c is in Andrew's list...
>
> From looking at the code, the best fix I can come up with is just to
> always use GFP_ATOMIC ... worst case we drop a MAD under memory
> pressure.
That could be bad if the memory pressure persists, but I suppose there
would be other ill effects in that case anyway.
> The other option is to change ib_post_send_mad() to take a
> GFP_ mask as a parameter, but that doesn't seem worth doing...
There aren't that many places where this is called. Also, it appears to
me that sa_query.c already does this for some of its memory allocations,
and the mask could be passed down to ib_post_send_mad:
int ib_sa_path_rec_get(struct ib_device *device, u8 port_num,
                       struct ib_sa_path_rec *rec,
                       ib_sa_comp_mask comp_mask,
                       int timeout_ms, int gfp_mask,
                       ...
This approach seems better to me from a robustness standpoint.
Is the difficulty determining what to set the mask to for each call? If
they all end up being GFP_ATOMIC, this reduces to your preferred
solution.
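A rough sketch of that approach (hypothetical stubs standing in for the
kernel types here, so it can be tried in user space; the real change
would thread gfp_mask down to the kmalloc calls in mad.c):

```c
#include <stdlib.h>

/* Stand-ins for the kernel GFP flags (illustrative values only). */
typedef unsigned int gfp_t;
#define GFP_KERNEL 0x1u
#define GFP_ATOMIC 0x2u

struct mad_send_wr { gfp_t alloc_flags; };

/* Hypothetical variant of ib_post_send_mad(): the caller states its
 * allocation context instead of the callee guessing via in_atomic(). */
static int post_send_mad(gfp_t gfp_mask, struct mad_send_wr **out)
{
	/* In the real code this would be kmalloc(sizeof *wr, gfp_mask). */
	struct mad_send_wr *wr = malloc(sizeof *wr);
	if (!wr)
		return -12; /* -ENOMEM */
	wr->alloc_flags = gfp_mask; /* recorded here for illustration */
	*out = wr;
	return 0;
}
```

A sleepable caller (e.g. the sa_query.c paths) would pass GFP_KERNEL,
while callers that may run in atomic context pass GFP_ATOMIC.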
The biggest impact appears to be on CM (at least currently).
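For reference, the failure mode Andrew describes below can be modeled in
a few lines (a toy user-space model of the preempt-count bookkeeping,
not the kernel's actual implementation):

```c
#include <stdbool.h>

/* Toy model: with CONFIG_PREEMPT=n, spin_lock() does not touch the
 * preempt count, so in_atomic() cannot see that a spinlock is held. */
static int preempt_count;

static void spin_lock_model(bool config_preempt)
{
	if (config_preempt)
		preempt_count++; /* only accounted when preemption is on */
}

static bool in_atomic_model(void)
{
	return preempt_count != 0;
}
```

With config_preempt false, the lock is held but in_atomic_model() still
returns false, which is exactly why the in_atomic() checks in mad.c can
pick GFP_KERNEL inside a spinlock and deadlock.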
-- Hal
> --- infiniband/core/mad.c (revision 1975)
> +++ infiniband/core/mad.c (working copy)
> @@ -666,11 +666,7 @@ static int handle_outgoing_dr_smp(struct
> if (!ret || !device->process_mad)
> goto out;
>
> - if (in_atomic() || irqs_disabled())
> - alloc_flags = GFP_ATOMIC;
> - else
> - alloc_flags = GFP_KERNEL;
> - local = kmalloc(sizeof *local, alloc_flags);
> + local = kmalloc(sizeof *local, GFP_ATOMIC);
> if (!local) {
> ret = -ENOMEM;
> printk(KERN_ERR PFX "No memory for ib_mad_local_private\n");
> @@ -860,9 +856,7 @@ int ib_post_send_mad(struct ib_mad_agent
> }
>
> /* Allocate MAD send WR tracking structure */
> - mad_send_wr = kmalloc(sizeof *mad_send_wr,
> - (in_atomic() || irqs_disabled()) ?
> - GFP_ATOMIC : GFP_KERNEL);
> + mad_send_wr = kmalloc(sizeof *mad_send_wr, GFP_ATOMIC);
> if (!mad_send_wr) {
> printk(KERN_ERR PFX "No memory for "
> "ib_mad_send_wr_private\n");
>
>
>
>
> ______________________________________________________________________
>
> From: Andrew Morton <akpm at osdl.org>
> To: Paul Mackerras <paulus at samba.org>, Jean Tourrilhes <jt at bougret.hpl.hp.com>, javier at tudela.mad.ttd.net, linux-fbdev-devel at lists.sourceforge.net, acpi-devel at lists.sourceforge.net, linux1394-devel at lists.sourceforge.net, Roland Dreier <roland at topspin.com>
> Cc: linux-kernel at vger.kernel.org
> Subject: inappropriate use of in_atomic()
> Date: 10 Mar 2005 20:40:06 -0800
>
>
> in_atomic() is not a reliable indication of whether it is currently safe
> to call schedule().
>
> This is because the lockdepth beancounting which in_atomic() uses is only
> accumulated if CONFIG_PREEMPT=y. in_atomic() will return false inside
> spinlocks if CONFIG_PREEMPT=n.
>
> Consequently the use of in_atomic() in the below files is probably
> deadlocky if CONFIG_PREEMPT=n:
>
> arch/ppc64/kernel/viopath.c
> drivers/net/irda/sir_kthread.c
> drivers/net/wireless/airo.c
> drivers/video/amba-clcd.c
> drivers/acpi/osl.c
> drivers/ieee1394/ieee1394_transactions.c
> drivers/infiniband/core/mad.c
>
> Note that the same beancounting is used for the "scheduling while atomic"
> warning, so if the code calls schedule with locks held, we won't get a
> warning. Both are tied to CONFIG_PREEMPT=y.
>
> The kernel provides no reliable runtime way of detecting whether or not it
> is safe to call schedule().
>
> Can we please find ways to change the above code to not use in_atomic()?
> Then we can whack #ifndef MODULE around its definition to reduce
> reoccurrences. Will probably rename it to something more scary as well.
>
> Thanks.
>
>
> ______________________________________________________________________
>
> _______________________________________________
> openib-general mailing list
> openib-general at openib.org
> http://openib.org/mailman/listinfo/openib-general
>
> To unsubscribe, please visit http://openib.org/mailman/listinfo/openib-general