[ofa-general] NFS-RDMA server runs out of memory

Jon Mason jon at opengridcomputing.com
Thu Mar 26 13:19:05 PDT 2009


On Thu, Mar 26, 2009 at 10:21:37AM -0700, Vu Pham wrote:
> Jon,
>
> nfsrdma client (RHEL 5.2), Arbel or ConnectX HCAs
> nfsrdma server (RHEL 5.2/5.3 or 2.6.27), Arbel HCA
>
> I run connectathon -N 1000 (or fewer passes if the nfsrdma server has less memory),
> and the system runs out of memory and reboots.
>
> Do you see the same behavior with Chelsio?

I see the same behavior on Chelsio, both over RDMA and over TCP.
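
When chasing this kind of OOM it helps to log memory counters alongside the
test, so the exhaustion point shows up in the log rather than only in the
panic trace. A minimal sketch (the cthon04 server-script path and flags here
are assumptions based on the `-N 1000` invocation in the report; adjust
SERVER and the working directory for your setup):

```shell
#!/bin/sh
# Sample a few /proc/meminfo counters every 5 seconds in the background,
# then run the connectathon passes; the last samples before the OOM show
# whether free pages, slab, or dirty pages were being exhausted.
( while sleep 5; do
      date
      grep -E '^(MemFree|Slab|Dirty|Writeback):' /proc/meminfo
  done ) > /tmp/meminfo.log 2>&1 &
MONITOR=$!

# Hypothetical cthon04 invocation mirroring the report; SERVER is the
# nfsrdma server hostname.
./server -a -N 1000 "$SERVER"

kill "$MONITOR"
```

Comparing the TCP and RDMA runs of the same log would also show whether the
growth rate differs between transports or only the trigger point does.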

> Call Trace:
> [<ffffffff800c0b09>] out_of_memory+0x8e/0x2f5
> [<ffffffff8000f263>] __alloc_pages+0x245/0x2ce
> [<ffffffff8002344e>] alloc_page_interleave+0x3d/0x74
> [<ffffffff80012946>] __do_page_cache_readahead+0x95/0x1d9
> [<ffffffff800c572d>] zone_statistics+0x3e/0x6d
> [<ffffffff8003d77c>] ifind_fast+0x47/0x83
> [<ffffffff8002abf6>] iput+0x4b/0x84
> [<ffffffff80032010>] blockable_page_cache_readahead+0x53/0xb2
> [<ffffffff80013aad>] page_cache_readahead+0x13d/0x1af
> [<ffffffff8000be7f>] do_generic_mapping_read+0x126/0x3f8
> [<ffffffff885bb606>] :nfsd:nfsd_read_actor+0x0/0xd9
> [<ffffffff800bfa0c>] generic_file_sendfile+0x4c/0x64
> [<ffffffff885ba3e9>] :nfsd:nfsd_vfs_read+0x20a/0x32f
> [<ffffffff885ba93a>] :nfsd:nfsd_read+0x9c/0xba
> [<ffffffff885c19d9>] :nfsd:nfsd3_proc_read+0x11b/0x161
> [<ffffffff885b6233>] :nfsd:nfsd_dispatch+0xde/0x1b6
> [<ffffffff884c87df>] :sunrpc:svc_process+0x405/0x6de
> [<ffffffff8006459c>] __down_read+0x12/0x92
> [<ffffffff8009dbca>] keventd_create_kthread+0x0/0xc4
> [<ffffffff885b687f>] :nfsd:nfsd+0x1a3/0x274
> [<ffffffff885b66dc>] :nfsd:nfsd+0x0/0x274
> [<ffffffff8003253d>] kthread+0xfe/0x132
> [<ffffffff8005dfb1>] child_rip+0xa/0x11
> [<ffffffff8009dbca>] keventd_create_kthread+0x0/0xc4
> [<ffffffff8003243f>] kthread+0x0/0x132
> [<ffffffff8005dfa7>] child_rip+0x0/0x11
>
> Node 0 DMA per-cpu:
> cpu 0 hot: high 0, batch 1 used:0
> cpu 0 cold: high 0, batch 1 used:0
> cpu 1 hot: high 0, batch 1 used:0
> cpu 1 cold: high 0, batch 1 used:0
> cpu 2 hot: high 0, batch 1 used:0
> cpu 2 cold: high 0, batch 1 used:0
> cpu 3 hot: high 0, batch 1 used:0
> cpu 3 cold: high 0, batch 1 used:0
> Node 0 DMA32 per-cpu:
>
>
> Call Trace:
> [<ffffffff800c0b09>] out_of_memory+0x8e/0x2f5
> [<ffffffff8000f263>] __alloc_pages+0x245/0x2ce
> [<ffffffff80064a81>] _spin_lock_bh+0x9/0x14
> [<ffffffff8002344e>] alloc_page_interleave+0x3d/0x74
> [<ffffffff884d23ba>] :sunrpc:svc_recv+0xc2/0x73f
> [<ffffffff8008ac03>] default_wake_function+0x0/0xe
> [<ffffffff8006459c>] __down_read+0x12/0x92
> [<ffffffff8009dbca>] keventd_create_kthread+0x0/0xc4
> [<ffffffff885b67af>] :nfsd:nfsd+0xd3/0x274
> [<ffffffff885b66dc>] :nfsd:nfsd+0x0/0x274
> [<ffffffff8003253d>] kthread+0xfe/0x132
> [<ffffffff8005dfb1>] child_rip+0xa/0x11
> [<ffffffff8009dbca>] keventd_create_kthread+0x0/0xc4
> [<ffffffff8003243f>] kthread+0x0/0x132
> [<ffffffff8005dfa7>] child_rip+0x0/0x11
>
>
> -vu
>


