Message ID: 162876946134.3068428.15475611190876694695.stgit@warthog.procyon.org.uk (mailing list archive)
Series: mm: Fix NFS swapfiles and use DIO read for swapfiles
On Thu, Aug 12, 2021 at 12:57:41PM +0100, David Howells wrote:
> Hi Willy, Trond,
>
> Here's a change to make reads from the swapfile use async DIO rather than
> readpage(), as requested by Willy.
>
> Whilst trying to make this work, I found that NFS's support for swapfiles
> seems to have been non-functional since Aug 2019 (I think), so the first
> patch fixes that. Question is: do we actually *want* to keep this
> functionality, given that it seems that no one's tested it with an upstream
> kernel in the last couple of years?

Independent of the NFS use, using the direct I/O code for swap seems like
the right thing to do in general. E.g. for XFS a lookup in the extent btree
will be more efficient than the weird swap extent map.
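For reference, a minimal sketch of what reading a swap page through the
filesystem's ->direct_IO() (rather than ->readpage()) might look like. This
is not the actual patch: it assumes a synchronous kiocb and a single-page
bio_vec iterator, and the helper name is made up for illustration.

#include <linux/fs.h>
#include <linux/swap.h>
#include <linux/uio.h>

/* Hypothetical helper: read one swap page via ->direct_IO(). */
static int swap_page_read_via_dio(struct swap_info_struct *sis,
				  struct page *page, loff_t pos)
{
	struct file *swap_file = sis->swap_file;
	struct address_space *mapping = swap_file->f_mapping;
	struct bio_vec bv = { .bv_page = page, .bv_len = PAGE_SIZE };
	struct iov_iter iter;
	struct kiocb kiocb;
	ssize_t ret;

	/* Describe the destination page as a single-segment bvec iterator. */
	iov_iter_bvec(&iter, READ, &bv, 1, PAGE_SIZE);

	/* Synchronous kiocb for simplicity; async completion is elided. */
	init_sync_kiocb(&kiocb, swap_file);
	kiocb.ki_pos = pos;

	ret = mapping->a_ops->direct_IO(&kiocb, &iter);
	return ret == PAGE_SIZE ? 0 : -EIO;
}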
On Thu, 12 Aug 2021 12:57:41 +0100 David Howells wrote:
> Hi Willy, Trond,
>
> Here's a change to make reads from the swapfile use async DIO rather than
> readpage(), as requested by Willy.
>
> Whilst trying to make this work, I found that NFS's support for swapfiles
> seems to have been non-functional since Aug 2019 (I think), so the first
> patch fixes that. Question is: do we actually *want* to keep this
> functionality, given that it seems that no one's tested it with an upstream
> kernel in the last couple of years?
>
> I tested this using the procedure and program outlined in the first patch.
>
> I also encountered occasional instances of the following warning, so I'm
> wondering if there's a scheduling problem somewhere:
>
> BUG: workqueue lockup - pool cpus=0-3 flags=0x5 nice=0 stuck for 34s!
> Showing busy workqueues and worker pools:
> workqueue events: flags=0x0
>   pwq 6: cpus=3 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
>     in-flight: 1565:fill_page_cache_func
> workqueue events_highpri: flags=0x10
>   pwq 3: cpus=1 node=0 flags=0x1 nice=-20 active=1/256 refcnt=2
>     in-flight: 1547:fill_page_cache_func
>   pwq 1: cpus=0 node=0 flags=0x0 nice=-20 active=1/256 refcnt=2
>     in-flight: 1811:fill_page_cache_func
> workqueue events_unbound: flags=0x2
>   pwq 8: cpus=0-3 flags=0x5 nice=0 active=3/512 refcnt=5
>     pending: fsnotify_connector_destroy_workfn, fsnotify_mark_destroy_workfn, cleanup_offline_cgwbs_workfn
> workqueue events_power_efficient: flags=0x82
>   pwq 8: cpus=0-3 flags=0x5 nice=0 active=4/256 refcnt=6
>     pending: neigh_periodic_work, neigh_periodic_work, check_lifetime, do_cache_clean
> workqueue writeback: flags=0x4a
>   pwq 8: cpus=0-3 flags=0x5 nice=0 active=1/256 refcnt=4
>     in-flight: 433(RESCUER):wb_workfn

Is it a memory tight scenario that got the rescuer active?

> workqueue rpciod: flags=0xa
>   pwq 8: cpus=0-3 flags=0x5 nice=0 active=38/256 refcnt=40
>     in-flight: 7:rpc_async_schedule, 1609:rpc_async_schedule, 1610:rpc_async_schedule, 912:rpc_async_schedule, 1613:rpc_async_schedule, 1631:rpc_async_schedule, 34:rpc_async_schedule, 44:rpc_async_schedule
>     pending: rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule
> workqueue ext4-rsv-conversion: flags=0x2000a
> pool 1: cpus=0 node=0 flags=0x0 nice=-20 hung=59s workers=2 idle: 6
> pool 3: cpus=1 node=0 flags=0x1 nice=-20 hung=43s workers=2 manager: 20
> pool 6: cpus=3 node=0 flags=0x0 nice=0 hung=0s workers=3 idle: 498 29
> pool 8: cpus=0-3 flags=0x5 nice=0 hung=34s workers=9 manager: 1623
> pool 9: cpus=0-3 flags=0x5 nice=-20 hung=0s workers=2 manager: 5224 idle: 859
>
> Note that this is due to DIO writes to NFS only, as far as I can tell, and
> that no reads had happened yet.
>
> David
> ---
> David Howells (2):
>       nfs: Fix write to swapfile failure due to generic_write_checks()
>       mm: Make swap_readpage() for SWP_FS_OPS use ->direct_IO() not ->readpage()
>
>
>  mm/page_io.c | 73 +++++++++++++++++++++++++++++++++++++++++++++++-----
>  1 file changed, 67 insertions(+), 6 deletions(-)

Print memory info to help understand the busy rescuer.

+++ x/kernel/workqueue.c
@@ -4710,12 +4710,16 @@ static void show_pwq(struct pool_workque
 	}
 	if (has_in_flight) {
 		bool comma = false;
+		bool rescuer = false;
 
 		pr_info("    in-flight:");
 		hash_for_each(pool->busy_hash, bkt, worker, hentry) {
 			if (worker->current_pwq != pwq)
 				continue;
 
+			if (worker->rescue_wq)
+				rescuer = true;
+
 			pr_cont("%s %d%s:%ps", comma ? "," : "",
 				task_pid_nr(worker->task),
 				worker->rescue_wq ? "(RESCUER)" : "",
@@ -4725,6 +4729,11 @@ static void show_pwq(struct pool_workque
 			comma = true;
 		}
 		pr_cont("\n");
+		if (rescuer) {
+			pr_cont("\n");
+			show_free_areas(0, NULL);
+			pr_cont("\n");
+		}
 	}
 
 	list_for_each_entry(work, &pool->worklist, entry) {
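For background on the rescuer question: the writeback workqueue is allocated
with WQ_MEM_RECLAIM, which gives it a dedicated rescuer thread that only
processes work items when a pool cannot create new workers, typically under
memory pressure. A minimal sketch of such an allocation follows; the names
here are illustrative and not taken from the thread.

#include <linux/workqueue.h>

/*
 * Illustrative sketch only: a workqueue created with WQ_MEM_RECLAIM gets a
 * dedicated rescuer kthread at allocation time.  The rescuer stays idle
 * until a worker pool cannot fork new workers to process pending items
 * (the "mayday" path, usually hit under memory pressure), which is why
 * "(RESCUER)" appears next to wb_workfn in the lockup report above.
 */
static struct workqueue_struct *example_wq;	/* hypothetical name */

static int __init example_wq_init(void)
{
	example_wq = alloc_workqueue("example", WQ_MEM_RECLAIM | WQ_UNBOUND, 0);
	return example_wq ? 0 : -ENOMEM;
}

If memory pressure is indeed what woke the rescuer here, the
show_free_areas() call added in the diff above should show the free-memory
situation at the moment the rescuer is reported in-flight.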