| author | Dave Chinner <dchinner@redhat.com> | 2010-09-27 11:09:51 +1000 |
|---|---|---|
| committer | Alex Elder <aelder@sgi.com> | 2010-10-18 15:07:55 -0500 |
| commit | 69b491c214d7fd4d4df972ae5377be99ca3753db (patch) | |
| tree | b0d022080d8da893e525ee6502878424cffbd8c2 /fs/xfs/xfs_mount.c | |
| parent | e3a20c0b02e1704ab115dfa9d012caf0fbc45ed0 (diff) | |
xfs: serialise inode reclaim within an AG
Memory reclaim via shrinkers has a terrible habit of having N+M
concurrent shrinker executions (N = num CPUs, M = num kswapds) all
trying to shrink the same cache. When the cache they are all working
on is protected by a single spinlock, massive contention and
slowdowns occur.
Wrap the per-ag inode caches with a reclaim mutex to serialise
reclaim access to the AG. This will block concurrent reclaim in each
AG but still allow reclaim to scan multiple AGs concurrently. Allow
shrinkers to move on to the next AG if they can't get the lock, and
if no AG can be locked, start blocking on the locks.
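To make the trylock-then-block pattern concrete, here is a minimal
userspace sketch using pthreads. Everything in it (per_ag,
reclaim_lock, reclaim_ag, NUM_AGS) is an illustrative stand-in, not
the XFS implementation; in the kernel the mutex is the
pag_ici_reclaim_lock initialised in the diff below.

```c
/*
 * Minimal sketch of the trylock-then-block pattern described above.
 * Userspace pthreads stand in for the kernel mutex API; all names
 * here are illustrative, not the XFS implementation.
 */
#include <pthread.h>
#include <stdbool.h>

#define NUM_AGS	16

struct per_ag {
	pthread_mutex_t	reclaim_lock;	/* serialises reclaim in this AG */
	/* per-AG inode cache would live here */
};

static struct per_ag ags[NUM_AGS];

static void reclaim_ag(struct per_ag *ag)
{
	/* stand-in for scanning and reclaiming this AG's inodes */
}

static void reclaim_all_ags(void)
{
	bool done[NUM_AGS] = { false };

	/* Pass 1: trylock, so concurrent reclaimers spread across AGs. */
	for (int i = 0; i < NUM_AGS; i++) {
		if (pthread_mutex_trylock(&ags[i].reclaim_lock))
			continue;	/* busy: another reclaimer owns it */
		reclaim_ag(&ags[i]);
		pthread_mutex_unlock(&ags[i].reclaim_lock);
		done[i] = true;
	}

	/* Pass 2: block on the AGs we skipped so reclaim still makes progress. */
	for (int i = 0; i < NUM_AGS; i++) {
		if (done[i])
			continue;
		pthread_mutex_lock(&ags[i].reclaim_lock);
		reclaim_ag(&ags[i]);
		pthread_mutex_unlock(&ags[i].reclaim_lock);
	}
}

int main(void)
{
	for (int i = 0; i < NUM_AGS; i++)
		pthread_mutex_init(&ags[i].reclaim_lock, NULL);
	reclaim_all_ags();
	return 0;
}
```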
To prevent reclaimers from continually scanning the same inodes in
each AG, add a cursor that tracks where the last reclaim got up to
and start from that point on the next reclaim. This should avoid
only ever scanning a small number of inodes at the start of each AG
and not making progress. If we have a non-shrinker based reclaim
pass, ignore the cursor and reset it to zero once we are done.
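A similar sketch of the reclaim cursor, under the same caveat: the
field and helper names (reclaim_cursor, scan_inodes) are hypothetical,
and scan_inodes stands in for the real radix-tree gang lookup over the
AG's inode cache.

```c
/*
 * Sketch of a per-AG reclaim cursor as described above: shrinker
 * passes resume where the last scan stopped; a full (non-shrinker)
 * pass starts at zero and resets the cursor when it is done.
 * Names are hypothetical, not the XFS implementation.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>

struct per_ag {
	pthread_mutex_t	reclaim_lock;	/* serialises reclaim in this AG */
	uint32_t	reclaim_cursor;	/* first inode index of next scan */
};

/*
 * Stand-in for the radix-tree gang lookup: scan up to nr_to_scan
 * inodes starting at index 'first', return the index to resume at.
 */
static uint32_t scan_inodes(struct per_ag *ag, uint32_t first, int nr_to_scan)
{
	return first + (uint32_t)nr_to_scan;
}

void reclaim_ag_inodes(struct per_ag *ag, bool shrinker, int nr_to_scan)
{
	uint32_t first = 0;

	pthread_mutex_lock(&ag->reclaim_lock);
	if (shrinker)
		first = ag->reclaim_cursor;	/* resume, don't rescan */

	first = scan_inodes(ag, first, nr_to_scan);

	/* A full pass scanned everything: restart from zero next time. */
	ag->reclaim_cursor = shrinker ? first : 0;
	pthread_mutex_unlock(&ag->reclaim_lock);
}
```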
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Alex Elder <aelder@sgi.com>
Diffstat (limited to 'fs/xfs/xfs_mount.c')
-rw-r--r-- | fs/xfs/xfs_mount.c | 1 |
1 files changed, 1 insertions, 0 deletions
```diff
diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
index d66e87c7c3a6..59859c343e04 100644
--- a/fs/xfs/xfs_mount.c
+++ b/fs/xfs/xfs_mount.c
@@ -477,6 +477,7 @@ xfs_initialize_perag(
 		pag->pag_agno = index;
 		pag->pag_mount = mp;
 		rwlock_init(&pag->pag_ici_lock);
+		mutex_init(&pag->pag_ici_reclaim_lock);
 		INIT_RADIX_TREE(&pag->pag_ici_root, GFP_ATOMIC);
 		if (radix_tree_preload(GFP_NOFS))
```