author:    Steven Whitehouse <swhiteho@redhat.com>    2011-01-19 09:30:01 +0000
committer: Steven Whitehouse <swhiteho@redhat.com>    2011-01-21 09:39:08 +0000
commit:    bc015cb84129eb1451913cfebece270bf7a39e0f (patch)
tree:      4f116a61b802d87ae80051e9ae05d8fcb73d9ae7 /fs/gfs2/lops.c
parent:    2b1caf6ed7b888c95a1909d343799672731651a5 (diff)
GFS2: Use RCU for glock hash table
This has a number of advantages (a sketch of the resulting lookup path follows the list):
- Reduces contention on the hash table lock
- Makes the code smaller and simpler
- Should speed up glock dumps when under load
- Removes ref count changing in examine_bucket
- No longer need hash chain lock in glock_put() in common case
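The read side is what removes the bucket lock from lookups: hash chains are
walked under rcu_read_lock() and a reference is taken with atomic_inc_not_zero()
only once a live match is found. A rough sketch of that pattern follows; the
helper name and the field names (gl_list, gl_ref, gl_sbd) are written from
memory as illustrations, not copied from the patch.

	/* Sketch only: RCU-protected lookup in a bit-locked hash bucket. */
	#include <linux/rculist_bl.h>
	#include "incore.h"	/* struct gfs2_glock, struct lm_lockname */

	static struct gfs2_glock *search_bucket_rcu(struct hlist_bl_head *head,
						    const struct gfs2_sbd *sdp,
						    const struct lm_lockname *name)
	{
		struct gfs2_glock *gl;
		struct hlist_bl_node *pos;

		rcu_read_lock();
		hlist_bl_for_each_entry_rcu(gl, pos, head, gl_list) {
			if (gl->gl_sbd != sdp)
				continue;
			if (gl->gl_name.ln_number != name->ln_number ||
			    gl->gl_name.ln_type != name->ln_type)
				continue;
			/* The glock may already be on its way to being freed;
			 * only return it if the ref count is still non-zero. */
			if (atomic_inc_not_zero(&gl->gl_ref)) {
				rcu_read_unlock();
				return gl;
			}
		}
		rcu_read_unlock();
		return NULL;
	}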
There are some further changes which this enables and which
we may do in the future. One is to look at using SLAB_RCU,
and another is to look at using a per-cpu counter for the
per-sb glock counter, since that is touched twice in the
lifetime of each glock (but only used at umount time).
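The per-cpu counter idea would amount to per-cpu increments and decrements at
glock creation and teardown, with a sum over all CPUs only at umount. A minimal
sketch under that assumption is below; a file-global counter is used for
brevity, whereas a real version would live in the per-sb structure.

	/* Sketch only: per-cpu glock count, summed once at umount time. */
	#include <linux/percpu.h>
	#include <linux/cpumask.h>

	static DEFINE_PER_CPU(long, glock_count);

	static inline void glock_count_inc(void)
	{
		this_cpu_inc(glock_count);	/* glock created */
	}

	static inline void glock_count_dec(void)
	{
		this_cpu_dec(glock_count);	/* glock freed */
	}

	/* Only called at umount, so summing every CPU is acceptable. */
	static long glock_count_total(void)
	{
		long total = 0;
		int cpu;

		for_each_possible_cpu(cpu)
			total += per_cpu(glock_count, cpu);
		return total;
	}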
Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Diffstat (limited to 'fs/gfs2/lops.c')
-rw-r--r--   fs/gfs2/lops.c   3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/gfs2/lops.c b/fs/gfs2/lops.c
index bf33f822058..11a73efa826 100644
--- a/fs/gfs2/lops.c
+++ b/fs/gfs2/lops.c
@@ -91,7 +91,8 @@ static void gfs2_unpin(struct gfs2_sbd *sdp, struct buffer_head *bh,
 	}
 	bd->bd_ail = ai;
 	list_add(&bd->bd_ail_st_list, &ai->ai_ail1_list);
-	clear_bit(GLF_LFLUSH, &bd->bd_gl->gl_flags);
+	if (test_and_clear_bit(GLF_LFLUSH, &bd->bd_gl->gl_flags))
+		gfs2_glock_schedule_for_reclaim(bd->bd_gl);
 	trace_gfs2_pin(bd, 0);
 	gfs2_log_unlock(sdp);
 	unlock_buffer(bh);