path: root/mm
author     KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>  2008-10-18 20:28:08 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>      2008-10-20 08:52:38 -0700
commit     073e587ec2cc377867e53d8b8959738a8e16cff6 (patch)
tree       856aac72b818de4f52ce38448b852930554b3efa /mm
parent     47c59803becb55b72b26cdab3838d621a15badc8 (diff)
memcg: move charge swapin under lock
While a page-cache page's charge/uncharge is done under page_lock(), a swap-cache page's is not (an anonymous page is charged when it is newly allocated).

This patch moves do_swap_page()'s charge() call under the page lock.  I don't see any actual problem *now*, but this fix is good for the future, to avoid an unnecessarily racy state.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Reviewed-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
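For context, a minimal sketch of the ordering this patch establishes in do_swap_page(), reconstructed from the diff below (simplified; the surrounding swap-in and fault-handling code is omitted, and only the calls touched by this patch are shown):

        /* After the swap-in read completes, take the page lock first ... */
        mark_page_accessed(page);

        lock_page(page);
        delayacct_clear_flag(DELAYACCT_PF_SWAPIN);

        /* ... so the memcg charge is now performed with the page locked. */
        if (mem_cgroup_charge(page, mm, GFP_KERNEL)) {
                ret = VM_FAULT_OOM;
                unlock_page(page);      /* the error path must drop the lock it now holds */
                goto out;
        }

Before the patch, mem_cgroup_charge() ran before lock_page(), so the charge for a swap-cache page could race with concurrent users of the page; afterwards the charge, like page-cache charge/uncharge, happens under the page lock.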
Diffstat (limited to 'mm')
-rw-r--r--  mm/memory.c  11
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index ba86b436b85..54cf20ee0a8 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2326,16 +2326,17 @@ static int do_swap_page(struct mm_struct *mm, struct vm_area_struct *vma,
                 count_vm_event(PGMAJFAULT);
         }
 
+        mark_page_accessed(page);
+
+        lock_page(page);
+        delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
+
         if (mem_cgroup_charge(page, mm, GFP_KERNEL)) {
-                delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
                 ret = VM_FAULT_OOM;
+                unlock_page(page);
                 goto out;
         }
 
-        mark_page_accessed(page);
-        lock_page(page);
-        delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
-
         /*
          * Back out if somebody else already faulted in this pte.
          */