author	Brian Foster <bfoster@redhat.com>	2017-02-13 22:48:17 -0800
committer	Darrick J. Wong <darrick.wong@oracle.com>	2017-02-15 10:18:15 -0800
commit	3bf06fcb73b2b9d78750973516c7d5a4a490bee4 (patch)
tree	b4f0c22f1951dba0ef3fb24f1dfb0a8e98ceb924 /fs
parent	4560e78f40cb55bd2ea8f1ef4001c5baa88531c7 (diff)
xfs: clear delalloc and cache on buffered write failure
The buffered write failure handling code in xfs_file_iomap_end_delalloc() has a couple of minor problems. First, if written == 0, start_fsb is not rounded down and it fails to kill off a delalloc block if the start offset is block unaligned. This results in a lingering delalloc block and broken delalloc block accounting detected at unmount time. Fix this by rounding down start_fsb in the unlikely event that written == 0.

Second, it is possible for a failed overwrite of a delalloc extent to leave dirty pagecache around over a hole in the file. This is because it is possible to hit ->iomap_end() on write failure before the iomap code has attempted to allocate pagecache, and thus has no need to clean it up. If the targeted delalloc extent was successfully written by a previous write, however, then it does still have dirty pages when ->iomap_end() punches out the underlying blocks. This ultimately results in writeback over a hole. To fix this problem, unconditionally punch out the pagecache from XFS before punching out the associated delalloc range.

Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
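The first fix hinges on the direction of the byte-to-block conversion. Below is a minimal userspace sketch of the arithmetic, assuming a 4096-byte block size and local helpers b_to_fsb()/b_to_fsbt() that stand in for the kernel's XFS_B_TO_FSB() (round up) and XFS_B_TO_FSBT() (truncate down) macros; it illustrates the rounding behaviour only and is not kernel code.

/*
 * Standalone illustration (not kernel code) of the rounding issue in
 * the first fix. BLOCKSIZE and the helper names are assumptions for
 * demonstration purposes.
 */
#include <stdio.h>

#define BLOCKSIZE 4096ULL

/* round a byte offset up to the next block number, like XFS_B_TO_FSB() */
static unsigned long long b_to_fsb(unsigned long long bytes)
{
	return (bytes + BLOCKSIZE - 1) / BLOCKSIZE;
}

/* truncate a byte offset down to its containing block, like XFS_B_TO_FSBT() */
static unsigned long long b_to_fsbt(unsigned long long bytes)
{
	return bytes / BLOCKSIZE;
}

int main(void)
{
	unsigned long long offset = 6144;	/* block-unaligned write offset */
	unsigned long long written = 0;		/* complete write failure */
	unsigned long long length = 8192;

	unsigned long long start_up = b_to_fsb(offset + written);	/* 2: skips block 1 */
	unsigned long long start_down = b_to_fsbt(offset);		/* 1: covers the delalloc block */
	unsigned long long end = b_to_fsb(offset + length);		/* 4 */

	printf("rounded up:   punch blocks [%llu, %llu) -- block 1 leaks\n",
	       start_up, end);
	printf("rounded down: punch blocks [%llu, %llu) -- block 1 is reclaimed\n",
	       start_down, end);
	return 0;
}

For the second fix, note the ordering in the diff below: truncate_pagecache_range() runs before the inode lock is taken and the delalloc range is punched, so any dirty pages left over the extent are dropped before the underlying blocks go away.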
Diffstat (limited to 'fs')
-rw-r--r--	fs/xfs/xfs_iomap.c	5
1 files changed, 5 insertions, 0 deletions
diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index e0bc290396dc..e2fbed4c2d80 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -1079,6 +1079,8 @@ xfs_file_iomap_end_delalloc(
 	int		error = 0;
 
 	start_fsb = XFS_B_TO_FSB(mp, offset + written);
+	if (unlikely(!written))
+		start_fsb = XFS_B_TO_FSBT(mp, offset);
 	end_fsb = XFS_B_TO_FSB(mp, offset + length);
 
 	/*
@@ -1090,6 +1092,9 @@ xfs_file_iomap_end_delalloc(
 	 * blocks in the range, they are ours.
 	 */
 	if (start_fsb < end_fsb) {
+		truncate_pagecache_range(VFS_I(ip), XFS_FSB_TO_B(mp, start_fsb),
+					 XFS_FSB_TO_B(mp, end_fsb) - 1);
+
 		xfs_ilock(ip, XFS_ILOCK_EXCL);
 		error = xfs_bmap_punch_delalloc_range(ip, start_fsb,
 				end_fsb - start_fsb);