| author | Lars Ellenberg <lars.ellenberg@linbit.com> | 2010-12-05 14:11:14 +0100 |
|---|---|---|
| committer | Philipp Reisner <philipp.reisner@linbit.com> | 2011-03-10 11:35:20 +0100 |
| commit | 8a3c104438be4986a77f332009b695fcac48f620 (patch) | |
| tree | 5f659c3125cb4dd901bfb15532c3ac051f94c8cc /drivers/block | |
| parent | 09b9e7979378fe070784de20e50bb1d42aa643ab (diff) | |
drbd: fix regression, we need to close drbd epochs during normal operation
commit e2041475e6ddb081734d161f6421977323f5a9b9
drbd: Starting with protocol 96 we can allow app-IO while receiving the bitmap
Contained a bad chunk that tried to optimize away drbd barriers during
bitmap exchange, but accidentally dropped them for normal mode as well.
Impact: depending on activity log size and access pattern, activity log
extents may not be recycled in time, causing IO to block indefinitely.
Fix: skip drbd barriers only if there is no connection to send them on,
or the request being completed has not been on the network at all.
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Diffstat (limited to 'drivers/block')
-rw-r--r-- | drivers/block/drbd/drbd_req.c | 11
1 file changed, 8 insertions, 3 deletions
```diff
diff --git a/drivers/block/drbd/drbd_req.c b/drivers/block/drbd/drbd_req.c
index 4cb8247d83c9..de5fe70f2b42 100644
--- a/drivers/block/drbd/drbd_req.c
+++ b/drivers/block/drbd/drbd_req.c
@@ -140,9 +140,14 @@ static void _about_to_complete_local_write(struct drbd_conf *mdev,
 	struct hlist_node *n;
 	struct hlist_head *slot;
 
-	/* before we can signal completion to the upper layers,
-	 * we may need to close the current epoch */
-	if (mdev->state.conn >= C_WF_BITMAP_T && mdev->state.conn < C_AHEAD &&
+	/* Before we can signal completion to the upper layers,
+	 * we may need to close the current epoch.
+	 * We can skip this, if this request has not even been sent, because we
+	 * did not have a fully established connection yet/anymore, during
+	 * bitmap exchange, or while we are C_AHEAD due to congestion policy.
+	 */
+	if (mdev->state.conn >= C_CONNECTED &&
+	    (s & RQ_NET_SENT) != 0 &&
 	    req->epoch == mdev->newest_tle->br_number)
 		queue_barrier(mdev);
```
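The guard the patch introduces can be read as a small predicate over the connection state and the request state: close the epoch whenever the request actually went over an established connection and belongs to the newest epoch. Below is a minimal, self-contained sketch of that decision in C; the trimmed `conn_state` enum, the `RQ_NET_SENT` stand-in, and the `must_close_epoch()` helper are simplified illustrations, not the actual DRBD definitions.

```c
#include <stdbool.h>
#include <stdio.h>

/* Trimmed stand-in for DRBD's connection states; only the ordering matters
 * here (C_CONNECTED below the bitmap-exchange and C_AHEAD states). */
enum conn_state {
	C_STANDALONE,
	C_WF_CONNECTION,
	C_CONNECTED,
	C_WF_BITMAP_T,
	C_AHEAD,
};

/* Stand-in for the request-state flag meaning "this request was sent
 * to the peer". */
#define RQ_NET_SENT (1UL << 0)

/* Decide whether completing this local write must close the current epoch
 * (i.e. queue a drbd barrier).  After the fix, the barrier is skipped only
 * when there is no usable connection or the request never hit the network. */
static bool must_close_epoch(enum conn_state conn, unsigned long rq_state,
			     unsigned int req_epoch, unsigned int newest_epoch)
{
	return conn >= C_CONNECTED &&
	       (rq_state & RQ_NET_SENT) != 0 &&
	       req_epoch == newest_epoch;
}

int main(void)
{
	/* Request that went over the wire in the newest epoch: barrier needed. */
	printf("%d\n", must_close_epoch(C_CONNECTED, RQ_NET_SENT, 7, 7));
	/* Request that was never sent (no full connection): no barrier. */
	printf("%d\n", must_close_epoch(C_WF_CONNECTION, 0, 7, 7));
	/* Bitmap exchange, but the request did go out: still close the epoch. */
	printf("%d\n", must_close_epoch(C_WF_BITMAP_T, RQ_NET_SENT, 7, 7));
	return 0;
}
```

With protocol 96, application IO may continue while the connection sits in the bitmap-exchange or C_AHEAD states, so gating on whether the request was actually sent rather than on a narrow connection-state window keeps epochs closing during normal operation, which is exactly the regression this commit repairs.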