author	Andreas Gruenbacher <agruenba@redhat.com>	2021-11-30 18:26:15 +0100
committer	Andreas Gruenbacher <agruenba@redhat.com>	2021-12-02 12:41:10 +0100
commit	3d36e57ff768dbb919c06ffedec4bfe4587c6254 (patch)
tree	5d23ce928fe154178779ea07db25a86e89960f33 /scripts/gdb/linux/utils.py
parent	5f6e13baebf31d71779617b45fbe88ed62f121dc (diff)
gfs2: gfs2_create_inode rework
When gfs2_lookup_by_inum() calls gfs2_inode_lookup() for an uncached
inode, gfs2_inode_lookup() will place a new tentative inode into the
inode cache before verifying that there is a valid inode at the given
address.  This can race with gfs2_create_inode(), which doesn't check
for duplicate inodes.  gfs2_create_inode() will try to assign the new
inode to the corresponding inode glock, and glock_set_object() will
complain that the glock is still in use by gfs2_inode_lookup's
tentative inode.

We noticed this bug after adding commit 486408d690e1 ("gfs2: Cancel
remote delete work asynchronously"), which allowed delete_work_func()
to race with gfs2_create_inode(), but the same race exists for
open-by-handle.

Fix that by switching from insert_inode_hash() to
insert_inode_locked4(), which does check for duplicate inodes.  We know
we've just managed to allocate the new inode, so an inode tentatively
created by gfs2_inode_lookup() will eventually go away and
insert_inode_locked4() will always succeed.

In addition, don't flush the inode glock work anymore (this can now
only make things worse) and clean up glock_{set,clear}_object for the
inode glock somewhat.

Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
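A minimal sketch of the insert_inode_hash() -> insert_inode_locked4()
switch described above, not the literal patch: the gfs2_hash_new_inode()
wrapper is hypothetical, and iget_test() is modeled on the lookup key
gfs2 already uses with iget5_locked() (the on-disk block address
ip->i_no_addr); only insert_inode_locked4() and its signature are taken
from the kernel as-is.

    #include <linux/fs.h>
    #include "incore.h"	/* struct gfs2_inode, GFS2_I() */

    /* Match cached inodes by on-disk block address, gfs2's lookup key. */
    static int iget_test(struct inode *inode, void *opaque)
    {
    	u64 no_addr = *(u64 *)opaque;

    	return GFS2_I(inode)->i_no_addr == no_addr;
    }

    /* Hypothetical helper showing where the new call would sit. */
    static int gfs2_hash_new_inode(struct inode *inode)
    {
    	struct gfs2_inode *ip = GFS2_I(inode);

    	/*
    	 * insert_inode_hash(inode) would hash the inode unconditionally,
    	 * letting it coexist with a tentative duplicate created by
    	 * gfs2_inode_lookup().  insert_inode_locked4() checks for such a
    	 * duplicate instead; per the commit message, the tentative inode
    	 * is bound to go away, so the insert always succeeds here.
    	 */
    	return insert_inode_locked4(inode, ip->i_no_addr, iget_test,
    				    &ip->i_no_addr);
    }

On success, insert_inode_locked4() leaves the inode hashed and in the
I_NEW state, so the creation path still has to publish it later with
unlock_new_inode() (or discard it with iget_failed() on failure).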
Diffstat (limited to 'scripts/gdb/linux/utils.py')
0 files changed, 0 insertions, 0 deletions