Message ID | 20200221004134.30599-2-ira.weiny@intel.com (mailing list archive) |
---|---|
State | Superseded, archived |
Series | Enable per-file/per-directory DAX operations V4 |
On Thu, Feb 20, 2020 at 04:41:22PM -0800, ira.weiny@intel.com wrote:
> From: Ira Weiny <ira.weiny@intel.com>
>
> xfs_reinit_inode() -> inode_init_always() already handles calling
> init_rwsem(i_rwsem). Doing so again is unneeded.
>
> Signed-off-by: Ira Weiny <ira.weiny@intel.com>

Except that this inode has been destroyed and freed by the VFS, and we
are now recycling it back into the VFS before we have actually
physically freed it.

Hence we have to re-initialise the semaphore because the semaphore can
contain internal state that is specific to its new life cycle (e.g.
the lockdep context) that will cause problems if we just assume that
the inode is the same inode as it was before we recycled it back into
the VFS caches.

So, yes, we actually do need to re-initialise the rwsem here.

Cheers,

Dave.
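For reference, the internal state Dave mentions largely comes from how
init_rwsem() itself is defined.  Roughly, in kernels of this era
(include/linux/rwsem.h) it expands to the sketch below; the exact
expansion depends on kernel version and debug config, so treat this as
an illustration rather than the authoritative source:

/*
 * Approximate expansion of init_rwsem() (sketch only).  The static
 * lock_class_key identifies this init site to lockdep, and
 * __init_rwsem() reinitialises the semaphore's internal fields (and,
 * with lockdep enabled, its dep_map) for the rwsem's new life cycle.
 */
#define init_rwsem(sem)                                         \
do {                                                            \
        static struct lock_class_key __key;                    \
                                                                \
        __init_rwsem((sem), #sem, &__key);                      \
} while (0)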
On Fri, Feb 21, 2020 at 12:26:25PM +1100, Dave Chinner wrote:
> On Thu, Feb 20, 2020 at 04:41:22PM -0800, ira.weiny@intel.com wrote:
> > From: Ira Weiny <ira.weiny@intel.com>
> >
> > xfs_reinit_inode() -> inode_init_always() already handles calling
> > init_rwsem(i_rwsem). Doing so again is unneeded.
> >
> > Signed-off-by: Ira Weiny <ira.weiny@intel.com>
>
> Except that this inode has been destroyed and freed by the VFS, and we
> are now recycling it back into the VFS before we have actually
> physically freed it.
>
> Hence we have to re-initialise the semaphore because the semaphore can
> contain internal state that is specific to its new life cycle (e.g.
> the lockdep context) that will cause problems if we just assume that
> the inode is the same inode as it was before we recycled it back into
> the VFS caches.
>
> So, yes, we actually do need to re-initialise the rwsem here.

I'm fine dropping the patch...  But I'm not following how the
xfs_reinit_inode() call on line 422 does not take care of this?

fs/xfs/xfs_icache.c:

 422                 error = xfs_reinit_inode(mp, inode);
 423                 if (error) {
 424                         bool wake;
 425                         /*
 426                          * Re-initializing the inode failed, and we are in deep
 427                          * trouble. Try to re-add it to the reclaim list.
 428                          */
 429                         rcu_read_lock();
 430                         spin_lock(&ip->i_flags_lock);
 431                         wake = !!__xfs_iflags_test(ip, XFS_INEW);
 432                         ip->i_flags &= ~(XFS_INEW | XFS_IRECLAIM);
 433                         if (wake)
 434                                 wake_up_bit(&ip->i_flags, __XFS_INEW_BIT);
 435                         ASSERT(ip->i_flags & XFS_IRECLAIMABLE);
 436                         trace_xfs_iget_reclaim_fail(ip);
 437                         goto out_error;
 438                 }
 439
 440                 spin_lock(&pag->pag_ici_lock);
 441                 spin_lock(&ip->i_flags_lock);
 442
 443                 /*
 444                  * Clear the per-lifetime state in the inode as we are now
 445                  * effectively a new inode and need to return to the initial
 446                  * state before reuse occurs.
 447                  */
 448                 ip->i_flags &= ~XFS_IRECLAIM_RESET_FLAGS;
 449                 ip->i_flags |= XFS_INEW;
 450                 xfs_inode_clear_reclaim_tag(pag, ip->i_ino);
 451                 inode->i_state = I_NEW;
 452                 ip->i_sick = 0;
 453                 ip->i_checked = 0;
 454
 455                 ASSERT(!rwsem_is_locked(&inode->i_rwsem));
 456                 init_rwsem(&inode->i_rwsem);
 457
 458                 spin_unlock(&ip->i_flags_lock);
 459                 spin_unlock(&pag->pag_ici_lock);

Does this need to be done under one of these locks?

Ira

>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
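For context, the call chain Ira is pointing at looks roughly like the
sketch below, condensed from fs/xfs/xfs_icache.c and fs/inode.c of
about this vintage, with unrelated initialisation elided; treat it as
an illustration rather than the exact sources:

/* fs/xfs/xfs_icache.c -- called from xfs_iget_cache_hit(), line 422 above */
static int
xfs_reinit_inode(
        struct xfs_mount        *mp,
        struct inode            *inode)
{
        ...
        /* re-runs the generic VFS inode initialisation */
        error = inode_init_always(mp->m_super, inode);
        ...
        return error;
}

/* fs/inode.c */
int inode_init_always(struct super_block *sb, struct inode *inode)
{
        ...
        /* i_rwsem is re-initialised here, then given the fs-type lockdep class */
        init_rwsem(&inode->i_rwsem);
        lockdep_set_class(&inode->i_rwsem, &sb->s_type->i_mutex_key);
        ...
}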
diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
index 8dc2e5414276..836a1f09be03 100644
--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
@@ -419,6 +419,7 @@ xfs_iget_cache_hit(
                 spin_unlock(&ip->i_flags_lock);
                 rcu_read_unlock();
 
+                ASSERT(!rwsem_is_locked(&inode->i_rwsem));
                 error = xfs_reinit_inode(mp, inode);
                 if (error) {
                         bool wake;
@@ -452,9 +453,6 @@ xfs_iget_cache_hit(
                 ip->i_sick = 0;
                 ip->i_checked = 0;
 
-                ASSERT(!rwsem_is_locked(&inode->i_rwsem));
-                init_rwsem(&inode->i_rwsem);
-
                 spin_unlock(&ip->i_flags_lock);
                 spin_unlock(&pag->pag_ici_lock);
         } else {