| Message ID | 20220517192913.21405-1-bostroesser@gmail.com |
|---|---|
| State | Accepted |
| Commit | 325d5c5fb216674296f3902a8902b942da3adc5b |
| Series | scsi: target: tcmu: Avoid holding XArray lock when calling lock_page |
Bodo,

> In tcmu_blocks_release, lock_page() is called to prevent a race
> causing possible data corruption. Since lock_page() might sleep,
> calling it while holding XArray lock is a bug.

Applied to 5.19/scsi-staging, thanks!
On Tue, 17 May 2022 21:29:13 +0200, Bodo Stroesser wrote:

> In tcmu_blocks_release, lock_page() is called to prevent a race causing
> possible data corruption. Since lock_page() might sleep, calling it
> while holding XArray lock is a bug.
>
> To fix the bug switch to XArray normal API by replacing the xas_for_each
> with xa_for_each_range. Since normal API does its own handling of
> XArray locking, now the xas_lock and xas_unlock around the loop are
> obsolete.
>
> [...]

Applied to 5.19/scsi-queue, thanks!

[1/1] scsi: target: tcmu: Avoid holding XArray lock when calling lock_page
      https://git.kernel.org/mkp/scsi/c/325d5c5fb216
```diff
diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index b1fd06edea59..3deaeecb712e 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -1661,13 +1661,14 @@ static int tcmu_check_and_free_pending_cmd(struct tcmu_cmd *cmd)
 static u32 tcmu_blocks_release(struct tcmu_dev *udev, unsigned long first,
 				unsigned long last)
 {
-	XA_STATE(xas, &udev->data_pages, first * udev->data_pages_per_blk);
 	struct page *page;
+	unsigned long dpi;
 	u32 pages_freed = 0;
 
-	xas_lock(&xas);
-	xas_for_each(&xas, page, (last + 1) * udev->data_pages_per_blk - 1) {
-		xas_store(&xas, NULL);
+	first = first * udev->data_pages_per_blk;
+	last = (last + 1) * udev->data_pages_per_blk - 1;
+	xa_for_each_range(&udev->data_pages, dpi, page, first, last) {
+		xa_erase(&udev->data_pages, dpi);
 		/*
 		 * While reaching here there may be page faults occurring on
 		 * the to-be-released pages. A race condition may occur if
@@ -1691,7 +1692,6 @@ static u32 tcmu_blocks_release(struct tcmu_dev *udev, unsigned long first,
 		__free_page(page);
 		pages_freed++;
 	}
-	xas_unlock(&xas);
 
 	atomic_sub(pages_freed, &global_page_count);
```
In tcmu_blocks_release, lock_page() is called to prevent a race causing
possible data corruption. Since lock_page() might sleep, calling it while
holding XArray lock is a bug.

To fix the bug switch to XArray normal API by replacing the xas_for_each
with xa_for_each_range. Since normal API does its own handling of XArray
locking, now the xas_lock and xas_unlock around the loop are obsolete.

While keeping the code short and simple, the switch to normal API slows
down the loop slightly, which is acceptable since tcmu_blocks_release is
not relevant for performance.

Fixes: bb9b9eb0ae2e ("scsi: target: tcmu: Fix possible data corruption")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Bodo Stroesser <bostroesser@gmail.com>
---
 drivers/target/target_core_user.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
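For context, the sketch below illustrates the pattern the patch switches to. It is not the driver code itself: the helper name `release_range` and its parameters are hypothetical, and the tcmu-specific block-to-page-index arithmetic is omitted. The point it shows is that with the normal XArray API, xa_erase() takes and drops the xa_lock internally per call, so the loop body runs without the spinlock held and a sleeping call such as lock_page() is legal there, whereas the previous xas_* loop kept xas_lock() held across the whole body.

```c
/*
 * Minimal sketch (kernel context, hypothetical helper) of releasing a
 * range of pages stored in an XArray using the normal API.
 */
#include <linux/xarray.h>
#include <linux/pagemap.h>	/* lock_page(), unlock_page() */
#include <linux/gfp.h>		/* __free_page() */

static u32 release_range(struct xarray *pages, unsigned long first,
			 unsigned long last)
{
	struct page *page;
	unsigned long idx;
	u32 freed = 0;

	xa_for_each_range(pages, idx, page, first, last) {
		/* xa_erase() handles xa_lock internally and drops it again */
		xa_erase(pages, idx);

		/* may sleep - safe here because no spinlock is held */
		lock_page(page);
		unlock_page(page);

		__free_page(page);
		freed++;
	}
	return freed;
}
```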