Message ID | 20230922160704.511283-1-bgeffon@google.com (mailing list archive) |
---|---|
State | Mainlined, archived |
Series | PM: hibernate: clean up sync_read handling in snapshot_write_next |
On Fri, Sep 22, 2023 at 6:07 PM Brian Geffon <bgeffon@google.com> wrote:
>
> In snapshot_write_next sync_read is set and unset in three different
> spots unnecessarily. As a result there is a subtle bug where the first
> page after the meta data has been loaded unconditionally sets sync_read
> to 0. If this first pfn was actually a highmem page then the returned
> buffer will be the global "buffer," and the page needs to be loaded
> synchronously.
>
> That is, I'm not sure we can always assume the following to be safe:
>
>     handle->buffer = get_buffer(&orig_bm, &ca);
>     handle->sync_read = 0;
>
> Because get_buffer can call get_highmem_page_buffer which can
> return 'buffer'.
>
> The easiest way to address this is just to set sync_read before
> snapshot_write_next returns if handle->buffer == buffer.
>
> Signed-off-by: Brian Geffon <bgeffon@google.com>
> Fixes: 8357376d3df2 ("[PATCH] swsusp: Improve handling of highmem")
> Cc: stable@vger.kernel.org

If you send an update of a patch, it is always better to give it a
higher version number to avoid any possible confusion.

> ---
>  kernel/power/snapshot.c | 6 +-----
>  1 file changed, 1 insertion(+), 5 deletions(-)
>
> diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
> index 190ed707ddcc..362e6bae5891 100644
> --- a/kernel/power/snapshot.c
> +++ b/kernel/power/snapshot.c
> @@ -2780,8 +2780,6 @@ int snapshot_write_next(struct snapshot_handle *handle)
>  	if (handle->cur > 1 && handle->cur > nr_meta_pages + nr_copy_pages + nr_zero_pages)
>  		return 0;
>
> -	handle->sync_read = 1;
> -
>  	if (!handle->cur) {
>  		if (!buffer)
>  			/* This makes the buffer be freed by swsusp_free() */
> @@ -2824,7 +2822,6 @@ int snapshot_write_next(struct snapshot_handle *handle)
>  		memory_bm_position_reset(&zero_bm);
>  		restore_pblist = NULL;
>  		handle->buffer = get_buffer(&orig_bm, &ca);
> -		handle->sync_read = 0;
>  		if (IS_ERR(handle->buffer))
>  			return PTR_ERR(handle->buffer);
>  	}
> @@ -2834,9 +2831,8 @@ int snapshot_write_next(struct snapshot_handle *handle)
>  		handle->buffer = get_buffer(&orig_bm, &ca);
>  		if (IS_ERR(handle->buffer))
>  			return PTR_ERR(handle->buffer);
> -		if (handle->buffer != buffer)
> -			handle->sync_read = 0;
>  	}
> +	handle->sync_read = (handle->buffer == buffer);
>  	handle->cur++;
>
>  	/* Zero pages were not included in the image, memset it and move on. */
> --

Anyway, applied as 6.7 material with some minor edits in the subject
and changelog.

Thanks!
On Tue, Sep 26, 2023 at 2:24 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Fri, Sep 22, 2023 at 6:07 PM Brian Geffon <bgeffon@google.com> wrote:
> >
> > In snapshot_write_next sync_read is set and unset in three different
> > spots unnecessarily. As a result there is a subtle bug where the first
> > page after the meta data has been loaded unconditionally sets sync_read
> > to 0. If this first pfn was actually a highmem page then the returned
> > buffer will be the global "buffer," and the page needs to be loaded
> > synchronously.
> >
> > That is, I'm not sure we can always assume the following to be safe:
> >
> >     handle->buffer = get_buffer(&orig_bm, &ca);
> >     handle->sync_read = 0;
> >
> > Because get_buffer can call get_highmem_page_buffer which can
> > return 'buffer'.
> >
> > The easiest way to address this is just to set sync_read before
> > snapshot_write_next returns if handle->buffer == buffer.
> >
> > Signed-off-by: Brian Geffon <bgeffon@google.com>
> > Fixes: 8357376d3df2 ("[PATCH] swsusp: Improve handling of highmem")
> > Cc: stable@vger.kernel.org
>
> If you send an update of a patch, it is always better to give it a
> higher version number to avoid any possible confusion.

Yes, I apologize. I wanted to add a Fixes line and I wasn't sure what
the best practice was. I'll do my best to not create any additional
work for you going forward!

> > ---
> >  kernel/power/snapshot.c | 6 +-----
> >  1 file changed, 1 insertion(+), 5 deletions(-)
> >
> > diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
> > index 190ed707ddcc..362e6bae5891 100644
> > --- a/kernel/power/snapshot.c
> > +++ b/kernel/power/snapshot.c
> > @@ -2780,8 +2780,6 @@ int snapshot_write_next(struct snapshot_handle *handle)
> >  	if (handle->cur > 1 && handle->cur > nr_meta_pages + nr_copy_pages + nr_zero_pages)
> >  		return 0;
> >
> > -	handle->sync_read = 1;
> > -
> >  	if (!handle->cur) {
> >  		if (!buffer)
> >  			/* This makes the buffer be freed by swsusp_free() */
> > @@ -2824,7 +2822,6 @@ int snapshot_write_next(struct snapshot_handle *handle)
> >  		memory_bm_position_reset(&zero_bm);
> >  		restore_pblist = NULL;
> >  		handle->buffer = get_buffer(&orig_bm, &ca);
> > -		handle->sync_read = 0;
> >  		if (IS_ERR(handle->buffer))
> >  			return PTR_ERR(handle->buffer);
> >  	}
> > @@ -2834,9 +2831,8 @@ int snapshot_write_next(struct snapshot_handle *handle)
> >  		handle->buffer = get_buffer(&orig_bm, &ca);
> >  		if (IS_ERR(handle->buffer))
> >  			return PTR_ERR(handle->buffer);
> > -		if (handle->buffer != buffer)
> > -			handle->sync_read = 0;
> >  	}
> > +	handle->sync_read = (handle->buffer == buffer);
> >  	handle->cur++;
> >
> >  	/* Zero pages were not included in the image, memset it and move on. */
> > --
>
> Anyway, applied as 6.7 material with some minor edits in the subject
> and changelog.

Thank you for taking the time to review! Cheers

> Thanks!
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index 190ed707ddcc..362e6bae5891 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -2780,8 +2780,6 @@ int snapshot_write_next(struct snapshot_handle *handle)
 	if (handle->cur > 1 && handle->cur > nr_meta_pages + nr_copy_pages + nr_zero_pages)
 		return 0;
 
-	handle->sync_read = 1;
-
 	if (!handle->cur) {
 		if (!buffer)
 			/* This makes the buffer be freed by swsusp_free() */
@@ -2824,7 +2822,6 @@ int snapshot_write_next(struct snapshot_handle *handle)
 		memory_bm_position_reset(&zero_bm);
 		restore_pblist = NULL;
 		handle->buffer = get_buffer(&orig_bm, &ca);
-		handle->sync_read = 0;
 		if (IS_ERR(handle->buffer))
 			return PTR_ERR(handle->buffer);
 	}
@@ -2834,9 +2831,8 @@ int snapshot_write_next(struct snapshot_handle *handle)
 		handle->buffer = get_buffer(&orig_bm, &ca);
 		if (IS_ERR(handle->buffer))
 			return PTR_ERR(handle->buffer);
-		if (handle->buffer != buffer)
-			handle->sync_read = 0;
 	}
+	handle->sync_read = (handle->buffer == buffer);
 	handle->cur++;
 
 	/* Zero pages were not included in the image, memset it and move on. */
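
To see the net effect of the three hunks in one place, this is roughly how the tail of snapshot_write_next() reads once the patch is applied (reconstructed from the diff context above, with the unrelated lines in between elided):

    	/* ... after the last metadata page has been loaded ... */
    		memory_bm_position_reset(&zero_bm);
    		restore_pblist = NULL;
    		handle->buffer = get_buffer(&orig_bm, &ca);
    		if (IS_ERR(handle->buffer))
    			return PTR_ERR(handle->buffer);
    	}

    	/* ... for every subsequent data page ... */
    		handle->buffer = get_buffer(&orig_bm, &ca);
    		if (IS_ERR(handle->buffer))
    			return PTR_ERR(handle->buffer);
    	}

    	/*
    	 * Single decision point replacing the three scattered assignments:
    	 * the read is synchronous exactly when get_buffer() handed back the
    	 * global bounce "buffer", i.e. the highmem case from the changelog.
    	 */
    	handle->sync_read = (handle->buffer == buffer);
    	handle->cur++;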
In snapshot_write_next sync_read is set and unset in three different
spots unnecessarily. As a result there is a subtle bug where the first
page after the meta data has been loaded unconditionally sets sync_read
to 0. If this first pfn was actually a highmem page then the returned
buffer will be the global "buffer," and the page needs to be loaded
synchronously.

That is, I'm not sure we can always assume the following to be safe:

    handle->buffer = get_buffer(&orig_bm, &ca);
    handle->sync_read = 0;

Because get_buffer can call get_highmem_page_buffer which can
return 'buffer'.

The easiest way to address this is just to set sync_read before
snapshot_write_next returns if handle->buffer == buffer.

Signed-off-by: Brian Geffon <bgeffon@google.com>
Fixes: 8357376d3df2 ("[PATCH] swsusp: Improve handling of highmem")
Cc: stable@vger.kernel.org
---
 kernel/power/snapshot.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)
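
For context on why the flag matters: the image-loading loop that calls snapshot_write_next() queues an asynchronous read into whatever buffer was returned and only waits for it when sync_read is set. The fragment below is a simplified paraphrase of that pattern (modelled on load_image() in kernel/power/swap.c, not the verbatim source). If get_buffer() returned the global lowmem bounce "buffer" used for highmem destination pages, the read has to complete before the next call copies the data onward, so leaving sync_read at 0 there loses that ordering:

    for (;;) {
    	ret = snapshot_write_next(snapshot);
    	if (ret <= 0)
    		break;
    	/* Queue an asynchronous read into the buffer picked above. */
    	ret = swap_read_page(handle, data_of(*snapshot), &hb);
    	if (ret)
    		break;
    	/*
    	 * When the destination was the global bounce "buffer" (a highmem
    	 * target page), the data must be in place before the next
    	 * snapshot_write_next() call moves it to its final page, so wait
    	 * for the I/O here.
    	 */
    	if (snapshot->sync_read)
    		ret = hib_wait_io(&hb);
    	if (ret)
    		break;
    }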