| Message ID | 20240711180305.15626-1-pbonzini@redhat.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | mm, virt: merge AS_UNMOVABLE and AS_INACCESSIBLE |
On 11.07.24 20:03, Paolo Bonzini wrote:
> The flags AS_UNMOVABLE and AS_INACCESSIBLE were both added just for
> guest_memfd; AS_UNMOVABLE is already in existing versions of Linux,
> while AS_INACCESSIBLE was acked for inclusion in 6.11.
>
> But really, they are the same thing: only guest_memfd uses them, at
> least for now, and guest_memfd pages are unmovable because they
> should not be accessed by the CPU.
>
> So merge them into one; use the AS_INACCESSIBLE name, which is more
> comprehensive. At the same time, this fixes an embarrassing bug where
> AS_INACCESSIBLE was used as a bit mask, despite it being just a bit
> index.
>
> The bug was mostly benign, because AS_INACCESSIBLE's bit
> representation (1010) corresponded to setting AS_UNEVICTABLE (which
> is already set) and AS_ENOSPC (except no async writes can happen on
> the guest_memfd). So the AS_INACCESSIBLE flag simply had no effect.
>
> Fixes: 1d23040caa8b ("KVM: guest_memfd: Use AS_INACCESSIBLE when creating guest_memfd inode")
> Fixes: c72ceafbd12c ("mm: Introduce AS_INACCESSIBLE for encrypted/confidential memory")
> Cc: linux-mm@kvack.org

Yeah, if we have to bring back the separation in the future, we can revisit that.

Acked-by: David Hildenbrand <david@redhat.com>
On Thu, Jul 11, 2024 at 02:03:05PM -0400, Paolo Bonzini wrote:
> The flags AS_UNMOVABLE and AS_INACCESSIBLE were both added just for
> guest_memfd; AS_UNMOVABLE is already in existing versions of Linux,
> while AS_INACCESSIBLE was acked for inclusion in 6.11.
>
> But really, they are the same thing: only guest_memfd uses them, at
> least for now, and guest_memfd pages are unmovable because they
> should not be accessed by the CPU.
>
> So merge them into one; use the AS_INACCESSIBLE name, which is more
> comprehensive. At the same time, this fixes an embarrassing bug where
> AS_INACCESSIBLE was used as a bit mask, despite it being just a bit
> index.
>
> The bug was mostly benign, because AS_INACCESSIBLE's bit
> representation (1010) corresponded to setting AS_UNEVICTABLE (which
> is already set) and AS_ENOSPC (except no async writes can happen on
> the guest_memfd). So the AS_INACCESSIBLE flag simply had no effect.

*facepalm*. Thank you for catching this.

> Fixes: 1d23040caa8b ("KVM: guest_memfd: Use AS_INACCESSIBLE when creating guest_memfd inode")
> Fixes: c72ceafbd12c ("mm: Introduce AS_INACCESSIBLE for encrypted/confidential memory")
> Cc: linux-mm@kvack.org
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

I re-tested the AS_INACCESSIBLE handling with SNP. I also tested with
Sean's patch that enables THP support in gmem, since that was the more
problematic case.

Tested-by: Michael Roth <michael.roth@amd.com>
Reviewed-by: Michael Roth <michael.roth@amd.com>

> ---
>  include/linux/pagemap.h | 14 +++++++-------
>  mm/compaction.c         | 12 ++++++------
>  mm/migrate.c            |  2 +-
>  mm/truncate.c           |  2 +-
>  virt/kvm/guest_memfd.c  |  3 +--
>  5 files changed, 16 insertions(+), 17 deletions(-)
> [...]
On 7/11/24 8:03 PM, Paolo Bonzini wrote:
> The flags AS_UNMOVABLE and AS_INACCESSIBLE were both added just for
> guest_memfd; AS_UNMOVABLE is already in existing versions of Linux,
> while AS_INACCESSIBLE was acked for inclusion in 6.11.
>
> But really, they are the same thing: only guest_memfd uses them, at
> least for now, and guest_memfd pages are unmovable because they
> should not be accessed by the CPU.
>
> So merge them into one; use the AS_INACCESSIBLE name, which is more
> comprehensive. At the same time, this fixes an embarrassing bug where
> AS_INACCESSIBLE was used as a bit mask, despite it being just a bit
> index.
>
> The bug was mostly benign, because AS_INACCESSIBLE's bit
> representation (1010) corresponded to setting AS_UNEVICTABLE (which
> is already set) and AS_ENOSPC (except no async writes can happen on
> the guest_memfd). So the AS_INACCESSIBLE flag simply had no effect.

Oops, thanks for catching that before it became mainline.

> Fixes: 1d23040caa8b ("KVM: guest_memfd: Use AS_INACCESSIBLE when creating guest_memfd inode")
> Fixes: c72ceafbd12c ("mm: Introduce AS_INACCESSIBLE for encrypted/confidential memory")
> Cc: linux-mm@kvack.org
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  include/linux/pagemap.h | 14 +++++++-------
>  mm/compaction.c         | 12 ++++++------
>  mm/migrate.c            |  2 +-
>  mm/truncate.c           |  2 +-
>  virt/kvm/guest_memfd.c  |  3 +--
>  5 files changed, 16 insertions(+), 17 deletions(-)
> [...]
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index ce7bac8f81da..e05585eda771 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -208,8 +208,8 @@ enum mapping_flags {
 	AS_RELEASE_ALWAYS,	/* Call ->release_folio(), even if no private data */
 	AS_STABLE_WRITES,	/* must wait for writeback before modifying
 				   folio contents */
-	AS_UNMOVABLE,		/* The mapping cannot be moved, ever */
-	AS_INACCESSIBLE,	/* Do not attempt direct R/W access to the mapping */
+	AS_INACCESSIBLE,	/* Do not attempt direct R/W access to the mapping,
+				   including to move the mapping */
 };
 
 /**
@@ -310,20 +310,20 @@ static inline void mapping_clear_stable_writes(struct address_space *mapping)
 	clear_bit(AS_STABLE_WRITES, &mapping->flags);
 }
 
-static inline void mapping_set_unmovable(struct address_space *mapping)
+static inline void mapping_set_inaccessible(struct address_space *mapping)
 {
 	/*
-	 * It's expected unmovable mappings are also unevictable. Compaction
+	 * It's expected inaccessible mappings are also unevictable. Compaction
 	 * migrate scanner (isolate_migratepages_block()) relies on this to
 	 * reduce page locking.
 	 */
 	set_bit(AS_UNEVICTABLE, &mapping->flags);
-	set_bit(AS_UNMOVABLE, &mapping->flags);
+	set_bit(AS_INACCESSIBLE, &mapping->flags);
 }
 
-static inline bool mapping_unmovable(struct address_space *mapping)
+static inline bool mapping_inaccessible(struct address_space *mapping)
 {
-	return test_bit(AS_UNMOVABLE, &mapping->flags);
+	return test_bit(AS_INACCESSIBLE, &mapping->flags);
 }
 
 static inline gfp_t mapping_gfp_mask(struct address_space * mapping)
diff --git a/mm/compaction.c b/mm/compaction.c
index e731d45befc7..714afd9c6df6 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1172,22 +1172,22 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (((mode & ISOLATE_ASYNC_MIGRATE) && is_dirty) ||
 		    (mapping && is_unevictable)) {
 			bool migrate_dirty = true;
-			bool is_unmovable;
+			bool is_inaccessible;
 
 			/*
 			 * Only folios without mappings or that have
 			 * a ->migrate_folio callback are possible to migrate
 			 * without blocking.
 			 *
-			 * Folios from unmovable mappings are not migratable.
+			 * Folios from inaccessible mappings are not migratable.
 			 *
 			 * However, we can be racing with truncation, which can
 			 * free the mapping that we need to check. Truncation
 			 * holds the folio lock until after the folio is removed
 			 * from the page so holding it ourselves is sufficient.
 			 *
-			 * To avoid locking the folio just to check unmovable,
-			 * assume every unmovable folio is also unevictable,
+			 * To avoid locking the folio just to check inaccessible,
+			 * assume every inaccessible folio is also unevictable,
 			 * which is a cheaper test. If our assumption goes
 			 * wrong, it's not a correctness bug, just potentially
 			 * wasted cycles.
@@ -1200,9 +1200,9 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 				migrate_dirty = !mapping ||
 						mapping->a_ops->migrate_folio;
 			}
-			is_unmovable = mapping && mapping_unmovable(mapping);
+			is_inaccessible = mapping && mapping_inaccessible(mapping);
 			folio_unlock(folio);
-			if (!migrate_dirty || is_unmovable)
+			if (!migrate_dirty || is_inaccessible)
 				goto isolate_fail_put;
 		}
 
diff --git a/mm/migrate.c b/mm/migrate.c
index dd04f578c19c..50b60fb414e9 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -965,7 +965,7 @@ static int move_to_new_folio(struct folio *dst, struct folio *src,
 
 	if (!mapping)
 		rc = migrate_folio(mapping, dst, src, mode);
-	else if (mapping_unmovable(mapping))
+	else if (mapping_inaccessible(mapping))
 		rc = -EOPNOTSUPP;
 	else if (mapping->a_ops->migrate_folio)
 		/*
diff --git a/mm/truncate.c b/mm/truncate.c
index 60388935086d..581977d2356f 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -233,7 +233,7 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
 	 * doing a complex calculation here, and then doing the zeroing
 	 * anyway if the page split fails.
 	 */
-	if (!(folio->mapping->flags & AS_INACCESSIBLE))
+	if (!mapping_inaccessible(folio->mapping))
 		folio_zero_range(folio, offset, length);
 
 	if (folio_has_private(folio))
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 9148b9679bb1..1c509c351261 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -416,11 +416,10 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
 	inode->i_private = (void *)(unsigned long)flags;
 	inode->i_op = &kvm_gmem_iops;
 	inode->i_mapping->a_ops = &kvm_gmem_aops;
-	inode->i_mapping->flags |= AS_INACCESSIBLE;
 	inode->i_mode |= S_IFREG;
 	inode->i_size = size;
 	mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
-	mapping_set_unmovable(inode->i_mapping);
+	mapping_set_inaccessible(inode->i_mapping);
 	/* Unmovable mappings are supposed to be marked unevictable as well. */
 	WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
The flags AS_UNMOVABLE and AS_INACCESSIBLE were both added just for
guest_memfd; AS_UNMOVABLE is already in existing versions of Linux,
while AS_INACCESSIBLE was acked for inclusion in 6.11.

But really, they are the same thing: only guest_memfd uses them, at
least for now, and guest_memfd pages are unmovable because they should
not be accessed by the CPU.

So merge them into one; use the AS_INACCESSIBLE name, which is more
comprehensive. At the same time, this fixes an embarrassing bug where
AS_INACCESSIBLE was used as a bit mask, despite it being just a bit
index.

The bug was mostly benign, because AS_INACCESSIBLE's bit representation
(1010) corresponded to setting AS_UNEVICTABLE (which is already set) and
AS_ENOSPC (except no async writes can happen on the guest_memfd). So the
AS_INACCESSIBLE flag simply had no effect.

Fixes: 1d23040caa8b ("KVM: guest_memfd: Use AS_INACCESSIBLE when creating guest_memfd inode")
Fixes: c72ceafbd12c ("mm: Introduce AS_INACCESSIBLE for encrypted/confidential memory")
Cc: linux-mm@kvack.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 include/linux/pagemap.h | 14 +++++++-------
 mm/compaction.c         | 12 ++++++------
 mm/migrate.c            |  2 +-
 mm/truncate.c           |  2 +-
 virt/kvm/guest_memfd.c  |  3 +--
 5 files changed, 16 insertions(+), 17 deletions(-)