Message ID | 20240503144604.151095-4-ryan.roberts@arm.com
---|---
State | New
Series | arm64/mm: Enable userfaultfd write-protect
On 03.05.24 16:46, Ryan Roberts wrote:
> PTE_PRESENT_INVALID was previously occupying bit 59, which when a PTE is
> valid can either be IGNORED, PBHA[0] or AttrIndex[3], depending on the
> HW configuration. In practice this is currently not a problem because
> PTE_PRESENT_INVALID can only be 1 when PTE_VALID=0 and upstream Linux
> always requires the bit set to 0 for a valid pte.
>
> However, if in future Linux wants to use the field (e.g. AttrIndex[3])
> then we could end up with confusion when PTE_PRESENT_INVALID comes along
> and corrupts the field - we would ideally want to preserve it even for
> an invalid (but present) pte.
>
> The other problem with bit 59 is that it prevents the offset field of a
> swap entry within a swap pte from growing beyond 51 bits. By moving
> PTE_PRESENT_INVALID to a low bit we can lay the swap pte out so that the
> offset field could grow to 52 bits in future.
>
> So let's move PTE_PRESENT_INVALID to overlay PTE_NG (bit 11).
>
> There is no need to persist NG for a present-invalid entry; it is always
> set for user mappings and is not used by SW to derive any state from the
> pte. PTE_NS was considered instead of PTE_NG, but it is RES0 for
> non-secure SW, so there is a chance that future architecture may
> allocate the bit and we may therefore need to persist that bit for
> present-invalid ptes.
>
> These are both marginal benefits, but make things a bit tidier in my
> opinion.
>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>

Reviewed-by: David Hildenbrand <david@redhat.com>
diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index 81f07b44f7b8..35c9de13f7ed 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -24,7 +24,7 @@
  * interpreted according to the HW layout by SW but any attempted HW access to
  * the address will result in a fault. pte_present() returns true.
  */
-#define PTE_PRESENT_INVALID	(_AT(pteval_t, 1) << 59) /* only when !PTE_VALID */
+#define PTE_PRESENT_INVALID	(PTE_NG) /* only when !PTE_VALID */
 
 #define _PROT_DEFAULT		(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
 #define _PROT_SECT_DEFAULT	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index c0f4471423db..7f1ff59c43ed 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1254,15 +1254,15 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
  * Encode and decode a swap entry:
  *	bits 0-1:	present (must be zero)
  *	bits 2:		remember PG_anon_exclusive
- *	bits 3-7:	swap type
- *	bits 8-57:	swap offset
- *	bit 59:		PTE_PRESENT_INVALID (must be zero)
+ *	bits 6-10:	swap type
+ *	bit 11:		PTE_PRESENT_INVALID (must be zero)
+ *	bits 12-61:	swap offset
  */
-#define __SWP_TYPE_SHIFT	3
+#define __SWP_TYPE_SHIFT	6
 #define __SWP_TYPE_BITS		5
-#define __SWP_OFFSET_BITS	50
 #define __SWP_TYPE_MASK		((1 << __SWP_TYPE_BITS) - 1)
-#define __SWP_OFFSET_SHIFT	(__SWP_TYPE_BITS + __SWP_TYPE_SHIFT)
+#define __SWP_OFFSET_SHIFT	12
+#define __SWP_OFFSET_BITS	50
 #define __SWP_OFFSET_MASK	((1UL << __SWP_OFFSET_BITS) - 1)
 
 #define __swp_type(x)		(((x).val >> __SWP_TYPE_SHIFT) & __SWP_TYPE_MASK)