Message ID: 20220517093532.127095-1-catalin.marinas@arm.com (mailing list archive)
State: New, archived
Series: [v2] arm64: mte: Ensure the cleared tags are visible before setting the PTE
On 5/17/22 10:35, Catalin Marinas wrote:
> As an optimisation, only pages mapped with PROT_MTE in user space have
> the MTE tags zeroed. This is done lazily at the set_pte_at() time via
> mte_sync_tags(). However, this function is missing a barrier and another
> CPU may see the PTE updated before the zeroed tags are visible. Add an
> smp_wmb() barrier if the mapping is Normal Tagged.
>
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Fixes: 34bfeea4a9e9 ("arm64: mte: Clear the tags when a page is mapped in user-space with PROT_MTE")
> Cc: <stable@vger.kernel.org> # 5.10.x
> Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
> Cc: Will Deacon <will@kernel.org>
> ---
>
> Changes in v2:
> - make the barrier unconditional
> - not including reviewed-by, tested-by tags for v1 as the patch is slightly
>   different
>
>  arch/arm64/kernel/mte.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index 90994aca54f3..d565ae25e48f 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -67,6 +67,9 @@ void mte_sync_tags(pte_t old_pte, pte_t pte)
>  		mte_sync_page_tags(page, old_pte, check_swap,
>  				   pte_is_tagged);
>  	}
> +
> +	/* ensure the tags are visible before the PTE is set */
> +	smp_wmb();
>  }
>
>  int memcmp_pages(struct page *page1, struct page *page2)

As I said in another e-mail, this version has been tested in parallel to v1, so:

Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>

Cheers
Vladimir
On 17/05/2022 10:35, Catalin Marinas wrote:
> As an optimisation, only pages mapped with PROT_MTE in user space have
> the MTE tags zeroed. This is done lazily at the set_pte_at() time via
> mte_sync_tags(). However, this function is missing a barrier and another
> CPU may see the PTE updated before the zeroed tags are visible. Add an
> smp_wmb() barrier if the mapping is Normal Tagged.
>
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Fixes: 34bfeea4a9e9 ("arm64: mte: Clear the tags when a page is mapped in user-space with PROT_MTE")
> Cc: <stable@vger.kernel.org> # 5.10.x
> Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
> Cc: Will Deacon <will@kernel.org>

Reviewed-by: Steven Price <steven.price@arm.com>

> ---
>
> Changes in v2:
> - make the barrier unconditional
> - not including reviewed-by, tested-by tags for v1 as the patch is slightly
>   different
>
>  arch/arm64/kernel/mte.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index 90994aca54f3..d565ae25e48f 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -67,6 +67,9 @@ void mte_sync_tags(pte_t old_pte, pte_t pte)
>  		mte_sync_page_tags(page, old_pte, check_swap,
>  				   pte_is_tagged);
>  	}
> +
> +	/* ensure the tags are visible before the PTE is set */
> +	smp_wmb();
>  }
>
>  int memcmp_pages(struct page *page1, struct page *page2)
On 5/17/22 10:35, Catalin Marinas wrote:
> As an optimisation, only pages mapped with PROT_MTE in user space have
> the MTE tags zeroed. This is done lazily at the set_pte_at() time via
> mte_sync_tags(). However, this function is missing a barrier and another
> CPU may see the PTE updated before the zeroed tags are visible. Add an
> smp_wmb() barrier if the mapping is Normal Tagged.
>
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Fixes: 34bfeea4a9e9 ("arm64: mte: Clear the tags when a page is mapped in user-space with PROT_MTE")
> Cc: <stable@vger.kernel.org> # 5.10.x
> Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
> Cc: Will Deacon <will@kernel.org>

Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>

> ---
>
> Changes in v2:
> - make the barrier unconditional
> - not including reviewed-by, tested-by tags for v1 as the patch is slightly
>   different
>
>  arch/arm64/kernel/mte.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index 90994aca54f3..d565ae25e48f 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -67,6 +67,9 @@ void mte_sync_tags(pte_t old_pte, pte_t pte)
>  		mte_sync_page_tags(page, old_pte, check_swap,
>  				   pte_is_tagged);
>  	}
> +
> +	/* ensure the tags are visible before the PTE is set */
> +	smp_wmb();
>  }
>
>  int memcmp_pages(struct page *page1, struct page *page2)
On Tue, 17 May 2022 10:35:32 +0100, Catalin Marinas wrote:
> As an optimisation, only pages mapped with PROT_MTE in user space have
> the MTE tags zeroed. This is done lazily at the set_pte_at() time via
> mte_sync_tags(). However, this function is missing a barrier and another
> CPU may see the PTE updated before the zeroed tags are visible. Add an
> smp_wmb() barrier if the mapping is Normal Tagged.
>
> [...]

Applied to arm64 (for-next/fixes), thanks!

[1/1] arm64: mte: Ensure the cleared tags are visible before setting the PTE
      https://git.kernel.org/arm64/c/1d0cb4c8864a

Cheers,
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 90994aca54f3..d565ae25e48f 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -67,6 +67,9 @@ void mte_sync_tags(pte_t old_pte, pte_t pte)
 		mte_sync_page_tags(page, old_pte, check_swap,
 				   pte_is_tagged);
 	}
+
+	/* ensure the tags are visible before the PTE is set */
+	smp_wmb();
 }
 
 int memcmp_pages(struct page *page1, struct page *page2)
As an optimisation, only pages mapped with PROT_MTE in user space have
the MTE tags zeroed. This is done lazily at the set_pte_at() time via
mte_sync_tags(). However, this function is missing a barrier and another
CPU may see the PTE updated before the zeroed tags are visible. Add an
smp_wmb() barrier if the mapping is Normal Tagged.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Fixes: 34bfeea4a9e9 ("arm64: mte: Clear the tags when a page is mapped in user-space with PROT_MTE")
Cc: <stable@vger.kernel.org> # 5.10.x
Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
Cc: Will Deacon <will@kernel.org>
---

Changes in v2:
- make the barrier unconditional
- not including reviewed-by, tested-by tags for v1 as the patch is slightly
  different

 arch/arm64/kernel/mte.c | 3 +++
 1 file changed, 3 insertions(+)