Message ID: 20210602235230.3928842-1-pcc@google.com (mailing list archive)
Series: arm64: improve efficiency of setting tags for user pages
On Wed, 2 Jun 2021 16:52:26 -0700 Peter Collingbourne <pcc@google.com> wrote:

> Currently we can end up touching PROT_MTE user pages twice on fault
> and once on unmap. On fault, with KASAN disabled we first clear data
> and then set tags to 0, and with KASAN enabled we simultaneously
> clear data and set tags to the KASAN random tag, and then set tags
> again to 0. On unmap, we poison the page by setting tags, but this
> is less likely to find a bug than poisoning kernel pages.
>
> This patch series fixes these inefficiencies by only touching the pages
> once on fault using the DC GZVA instruction to clear both data and
> tags, and avoiding poisoning user pages on free.
>
> ...
>
>  arch/alpha/include/asm/page.h   |  6 +--
>  arch/arm64/include/asm/mte.h    |  4 ++
>  arch/arm64/include/asm/page.h   | 10 +++--
>  arch/arm64/lib/mte.S            | 20 ++++++++++
>  arch/arm64/mm/fault.c           | 26 +++++++++++++
>  arch/arm64/mm/proc.S            | 10 +++--
>  arch/ia64/include/asm/page.h    |  6 +--
>  arch/m68k/include/asm/page_no.h |  6 +--
>  arch/s390/include/asm/page.h    |  6 +--
>  arch/x86/include/asm/page.h     |  6 +--
>  include/linux/gfp.h             | 18 +++++++--
>  include/linux/highmem.h         | 43 ++++++++-------------
>  include/linux/kasan.h           | 64 +++++++++++++++++++-------------
>  include/linux/page-flags.h      |  9 +++++
>  include/trace/events/mmflags.h  |  9 ++++-
>  mm/kasan/common.c               |  4 +-
>  mm/kasan/hw_tags.c              | 32 ++++++++++++++++
>  mm/mempool.c                    |  6 ++-
>  mm/page_alloc.c                 | 66 +++++++++++++++++++--------------
>  19 files changed, 242 insertions(+), 109 deletions(-)

This is more MMish than ARMish, but I expect it will get more exposure
in an ARM tree than in linux-next alone.

I'll grab them for now, but in the hope that they will appear in -next
via an ARM tree so I get to drop them again.
On Thu, Jun 03, 2021 at 08:03:08PM -0700, Andrew Morton wrote:
> On Wed, 2 Jun 2021 16:52:26 -0700 Peter Collingbourne <pcc@google.com> wrote:
>
> > Currently we can end up touching PROT_MTE user pages twice on fault
> > and once on unmap. On fault, with KASAN disabled we first clear data
> > and then set tags to 0, and with KASAN enabled we simultaneously
> > clear data and set tags to the KASAN random tag, and then set tags
> > again to 0. On unmap, we poison the page by setting tags, but this
> > is less likely to find a bug than poisoning kernel pages.
> >
> > This patch series fixes these inefficiencies by only touching the pages
> > once on fault using the DC GZVA instruction to clear both data and
> > tags, and avoiding poisoning user pages on free.
> >
> > ...
> >
> >  arch/alpha/include/asm/page.h   |  6 +--
> >  arch/arm64/include/asm/mte.h    |  4 ++
> >  arch/arm64/include/asm/page.h   | 10 +++--
> >  arch/arm64/lib/mte.S            | 20 ++++++++++
> >  arch/arm64/mm/fault.c           | 26 +++++++++++++
> >  arch/arm64/mm/proc.S            | 10 +++--
> >  arch/ia64/include/asm/page.h    |  6 +--
> >  arch/m68k/include/asm/page_no.h |  6 +--
> >  arch/s390/include/asm/page.h    |  6 +--
> >  arch/x86/include/asm/page.h     |  6 +--
> >  include/linux/gfp.h             | 18 +++++++--
> >  include/linux/highmem.h         | 43 ++++++++-------------
> >  include/linux/kasan.h           | 64 +++++++++++++++++++-------------
> >  include/linux/page-flags.h      |  9 +++++
> >  include/trace/events/mmflags.h  |  9 ++++-
> >  mm/kasan/common.c               |  4 +-
> >  mm/kasan/hw_tags.c              | 32 ++++++++++++++++
> >  mm/mempool.c                    |  6 ++-
> >  mm/page_alloc.c                 | 66 +++++++++++++++++++--------------
> >  19 files changed, 242 insertions(+), 109 deletions(-)
>
> This is more MMish than ARMish, but I expect it will get more exposure
> in an ARM tree than in linux-next alone.
>
> I'll grab them for now, but in the hope that they will appear in -next
> via an ARM tree so I get to drop them again.

Sure thing, I'll queue this in a sec...
Peter -- please cc me on patches touching arch/arm64 in future, that way
I won't miss anything (or at least, you can yell at me if I do!).

Cheers,

Will
On Wed, 2 Jun 2021 16:52:26 -0700, Peter Collingbourne wrote:
> Currently we can end up touching PROT_MTE user pages twice on fault
> and once on unmap. On fault, with KASAN disabled we first clear data
> and then set tags to 0, and with KASAN enabled we simultaneously
> clear data and set tags to the KASAN random tag, and then set tags
> again to 0. On unmap, we poison the page by setting tags, but this
> is less likely to find a bug than poisoning kernel pages.
>
> [...]

Applied to arm64 (for-next/mte), thanks!

[1/4] mm: arch: remove indirection level in alloc_zeroed_user_highpage_movable()
      https://git.kernel.org/arm64/c/92638b4e1b47
[2/4] kasan: use separate (un)poison implementation for integrated init
      https://git.kernel.org/arm64/c/7a3b83537188
[3/4] arm64: mte: handle tags zeroing at page allocation time
      https://git.kernel.org/arm64/c/013bb59dbb7c
[4/4] kasan: disable freed user page poisoning with HW tags
      https://git.kernel.org/arm64/c/c275c5c6d50a

Cheers,