From patchwork Mon Aug 29 21:25:27 2022
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 12958481
Date: Mon, 29 Aug 2022 21:25:27 +0000
In-Reply-To: <20220829212531.3184856-1-surenb@google.com>
References: <20220829212531.3184856-1-surenb@google.com>
X-Mailer: git-send-email 2.37.2.672.g94769d06f0-goog
Message-ID: <20220829212531.3184856-25-surenb@google.com>
Subject: [RFC PATCH 24/28] arm64/mm: try VMA lock-based page fault handling first
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: michel@lespinasse.org, jglisse@google.com, mhocko@suse.com, vbabka@suse.cz,
    hannes@cmpxchg.org, mgorman@techsingularity.net, dave@stgolabs.net,
    willy@infradead.org, liam.howlett@oracle.com, peterz@infradead.org,
    ldufour@linux.ibm.com, laurent.dufour@fr.ibm.com, paulmck@kernel.org,
    riel@surriel.com, luto@kernel.org, songliubraving@fb.com, peterx@redhat.com,
    david@redhat.com, dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de,
    kent.overstreet@linux.dev, rientjes@google.com, axelrasmussen@google.com,
    joelaf@google.com, minchan@google.com, surenb@google.com,
    kernel-team@android.com, linux-mm@kvack.org,
    linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
    x86@kernel.org, linux-kernel@vger.kernel.org

Attempt VMA lock-based page fault handling first, and fall back to the
existing mmap_lock-based handling if that fails.
Signed-off-by: Suren Baghdasaryan
---
 arch/arm64/mm/fault.c | 36 ++++++++++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index c33f1fad2745..f05ce40ff32b 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -525,6 +525,9 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 	unsigned long vm_flags;
 	unsigned int mm_flags = FAULT_FLAG_DEFAULT;
 	unsigned long addr = untagged_addr(far);
+#ifdef CONFIG_PER_VMA_LOCK
+	struct vm_area_struct *vma;
+#endif
 
 	if (kprobe_page_fault(regs, esr))
 		return 0;
@@ -575,6 +578,36 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 
 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
 
+#ifdef CONFIG_PER_VMA_LOCK
+	if (!(mm_flags & FAULT_FLAG_USER) || atomic_read(&mm->mm_users) == 1)
+		goto lock_mmap;
+
+	vma = find_and_lock_anon_vma(mm, addr);
+	if (!vma)
+		goto lock_mmap;
+
+	if (!(vma->vm_flags & vm_flags)) {
+		vma_read_unlock(vma);
+		goto lock_mmap;
+	}
+	fault = handle_mm_fault(vma, addr & PAGE_MASK,
+				mm_flags | FAULT_FLAG_VMA_LOCK, regs);
+	vma_read_unlock(vma);
+
+	if (!(fault & VM_FAULT_RETRY)) {
+		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
+		goto done;
+	}
+	count_vm_vma_lock_event(VMA_LOCK_RETRY);
+
+	/* Quick path to respond to signals */
+	if (fault_signal_pending(fault, regs)) {
+		if (!user_mode(regs))
+			goto no_context;
+		return 0;
+	}
+lock_mmap:
+#endif /* CONFIG_PER_VMA_LOCK */
 	/*
 	 * As per x86, we may deadlock here. However, since the kernel only
 	 * validly references user space from well defined areas of the code,
@@ -618,6 +651,9 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 	}
 	mmap_read_unlock(mm);
 
+#ifdef CONFIG_PER_VMA_LOCK
+done:
+#endif
 	/*
 	 * Handle the "normal" (no error) case first.
 	 */
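
For reference, the fast path added above reduces to roughly the following
standalone sketch. This is illustrative only and not part of the patch: the
helper name try_fault_under_vma_lock() is hypothetical, the counters and the
signal quick path are omitted for brevity, the fallback decision is folded
into a VM_FAULT_RETRY return instead of the gotos used in do_page_fault(),
and find_and_lock_anon_vma(), vma_read_unlock() and FAULT_FLAG_VMA_LOCK are
introduced by earlier patches in this series (they are not a stable kernel
API). It is written as if it sat inside arch/arm64/mm/fault.c.

/*
 * Sketch: attempt the fault under the per-VMA read lock and return
 * VM_FAULT_RETRY when the caller should fall back to the mmap_lock path.
 * Helpers marked below come from earlier patches in this series.
 */
static vm_fault_t try_fault_under_vma_lock(struct mm_struct *mm,
					   unsigned long addr,
					   unsigned int mm_flags,
					   unsigned long vm_flags,
					   struct pt_regs *regs)
{
	struct vm_area_struct *vma;
	vm_fault_t fault;

	/* Only user faults on multi-threaded mms are worth the fast path. */
	if (!(mm_flags & FAULT_FLAG_USER) || atomic_read(&mm->mm_users) == 1)
		return VM_FAULT_RETRY;

	/* Find and read-lock an anonymous VMA without taking mmap_lock
	 * (series-specific helper). */
	vma = find_and_lock_anon_vma(mm, addr);
	if (!vma)
		return VM_FAULT_RETRY;

	/* Access check mirroring the vm_flags test on the mmap_lock path. */
	if (!(vma->vm_flags & vm_flags)) {
		vma_read_unlock(vma);
		return VM_FAULT_RETRY;
	}

	fault = handle_mm_fault(vma, addr & PAGE_MASK,
				mm_flags | FAULT_FLAG_VMA_LOCK, regs);
	vma_read_unlock(vma);
	return fault;	/* VM_FAULT_RETRY: retry under mmap_lock */
}

The design keeps the VMA-lock path strictly best-effort: any case it cannot
handle (kernel fault, single-threaded mm, no suitable VMA, access mismatch,
or a fault that needs to retry) simply drops back to the existing
mmap_lock-based slow path, so correctness does not depend on the fast path.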