From patchwork Mon Feb 24 20:30:44 2020
X-Patchwork-Submitter: Michel Lespinasse
X-Patchwork-Id: 11401505
Date: Mon, 24 Feb 2020 12:30:44 -0800
In-Reply-To: <20200224203057.162467-1-walken@google.com>
Message-Id: <20200224203057.162467-12-walken@google.com>
Mime-Version: 1.0
References: <20200224203057.162467-1-walken@google.com>
X-Mailer: git-send-email 2.25.0.265.gbab2e86ba0-goog
Subject: [RFC PATCH 11/24] x86 fault handler: merge bad_area() functions
From: Michel Lespinasse
To: Peter Zijlstra, Andrew Morton, Laurent Dufour, Vlastimil Babka,
	Matthew Wilcox, "Liam R . Howlett", Jerome Glisse, Davidlohr Bueso,
	David Rientjes
Cc: linux-mm, Michel Lespinasse

This merges the bad_area(), bad_area_access_error() and the underlying
__bad_area() functions into a single unified function. Passing a NULL
vma triggers the prior bad_area() behavior, while passing a non-NULL
vma triggers the prior bad_area_access_error() behavior.

The control flow is very similar in all cases, and we now release the
mmap_sem read lock in a single place rather than three. Text size is
reduced by 356 bytes here.

Signed-off-by: Michel Lespinasse
---
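For readers skimming the diff below, here is a minimal, compilable
user-space sketch of the NULL-vma sentinel shape this patch introduces.
It is an illustration only, not the kernel code: current, the mmap_sem
locking, and the siginfo delivery are all stubbed out, and
bad_area_nosemaphore() stands in for __bad_area_nosemaphore().

#include <signal.h>
#include <stdint.h>
#include <stdio.h>

#ifndef SEGV_PKUERR
#define SEGV_PKUERR 4  /* Linux siginfo value; absent from older libc headers */
#endif

/* Toy stand-in for the kernel's vm_area_struct. */
struct vm_area_struct { uint32_t pkey; int pkey_fault; };

static uint32_t vma_pkey(struct vm_area_struct *vma)
{
        return vma->pkey;
}

static void mm_read_unlock_stub(void)
{
        /* stand-in for mm_read_unlock(current->mm) */
}

static void bad_area_nosemaphore(uint32_t pkey, int si_code)
{
        /* stand-in for __bad_area_nosemaphore(): just report the signal */
        printf("SIGSEGV: pkey=%u si_code=%d\n", (unsigned)pkey, si_code);
}

/*
 * NULL vma  -> the old bad_area() path (SEGV_MAPERR).
 * vma given -> the old bad_area_access_error() path (SEGV_PKUERR or
 *              SEGV_ACCERR).
 */
static void bad_area(struct vm_area_struct *vma)
{
        uint32_t pkey = 0;
        int si_code = SEGV_MAPERR;

        if (!vma)
                goto unlock;

        if (vma->pkey_fault) {
                pkey = vma_pkey(vma);
                si_code = SEGV_PKUERR;
        } else {
                si_code = SEGV_ACCERR;
        }

unlock:
        /* every path funnels through here: one unlock, one signal */
        mm_read_unlock_stub();
        bad_area_nosemaphore(pkey, si_code);
}

int main(void)
{
        struct vm_area_struct vma = { .pkey = 5, .pkey_fault = 1 };

        bad_area(NULL);   /* unmapped address: SEGV_MAPERR */
        bad_area(&vma);   /* mapped, but protection-key fault: SEGV_PKUERR */
        return 0;
}

The point is structural: whether or not a vma is supplied, control
reaches the unlock label, so the read-lock release and the signal
delivery appear exactly once instead of three times.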
 arch/x86/mm/fault.c | 54 ++++++++++++++++++++-------------------------
 1 file changed, 24 insertions(+), 30 deletions(-)

diff --git arch/x86/mm/fault.c arch/x86/mm/fault.c
index a8ce9e160b72..adbd2b03fcf9 100644
--- arch/x86/mm/fault.c
+++ arch/x86/mm/fault.c
@@ -919,26 +919,6 @@ bad_area_nosemaphore(struct pt_regs *regs, unsigned long error_code,
 	__bad_area_nosemaphore(regs, error_code, address, 0, SEGV_MAPERR);
 }
 
-static void
-__bad_area(struct pt_regs *regs, unsigned long error_code,
-	   unsigned long address, u32 pkey, int si_code)
-{
-	struct mm_struct *mm = current->mm;
-	/*
-	 * Something tried to access memory that isn't in our memory map..
-	 * Fix it, but check if it's kernel or user first..
-	 */
-	mm_read_unlock(mm);
-
-	__bad_area_nosemaphore(regs, error_code, address, pkey, si_code);
-}
-
-static noinline void
-bad_area(struct pt_regs *regs, unsigned long error_code, unsigned long address)
-{
-	__bad_area(regs, error_code, address, 0, SEGV_MAPERR);
-}
-
 static inline bool bad_area_access_from_pkeys(unsigned long error_code,
 		struct vm_area_struct *vma)
 {
@@ -957,9 +937,15 @@ static inline bool bad_area_access_from_pkeys(unsigned long error_code,
 }
 
 static noinline void
-bad_area_access_error(struct pt_regs *regs, unsigned long error_code,
-		      unsigned long address, struct vm_area_struct *vma)
+bad_area(struct pt_regs *regs, unsigned long error_code,
+	 unsigned long address, struct vm_area_struct *vma)
 {
+	u32 pkey = 0;
+	int si_code = SEGV_MAPERR;
+
+	if (!vma)
+		goto unlock;
+
 	/*
 	 * This OSPKE check is not strictly necessary at runtime.
 	 * But, doing it this way allows compiler optimizations
@@ -986,12 +972,20 @@ bad_area_access_error(struct pt_regs *regs, unsigned long error_code,
 	 * 6. T1   : reaches here, sees vma_pkey(vma)=5, when we really
 	 *	     faulted on a pte with its pkey=4.
 	 */
-		u32 pkey = vma_pkey(vma);
-
-		__bad_area(regs, error_code, address, pkey, SEGV_PKUERR);
+		pkey = vma_pkey(vma);
+		si_code = SEGV_PKUERR;
 	} else {
-		__bad_area(regs, error_code, address, 0, SEGV_ACCERR);
+		si_code = SEGV_ACCERR;
 	}
+
+unlock:
+	/*
+	 * Something tried to access memory that isn't in our memory map..
+	 * Fix it, but check if it's kernel or user first..
+	 */
+	mm_read_unlock(current->mm);
+
+	__bad_area_nosemaphore(regs, error_code, address, pkey, si_code);
 }
 
 static void
@@ -1401,17 +1395,17 @@ void do_user_addr_fault(struct pt_regs *regs,
 
 	vma = find_vma(mm, address);
 	if (unlikely(!vma)) {
-		bad_area(regs, hw_error_code, address);
+		bad_area(regs, hw_error_code, address, NULL);
 		return;
 	}
 	if (likely(vma->vm_start <= address))
 		goto good_area;
 	if (unlikely(!(vma->vm_flags & VM_GROWSDOWN))) {
-		bad_area(regs, hw_error_code, address);
+		bad_area(regs, hw_error_code, address, NULL);
 		return;
 	}
 	if (unlikely(expand_stack(vma, address))) {
-		bad_area(regs, hw_error_code, address);
+		bad_area(regs, hw_error_code, address, NULL);
 		return;
 	}
 
@@ -1421,7 +1415,7 @@ void do_user_addr_fault(struct pt_regs *regs,
 	 */
 good_area:
 	if (unlikely(access_error(hw_error_code, vma))) {
-		bad_area_access_error(regs, hw_error_code, address, vma);
+		bad_area(regs, hw_error_code, address, vma);
 		return;
 	}