From patchwork Mon Feb 24 20:30:45 2020
X-Patchwork-Submitter: Michel Lespinasse
X-Patchwork-Id: 11401507
Date: Mon, 24 Feb 2020 12:30:45 -0800
In-Reply-To: <20200224203057.162467-1-walken@google.com>
Message-Id: <20200224203057.162467-13-walken@google.com>
References: <20200224203057.162467-1-walken@google.com>
Subject: [RFC PATCH 12/24] x86 fault handler: use an explicit MM lock range
From: Michel Lespinasse
To: Peter Zijlstra, Andrew Morton, Laurent Dufour, Vlastimil Babka,
 Matthew Wilcox, "Liam R . Howlett", Jerome Glisse, Davidlohr Bueso,
 David Rientjes
Cc: linux-mm, Michel Lespinasse

Use an explicit memory range through the fault handler and any called
functions.
Signed-off-by: Michel Lespinasse
---
 arch/x86/mm/fault.c | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

diff --git arch/x86/mm/fault.c arch/x86/mm/fault.c
index adbd2b03fcf9..700da3cc3db9 100644
--- arch/x86/mm/fault.c
+++ arch/x86/mm/fault.c
@@ -938,7 +938,8 @@ static inline bool bad_area_access_from_pkeys(unsigned long error_code,
 
 static noinline void
 bad_area(struct pt_regs *regs, unsigned long error_code,
-	 unsigned long address, struct vm_area_struct *vma)
+	 unsigned long address, struct vm_area_struct *vma,
+	 struct mm_lock_range *range)
 {
 	u32 pkey = 0;
 	int si_code = SEGV_MAPERR;
@@ -983,7 +984,7 @@ bad_area(struct pt_regs *regs, unsigned long error_code,
 	 * Something tried to access memory that isn't in our memory map..
 	 * Fix it, but check if it's kernel or user first..
 	 */
-	mm_read_unlock(current->mm);
+	mm_read_range_unlock(current->mm, range);
 
 	__bad_area_nosemaphore(regs, error_code, address, pkey, si_code);
 }
@@ -1277,6 +1278,7 @@ void do_user_addr_fault(struct pt_regs *regs,
 			unsigned long hw_error_code,
 			unsigned long address)
 {
+	struct mm_lock_range *range;
 	struct vm_area_struct *vma;
 	struct task_struct *tsk;
 	struct mm_struct *mm;
@@ -1361,6 +1363,8 @@ void do_user_addr_fault(struct pt_regs *regs,
 	}
 #endif
 
+	range = mm_coarse_lock_range();
+
 	/*
 	 * Kernel-mode access to the user address space should only occur
 	 * on well-defined single instructions listed in the exception
@@ -1373,7 +1377,7 @@ void do_user_addr_fault(struct pt_regs *regs,
 	 * 1. Failed to acquire mmap_sem, and
 	 * 2. The access did not originate in userspace.
 	 */
-	if (unlikely(!mm_read_trylock(mm))) {
+	if (unlikely(!mm_read_range_trylock(mm, range))) {
 		if (!user_mode(regs) && !search_exception_tables(regs->ip)) {
 			/*
 			 * Fault from code in kernel from
@@ -1383,7 +1387,7 @@ void do_user_addr_fault(struct pt_regs *regs,
 			return;
 		}
 retry:
-		mm_read_lock(mm);
+		mm_read_range_lock(mm, range);
 	} else {
 		/*
 		 * The above down_read_trylock() might have succeeded in
@@ -1395,17 +1399,17 @@ void do_user_addr_fault(struct pt_regs *regs,
 
 	vma = find_vma(mm, address);
 	if (unlikely(!vma)) {
-		bad_area(regs, hw_error_code, address, NULL);
+		bad_area(regs, hw_error_code, address, NULL, range);
 		return;
 	}
 	if (likely(vma->vm_start <= address))
 		goto good_area;
 	if (unlikely(!(vma->vm_flags & VM_GROWSDOWN))) {
-		bad_area(regs, hw_error_code, address, NULL);
+		bad_area(regs, hw_error_code, address, NULL, range);
 		return;
 	}
 	if (unlikely(expand_stack(vma, address))) {
-		bad_area(regs, hw_error_code, address, NULL);
+		bad_area(regs, hw_error_code, address, NULL, range);
 		return;
 	}
 
@@ -1415,7 +1419,7 @@ void do_user_addr_fault(struct pt_regs *regs,
 	 */
good_area:
 	if (unlikely(access_error(hw_error_code, vma))) {
-		bad_area(regs, hw_error_code, address, vma);
+		bad_area(regs, hw_error_code, address, vma, range);
 		return;
 	}
 
@@ -1432,7 +1436,7 @@ void do_user_addr_fault(struct pt_regs *regs,
 	 * userland). The return to userland is identified whenever
 	 * FAULT_FLAG_USER|FAULT_FLAG_KILLABLE are both set in flags.
 	 */
-	fault = handle_mm_fault(vma, address, flags);
+	fault = handle_mm_fault_range(vma, address, flags, range);
 	major |= fault & VM_FAULT_MAJOR;
 
 	/*
@@ -1458,7 +1462,7 @@ void do_user_addr_fault(struct pt_regs *regs,
 		return;
 	}
 
-	mm_read_unlock(mm);
+	mm_read_range_unlock(mm, range);
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		mm_fault_error(regs, hw_error_code, address, fault);
 		return;