From patchwork Wed Mar 20 15:23:13 2019
X-Patchwork-Submitter: Thomas Hellstrom
X-Patchwork-Id: 10862107
From: Thomas Hellstrom <thellstrom@vmware.com>
Subject: [RFC PATCH 1/3] mm: Allow the [page|pfn]_mkwrite callbacks to drop the mmap_sem
Date: Wed, 20 Mar 2019 16:23:13 +0100
Message-ID: <20190320152315.82758-2-thellstrom@vmware.com>
In-Reply-To: <20190320152315.82758-1-thellstrom@vmware.com>
References: <20190320152315.82758-1-thellstrom@vmware.com>

Driver fault callbacks are allowed to drop the mmap_sem when expecting
long hardware waits, to avoid blocking other mm users. Allow the mkwrite
callbacks to do the same by returning early on VM_FAULT_RETRY.

In particular, we want to be able to drop the mmap_sem when waiting for
a reservation object lock on a GPU buffer object. These locks may be
held while waiting for the GPU.
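For illustration (not part of the patch), a driver-side mkwrite callback
using the return-early semantics enabled here might look roughly like the
sketch below. All my_bo_* names are invented placeholders; only the fault
flag handling and the VM_FAULT_RETRY return follow established mm
conventions:

	/*
	 * Hypothetical driver page_mkwrite(): instead of sleeping on a
	 * busy buffer-object lock with the mmap_sem held, drop the
	 * mmap_sem and ask for the fault to be retried.
	 */
	static vm_fault_t my_bo_page_mkwrite(struct vm_fault *vmf)
	{
		struct my_bo *bo = vmf->vma->vm_private_data;

		if (!my_bo_trylock(bo)) {
			/* Lock busy, possibly held while waiting for the GPU. */
			if (vmf->flags & FAULT_FLAG_ALLOW_RETRY) {
				if (!(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) {
					up_read(&vmf->vma->vm_mm->mmap_sem);
					my_bo_lock(bo);	/* may block on the GPU */
					my_bo_unlock(bo);
				}
				return VM_FAULT_RETRY;	/* fault is retried later */
			}
			my_bo_lock(bo);	/* retry not allowed: block with mmap_sem held */
		}
		my_bo_mark_dirty(bo);
		my_bo_unlock(bo);
		return 0;
	}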
Cc: Andrew Morton
Cc: Matthew Wilcox
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Rik van Riel
Cc: Minchan Kim
Cc: Michal Hocko
Cc: Huang Ying
Cc: Souptick Joarder
Cc: "Jérôme Glisse"
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
---
 mm/memory.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index a52663c0612d..dcd80313cf10 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2144,7 +2144,7 @@ static vm_fault_t do_page_mkwrite(struct vm_fault *vmf)
 	ret = vmf->vma->vm_ops->page_mkwrite(vmf);
 	/* Restore original flags so that caller is not surprised */
 	vmf->flags = old_flags;
-	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE)))
+	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_RETRY | VM_FAULT_NOPAGE)))
 		return ret;
 	if (unlikely(!(ret & VM_FAULT_LOCKED))) {
 		lock_page(page);
@@ -2419,7 +2419,7 @@ static vm_fault_t wp_pfn_shared(struct vm_fault *vmf)
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 		vmf->flags |= FAULT_FLAG_MKWRITE;
 		ret = vma->vm_ops->pfn_mkwrite(vmf);
-		if (ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE))
+		if (ret & (VM_FAULT_ERROR | VM_FAULT_RETRY | VM_FAULT_NOPAGE))
 			return ret;
 		return finish_mkwrite_fault(vmf);
 	}
@@ -2440,7 +2440,8 @@ static vm_fault_t wp_page_shared(struct vm_fault *vmf)
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 		tmp = do_page_mkwrite(vmf);
 		if (unlikely(!tmp || (tmp &
-				      (VM_FAULT_ERROR | VM_FAULT_NOPAGE)))) {
+				      (VM_FAULT_ERROR | VM_FAULT_RETRY |
+				       VM_FAULT_NOPAGE)))) {
 			put_page(vmf->page);
 			return tmp;
 		}
@@ -3472,7 +3473,8 @@ static vm_fault_t do_shared_fault(struct vm_fault *vmf)
 		unlock_page(vmf->page);
 		tmp = do_page_mkwrite(vmf);
 		if (unlikely(!tmp ||
-			     (tmp & (VM_FAULT_ERROR | VM_FAULT_NOPAGE)))) {
+			     (tmp & (VM_FAULT_ERROR | VM_FAULT_RETRY |
+				     VM_FAULT_NOPAGE)))) {
 			put_page(vmf->page);
 			return tmp;
 		}
From patchwork Wed Mar 20 15:23:14 2019
X-Patchwork-Submitter: Thomas Hellstrom
X-Patchwork-Id: 10862111
From: Thomas Hellstrom <thellstrom@vmware.com>
Subject: [RFC PATCH 2/3] mm: Add an apply_to_pfn_range interface
Date: Wed, 20 Mar 2019 16:23:14 +0100
Message-ID: <20190320152315.82758-3-thellstrom@vmware.com>
In-Reply-To: <20190320152315.82758-1-thellstrom@vmware.com>
References: <20190320152315.82758-1-thellstrom@vmware.com>

This is basically apply_to_page_range with added functionality:
Allocating missing parts of the page table becomes optional, which
means that the function can be guaranteed not to error if allocation
is disabled. The closure struct and callback function are also passed
in a different way, more in line with how things are done elsewhere.

Finally, apply_to_page_range is kept as a wrapper around
apply_to_pfn_range.
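As a rough usage sketch of the interface added here (the caller and all
count_* names below are hypothetical, not part of the patch): a walker
that counts writable ptes in a range without allocating missing
page-table levels, so the walk cannot fail with -ENOMEM:

	struct count_writable {
		struct pfn_range_apply base;	/* embedded base closure */
		unsigned long count;
	};

	static int count_writable_ptefn(pte_t *pte, pgtable_t token,
					unsigned long addr,
					struct pfn_range_apply *closure)
	{
		/* Recover the derived closure from the embedded base. */
		struct count_writable *cw =
			container_of(closure, typeof(*cw), base);

		if (pte_write(*pte))
			cw->count++;
		return 0;	/* a non-zero return would abort the walk */
	}

	static unsigned long count_writable(struct mm_struct *mm,
					    unsigned long addr,
					    unsigned long size)
	{
		struct count_writable cw = {
			.base = {
				.mm = mm,
				.ptefn = count_writable_ptefn,
				.alloc = 0,	/* skip non-present parts */
			},
			.count = 0,
		};

		/* Should not error since .alloc == 0 and ptefn returns 0. */
		WARN_ON(apply_to_pfn_range(&cw.base, addr, size));
		return cw.count;
	}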
Cc: Andrew Morton
Cc: Matthew Wilcox
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Rik van Riel
Cc: Minchan Kim
Cc: Michal Hocko
Cc: Huang Ying
Cc: Souptick Joarder
Cc: "Jérôme Glisse"
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
---
 include/linux/mm.h |  10 ++++
 mm/memory.c        | 121 +++++++++++++++++++++++++++++++++------------
 2 files changed, 99 insertions(+), 32 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 80bb6408fe73..b7dd4ddd6efb 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2632,6 +2632,16 @@ typedef int (*pte_fn_t)(pte_t *pte, pgtable_t token, unsigned long addr,
 extern int apply_to_page_range(struct mm_struct *mm, unsigned long address,
 			       unsigned long size, pte_fn_t fn, void *data);
 
+struct pfn_range_apply;
+typedef int (*pter_fn_t)(pte_t *pte, pgtable_t token, unsigned long addr,
+			 struct pfn_range_apply *closure);
+struct pfn_range_apply {
+	struct mm_struct *mm;
+	pter_fn_t ptefn;
+	unsigned int alloc;
+};
+extern int apply_to_pfn_range(struct pfn_range_apply *closure,
+			      unsigned long address, unsigned long size);
 
 #ifdef CONFIG_PAGE_POISONING
 extern bool page_poisoning_enabled(void);

diff --git a/mm/memory.c b/mm/memory.c
index dcd80313cf10..0feb7191c2d2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1938,18 +1938,17 @@ int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long len)
 }
 EXPORT_SYMBOL(vm_iomap_memory);
 
-static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
-			      unsigned long addr, unsigned long end,
-			      pte_fn_t fn, void *data)
+static int apply_to_pte_range(struct pfn_range_apply *closure, pmd_t *pmd,
+			      unsigned long addr, unsigned long end)
 {
 	pte_t *pte;
 	int err;
 	pgtable_t token;
 	spinlock_t *uninitialized_var(ptl);
 
-	pte = (mm == &init_mm) ?
+	pte = (closure->mm == &init_mm) ?
 		pte_alloc_kernel(pmd, addr) :
-		pte_alloc_map_lock(mm, pmd, addr, &ptl);
+		pte_alloc_map_lock(closure->mm, pmd, addr, &ptl);
 	if (!pte)
 		return -ENOMEM;
@@ -1960,86 +1959,103 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
 	token = pmd_pgtable(*pmd);
 
 	do {
-		err = fn(pte++, token, addr, data);
+		err = closure->ptefn(pte++, token, addr, closure);
 		if (err)
 			break;
 	} while (addr += PAGE_SIZE, addr != end);
 
 	arch_leave_lazy_mmu_mode();
 
-	if (mm != &init_mm)
+	if (closure->mm != &init_mm)
 		pte_unmap_unlock(pte-1, ptl);
 	return err;
 }
 
-static int apply_to_pmd_range(struct mm_struct *mm, pud_t *pud,
-			      unsigned long addr, unsigned long end,
-			      pte_fn_t fn, void *data)
+static int apply_to_pmd_range(struct pfn_range_apply *closure, pud_t *pud,
+			      unsigned long addr, unsigned long end)
 {
 	pmd_t *pmd;
 	unsigned long next;
-	int err;
+	int err = 0;
 
 	BUG_ON(pud_huge(*pud));
 
-	pmd = pmd_alloc(mm, pud, addr);
+	pmd = pmd_alloc(closure->mm, pud, addr);
 	if (!pmd)
 		return -ENOMEM;
+
 	do {
 		next = pmd_addr_end(addr, end);
-		err = apply_to_pte_range(mm, pmd, addr, next, fn, data);
+		if (!closure->alloc && pmd_none_or_clear_bad(pmd))
+			continue;
+		err = apply_to_pte_range(closure, pmd, addr, next);
 		if (err)
 			break;
 	} while (pmd++, addr = next, addr != end);
 	return err;
 }
 
-static int apply_to_pud_range(struct mm_struct *mm, p4d_t *p4d,
-			      unsigned long addr, unsigned long end,
-			      pte_fn_t fn, void *data)
+static int apply_to_pud_range(struct pfn_range_apply *closure, p4d_t *p4d,
+			      unsigned long addr, unsigned long end)
 {
 	pud_t *pud;
 	unsigned long next;
-	int err;
+	int err = 0;
 
-	pud = pud_alloc(mm, p4d, addr);
+	pud = pud_alloc(closure->mm, p4d, addr);
 	if (!pud)
 		return -ENOMEM;
+
 	do {
 		next = pud_addr_end(addr, end);
-		err = apply_to_pmd_range(mm, pud, addr, next, fn, data);
+		if (!closure->alloc && pud_none_or_clear_bad(pud))
+			continue;
+		err = apply_to_pmd_range(closure, pud, addr, next);
 		if (err)
 			break;
 	} while (pud++, addr = next, addr != end);
 	return err;
 }
 
-static int apply_to_p4d_range(struct mm_struct *mm, pgd_t *pgd,
-			      unsigned long addr, unsigned long end,
-			      pte_fn_t fn, void *data)
+static int apply_to_p4d_range(struct pfn_range_apply *closure, pgd_t *pgd,
+			      unsigned long addr, unsigned long end)
 {
 	p4d_t *p4d;
 	unsigned long next;
-	int err;
+	int err = 0;
 
-	p4d = p4d_alloc(mm, pgd, addr);
+	p4d = p4d_alloc(closure->mm, pgd, addr);
 	if (!p4d)
 		return -ENOMEM;
+
 	do {
 		next = p4d_addr_end(addr, end);
-		err = apply_to_pud_range(mm, p4d, addr, next, fn, data);
+		if (!closure->alloc && p4d_none_or_clear_bad(p4d))
+			continue;
+		err = apply_to_pud_range(closure, p4d, addr, next);
 		if (err)
 			break;
 	} while (p4d++, addr = next, addr != end);
 	return err;
 }
 
-/*
- * Scan a region of virtual memory, filling in page tables as necessary
- * and calling a provided function on each leaf page table.
+/**
+ * apply_to_pfn_range - Scan a region of virtual memory, calling a provided
+ * function on each leaf page table entry
+ * @closure: Details about how to scan and what function to apply
+ * @addr: Start virtual address
+ * @size: Size of the region
+ *
+ * If @closure->alloc is set to 1, the function will fill in the page table
+ * as necessary. Otherwise it will skip non-present parts.
+ *
+ * Returns: Zero on success. If the provided function returns a non-zero
+ * status, the page table walk will terminate and that status will be
+ * returned. If @closure->alloc is set to 1, then this function may also
+ * return memory allocation errors arising from allocating page table
+ * memory.
  */
-int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
-			unsigned long size, pte_fn_t fn, void *data)
+int apply_to_pfn_range(struct pfn_range_apply *closure,
+		       unsigned long addr, unsigned long size)
 {
 	pgd_t *pgd;
 	unsigned long next;
@@ -2049,16 +2065,57 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
 	if (WARN_ON(addr >= end))
 		return -EINVAL;
 
-	pgd = pgd_offset(mm, addr);
+	pgd = pgd_offset(closure->mm, addr);
 	do {
 		next = pgd_addr_end(addr, end);
-		err = apply_to_p4d_range(mm, pgd, addr, next, fn, data);
+		if (!closure->alloc && pgd_none_or_clear_bad(pgd))
+			continue;
+		err = apply_to_p4d_range(closure, pgd, addr, next);
 		if (err)
 			break;
 	} while (pgd++, addr = next, addr != end);
 	return err;
 }
+EXPORT_SYMBOL_GPL(apply_to_pfn_range);
+
+struct page_range_apply {
+	struct pfn_range_apply pter;
+	pte_fn_t fn;
+	void *data;
+};
+
+/*
+ * Callback wrapper to enable use of apply_to_pfn_range for
+ * the apply_to_page_range interface
+ */
+static int apply_to_page_range_wrapper(pte_t *pte, pgtable_t token,
+				       unsigned long addr,
+				       struct pfn_range_apply *pter)
+{
+	struct page_range_apply *pra =
+		container_of(pter, typeof(*pra), pter);
+
+	return pra->fn(pte, token, addr, pra->data);
+}
+
+/*
+ * Scan a region of virtual memory, filling in page tables as necessary
+ * and calling a provided function on each leaf page table.
+ */
+int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
+			unsigned long size, pte_fn_t fn, void *data)
+{
+	struct page_range_apply pra = {
+		.pter = {.mm = mm,
+			 .alloc = 1,
+			 .ptefn = apply_to_page_range_wrapper },
+		.fn = fn,
+		.data = data
+	};
+
+	return apply_to_pfn_range(&pra.pter, addr, size);
+}
 EXPORT_SYMBOL_GPL(apply_to_page_range);
 
 /*
From patchwork Wed Mar 20 15:23:15 2019
X-Patchwork-Submitter: Thomas Hellstrom
X-Patchwork-Id: 10862115
From: Thomas Hellstrom <thellstrom@vmware.com>
Subject: [RFC PATCH 3/3] mm: Add write-protect and clean utilities for address space ranges
Date: Wed, 20 Mar 2019 16:23:15 +0100
Message-ID: <20190320152315.82758-4-thellstrom@vmware.com>
In-Reply-To: <20190320152315.82758-1-thellstrom@vmware.com>
References: <20190320152315.82758-1-thellstrom@vmware.com>

Add two utilities to a) write-protect and b) clean all ptes pointing into
a range of an address space. The utilities are intended to aid in tracking
dirty pages (either driver-allocated system memory or PCI device memory).

The write-protect utility should be used in conjunction with
page_mkwrite() and pfn_mkwrite() to trigger write page-faults on page
accesses. Typically one would want to use this on sparse accesses into
large memory regions. The clean utility should be used to utilize
hardware dirtying functionality and avoid the overhead of page-faults,
typically on large accesses into small memory regions.
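A hypothetical driver-side use of the write-protect mode might look like
the sketch below (struct my_bo and all my_* names are invented for
illustration; only apply_as_wrprotect() comes from this patch):

	struct my_bo {
		struct address_space *mapping;	/* mapping of the bo pages */
		pgoff_t base_pgoff;		/* first page offset of the bo */
		pgoff_t num_pages;		/* number of pages in the bo */
	};

	/*
	 * Arm write-notification for a buffer object's mapping. Once
	 * armed, the first write to each page faults into the driver's
	 * page_mkwrite()/pfn_mkwrite() callback, where it can be recorded.
	 */
	static unsigned long my_bo_arm_write_tracking(struct my_bo *bo)
	{
		/* Returns the number of ptes actually write-protected. */
		return apply_as_wrprotect(bo->mapping, bo->base_pgoff,
					  bo->num_pages);
	}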
Cc: Andrew Morton
Cc: Matthew Wilcox
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Rik van Riel
Cc: Minchan Kim
Cc: Michal Hocko
Cc: Huang Ying
Cc: Souptick Joarder
Cc: "Jérôme Glisse"
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
---
 include/linux/mm.h  |   9 +-
 mm/Makefile         |   2 +-
 mm/apply_as_range.c | 257 ++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 266 insertions(+), 2 deletions(-)
 create mode 100644 mm/apply_as_range.c

diff --git a/include/linux/mm.h b/include/linux/mm.h
index b7dd4ddd6efb..62f24dd0bfa0 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2642,7 +2642,14 @@ struct pfn_range_apply {
 };
 extern int apply_to_pfn_range(struct pfn_range_apply *closure,
 			      unsigned long address, unsigned long size);
-
+unsigned long apply_as_wrprotect(struct address_space *mapping,
+				 pgoff_t first_index, pgoff_t nr);
+unsigned long apply_as_clean(struct address_space *mapping,
+			     pgoff_t first_index, pgoff_t nr,
+			     pgoff_t bitmap_pgoff,
+			     unsigned long *bitmap,
+			     pgoff_t *start,
+			     pgoff_t *end);
 #ifdef CONFIG_PAGE_POISONING
 extern bool page_poisoning_enabled(void);
 extern void kernel_poison_pages(struct page *page, int numpages, int enable);

diff --git a/mm/Makefile b/mm/Makefile
index d210cc9d6f80..a94b78f12692 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -39,7 +39,7 @@ obj-y := filemap.o mempool.o oom_kill.o fadvise.o \
 			   mm_init.o mmu_context.o percpu.o slab_common.o \
 			   compaction.o vmacache.o \
 			   interval_tree.o list_lru.o workingset.o \
-			   debug.o $(mmu-y)
+			   debug.o apply_as_range.o $(mmu-y)
 
 obj-y += init-mm.o
 obj-y += memblock.o

diff --git a/mm/apply_as_range.c b/mm/apply_as_range.c
new file mode 100644
index 000000000000..9f03e272ebd0
--- /dev/null
+++ b/mm/apply_as_range.c
@@ -0,0 +1,257 @@
+// SPDX-License-Identifier: GPL-2.0
+#include
+#include
+#include
+#include
+#include
+#include
+
+/**
+ * struct apply_as - Closure structure for apply_as_range
+ * @base: struct pfn_range_apply we derive from
+ * @start: Address of first modified pte
+ * @end: Address of last modified pte + 1
+ * @total: Total number of modified ptes
+ * @vma: Pointer to the struct vm_area_struct we're currently operating on
+ * @flush_cache: Whether to call a cache flush before modifying a pte
+ * @flush_tlb: Whether to flush the tlb after modifying a pte
+ */
+struct apply_as {
+	struct pfn_range_apply base;
+	unsigned long start, end;
+	unsigned long total;
+	const struct vm_area_struct *vma;
+	u32 flush_cache : 1;
+	u32 flush_tlb : 1;
+};
+
+/**
+ * apply_pt_wrprotect - Leaf pte callback to write-protect a pte
+ * @pte: Pointer to the pte
+ * @token: Page table token, see apply_to_pfn_range()
+ * @addr: The virtual page address
+ * @closure: Pointer to a struct pfn_range_apply embedded in a
+ * struct apply_as
+ *
+ * The function write-protects a pte and records the range in
+ * virtual address space of touched ptes for efficient TLB flushes.
+ *
+ * Return: Always zero.
+ */
+static int apply_pt_wrprotect(pte_t *pte, pgtable_t token,
+			      unsigned long addr,
+			      struct pfn_range_apply *closure)
+{
+	struct apply_as *aas = container_of(closure, typeof(*aas), base);
+
+	if (pte_write(*pte)) {
+		set_pte_at(closure->mm, addr, pte, pte_wrprotect(*pte));
+		aas->total++;
+		if (addr < aas->start)
+			aas->start = addr;
+		if (addr + PAGE_SIZE > aas->end)
+			aas->end = addr + PAGE_SIZE;
+	}
+
+	return 0;
+}
+
+/**
+ * struct apply_as_clean - Closure structure for apply_as_clean
+ * @base: struct apply_as we derive from
+ * @bitmap_pgoff: Address_space page offset of the first bit in @bitmap
+ * @bitmap: Bitmap with one bit for each page offset in the address_space
+ * range covered.
+ * @start: Address_space page offset of first modified pte
+ * @end: Address_space page offset of last modified pte
+ */
+struct apply_as_clean {
+	struct apply_as base;
+	pgoff_t bitmap_pgoff;
+	unsigned long *bitmap;
+	pgoff_t start, end;
+};
+
+/**
+ * apply_pt_clean - Leaf pte callback to clean a pte
+ * @pte: Pointer to the pte
+ * @token: Page table token, see apply_to_pfn_range()
+ * @addr: The virtual page address
+ * @closure: Pointer to a struct pfn_range_apply embedded in a
+ * struct apply_as_clean
+ *
+ * The function cleans a pte and records the range in
+ * virtual address space of touched ptes for efficient TLB flushes.
+ * It also records dirty ptes in a bitmap representing page offsets
+ * in the address_space, as well as the first and last of the bits
+ * touched.
+ *
+ * Return: Always zero.
+ */
+static int apply_pt_clean(pte_t *pte, pgtable_t token,
+			  unsigned long addr,
+			  struct pfn_range_apply *closure)
+{
+	struct apply_as *aas = container_of(closure, typeof(*aas), base);
+	struct apply_as_clean *clean = container_of(aas, typeof(*clean), base);
+
+	if (pte_dirty(*pte)) {
+		pgoff_t pgoff = ((addr - aas->vma->vm_start) >> PAGE_SHIFT) +
+			aas->vma->vm_pgoff - clean->bitmap_pgoff;
+
+		set_pte_at(closure->mm, addr, pte, pte_mkclean(*pte));
+		aas->total++;
+		if (addr < aas->start)
+			aas->start = addr;
+		if (addr + PAGE_SIZE > aas->end)
+			aas->end = addr + PAGE_SIZE;
+
+		__set_bit(pgoff, clean->bitmap);
+		clean->start = min(clean->start, pgoff);
+		clean->end = max(clean->end, pgoff + 1);
+	}
+
+	return 0;
+}
+
+/**
+ * apply_as_range - Apply a pte callback to all PTEs pointing into a range
+ * of an address_space.
+ * @mapping: Pointer to the struct address_space
+ * @aas: Closure structure
+ * @first_index: First page offset in the address_space
+ * @nr: Number of incremental page offsets to cover
+ *
+ * Return: Number of ptes touched. Note that this number might be larger
+ * than @nr if there are overlapping vmas
+ */
+static unsigned long apply_as_range(struct address_space *mapping,
+				    struct apply_as *aas,
+				    pgoff_t first_index, pgoff_t nr)
+{
+	struct vm_area_struct *vma;
+	pgoff_t vba, vea, cba, cea;
+	unsigned long start_addr, end_addr;
+
+	/* FIXME: Is a read lock sufficient here? */
+	down_write(&mapping->i_mmap_rwsem);
+	vma_interval_tree_foreach(vma, &mapping->i_mmap, first_index,
+				  first_index + nr - 1) {
+		aas->base.mm = vma->vm_mm;
+
+		/* Clip to the vma */
+		vba = vma->vm_pgoff;
+		vea = vba + vma_pages(vma);
+		cba = first_index;
+		cba = max(cba, vba);
+		cea = first_index + nr;
+		cea = min(cea, vea);
+
+		/* Translate to virtual address */
+		start_addr = ((cba - vba) << PAGE_SHIFT) + vma->vm_start;
+		end_addr = ((cea - vba) << PAGE_SHIFT) + vma->vm_start;
+
+		/*
+		 * TODO: Should caches be flushed individually on demand
+		 * in the leaf-pte callbacks instead?
+		 * That is, how costly are inter-core interrupts in an
+		 * SMP system?
+		 */
+		if (aas->flush_cache)
+			flush_cache_range(vma, start_addr, end_addr);
+		aas->start = end_addr;
+		aas->end = start_addr;
+		aas->vma = vma;
+
+		/* Should not error since aas->base.alloc == 0 */
+		WARN_ON(apply_to_pfn_range(&aas->base, start_addr,
+					   end_addr - start_addr));
+		if (aas->flush_tlb && aas->end > aas->start)
+			flush_tlb_range(vma, aas->start, aas->end);
+	}
+	up_write(&mapping->i_mmap_rwsem);
+
+	return aas->total;
+}
+
+/**
+ * apply_as_wrprotect - Write-protect all ptes in an address_space range
+ * @mapping: The address_space we want to write protect
+ * @first_index: The first page offset in the range
+ * @nr: Number of incremental page offsets to cover
+ *
+ * Return: The number of ptes actually write-protected. Note that
+ * already write-protected ptes are not counted.
+ */
+unsigned long apply_as_wrprotect(struct address_space *mapping,
+				 pgoff_t first_index, pgoff_t nr)
+{
+	struct apply_as aas = {
+		.base = {
+			.alloc = 0,
+			.ptefn = apply_pt_wrprotect,
+		},
+		.total = 0,
+		.flush_cache = 1,
+		.flush_tlb = 1
+	};
+
+	return apply_as_range(mapping, &aas, first_index, nr);
+}
+EXPORT_SYMBOL(apply_as_wrprotect);
+
+/**
+ * apply_as_clean - Clean all ptes in an address_space range
+ * @mapping: The address_space we want to clean
+ * @first_index: The first page offset in the range
+ * @nr: Number of incremental page offsets to cover
+ * @bitmap_pgoff: The page offset of the first bit in @bitmap
+ * @bitmap: Pointer to a bitmap of at least @nr bits. The bitmap needs to
+ * cover the whole range @first_index..@first_index + @nr.
+ * @start: Pointer to page offset of the first set bit in @bitmap, or if
+ * none set the value pointed to should be @bitmap_pgoff + @nr. The value
+ * is modified as new bits are set by the function.
+ * @end: Page offset of the last set bit in @bitmap + 1 or @bitmap_pgoff if
+ * none set. The value is modified as new bits are set by the function.
+ *
+ * Note: When this function returns there is no guarantee that a CPU has
+ * not already dirtied new ptes. However it will not clean any ptes not
+ * reported in the bitmap.
+ *
+ * If a caller needs to make sure all dirty ptes are picked up and no
+ * additional ones are added, it first needs to write-protect the
+ * address-space range and make sure new writers are blocked in
+ * page_mkwrite() or pfn_mkwrite(). Then, after a TLB flush following the
+ * write-protection, pick up all dirty bits.
+ *
+ * Return: The number of dirty ptes actually cleaned.
+ */
+unsigned long apply_as_clean(struct address_space *mapping,
+			     pgoff_t first_index, pgoff_t nr,
+			     pgoff_t bitmap_pgoff,
+			     unsigned long *bitmap,
+			     pgoff_t *start,
+			     pgoff_t *end)
+{
+	struct apply_as_clean clean = {
+		.base = {
+			.base = {
+				.alloc = 0,
+				.ptefn = apply_pt_clean,
+			},
+			.total = 0,
+			.flush_cache = 0,
+			.flush_tlb = 1,
+		},
+		.bitmap_pgoff = bitmap_pgoff,
+		.bitmap = bitmap,
+		.start = *start,
+		.end = *end,
+	};
+	unsigned long ret = apply_as_range(mapping, &clean.base, first_index,
+					   nr);
+
+	*start = clean.start;
+	*end = clean.end;
+	return ret;
+}
+EXPORT_SYMBOL(apply_as_clean);
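To illustrate the bitmap protocol described in the apply_as_clean()
kerneldoc above, a hypothetical caller collecting dirty pages could look
roughly like the sketch below. my_flush_page_to_device() is an invented
stand-in for driver-specific writeback; only apply_as_clean() itself is
from this patch:

	/* Invented driver hook: write one page back to the device. */
	static void my_flush_page_to_device(struct address_space *mapping,
					    pgoff_t index);

	static void my_collect_dirty(struct address_space *mapping,
				     pgoff_t first_index, pgoff_t nr,
				     unsigned long *bitmap)
	{
		/* "Nothing set yet" initial values, as the kerneldoc requires. */
		pgoff_t start = first_index + nr;
		pgoff_t end = first_index;
		pgoff_t i;

		bitmap_zero(bitmap, nr);

		/* bitmap_pgoff == first_index, so bit i maps page first_index + i. */
		apply_as_clean(mapping, first_index, nr, first_index, bitmap,
			       &start, &end);

		/* Every set bit corresponds to a pte that was dirty when cleaned. */
		for (i = start; i < end; i++)
			if (test_bit(i - first_index, bitmap))
				my_flush_page_to_device(mapping, i);
	}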