From patchwork Thu Jul  9 16:57:10 2020
X-Patchwork-Submitter: Ralph Campbell
X-Patchwork-Id: 11654769
From: Ralph Campbell <rcampbell@nvidia.com>
To: linux-mm@kvack.org
Cc: Jerome Glisse, John Hubbard, Christoph Hellwig, Jason Gunthorpe,
    Bharata B Rao, Shuah Khan, Andrew Morton, Ralph Campbell
Subject: [PATCH 1/2] mm/migrate: optimize migrate_vma_setup() for holes
Date: Thu, 9 Jul 2020 09:57:10 -0700
Message-ID: <20200709165711.26584-2-rcampbell@nvidia.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200709165711.26584-1-rcampbell@nvidia.com>
References: <20200709165711.26584-1-rcampbell@nvidia.com>

When migrating system memory to device private memory, if the source
address range is a valid VMA range and there is no memory or a zero
page, the source PFN array is marked as valid but with no PFN. This
lets the device driver allocate private memory and clear it, then
insert the new device private struct page into the CPU's page tables
when migrate_vma_pages() is called. However, migrate_vma_pages() only
inserts the new page if the VMA is an anonymous range, so there is no
point in telling the device driver to allocate device private memory
that will never be migrated. Instead, mark the source PFN array entries
as not migrating to avoid this overhead.

Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
---
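(For context, the driver-side sequence the commit message refers to looks
roughly like the sketch below. This is hypothetical driver code, not part
of this patch: example_alloc_device_page() stands in for whatever device
page allocator a real driver uses, the fixed 64-entry PFN arrays are an
assumption to keep the sketch self-contained, and error handling is
omitted.)

/*
 * Hypothetical sketch of a driver migrating a range of system memory
 * to device private memory. Assumes the range covers at most 64 pages.
 */
static int example_migrate_to_device(struct vm_area_struct *vma,
				     unsigned long start, unsigned long end)
{
	unsigned long src_pfns[64] = { 0 };
	unsigned long dst_pfns[64] = { 0 };
	struct migrate_vma args = {
		.vma	= vma,
		.start	= start,
		.end	= end,
		.src	= src_pfns,
		.dst	= dst_pfns,
	};
	unsigned long i;
	int ret;

	ret = migrate_vma_setup(&args);
	if (ret)
		return ret;

	for (i = 0; i < args.npages; i++) {
		struct page *dpage;

		/*
		 * Skip entries not selected for migration. Before this
		 * patch, a hole in a non-anonymous VMA was still marked
		 * MIGRATE_PFN_MIGRATE, so a driver would allocate a
		 * device page here that migrate_vma_pages() would then
		 * refuse to insert.
		 */
		if (!(args.src[i] & MIGRATE_PFN_MIGRATE))
			continue;

		dpage = example_alloc_device_page();	/* driver specific */
		if (!dpage)
			continue;
		args.dst[i] = migrate_pfn(page_to_pfn(dpage));
	}

	migrate_vma_pages(&args);
	migrate_vma_finalize(&args);
	return 0;
}

With this change, src[i] is simply 0 for holes in a non-anonymous VMA,
so the driver skips straight past the allocation step.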
 mm/migrate.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index b0125c082549..8aa434691577 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2204,9 +2204,13 @@ static int migrate_vma_collect_hole(unsigned long start,
 {
 	struct migrate_vma *migrate = walk->private;
 	unsigned long addr;
+	unsigned long flags;
+
+	/* Only allow populating anonymous memory. */
+	flags = vma_is_anonymous(walk->vma) ? MIGRATE_PFN_MIGRATE : 0;
 
 	for (addr = start; addr < end; addr += PAGE_SIZE) {
-		migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE;
+		migrate->src[migrate->npages] = flags;
 		migrate->dst[migrate->npages] = 0;
 		migrate->npages++;
 		migrate->cpages++;
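(For reference, vma_is_anonymous() is the trivial helper from
include/linux/mm.h, quoted here for context rather than being part of
this patch:

static inline bool vma_is_anonymous(struct vm_area_struct *vma)
{
	return !vma->vm_ops;
}

Note that an mmap(MAP_SHARED | MAP_ANONYMOUS) region is shmem-backed,
with vm_ops installed by shmem_zero_setup(), so it fails this check.
That is exactly the case the selftest in the next patch exercises.)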
From patchwork Thu Jul  9 16:57:11 2020
X-Patchwork-Submitter: Ralph Campbell
X-Patchwork-Id: 11654763

From: Ralph Campbell <rcampbell@nvidia.com>
To: linux-mm@kvack.org
Cc: Jerome Glisse, John Hubbard, Christoph Hellwig, Jason Gunthorpe,
    Bharata B Rao, Shuah Khan, Andrew Morton, Ralph Campbell
Subject: [PATCH 2/2] mm/migrate: add migrate-shared test for migrate_vma_*()
Date: Thu, 9 Jul 2020 09:57:11 -0700
Message-ID: <20200709165711.26584-3-rcampbell@nvidia.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200709165711.26584-1-rcampbell@nvidia.com>
References: <20200709165711.26584-1-rcampbell@nvidia.com>

Add a migrate_vma_*() selftest for mmap(MAP_SHARED) to verify that
!vma_is_anonymous() ranges won't be migrated.

Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
---
 tools/testing/selftests/vm/hmm-tests.c | 35 ++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/tools/testing/selftests/vm/hmm-tests.c b/tools/testing/selftests/vm/hmm-tests.c
index 79db22604019..e83d3ab37697 100644
--- a/tools/testing/selftests/vm/hmm-tests.c
+++ b/tools/testing/selftests/vm/hmm-tests.c
@@ -931,6 +931,41 @@ TEST_F(hmm, migrate_fault)
 	hmm_buffer_free(buffer);
 }
 
+/*
+ * Migrate anonymous shared memory to device private memory.
+ */
+TEST_F(hmm, migrate_shared)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages;
+	unsigned long size;
+	int ret;
+
+	npages = ALIGN(HMM_BUFFER_SIZE, self->page_size) >> self->page_shift;
+	ASSERT_NE(npages, 0);
+	size = npages << self->page_shift;
+
+	buffer = malloc(sizeof(*buffer));
+	ASSERT_NE(buffer, NULL);
+
+	buffer->fd = -1;
+	buffer->size = size;
+	buffer->mirror = malloc(size);
+	ASSERT_NE(buffer->mirror, NULL);
+
+	buffer->ptr = mmap(NULL, size,
+			   PROT_READ | PROT_WRITE,
+			   MAP_SHARED | MAP_ANONYMOUS,
+			   buffer->fd, 0);
+	ASSERT_NE(buffer->ptr, MAP_FAILED);
+
+	/* Migrate memory to device. */
+	ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_MIGRATE, buffer, npages);
+	ASSERT_EQ(ret, -ENOENT);
+
+	hmm_buffer_free(buffer);
+}
+
 /*
  * Try to migrate various memory types to device private memory.
  */
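(For context, the -ENOENT asserted above comes from the dmirror test
driver's read-back path. After HMM_DMIRROR_MIGRATE, lib/test_hmm.c copies
the range back out of its mirror page table so userspace can verify the
migrated contents; since the MAP_SHARED range is not anonymous, nothing
is migrated and the lookup finds no device page. Roughly, and paraphrased
from memory rather than quoted, that path looks like:

static int dmirror_do_read(struct dmirror *dmirror, unsigned long start,
			   unsigned long end, struct dmirror_bounce *bounce)
{
	unsigned long pfn;
	void *ptr = bounce->ptr;

	for (pfn = start >> PAGE_SHIFT; pfn < (end >> PAGE_SHIFT); pfn++) {
		void *entry;
		struct page *page;

		/* The mirror page table is an xarray indexed by pfn. */
		entry = xa_load(&dmirror->pt, pfn);
		page = xa_untag_pointer(entry);
		if (!page)
			return -ENOENT;	/* never migrated: what the test asserts */

		memcpy(ptr, page_address(page), PAGE_SIZE);
		ptr += PAGE_SIZE;
	}
	return 0;
}

So the test passes exactly when patch 1 marks the shared range as not
migrating.)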