From patchwork Wed Apr 7 08:42:36 2021
X-Patchwork-Submitter: Alistair Popple <apopple@nvidia.com>
X-Patchwork-Id: 12187477
From: Alistair Popple <apopple@nvidia.com>
Subject: [PATCH v8 6/8] mm: Selftests for exclusive device memory
Date: Wed, 7 Apr 2021 18:42:36 +1000
Message-ID: <20210407084238.20443-7-apopple@nvidia.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210407084238.20443-1-apopple@nvidia.com>
References: <20210407084238.20443-1-apopple@nvidia.com>
Cc: rcampbell@nvidia.com, linux-doc@vger.kernel.org, jhubbard@nvidia.com,
 bsingharora@gmail.com, Alistair Popple <apopple@nvidia.com>,
 linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
 hch@infradead.org, jglisse@redhat.com, willy@infradead.org, jgg@nvidia.com

Add some selftests for exclusive device memory: making pages exclusively
available to a device and reading them back, checking that a CPU fault
revokes the device mapping, and checking that mprotect() and
copy-on-write interact correctly with exclusive entries.

Signed-off-by: Alistair Popple <apopple@nvidia.com>
Acked-by: Jason Gunthorpe <jgg@nvidia.com>
Tested-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
---
 lib/test_hmm.c                         | 124 +++++++++++++++++++
 lib/test_hmm_uapi.h                    |   2 +
 tools/testing/selftests/vm/hmm-tests.c | 158 +++++++++++++++++++++++++
 3 files changed, 284 insertions(+)

diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 5c9f5a020c1d..305a9d9e2b4c 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -25,6 +25,7 @@
 #include <linux/swapops.h>
 #include <linux/sched/mm.h>
 #include <linux/platform_device.h>
+#include <linux/rmap.h>
 
 #include "test_hmm_uapi.h"
 
@@ -46,6 +47,7 @@ struct dmirror_bounce {
 	unsigned long		cpages;
 };
 
+#define DPT_XA_TAG_ATOMIC 1UL
 #define DPT_XA_TAG_WRITE 3UL
 
 /*
@@ -619,6 +621,54 @@ static void dmirror_migrate_alloc_and_copy(struct migrate_vma *args,
 	}
 }
 
+static int dmirror_check_atomic(struct dmirror *dmirror, unsigned long start,
+		unsigned long end)
+{
+	unsigned long pfn;
+
+	for (pfn = start >> PAGE_SHIFT; pfn < (end >> PAGE_SHIFT); pfn++) {
+		void *entry;
+		struct page *page;
+
+		entry = xa_load(&dmirror->pt, pfn);
+		page = xa_untag_pointer(entry);
+		if (xa_pointer_tag(entry) == DPT_XA_TAG_ATOMIC)
+			return -EPERM;
+	}
+
+	return 0;
+}
+
+static int dmirror_atomic_map(unsigned long start, unsigned long end,
+		struct page **pages, struct dmirror *dmirror)
+{
+	unsigned long pfn, mapped = 0;
+	int i;
+
+	/* Map the migrated pages into the device's page tables. */
+	mutex_lock(&dmirror->mutex);
+
+	for (i = 0, pfn = start >> PAGE_SHIFT; pfn < (end >> PAGE_SHIFT); pfn++, i++) {
+		void *entry;
+
+		if (!pages[i])
+			continue;
+
+		entry = pages[i];
+		entry = xa_tag_pointer(entry, DPT_XA_TAG_ATOMIC);
+		entry = xa_store(&dmirror->pt, pfn, entry, GFP_ATOMIC);
+		if (xa_is_err(entry)) {
+			mutex_unlock(&dmirror->mutex);
+			return xa_err(entry);
+		}
+
+		mapped++;
+	}
+
+	mutex_unlock(&dmirror->mutex);
+	return mapped;
+}
+
 static int dmirror_migrate_finalize_and_map(struct migrate_vma *args,
 					    struct dmirror *dmirror)
 {
@@ -661,6 +711,71 @@ static int dmirror_migrate_finalize_and_map(struct migrate_vma *args,
 	return 0;
 }
 
+static int dmirror_exclusive(struct dmirror *dmirror,
+			     struct hmm_dmirror_cmd *cmd)
+{
+	unsigned long start, end, addr;
+	unsigned long size = cmd->npages << PAGE_SHIFT;
+	struct mm_struct *mm = dmirror->notifier.mm;
+	struct page *pages[64];
+	struct dmirror_bounce bounce;
+	unsigned long next;
+	int ret;
+
+	start = cmd->addr;
+	end = start + size;
+	if (end < start)
+		return -EINVAL;
+
+	/* Since the mm is for the mirrored process, get a reference first. */
+	if (!mmget_not_zero(mm))
+		return -EINVAL;
+
+	mmap_read_lock(mm);
+	for (addr = start; addr < end; addr = next) {
+		int i, mapped;
+
+		if (end < addr + (ARRAY_SIZE(pages) << PAGE_SHIFT))
+			next = end;
+		else
+			next = addr + (ARRAY_SIZE(pages) << PAGE_SHIFT);
+
+		ret = make_device_exclusive_range(mm, addr, next, pages, NULL);
+		mapped = dmirror_atomic_map(addr, next, pages, dmirror);
+		for (i = 0; i < ret; i++) {
+			if (pages[i]) {
+				unlock_page(pages[i]);
+				put_page(pages[i]);
+			}
+		}
+
+		if (addr + (mapped << PAGE_SHIFT) < next) {
+			mmap_read_unlock(mm);
+			mmput(mm);
+			return -EBUSY;
+		}
+	}
+	mmap_read_unlock(mm);
+	mmput(mm);
+
+	/* Return the migrated data for verification. */
+	ret = dmirror_bounce_init(&bounce, start, size);
+	if (ret)
+		return ret;
+	mutex_lock(&dmirror->mutex);
+	ret = dmirror_do_read(dmirror, start, end, &bounce);
+	mutex_unlock(&dmirror->mutex);
+	if (ret == 0) {
+		if (copy_to_user(u64_to_user_ptr(cmd->ptr), bounce.ptr,
+				 bounce.size))
+			ret = -EFAULT;
+	}
+
+	cmd->cpages = bounce.cpages;
+	dmirror_bounce_fini(&bounce);
+	return ret;
+}
+
 static int dmirror_migrate(struct dmirror *dmirror,
 			   struct hmm_dmirror_cmd *cmd)
 {
@@ -949,6 +1064,15 @@ static long dmirror_fops_unlocked_ioctl(struct file *filp,
 		ret = dmirror_migrate(dmirror, &cmd);
 		break;
 
+	case HMM_DMIRROR_EXCLUSIVE:
+		ret = dmirror_exclusive(dmirror, &cmd);
+		break;
+
+	case HMM_DMIRROR_CHECK_EXCLUSIVE:
+		ret = dmirror_check_atomic(dmirror, cmd.addr,
+					cmd.addr + (cmd.npages << PAGE_SHIFT));
+		break;
+
 	case HMM_DMIRROR_SNAPSHOT:
 		ret = dmirror_snapshot(dmirror, &cmd);
 		break;
diff --git a/lib/test_hmm_uapi.h b/lib/test_hmm_uapi.h
index 670b4ef2a5b6..f14dea5dcd06 100644
--- a/lib/test_hmm_uapi.h
+++ b/lib/test_hmm_uapi.h
@@ -33,6 +33,8 @@ struct hmm_dmirror_cmd {
 #define HMM_DMIRROR_WRITE		_IOWR('H', 0x01, struct hmm_dmirror_cmd)
 #define HMM_DMIRROR_MIGRATE		_IOWR('H', 0x02, struct hmm_dmirror_cmd)
 #define HMM_DMIRROR_SNAPSHOT		_IOWR('H', 0x03, struct hmm_dmirror_cmd)
+#define HMM_DMIRROR_EXCLUSIVE		_IOWR('H', 0x04, struct hmm_dmirror_cmd)
+#define HMM_DMIRROR_CHECK_EXCLUSIVE	_IOWR('H', 0x05, struct hmm_dmirror_cmd)
 
 /*
  * Values returned in hmm_dmirror_cmd.ptr for HMM_DMIRROR_SNAPSHOT.
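The selftests in the next file exercise these two new ioctls through the
existing hmm_dmirror_cmd() helper. For readers who want to poke at the uapi
directly, a minimal stand-alone raw-ioctl user might look like the sketch
below. It is illustrative only: it assumes the test_hmm module is loaded,
and the /dev/hmm_dmirror0 node name is taken from the existing selftest
fixture rather than from anything in this patch.

	/*
	 * Illustrative sketch, not part of the patch. Assumes the test_hmm
	 * module is loaded and exposes /dev/hmm_dmirror0.
	 */
	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/ioctl.h>
	#include <sys/mman.h>
	#include <unistd.h>

	#include "test_hmm_uapi.h"

	int main(void)
	{
		unsigned long npages = 1;
		size_t size = npages * sysconf(_SC_PAGESIZE);
		struct hmm_dmirror_cmd cmd = { 0 };
		int *buf, *mirror;
		int fd;

		fd = open("/dev/hmm_dmirror0", O_RDWR);
		if (fd < 0) {
			perror("open");
			return EXIT_FAILURE;
		}

		buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		mirror = malloc(size);
		if (buf == MAP_FAILED || !mirror)
			return EXIT_FAILURE;
		buf[0] = 42;

		/* Ask the dummy device for exclusive access to the page. */
		cmd.addr = (unsigned long)buf;
		cmd.ptr = (unsigned long)mirror; /* receives what the device read */
		cmd.npages = npages;
		if (ioctl(fd, HMM_DMIRROR_EXCLUSIVE, &cmd) < 0) {
			perror("HMM_DMIRROR_EXCLUSIVE");
			return EXIT_FAILURE;
		}
		printf("device read %d, cpages=%llu\n", mirror[0],
		       (unsigned long long)cmd.cpages);

		/* A CPU write faults the page back and revokes device access, */
		buf[0]++;

		/* so the check ioctl now succeeds (it fails with EPERM while
		 * any page in the range is still mapped exclusively). */
		if (ioctl(fd, HMM_DMIRROR_CHECK_EXCLUSIVE, &cmd) < 0)
			perror("HMM_DMIRROR_CHECK_EXCLUSIVE");

		free(mirror);
		munmap(buf, size);
		close(fd);
		return EXIT_SUCCESS;
	}

The selftests below follow the same pattern, with the added assertions and
buffer management of the kselftest harness.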
diff --git a/tools/testing/selftests/vm/hmm-tests.c b/tools/testing/selftests/vm/hmm-tests.c
index 5d1ac691b9f4..864f126ffd78 100644
--- a/tools/testing/selftests/vm/hmm-tests.c
+++ b/tools/testing/selftests/vm/hmm-tests.c
@@ -1485,4 +1485,162 @@ TEST_F(hmm2, double_map)
 	hmm_buffer_free(buffer);
 }
 
+/*
+ * Basic check of exclusive faulting.
+ */
+TEST_F(hmm, exclusive)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages;
+	unsigned long size;
+	unsigned long i;
+	int *ptr;
+	int ret;
+
+	npages = ALIGN(HMM_BUFFER_SIZE, self->page_size) >> self->page_shift;
+	ASSERT_NE(npages, 0);
+	size = npages << self->page_shift;
+
+	buffer = malloc(sizeof(*buffer));
+	ASSERT_NE(buffer, NULL);
+
+	buffer->fd = -1;
+	buffer->size = size;
+	buffer->mirror = malloc(size);
+	ASSERT_NE(buffer->mirror, NULL);
+
+	buffer->ptr = mmap(NULL, size,
+			   PROT_READ | PROT_WRITE,
+			   MAP_PRIVATE | MAP_ANONYMOUS,
+			   buffer->fd, 0);
+	ASSERT_NE(buffer->ptr, MAP_FAILED);
+
+	/* Initialize buffer in system memory. */
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ptr[i] = i;
+
+	/* Map memory exclusively for device access. */
+	ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_EXCLUSIVE, buffer, npages);
+	ASSERT_EQ(ret, 0);
+	ASSERT_EQ(buffer->cpages, npages);
+
+	/* Check what the device read. */
+	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i], i);
+
+	/* Fault pages back to system memory and check them. */
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i]++, i);
+
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i], i+1);
+
+	/* Check atomic access revoked */
+	ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_CHECK_EXCLUSIVE, buffer, npages);
+	ASSERT_EQ(ret, 0);
+
+	hmm_buffer_free(buffer);
+}
+
+TEST_F(hmm, exclusive_mprotect)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages;
+	unsigned long size;
+	unsigned long i;
+	int *ptr;
+	int ret;
+
+	npages = ALIGN(HMM_BUFFER_SIZE, self->page_size) >> self->page_shift;
+	ASSERT_NE(npages, 0);
+	size = npages << self->page_shift;
+
+	buffer = malloc(sizeof(*buffer));
+	ASSERT_NE(buffer, NULL);
+
+	buffer->fd = -1;
+	buffer->size = size;
+	buffer->mirror = malloc(size);
+	ASSERT_NE(buffer->mirror, NULL);
+
+	buffer->ptr = mmap(NULL, size,
+			   PROT_READ | PROT_WRITE,
+			   MAP_PRIVATE | MAP_ANONYMOUS,
+			   buffer->fd, 0);
+	ASSERT_NE(buffer->ptr, MAP_FAILED);
+
+	/* Initialize buffer in system memory. */
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ptr[i] = i;
+
+	/* Map memory exclusively for device access. */
+	ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_EXCLUSIVE, buffer, npages);
+	ASSERT_EQ(ret, 0);
+	ASSERT_EQ(buffer->cpages, npages);
+
+	/* Check what the device read. */
+	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i], i);
+
+	ret = mprotect(buffer->ptr, size, PROT_READ);
+	ASSERT_EQ(ret, 0);
+
+	/* Simulate a device writing system memory. */
+	ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_WRITE, buffer, npages);
+	ASSERT_EQ(ret, -EPERM);
+
+	hmm_buffer_free(buffer);
+}
+
+/*
+ * Check copy-on-write works.
+ */
+TEST_F(hmm, exclusive_cow)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages;
+	unsigned long size;
+	unsigned long i;
+	int *ptr;
+	int ret;
+
+	npages = ALIGN(HMM_BUFFER_SIZE, self->page_size) >> self->page_shift;
+	ASSERT_NE(npages, 0);
+	size = npages << self->page_shift;
+
+	buffer = malloc(sizeof(*buffer));
+	ASSERT_NE(buffer, NULL);
+
+	buffer->fd = -1;
+	buffer->size = size;
+	buffer->mirror = malloc(size);
+	ASSERT_NE(buffer->mirror, NULL);
+
+	buffer->ptr = mmap(NULL, size,
+			   PROT_READ | PROT_WRITE,
+			   MAP_PRIVATE | MAP_ANONYMOUS,
+			   buffer->fd, 0);
+	ASSERT_NE(buffer->ptr, MAP_FAILED);
+
+	/* Initialize buffer in system memory. */
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ptr[i] = i;
+
+	/* Map memory exclusively for device access. */
+	ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_EXCLUSIVE, buffer, npages);
+	ASSERT_EQ(ret, 0);
+	ASSERT_EQ(buffer->cpages, npages);
+
+	fork();
+
+	/* Fault pages back to system memory and check them. */
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i]++, i);
+
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i], i+1);
+
+	hmm_buffer_free(buffer);
+}
+
 TEST_HARNESS_MAIN
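A closing note on the driver-side bookkeeping: test_hmm records how each
page is mirrored by tagging the page pointers stored in its XArray page
table (DPT_XA_TAG_ATOMIC vs. the existing DPT_XA_TAG_WRITE), rather than
keeping a separate per-page flag. A minimal kernel-style sketch of that
pattern follows; mirror_pt and the two helper functions are illustrative
names, not code from this patch, while the xa_* calls are the regular
XArray API.

	#include <linux/xarray.h>

	#define DPT_XA_TAG_ATOMIC 1UL	/* same tag value as in test_hmm.c */

	static DEFINE_XARRAY(mirror_pt);	/* stand-in for dmirror->pt */

	/* Store a page pointer with its low-order bits tagged "atomic". */
	static int record_atomic_mapping(unsigned long pfn, struct page *page)
	{
		void *entry = xa_tag_pointer(page, DPT_XA_TAG_ATOMIC);

		/* xa_store() returns the old entry, or an xa_err() pointer. */
		entry = xa_store(&mirror_pt, pfn, entry, GFP_KERNEL);
		return xa_is_err(entry) ? xa_err(entry) : 0;
	}

	/* The tag survives the round trip through the XArray, so checking
	 * for an exclusive entry is a load plus a tag comparison; the plain
	 * page pointer can still be recovered with xa_untag_pointer(). */
	static bool pfn_mapped_atomic(unsigned long pfn)
	{
		void *entry = xa_load(&mirror_pt, pfn);

		return xa_pointer_tag(entry) == DPT_XA_TAG_ATOMIC;
	}

This is why dmirror_check_atomic() above can detect a still-exclusive
range with nothing more than xa_load() and xa_pointer_tag(), and why a CPU
fault that zaps the mirror entries is enough to make
HMM_DMIRROR_CHECK_EXCLUSIVE succeed.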