From patchwork Fri May 18 19:45:03 2018
X-Patchwork-Submitter: "Matthew Wilcox (Oracle)"
X-Patchwork-Id: 10411975
From: Matthew Wilcox
To: linux-mm@kvack.org
Cc: Matthew Wilcox, Andrew Morton, "Kirill A. Shutemov", Christoph Lameter,
 Lai Jiangshan, Pekka Enberg, Vlastimil Babka, Dave Hansen, Jérôme Glisse
Subject: [PATCH v6 01/17] s390: Use _refcount for pgtables
Date: Fri, 18 May 2018 12:45:03 -0700
Message-Id: <20180518194519.3820-2-willy@infradead.org>
In-Reply-To: <20180518194519.3820-1-willy@infradead.org>
References: <20180518194519.3820-1-willy@infradead.org>

From: Matthew Wilcox

s390 borrows the storage used for _mapcount in struct page in order to
account whether the bottom or top half is being used for 2kB page
tables.  I want to use that for something else, so use the top byte of
_refcount instead of the bottom byte of _mapcount.  _refcount may
temporarily be incremented by other CPUs that see a stale pointer to
this page in the page cache, but each CPU can only increment it by one,
and there are no systems with 2^24 CPUs today, so they will not change
the upper byte of _refcount.  We do have to be a little careful not to
lose any of their writes (as they will subsequently decrement the
counter).
Signed-off-by: Matthew Wilcox
Acked-by: Martin Schwidefsky
---
 arch/s390/mm/pgalloc.c | 21 ++++++++++++---------
 1 file changed, 12 insertions(+), 9 deletions(-)

diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
index 562f72955956..84bd6329a88d 100644
--- a/arch/s390/mm/pgalloc.c
+++ b/arch/s390/mm/pgalloc.c
@@ -190,14 +190,15 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 		if (!list_empty(&mm->context.pgtable_list)) {
 			page = list_first_entry(&mm->context.pgtable_list,
 						struct page, lru);
-			mask = atomic_read(&page->_mapcount);
+			mask = atomic_read(&page->_refcount) >> 24;
 			mask = (mask | (mask >> 4)) & 3;
 			if (mask != 3) {
 				table = (unsigned long *) page_to_phys(page);
 				bit = mask & 1;		/* =1 -> second 2K */
 				if (bit)
 					table += PTRS_PER_PTE;
-				atomic_xor_bits(&page->_mapcount, 1U << bit);
+				atomic_xor_bits(&page->_refcount,
+							1U << (bit + 24));
 				list_del(&page->lru);
 			}
 		}
@@ -218,12 +219,12 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 	table = (unsigned long *) page_to_phys(page);
 	if (mm_alloc_pgste(mm)) {
 		/* Return 4K page table with PGSTEs */
-		atomic_set(&page->_mapcount, 3);
+		atomic_xor_bits(&page->_refcount, 3 << 24);
 		memset64((u64 *)table, _PAGE_INVALID, PTRS_PER_PTE);
 		memset64((u64 *)table + PTRS_PER_PTE, 0, PTRS_PER_PTE);
 	} else {
 		/* Return the first 2K fragment of the page */
-		atomic_set(&page->_mapcount, 1);
+		atomic_xor_bits(&page->_refcount, 1 << 24);
 		memset64((u64 *)table, _PAGE_INVALID, 2 * PTRS_PER_PTE);
 		spin_lock_bh(&mm->context.lock);
 		list_add(&page->lru, &mm->context.pgtable_list);
@@ -242,7 +243,8 @@ void page_table_free(struct mm_struct *mm, unsigned long *table)
 		/* Free 2K page table fragment of a 4K page */
 		bit = (__pa(table) & ~PAGE_MASK)/(PTRS_PER_PTE*sizeof(pte_t));
 		spin_lock_bh(&mm->context.lock);
-		mask = atomic_xor_bits(&page->_mapcount, 1U << bit);
+		mask = atomic_xor_bits(&page->_refcount, 1U << (bit + 24));
+		mask >>= 24;
 		if (mask & 3)
 			list_add(&page->lru, &mm->context.pgtable_list);
 		else
@@ -253,7 +255,6 @@ void page_table_free(struct mm_struct *mm, unsigned long *table)
 	}

 	pgtable_page_dtor(page);
-	atomic_set(&page->_mapcount, -1);
 	__free_page(page);
 }

@@ -274,7 +275,8 @@ void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table,
 	}
 	bit = (__pa(table) & ~PAGE_MASK) / (PTRS_PER_PTE*sizeof(pte_t));
 	spin_lock_bh(&mm->context.lock);
-	mask = atomic_xor_bits(&page->_mapcount, 0x11U << bit);
+	mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24));
+	mask >>= 24;
 	if (mask & 3)
 		list_add_tail(&page->lru, &mm->context.pgtable_list);
 	else
@@ -296,12 +298,13 @@ static void __tlb_remove_table(void *_table)
 		break;
 	case 1:		/* lower 2K of a 4K page table */
 	case 2:		/* higher 2K of a 4K page table */
-		if (atomic_xor_bits(&page->_mapcount, mask << 4) != 0)
+		mask = atomic_xor_bits(&page->_refcount, mask << (4 + 24));
+		mask >>= 24;
+		if (mask != 0)
 			break;
 		/* fallthrough */
 	case 3:		/* 4K page table with pgstes */
 		pgtable_page_dtor(page);
-		atomic_set(&page->_mapcount, -1);
 		__free_page(page);
 		break;
 	}