From patchwork Tue Apr 4 22:11:27 2017
X-Patchwork-Submitter: Matthew Auld
X-Patchwork-Id: 9662687
From: Matthew Auld
To: intel-gfx@lists.freedesktop.org
Date: Tue, 4 Apr 2017 23:11:27 +0100
Message-Id: <20170404221128.3943-18-matthew.auld@intel.com>
In-Reply-To: <20170404221128.3943-1-matthew.auld@intel.com>
References: <20170404221128.3943-1-matthew.auld@intel.com>
Subject: [Intel-gfx] [PATCH 17/18] mm/shmem: tweak the huge-page interface

In its current form, huge pages through shmemfs are controlled at the
super-block level and are disabled by default, so to enable huge pages
for a shmem-backed gem object we would need to re-mount the fs with the
huge= argument; but for drm the mount is not user visible, so good luck
with that. The other option is the global sysfs knob shmem_enabled,
which exposes the same huge= options, with the addition of DENY and
FORCE.

Neither option seems really workable. What we probably want is to be
able to control the use of huge pages at the time of pinning the backing
storage for a particular gem object, and only where it makes sense given
the size of the object. One caveat is when we write into the page cache
prior to pinning the backing storage. I played around with a bunch of
ideas but in the end just settled on a driver-overridable huge option
embedded in shmem_inode_info. Thoughts?
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
---
 include/linux/shmem_fs.h |  1 +
 mm/shmem.c               | 10 ++++++++--
 2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index a7d6bd2a918f..001be751420d 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -21,6 +21,7 @@ struct shmem_inode_info {
 	struct shared_policy	policy;		/* NUMA memory alloc policy */
 	struct simple_xattrs	xattrs;		/* list of xattrs */
 	struct inode		vfs_inode;
+	bool			huge;		/* driver override shmem_huge */
 };
 
 struct shmem_sb_info {
diff --git a/mm/shmem.c b/mm/shmem.c
index e67d6ba4e98e..879a9e514afe 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1723,6 +1723,9 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 		/* shmem_symlink() */
 		if (mapping->a_ops != &shmem_aops)
 			goto alloc_nohuge;
+		/* driver override shmem_huge */
+		if (info->huge)
+			goto alloc_huge;
 		if (shmem_huge == SHMEM_HUGE_DENY || sgp_huge == SGP_NOHUGE)
 			goto alloc_nohuge;
 		if (shmem_huge == SHMEM_HUGE_FORCE)
@@ -2000,6 +2003,7 @@ unsigned long shmem_get_unmapped_area(struct file *file,
 	unsigned long inflated_len;
 	unsigned long inflated_addr;
 	unsigned long inflated_offset;
+	struct shmem_inode_info *info = SHMEM_I(file_inode(file));
 
 	if (len > TASK_SIZE)
 		return -ENOMEM;
@@ -2016,7 +2020,7 @@ unsigned long shmem_get_unmapped_area(struct file *file,
 	if (addr > TASK_SIZE - len)
 		return addr;
 
-	if (shmem_huge == SHMEM_HUGE_DENY)
+	if (!info->huge && shmem_huge == SHMEM_HUGE_DENY)
 		return addr;
 	if (len < HPAGE_PMD_SIZE)
 		return addr;
@@ -2030,7 +2034,7 @@ unsigned long shmem_get_unmapped_area(struct file *file,
 	if (uaddr)
 		return addr;
 
-	if (shmem_huge != SHMEM_HUGE_FORCE) {
+	if (!info->huge && shmem_huge != SHMEM_HUGE_FORCE) {
 		struct super_block *sb;
 
 		if (file) {
@@ -4034,6 +4038,8 @@ bool shmem_huge_enabled(struct vm_area_struct *vma)
 	loff_t i_size;
 	pgoff_t off;
 
+	if (SHMEM_I(inode)->huge)
+		return true;
 	if (shmem_huge == SHMEM_HUGE_FORCE)
 		return true;
 	if (shmem_huge == SHMEM_HUGE_DENY)