From patchwork Wed Oct 17 08:58:01 2018
X-Patchwork-Submitter: Kuo-Hsin Yang
X-Patchwork-Id: 10645105
From: Kuo-Hsin Yang
To: vovoy@chromium.org
Cc: akpm@linux-foundation.org, chris@chris-wilson.co.uk, corbet@lwn.net,
    dave.hansen@intel.com, hoegsberg@chromium.org, hughd@google.com,
    intel-gfx@lists.freedesktop.org, joonas.lahtinen@linux.intel.com,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, marcheu@chromium.org,
    mhocko@suse.com, peterz@infradead.org
Subject: [PATCH v2] shmem, drm/i915: mark pinned shmemfs pages as unevictable
Date: Wed, 17 Oct 2018 16:58:01 +0800
Message-Id: <20181017085801.220742-1-vovoy@chromium.org>
In-Reply-To: <20181016174300.197906-1-vovoy@chromium.org>
References: <20181016174300.197906-1-vovoy@chromium.org>

The i915 driver uses shmemfs to allocate backing storage for gem objects.
These shmemfs pages can be pinned (their reference count raised) by
shmem_read_mapping_page_gfp(). When many pages are pinned, vmscan wastes a
lot of time scanning them. In an extreme case, when all pages in the
inactive anon LRU are pinned and only the inactive anon LRU is scanned due
to inactive_ratio, the system cannot swap and invokes the oom-killer.
Mark these pinned pages as unevictable to speed up vmscan.

By exporting shmem_unlock_mapping, drivers can:

1. Mark a shmemfs address space as unevictable with
   mapping_set_unevictable(); vmscan will then move pages in that address
   space to the unevictable list.
2. Mark the address space as evictable again with
   mapping_clear_unevictable(), and move its pages back to the evictable
   list with shmem_unlock_mapping().
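The two steps above amount to the following driver-side pattern (a sketch
only, using the helpers named in this patch; it assumes a gem object whose
backing file obj->base.filp lives on shmemfs, and is not a standalone
program):

	/* Pin: keep vmscan from scanning these pages. */
	struct address_space *mapping = file_inode(obj->base.filp)->i_mapping;

	mapping_set_unevictable(mapping);
	/* ... pin pages, e.g. via shmem_read_mapping_page_gfp() ... */

	/* Unpin: allow reclaim again and move the pages back to the
	 * evictable lists. */
	mapping_clear_unevictable(mapping);
	shmem_unlock_mapping(mapping);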
This patch was inspired by Chris Wilson's change [1].

[1]: https://patchwork.kernel.org/patch/9768741/

Signed-off-by: Kuo-Hsin Yang
---
Changes for v2: Squashed the two patches.

 Documentation/vm/unevictable-lru.rst | 4 +++-
 drivers/gpu/drm/i915/i915_gem.c      | 8 ++++++++
 mm/shmem.c                           | 2 ++
 3 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/Documentation/vm/unevictable-lru.rst b/Documentation/vm/unevictable-lru.rst
index fdd84cb8d511..a812fb55136d 100644
--- a/Documentation/vm/unevictable-lru.rst
+++ b/Documentation/vm/unevictable-lru.rst
@@ -143,7 +143,7 @@ using a number of wrapper functions:
 	Query the address space, and return true if it is completely
 	unevictable.
 
-These are currently used in two places in the kernel:
+These are currently used in three places in the kernel:
 
 (1) By ramfs to mark the address spaces of its inodes when they are created,
     and this mark remains for the life of the inode.
@@ -154,6 +154,8 @@ These are currently used in two places in the kernel:
     swapped out; the application must touch the pages manually if it wants to
     ensure they're in memory.
 
+ (3) By the i915 driver to mark pinned address space until it's unpinned.
+
 
 Detecting Unevictable Pages
 ---------------------------
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index fcc73a6ab503..e0ff5b736128 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2390,6 +2390,7 @@ i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj,
 {
 	struct sgt_iter sgt_iter;
 	struct page *page;
+	struct address_space *mapping;
 
 	__i915_gem_object_release_shmem(obj, pages, true);
 
@@ -2409,6 +2410,10 @@ i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj,
 	}
 	obj->mm.dirty = false;
 
+	mapping = file_inode(obj->base.filp)->i_mapping;
+	mapping_clear_unevictable(mapping);
+	shmem_unlock_mapping(mapping);
+
 	sg_free_table(pages);
 	kfree(pages);
 }
@@ -2551,6 +2556,7 @@ static int i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 	 * Fail silently without starting the shrinker
 	 */
 	mapping = obj->base.filp->f_mapping;
+	mapping_set_unevictable(mapping);
 	noreclaim = mapping_gfp_constraint(mapping, ~__GFP_RECLAIM);
 	noreclaim |= __GFP_NORETRY | __GFP_NOWARN;
 
@@ -2664,6 +2670,8 @@ static int i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 err_pages:
 	for_each_sgt_page(page, sgt_iter, st)
 		put_page(page);
+	mapping_clear_unevictable(mapping);
+	shmem_unlock_mapping(mapping);
 	sg_free_table(st);
 	kfree(st);
diff --git a/mm/shmem.c b/mm/shmem.c
index 446942677cd4..d1ce34c09df6 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -786,6 +786,7 @@ void shmem_unlock_mapping(struct address_space *mapping)
 		cond_resched();
 	}
 }
+EXPORT_SYMBOL_GPL(shmem_unlock_mapping);
 
 /*
  * Remove range of pages and swap entries from radix tree, and free them.
@@ -3874,6 +3875,7 @@ int shmem_lock(struct file *file, int lock, struct user_struct *user)
 void shmem_unlock_mapping(struct address_space *mapping)
 {
 }
+EXPORT_SYMBOL_GPL(shmem_unlock_mapping);
 
 #ifdef CONFIG_MMU
 unsigned long shmem_get_unmapped_area(struct file *file,