From patchwork Tue Sep 18 06:38:49 2018
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 10603785
From: Nadav Amit <namit@vmware.com>
To: Arnd Bergmann, Greg Kroah-Hartman
CC: Nadav Amit, "Michael S. Tsirkin", Jason Wang
Subject: [PATCH 15/19] mm/balloon_compaction: list interfaces
Date: Mon, 17 Sep 2018 23:38:49 -0700
Message-ID: <20180918063853.198332-16-namit@vmware.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20180918063853.198332-1-namit@vmware.com>
References: <20180918063853.198332-1-namit@vmware.com>
MIME-Version: 1.0

Introduce balloon interfaces for enqueueing and dequeueing a list of
pages. These interfaces reduce the overhead of saving and restoring
IRQs by batching the operations: the IRQ state is saved and restored
once per batch instead of once per page. In addition, they do not panic
if the list of pages is empty.

Cc: "Michael S. Tsirkin"
Cc: Jason Wang
Cc: linux-mm@kvack.org
Cc: virtualization@lists.linux-foundation.org
Reviewed-by: Xavier Deguillard
Signed-off-by: Nadav Amit
---
 include/linux/balloon_compaction.h |   4 +
 mm/balloon_compaction.c            | 139 +++++++++++++++++++++--------
 2 files changed, 105 insertions(+), 38 deletions(-)

diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
index 53051f3d8f25..2c5a8e09e413 100644
--- a/include/linux/balloon_compaction.h
+++ b/include/linux/balloon_compaction.h
@@ -72,6 +72,10 @@ extern struct page *balloon_page_alloc(void);
 extern void balloon_page_enqueue(struct balloon_dev_info *b_dev_info,
                                  struct page *page);
 extern struct page *balloon_page_dequeue(struct balloon_dev_info *b_dev_info);
+extern void balloon_page_list_enqueue(struct balloon_dev_info *b_dev_info,
+                                      struct list_head *pages);
+extern int balloon_page_list_dequeue(struct balloon_dev_info *b_dev_info,
+                                     struct list_head *pages, int n_req_pages);
 
 static inline void balloon_devinfo_init(struct balloon_dev_info *balloon)
 {
diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
index a6c0efb3544f..b920c2a10d6f 100644
--- a/mm/balloon_compaction.c
+++ b/mm/balloon_compaction.c
@@ -10,6 +10,100 @@
 #include
 #include
 
+static int balloon_page_enqueue_one(struct balloon_dev_info *b_dev_info,
+                                    struct page *page)
+{
+        /*
+         * Block others from accessing the 'page' when we get around to
+         * establishing additional references. We should be the only one
+         * holding a reference to the 'page' at this point.
+         */
+        if (!trylock_page(page)) {
+                WARN_ONCE(1, "balloon inflation failed to enqueue page\n");
+                return -EFAULT;
+        }
+        list_del(&page->lru);
+        balloon_page_insert(b_dev_info, page);
+        unlock_page(page);
+        __count_vm_event(BALLOON_INFLATE);
+        return 0;
+}
+
+/**
+ * balloon_page_list_enqueue() - inserts a list of pages into the balloon page
+ *                               list.
+ * @b_dev_info: balloon device descriptor where we will insert the new pages
+ * @pages: pages to enqueue - allocated using balloon_page_alloc.
+ *
+ * Driver must call this function to properly enqueue balloon pages before
+ * definitively removing them from the guest system.
+ */
+void balloon_page_list_enqueue(struct balloon_dev_info *b_dev_info,
+                               struct list_head *pages)
+{
+        struct page *page, *tmp;
+        unsigned long flags;
+
+        spin_lock_irqsave(&b_dev_info->pages_lock, flags);
+        list_for_each_entry_safe(page, tmp, pages, lru)
+                balloon_page_enqueue_one(b_dev_info, page);
+        spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
+}
+EXPORT_SYMBOL_GPL(balloon_page_list_enqueue);
+
+/**
+ * balloon_page_list_dequeue() - removes pages from the balloon's page list
+ *                               and returns them in a list.
+ * @b_dev_info: balloon device descriptor where we will grab pages from.
+ * @pages: pointer to the list of pages that will be returned to the caller.
+ * @n_req_pages: number of requested pages.
+ *
+ * Driver must call this function to properly de-allocate previously enlisted
+ * balloon pages before definitively releasing them back to the guest system.
+ * This function tries to remove @n_req_pages from the ballooned pages and
+ * return them to the caller in the @pages list.
+ *
+ * Note that this function may fail to dequeue some pages temporarily, because
+ * they have been isolated for compaction.
+ *
+ * Return: number of pages that were added to the @pages list.
+ */
+int balloon_page_list_dequeue(struct balloon_dev_info *b_dev_info,
+                              struct list_head *pages, int n_req_pages)
+{
+        struct page *page, *tmp;
+        unsigned long flags;
+        int n_pages = 0;
+
+        spin_lock_irqsave(&b_dev_info->pages_lock, flags);
+        list_for_each_entry_safe(page, tmp, &b_dev_info->pages, lru) {
+                /*
+                 * Block others from accessing the 'page' while we get around
+                 * establishing additional references and preparing the 'page'
+                 * to be released by the balloon driver.
+                 */
+                if (!trylock_page(page))
+                        continue;
+
+                if (IS_ENABLED(CONFIG_BALLOON_COMPACTION) &&
+                    PageIsolated(page)) {
+                        /* raced with isolation */
+                        unlock_page(page);
+                        continue;
+                }
+                balloon_page_delete(page);
+                __count_vm_event(BALLOON_DEFLATE);
+                unlock_page(page);
+                list_add(&page->lru, pages);
+                if (++n_pages >= n_req_pages)
+                        break;
+        }
+        spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
+
+        return n_pages;
+}
+EXPORT_SYMBOL_GPL(balloon_page_list_dequeue);
+
 /*
  * balloon_page_alloc - allocates a new page for insertion into the balloon
  *                      page list.
@@ -44,17 +138,9 @@ void balloon_page_enqueue(struct balloon_dev_info *b_dev_info,
 {
         unsigned long flags;
 
-        /*
-         * Block others from accessing the 'page' when we get around to
-         * establishing additional references. We should be the only one
-         * holding a reference to the 'page' at this point.
-         */
-        BUG_ON(!trylock_page(page));
         spin_lock_irqsave(&b_dev_info->pages_lock, flags);
-        balloon_page_insert(b_dev_info, page);
-        __count_vm_event(BALLOON_INFLATE);
+        balloon_page_enqueue_one(b_dev_info, page);
         spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
-        unlock_page(page);
 }
 EXPORT_SYMBOL_GPL(balloon_page_enqueue);
 
@@ -71,36 +157,13 @@ EXPORT_SYMBOL_GPL(balloon_page_enqueue);
  */
 struct page *balloon_page_dequeue(struct balloon_dev_info *b_dev_info)
 {
-        struct page *page, *tmp;
         unsigned long flags;
-        bool dequeued_page;
+        LIST_HEAD(pages);
+        int n_pages;
 
-        dequeued_page = false;
-        spin_lock_irqsave(&b_dev_info->pages_lock, flags);
-        list_for_each_entry_safe(page, tmp, &b_dev_info->pages, lru) {
-                /*
-                 * Block others from accessing the 'page' while we get around
-                 * establishing additional references and preparing the 'page'
-                 * to be released by the balloon driver.
-                 */
-                if (trylock_page(page)) {
-#ifdef CONFIG_BALLOON_COMPACTION
-                        if (PageIsolated(page)) {
-                                /* raced with isolation */
-                                unlock_page(page);
-                                continue;
-                        }
-#endif
-                        balloon_page_delete(page);
-                        __count_vm_event(BALLOON_DEFLATE);
-                        unlock_page(page);
-                        dequeued_page = true;
-                        break;
-                }
-        }
-        spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
+        n_pages = balloon_page_list_dequeue(b_dev_info, &pages, 1);
 
-        if (!dequeued_page) {
+        if (n_pages != 1) {
                 /*
                  * If we are unable to dequeue a balloon page because the page
                  * list is empty and there is no isolated pages, then something
@@ -113,9 +176,9 @@ struct page *balloon_page_dequeue(struct balloon_dev_info *b_dev_info)
                              !b_dev_info->isolated_pages))
                         BUG();
                 spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
-                page = NULL;
+                return NULL;
         }
-        return page;
+        return list_first_entry(&pages, struct page, lru);
 }
 EXPORT_SYMBOL_GPL(balloon_page_dequeue);
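
For illustration, here is a minimal sketch (not part of the patch) of how a
balloon driver could consume the new batched interfaces instead of calling
balloon_page_enqueue()/balloon_page_dequeue() once per page. The
example_inflate()/example_deflate() names, the 'nr' counts, and the
page-release step are assumptions made for the sketch, not code from this
series.

/*
 * Illustrative sketch only: batched inflate/deflate built on
 * balloon_page_list_enqueue()/balloon_page_list_dequeue().
 */
#include <linux/mm.h>
#include <linux/list.h>
#include <linux/balloon_compaction.h>

/* Inflate: allocate up to 'nr' pages and enqueue them in one batch. */
static int example_inflate(struct balloon_dev_info *b_dev_info, int nr)
{
        LIST_HEAD(pages);
        struct page *page;
        int i;

        for (i = 0; i < nr; i++) {
                page = balloon_page_alloc();
                if (!page)
                        break;
                /* Collect freshly allocated pages on a private list. */
                list_add(&page->lru, &pages);
        }

        /* One pages_lock acquisition (and IRQ save/restore) per batch. */
        balloon_page_list_enqueue(b_dev_info, &pages);
        return i;
}

/* Deflate: dequeue up to 'nr' pages in one batch and release them. */
static int example_deflate(struct balloon_dev_info *b_dev_info, int nr)
{
        LIST_HEAD(pages);
        struct page *page, *tmp;
        int n_pages;

        /* May return fewer than 'nr' pages, e.g. if some are isolated. */
        n_pages = balloon_page_list_dequeue(b_dev_info, &pages, nr);

        list_for_each_entry_safe(page, tmp, &pages, lru) {
                list_del(&page->lru);
                /* Driver-specific release back to the guest. */
                put_page(page);
        }
        return n_pages;
}

Because balloon_page_list_dequeue() may return fewer pages than requested
while some pages are isolated for compaction, a driver using it would
typically retry later or tolerate partial batches.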