From patchwork Mon Sep 15 20:11:25 2014
X-Patchwork-Submitter: Andres Lagar-Cavilla
X-Patchwork-Id: 4909041
From: Andres Lagar-Cavilla
To: Gleb Natapov, Rik van Riel, Peter Zijlstra, Mel Gorman,
	Andy Lutomirski, Andrew Morton, Andrea Arcangeli, Sasha Levin,
	Jianyu Zhan, Paul Cassella, Hugh Dickins, Peter Feiner,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Andres Lagar-Cavilla
Subject: [PATCH] kvm: Faults which trigger IO release the mmap_sem
Date: Mon, 15 Sep 2014 13:11:25 -0700
Message-Id: <1410811885-17267-1-git-send-email-andreslc@google.com>
X-Mailer: git-send-email 2.1.0.rc2.206.gedb03e5
X-Mailing-List: kvm@vger.kernel.org

When KVM handles a tdp fault it uses FOLL_NOWAIT. If the guest memory has
been swapped out or is behind a filemap, this will trigger async readahead
and return immediately. The rationale is that KVM will kick back the guest
with an "async page fault" and allow for some other guest process to take
over.

If async PFs are enabled, the fault is retried as soon as possible from a
workqueue; otherwise it is retried immediately. Either way, the retry does
not relinquish the mmap semaphore and blocks on the IO. This is a bad
thing, as other mmap semaphore users now stall, and the fault can take a
long time, depending on swap or filemap latency.

This patch ensures that both the regular and async PF paths re-enter the
fault, allowing the mmap semaphore to be relinquished while waiting on IO.

Signed-off-by: Andres Lagar-Cavilla
Acked-by: Radim Krčmář
Reviewed-by: Radim Krčmář
---
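Not part of the diff below: a minimal sketch of how a caller might use the
new helper, assuming "hva" is a host virtual address already resolved from
a memslot, that a write fault is wanted, and that the caller does not hold
mm->mmap_sem (the helper takes and releases it itself, possibly twice).

	struct page *page;
	int npages;

	npages = kvm_get_user_page_retry(current, current->mm, hva,
					 true /* write_fault */, &page);
	if (npages == 1) {
		/* "page" carries a reference from FOLL_GET; use it, then: */
		put_page(page);
	}
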
 include/linux/kvm_host.h |  9 +++++++++
 include/linux/mm.h       |  1 +
 mm/gup.c                 |  4 ++++
 virt/kvm/async_pf.c      |  4 +---
 virt/kvm/kvm_main.c      | 45 ++++++++++++++++++++++++++++++++++++++++++---
 5 files changed, 57 insertions(+), 6 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 3addcbc..704908d 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -198,6 +198,15 @@ int kvm_setup_async_pf(struct kvm_vcpu *vcpu, gva_t gva, unsigned long hva,
 int kvm_async_pf_wakeup_all(struct kvm_vcpu *vcpu);
 #endif
 
+/*
+ * Retry a fault after a gup with FOLL_NOWAIT. This properly relinquishes mmap
+ * semaphore if the filemap/swap has to wait on page lock (and retries the gup
+ * to completion after that).
+ */
+int kvm_get_user_page_retry(struct task_struct *tsk, struct mm_struct *mm,
+			    unsigned long addr, bool write_fault,
+			    struct page **pagep);
+
 enum {
 	OUTSIDE_GUEST_MODE,
 	IN_GUEST_MODE,
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ebc5f90..13e585f7 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2011,6 +2011,7 @@ static inline struct page *follow_page(struct vm_area_struct *vma,
 #define FOLL_HWPOISON	0x100	/* check page is hwpoisoned */
 #define FOLL_NUMA	0x200	/* force NUMA hinting page fault */
 #define FOLL_MIGRATION	0x400	/* wait for page to replace migration entry */
+#define FOLL_TRIED	0x800	/* a retry, previous pass started an IO */
 
 typedef int (*pte_fn_t)(pte_t *pte, pgtable_t token, unsigned long addr,
 			void *data);
diff --git a/mm/gup.c b/mm/gup.c
index 91d044b..332d1c3 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -281,6 +281,10 @@ static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma,
 		fault_flags |= FAULT_FLAG_ALLOW_RETRY;
 	if (*flags & FOLL_NOWAIT)
 		fault_flags |= FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_RETRY_NOWAIT;
+	if (*flags & FOLL_TRIED) {
+		WARN_ON_ONCE(fault_flags & FAULT_FLAG_ALLOW_RETRY);
+		fault_flags |= FAULT_FLAG_TRIED;
+	}
 
 	ret = handle_mm_fault(mm, vma, address, fault_flags);
 	if (ret & VM_FAULT_ERROR) {
diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
index d6a3d09..17b78b1 100644
--- a/virt/kvm/async_pf.c
+++ b/virt/kvm/async_pf.c
@@ -80,9 +80,7 @@ static void async_pf_execute(struct work_struct *work)
 
 	might_sleep();
 
-	down_read(&mm->mmap_sem);
-	get_user_pages(NULL, mm, addr, 1, 1, 0, NULL, NULL);
-	up_read(&mm->mmap_sem);
+	kvm_get_user_page_retry(NULL, mm, addr, 1, NULL);
 	kvm_async_page_present_sync(vcpu, apf);
 
 	spin_lock(&vcpu->async_pf.lock);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 7ef6b48..43a9ab9 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1115,6 +1115,39 @@ static int get_user_page_nowait(struct task_struct *tsk, struct mm_struct *mm,
 	return __get_user_pages(tsk, mm, start, 1, flags, page, NULL, NULL);
 }
 
+int kvm_get_user_page_retry(struct task_struct *tsk, struct mm_struct *mm,
+			    unsigned long addr, bool write_fault,
+			    struct page **pagep)
+{
+	int npages;
+	int locked = 1;
+	int flags = FOLL_TOUCH | FOLL_HWPOISON |
+		    (pagep ? FOLL_GET : 0) |
+		    (write_fault ? FOLL_WRITE : 0);
+
+	/*
+	 * Retrying fault, we get here *not* having allowed the filemap to wait
+	 * on the page lock. We should now allow waiting on the IO with the
+	 * mmap semaphore released.
+	 */
+	down_read(&mm->mmap_sem);
+	npages = __get_user_pages(tsk, mm, addr, 1, flags, pagep, NULL,
+				  &locked);
+	if (!locked) {
+		BUG_ON(npages != -EBUSY);
+		/*
+		 * The previous call has now waited on the IO. Now we can
+		 * retry and complete. Pass TRIED to ensure we do not re
+		 * schedule async IO (see e.g. filemap_fault).
+		 */
+		down_read(&mm->mmap_sem);
+		npages = __get_user_pages(tsk, mm, addr, 1, flags | FOLL_TRIED,
+					  pagep, NULL, NULL);
+	}
+	up_read(&mm->mmap_sem);
+	return npages;
+}
+
 static inline int check_user_page_hwpoison(unsigned long addr)
 {
 	int rc, flags = FOLL_TOUCH | FOLL_HWPOISON | FOLL_WRITE;
@@ -1177,9 +1210,15 @@ static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
 		npages = get_user_page_nowait(current, current->mm,
 					      addr, write_fault, page);
 		up_read(&current->mm->mmap_sem);
-	} else
-		npages = get_user_pages_fast(addr, 1, write_fault,
-					     page);
+	} else {
+		/*
+		 * By now we have tried gup_fast, and possible async_pf, and we
+		 * are certainly not atomic. Time to retry the gup, allowing
+		 * mmap semaphore to be relinquished in the case of IO.
+		 */
+		npages = kvm_get_user_page_retry(current, current->mm, addr,
+						 write_fault, page);
+	}
 
 	if (npages != 1)
 		return npages;
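
For context, a minimal sketch (not from this patch) of the ALLOW_RETRY/TRIED
protocol that FOLL_TRIED plugs into, written against the handle_mm_fault()
signature of this kernel. The function name and base_flags parameter are
illustrative only; base_flags is assumed to carry none of the retry flags,
and a real caller would re-look-up the vma after retaking mmap_sem, as
__get_user_pages() does.

	#include <linux/mm.h>
	#include <linux/sched.h>

	static int fault_in_once_with_retry(struct mm_struct *mm,
					    struct vm_area_struct *vma,
					    unsigned long address,
					    unsigned int base_flags)
	{
		int ret;

		down_read(&mm->mmap_sem);
		/* First pass: let the fault path drop mmap_sem while it waits. */
		ret = handle_mm_fault(mm, vma, address,
				      base_flags | FAULT_FLAG_ALLOW_RETRY);
		if (ret & VM_FAULT_RETRY) {
			/*
			 * mmap_sem was released by the fault path while it
			 * waited on the page. Retake it and retry once with
			 * FAULT_FLAG_TRIED so the second attempt completes
			 * instead of scheduling another asynchronous round.
			 */
			down_read(&mm->mmap_sem);
			ret = handle_mm_fault(mm, vma, address,
					      base_flags | FAULT_FLAG_TRIED);
		}
		up_read(&mm->mmap_sem);
		return ret;
	}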