From patchwork Mon Jul 13 04:07:35 2015
X-Patchwork-Submitter: Bandan Das
X-Patchwork-Id: 6773891
From: Bandan Das
To: kvm@vger.kernel.org
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, mst@redhat.com,
    Eyal Moscovici, Razya Ladelsky, cgroups@vger.kernel.org, jasowang@redhat.com
Subject: [RFC PATCH 4/4] vhost: Add cgroup-aware creation of worker threads
Date: Mon, 13 Jul 2015 00:07:35 -0400
Message-Id: <1436760455-5686-5-git-send-email-bsd@redhat.com>
In-Reply-To: <1436760455-5686-1-git-send-email-bsd@redhat.com>
References: <1436760455-5686-1-git-send-email-bsd@redhat.com>

With the help of the cgroup comparison function introduced in the
previous patch, this changes the worker creation policy.
If the new device belongs to different cgroups than any of the devices
we are currently serving, we end up creating a new worker thread even
if we haven't reached the devs_per_worker threshold.

Signed-off-by: Bandan Das
---
 drivers/vhost/vhost.c | 47 +++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 39 insertions(+), 8 deletions(-)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 6a5d4c0..dc0fa37 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -261,12 +261,6 @@ static int vhost_worker(void *data)
 			use_mm(dev->mm);
 		}
 
-		/* TODO: Consider a more elegant solution */
-		if (worker->owner != dev->owner) {
-			/* Should check for return value */
-			cgroup_attach_task_all(dev->owner, current);
-			worker->owner = dev->owner;
-		}
 		work->fn(work);
 		if (need_resched())
 			schedule();
@@ -278,6 +272,36 @@ static int vhost_worker(void *data)
 	return 0;
 }
 
+struct vhost_attach_cgroups_struct {
+	struct vhost_work work;
+	struct task_struct *owner;
+	int ret;
+};
+
+static void vhost_attach_cgroups_work(struct vhost_work *work)
+{
+	struct vhost_attach_cgroups_struct *s;
+
+	s = container_of(work, struct vhost_attach_cgroups_struct, work);
+	s->ret = cgroup_attach_task_all(s->owner, current);
+}
+
+static void vhost_attach_cgroups(struct vhost_dev *dev,
+				 struct vhost_worker *worker)
+{
+	struct vhost_attach_cgroups_struct attach;
+
+	attach.owner = dev->owner;
+	vhost_work_init(dev, &attach.work, vhost_attach_cgroups_work);
+	vhost_work_queue(worker, &attach.work);
+	vhost_work_flush(worker, &attach.work);
+
+	if (!attach.ret)
+		worker->owner = dev->owner;
+
+	dev->err = attach.ret;
+}
+
 static void vhost_create_worker(struct vhost_dev *dev)
 {
 	struct vhost_worker *worker;
@@ -300,8 +324,14 @@ static void vhost_create_worker(struct vhost_dev *dev)
 
 	spin_lock_init(&worker->work_lock);
 	INIT_LIST_HEAD(&worker->work_list);
+
+	/* attach to the cgroups of the process that created us */
+	vhost_attach_cgroups(dev, worker);
+	if (dev->err)
+		goto therror;
+	worker->owner = dev->owner;
+
 	list_add(&worker->node, &pool->workers);
-	worker->owner = NULL;
 	worker->num_devices++;
 	total_vhost_workers++;
 	dev->worker = worker;
@@ -320,7 +350,8 @@ static int vhost_dev_assign_worker(struct vhost_dev *dev)
 
 	mutex_lock(&vhost_pool->pool_lock);
 	list_for_each_entry(worker, &vhost_pool->workers, node) {
-		if (worker->num_devices < devs_per_worker) {
+		if (worker->num_devices < devs_per_worker &&
+		    (!cgroup_match_groups(dev->owner, worker->owner))) {
 			dev->worker = worker;
 			dev->worker_assigned = true;
 			worker->num_devices++;
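
For reviewers who want to see the resulting policy in isolation, below is a
minimal, self-contained user-space C sketch of the assignment logic this
patch implements. It assumes, as the diff suggests, that
cgroup_match_groups() returns 0 when two tasks' cgroup sets match (modeled
here with strcmp); every other name (match_groups, assign_worker,
DEVS_PER_WORKER, the cgroup paths) is a hypothetical stand-in for the
kernel-side vhost structures, not the real API.

#include <stdio.h>
#include <string.h>

#define DEVS_PER_WORKER	4	/* stand-in for the devs_per_worker knob */
#define MAX_WORKERS	8

struct worker {
	char cgroup_path[64];	/* stands in for the owner's cgroup set */
	int num_devices;
};

static struct worker workers[MAX_WORKERS];
static int nr_workers;

/* Stand-in for cgroup_match_groups(): 0 means "same groups", like strcmp. */
static int match_groups(const char *a, const char *b)
{
	return strcmp(a, b);
}

/*
 * Mirrors vhost_dev_assign_worker(): reuse an existing worker only if it
 * serves fewer than DEVS_PER_WORKER devices *and* its owner lives in the
 * same cgroups; otherwise fall through and create a new worker, as
 * vhost_create_worker() would.
 */
static struct worker *assign_worker(const char *cgroup_path)
{
	int i;

	for (i = 0; i < nr_workers; i++) {
		if (workers[i].num_devices < DEVS_PER_WORKER &&
		    !match_groups(cgroup_path, workers[i].cgroup_path)) {
			workers[i].num_devices++;
			return &workers[i];
		}
	}

	if (nr_workers == MAX_WORKERS)
		return NULL;
	snprintf(workers[nr_workers].cgroup_path,
		 sizeof(workers[nr_workers].cgroup_path), "%s", cgroup_path);
	workers[nr_workers].num_devices = 1;
	return &workers[nr_workers++];
}

int main(void)
{
	/* Two devices owned by the same cgroup share a worker... */
	struct worker *a = assign_worker("/sys/fs/cgroup/vm1");
	struct worker *b = assign_worker("/sys/fs/cgroup/vm1");
	/* ...but a device from a different cgroup gets its own worker,
	 * even though the first worker is below the threshold. */
	struct worker *c = assign_worker("/sys/fs/cgroup/vm2");

	printf("a and b share a worker: %s\n", a == b ? "yes" : "no");
	printf("c shares a worker with a: %s\n", a == c ? "yes" : "no");
	return 0;
}

The effect of the second condition in assign_worker() shows up in main():
vm1's two devices share one worker, while vm2's device gets a fresh worker
even though the first one has spare capacity, so the cgroup controllers
keep charging each VM's vhost work to the right group.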