From patchwork Mon Jun 16 10:09:18 2014
X-Patchwork-Submitter: David Herrmann
X-Patchwork-Id: 4357141
From: David Herrmann
To: linux-input@vger.kernel.org
Cc: Dmitry Torokhov, Benjamin Tissoires, Peter Hutterer,
	Lennart Poettering, David Herrmann
Subject: [PATCH v2] Input: evdev - add event-mask API
Date: Mon, 16 Jun 2014 12:09:18 +0200
Message-Id: <1402913358-682-1-git-send-email-dh.herrmann@gmail.com>

Hardware manufacturers group keys in the weirdest way possible. This may
cause a power-key to be grouped together with normal keyboard keys and
thus be reported on the same kernel interface. However, user-space is
often only interested in specific sets of events. For instance, daemons
dealing with system-reboot (like systemd-logind) listen for KEY_POWER,
but are not interested in any main keyboard keys. Usually, power keys
are reported via separate interfaces; however, some i8042 boards report
them in the AT matrix.
To avoid waking up those system daemons on each key-press, we had two
ideas:
 - split off KEY_POWER into a separate interface unconditionally
 - allow masking a specific set of events on evdev FDs

Splitting off KEY_POWER is a rather weird way to deal with this and may
break backwards-compatibility. It is also specific to KEY_POWER, and the
same treatment might turn out to be needed for other keys, too. Moreover,
we might end up with a huge set of input-devices just to have them
properly split.

Hence, this patch implements the second idea: an event-mask to specify
which events you are interested in. Two ioctls allow getting and setting
this mask for each event-type. If not set, all events are reported. The
type==0 entry is used the same way as in EVIOCGBIT to set the actual
EV_* mask of masked events. This way, you have a two-level filter.

The API is heavily forward-compatible regarding new event-types and
event-codes, so new user-space will be able to run on an old kernel
which doesn't know the given event-codes or event-types.

Signed-off-by: David Herrmann
Acked-by: Peter Hutterer
---
v2:
 - drop empty SYN_REPORT
 - turn evdev_get_mask_cnt() into an array-lookup
 - add documentation
 - fix coding-style
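
As an illustration of the intended user-space usage (not part of this
patch; the helper names are made up, and the struct/ioctl definitions
are the ones added by the diff below), a client that only cares about
KEY_POWER could do roughly this:

	#include <sys/ioctl.h>
	#include <linux/input.h>

	#define LONG_BITS	(8 * sizeof(unsigned long))
	#define NLONGS(x)	(((x) + LONG_BITS - 1) / LONG_BITS)

	static void set_mask_bit(unsigned long *bits, unsigned int bit)
	{
		bits[bit / LONG_BITS] |= 1UL << (bit % LONG_BITS);
	}

	static int mask_all_but_power(int fd)
	{
		unsigned long type_bits[NLONGS(EV_CNT)] = { 0 };
		unsigned long key_bits[NLONGS(KEY_CNT)] = { 0 };
		struct input_mask mask;

		/* first level: type==0 selects the EV_* type mask (as with
		 * EVIOCGBIT); leave only EV_KEY unmasked (EV_SYN is never
		 * masked anyway) */
		set_mask_bit(type_bits, EV_KEY);
		mask.type = 0;
		mask.codes_size = sizeof(type_bits);
		mask.codes_ptr = (__u64)(unsigned long)type_bits;
		if (ioctl(fd, EVIOCSMASK, &mask) < 0)
			return -1;

		/* second level: within EV_KEY, leave only KEY_POWER unmasked */
		set_mask_bit(key_bits, KEY_POWER);
		mask.type = EV_KEY;
		mask.codes_size = sizeof(key_bits);
		mask.codes_ptr = (__u64)(unsigned long)key_bits;
		return ioctl(fd, EVIOCSMASK, &mask);
	}

With both levels in place, the client's queue only ever sees KEY_POWER
key events plus the (non-empty) SYN_REPORT framing, which is exactly the
systemd-logind case described above.
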
 drivers/input/evdev.c      | 156 ++++++++++++++++++++++++++++++++++++++++++++-
 include/uapi/linux/input.h |  53 +++++++++++++++
 2 files changed, 207 insertions(+), 2 deletions(-)

diff --git a/drivers/input/evdev.c b/drivers/input/evdev.c
index fd325ec..f8367190 100644
--- a/drivers/input/evdev.c
+++ b/drivers/input/evdev.c
@@ -51,10 +51,130 @@ struct evdev_client {
 	struct list_head node;
 	int clkid;
 	bool revoked;
+	unsigned long *evmasks[EV_CNT];
 	unsigned int bufsize;
 	struct input_event buffer[];
 };
 
+static size_t evdev_get_mask_cnt(unsigned int type)
+{
+	static size_t counts[EV_CNT] = {
+		/* EV_SYN==0 is EV_CNT, _not_ SYN_CNT, see EVIOCGBIT */
+		[EV_SYN]	= EV_CNT,
+		[EV_KEY]	= KEY_CNT,
+		[EV_REL]	= REL_CNT,
+		[EV_ABS]	= ABS_CNT,
+		[EV_MSC]	= MSC_CNT,
+		[EV_SW]		= SW_CNT,
+		[EV_LED]	= LED_CNT,
+		[EV_SND]	= SND_CNT,
+		[EV_FF]		= FF_CNT,
+	};
+
+	return (type < EV_CNT) ? counts[type] : 0;
+}
+
+/* must be called with evdev-mutex held */
+static int evdev_set_mask(struct evdev_client *client,
+			  unsigned int type,
+			  const void __user *codes,
+			  u32 codes_size)
+{
+	unsigned long flags, *mask, *oldmask;
+	size_t cnt, size;
+
+	/* unknown masks are simply ignored for forward-compat */
+	cnt = evdev_get_mask_cnt(type);
+	if (!cnt)
+		return 0;
+
+	/* we allow 'codes_size > size' for forward-compat */
+	size = sizeof(unsigned long) * BITS_TO_LONGS(cnt);
+
+	mask = kzalloc(size, GFP_KERNEL);
+	if (!mask)
+		return -ENOMEM;
+
+	if (copy_from_user(mask, codes, min_t(size_t, codes_size, size))) {
+		kfree(mask);
+		return -EFAULT;
+	}
+
+	spin_lock_irqsave(&client->buffer_lock, flags);
+	oldmask = client->evmasks[type];
+	client->evmasks[type] = mask;
+	spin_unlock_irqrestore(&client->buffer_lock, flags);
+
+	kfree(oldmask);
+
+	return 0;
+}
+
+/* must be called with evdev-mutex held */
+static int evdev_get_mask(struct evdev_client *client,
+			  unsigned int type,
+			  void __user *codes,
+			  u32 codes_size)
+{
+	unsigned long *mask;
+	size_t cnt, size, min, i;
+	u8 __user *out;
+
+	/* we allow unknown types and 'codes_size > size' for forward-compat */
+	cnt = evdev_get_mask_cnt(type);
+	size = sizeof(unsigned long) * BITS_TO_LONGS(cnt);
+	min = min_t(size_t, codes_size, size);
+
+	if (cnt > 0) {
+		mask = client->evmasks[type];
+		if (mask) {
+			if (copy_to_user(codes, mask, min))
+				return -EFAULT;
+		} else {
+			/* fake mask with all bits set */
+			out = (u8 __user *)codes;
+			for (i = 0; i < min; ++i) {
+				if (put_user((u8)0xff, out + i))
+					return -EFAULT;
+			}
+		}
+	}
+
+	codes = (u8 __user *)codes + min;
+	codes_size -= min;
+
+	if (codes_size > 0 && clear_user(codes, codes_size))
+		return -EFAULT;
+
+	return 0;
+}
+
+/* requires the buffer lock to be held */
+static bool __evdev_is_masked(struct evdev_client *client,
+			      unsigned int type,
+			      unsigned int code)
+{
+	unsigned long *mask;
+	size_t cnt;
+
+	/* EV_SYN and unknown codes are never masked */
+	if (type == EV_SYN || type >= EV_CNT)
+		return false;
+
+	/* first test whether the type is masked */
+	mask = client->evmasks[0];
+	if (mask && !test_bit(type, mask))
+		return true;
+
+	/* unknown values are never masked */
+	cnt = evdev_get_mask_cnt(type);
+	if (!cnt || code >= cnt)
+		return false;
+
+	mask = client->evmasks[type];
+	return mask && !test_bit(code, mask);
+}
+
 /* flush queued events of type @type, caller must hold client->buffer_lock */
 static void __evdev_flush_queue(struct evdev_client *client, unsigned int type)
 {
@@ -177,12 +297,21 @@ static void evdev_pass_values(struct evdev_client *client,
 	spin_lock(&client->buffer_lock);
 
 	for (v = vals; v != vals + count; v++) {
+		if (__evdev_is_masked(client, v->type, v->code))
+			continue;
+
+		if (v->type == EV_SYN && v->code == SYN_REPORT) {
+			/* drop empty SYN_REPORT */
+			if (client->packet_head == client->head)
+				continue;
+
+			wakeup = true;
+		}
+
 		event.type = v->type;
 		event.code = v->code;
 		event.value = v->value;
 		__pass_event(client, &event);
-		if (v->type == EV_SYN && v->code == SYN_REPORT)
-			wakeup = true;
 	}
 
 	spin_unlock(&client->buffer_lock);
@@ -365,6 +494,7 @@ static int evdev_release(struct inode *inode, struct file *file)
 {
 	struct evdev_client *client = file->private_data;
 	struct evdev *evdev = client->evdev;
+	unsigned int i;
 
 	mutex_lock(&evdev->mutex);
 	evdev_ungrab(evdev, client);
@@ -372,6 +502,9 @@ static int evdev_release(struct inode *inode, struct file *file)
 
 	evdev_detach_client(evdev, client);
 
+	for (i = 0; i < EV_CNT; ++i)
+		kfree(client->evmasks[i]);
+
 	if (is_vmalloc_addr(client))
 		vfree(client);
 	else
@@ -811,6 +944,7 @@ static long evdev_do_ioctl(struct file *file, unsigned int cmd,
 	struct evdev *evdev = client->evdev;
 	struct input_dev *dev = evdev->handle.dev;
 	struct input_absinfo abs;
+	struct input_mask mask;
 	struct ff_effect effect;
 	int __user *ip = (int __user *)p;
 	unsigned int i, t, u, v;
@@ -872,6 +1006,24 @@ static long evdev_do_ioctl(struct file *file, unsigned int cmd,
 		else
 			return evdev_revoke(evdev, client, file);
 
+	case EVIOCGMASK:
+		if (copy_from_user(&mask, p, sizeof(mask)))
+			return -EFAULT;
+
+		return evdev_get_mask(client,
+				      mask.type,
+				      (void __user *)(unsigned long)mask.codes_ptr,
+				      mask.codes_size);
+
+	case EVIOCSMASK:
+		if (copy_from_user(&mask, p, sizeof(mask)))
+			return -EFAULT;
+
+		return evdev_set_mask(client,
+				      mask.type,
+				      (const void __user *)(unsigned long)mask.codes_ptr,
+				      mask.codes_size);
+
 	case EVIOCSCLOCKID:
 		if (copy_from_user(&i, p, sizeof(unsigned int)))
 			return -EFAULT;
diff --git a/include/uapi/linux/input.h b/include/uapi/linux/input.h
index 19df18c..4753f94 100644
--- a/include/uapi/linux/input.h
+++ b/include/uapi/linux/input.h
@@ -97,6 +97,12 @@ struct input_keymap_entry {
 	__u8  scancode[32];
 };
 
+struct input_mask {
+	__u32 type;
+	__u32 codes_size;
+	__u64 codes_ptr;
+};
+
 #define EVIOCGVERSION		_IOR('E', 0x01, int)			/* get driver version */
 #define EVIOCGID		_IOR('E', 0x02, struct input_id)	/* get device ID */
 #define EVIOCGREP		_IOR('E', 0x03, unsigned int[2])	/* get repeat settings */
@@ -154,6 +160,53 @@ struct input_keymap_entry {
 #define EVIOCGRAB		_IOW('E', 0x90, int)			/* Grab/Release device */
 #define EVIOCREVOKE		_IOW('E', 0x91, int)			/* Revoke device access */
 
+/**
+ * EVIOCGMASK - Retrieve current event-mask
+ *
+ * This retrieves the current event-mask for a specific event-type. The
+ * argument must be of type "struct input_mask" and specifies the event-type
+ * to query, the receive buffer and the size of the receive buffer.
+ *
+ * The event-mask is a per-client mask that specifies which events are
+ * forwarded to the client. Each event-code is represented by a single bit
+ * in the event-mask. If set, the event is not masked. If unset, the event
+ * is masked and will never be queued on the client's receive buffer.
+ *
+ * This ioctl provides full forward-compatibility. That means, if a kernel
+ * is queried for an unknown event-type or if the receive buffer is larger
+ * than the number of event-codes known to the kernel, the kernel returns
+ * all zeroes for those codes; that is, codes unknown to the kernel are
+ * always considered hard-masked.
+ * If the receive buffer is too small to contain the whole event-mask, a
+ * truncated mask is copied to user-space.
+ *
+ * This ioctl may fail with ENODEV in case the file is revoked. EFAULT is
+ * returned if the receive-buffer points to invalid memory. EINVAL is
+ * returned if the kernel does not implement the ioctl.
+ */
+#define EVIOCGMASK		_IOR('E', 0x92, struct input_mask)	/* Get event-masks */
+
+/**
+ * EVIOCSMASK - Set event-mask
+ *
+ * This is the counterpart to EVIOCGMASK. Instead of receiving the current
+ * event-mask, this changes the client's event-mask for a specific type. See
+ * EVIOCGMASK for a description of event-masks and the argument-type.
+ *
+ * This ioctl provides full forward-compatibility. If the passed event-type
+ * is unknown to the kernel, or if the number of codes is bigger than known
+ * to the kernel, the ioctl is still accepted and applied. However, any
+ * unknown codes are left untouched and stay masked. That is, the kernel
+ * hard-masks unknown codes regardless of what the client requests.
+ * If the new mask doesn't cover all known event-codes, all remaining codes
+ * are automatically cleared and thus masked.
+ *
+ * This ioctl may fail with ENODEV in case the file is revoked. EFAULT is
+ * returned if the passed buffer points to invalid memory. EINVAL is
+ * returned if the kernel does not implement the ioctl.
+ */
+#define EVIOCSMASK		_IOW('E', 0x93, struct input_mask)	/* Set event-masks */
+
 #define EVIOCSCLOCKID		_IOW('E', 0xa0, int)			/* Set clockid to be used for timestamps */
 
 /*
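
Again purely as an illustration (not part of the patch; dump_key_mask()
is a made-up helper), reading a mask back with EVIOCGMASK shows the
forward-compatible behaviour documented above: with no mask set, the
known part of the buffer reads back as the fake all-set default, while
any part of an oversized buffer beyond the codes known to the kernel is
cleared, i.e. hard-masked:

	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/input.h>

	static int dump_key_mask(int fd)
	{
		/* deliberately oversized: bytes beyond the kernel's EV_KEY
		 * bitmap come back zeroed */
		unsigned char bits[KEY_CNT / 8 + 64] = { 0 };
		struct input_mask mask;
		size_t i;

		mask.type = EV_KEY;
		mask.codes_size = sizeof(bits);
		mask.codes_ptr = (__u64)(unsigned long)bits;
		if (ioctl(fd, EVIOCGMASK, &mask) < 0)
			return -1; /* e.g. EINVAL if the ioctl is unknown */

		for (i = 0; i < sizeof(bits); i++)
			printf("%02x%c", bits[i], (i % 16 == 15) ? '\n' : ' ');
		printf("\n");
		return 0;
	}

The same call works against an older kernel that already supports the
ioctl but knows fewer event-codes: the extra codes simply read back as
masked.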