From patchwork Wed Apr 20 22:25:27 2022
From: Yury Norov <yury.norov@gmail.com>
To: linux-kernel@vger.kernel.org, Alexander Gordeev, Andy Shevchenko,
    Christian Borntraeger, Claudio Imbrenda, David Hildenbrand,
    Heiko Carstens, Janosch Frank, Rasmus Villemoes, Sven Schnelle,
    Vasily Gorbik, Yury Norov, linux-s390@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH 1/4] lib/bitmap: extend comment for bitmap_(from,to)_arr32()
Date: Wed, 20 Apr 2022 15:25:27 -0700
Message-Id: <20220420222530.910125-2-yury.norov@gmail.com>
In-Reply-To: <20220420222530.910125-1-yury.norov@gmail.com>
References: <20220420222530.910125-1-yury.norov@gmail.com>

On LE systems bitmaps are naturally ordered, so we can potentially use the
bitmap_copy routines when converting from 32-bit arrays, even if the host
system is 64-bit. But that may lead to out-of-bounds access due to an unsafe
typecast, and the bitmap_(from,to)_arr32 comment doesn't explain that clearly.

Signed-off-by: Yury Norov <yury.norov@gmail.com>
---
 include/linux/bitmap.h | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/include/linux/bitmap.h b/include/linux/bitmap.h
index a89b626d0fbe..10d805c2893c 100644
--- a/include/linux/bitmap.h
+++ b/include/linux/bitmap.h
@@ -271,8 +271,12 @@ static inline void bitmap_copy_clear_tail(unsigned long *dst,
 }
 
 /*
- * On 32-bit systems bitmaps are represented as u32 arrays internally, and
- * therefore conversion is not needed when copying data from/to arrays of u32.
+ * On 32-bit systems bitmaps are represented as u32 arrays internally. On LE64
+ * machines the order of the hi and lo parts of numbers matches the bitmap
+ * structure. In both cases conversion is not needed when copying data from/to
+ * arrays of u32. But in the LE64 case, the typecast in bitmap_copy_clear_tail()
+ * may lead to out-of-bounds access. To avoid that, both the LE and BE variants
+ * of 64-bit architectures avoid bitmap_copy_clear_tail().
  */
 #if BITS_PER_LONG == 64
 void bitmap_from_arr32(unsigned long *bitmap, const u32 *buf,
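For context, here is a minimal standalone sketch of the conversion
bitmap_from_arr32() performs on a 64-bit host (plain C, illustrative helper
name and test values, not the kernel implementation). It shows why simply
casting a u32 array to unsigned long * and copying word by word would not be
safe: with an odd number of u32 elements, the final 64-bit load would read
four bytes past the end of the source array, while explicit widening never
touches memory beyond it.

/*
 * Standalone illustration only -- not kernel code. The helper name and the
 * values below are made up for the example.
 */
#include <stdint.h>
#include <stdio.h>

/* Widen pairs of u32 words into 64-bit bitmap words, low word first. */
static void from_arr32(uint64_t *bitmap, const uint32_t *buf, unsigned int nbits)
{
        unsigned int i, words32 = (nbits + 31) / 32;

        for (i = 0; i < words32; i++) {
                if (i % 2 == 0)
                        bitmap[i / 2] = buf[i];                   /* low half  */
                else
                        bitmap[i / 2] |= (uint64_t)buf[i] << 32;  /* high half */
        }
        if (nbits % 64)         /* clear tail bits beyond nbits */
                bitmap[(nbits - 1) / 64] &= ~0ULL >> (64 - nbits % 64);
}

int main(void)
{
        /* Three u32 words: an odd count, so a raw 64-bit copy would over-read. */
        uint32_t src[3] = { 0xdeadbeef, 0x12345678, 0xffffffff };
        uint64_t dst[2];

        from_arr32(dst, src, 96);
        printf("%016llx %016llx\n",
               (unsigned long long)dst[0], (unsigned long long)dst[1]);
        return 0;
}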
From patchwork Wed Apr 20 22:25:28 2022
From: Yury Norov <yury.norov@gmail.com>
To: linux-kernel@vger.kernel.org, Alexander Gordeev, Andy Shevchenko,
    Christian Borntraeger, Claudio Imbrenda, David Hildenbrand,
    Heiko Carstens, Janosch Frank, Rasmus Villemoes, Sven Schnelle,
    Vasily Gorbik, Yury Norov, linux-s390@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH 2/4] lib: add bitmap_{from,to}_arr64
Date: Wed, 20 Apr 2022 15:25:28 -0700
Message-Id: <20220420222530.910125-3-yury.norov@gmail.com>
In-Reply-To: <20220420222530.910125-1-yury.norov@gmail.com>
References: <20220420222530.910125-1-yury.norov@gmail.com>

Manipulating 64-bit arrays with bitmap functions is potentially dangerous
because on 32-bit BE machines the order of halfwords doesn't match the
bitmap layout. Another issue is that the compiler may warn about
out-of-bounds access. This patch adds bitmap_{from,to}_arr64 functions in
addition to the existing bitmap_{from,to}_arr32.

Signed-off-by: Yury Norov <yury.norov@gmail.com>
---
 include/linux/bitmap.h | 23 +++++++++++++++++----
 lib/bitmap.c           | 48 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 67 insertions(+), 4 deletions(-)

diff --git a/include/linux/bitmap.h b/include/linux/bitmap.h
index 10d805c2893c..f78c534fb814 100644
--- a/include/linux/bitmap.h
+++ b/include/linux/bitmap.h
@@ -292,6 +292,24 @@ void bitmap_to_arr32(u32 *buf, const unsigned long *bitmap,
                        (const unsigned long *) (bitmap), (nbits))
 #endif
 
+/*
+ * On 64-bit systems bitmaps are represented as u64 arrays internally. On LE32
+ * machines the order of the hi and lo parts of numbers matches the bitmap
+ * structure. In both cases conversion is not needed when copying data from/to
+ * arrays of u64.
+ */
+#if (BITS_PER_LONG == 32) && defined(__BIG_ENDIAN)
+void bitmap_from_arr64(unsigned long *bitmap, const u64 *buf, unsigned int nbits);
+void bitmap_to_arr64(u64 *buf, const unsigned long *bitmap, unsigned int nbits);
+#else
+#define bitmap_from_arr64(bitmap, buf, nbits)                  \
+       bitmap_copy_clear_tail((unsigned long *) (bitmap),      \
+                       (const unsigned long *) (buf), (nbits))
+#define bitmap_to_arr64(buf, bitmap, nbits)                    \
+       bitmap_copy_clear_tail((unsigned long *) (buf),         \
+                       (const unsigned long *) (bitmap), (nbits))
+#endif
+
 static inline int bitmap_and(unsigned long *dst, const unsigned long *src1,
                        const unsigned long *src2, unsigned int nbits)
 {
@@ -596,10 +614,7 @@ static inline void bitmap_next_set_region(unsigned long *bitmap,
  */
 static inline void bitmap_from_u64(unsigned long *dst, u64 mask)
 {
-       dst[0] = mask & ULONG_MAX;
-
-       if (sizeof(mask) > sizeof(unsigned long))
-               dst[1] = mask >> 32;
+       bitmap_from_arr64(dst, &mask, 64);
 }
 
 /**
diff --git a/lib/bitmap.c b/lib/bitmap.c
index d9a4480af5b9..aea9493f4216 100644
--- a/lib/bitmap.c
+++ b/lib/bitmap.c
@@ -1533,5 +1533,53 @@ void bitmap_to_arr32(u32 *buf, const unsigned long *bitmap, unsigned int nbits)
                buf[halfwords - 1] &= (u32) (UINT_MAX >> ((-nbits) & 31));
 }
 EXPORT_SYMBOL(bitmap_to_arr32);
+#endif
+
+#if (BITS_PER_LONG == 32) && defined(__BIG_ENDIAN)
+/**
+ * bitmap_from_arr64 - copy the contents of a u64 array of bits to a bitmap
+ * @bitmap: array of unsigned longs, the destination bitmap
+ * @buf: array of u64 (in host byte order), the source bitmap
+ * @nbits: number of bits in @bitmap
+ */
+void bitmap_from_arr64(unsigned long *bitmap, const u64 *buf, unsigned int nbits)
+{
+       int n;
+
+       for (n = nbits; n > 0; n -= 64) {
+               u64 val = *buf++;
+
+               *bitmap++ = (unsigned long)val;
+               if (n > 32)
+                       *bitmap++ = (unsigned long)(val >> 32);
+       }
+       /* Clear tail bits in the last word beyond nbits. */
+       if (nbits % BITS_PER_LONG)
+               bitmap[-1] &= BITMAP_LAST_WORD_MASK(nbits);
+}
+EXPORT_SYMBOL(bitmap_from_arr64);
+
+/**
+ * bitmap_to_arr64 - copy the contents of a bitmap to a u64 array of bits
+ * @buf: array of u64 (in host byte order), the destination bitmap
+ * @bitmap: array of unsigned longs, the source bitmap
+ * @nbits: number of bits in @bitmap
+ */
+void bitmap_to_arr64(u64 *buf, const unsigned long *bitmap, unsigned int nbits)
+{
+       const unsigned long *end = bitmap + BITS_TO_LONGS(nbits);
+
+       while (bitmap < end) {
+               *buf = *bitmap++;
+               if (bitmap < end)
+                       *buf |= (u64)(*bitmap++) << 32;
+               buf++;
+       }
+
+       /* Clear tail bits in the last element of array beyond nbits. */
+       if (nbits % 64)
+               buf[-1] &= GENMASK_ULL((nbits - 1) % 64, 0);
+}
+EXPORT_SYMBOL(bitmap_to_arr64);
 
 #endif
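To see what the 32-bit big-endian special case is about, here is a standalone
sketch (plain C, with uint32_t standing in for a 32-bit unsigned long and
illustrative helper names, not the kernel code) of the word order that
bitmap_from_arr64()/bitmap_to_arr64() have to produce: each u64 is split into
its low 32-bit half first, then its high half, which is the opposite of what a
plain word-by-word copy of the u64 array would give on a big-endian host.
Tail clearing is omitted here for brevity.

#include <stdint.h>
#include <stdio.h>

/* Split each u64 into bitmap words: low half first, then high half. */
static void from_arr64(uint32_t *bitmap, const uint64_t *buf, unsigned int nbits)
{
        unsigned int i, words = (nbits + 31) / 32;

        for (i = 0; i < words; i++)
                bitmap[i] = (uint32_t)(buf[i / 2] >> (i % 2 ? 32 : 0));
}

/* Reassemble u64 elements from the 32-bit bitmap words. */
static void to_arr64(uint64_t *buf, const uint32_t *bitmap, unsigned int nbits)
{
        unsigned int i, words = (nbits + 31) / 32;

        for (i = 0; i < (nbits + 63) / 64; i++)
                buf[i] = 0;
        for (i = 0; i < words; i++)
                buf[i / 2] |= (uint64_t)bitmap[i] << (i % 2 ? 32 : 0);
}

int main(void)
{
        uint64_t feat[2] = { 0x00000001deadbeefULL, 0x8000000000000000ULL };
        uint32_t bits[4];
        uint64_t back[2];

        from_arr64(bits, feat, 128);
        to_arr64(back, bits, 128);

        /* bits[0] holds the low half of feat[0]; a raw copy of the u64 array
         * would only produce that layout on a little-endian host. */
        printf("bits[0]=%08x bits[1]=%08x round-trip ok: %d\n",
               bits[0], bits[1],
               back[0] == feat[0] && back[1] == feat[1]);
        return 0;
}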
From patchwork Wed Apr 20 22:25:29 2022
From: Yury Norov <yury.norov@gmail.com>
To: linux-kernel@vger.kernel.org, Alexander Gordeev, Andy Shevchenko,
    Christian Borntraeger, Claudio Imbrenda, David Hildenbrand,
    Heiko Carstens, Janosch Frank, Rasmus Villemoes, Sven Schnelle,
    Vasily Gorbik, Yury Norov, linux-s390@vger.kernel.org, kvm@vger.kernel.org
Subject: [PATCH 3/4] KVM: s390: replace bitmap_copy with bitmap_{from,to}_arr64 where appropriate
Date: Wed, 20 Apr 2022 15:25:29 -0700
Message-Id: <20220420222530.910125-4-yury.norov@gmail.com>
In-Reply-To: <20220420222530.910125-1-yury.norov@gmail.com>
References: <20220420222530.910125-1-yury.norov@gmail.com>

Copying bitmaps from/to 64-bit arrays with bitmap_copy is not safe in the
general case. Use the designated functions instead.

Signed-off-by: Yury Norov <yury.norov@gmail.com>
Reviewed-by: David Hildenbrand
---
 arch/s390/kvm/kvm-s390.c | 10 +++-------
 1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 156d1c25a3c1..a353bb43ee48 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -1332,8 +1332,7 @@ static int kvm_s390_set_processor_feat(struct kvm *kvm,
                mutex_unlock(&kvm->lock);
                return -EBUSY;
        }
-       bitmap_copy(kvm->arch.cpu_feat, (unsigned long *) data.feat,
-                   KVM_S390_VM_CPU_FEAT_NR_BITS);
+       bitmap_from_arr64(kvm->arch.cpu_feat, data.feat, KVM_S390_VM_CPU_FEAT_NR_BITS);
        mutex_unlock(&kvm->lock);
        VM_EVENT(kvm, 3, "SET: guest feat: 0x%16.16llx.0x%16.16llx.0x%16.16llx",
                 data.feat[0],
@@ -1504,8 +1503,7 @@ static int kvm_s390_get_processor_feat(struct kvm *kvm,
 {
        struct kvm_s390_vm_cpu_feat data;
 
-       bitmap_copy((unsigned long *) data.feat, kvm->arch.cpu_feat,
-                   KVM_S390_VM_CPU_FEAT_NR_BITS);
+       bitmap_to_arr64(data.feat, kvm->arch.cpu_feat, KVM_S390_VM_CPU_FEAT_NR_BITS);
        if (copy_to_user((void __user *)attr->addr, &data, sizeof(data)))
                return -EFAULT;
        VM_EVENT(kvm, 3, "GET: guest feat: 0x%16.16llx.0x%16.16llx.0x%16.16llx",
@@ -1520,9 +1518,7 @@ static int kvm_s390_get_machine_feat(struct kvm *kvm,
 {
        struct kvm_s390_vm_cpu_feat data;
 
-       bitmap_copy((unsigned long *) data.feat,
-                   kvm_s390_available_cpu_feat,
-                   KVM_S390_VM_CPU_FEAT_NR_BITS);
+       bitmap_to_arr64(data.feat, kvm_s390_available_cpu_feat, KVM_S390_VM_CPU_FEAT_NR_BITS);
        if (copy_to_user((void __user *)attr->addr, &data, sizeof(data)))
                return -EFAULT;
        VM_EVENT(kvm, 3, "GET: host feat: 0x%16.16llx.0x%16.16llx.0x%16.16llx",
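The structures above are copied to and from userspace as u64 arrays, so the
conversion also has to keep bits beyond the advertised bit count from leaking
into the last u64 element. A standalone sketch of that tail-clearing behaviour
follows (plain C; the bit count, array size, and helper name are made up for
illustration and are not the real KVM_S390_VM_CPU_FEAT definitions).

#include <stdint.h>
#include <stdio.h>

#define NR_BITS 100U    /* hypothetical count, deliberately not a multiple of 64 */

/* 64-bit-host flavour of the conversion: copy words, then clear the tail. */
static void to_arr64(uint64_t *buf, const uint64_t *bitmap, unsigned int nbits)
{
        unsigned int i, words = (nbits + 63) / 64;

        for (i = 0; i < words; i++)
                buf[i] = bitmap[i];
        if (nbits % 64)         /* drop bits beyond nbits in the last element */
                buf[words - 1] &= ~0ULL >> (64 - nbits % 64);
}

int main(void)
{
        /* Internal bitmap with stray bits set above bit NR_BITS - 1. */
        uint64_t internal[2] = { ~0ULL, ~0ULL };
        uint64_t out[2];

        to_arr64(out, internal, NR_BITS);
        printf("out[1] = %016llx (only bits 64..%u survive)\n",
               (unsigned long long)out[1], NR_BITS - 1);
        return 0;
}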
From patchwork Wed Apr 20 22:41:28 2022
From: Yury Norov <yury.norov@gmail.com>
To: linux-kernel@vger.kernel.org, Alexander Gordeev, Andy Shevchenko,
    Christian Borntraeger, Claudio Imbrenda, David Hildenbrand,
    Heiko Carstens, Janosch Frank, Rasmus Villemoes, Sven Schnelle,
    Vasily Gorbik, Yury Norov, linux-s390@vger.kernel.org, kvm@vger.kernel.org,
    Evan Quan, Alex Deucher, Christian König, Pan Xinhui, David Airlie,
    Daniel Vetter, Lijo Lazar, Guchun Chen, Chengming Gui, Darren Powell,
    Luben Tuikov, Mario Limonciello, Kevin Wang, Xiaomeng Hou, Prike Liang,
    Yifan Zhang, Tao Zhou, amd-gfx@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org
Subject: [PATCH 4/4] drm/amd/pm: use bitmap_{from,to}_arr32 where appropriate
Date: Wed, 20 Apr 2022 15:41:28 -0700
Message-Id: <20220420224128.911759-1-yury.norov@gmail.com>
In-Reply-To: <20220420222530.910125-1-yury.norov@gmail.com>

The smu_v1X_0_set_allowed_mask() functions use bitmap_copy() to convert a
bitmap to a 32-bit array. This may be wrong due to endianness issues. Fix
it by switching to bitmap_{from,to}_arr32.
Signed-off-by: Yury Norov <yury.norov@gmail.com>
---
 drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c | 2 +-
 drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
index b87f550af26b..5f8809f6990d 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
@@ -781,7 +781,7 @@ int smu_v11_0_set_allowed_mask(struct smu_context *smu)
                goto failed;
        }
 
-       bitmap_copy((unsigned long *)feature_mask, feature->allowed, 64);
+       bitmap_to_arr32(feature_mask, feature->allowed, 64);
 
        ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetAllowedFeaturesMaskHigh,
                                              feature_mask[1], NULL);
diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
index cf09e30bdfe0..747430ce6394 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
@@ -730,7 +730,7 @@ int smu_v13_0_set_allowed_mask(struct smu_context *smu)
            feature->feature_num < 64)
                return -EINVAL;
 
-       bitmap_copy((unsigned long *)feature_mask, feature->allowed, 64);
+       bitmap_to_arr32(feature_mask, feature->allowed, 64);
 
        ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetAllowedFeaturesMaskHigh,
                                              feature_mask[1], NULL);
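For reference, a standalone sketch of the split bitmap_to_arr32() performs
here (plain C, illustrative helper name and feature bits): the 64-bit
allowed-features bitmap becomes two u32 words, with feature_mask[1] carrying
bits 32..63 -- the value the hunks above pass along with
SMU_MSG_SetAllowedFeaturesMaskHigh -- independent of host endianness, which is
what the raw bitmap_copy() cast could not guarantee.

#include <stdint.h>
#include <stdio.h>

/* Split a 64-bit bitmap word into u32 words: index 0 = bits 0..31, 1 = bits 32..63. */
static void to_arr32(uint32_t *buf, const uint64_t *bitmap, unsigned int nbits)
{
        unsigned int i;

        for (i = 0; i < (nbits + 31) / 32; i++)
                buf[i] = (uint32_t)(bitmap[i / 2] >> (i % 2 ? 32 : 0));
}

int main(void)
{
        uint64_t allowed = 0x00000003c0000001ULL;       /* example feature bitmap */
        uint32_t feature_mask[2];

        to_arr32(feature_mask, &allowed, 64);

        /* feature_mask[1] is the high word sent as the "MaskHigh" parameter. */
        printf("high = %08x, low = %08x\n", feature_mask[1], feature_mask[0]);
        return 0;
}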