From patchwork Thu Apr 30 20:38:43 2020
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 11521561
From: ira.weiny@intel.com
To: linux-kernel@vger.kernel.org, Andrew Morton, Christian Koenig, Huang Rui
Subject: [PATCH V1 08/10] arch/kmap: Don't hard code kmap_prot values
Date: Thu, 30 Apr 2020 13:38:43 -0700
Message-Id: <20200430203845.582900-9-ira.weiny@intel.com>
In-Reply-To: <20200430203845.582900-1-ira.weiny@intel.com>
References: <20200430203845.582900-1-ira.weiny@intel.com>
Cc: Peter Zijlstra, Benjamin Herrenschmidt, Dave Hansen,
    dri-devel@lists.freedesktop.org, "James E.J. Bottomley", Max Filippov,
    Paul Mackerras, "H. Peter Anvin", sparclinux@vger.kernel.org, Ira Weiny,
    Thomas Gleixner, Helge Deller, x86@kernel.org, linux-csky@vger.kernel.org,
    Ingo Molnar, linux-snps-arc@lists.infradead.org,
    linux-xtensa@linux-xtensa.org, Borislav Petkov, Andy Lutomirski,
    Dan Williams, linux-arm-kernel@lists.infradead.org, Chris Zankel,
    Thomas Bogendoerfer, linux-parisc@vger.kernel.org,
    linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    "David S. Miller"

From: Ira Weiny

To support kmap_atomic_prot() on all architectures, each arch must honor
the protection value passed to it.

Change csky, mips, nds32 and xtensa to use their global kmap_prot value
rather than a hard-coded value that was equal to it. (See the illustrative
usage sketch after the diff.)

Signed-off-by: Ira Weiny
Reviewed-by: Christoph Hellwig
---
 arch/csky/mm/highmem.c   | 2 +-
 arch/mips/mm/highmem.c   | 2 +-
 arch/nds32/mm/highmem.c  | 2 +-
 arch/xtensa/mm/highmem.c | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/csky/mm/highmem.c b/arch/csky/mm/highmem.c
index 0aafbbbe651c..f4311669b5bb 100644
--- a/arch/csky/mm/highmem.c
+++ b/arch/csky/mm/highmem.c
@@ -32,7 +32,7 @@ void *kmap_atomic_high(struct page *page)
 #ifdef CONFIG_DEBUG_HIGHMEM
 	BUG_ON(!pte_none(*(kmap_pte - idx)));
 #endif
-	set_pte(kmap_pte-idx, mk_pte(page, PAGE_KERNEL));
+	set_pte(kmap_pte-idx, mk_pte(page, kmap_prot));
 	flush_tlb_one((unsigned long)vaddr);
 
 	return (void *)vaddr;
diff --git a/arch/mips/mm/highmem.c b/arch/mips/mm/highmem.c
index 155fbb107b35..87023bd1a33c 100644
--- a/arch/mips/mm/highmem.c
+++ b/arch/mips/mm/highmem.c
@@ -29,7 +29,7 @@ void *kmap_atomic_high(struct page *page)
 #ifdef CONFIG_DEBUG_HIGHMEM
 	BUG_ON(!pte_none(*(kmap_pte - idx)));
 #endif
-	set_pte(kmap_pte-idx, mk_pte(page, PAGE_KERNEL));
+	set_pte(kmap_pte-idx, mk_pte(page, kmap_prot));
 	local_flush_tlb_one((unsigned long)vaddr);
 
 	return (void*) vaddr;
diff --git a/arch/nds32/mm/highmem.c b/arch/nds32/mm/highmem.c
index f6e6915c0d31..809f8c830f06 100644
--- a/arch/nds32/mm/highmem.c
+++ b/arch/nds32/mm/highmem.c
@@ -21,7 +21,7 @@ void *kmap_atomic_high(struct page *page)
 
 	idx = type + KM_TYPE_NR * smp_processor_id();
 	vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
-	pte = (page_to_pfn(page) << PAGE_SHIFT) | (PAGE_KERNEL);
+	pte = (page_to_pfn(page) << PAGE_SHIFT) | (kmap_prot);
 	ptep = pte_offset_kernel(pmd_off_k(vaddr), vaddr);
 	set_pte(ptep, pte);
 
diff --git a/arch/xtensa/mm/highmem.c b/arch/xtensa/mm/highmem.c
index f57a7770eb08..8c58c4c37033 100644
--- a/arch/xtensa/mm/highmem.c
+++ b/arch/xtensa/mm/highmem.c
@@ -48,7 +48,7 @@ void *kmap_atomic_high(struct page *page)
 #ifdef CONFIG_DEBUG_HIGHMEM
 	BUG_ON(!pte_none(*(kmap_pte + idx)));
 #endif
-	set_pte(kmap_pte + idx, mk_pte(page, PAGE_KERNEL_EXEC));
+	set_pte(kmap_pte + idx, mk_pte(page, kmap_prot));
 
 	return (void *)vaddr;
 }
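
For context, here is a minimal, hypothetical sketch (not part of this patch)
of what a kmap_atomic_prot() caller looks like once the protection argument
is honored everywhere. The helper name copy_ro_from_page() and the choice of
PAGE_KERNEL_RO are illustrative assumptions only, not code from this series:

/*
 * Illustrative sketch only -- not part of this patch.  Once every
 * architecture honors the protection argument, code can map a highmem
 * page with a caller-chosen pgprot_t instead of the default kmap_prot.
 * The helper name and the PAGE_KERNEL_RO choice are hypothetical.
 */
#include <linux/highmem.h>
#include <linux/string.h>

static void copy_ro_from_page(struct page *page, void *dst, size_t len)
{
	/* Map atomically with an explicit (read-only) protection. */
	void *src = kmap_atomic_prot(page, PAGE_KERNEL_RO);

	memcpy(dst, src, len);

	/* Unmap; must pair with the kmap_atomic_prot() above. */
	kunmap_atomic(src);
}

The per-arch change above is what makes such a caller portable: the default
path keeps using kmap_prot, so behavior is unchanged where the two values
were already equal.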