From patchwork Sat May 15 00:27:24 2021
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 12259321
Date: Fri, 14 May 2021 17:27:24 -0700
From: Andrew Morton
To: akpm@linux-foundation.org, brouer@redhat.com, ilias.apalodimas@linaro.org,
 linux-mm@kvack.org, mcroce@linux.microsoft.com, mm-commits@vger.kernel.org,
 stable@vger.kernel.org, torvalds@linux-foundation.org, vbabka@suse.cz,
 willy@infradead.org
Subject: [patch 08/13] mm: fix struct page layout on 32-bit systems
Message-ID: <20210515002724.4C8uJfcjY%akpm@linux-foundation.org>
In-Reply-To: <20210514172634.9018621171d5334ceee97e95@linux-foundation.org>
From: "Matthew Wilcox (Oracle)"
Subject: mm: fix struct page layout on 32-bit systems

32-bit architectures which expect 8-byte alignment for 8-byte integers
and need 64-bit DMA addresses (arm, mips, ppc) had their struct page
inadvertently expanded in 2019.  When the dma_addr_t was added, it forced
the alignment of the union to 8 bytes, which inserted a 4-byte gap
between 'flags' and the union.

Fix this by storing the dma_addr_t in one or two adjacent unsigned longs.
This restores the alignment to that of an unsigned long.  We always store
the low bits in the first word to prevent the PageTail bit from being
inadvertently set on a big-endian platform.  If that happened,
get_user_pages_fast() racing against a page which was freed and
reallocated to the page_pool could dereference a bogus compound_head(),
which would be hard to trace back to this cause.

Link: https://lkml.kernel.org/r/20210510153211.1504886-1-willy@infradead.org
Fixes: c25fff7171be ("mm: add dma_addr_t to struct page")
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Ilias Apalodimas
Acked-by: Jesper Dangaard Brouer
Acked-by: Vlastimil Babka
Tested-by: Matteo Croce
Cc:
Signed-off-by: Andrew Morton

---
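A note on the accessor trick used below, as a minimal standalone userspace
sketch rather than kernel code: the dma_addr_t typedef, fake_page,
set_dma() and get_dma() here are made-up stand-ins modelling only the
two-word scheme.  The low half of the address always lives in dma_addr[0]
and the high half, when one exists, in dma_addr[1].  The reassembly shifts
by 16 twice because the branch is compiled even when it is dead code: if
dma_addr_t were only 32 bits wide, a single "<< 32" would exceed the width
of the type and draw a compiler warning.

#include <stdint.h>
#include <stdio.h>

typedef uint64_t dma_addr_t;		/* 64-bit DMA addresses */

/* Models only the two-word field this patch adds to struct page. */
struct fake_page {
	unsigned long dma_addr[2];	/* low bits first */
};

static void set_dma(struct fake_page *p, dma_addr_t addr)
{
	p->dma_addr[0] = (unsigned long)addr;	/* low bits in word 0 */
	if (sizeof(dma_addr_t) > sizeof(unsigned long))
		p->dma_addr[1] = (unsigned long)(addr >> 16 >> 16);
}

static dma_addr_t get_dma(const struct fake_page *p)
{
	dma_addr_t ret = p->dma_addr[0];

	/* Dead code when unsigned long is 64-bit, but still compiled;
	 * the double 16-bit shift keeps the shift count legal even if
	 * dma_addr_t were a 32-bit type. */
	if (sizeof(dma_addr_t) > sizeof(unsigned long))
		ret |= (dma_addr_t)p->dma_addr[1] << 16 << 16;
	return ret;
}

int main(void)
{
	struct fake_page page;

	set_dma(&page, 0x123456789ULL);	/* above 4GB, needs both words */
	printf("round trip: %#llx\n", (unsigned long long)get_dma(&page));
	return 0;
}

On a 64-bit build both branches are dead and dma_addr[0] carries the whole
address; on a 32-bit build with a 64-bit dma_addr_t the value round-trips
through the two words.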
 include/linux/mm_types.h |    4 ++--
 include/net/page_pool.h  |   12 +++++++++++-
 net/core/page_pool.c     |   12 +++++++-----
 3 files changed, 20 insertions(+), 8 deletions(-)

--- a/include/linux/mm_types.h~mm-fix-struct-page-layout-on-32-bit-systems
+++ a/include/linux/mm_types.h
@@ -97,10 +97,10 @@ struct page {
 		};
 		struct {	/* page_pool used by netstack */
 			/**
-			 * @dma_addr: might require a 64-bit value even on
+			 * @dma_addr: might require a 64-bit value on
 			 * 32-bit architectures.
 			 */
-			dma_addr_t dma_addr;
+			unsigned long dma_addr[2];
 		};
 		struct {	/* slab, slob and slub */
 			union {
--- a/include/net/page_pool.h~mm-fix-struct-page-layout-on-32-bit-systems
+++ a/include/net/page_pool.h
@@ -198,7 +198,17 @@ static inline void page_pool_recycle_dir
 
 static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
 {
-	return page->dma_addr;
+	dma_addr_t ret = page->dma_addr[0];
+	if (sizeof(dma_addr_t) > sizeof(unsigned long))
+		ret |= (dma_addr_t)page->dma_addr[1] << 16 << 16;
+	return ret;
+}
+
+static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
+{
+	page->dma_addr[0] = addr;
+	if (sizeof(dma_addr_t) > sizeof(unsigned long))
+		page->dma_addr[1] = upper_32_bits(addr);
 }
 
 static inline bool is_page_pool_compiled_in(void)
--- a/net/core/page_pool.c~mm-fix-struct-page-layout-on-32-bit-systems
+++ a/net/core/page_pool.c
@@ -174,8 +174,10 @@ static void page_pool_dma_sync_for_devic
 					  struct page *page,
 					  unsigned int dma_sync_size)
 {
+	dma_addr_t dma_addr = page_pool_get_dma_addr(page);
+
 	dma_sync_size = min(dma_sync_size, pool->p.max_len);
-	dma_sync_single_range_for_device(pool->p.dev, page->dma_addr,
+	dma_sync_single_range_for_device(pool->p.dev, dma_addr,
 					 pool->p.offset, dma_sync_size,
 					 pool->p.dma_dir);
 }
@@ -195,7 +197,7 @@ static bool page_pool_dma_map(struct pag
 	if (dma_mapping_error(pool->p.dev, dma))
 		return false;
 
-	page->dma_addr = dma;
+	page_pool_set_dma_addr(page, dma);
 
 	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
 		page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
@@ -331,13 +333,13 @@ void page_pool_release_page(struct page_
 		 */
 		goto skip_dma_unmap;
 
-	dma = page->dma_addr;
+	dma = page_pool_get_dma_addr(page);
 
-	/* When page is unmapped, it cannot be returned our pool */
+	/* When page is unmapped, it cannot be returned to our pool */
 	dma_unmap_page_attrs(pool->p.dev, dma, PAGE_SIZE << pool->p.order,
 			     pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
 
-	page->dma_addr = 0;
+	page_pool_set_dma_addr(page, 0);
 skip_dma_unmap:
 	/* This may be the last page returned, releasing the pool, so
 	 * it is not safe to reference pool afterwards.
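Why must the low bits always occupy dma_addr[0]?  That word overlays
compound_head in the struct page union, and bit 0 of compound_head is the
PageTail flag.  page_pool addresses are page-aligned (the pool maps whole
pages), so bit 0 of the low half is guaranteed clear, whereas bit 32 of a
64-bit DMA address can easily be set.  A raw 64-bit store would place the
high half in the first word on a big-endian platform, which is exactly
the hazard the commit message describes.  A standalone sketch of the
difference (the address value is made up):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* A made-up, page-aligned DMA address just above 4GB: bit 32 set. */
	uint64_t dma = 0x100000000ULL;

	/* The patch's rule: low half in word 0, regardless of endianness. */
	uint32_t low_first[2] = { (uint32_t)dma, (uint32_t)(dma >> 32) };
	printf("low-first: word0 bit0 = %u (PageTail stays clear)\n",
	       (unsigned)(low_first[0] & 1));

	/* What a raw 64-bit overlay would do on big-endian: high half in
	 * word 0, so bit 32 of the address lands on the PageTail bit. */
	uint32_t be_overlay_word0 = (uint32_t)(dma >> 32);
	printf("big-endian overlay: word0 bit0 = %u (bogus PageTail)\n",
	       (unsigned)(be_overlay_word0 & 1));
	return 0;
}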