From patchwork Wed Nov 29 20:21:40 2023
X-Patchwork-Submitter: Shiraz Saleem
X-Patchwork-Id: 13473416
From: Shiraz Saleem
To: jgg@nvidia.com, leon@kernel.org
Cc: linux-rdma@vger.kernel.org, Shiraz Saleem
Subject: [PATCH v2 for-rc 0/3] Fixes for 64K page size support
Date: Wed, 29 Nov 2023 14:21:40 -0600
Message-Id: <20231129202143.1434-1-shiraz.saleem@intel.com>
X-Mailer: git-send-email 2.39.0
X-Mailing-List: linux-rdma@vger.kernel.org

This is a three patch series. The first patch corrects the core umem
block iterator to use __sg_advance to skip the preceding 4k HCA pages
within a scatterlist entry. The second patch fixes an iWarp issue where
the SQ queue memory must be PAGE_SIZE aligned. The third patch fixes the
irdma driver's use of ib_umem_find_best_pgsz(): QP and CQ allocations
pass PAGE_SIZE as the only bit set in the page-size bitmap, which is
incorrect on 64K-page systems; they should use the precise 4k value.

v1->v2:
- Add a umem specific block iter next function

Mike Marciniszyn (3):
  RDMA/core: Fix umem iterator when PAGE_SIZE is greater then HCA pgsz
  RDMA/irdma: Ensure iWarp QP queue memory is OS paged aligned
  RDMA/irdma: Fix support for 64k pages

 drivers/infiniband/core/umem.c      | 6 ------
 drivers/infiniband/hw/irdma/verbs.c | 7 ++++++-
 include/rdma/ib_umem.h              | 9 ++++++++-
 include/rdma/ib_verbs.h             | 1 +
 4 files changed, 15 insertions(+), 8 deletions(-)
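To see why the first patch's iterator fix matters, consider a minimal
standalone model (illustrative names only, not the kernel's
implementation): with 64K system pages and 4K HCA pages, one scatterlist
entry spans sixteen HCA-sized blocks, and the umem may begin partway
into it. The iterator must skip the HCA blocks that precede the umem
start rather than report them as mapped.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of the core iterator fix: compute how many
 * HCA-sized blocks precede the umem start within its first system
 * page and therefore must be skipped.  This mirrors the idea behind
 * advancing with __sg_advance, but is not the kernel code itself. */
static unsigned int blocks_to_skip(uint64_t umem_addr,
                                   uint64_t sys_pgsz, uint64_t hca_pgsz)
{
	/* Offset of the umem start within its first system page... */
	uint64_t sg_offset = umem_addr & (sys_pgsz - 1);

	/* ...expressed as the count of whole HCA pages before it. */
	return (unsigned int)(sg_offset / hca_pgsz);
}
```

For example, a umem starting at offset 0x3000 into a 64K page must skip
three 4K blocks; without the skip, the iterator would hand the HCA three
pages of memory that do not belong to the registration.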
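The third patch's bitmap issue can likewise be sketched with a small
standalone model of the selection logic (an assumption-laden
illustration, not the actual ib_umem_find_best_pgsz() source): the
caller passes a bitmap with one bit per supported page size, and the
largest size compatible with the buffer's address and length wins.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative model: pick the largest page size from pgsz_bitmap
 * (one bit set per supported size) that is compatible with the
 * buffer's start address and length.  Returns 0 if nothing fits,
 * which is how passing only a 64K PAGE_SIZE bit can fail when the
 * HCA really works in 4k pages. */
static uint64_t best_pgsz(uint64_t pgsz_bitmap, uint64_t va, uint64_t len)
{
	/* A usable page size must divide both the start and the length;
	 * the lowest set bit of (va | len) caps the usable sizes. */
	uint64_t mask = va | len;
	uint64_t low = mask & -mask;

	/* Keep only bitmap bits at or below that cap. */
	pgsz_bitmap &= low | (low - 1);
	if (!pgsz_bitmap)
		return 0;

	/* Highest remaining bit is the largest usable page size. */
	while (pgsz_bitmap & (pgsz_bitmap - 1))
		pgsz_bitmap &= pgsz_bitmap - 1;
	return pgsz_bitmap;
}
```

With a 4K-aligned queue buffer, a bitmap of only the 64K bit yields 0
(no usable size), while a bitmap containing the 4k bit succeeds; this
is the shape of the bug the cover letter describes for QP and CQ
allocations.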