From patchwork Fri Jul 1 16:55:26 2011
X-Patchwork-Submitter: Per Forlin
X-Patchwork-Id: 936852
From: Per Forlin
To: linaro-dev@lists.linaro.org, Nicolas Pitre,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	linux-mmc@vger.kernel.org, Venkatraman S, Linus Walleij,
	Kyungmin Park, Arnd Bergmann, Sourav Poddar, Chris Ball
Cc: Per Forlin
Subject: [PATCH v9 05/12] mmc: mmc_test: add test for non-blocking transfers
Date: Fri, 1 Jul 2011 18:55:26 +0200
Message-Id: <1309539333-2606-6-git-send-email-per.forlin@linaro.org>
In-Reply-To: <1309539333-2606-1-git-send-email-per.forlin@linaro.org>
References: <1309539333-2606-1-git-send-email-per.forlin@linaro.org>

Add four tests for read and write performance across different
transfer sizes, 4k to 4M:

 * Read using blocking mmc request
 * Read using non-blocking mmc request
 * Write using blocking mmc request
 * Write using non-blocking mmc request

The host driver must support pre_req() and post_req() in order to run
the non-blocking test cases.
Signed-off-by: Per Forlin
Acked-by: Kyungmin Park
Acked-by: Arnd Bergmann
Reviewed-by: Venkatraman S
Tested-by: Sourav Poddar
Tested-by: Linus Walleij
---
 drivers/mmc/card/mmc_test.c |  318 +++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 310 insertions(+), 8 deletions(-)

diff --git a/drivers/mmc/card/mmc_test.c b/drivers/mmc/card/mmc_test.c
index e8508e9..51031a5 100644
--- a/drivers/mmc/card/mmc_test.c
+++ b/drivers/mmc/card/mmc_test.c
@@ -148,6 +148,26 @@ struct mmc_test_card {
 	struct mmc_test_general_result	*gr;
 };
 
+enum mmc_test_prep_media {
+	MMC_TEST_PREP_NONE = 0,
+	MMC_TEST_PREP_WRITE_FULL = 1 << 0,
+	MMC_TEST_PREP_ERASE = 1 << 1,
+};
+
+struct mmc_test_multiple_rw {
+	unsigned int *bs;
+	unsigned int len;
+	unsigned int size;
+	bool do_write;
+	bool do_nonblock_req;
+	enum mmc_test_prep_media prepare;
+};
+
+struct mmc_test_async_req {
+	struct mmc_async_req areq;
+	struct mmc_test_card *test;
+};
+
 /*******************************************************************/
 /*  General helper functions                                       */
 /*******************************************************************/
@@ -661,7 +681,7 @@ static void mmc_test_prepare_broken_mrq(struct mmc_test_card *test,
  * Checks that a normal transfer didn't have any errors
  */
 static int mmc_test_check_result(struct mmc_test_card *test,
-	struct mmc_request *mrq)
+				 struct mmc_request *mrq)
 {
 	int ret;
 
@@ -685,6 +705,17 @@ static int mmc_test_check_result(struct mmc_test_card *test,
 	return ret;
 }
 
+static int mmc_test_check_result_async(struct mmc_card *card,
+				       struct mmc_async_req *areq)
+{
+	struct mmc_test_async_req *test_async =
+		container_of(areq, struct mmc_test_async_req, areq);
+
+	mmc_test_wait_busy(test_async->test);
+
+	return mmc_test_check_result(test_async->test, areq->mrq);
+}
+
 /*
  * Checks that a "short transfer" behaved as expected
  */
@@ -720,6 +751,85 @@ static int mmc_test_check_broken_result(struct mmc_test_card *test,
 }
 
 /*
+ * Tests nonblock transfer with certain parameters
+ */
+static void mmc_test_nonblock_reset(struct mmc_request *mrq,
+				    struct mmc_command *cmd,
+				    struct mmc_command *stop,
+				    struct mmc_data *data)
+{
+	memset(mrq, 0, sizeof(struct mmc_request));
+	memset(cmd, 0, sizeof(struct mmc_command));
+	memset(data, 0, sizeof(struct mmc_data));
+	memset(stop, 0, sizeof(struct mmc_command));
+
+	mrq->cmd = cmd;
+	mrq->data = data;
+	mrq->stop = stop;
+}
+static int mmc_test_nonblock_transfer(struct mmc_test_card *test,
+				      struct scatterlist *sg, unsigned sg_len,
+				      unsigned dev_addr, unsigned blocks,
+				      unsigned blksz, int write, int count)
+{
+	struct mmc_request mrq1;
+	struct mmc_command cmd1;
+	struct mmc_command stop1;
+	struct mmc_data data1;
+
+	struct mmc_request mrq2;
+	struct mmc_command cmd2;
+	struct mmc_command stop2;
+	struct mmc_data data2;
+
+	struct mmc_test_async_req test_areq[2];
+	struct mmc_async_req *done_areq;
+	struct mmc_async_req *cur_areq = &test_areq[0].areq;
+	struct mmc_async_req *other_areq = &test_areq[1].areq;
+	int i;
+	int ret;
+
+	test_areq[0].test = test;
+	test_areq[1].test = test;
+
+	mmc_test_nonblock_reset(&mrq1, &cmd1, &stop1, &data1);
+	mmc_test_nonblock_reset(&mrq2, &cmd2, &stop2, &data2);
+
+	cur_areq->mrq = &mrq1;
+	cur_areq->err_check = mmc_test_check_result_async;
+	other_areq->mrq = &mrq2;
+	other_areq->err_check = mmc_test_check_result_async;
+
+	for (i = 0; i < count; i++) {
+		mmc_test_prepare_mrq(test, cur_areq->mrq, sg, sg_len, dev_addr,
+				     blocks, blksz, write);
+		done_areq = mmc_start_req(test->card->host, cur_areq, &ret);
+
+		if (ret || (!done_areq && i > 0))
+			goto err;
+
+		if (done_areq) {
+			if (done_areq->mrq == &mrq2)
+				mmc_test_nonblock_reset(&mrq2, &cmd2,
+							&stop2, &data2);
+			else
+				mmc_test_nonblock_reset(&mrq1, &cmd1,
+							&stop1, &data1);
+		}
+		done_areq = cur_areq;
+		cur_areq = other_areq;
+		other_areq = done_areq;
+		dev_addr += blocks;
+	}
+
+	done_areq = mmc_start_req(test->card->host, NULL, &ret);
+
+	return ret;
+err:
+	return ret;
+}
+
+/*
  * Tests a basic transfer with certain parameters
  */
 static int mmc_test_simple_transfer(struct mmc_test_card *test,
@@ -1336,14 +1446,17 @@ static int mmc_test_area_transfer(struct mmc_test_card *test,
 }
 
 /*
- * Map and transfer bytes.
+ * Map and transfer bytes for multiple transfers.
  */
-static int mmc_test_area_io(struct mmc_test_card *test, unsigned long sz,
-			    unsigned int dev_addr, int write, int max_scatter,
-			    int timed)
+static int mmc_test_area_io_seq(struct mmc_test_card *test, unsigned long sz,
+				unsigned int dev_addr, int write,
+				int max_scatter, int timed, int count,
+				bool nonblock)
 {
 	struct timespec ts1, ts2;
-	int ret;
+	int ret = 0;
+	int i;
+	struct mmc_test_area *t = &test->area;
 
 	/*
 	 * In the case of a maximally scattered transfer, the maximum transfer
@@ -1367,8 +1480,15 @@ static int mmc_test_area_io(struct mmc_test_card *test, unsigned long sz,
 
 	if (timed)
 		getnstimeofday(&ts1);
+	if (nonblock)
+		ret = mmc_test_nonblock_transfer(test, t->sg, t->sg_len,
+				 dev_addr, t->blocks, 512, write, count);
+	else
+		for (i = 0; i < count && ret == 0; i++) {
+			ret = mmc_test_area_transfer(test, dev_addr, write);
+			dev_addr += sz >> 9;
+		}
 
-	ret = mmc_test_area_transfer(test, dev_addr, write);
 	if (ret)
 		return ret;
 
@@ -1376,11 +1496,19 @@ static int mmc_test_area_io(struct mmc_test_card *test, unsigned long sz,
 		getnstimeofday(&ts2);
 
 	if (timed)
-		mmc_test_print_rate(test, sz, &ts1, &ts2);
+		mmc_test_print_avg_rate(test, sz, count, &ts1, &ts2);
 
 	return 0;
 }
 
+static int mmc_test_area_io(struct mmc_test_card *test, unsigned long sz,
+			    unsigned int dev_addr, int write, int max_scatter,
+			    int timed)
+{
+	return mmc_test_area_io_seq(test, sz, dev_addr, write, max_scatter,
+				    timed, 1, false);
+}
+
 /*
  * Write the test area entirely.
  */
@@ -1954,6 +2082,153 @@ static int mmc_test_large_seq_write_perf(struct mmc_test_card *test)
 	return mmc_test_large_seq_perf(test, 1);
 }
 
+static int mmc_test_rw_multiple(struct mmc_test_card *test,
+				struct mmc_test_multiple_rw *tdata,
+				unsigned int reqsize, unsigned int size)
+{
+	unsigned int dev_addr;
+	struct mmc_test_area *t = &test->area;
+	int ret = 0;
+
+	/* Set up test area */
+	if (size > mmc_test_capacity(test->card) / 2 * 512)
+		size = mmc_test_capacity(test->card) / 2 * 512;
+	if (reqsize > t->max_tfr)
+		reqsize = t->max_tfr;
+	dev_addr = mmc_test_capacity(test->card) / 4;
+	if ((dev_addr & 0xffff0000))
+		dev_addr &= 0xffff0000; /* Round to 64MiB boundary */
+	else
+		dev_addr &= 0xfffff800; /* Round to 1MiB boundary */
+	if (!dev_addr)
+		goto err;
+
+	if (reqsize > size)
+		return 0;
+
+	/* prepare test area */
+	if (mmc_can_erase(test->card) &&
+	    tdata->prepare & MMC_TEST_PREP_ERASE) {
+		ret = mmc_erase(test->card, dev_addr,
+				size / 512, MMC_SECURE_ERASE_ARG);
+		if (ret)
+			ret = mmc_erase(test->card, dev_addr,
+					size / 512, MMC_ERASE_ARG);
+		if (ret)
+			goto err;
+	}
+
+	/* Run test */
+	ret = mmc_test_area_io_seq(test, reqsize, dev_addr,
+				   tdata->do_write, 0, 1, size / reqsize,
+				   tdata->do_nonblock_req);
+	if (ret)
+		goto err;
+
+	return ret;
+ err:
+	printk(KERN_INFO "[%s] error\n", __func__);
+	return ret;
+}
+
+static int mmc_test_rw_multiple_size(struct mmc_test_card *test,
+				     struct mmc_test_multiple_rw *rw)
+{
+	int ret = 0;
+	int i;
+	void *pre_req = test->card->host->ops->pre_req;
+	void *post_req = test->card->host->ops->post_req;
+
+	if (rw->do_nonblock_req &&
+	    ((!pre_req && post_req) || (pre_req && !post_req))) {
+		printk(KERN_INFO "error: only one of pre/post is defined\n");
+		return -EINVAL;
+	}
+
+	for (i = 0 ; i < rw->len && ret == 0; i++) {
+		ret = mmc_test_rw_multiple(test, rw, rw->bs[i], rw->size);
+		if (ret)
+			break;
+	}
+	return ret;
+}
+
+/*
+ * Multiple blocking write 4k to 4 MB chunks
+ */
+static int mmc_test_profile_mult_write_blocking_perf(struct mmc_test_card *test)
+{
+	unsigned int bs[] = {1 << 12, 1 << 13, 1 << 14, 1 << 15, 1 << 16,
+			     1 << 17, 1 << 18, 1 << 19, 1 << 20, 1 << 22};
+	struct mmc_test_multiple_rw test_data = {
+		.bs = bs,
+		.size = TEST_AREA_MAX_SIZE,
+		.len = ARRAY_SIZE(bs),
+		.do_write = true,
+		.do_nonblock_req = false,
+		.prepare = MMC_TEST_PREP_ERASE,
+	};
+
+	return mmc_test_rw_multiple_size(test, &test_data);
+};
+
+/*
+ * Multiple non-blocking write 4k to 4 MB chunks
+ */
+static int mmc_test_profile_mult_write_nonblock_perf(struct mmc_test_card *test)
+{
+	unsigned int bs[] = {1 << 12, 1 << 13, 1 << 14, 1 << 15, 1 << 16,
+			     1 << 17, 1 << 18, 1 << 19, 1 << 20, 1 << 22};
+	struct mmc_test_multiple_rw test_data = {
+		.bs = bs,
+		.size = TEST_AREA_MAX_SIZE,
+		.len = ARRAY_SIZE(bs),
+		.do_write = true,
+		.do_nonblock_req = true,
+		.prepare = MMC_TEST_PREP_ERASE,
+	};
+
+	return mmc_test_rw_multiple_size(test, &test_data);
+}
+
+/*
+ * Multiple blocking read 4k to 4 MB chunks
+ */
+static int mmc_test_profile_mult_read_blocking_perf(struct mmc_test_card *test)
+{
+	unsigned int bs[] = {1 << 12, 1 << 13, 1 << 14, 1 << 15, 1 << 16,
+			     1 << 17, 1 << 18, 1 << 19, 1 << 20, 1 << 22};
+	struct mmc_test_multiple_rw test_data = {
+		.bs = bs,
+		.size = TEST_AREA_MAX_SIZE,
+		.len = ARRAY_SIZE(bs),
+		.do_write = false,
+		.do_nonblock_req = false,
+		.prepare = MMC_TEST_PREP_NONE,
+	};
+
+	return mmc_test_rw_multiple_size(test, &test_data);
+}
+
+/*
+ * Multiple non-blocking read 4k to 4 MB chunks
+ */
+static int mmc_test_profile_mult_read_nonblock_perf(struct mmc_test_card *test)
+{
+	unsigned int bs[] = {1 << 12, 1 << 13, 1 << 14, 1 << 15, 1 << 16,
+			     1 << 17, 1 << 18, 1 << 19, 1 << 20, 1 << 22};
+	struct mmc_test_multiple_rw test_data = {
+		.bs = bs,
+		.size = TEST_AREA_MAX_SIZE,
+		.len = ARRAY_SIZE(bs),
+		.do_write = false,
+		.do_nonblock_req = true,
+		.prepare = MMC_TEST_PREP_NONE,
+	};
+
+	return mmc_test_rw_multiple_size(test, &test_data);
+}
+
 static const struct mmc_test_case mmc_test_cases[] = {
 	{
 		.name = "Basic write (no data verification)",
@@ -2221,6 +2496,33 @@ static const struct mmc_test_case mmc_test_cases[] = {
 		.cleanup = mmc_test_area_cleanup,
 	},
 
+	{
+		.name = "Write performance with blocking req 4k to 4MB",
+		.prepare = mmc_test_area_prepare,
+		.run = mmc_test_profile_mult_write_blocking_perf,
+		.cleanup = mmc_test_area_cleanup,
+	},
+
+	{
+		.name = "Write performance with non-blocking req 4k to 4MB",
+		.prepare = mmc_test_area_prepare,
+		.run = mmc_test_profile_mult_write_nonblock_perf,
+		.cleanup = mmc_test_area_cleanup,
+	},
+
+	{
+		.name = "Read performance with blocking req 4k to 4MB",
+		.prepare = mmc_test_area_prepare,
+		.run = mmc_test_profile_mult_read_blocking_perf,
+		.cleanup = mmc_test_area_cleanup,
+	},
+
+	{
+		.name = "Read performance with non-blocking req 4k to 4MB",
+		.prepare = mmc_test_area_prepare,
+		.run = mmc_test_profile_mult_read_nonblock_perf,
+		.cleanup = mmc_test_area_cleanup,
+	},
 };
 
 static DEFINE_MUTEX(mmc_test_lock);
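
For reference: the two non-blocking cases only run when the host driver
implements both of the new hooks that mmc_test_rw_multiple_size() checks
for. Below is a minimal sketch of the wiring, assuming the pre_req()/
post_req() prototypes introduced earlier in this series; the mydrv_*
names are made up for illustration and are not part of this patch.

#include <linux/mmc/core.h>
#include <linux/mmc/host.h>

/* Prepare the next request (typically dma_map_sg()) while the previous
 * one is still being transferred. */
static void mydrv_pre_req(struct mmc_host *host, struct mmc_request *mrq,
			  bool is_first_req)
{
}

/* Undo whatever pre_req() set up (typically dma_unmap_sg()) once the
 * request has completed. */
static void mydrv_post_req(struct mmc_host *host, struct mmc_request *mrq,
			   int err)
{
}

static const struct mmc_host_ops mydrv_ops = {
	/* .request, .set_ios, ... as usual */
	.pre_req	= mydrv_pre_req,
	.post_req	= mydrv_post_req,
};

With CONFIG_MMC_TEST enabled, the four new cases then show up next to the
existing performance tests in the card's mmc_test debugfs files
("testlist" to enumerate them, "test" to run a case by number).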