From patchwork Tue Feb 15 00:09:57 2022
X-Patchwork-Submitter: Boris Burkov
X-Patchwork-Id: 12746261
From: Boris Burkov
To: fstests@vger.kernel.org, linux-fscrypt@vger.kernel.org,
	linux-btrfs@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH v6 3/4] btrfs: test verity orphans with dmlogwrites
Date: Mon, 14 Feb 2022 16:09:57 -0800
Message-Id: 
X-Mailer: git-send-email 2.34.0
In-Reply-To: 
References: 
The behavior of orphans is most interesting across mounts that are
interrupted at arbitrary points while fsverity is being enabled. To
cover as many such cases as possible, use dmlogwrites and dmsnapshot
as in log-writes/replay-individual.sh.

At each log entry, we want to assert a somewhat complicated invariant:

If verity has not yet started: an orphan item indicates that verity
has started.
If verity has started: mount should handle the orphan and blow away
the partial verity data, so expect 0 merkle items after mounting the
snapshot dev. If we can measure the file, verity has finished.
If verity has finished: the orphan should be gone, so mount should not
blow away merkle items. Expect the same number of merkle items before
and after mounting the snapshot dev.

Note that this relies on grepping the output of btrfs inspect-internal
dump-tree. Until btrfs-progs gains the ability to print the new merkle
items, they show up as UNKNOWN.36/37.

Signed-off-by: Boris Burkov
---
 tests/btrfs/291     | 161 ++++++++++++++++++++++++++++++++++++++++++++
 tests/btrfs/291.out |   2 +
 2 files changed, 163 insertions(+)
 create mode 100755 tests/btrfs/291
 create mode 100644 tests/btrfs/291.out

diff --git a/tests/btrfs/291 b/tests/btrfs/291
new file mode 100755
index 00000000..1bb3f1b3
--- /dev/null
+++ b/tests/btrfs/291
@@ -0,0 +1,161 @@
+#! /bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2021 Facebook, Inc. All Rights Reserved.
+#
+# FS QA Test 291
+#
+# Test btrfs consistency after each FUA while enabling verity on a file
+# This test works by following the pattern in log-writes/replay-individual.sh:
+# 1. run a workload (verity + sync) while logging to the log device
+# 2. replay an entry to the replay device
+# 3. snapshot the replay device to the snapshot device
+# 4. run destructive tests on the snapshot device (e.g. mount with orphans)
+# 5. goto 2
+#
+. ./common/preamble
+_begin_fstest auto verity
+
+# Override the default cleanup function.
+_cleanup()
+{
+	cd /
+	_log_writes_cleanup &> /dev/null
+	rm -f $img
+	$LVM_PROG vgremove -f -y $vgname >>$seqres.full 2>&1
+	losetup -d $loop_dev >>$seqres.full 2>&1
+}
+
+# Import common functions.
+. ./common/filter
+. ./common/attr
+. ./common/dmlogwrites
+. ./common/verity
+
+# real QA test starts here
+_supported_fs btrfs
+
+_require_scratch
+_require_test
+_require_log_writes
+_require_dm_target snapshot
+_require_command $LVM_PROG lvm
+_require_scratch_verity
+_require_btrfs_command inspect-internal dump-tree
+_require_test_program "log-writes/replay-log"
+
+sync_loop() {
+	i=$1
+	[ -z "$i" ] && _fail "sync loop needs a number of iterations"
+	while [ $i -gt 0 ]
+	do
+		$XFS_IO_PROG -c sync $SCRATCH_MNT
+		let i-=1
+	done
+}
+
+dump_tree() {
+	local dev=$1
+	$BTRFS_UTIL_PROG inspect-internal dump-tree $dev
+}
+
+count_item() {
+	local dev=$1
+	local item=$2
+	dump_tree $dev | grep -c $item
+}
+
+_log_writes_init $SCRATCH_DEV
+_log_writes_mkfs
+_log_writes_mount
+
+f=$SCRATCH_MNT/fsv
+MB=$((1024 * 1024))
+img=$TEST_DIR/$$.img
+$XFS_IO_PROG -fc "pwrite -q 0 $((10 * $MB))" $f
+$XFS_IO_PROG -c sync $SCRATCH_MNT
+sync_loop 10 &
+sync_proc=$!
+_fsv_enable $f
+$XFS_IO_PROG -c sync $SCRATCH_MNT
+wait $sync_proc
+
+_log_writes_unmount
+_log_writes_remove
+
+# the snapshot and the replay will each be the size of the log writes dev
+# so we create a loop device of size 2 * logwrites and then split it into
+# replay and snapshot with lvm.
+
+log_writes_blocks=$(blockdev --getsz $LOGWRITES_DEV)
+replay_bytes=$((512 * $log_writes_blocks))
+img_bytes=$((2 * $replay_bytes))
+
+$XFS_IO_PROG -fc "pwrite -q -S 0 $img_bytes $MB" $img >>$seqres.full 2>&1 || \
+	_fail "failed to create image for loop device"
+loop_dev=$(losetup -f --show $img)
+vgname=vg_replay
+lvname=lv_replay
+replay_dev=/dev/mapper/vg_replay-lv_replay
+snapname=lv_snap
+snap_dev=/dev/mapper/vg_replay-$snapname
+
+$LVM_PROG vgcreate -f $vgname $loop_dev >>$seqres.full 2>&1 || _fail "failed to vgcreate $vgname"
+$LVM_PROG lvcreate -L "$replay_bytes"B -n $lvname $vgname -y >>$seqres.full 2>&1 || \
+	_fail "failed to lvcreate $lvname"
+$UDEV_SETTLE_PROG >>$seqres.full 2>&1
+
+replay_log_prog=$here/src/log-writes/replay-log
+num_entries=$($replay_log_prog --log $LOGWRITES_DEV --num-entries)
+entry=$($replay_log_prog --log $LOGWRITES_DEV --replay $replay_dev --find --end-mark mkfs | cut -d@ -f1)
+$replay_log_prog --log $LOGWRITES_DEV --replay $replay_dev --limit $entry || \
+	_fail "failed to replay to start entry $entry"
+let entry+=1
+
+# state = 0: verity hasn't started
+# state = 1: verity underway
+# state = 2: verity done
+state=0
+while [ $entry -lt $num_entries ];
+do
+	$replay_log_prog --limit 1 --log $LOGWRITES_DEV --replay $replay_dev --start $entry || \
+		_fail "failed to take replay step at entry: $entry"
+
+	$LVM_PROG lvcreate -s -L 4M -n $snapname $vgname/$lvname >>$seqres.full 2>&1 || \
+		_fail "Failed to create snapshot"
+	$UDEV_SETTLE_PROG >>$seqres.full 2>&1
+
+	orphan=$(count_item $snap_dev ORPHAN)
+	if [ $state -eq 0 ]; then
+		[ $orphan -gt 0 ] && state=1
+	fi
+
+	pre_mount=$(count_item $snap_dev UNKNOWN.3[67])
+	_mount $snap_dev $SCRATCH_MNT || _fail "mount failed at entry $entry"
+	fsverity measure $SCRATCH_MNT/fsv >>$seqres.full 2>&1
+	measured=$?
+	umount $SCRATCH_MNT
+	[ $state -eq 1 ] && [ $measured -eq 0 ] && state=2
+	[ $state -eq 2 ] && ([ $measured -eq 0 ] || _fail "verity done, but measurement failed at entry $entry")
+	post_mount=$(count_item $snap_dev UNKNOWN.3[67])
+
+	echo "entry: $entry, state: $state, orphan: $orphan, pre_mount: $pre_mount, post_mount: $post_mount" >> $seqres.full
+
+	if [ $state -eq 1 ]; then
+		[ $post_mount -eq 0 ] || \
+			_fail "mount failed to clear under-construction merkle items pre: $pre_mount, post: $post_mount at entry $entry";
+	fi
+	if [ $state -eq 2 ]; then
+		[ $pre_mount -gt 0 ] || \
+			_fail "expected to have verity items before mount at entry $entry"
+		[ $pre_mount -eq $post_mount ] || \
+			_fail "mount cleared merkle items after verity was enabled $pre_mount vs $post_mount at entry $entry";
+	fi
+
+	let entry+=1
+	$LVM_PROG lvremove $vgname/$snapname -y >>$seqres.full
+done
+
+echo "Silence is golden"
+
+# success, all done
+status=0
+exit
diff --git a/tests/btrfs/291.out b/tests/btrfs/291.out
new file mode 100644
index 00000000..04605c70
--- /dev/null
+++ b/tests/btrfs/291.out
@@ -0,0 +1,2 @@
+QA output created by 291
+Silence is golden
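---
Not part of the patch: a minimal sketch of how one might run the new test,
assuming a standard fstests setup. All device paths below are placeholders,
not values taken from this series; dm-log-writes based tests need
LOGWRITES_DEV pointing at a spare block device, and the verity checks need
the fsverity userspace tool plus a kernel with btrfs verity support.

	# example local.config in the fstests checkout (paths are placeholders)
	export FSTYP=btrfs
	export TEST_DEV=/dev/vdb
	export TEST_DIR=/mnt/test
	export SCRATCH_DEV=/dev/vdc
	export SCRATCH_MNT=/mnt/scratch
	export LOGWRITES_DEV=/dev/vdd

	# run just this test
	./check btrfs/291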