From patchwork Tue Apr 26 22:40:15 2022
X-Patchwork-Submitter: Boris Burkov
X-Patchwork-Id: 12828030
From: Boris Burkov
To: fstests@vger.kernel.org, linux-fscrypt@vger.kernel.org,
	linux-btrfs@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH v9 4/5] btrfs: test verity orphans with dmlogwrites
Date: Tue, 26 Apr 2022 15:40:15 -0700
Message-Id:

The behavior of orphans is most interesting across mounts that interrupt
fsverity enable at arbitrary points. To cover as many such cases as
possible, use dm-log-writes and dm-snapshot as in
log-writes/replay-individual.sh.

At each log entry, we want to assert a somewhat complicated invariant:

- If verity has not yet started: an orphan item indicates that verity
  has started.
- If verity has started: mount should process the orphan and blow away
  the partially built verity data, so expect 0 Merkle items after
  mounting the snapshot dev. If we can measure the file, verity has
  finished.
- If verity has finished: the orphan is gone, so mount must not blow
  away the Merkle items. Expect the same number of Merkle items before
  and after mounting the snapshot dev.

Note that this relies on grepping the output of btrfs inspect-internal
dump-tree. Until btrfs-progs can print the new Merkle items by name,
they show up as UNKNOWN.36/37.

Signed-off-by: Boris Burkov
---
 tests/btrfs/291     | 161 ++++++++++++++++++++++++++++++++++++++++++++
 tests/btrfs/291.out |   2 +
 2 files changed, 163 insertions(+)
 create mode 100755 tests/btrfs/291
 create mode 100644 tests/btrfs/291.out

diff --git a/tests/btrfs/291 b/tests/btrfs/291
new file mode 100755
index 00000000..1bb3f1b3
--- /dev/null
+++ b/tests/btrfs/291
@@ -0,0 +1,161 @@
+#! /bin/bash
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2021 Facebook, Inc. All Rights Reserved.
+#
+# FS QA Test 291
+#
+# Test btrfs consistency after each FUA while enabling verity on a file
+# This test works by following the pattern in log-writes/replay-individual.sh:
+# 1. run a workload (verity + sync) while logging to the log device
+# 2. replay an entry to the replay device
+# 3. snapshot the replay device to the snapshot device
+# 4. run destructive tests on the snapshot device (e.g. mount with orphans)
+# 5. goto 2
+#
+. ./common/preamble
+_begin_fstest auto verity
+
+# Override the default cleanup function.
+_cleanup()
+{
+	cd /
+	_log_writes_cleanup &> /dev/null
+	rm -f $img
+	$LVM_PROG vgremove -f -y $vgname >>$seqres.full 2>&1
+	losetup -d $loop_dev >>$seqres.full 2>&1
+}
+
+# Import common functions.
+. ./common/filter
+. ./common/attr
+. ./common/dmlogwrites
+. ./common/verity
+
+# real QA test starts here
+_supported_fs btrfs
+
+_require_scratch
+_require_test
+_require_log_writes
+_require_dm_target snapshot
+_require_command $LVM_PROG lvm
+_require_scratch_verity
+_require_btrfs_command inspect-internal dump-tree
+_require_test_program "log-writes/replay-log"
+
+sync_loop() {
+	i=$1
+	[ -z "$i" ] && _fail "sync loop needs a number of iterations"
+	while [ $i -gt 0 ]
+	do
+		$XFS_IO_PROG -c sync $SCRATCH_MNT
+		let i-=1
+	done
+}
+
+dump_tree() {
+	local dev=$1
+	$BTRFS_UTIL_PROG inspect-internal dump-tree $dev
+}
+
+count_item() {
+	local dev=$1
+	local item=$2
+	dump_tree $dev | grep -c $item
+}
+
+_log_writes_init $SCRATCH_DEV
+_log_writes_mkfs
+_log_writes_mount
+
+f=$SCRATCH_MNT/fsv
+MB=$((1024 * 1024))
+img=$TEST_DIR/$$.img
+$XFS_IO_PROG -fc "pwrite -q 0 $((10 * $MB))" $f
+$XFS_IO_PROG -c sync $SCRATCH_MNT
+sync_loop 10 &
+sync_proc=$!
+_fsv_enable $f
+$XFS_IO_PROG -c sync $SCRATCH_MNT
+wait $sync_proc
+
+_log_writes_unmount
+_log_writes_remove
+
+# the snapshot and the replay will each be the size of the log writes dev
+# so we create a loop device of size 2 * logwrites and then split it into
+# replay and snapshot with lvm.
+log_writes_blocks=$(blockdev --getsz $LOGWRITES_DEV)
+replay_bytes=$((512 * $log_writes_blocks))
+img_bytes=$((2 * $replay_bytes))
+
+$XFS_IO_PROG -fc "pwrite -q -S 0 $img_bytes $MB" $img >>$seqres.full 2>&1 || \
+	_fail "failed to create image for loop device"
+loop_dev=$(losetup -f --show $img)
+vgname=vg_replay
+lvname=lv_replay
+replay_dev=/dev/mapper/vg_replay-lv_replay
+snapname=lv_snap
+snap_dev=/dev/mapper/vg_replay-$snapname
+
+$LVM_PROG vgcreate -f $vgname $loop_dev >>$seqres.full 2>&1 || _fail "failed to vgcreate $vgname"
+$LVM_PROG lvcreate -L "$replay_bytes"B -n $lvname $vgname -y >>$seqres.full 2>&1 || \
+	_fail "failed to lvcreate $lvname"
+$UDEV_SETTLE_PROG >>$seqres.full 2>&1
+
+replay_log_prog=$here/src/log-writes/replay-log
+num_entries=$($replay_log_prog --log $LOGWRITES_DEV --num-entries)
+entry=$($replay_log_prog --log $LOGWRITES_DEV --replay $replay_dev --find --end-mark mkfs | cut -d@ -f1)
+$replay_log_prog --log $LOGWRITES_DEV --replay $replay_dev --limit $entry || \
+	_fail "failed to replay to start entry $entry"
+let entry+=1
+
+# state = 0: verity hasn't started
+# state = 1: verity underway
+# state = 2: verity done
+state=0
+while [ $entry -lt $num_entries ];
+do
+	$replay_log_prog --limit 1 --log $LOGWRITES_DEV --replay $replay_dev --start $entry || \
+		_fail "failed to take replay step at entry: $entry"
+
+	$LVM_PROG lvcreate -s -L 4M -n $snapname $vgname/$lvname >>$seqres.full 2>&1 || \
+		_fail "Failed to create snapshot"
+	$UDEV_SETTLE_PROG >>$seqres.full 2>&1
+
+	orphan=$(count_item $snap_dev ORPHAN)
+	if [ $state -eq 0 ]; then
+		[ $orphan -gt 0 ] && state=1
+	fi
+
+	pre_mount=$(count_item $snap_dev UNKNOWN.3[67])
+	_mount $snap_dev $SCRATCH_MNT || _fail "mount failed at entry $entry"
+	fsverity measure $SCRATCH_MNT/fsv >>$seqres.full 2>&1
+	measured=$?
+	umount $SCRATCH_MNT
+	[ $state -eq 1 ] && [ $measured -eq 0 ] && state=2
+	[ $state -eq 2 ] && ([ $measured -eq 0 ] || _fail "verity done, but measurement failed at entry $entry")
+	post_mount=$(count_item $snap_dev UNKNOWN.3[67])
+
+	echo "entry: $entry, state: $state, orphan: $orphan, pre_mount: $pre_mount, post_mount: $post_mount" >> $seqres.full
+
+	if [ $state -eq 1 ]; then
+		[ $post_mount -eq 0 ] || \
+			_fail "mount failed to clear under-construction merkle items pre: $pre_mount, post: $post_mount at entry $entry";
+	fi
+	if [ $state -eq 2 ]; then
+		[ $pre_mount -gt 0 ] || \
+			_fail "expected to have verity items before mount at entry $entry"
+		[ $pre_mount -eq $post_mount ] || \
+			_fail "mount cleared merkle items after verity was enabled $pre_mount vs $post_mount at entry $entry";
+	fi
+
+	let entry+=1
+	$LVM_PROG lvremove $vgname/$snapname -y >>$seqres.full
+done
+
+echo "Silence is golden"
+
+# success, all done
+status=0
+exit
diff --git a/tests/btrfs/291.out b/tests/btrfs/291.out
new file mode 100644
index 00000000..04605c70
--- /dev/null
+++ b/tests/btrfs/291.out
@@ -0,0 +1,2 @@
+QA output created by 291
+Silence is golden
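
For reference, a minimal sketch of an fstests local.config that this test
can run against. The device paths and mount points below are assumptions
(not part of the patch); substitute your own spare block devices. The test
additionally needs a kernel with btrfs verity support and the fsverity
userspace tool in $PATH.

    # local.config sketch -- device names are placeholders, use your own spares
    export FSTYP=btrfs
    export TEST_DEV=/dev/vdb        # assumed spare device for the test filesystem
    export TEST_DIR=/mnt/test
    export SCRATCH_DEV=/dev/vdc     # assumed spare device; dm-log-writes is layered on it
    export SCRATCH_MNT=/mnt/scratch
    export LOGWRITES_DEV=/dev/vdd   # assumed spare device holding the dm-log-writes log

    # then run just this test:
    # ./check btrfs/291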