
ci: run xfstests quick group in a nested VM #1622

Draft
ddiss wants to merge 1 commit into btrfs:master from ddiss:btrfs_gh_ci

Conversation


ddiss commented Feb 27, 2026

GitHub-hosted "ubuntu-latest" x86-64 runners have enough resources (KVM, 4 cores, 16 GB RAM, 14 GB SSD) to build and run a mainline kernel + xfstests in a nested VM.

This script uses rapido (rapido-linux/rapido#258) as a minimal initramfs generator and thin wrapper around QEMU. For simplicity it'd likely make sense to branch it under the btrfs namespace.

The test VM currently uses btrfs-progs from the ubuntu-24.04 host system; this could also be changed to a source-compiled version. TEST and SCRATCH devices are backed by 8G zstd-compressed qcow2 images.
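As a sketch (not part of this PR), a workflow step could gate the nested-VM job on KVM actually being exposed to the runner; GitHub-hosted x86-64 runners normally provide /dev/kvm:

```shell
# Sketch: check whether /dev/kvm is present and writable before
# launching the nested test VM. Variable names are illustrative.
if [ -c /dev/kvm ] && [ -w /dev/kvm ]; then
  kvm=available
else
  kvm=unavailable
fi
echo "KVM: $kvm"
```

A real workflow would likely fail or skip the job when KVM is unavailable rather than just reporting it.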


Signed-off-by: David Disseldorp <ddiss@suse.de>
ddiss marked this pull request as draft February 27, 2026 12:17

ddiss commented Feb 27, 2026

Raising this as a draft in case there's interest in having the GH-hosted VMs perform fstests quick group runs on PRs here. I discussed it briefly with @kdave and he mentioned that it may be worth having alongside the self-hosted runners. It doesn't belong in the master branch; I can rebase against ci or another branch if desired.


ddiss commented Mar 3, 2026

cc'ing @morbidrsa - you may still be familiar with the rapido fstests runners (although in this case it's no longer using Dracut)

@morbidrsa (Member) commented:

Btw, is there a possibility to trigger a 2nd test VM as well? If yes, it would be awesome to have a 2nd test VM with two emulated zoned block devices, i.e. via QEMU's NVMe ZNS model:

    -device nvme,id=nvme0,serial=deadbeef \
    -drive file=img0,id=nvmezns0,format=raw,if=none \
    -device nvme-ns,drive=nvmezns0,bus=nvme0,nsid=1,zoned=true,zoned.max_active=14,zoned.zone_size=64M

    -device nvme,id=nvme1,serial=deadcafe \
    -drive file=img1,id=nvmezns1,format=raw,if=none \
    -device nvme-ns,drive=nvmezns1,bus=nvme1,nsid=1,zoned=true,zoned.max_active=14,zoned.zone_size=64M
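For reference, a minimal sketch of preparing the raw backing images for those namespaces. Only the 64M zone size comes from the options above; the image name and zone count are illustrative:

```shell
# Sketch: back each ZNS namespace with a raw image sized as a whole
# number of zones, matching zoned.zone_size=64M above.
zone_size=$((64 * 1024 * 1024))
num_zones=128                          # illustrative: 128 * 64M = 8G
img_size=$((num_zones * zone_size))
# qemu-img may be absent on a build host, so guard the call.
if command -v qemu-img >/dev/null 2>&1; then
  qemu-img create -f raw img0 "$img_size"
fi
echo "zones per device: $((img_size / zone_size))"
```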



ddiss commented Mar 4, 2026

> Btw, is there a possibility to trigger a 2nd test VM as well? If yes, it would be awesome to have a 2nd test VM with two emulated zoned block devices, i.e. via QEMU's NVMe ZNS model:

https://docs.github.com/en/actions/reference/limits indicates that GH-hosted CI jobs time out after 6 hours, which should give us plenty of time to run both standard and zoned fstests runs.

It should just be a matter of reworking your old dracut-based script to use the new rapido-cut manifest format.

Still, I think it makes sense to start simple and use what we have here first, if considered worthwhile.


kdave commented Mar 4, 2026

#1623 with base branch ci-kvm also uses KVM, which I copied from the rapido branch a while back. We can have both ways; I'll add this pull request as branch ci-rapido. The reason to keep them separate is to allow us to continue fine-tuning or fixing.


kdave commented Mar 4, 2026

We can add the emulated zoned devices too; I'll add them to ci-kvm.


kdave commented Mar 5, 2026

The zoned devices do not show up in the VM and I don't know why; qemu does not complain, but /proc/partitions is empty. In https://github.com/btrfs/linux/actions/runs/22697918965/job/65808369986#step:13:62 the line after lscpu should print it, but there's 'free' and then the device mkfs fails.

@morbidrsa (Member) commented:

> The zoned devices do not show up in the VM and I don't know why; qemu does not complain, but /proc/partitions is empty. In https://github.com/btrfs/linux/actions/runs/22697918965/job/65808369986#step:13:62 the line after lscpu should print it, but there's 'free' and then the device mkfs fails.

I'm not sure this is correct: `-o=-device -o=nvme,id=nvme0,serial=deadbeef`. In virtme-ng I usually do `--qemu-opts -device nvme,id=0,serial=deadbeef --qemu-opts -drive file=disk1,id=nvmezns0,format=raw,if=none` yada yada


kdave commented Mar 5, 2026

The --qemu-opts option is said to consume all remaining options and should be last, so your example may work but is not according to the docs. The -o or --qemu-opt option adds a single argument, so splitting it is IMHO correct, and the use of = should not make any difference, but it makes it visually more obvious what the option value is.
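To illustrate the single-argument semantics described above (a stand-in for rapido's actual option handling, not its code): each -o contributes exactly one element to QEMU's argv, so a flag and its value need two separate -o options.

```shell
# Stand-in for -o/--qemu-opt handling: each call below models one -o
# option and appends exactly one argument to QEMU's command line.
nargs=0
add_qemu_opt() { nargs=$((nargs + 1)); }
# "-o=-device -o=nvme,id=nvme0,serial=deadbeef" therefore yields two
# separate QEMU arguments:
add_qemu_opt "-device"
add_qemu_opt "nvme,id=nvme0,serial=deadbeef"
echo "qemu argv elements added: $nargs"
```

This is why splitting the pair across two -o options is the form the docs describe, while a single --qemu-opts catch-all only works if it is the last option.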


kdave commented Mar 5, 2026

The lack of zoned devices is probably caused by a missing CONFIG_BLK_DEV_ZONED.
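A quick way to confirm, sketched below; the config path is a guess (for the rapido build it would be the kernel tree's .config):

```shell
# Sketch: check that zoned block device support is built in before
# booting the zoned test VM. The default config path is an assumption.
config="${KCONFIG:-/boot/config-$(uname -r)}"
if grep -q '^CONFIG_BLK_DEV_ZONED=y' "$config" 2>/dev/null; then
  zoned=enabled
else
  zoned=missing
fi
echo "CONFIG_BLK_DEV_ZONED: $zoned"
```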
