
My btrfs filesystem is damaged (parent transid errors, internal errors, ...). I tried the repair options, but I think part of the filesystem is simply corrupted now. The btrfs volume is used for backups, so losing some backups is not a big deal. I just want the filesystem to return to a healthy state.

How?

root@openmediavault:/home/christian# btrfs check /dev/sdf
Opening filesystem to check...
Checking filesystem on /dev/sdf
UUID: dc7ca9ad-6af0-47e9-9c3a-860127d2c362
[1/7] checking root items
[2/7] checking extents
ref mismatch on [739855515648 16384] extent item 0, found 1
tree backref 739855515648 parent 2 root 2 not found in extent tree
backpointer mismatch on [739855515648 16384]
...
ref mismatch on [1407768510464 16384] extent item 1, found 0
backref 1407768510464 root 2 not referenced back 0x5642911036f0
incorrect global backref count on 1407768510464 found 1 wanted 0
backpointer mismatch on [1407768510464 16384]
owner ref check failed [1407768510464 16384]
ERROR: errors found in extent allocation tree or chunk allocation
[3/7] checking free space cache
cache and super generation don't match, space cache will be invalidated
[4/7] checking fs roots

Update

root@openmediavault:/home/christian# btrfs rescue super-recover /dev/sdf
All supers are valid, no need to recover

root@openmediavault:/home/christian# btrfs rescue zero-log /dev/sdf
Clearing log on /dev/sdf, previous log_root 0, level 0
Failed to find [1407768215552, 168, 16384]
btrfs unable to find ref byte nr 1407768248320 parent 0 root 2 owner 0 offset 0
transaction.c:195: btrfs_commit_transaction: BUG_ON ret triggered, value -5
btrfs(+0x3b748)[0x5588027c3748]
btrfs(btrfs_commit_transaction+0x12a)[0x5588027c3bcc]
btrfs(+0x5ae38)[0x5588027e2e38]
btrfs(main+0x1f3)[0x55880279be63]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xeb)[0x7f0ef5be609b]
btrfs(_start+0x2a)[0x55880279beaa]
Aborted

Christian
  • What exactly have you tried and what was the result? – gronostaj Oct 28 '20 at 15:01
  • I tried different btrfs repair options, but they were not successful. I encountered a btrfs internal bug (see https://github.com/kdave/btrfs-progs/issues/304) but haven't received an answer, so I assume repair is not possible. The question now is how to make the filesystem healthy again. Formatting would probably be the most drastic option; maybe there are less destructive ones. As I said, losing data would not be a big issue. – Christian Oct 28 '20 at 15:08
  • Were there any symptoms before this? How is the filesystem configured here (RAID, compression, etc.)? What kernel version do you have? Which version of btrfs-progs are you using? Any drive S.M.A.R.T. errors? – rickhg12hs Oct 28 '20 at 20:52
  • Btrfs-progs 4.20.1, Debian kernel 5.8.10-1, RAID1 across five 8 TB disks; one disk has 2 read errors, one has 5, the rest 0. Most extents are uncompressed; some older ones use zlib, newer ones zstd. – Christian Oct 29 '20 at 22:03
  • Anything else I can provide? – Christian Nov 02 '20 at 10:23

1 Answer


In your GitHub issue I see that you already reformatted your drive, but I'll leave this here for future reference in case anyone else with the same problem lands here:

The output suggests the filesystem is already badly damaged. In some cases attempting a repair can damage it further. The only reliable way to get the filesystem back into a stable state is to reformat it cleanly.
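As a rough sketch of what that reformat could look like: the device names, label, and RAID1 profile below are assumptions based on the setup described in the comments (RAID1 over five disks), not commands taken from the original post. This destroys all data on the listed devices, so double-check everything before running it.

```shell
# WARNING: mkfs.btrfs wipes all existing data on the listed devices.
# Device names, label and RAID1 profile are assumptions; adjust to
# your actual setup (see 'btrfs filesystem show' for the old members).
mkfs.btrfs -f \
    -d raid1 -m raid1 \
    -L backups \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
```

Using `raid1` for both data (`-d`) and metadata (`-m`) mirrors every block on two different devices, matching the redundancy the original filesystem had.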

If you need to recover important data from it first, your best bet is btrfs restore: it accesses the drive read-only, so it will not cause any further damage, and it tries to restore all readable files as well as possible, even letting you skip individual files that fail and continue with the rest.
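A minimal sketch of such a recovery run; the device name and target directory are assumptions for illustration. btrfs restore copies whatever it can read from the damaged (unmounted) filesystem into the target directory on a healthy filesystem.

```shell
# Assumptions: /dev/sdf is the damaged filesystem (kept unmounted),
# /mnt/recovered is an empty directory on a separate, healthy filesystem.
mkdir -p /mnt/recovered

# Dry run first: -D lists what would be restored without writing anything.
btrfs restore -v -D /dev/sdf /mnt/recovered

# Actual restore: -v is verbose, -i ignores errors on individual files
# and continues with the rest instead of aborting.
btrfs restore -v -i /dev/sdf /mnt/recovered
```

Since restore only ever reads from `/dev/sdf`, it is safe to run before deciding whether to reformat.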

acran