# sudo btrfs fi df /mnt/disk3
Data, single: total=12.70TiB, used=12.27TiB
System, DUP: total=8.00MiB, used=1.34MiB
Metadata, DUP: total=15.00GiB, used=14.50GiB
GlobalReserve, single: total=512.00MiB, used=608.00KiB

# mkdir /mnt/disk3/tst
mkdir: cannot create directory ‘tst’: No space left on device

I suspect this is a BTRFS balancing issue, but even BTRFS's own utility indicates there's still SOME space left. That should certainly be enough to create a directory.

Any ideas?

In general, the BTRFS default options for creating new volumes don't seem to work well for disks that I intend to fill completely immediately after formatting. Are there better options for this use case? I just use

# mkfs.btrfs /dev/sdd1

[–] [email protected] 10 points 6 months ago (3 children)

When you create a filesystem, there is a parameter for the percentage of reserved blocks. It defaults to 5%, meaning 5% of your partition size can only be written to by the "root" user.

You can decrease this value or just free some space. You can try to create files or folders as root as well.
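
On ext2/3/4 at least, you can lower that reserve with tune2fs (shown against a hypothetical /dev/sdX1; btrfs may not have this knob at all):

# tune2fs -m 1 /dev/sdX1

That drops the root-only reserve from 5% to 1% of the blocks.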

[–] [email protected] 4 points 6 months ago* (last edited 6 months ago) (2 children)

Is there any reason this 5% number still holds true? Back in the days of 40 MB hard drives it made sense to make sure the system didn’t totally run out while root was fixing the low disk situation … but these days even 1% is still several gigabytes of space, not likely to run out that quickly.

[–] [email protected] 2 points 6 months ago

Fragmentation, probably, but it seems arbitrary.

[–] [email protected] 3 points 6 months ago

You/I learn something new every day. Cool info!

[–] [email protected] 3 points 6 months ago

Are you sure that's the case with btrfs? I know ext has that feature. My understanding is that btrfs just has a global reserve that can be used for any data in a low-space situation.

# sudo btrfs fi usage /mnt/disk3
Overall:
    Device size:                  12.73TiB
    Device allocated:             12.73TiB
    Device unallocated:            1.00MiB
    Device missing:                  0.00B
    Device slack:                    0.00B
    Used:                         12.29TiB
    Free (estimated):            449.43GiB      (min: 449.43GiB)
    Free (statfs, df):           449.43GiB
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)
    Multiple profiles:                  no

Data,single: Size:12.70TiB, Used:12.26TiB (96.55%)
   /dev/sdd1      12.70TiB

Metadata,DUP: Size:15.00GiB, Used:14.49GiB (96.58%)
   /dev/sdd1      30.00GiB

System,DUP: Size:8.00MiB, Used:1.34MiB (16.80%)
   /dev/sdd1      16.00MiB

Unallocated:
   /dev/sdd1       1.00MiB

[–] [email protected] 5 points 6 months ago

For me the answer is always "snapshots", and it's normally because of Docker.

If you run a Docker image store on a BTRFS drive, Docker creates snapshots at various times. It never cleans them up and has no commands that clean them up, which means that if you delete a file, no space is freed, because the snapshots keep the file alive.
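
You can see what has piled up with the subvolume list (assuming Docker's default data root of /var/lib/docker):

# btrfs subvolume list /var/lib/docker

With the btrfs storage driver, each image layer and container appears there as its own subvolume or snapshot.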

[–] [email protected] 2 points 6 months ago

Looking at balancing might be the right place to start. Ref: https://archive.kernel.org/oldwiki/btrfs.wiki.kernel.org/index.php/FAQ.html#Help.21_I_ran_out_of_disk_space.21

You might want to start by rebalancing by percentages rather than all at once. If nothing else, it'll tell you much sooner whether you're on the right track. Something like sudo btrfs balance start -dusage=20 -musage=20 /mnt/disk3 to work only on block groups that are 20% full or less. That should coalesce them into fewer full block groups and free up some others.

[–] [email protected] 2 points 6 months ago (1 children)

Also, check 'df -i'. Probably not the case here, but...

[–] [email protected] 1 points 6 months ago

btrfs dynamically allocates inodes.

[–] [email protected] 2 points 6 months ago

As a rule of thumb, you should keep your disk usage around 60% or under.

My guess is that you have snapshots or other similar hidden data taking up space. List out your snapshots and subvolumes.
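
For example:

# btrfs subvolume list /mnt/disk3
# btrfs subvolume list -s /mnt/disk3

The first command lists every subvolume; the -s variant restricts the output to snapshots.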

[–] [email protected] 1 points 6 months ago

Have you tried a rebalance? What's up over there?

[–] [email protected] 1 points 6 months ago (1 children)

It would be nice if there were some automatic solution, but after running into this issue I always run a couple of different btrfs balance passes after deleting larger files, for good measure. It took a while to figure out why Linux said there wasn't any space left when df reported several GB available on the root partition.
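
Something like this escalating sequence (on my root filesystem; the exact usage thresholds are just the ones I tend to reach for):

# btrfs balance start -dusage=5 /
# btrfs balance start -dusage=20 /
# btrfs balance start -dusage=50 /

Each pass only rewrites data block groups emptier than the given threshold, so the cheap passes run first and you can stop as soon as enough space is unallocated again.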

[–] [email protected] 2 points 6 months ago

I am surprised there isn't an automatic mechanism to handle this, especially if it is such a frequent issue.

[–] [email protected] 1 points 6 months ago* (last edited 6 months ago)

The metadata seems to be pretty close to full. I'm not using BTRFS, but I've read that this is solved with rebalancing, or whatever it's called; btrfs's management command is able to do it.

But possibly the metadata is just unreasonably large, so maybe the solution is not rebalancing.
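
If rebalancing is the fix, a metadata-only balance would look something like:

# btrfs balance start -m /mnt/disk3

One caveat, going by the usage output above: with only 1 MiB unallocated, a balance can itself fail with ENOSPC, so freeing a little space or first balancing nearly-empty data block groups (-dusage=...) may be needed to give it room to work.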