So let me give an example, and you tell me if I understand. If you change 1MB in the middle of a 1GB file, the filesystem is smart enough to only allocate a new 1MB chunk and update its tables to say "the first 0.5GB lives in the same old place, then 1MB is over here at this new location, and the remaining 0.5GB is still at the old location"?
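Something like this toy sketch is how I picture it (made-up structures, not any real filesystem's code):

```python
# Toy model of a copy-on-write overwrite: the file is a list of
# (logical_offset, length, physical_location) extents, and an overwrite
# in the middle only remaps the touched range to freshly written blocks.

def cow_overwrite(extents, offset, length, new_location):
    """Return a new extent list with [offset, offset+length) remapped."""
    result = []
    for (lo, ln, phys) in extents:
        hi = lo + ln
        if hi <= offset or lo >= offset + length:
            result.append((lo, ln, phys))        # untouched extent, kept as-is
            continue
        if lo < offset:                          # leading part still points at old blocks
            result.append((lo, offset - lo, phys))
        if hi > offset + length:                 # trailing part still points at old blocks
            keep = hi - (offset + length)
            result.append((offset + length, keep, phys + (offset + length - lo)))
    result.append((offset, length, new_location))  # the rewritten range, now elsewhere
    return sorted(result)

MB = 1024 * 1024
GB = 1024 * MB

# 1GB file stored as one contiguous extent starting at physical offset 0
file_extents = [(0, 1 * GB, 0)]

# overwrite 1MB in the middle; the new data lands at some far-away physical offset
file_extents = cow_overwrite(file_extents, 512 * MB, 1 * MB, 8 * GB)
print(file_extents)
# -> roughly: the first 512MB still at the old physical location, the rewritten
#    1MB at the new location, and the remaining ~511MB still at the old location
```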
If that's how it works, would this over time result in a single file being spread out across different physical blocks all over the place? I assume sequential reads of a file stored contiguously would usually perform better than random reads of a file scattered everywhere, right? Maybe not so much for modern SSDs... but free-space fragmentation could also become a problem, because now you have a bunch of random 1MB holes left behind.
I know ZFS encourages regular "scrubs" that I thought just checked for data integrity, but maybe it also takes the opportunity to defrag and re-serialize? I also don't know if the other filesystems have a similar operation.
Not OP, but yes, that's pretty much how it works. (ZFS scrubs do not defragment data, however.)
Fragmentation isn't really a problem for several reasons.
Some (most?) COW filesystems have mechanisms to mitigate fragmentation. ZFS, for instance, uses an allocation strategy designed to minimize fragmentation and can reallocate data during certain operations like resilvering or rebalancing.
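As a very rough illustration of that kind of strategy, here's a toy allocator that prefers the largest contiguous free region (this is not ZFS's actual metaslab code, just the general idea of avoiding chopping up small holes):

```python
# Toy "minimize fragmentation" allocator: given a free-space map, carve new
# writes out of the largest contiguous free region rather than first-fit,
# so the small holes left behind by COW rewrites aren't split up further.
# Illustrative only; real allocators are far more sophisticated.

def allocate(free_regions, size):
    """free_regions: list of (start, length). Returns (start, updated_regions)."""
    candidates = [r for r in free_regions if r[1] >= size]
    if not candidates:
        raise RuntimeError("no contiguous region large enough")
    start, length = max(candidates, key=lambda r: r[1])   # largest region wins
    updated = [r for r in free_regions if r != (start, length)]
    if length > size:
        updated.append((start + size, length - size))      # keep the leftover tail
    return start, sorted(updated)

free = [(0, 4), (10, 64), (100, 16)]   # units are arbitrary "blocks"
addr, free = allocate(free, 8)
print(addr, free)   # 10 [(0, 4), (18, 56), (100, 16)]
```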
ZFS doesn't even have a traditional defrag command. Because of its design and the way it handles file storage, a typical defrag pass isn't applicable or even necessary in the way it is with traditional filesystems.
Btrfs too handles chunk allocation efficiently and generally doesn't require defragmentation. Although it does have a defrag command, it's almost never used, unless you have a special reason to (e.g. a program that reads raw sectors of a file and needs the data to be contiguous).
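If you're curious how spread out a file actually is, filefrag (from e2fsprogs) reports the extent count on filesystems that support FIEMAP, such as btrfs, ext4, and XFS. A quick wrapper might look like this (assuming filefrag is installed):

```python
# Print how many extents each given file occupies, using filefrag.
import subprocess
import sys

def extent_count(path):
    out = subprocess.run(["filefrag", path], capture_output=True, text=True, check=True)
    # filefrag prints something like: "/path/to/file: 37 extents found"
    return int(out.stdout.rsplit(":", 1)[1].split()[0])

if __name__ == "__main__":
    for p in sys.argv[1:]:
        print(p, extent_count(p), "extents")
```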
Fragmentation is only really an issue for spinning disks; however, that is no longer a concern for most spinning-disk users because:
Enterprise users also almost always use a RAID (or similar) setup, so the same as above applies. They also use filesystems like ZFS, which employ heavy caching, typically backed by SSDs/NVMe drives, so again, fragmentation isn't really an issue.
Cool, good to know. I'd be interested to learn how they mitigate fragmentation, though. It's not clear to me how COW could avoid the copy cost without introducing fragmentation, but I'm certain people smarter than me have been thinking about the problem for my whole life. I know spinning disks have their own set of limitations, but even SSDs perform better on sequential reads than on random reads, so it seems like the preference would still be not to split a file up too much.