ZFS combines the features of a volume manager like LVM (e.g. spanning multiple devices, caching, redundancy, ...) with the functions of a traditional filesystem (think ext4 or similar).
Because of that combination it can integrate the two layers tightly instead of treating the "block level" as an opaque layer. For example, each data block in ZFS is stored with a checksum, so data corruption can be detected. If a block is stored on multiple devices (due to a mirroring setup or raid-z), then on detecting such corruption the filesystem layer will read the other copies and re-write the "correct" version to repair the damage.
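The detect-and-repair idea can be sketched in a few lines of Python. This is a hypothetical in-memory model, not how ZFS is actually implemented: the `Mirror` class, its method names, and the use of SHA-256 are all illustrative choices (ZFS keeps the checksum in the parent metadata block and supports several checksum algorithms such as fletcher4 and sha256).

```python
import hashlib

def checksum(data: bytes) -> bytes:
    # Illustrative stand-in for ZFS's per-block checksum
    return hashlib.sha256(data).digest()

class Mirror:
    """Toy two-way mirror: one data block, stored on two 'devices'."""
    def __init__(self, block: bytes):
        self.copies = [bytearray(block), bytearray(block)]  # two devices
        self.expected = checksum(block)                     # kept in metadata

    def corrupt(self, dev: int, garbage: bytes):
        self.copies[dev][:len(garbage)] = garbage           # simulate bit rot

    def read(self) -> bytes:
        # Find a copy matching the stored checksum, then rewrite
        # any copy that doesn't match ("self-healing").
        good = next(bytes(c) for c in self.copies
                    if checksum(bytes(c)) == self.expected)
        for i, c in enumerate(self.copies):
            if checksum(bytes(c)) != self.expected:
                self.copies[i] = bytearray(good)            # repair in place
        return good

m = Mirror(b"important data")
m.corrupt(0, b"XXXX")                   # device 0 silently returns rubbish
assert m.read() == b"important data"    # read still succeeds...
assert bytes(m.copies[0]) == b"important data"  # ...and the bad copy was fixed
```

The key point is that the checksum lives with the filesystem metadata, so the read path can tell a good copy from a bad one without any help from the application.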
First off, most filesystems (unfortunately, and almost surprisingly) don't checksum their data at all: when the HDD returns rubbish they tend not to detect the corruption (unless the corruption hits their metadata, in which case they often fail badly via a crash).
Second: if the duplication were handled by something like LVM, it couldn't automatically repair errors in a mirror setup, because LVM would have no idea which of the blocks is uncorrupted (if any).
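To see why, consider a checksum-less mirror reading back two copies that disagree. This is a hypothetical sketch (a classic LVM/md mirror typically serves reads from one leg and wouldn't even compare the copies): even if it did compare, there is no ground truth to arbitrate with.

```python
# Two legs of a mirror, one silently corrupted; no checksum stored anywhere.
copy_a = b"important data"
copy_b = b"imXXrtant data"   # which one is the rubbish? Neither copy says.

if copy_a != copy_b:
    # Best case: the mismatch is detected. But picking a "winner"
    # would be a coin flip without an independent checksum.
    result = "conflict detected, cannot repair"
else:
    result = "consistent"

assert result == "conflict detected, cannot repair"
```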
ZFS has many other useful (and some arcane) features, but that's the most important one related to its block-layer "LVM replacement".
ZFS is nifty and I really like it on my homelab server/NAS. But it is definitely a "sysadmin's filesystem". I probably wouldn't suggest it to anyone just for their workstation, as the learning curve is significant (and you can lock yourself into some bad decisions).