
ZFS native snapshotting ability#362

Open
mumblepins wants to merge 3 commits into
openSUSE:masterfrom
mumblepins:feature/zfs

Conversation

@mumblepins

An initial version of Snapper with ZFS (#145); it at least passes the testsuite, and seems to work.

It uses the zfs binary as the front end. The calls are generally simple enough, and it seemed easier.

Notes:

  • ZFS on Linux doesn't have ACLs enabled by default. Right now that causes some issues with some of the comparison tests of snapper, but it works fine if ACLs are enabled on the volume.
  • ZFS snapshots are the whole volume or nothing. I couldn't figure out a way to exclude a .snapshot directory, so instead snapper makes a subvolume that stores the .snapshot directory. If anyone has a better idea, I'm open to changes.
  • ZFS automatically has a hidden folder in the root of each volume that contains the snapshots (.zfs/snapshot). I'm using symlinks from .snapper/1/snapshot --> .zfs/snapshot/snapper-1. Oh yeah, snapshots are called snapper-#. I've put some stubs in the code for potentially using either bind mounts to this point or using the legacy mounting system. zfs mount doesn't like to mount snapshots, and really, there's not much reason to.
  • There's also a stub where I was going to monitor the zpool get freeing stat as a substitute for the sync command. Kinda busy with lots of other things though, so if anyone else wants to implement it, be my guest.
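The zfs-binary-as-front-end approach described above can be sketched roughly as follows. This is only an illustration of the calling convention, not the PR's actual C++ code; the helper functions and the "rpool/home" dataset name are made up, while the `zfs snapshot`/`zfs destroy` subcommands and the snapper-# naming are as described in the notes.

```python
import subprocess

def create_snapshot_cmd(dataset: str, num: int) -> list[str]:
    # `zfs snapshot <dataset>@snapper-<num>` creates a read-only snapshot
    return ["zfs", "snapshot", f"{dataset}@snapper-{num}"]

def destroy_snapshot_cmd(dataset: str, num: int) -> list[str]:
    # `zfs destroy <dataset>@snapper-<num>` removes it again
    return ["zfs", "destroy", f"{dataset}@snapper-{num}"]

def run(argv: list[str]) -> str:
    # check=True raises CalledProcessError if zfs exits non-zero
    return subprocess.run(argv, check=True, capture_output=True, text=True).stdout

print(create_snapshot_cmd("rpool/home", 1))
# ['zfs', 'snapshot', 'rpool/home@snapper-1']
```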

mumblepins and others added 3 commits October 2, 2017 01:45
@johanfleury
Copy link
Copy Markdown

johanfleury commented Jun 27, 2019

Hi @mumblepins (cc @aschnell).

I'm new to ZFS and was previously using snapper on BTRFS. I'd like to stick with a tool I know to manage my snapshots. I see this PR is stale and hasn't been updated since its creation in 2017, so I wonder what's missing for it to be merged?

@aschnell, it seems that you're snapper's main dev; could you please review these changes and tell us what you think? I'm not a C++ developer, but the changes look simple enough that I may be able to work on them if needed.

@mumblepins, regarding your notes:

  • ZFS on Linux doesn't have ACLs enabled by default. Right now that causes some issues with some of the comparison tests of snapper, but it works fine if ACLs are enabled on the volume.

Could this be made a requirement for using snapper on ZFS, with a documentation statement and a warning logged at startup if ACLs aren't enabled?
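If ACLs did become a documented requirement, the startup warning could be as simple as parsing `zfs get` output. The sketch below is only a suggestion, not code from the PR: the `acltype` property values are real ZFS-on-Linux ones, but the helper is hypothetical.

```python
def acls_enabled(zfs_get_output: str) -> bool:
    # Parses the single value printed by:
    #   zfs get -H -o value acltype <dataset>
    # "posix"/"posixacl" enable POSIX ACLs, "nfsv4" (newer OpenZFS)
    # provides NFSv4 ACLs, and "off"/"noacl" disable ACLs entirely.
    return zfs_get_output.strip() in ("posix", "posixacl", "nfsv4")

# e.g. a startup check:
if not acls_enabled("off\n"):
    print("snapper: warning: ACLs are disabled on this dataset; "
          "comparison features may misbehave")
```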

  • ZFS snapshots are the whole volume or nothing. I couldn't figure out a way to exclude a .snapshot directory, so instead snapper makes a subvolume that stores the .snapshot directory. If anyone has a better idea, I'm open to changes.
  • ZFS automatically has a hidden folder in the root of each volume that contains the snapshots (.zfs/snapshot). I'm using symlinks from .snapper/1/snapshot --> .zfs/snapshot/snapper-1. Oh yeah, snapshots are called snapper-#. I've put some stubs in the code for potentially using either bind mounts to this point or using the legacy mounting system. zfs mount doesn't like to mount snapshots

Is it required that snapshots are stored in the .snapshots directory? Could we just let ZFS manage its own directory and only interact with it through the command line or libraries?

In #145 @aschnell said that LVM snapshots were also not “visible” by default.

I know it would be great to have something consistent with how snapshots are managed on BTRFS but, maybe we could do that in a future iteration if it's a blocking point.

  • There's also a stub where I was going to monitor the zpool get freeing stat as a substitute for the sync command. Kinda busy with lots of other things though, so if anyone else wants to implement it, be my guest.

Could you elaborate on this? If ZFS doesn't provide a way to sync deletions, could we just ignore the --sync flag? (Of course, the documentation would need a statement to that effect.)
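For reference, the stub in question would presumably poll the pool's `freeing` property, which reports how many bytes are still being reclaimed after a destroy (readable via `zpool get -H -p -o value freeing <pool>`). The property and command are real; the polling helpers below are hypothetical, just to illustrate what a --sync substitute might look like.

```python
import time

def parse_freeing(zpool_get_output: str) -> int:
    # `zpool get -H -p -o value freeing <pool>` prints the raw byte count
    return int(zpool_get_output.strip())

def wait_until_freed(read_freeing, interval: float = 0.5, timeout: float = 60.0) -> bool:
    """Poll read_freeing() until the pool reports 0 bytes left to reclaim.

    read_freeing is a callable returning the current `freeing` value;
    returns True once it reaches 0, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if read_freeing() == 0:
            return True
        time.sleep(interval)
    return False
```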

@danboid
Copy link
Copy Markdown
Contributor

danboid commented Apr 22, 2026

I think ZFS support would be a great addition to snapper.

I tried sanoid, but it had issues tidying up old snapshots. znapzend didn't do what I needed, and I tried a few others too. zfs-auto-snapshot works, but it's just a few simple scripts: you can choose which datasets you don't want snapshotted and when, but it doesn't have many options beyond that.

I use snapper to handle auto snapshots when I'm "forced" to use BTRFS so it would be handy to be able to use the same snapshot tool with ZFS too.

@mumblepins said

"ZFS snapshots are the whole volume or nothing."

If by volume you mean a ZFS pool, that's not true: there is a zfs command to mark certain datasets to be excluded from snapshotting. I can dig it out if you don't know it.
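For what it's worth, this is likely the `com.sun:auto-snapshot` user property that zfs-auto-snapshot honours (set via `zfs set com.sun:auto-snapshot=false pool/dataset`). A single `zfs snapshot` call still only covers one dataset, so "exclusion" is a tool-side convention of skipping datasets marked that way. A sketch of such a filter, assuming output from `zfs get -H` (the helper and sample data are illustrative):

```python
def snapshot_candidates(zfs_get_lines: str) -> list[str]:
    """Keep datasets whose com.sun:auto-snapshot value is not 'false'.

    Expects tab-separated lines from:
      zfs get -H -t filesystem -o name,value com.sun:auto-snapshot
    (an unset user property is printed as '-').
    """
    keep = []
    for line in zfs_get_lines.splitlines():
        if not line.strip():
            continue
        name, value = line.split("\t")
        if value != "false":
            keep.append(name)
    return keep

print(snapshot_candidates("rpool/home\t-\nrpool/tmp\tfalse\n"))
# ['rpool/home']
```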

@johanfleury said

"Is it required that snapshots are stored in the .snapshots directory? Could we just let ZFS manage its own directory and only interact with it through the command line or libraries?"

I would prefer if snapper could be used in as ZFS-native a way as possible, which means using .zfs in the root of the dataset ("subvolume") to store snapshots instead of .snapshots.

I recently contributed a script called srt (Snapshot Restore Tool) to snapper to make it simple for users to restore snapper snapshots of their home dirs, if they all have their own subvolume.

https://github.com/openSUSE/snapper/blob/master/scripts/srt.sh

It will be quite easy for me to add ZFS support to srt so that it could be used with ZFS too, I've just not got round to that yet. There's not much need for it anyway because ZFS users can use httm (hottub time machine) instead which has many more features for restoring individual files etc.

https://github.com/kimono-koans/httm

httm claims to support BTRFS but I had no luck with it. ZFS users who use snapshots should check it out.

