
Items tagged with: ZFS



The best way to test the resilience of your service is: reboot the server. I mean it. Don't just restart the service, reboot the server. A story in two parts:

Part 1: a while ago I configured ZFS under my Talos Kubernetes cluster, and everything was fine until I decided to reboot. When it came back, nothing was working, because I had forgotten to configure a way for Talos to read the ZFS volume encryption key.
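If you want to check this on your own pool before the next reboot, a minimal sketch, assuming a hypothetical encrypted dataset called tank/encrypted:

```sh
# Check whether the encryption key for the dataset is currently loaded
# (dataset name is hypothetical).
zfs get keystatus tank/encrypted

# Load the key manually and mount the dataset.
zfs load-key tank/encrypted
zfs mount tank/encrypted
```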

Part 2: at some point I configured Audiobookshelf to store data on top of said ZFS volume. Everything was working fine for a few weeks, until I had to reboot the server again (for reasons). When it came back, I had lost all my downloaded podcasts, because a typo in my configuration pointed to a directory outside the PVC, so it was mounted as an emptyDir volume.
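A quick way to catch that kind of mistake, sketched with made-up namespace, pod, and path names:

```sh
# Check which volumes the pod actually mounts and where.
kubectl -n media describe pod -l app=audiobookshelf | grep -A5 "Mounts:"

# Check what filesystem backs the data path: a PVC shows the PV's device,
# while an accidental emptyDir shows node-local storage instead.
kubectl -n media exec deploy/audiobookshelf -- df -h /podcasts
```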

Honestly, I should have known better. I've had issues in the past when servers went down because of power failures (the battery didn't last) and did not come back properly.

You gotta do a reboot/power test every once in a while, just like you have to test your backups on a regular basis.

#HomeLab #TalosLinux #ZFS #SRE #DevOps @homelab



Why does Linux partition a #ZFS drive?

On #FreeBSD, we can give it the whole drive, no partitions at all.

That said, I usually partition my ZFS drives on FreeBSD.
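For illustration, a rough sketch of the Linux behaviour in question (device and pool names are made up): even when handed a whole disk, ZFS on Linux writes a GPT itself.

```sh
# Give ZFS on Linux the whole disk; it creates the partition table on its
# own: partition 1 for data plus a small reserved partition 9.
zpool create tank /dev/sdb
lsblk /dev/sdb    # expect sdb1 (zfs data) and sdb9 (~8 MiB reserved)
```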



I'm sure I've done this before....

I have four devices: 2x 2TB and 2x 1TB

I could concatenate a 1TB and a 2TB together (twice over), then mirror the two concatenated devices.

It can survive one device dying. However, it seems like an ugly risky hack.

But it's not. It's the same risk level as 4x 2TB drives in the same configuration, right?
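Roughly what I have in mind, as a sketch with made-up device names, using gconcat(8):

```sh
# Glue a 2TB and a 1TB into one 3TB concat device, twice,
# then mirror the two concat devices against each other.
gconcat label data0 /dev/ada1 /dev/ada3   # 2TB + 1TB
gconcat label data1 /dev/ada2 /dev/ada4   # 2TB + 1TB
zpool create tank mirror concat/data0 concat/data1
```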

#FreeBSD #ZFS



Any #ZFS folks want to offer me an opinion on the following? I have 2 ZFS pools of raidz2 with 7x 2TB SAS drives each (about 11.5T). I want to add a LOG drive. I have a single SAS SSD that's 300G. When I suggested using a 1TB LOG drive earlier, someone said that was way too much (which is why I picked up a used 300G SSD).

My question is whether it makes sense (and is even possible) to partition that drive into two partitions and have one pool use one partition as a log drive (e.g., 150G) and the other pool use the other partition as a log drive, on the same physical device.

Is this going to be worse because it's just too much IO on one device? Is it reasonable? Any other ideas?
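To make the question concrete, roughly what I'm picturing (device, pool, and label names are made up):

```sh
# Split the 300G SSD into two GPT partitions, one per pool.
sgdisk -n 1:0:+150G -c 1:slog-pool1 /dev/sdx
sgdisk -n 2:0:0     -c 2:slog-pool2 /dev/sdx

# Give each pool its own partition as a separate log device.
zpool add pool1 log /dev/disk/by-partlabel/slog-pool1
zpool add pool2 log /dev/disk/by-partlabel/slog-pool2
```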

Thanks
#homelab #linux #selfhost


Anybody looking for a #FreeBSD #ZFS job in Europe or Armenia? Take a look at lists.freebsd.org/archives/fre…. As I worked in that position until this August, I can only recommend it. The only reason I left was a full-time open source position. Whoever fills the position, I can promise you, you'll see wonders and make wonderful friends for life! 😃 I still chat with my old team members, with some of them on a regular basis. In any case, good luck!



You can add a remote disk to a #ZFS mirror using ggatec(8) on #freebsd and the ZFS pool happily resilvers to the remote disk connected via #geomgate.
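A minimal sketch of the setup, with made-up host, pool, and device names:

```sh
# On the remote host: export the disk over geom_gate.
echo "10.0.0.0/24 RW /dev/ada1" >> /etc/gg.exports
ggated

# On the local host: attach the exported disk and mirror onto it.
ggatec create -o rw remotehost /dev/ada1   # shows up as e.g. /dev/ggate0
zpool attach tank ada0 ggate0
```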



Added ๐—จ๐—ฃ๐——๐—”๐—ง๐—˜ ๐Ÿฎ - ๐—ง๐—ฟ๐˜‚๐—ฒ๐—ก๐—”๐—ฆ ๐—–๐—ข๐—ฅ๐—˜ ๐—ถ๐˜€ ๐——๐—ฒ๐—ฎ๐—ฑ - ๐—Ÿ๐—ผ๐—ป๐—ด ๐—Ÿ๐—ถ๐˜ƒ๐—ฒ ๐˜‡๐—ฉ๐—ฎ๐˜‚๐—น๐˜ to the ๐—ง๐—ฟ๐˜‚๐—ฒ๐—ก๐—”๐—ฆ ๐—–๐—ข๐—ฅ๐—˜ ๐˜ƒ๐—ฒ๐—ฟ๐˜€๐˜‚๐˜€ ๐—ง๐—ฟ๐˜‚๐—ฒ๐—ก๐—”๐—ฆ ๐—ฆ๐—–๐—”๐—Ÿ๐—˜ article.

vermaden.wordpress.com/2024/04…

#truenas #zvault #freebsd #zfs #storage #nas #core


Running a `zpool scrub` on my mirrored NVMe SSDs. The image on the left is without heatsinks and the one on the right is with heatsinks. Both graphs show a 50-minute time window.

Before installing the heatsinks a scrub took 15-20 minutes; since then, it takes only 6 minutes. Perhaps the drives were thermal throttling?

Either way, €20 well spent, I'd say.
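If you want to check the throttling theory on your own drives, a quick sketch (pool and device names are hypothetical; needs nvme-cli):

```sh
# Kick off a scrub and watch NVMe temperature / warning counters while it runs.
zpool scrub tank
watch -n 5 'nvme smart-log /dev/nvme0 | grep -Ei "temperature|warning"'
```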

#zfs

