
Items tagged with: zfs



Why does Linux partition a #ZFS drive?

On #FreeBSD, we can give it the whole drive, no partitions at all.

That said, I usually partition my ZFS drives on FreeBSD.
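For comparison, a rough sketch of both approaches on FreeBSD (the device name da0, the pool name, and the sizes are made up):

```shell
# Whole-disk pool, no partition table (da0 is a hypothetical device):
zpool create tank /dev/da0

# Or partition first with gpart(8), labelling the partition and
# leaving a little slack so a slightly smaller replacement disk fits:
gpart create -s gpt da0
gpart add -t freebsd-zfs -a 1m -s 1862g -l disk0 da0
zpool create tank /dev/gpt/disk0
```

The partitioned route also gives you GPT labels, so the pool keeps working even if the kernel renumbers the devices.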



I'm sure I've done this before....

I have four devices: 2x 2TB and 2x 1TB

I could concatenate a 1TB and a 2TB together, then mirror over both.

It can survive one device dying, but it feels like an ugly, risky hack.

But is it? Losing any single drive only takes out one side of the mirror, so it's the same risk level as 4x 2TB drives in the same configuration. Right?
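Assuming FreeBSD, one way to sketch this is with gconcat(8); the device and pool names are hypothetical:

```shell
# Glue each 1TB+2TB pair into one ~3TB concat device:
gconcat label pair0 /dev/ada1 /dev/ada3   # 1TB + 2TB
gconcat label pair1 /dev/ada2 /dev/ada4   # 1TB + 2TB

# Mirror the two concats; any single dead disk takes out at most
# one side of the mirror:
zpool create tank mirror /dev/concat/pair0 /dev/concat/pair1
```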

#FreeBSD #ZFS



Any #ZFS folks want to offer me an opinion on the following? I have two raidz2 pools, each made of seven 2TB SAS drives (about 11.5T usable each). I want to add a LOG device. I have a single SAS SSD that's 300G. When I suggested using a 1TB LOG drive earlier, someone said that was way too much (which is why I picked up a used 300G SSD).

My question is whether it makes sense (and is possible) to split that drive into two partitions and have one pool use one partition (e.g., 150G) as its log device, and the other pool use the other partition, on the same physical device.

Is this going to be worse because it's just too much IO on one device? Is it reasonable? Any other ideas?
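For what it's worth, a sketch of the split using Linux naming and sgdisk (the device name, partition sizes, and pool names are assumptions):

```shell
# sdx is the hypothetical 300G SSD; carve out two ~140G partitions:
sgdisk -n 1:0:+140G /dev/sdx
sgdisk -n 2:0:+140G /dev/sdx

# Attach one partition to each pool as a separate log device:
zpool add pool1 log /dev/sdx1
zpool add pool2 log /dev/sdx2
```

A SLOG only ever holds a few seconds of synchronous writes (roughly the transaction-group commit interval times your write throughput), so 140-150G per pool is almost certainly far more than either pool will ever use.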

Thanks
#homelab #linux #selfhost


Anybody looking for a #FreeBSD #ZFS job in Europe or Armenia? Take a look at lists.freebsd.org/archives/fre…. As I worked in that position until this August, I can only recommend it. The only reason I left is that I got a full-time open source position. Whoever fills the position, I can promise you, you'll see wonders and make wonderful friends for life! 😃 I still chat with my old team members, and with some of them on a regular basis. In any case, good luck!


Should have stared at it longer, then I would have seen it.

I used #lvm for a long time, but have since moved to #zfs, as that significantly reduces the number of layers needed.

#zfs #lvm



You can add a remote disk to a #ZFS mirror using ggatec(8) on #freebsd, and the ZFS pool happily resilvers to the remote disk connected via #geomgate.
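A minimal sketch of that setup (hostnames, device names, and the existing pool layout are assumptions):

```shell
# Remote host: export the disk with ggated(8); /etc/gg.exports
# maps client addresses to devices and access modes:
echo "192.168.1.10 RW /dev/da2" >> /etc/gg.exports
ggated

# Local host: attach the remote disk (this creates /dev/ggate0),
# then attach it to the existing single-disk pool to form a mirror:
ggatec create -o rw remotehost /dev/da2
zpool attach tank da1 ggate0
```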



Added ๐—จ๐—ฃ๐——๐—”๐—ง๐—˜ ๐Ÿฎ - ๐—ง๐—ฟ๐˜‚๐—ฒ๐—ก๐—”๐—ฆ ๐—–๐—ข๐—ฅ๐—˜ ๐—ถ๐˜€ ๐——๐—ฒ๐—ฎ๐—ฑ - ๐—Ÿ๐—ผ๐—ป๐—ด ๐—Ÿ๐—ถ๐˜ƒ๐—ฒ ๐˜‡๐—ฉ๐—ฎ๐˜‚๐—น๐˜ to the ๐—ง๐—ฟ๐˜‚๐—ฒ๐—ก๐—”๐—ฆ ๐—–๐—ข๐—ฅ๐—˜ ๐˜ƒ๐—ฒ๐—ฟ๐˜€๐˜‚๐˜€ ๐—ง๐—ฟ๐˜‚๐—ฒ๐—ก๐—”๐—ฆ ๐—ฆ๐—–๐—”๐—Ÿ๐—˜ article.

vermaden.wordpress.com/2024/04…

#truenas #zvault #freebsd #zfs #storage #nas #core


Running a `zpool scrub` on my mirrored NVMe SSDs. The image on the left is without heatsinks and the right is with heatsinks. Both graphs show a 50 minute time window.

Before installing heatsinks it took 15-20 minutes to scrub; since then it takes only 6 minutes. Perhaps the drives were thermal throttling?

Either way, €20 well spent, I'd say.
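If you want to confirm the throttling theory rather than guess, the NVMe SMART log records it; a sketch with nvme-cli (the device name is assumed):

```shell
# Current temperature plus cumulative minutes spent throttled:
nvme smart-log /dev/nvme0 | grep -Ei 'temperature|warning_temp_time|critical_comp_time'
```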

#zfs

