One big 16TB zpool (8 x 4TB SSDs) or 2 x 8TB zpools?
I got decisions to make now that all this stuff has come together.
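For concreteness, the two layouts being weighed might look like this. A minimal sketch assuming mirrored pairs (which matches the 16TB-usable figure from 8 x 4TB) and placeholder device names:

```shell
# Option A: one pool of four mirrored pairs (~16TB usable)
zpool create tank \
  mirror da0 da1 mirror da2 da3 \
  mirror da4 da5 mirror da6 da7

# Option B: two independent pools, two mirrored pairs each (~8TB usable per pool)
zpool create tank1 mirror da0 da1 mirror da2 da3
zpool create tank2 mirror da4 da5 mirror da6 da7
```

One pool stripes writes across all four vdevs; two pools keep failure domains and maintenance (scrubs, upgrades, destroys) independent.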
The CfP for the BSD, illumos, OpenZFS, bhyve Devroom at FOSDEM 2026 is now available; you can start submitting your talk 🤩
people.freebsd.org/~rodrigo/fo…
#Fosdem2026 #FreeBSD #OpenBSD #NetBSD #Illumos #ZFS #bhyve
boosts appreciated
I'm sure I've done this before....
I have four devices: 2x 2TB and 2x 1TB
I could concatenate a 1TB and a 2TB together, then mirror over both.
It can survive one device dying. However, it seems like an ugly risky hack.
But it's not. It's the same risk level, is it not, as 4x 2TB drives in the same configuration. Right?
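A pure-ZFS alternative to the concat-then-mirror hack gives the same ~3TB usable with the same single-device fault tolerance per vdev: stripe two mirrors, one per drive size. A sketch with placeholder device names:

```shell
# Two mirror vdevs striped into one pool:
#   2TB+2TB mirror plus 1TB+1TB mirror => ~3TB usable.
# Each mirror survives the loss of one of its two devices.
zpool create tank \
  mirror /dev/ada0 /dev/ada1 \
  mirror /dev/ada2 /dev/ada3
zpool status tank
```

This avoids a separate concatenation layer (gconcat/LVM) underneath ZFS, so ZFS sees the real devices and can report and heal per-disk errors.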
It's been 10 years since we celebrated the 10th anniversary of the integration of #ZFS into #Solaris: blogs.oracle.com/oracle-system…
Happy 20th Birthday ZFS!
Any #ZFS folks want to offer me an opinion on the following? I have two ZFS pools of raidz2, each with seven 2TB SAS drives (about 11.5T usable). I want to add a LOG drive. I have a single SAS SSD that's 300G. When I suggested using a 1TB LOG drive earlier, someone said that was way too much (which is why I picked up a used 300G SSD).
My question is whether it makes sense (is possible) to partition that drive into 2 partitions and have one pool use 1 partition as a log drive (e.g., 150G) and the other pool use the other partition as a log drive on the same physical device.
Is this going to be worse because it's just too much IO on one device? Is it reasonable? Any other ideas?
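Splitting one SSD between two pools is mechanically straightforward. A FreeBSD-style sketch with GPT labels; device and pool names are assumptions:

```shell
# Partition the 300G SSD into two SLOG partitions (placeholder device da8)
gpart create -s gpt da8
gpart add -t freebsd-zfs -s 140G -l slog-pool1 da8
gpart add -t freebsd-zfs -s 140G -l slog-pool2 da8

# Attach one partition to each pool as a log vdev
zpool add pool1 log gpt/slog-pool1
zpool add pool2 log gpt/slog-pool2
```

Worth noting: a SLOG only accelerates synchronous writes, and it only ever holds a few seconds of not-yet-committed sync data, so even a few GB per pool is typically plenty; the real question is whether both pools hitting the same SSD's write path at once becomes the bottleneck.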
I don't know if anybody noticed #ZeroFS yet, but it seems there is a completely user space-implementation of #NFS and #blockstorage on top of #S3 #objectstorage: github.com/Barre/zerofs
Including a demo running #ZFS on top of it which essentially allows geo-redundant ZFS volumes: asciinema.org/a/728234 & github.com/Barre/zerofs?tab=reโฆ
I don't see a #FreeBSD port yet, but if that really works it would be absolutely awesome.
As I start to explore the ZFS filesystem in more detail on FreeBSD, this post on snapshot basics is very helpful:
klarasystems.com/articles/basiโฆ
Master ZFS snapshot management: learn to create, use, and delete snapshots to protect your data and optimize backups. (Dru Lavigne, Klara Systems)
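The basics the article covers can be sketched in a few commands; the dataset name is an assumption:

```shell
# Create a snapshot and list snapshots
zfs snapshot tank/home@before-upgrade
zfs list -t snapshot

# Read files from the snapshot via the hidden .zfs directory
ls /tank/home/.zfs/snapshot/before-upgrade/

# Roll the dataset back, or destroy the snapshot when no longer needed
zfs rollback tank/home@before-upgrade
zfs destroy tank/home@before-upgrade
```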
Added "UPDATE 2 - TrueNAS CORE is Dead - Long Live zVault" to the "TrueNAS CORE versus TrueNAS SCALE" article.
vermaden.wordpress.com/2024/04…
#truenas #zvault #freebsd #zfs #storage #nas #core
I was really disappointed when I got to know that the FreeBSD based TrueNAS CORE storage appliance (owned and developed by iXsystems) will be moved into the "maintenance" …
Running a `zpool scrub` on my mirrored NVMe SSDs. The image on the left is without heatsinks and the right is with heatsinks. Both graphs show a 50 minute time window.
Before installing heatsinks it took 15-20 minutes to scrub, and since, it's only 6 minutes. Perhaps the drives were thermal throttling?
Either way, €20 well-spent I'd say.
#zfs