I can’t say for sure, but there is a good chance I might have a problem.

The main picture attached to this post is a pair of dual-bifurcation cards, each with a pair of Samsung PM963 1T enterprise NVMes.

It is going into my r730XD, which… is getting pretty full. This will fill up the last empty PCIe slots.

But, knock on wood, my r730XD supports bifurcation! LOTS of bifurcation.

As a result, it now has more HDDs and NVMes than I can count.

What’s the problem, you ask? Well, that is just one of the many servers I have lying around here, all completely filled with NVMe and SATA SSDs…

Figured I would share. Seeing a bunch of SSDs is always a pretty sight.

And, as of two hours ago, my particular Lemmy instance was migrated to these new NVMes, completely transparently.

  • HTTP_404_NotFound@lemmyonline.com (OP) · 11 months ago

    I will say, it’s nice not having to nickel and dime my storage.

    But, the way I have things configured, redundancy takes up a huge chunk of the overall storage.

I have around 10x 1T NVMe and SATA SSDs in a Ceph cluster. 60% storage overhead there.

Four 8T disks are in a ZFS striped mirror / RAID 10. 50% storage overhead.

    The 4x 970 evo / evo plus drives are also in a striped mirror ZFS pool. 50% overhead.

But still PLENTY of usable storage, and highly available at that!
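The overheads above work out to roughly the following usable capacities (back-of-the-envelope arithmetic only; drive sizes treated as whole TB, and the exact Ceph replication factor is an assumption):

```shell
# Rough usable-capacity math for the three pools described above.

# Ceph pool: 10x 1T with ~60% overhead -> ~40% usable
ceph_raw=10
ceph_usable=$(( ceph_raw * 40 / 100 ))   # 4 TB

# ZFS striped mirror (RAID 10) of 4x 8T: 50% overhead
hdd_usable=$(( 4 * 8 / 2 ))              # 16 TB

# ZFS striped mirror of 4x 1T NVMe: 50% overhead
nvme_usable=$(( 4 * 1 / 2 ))             # 2 TB

echo "ceph=${ceph_usable}T hdd=${hdd_usable}T nvme=${nvme_usable}T"
```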

    • krolden@lemmy.ml · 11 months ago (edited)

      Any reason you went with a striped mirror instead of raidz5/6?

      • HTTP_404_NotFound@lemmyonline.com (OP) · 11 months ago

        The two ZFS pools are only 4 devices. One pool is spinning rust, the other is all NVMe.

I don’t use RAID 5 for large disks, and instead go for RAID 6 / z2. Given that z2 and striped mirrors both have 50% overhead with only 4 disks, striped mirrors have the advantage of being much faster: double the IOPS, and faster rebuilds. For these particular pools, performance was more important than overall disk space.

However, before all of these disks were moved from TrueNAS to Unraid, there was an 8x8T z2 pool, which worked exceptionally well.