Goal:

  • 16 TB mirrored on 2 drives (RAID 1)
  • Hardware RAID?
  • Immich, Jellyfin and Nextcloud (all Docker)
  • N100, 8+ GB RAM
  • 500 GB SSD boot drive
  • 4 HDD bays, start with using 2

Questions:

  • Which OS?
    • My thought was to use hardware RAID, set that up for the 2 HDDs, then boot off an SSD with Debian (very familiar, and I use it for my current server, which has 30+ Docker containers). Basically I like and am good at Docker, so I'd like to stick to Debian + Docker. But if hardware RAID isn't the best option for HDDs nowadays, I'll learn the better thing (see the software-RAID sketch after this list).
  • Which drives? Renewed or refurb are half the cost, so should I buy extra used ones and just be ready to swap when they fail?
  • Which motherboard?
  • Which case?
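
A note on the "better thing": the usual alternative to hardware RAID these days is Linux software RAID (mdadm) or a ZFS mirror, both of which run fine on plain Debian. A minimal mdadm sketch, assuming the two data disks show up as /dev/sda and /dev/sdb (placeholder names, not from this thread; check with lsblk first, and note the create step wipes the disks):

```
# Install the Linux software RAID tools
sudo apt install mdadm

# Build a RAID 1 (mirror) array from the two HDDs
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# Format and mount the array
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/storage
sudo mount /dev/md0 /mnt/storage

# Save the array definition so it assembles on boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u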
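```

A ZFS mirror (`zpool create tank mirror /dev/sda /dev/sdb` after installing zfsutils-linux) is the other popular option and adds checksumming and snapshots, at the cost of more RAM.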
  • Possibly linux@lemmy.zip

    If you can, do at least three nodes with high availability. It is more expensive and trickier to set up, but in the long run it is worth it when hosting for others. You can literally unplug a system and it will fail over.

    It is overkill, but you can use Proxmox with a Docker swarm.

    Again, way overkill, but future-proof and reliable.
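
    To make the Swarm part concrete, a rough sketch of what that looks like once the nodes exist (the IPs, the join token and the stack file name are placeholders, not anything from this thread):

    ```
    # On the first node, which becomes a manager
    docker swarm init --advertise-addr 192.168.1.10

    # On each additional node, paste the join command/token that init printed
    docker swarm join --token <token-from-init> 192.168.1.10:2377

    # Deploy a compose file as a stack across the cluster
    docker stack deploy -c nextcloud-stack.yml nextcloud

    # See which node each service landed on
    docker service ls
    ```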

    • Emotet@slrpnk.net

      While this is a great approach for any business hosting mission-critical or user-facing resources, it is WAY overkill for a basic self-hosted setup involving family and friends.

      For this to make sense, you need access to 3 different physical locations with their own ISPs, or to rent 3 different VPSes.

      Assuming one would use only 1 data drive + an equal parity drive per node, now we’re talking about 6 drives with the total usable capacity of one. If one decides to use fewer drives and link the nodes to one or two data drives (remotely), I/O and latency become an issue and you’ve effectively introduced more points of failure than before.

      Not even talking about the massive increase in initial and running costs, as well as the administrative headaches, this isn’t worth it for basically anyone.

    • Evil_Shrubbery@lemm.ee

      I think this is the way, and not overkill at all!

      It’s super easy to swarm Proxmox, and it makes your inevitable admin job easier. Not to mention backups, or first testing & setting up a VM on your server before copying it to theirs, etc.

      • lud@lemm.ee

        You need a minimum of three Ceph nodes, but really four if you want it to work better. Ceph isn’t really designed with clusters that small in mind, though; 7 nodes would be more reasonable.

        While clustering Proxmox using Ceph is cool as fuck, it’s not easy or cheap to accomplish at home.
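
        For a sense of what’s involved, the Proxmox-side commands are roughly the following (a sketch of the happy path using syntax from recent Proxmox VE versions; the cluster name, IPs and the OSD disk are placeholders):

        ```
        # On the first Proxmox host: create the cluster
        pvecm create homelab

        # On every additional host: join it using the first host's IP
        pvecm add 192.168.1.10

        # Ceph, per node: install packages, then add a monitor and an OSD disk
        pveceph install
        pveceph init --network 192.168.1.0/24   # run once, on the first node
        pveceph mon create
        pveceph osd create /dev/sdb
        ```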