I am currently running most of my stuff from an Unraid box built from spare parts. It seems like I'm hitting its limits, so I just want to turn it into a NAS. I'm planning on moving everything to micro PCs/USFF machines (probably a cluster of 2 for now, though I might expand later). Just a few quick questions:

  1. Running the arr services on a Proxmox cluster, downloading to a device on the same network. I don’t think there would be any problems, but I wanted to see what changes would need to be made.

  2. Which micro PCs are you running? I am leaning towards the HP ProDesk or Lenovo 7xx/9xx series at around $200 each. I don’t plan on getting more than 2-3 and don’t run too many things, but I’d want enough overhead in case I move over to Home Assistant, plus Windows and Linux VMs if needed.

  3. Any best practices you’d recommend when starting a Proxmox cluster? I’ve learned over time that it’s better to set things up correctly from the start than to try to fix them while they’re running. I wish I could coach the me from 7 years ago. Would have saved a lot of headaches lol.

    • atzanteol@sh.itjust.works

      I haven’t done it, but I believe Proxmox allows you to create a “backplane” network that the servers can use to talk directly to each other. This would be used for Ceph and VM migrations, so that all of that traffic doesn’t interfere with the traffic from the VMs and the rest of your network.

      You’d just need a second NIC and a switch to create the second network, then statically assign IPs. This network wouldn’t route anywhere else.
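
      If I were doing it, I’d guess it would look roughly like this in /etc/network/interfaces on each node (enp3s0 and the 10.10.10.0/24 subnet are just placeholder values, not anything Proxmox dictates):

        # Dedicated cluster/Ceph "backplane" on the second NIC.
        # No gateway defined, so this network never routes anywhere else.
        auto enp3s0
        iface enp3s0 inet static
            address 10.10.10.11/24
            # Optional: jumbo frames for storage/migration traffic
            mtu 9000

      You’d then point Ceph’s cluster_network and the Proxmox migration network (Datacenter -> Options) at 10.10.10.0/24.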

      • fuckwit_mcbumcrumble@lemmy.world

        In Proxmox there’s no need to assign it to a physical NIC. If you want a virtual network that goes as fast as possible, you’d create a bridge and assign it to nothing. If you assign it to a NIC, then since it wants to use SR-IOV, it would only go as fast as the NIC can go.
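
        A minimal sketch of that kind of port-less bridge (vmbr2 is just a placeholder name):

          # Host-only bridge: no physical ports, so traffic between guests
          # stays in software and isn't limited by any NIC's link speed.
          auto vmbr2
          iface vmbr2 inet manual
              bridge-ports none
              bridge-stp off
              bridge-fd 0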

        • monkinto@lemmy.world

          Is there a reason to do this over just giving the NIC for the VM/container a VLAN tag?
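
          (By that I mean the per-guest tag on the virtual NIC, something like the following, where VM 101 and VLAN 100 are made-up values:)

            # Attach VM 101's first NIC to bridge vmbr1, tagged into VLAN 100
            qm set 101 --net0 virtio,bridge=vmbr1,tag=100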

          • DeltaTangoLima@reddrefuge.com

            You still need to do that, but you need the Linux bridge interface to have VLANs defined as well, as the physical switch port that trunks the traffic is going to tag the respective VLANs to/from the Proxmox server and virtual guests.

            So, vmbr1 maps to physical interface enp2s0f0. On vmbr1, I have two VLAN interfaces defined - vmbr1.100 (Proxmox guest VLAN) and vmbr1.60 (physical infrastructure VLAN).

            My Proxmox server has its own address in vlan60, and my Proxmox guests have addresses (and vlan tag) for vlan100.
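
            In /etc/network/interfaces terms that works out to something roughly like this - the addresses and the VLAN-aware bridge settings here are placeholders rather than my exact config:

              auto vmbr1
              iface vmbr1 inet manual
                  bridge-ports enp2s0f0
                  bridge-stp off
                  bridge-fd 0
                  bridge-vlan-aware yes
                  bridge-vids 60 100

              # Proxmox host's own address, on the infrastructure VLAN
              auto vmbr1.60
              iface vmbr1.60 inet static
                  address 192.168.60.10/24
                  gateway 192.168.60.1

              # Guest VLAN interface (the host address here is optional)
              auto vmbr1.100
              iface vmbr1.100 inet static
                  address 192.168.100.10/24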

            The added headfuck (especially at setup) is that I also run an OPNsense VM on Proxmox, and it has its own vlan interfaces defined - essentially virtual interfaces on top of a virtual interface. So, I have:

            • switch trunk port
              • enp2s0f0 (physical)
                • vmbr1 (Linux bridge)
                  • vmbr1.60 (Proxmox server interface)
                  • vmbr1.100 (Proxmox VLAN interface)
                    • virtual guest nic (w/ vlan tag and IP address)
                  • vtnet1 (OPNsense “physical” nic, but actually virtual)
                    • vtnet1_vlan[xxx] (OPNsense virtual nic per vlan)

            All virtual guests default route via OPNsense’s IP address in vlan100, which maps to OPNsense virtual interface vtnet1_vlan100.

            Like I said, it’s a headfuck when you first set it up. Interface-ception.

            The only unnecessary bit in my setup is that my Proxmox server also has an IP address in vlan100 (via vmbr1.100). I had it there when I originally thought I’d use Proxmox firewalling as well, to effectively create a zero trust network for my Proxmox cluster. But, for me, that would’ve been overkill.

        • FiduciaryOne@lemmy.world

          Huh, cool, thank you! I’m going to have to look into that. I’d love for some of my containers and VMs to be on a different VLAN from others. I appreciate the correction. 😊

            • FiduciaryOne@lemmy.world

              Thanks for the kind offer! I won’t get to this for a while, but I may take you up on it if I get stuck.

    • DeltaTangoLima@reddrefuge.com

      This is exactly my setup on one of my Proxmox servers - a second NIC connected to my fibre internet as the WAN adapter; the OPNsense firewall/router uses it.

    • PlasterAnalyst@kbin.social

      You want to have at least 3 NICs if you’re going to do that. I usually use the one on the mobo for all the other services and management, then dedicated LAN and WAN ports on a separate NIC.
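
      Roughly like this, as a sketch (the interface names and addresses are placeholders, not my exact layout):

        # Onboard NIC: Proxmox management + general services
        auto vmbr0
        iface vmbr0 inet static
            address 192.168.1.10/24
            gateway 192.168.1.1
            bridge-ports eno1
            bridge-stp off
            bridge-fd 0

        # Add-in NIC, port 1: LAN for guests
        auto vmbr1
        iface vmbr1 inet manual
            bridge-ports enp2s0f0
            bridge-stp off
            bridge-fd 0

        # Add-in NIC, port 2: WAN - only the firewall VM attaches here
        auto vmbr2
        iface vmbr2 inet manual
            bridge-ports enp2s0f1
            bridge-stp off
            bridge-fd 0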

    • stown@sedd.it

      Security. Keeping publicly accessible services and locally accessible services on different networks.

      • DeltaTangoLima@reddrefuge.com

        Hmmm - not really any more. I have everything on the same VLAN, with publicly accessible services sitting behind an nginx reverse proxy (using Authelia and 2FA).
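
        The nginx side of that, in very rough outline - the hostnames and ports are placeholders, and the Authelia forward-auth wiring is simplified compared to what their docs describe:

          # Forward-auth subrequest to Authelia
          location /authelia {
              internal;
              proxy_pass http://authelia:9091/api/verify;
              proxy_set_header X-Original-URL $scheme://$http_host$request_uri;
              proxy_pass_request_body off;
              proxy_set_header Content-Length "";
          }

          location / {
              auth_request /authelia;
              # Unauthenticated users get bounced to the Authelia portal
              error_page 401 =302 https://auth.example.com/?rd=$scheme://$http_host$request_uri;
              proxy_pass http://my-service:8080;
          }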

        The real separation I have is the separate physical interface I use for WAN connectivity to my virtualised firewall/router - OPNsense. But I could also easily achieve that with VLANs on my switch, if I only had a single interface.

        The days of physical DMZs are almost gone - virtualisation has mostly superseded them. Not saying they’re not still a good idea, just less of an explicit requirement nowadays.