I moved off a Synology NAS to a self-managed machine, and one thing I still struggle to replace is something like Synology Drive. Here are my requirements:

  • server side: store data in a plain FS (I want transparency)
  • client side (Windows): must support VFS (download files on demand, offload large files)
  • having snapshots of the data is a must

I have a 40 Gbit uplink to my desktop, so if everything else fails I’ll just use Samba with ZFS snapshots exposed to VSS. But we’re still talking about some large files (think several hundreds of MBs), and I’m not sure Blender will be happy working off a network disk.
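
For reference, that fallback would be Samba’s shadow_copy2 VFS module, which maps ZFS snapshots under the hidden .zfs/snapshot directory into Windows “Previous Versions”. A rough sketch - the share path and snapshot name format are made up, and shadow:format must match whatever actually creates your snapshots:

```bash
# Expose ZFS snapshots of a share to Windows clients as "Previous Versions"
cat >> /etc/samba/smb.conf <<'EOF'
[projects]
    path = /tank/projects
    vfs objects = shadow_copy2
    shadow:snapdir = .zfs/snapshot
    shadow:sort = desc
    # must match your snapshot naming scheme
    shadow:format = auto-%Y-%m-%d_%H:%M
    shadow:localtime = yes
EOF
systemctl reload smbd
```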

I’ve been pointed to Nextcloud/ownCloud before, but I don’t think they cover my use case. Should I actually try one of them? I browsed around ownCloud’s storage component (which is written in Go), and it seems mostly fitting, but I’ve been told to steer away from ownCloud towards Nextcloud.

  • computergeek125@lemmy.world · 6 months ago

    I don’t have a full answer on snapshots right now, but I can confirm Nextcloud has VFS support on Windows. I’ve been working on a project to move myself over to it from Synology Drive. Client-wise, the two have fairly similar features, with one exception: Nextcloud generates one Explorer sidebar object per connection, whereas I think Synology handles them as shortcuts within a single directory. I’d prefer it if NC did the latter, or at least let me choose, but I’m happy enough with what I’ve got for now.

    As for snapshotting, you should be able to snapshot the underlying FS/DB at the same time, but I haven’t poked deeply at that. I believe the files are stored plain (I will disassemble my Nextcloud server tonight to confirm this and update my comment), but some do preserve version history, so I want to be sure before giving you final confirmation. The Nextcloud root data directory is broken up by internal user ID, which is an immutable field (you cannot change your username, even in LDAP), probably because of this filesystem layout.
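
    In principle, the coordinated snapshot would look something like this - occ is Nextcloud’s real admin CLI, but the install path and dataset names here are hypothetical:

    ```bash
    # Quiesce Nextcloud so no writes land mid-snapshot
    sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --on
    # Snapshot the data directory and the DB's dataset back-to-back
    zfs snapshot tank/nextcloud-data@$(date +%F-%H%M)
    zfs snapshot tank/nextcloud-db@$(date +%F-%H%M)
    sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --off
    ```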

    One thing that may interest you is the external storage feature, onto which I’ve been migrating a large data set of mine:

    • can be configured per-user or system-wide
    • password can be per-user, system-wide, or re-use the login password on the fly
    • data is stored raw on an external file server - it supports a bunch of protocols (off-hand: SMB, S3, WebDAV, FTP)
    • shows up as a normal-ish folder in the base user folder
    • can template names, such as including your username as part of the share name
    • Nextcloud does not independently contribute versioning data to the backend file server, so the only version control is what your backing server natively implements

    Admin docs for reference: https://docs.nextcloud.com/server/latest/admin_manual/configuration_files/external_storage_configuration_gui.html
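
    If you end up scripting the setup, external storages can also be created from the CLI. A hedged sketch - files_external:create and the session-credentials auth backend are real, but the host/share/domain values are made up (check occ files_external:create --help on your version):

    ```bash
    # SMB external storage that re-uses the user's login session credentials
    sudo -u www-data php /var/www/nextcloud/occ files_external:create \
        "NAS home" smb password::sessioncredentials \
        -c host=nas.example.com -c share=home -c domain=EXAMPLE
    # Verify what got configured
    sudo -u www-data php /var/www/nextcloud/occ files_external:list
    ```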

    I use LDAP user auth to my Nextcloud, with two external shares to my NAS using pass-through session passwords (the NAS is AD-joined to the same domain Nextcloud uses for LDAPS). I don’t know if or how the “store password in database” option is encrypted; if anyone knows, I’d be curious, because using session passwords prevents the user from sharing the folder to at least a federated destination (I tried with my friend’s NC server; I haven’t tried with a local user yet, but I assume the same limitation applies). If that’s your vibe, then this is a feature XD

    One of my two external storage mounts is a “common” share with multiple users accessing the same directory; the second is \\nas.example.com\home\nextcloud. Internally, I believe these are handled by PHP spawning smbclient subprocesses, so if you have lots of remote files and don’t want to nuke your Nextcloud, you will probably need to raise the PHP child limits (that took me too long to solve, lol).
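
    For anyone hitting the same wall: the cap lives in the PHP-FPM pool config. A rough sketch - path and numbers are distro/PHP-version dependent, so tune them to your RAM:

    ```bash
    # Raise the PHP-FPM worker cap so parallel smbclient-backed requests don't starve
    sudo sed -i 's/^pm.max_children = .*/pm.max_children = 120/' \
        /etc/php/8.2/fpm/pool.d/www.conf
    sudo systemctl reload php8.2-fpm
    ```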

    That funny sub-mount name above works around an edge case where Nextcloud/DAV can’t handle directories containing certain characters - notably the # that Synology uses to expose its #recycle and #snapshot structures. This means the SMB remote mount currently has a limitation: you can’t mount the base share of a Synology NAS that has this feature enabled. I tried a server-side Nextcloud plugin to filter these out before they’re exposed to DAV, but it was glitchy. I’m unsure whether that was because I simply had too many files for it to handle (thanks to the way Synology snapshots are exposed) or something else - either way, I’ve worked around the problem for now by never mounting a base share of my Synology NAS. Other snapshot exposure methods may be affected - I have a ZFS TrueNAS Core box, so maybe I’ll throw that at it and see if I can break Nextcloud again :P

    Edit addon: OP, just so I answer your real question when I get back to this this evening - when you said that Nextcloud might not meet your needs, was your concern specifically the server-side data format? I assume from the rest of your post that you’re concerned with data resilience and the ability to get your data back without any vendor tools - that it will just be there when you need it.

    • farcaller@fstab.sh (OP) · 6 months ago

      when you said that Nextcloud might not meet your needs, was your concern specifically the server-side data format?

      I’d prefer them as plain files. Technically it doesn’t matter much to me whether it’s a database, whether I have to spin up an S3-compatible API, or whether I need to slice up a zvol for it - I just prefer plain files, because then I can do ZFS snapshots (which I trust) and back up with restic (which I trust).
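
      For what it’s worth, plain files make that combination trivial: snapshot first, then let restic read the frozen copy. A minimal sketch - pool/dataset and repo paths are made up:

      ```bash
      # Freeze a point-in-time copy of the data
      zfs snapshot tank/files@restic
      # Back up the frozen copy via the hidden .zfs directory
      restic -r /backup/restic-repo backup /tank/files/.zfs/snapshot/restic
      # Drop the snapshot once the run is done
      zfs destroy tank/files@restic
      ```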

      • computergeek125@lemmy.world · 5 months ago

        Apologies for being late - I wanted to be as correct as I could be.

        So, straight to the point: Nextcloud uses plain files by default, unless you configure the primary storage to be an S3/object store. As far as I can tell, this is never automatic - it’s an intentional choice made at system creation by the original admin. There is a third-party migration script, but there does not appear to be a first-party method of converting between the two. That’s very good news for you! (I think/hope)

        My instance was set up standalone, so I cannot speak for the all-in-one image. Poking around the root data directory (datadirectory in config.php), I was able to locate my user account by internal username - which, if you do not use LDAP, is the shortened login name. On default LDAP configs this internal username may be a GUID, but that can be changed during LDAP enablement by overriding the Internal Username field in the Expert LDAP settings.

        Once in a user’s home folder within the root data directory, the subdirectories are cache, files, files_trashbin, files_versions, and uploads.

        • files contains the “live” structure - how I perceive my Nextcloud home folder in the Web UI and via the Nextcloud Desktop sync engine
        • files_trashbin is an unstructured data folder containing every file this user deleted, kept per the trash folder’s retention policy (configurable at the site level). Files keep their original names but gain a suffix of the form .d######…, where the digits appear to be a Unix timestamp, likely the deletion date. A quick scan with the Linux file command showed each one had the expected file header for its extension (e.g., a .png showed as a PNG image with a plausible resolution). The Web UI shows metadata about which folder a file originally lived in, but I couldn’t quickly find that in the file structure; I believe it comes from the SQL database.
        • files_versions is how Nextcloud stores file version history (if enabled). Old versions are cleaned up per a set of default behaviors that keep more copies of recent changes, up to a maximum-age deletion threshold set at the site level. This folder mirrors approximately the same structure as the live files tree, but each version gains a suffix .v######…, where the number appears to be the Unix timestamp at which the version was taken (I have not verified that this exactly matches what the UI shows, nor have I read the source code that generates it). I spot-checked with the Linux file command and sha256 that the files in this versions structure are real data - tested one Excel doc and one plain-text doc. (A quick way to decode those suffixes is sketched below.)
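
        Here’s that decoding sketch, run from inside a user’s home in the data directory - it assumes (as it appears) that the digits really are a Unix timestamp, and uses GNU date:

        ```bash
        # Decode the .d suffix on each trashed file into a human-readable date
        for f in files_trashbin/files/*.d[0-9]*; do
            ts=${f##*.d}
            printf '%s  deleted %s\n' "$f" "$(date -d "@$ts")"
        done
        ```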

        I think that should give a fairly solid answer to your original question, but if I left out something you’re curious about, let me know.


        Finally, I wanted to thank you for making me actually take a look at how I had configured and backed up my Nextcloud instance, because ngl, it was kind of a mess. The trash bin and versions can both get out of hand if you have frequently changing or deleted/recreated files (I have network synchronization glued onto some of my games that don’t have good remote-save support). Setting retention policies on trash and versions cleaned up a lot of extraneous data - only one of the two had even been partially configured.

        I can see a lot of room for improvement… just gotta rip the band-aid off and make intelligent decisions, rather than just slapping on an rsync job that connects to the Nextcloud instance and replicates down the files and the backend database. Not terrible, but not great.

        In the backend I’m already using ZFS for my files and the Redis database, but my core SQL database lives on the server’s root partition (which is XFS - I’d rather not mess with a DKMS module from a boot CD if something happens and upstream borks the compile, which is precisely what happened when I upgraded to OpenZFS 2.1.15).

        I do not have automatic ZFS snapshots configured at this time, but based on the above, I’m reasonably confident I could get data back from a ZFS snapshot if any of Nextcloud’s normal guardrails (trash bin and internal version history) failed or didn’t work as intended. Plus, the data in that cursed rsync backup should be at least 90% functional.
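
        If I ever wire up the automatic snapshots, the minimum viable version is tiny; a sketch with a made-up dataset name (tools like sanoid or zfs-auto-snapshot handle retention properly):

        ```bash
        #!/bin/sh
        # /etc/cron.daily/zfs-snap - rolling week of daily snapshots
        NAME="daily-$(date +%a)"            # e.g. daily-Mon
        zfs destroy "tank/nextcloud@$NAME" 2>/dev/null || true
        zfs snapshot "tank/nextcloud@$NAME"
        ```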

  • lemmyvore@feddit.nl · 6 months ago

    Why not keep the working files local, use a sync tool to push copies to the server, and back up/snapshot on the server as needed?

    • farcaller@fstab.sh (OP) · 6 months ago

      Lots of files. With Synology Drive I’d offload old projects I had finished working on, so they’re stored only remotely, not locally (but stay easily accessible).

  • njordomir@lemmy.world · 6 months ago

    I’m no expert. I want to include that disclaimer up front.

    Nextcloud with block storage on btrfs with snapshots seems like it could work for you. No idea about VFS, though - I’ll leave that question for someone more knowledgeable. The “drive” portion of Nextcloud is quite decent; I regularly use it to pass large files between my phone (Android), laptop (Linux), and gaming desktop (Windows).
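
    For the snapshot half, btrfs makes that a one-liner per snapshot; a minimal sketch with made-up subvolume paths:

    ```bash
    # Read-only snapshot of the Nextcloud data subvolume
    btrfs subvolume snapshot -r /srv/nextcloud /srv/.snapshots/nextcloud-$(date +%F)
    # List snapshots/subvolumes to verify
    btrfs subvolume list /srv
    ```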

  • Fliegenpilzgünni@slrpnk.net · 6 months ago

    If you have a spare laptop/PC, I’d urge you to try Nextcloud.

    It’s super easy to install: you just download the Docker all-in-one container, and it’s running in less than 10 minutes. You don’t have much to lose.
    I’m relatively happy with it.
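
    For reference, launching it is roughly this command from the AIO README - check the current README before running, since ports and volumes may have changed:

    ```bash
    # Nextcloud all-in-one master container (per its README at time of writing)
    sudo docker run \
        --init --sig-proxy=false \
        --name nextcloud-aio-mastercontainer \
        --restart always \
        --publish 80:80 --publish 8080:8080 --publish 8443:8443 \
        --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
        --volume /var/run/docker.sock:/var/run/docker.sock:ro \
        nextcloud/all-in-one:latest
    ```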

    I mean, to be fair, NC isn’t perfect. It sometimes feels a bit wonky, and it tries to do everything while excelling at nothing.
    But it’s damn comfortable to set up and maintain.

    It doesn’t perfectly cover your use case, but everything else (individual services: web server, database, etc.) is less centralised and more complicated to set up.
    Since NC AIO runs inside a container, all the data does too. It’s a relatively straightforward file system, afaik.
    Backup is also included, but by default you have to trigger it manually, and it stops the services while it runs.

    For offloading large files, you might look into third-party tools. NC is basically a remote drive you can connect to with most programs that support one.

    • TCB13@lemmy.world · 6 months ago

      Yes, but Nextcloud is also the fastest way to end up with something half-done: always buggy, with sync issues once you have a ton of small files. Too bad Syncthing doesn’t do selective sync, because otherwise it would be perfect.

      • Fliegenpilzgünni@slrpnk.net · 6 months ago

        I haven’t had any (major) problems with Nextcloud yet.

        I just have the following “conflicts” with it:

        • It doesn’t follow the “do one thing and do it well” philosophy. It tries to do everything at once: file upload/sharing, media management (NC Photos), RSS, mail, calendar, contacts, and much more. I mean, it’s damn convenient and works pretty well, but nothing is great. For example, Immich/PhotoPrism is way better than NC for photo management.
        • There’s a lot of abandonware and buggy/unmaintained apps. For example, my “News” feed has looked completely broken for months now.
        • The performance isn’t good. Granted, the “server” (an old thin client) isn’t fast at all, but the loading times and responsiveness are just awful. File uploads also take ages, even from the same network.
        • It feels bloated. I think if I were more into self-hosting and had more time, I’d look for alternatives and split all the NC features I use into their own services: one for file upload, one for document management, one for photos, a dedicated RSS client, and so on.

        But, as I said, the ease of use and the number of features are still great. I don’t want to spend three weekends just troubleshooting my server and searching for/installing dozens of individual services. And for that, it’s good enough.

        • TCB13@lemmy.world · 6 months ago

          It tries to do everything at once: file upload/sharing, media management (NC Photos), RSS, mail, calendar, contacts, and much more.

          Yes, and all of those things are fundamentally broken / poorly implemented.

          The performance isn’t good. Granted, the “server” (an old thin client) isn’t fast at all, but the loading times and responsiveness are just awful. File uploads also take ages, even from the same network.

          I’ve had similar experiences with an overpowered AMD server. It isn’t good at all - but then, how can I expect something written in PHP to be good at syncing files? PHP is fine, just not for shuffling files around the way NC has to.

          I don’t want to spend three weekends just troubleshooting my server and searching for/installing dozens of individual services. And for that, it’s good enough.

          Fair enough. I just hope you don’t have to spend a month fixing whatever breaks in NC on the next update. For me, Syncthing + FileBrowser + Samba was straightforward to get going and is about as reliable as it gets.

  • sloppy_diffuser@sh.itjust.works · 6 months ago

    I haven’t tested it on Windows, but this is my Linux-to-Linux setup using rclone, which the docs say also works on Windows.

    Server

    • LUKS
    • LVM
    • Volume group with a mishmash of drives in a mirrored configuration
    • Cache volume with SSD
    • Btrfs with snapshots (or ZFS, or any other snapshotting FS)
    • (optional) Rclone local “remote” with Crypt, if you want encryption at rest plus the ability to decrypt files on the server. You can skip this and do client-side-only encryption if you don’t want the decryption key on the server.
    • SFTP (or any other self-hosted protocol from https://rclone.org/docs/)

    Client

    • Rclone config with SFTP (or your chosen protocol)
    • (optional) Rclone config with Crypt
    • Rclone mount with VFS (see the sketch below)
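
    A minimal sketch of that client side - the rclone commands are real, but the remote names, host, and paths are made up:

    ```bash
    # Define an SFTP remote pointing at the server
    rclone config create nas sftp host=nas.example.com user=me key_file=~/.ssh/id_ed25519

    # (optional) layer a crypt remote on top for encryption at rest
    rclone config create nas-crypt crypt remote=nas:data password=$(rclone obscure 'my-secret')

    # Mount with VFS caching; on Windows the mount point would be a drive letter like X:
    rclone mount nas-crypt: ~/mnt/nas --vfs-cache-mode full --daemon
    ```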

    I use this setup for my local files, and a similar setup for my Backblaze B2 off-site backups.

    The VFS implementation has been pretty good. You can also sync manually. I don’t fully trust their bisync, though.

    I can access everything on Android using https://github.com/newhinton/Round-Sync. It’s not great for photos, though, as thumbnails weren’t loading without pulling the whole file, last I tested about a year ago.

  • Decronym@lemmy.decronym.xyz (bot) · 5 months ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters  More Letters
    LVM            (Linux) Logical Volume Manager for filesystem mapping
    NAS            Network-Attached Storage
    SFTP           Secure File Transfer Protocol for encrypted file transfer, over SSH
    SMB            Server Message Block protocol for file and printer sharing; Windows-native
    SSD            Solid State Drive mass storage
    SSH            Secure Shell for remote terminal access
    ZFS            Solaris/Linux filesystem focusing on data integrity
