• 1 Post
  • 169 Comments
Joined 11 months ago
Cake day: July 30th, 2023

  • computergeek125@lemmy.world to 196@lemmy.blahaj.zone · Rule · 2 days ago

    I’ve been that guy at a computer store. I’d already found what I needed on my own, since I was quite familiar with the store, and was browsing a different aisle to look at the shinies. I overheard a customer ask a salesperson what the difference between product X and Y was; the two were marked very similarly on the box, but one was something like 30-50% more expensive.

    I noticed the salesperson becoming quite unsure of what this specific technical difference was, so I added a quick TL;DR of the generalized difference and the words manufacturers use to differentiate the two (since there were several product pairs matching both classes elsewhere on the aisle).

    Customer says, “oh ok, that makes sense”. I forget which one he decided on (I think it might have been the more expensive one kek), but the salesperson put his commission tracking sticker on the selected box and the customer wandered away, hopefully happy. The salesperson turns to me sheepishly: “Um… I guess you probably don’t need help?” I responded, “No, I’m just browsing, but do you want to put your sticker on this gizmo I found in the bargain bin over there?” He seemed happy with this arrangement, added the commission sticker, and we parted ways.

    …did I inadvertently make a pact with a different type of fae?




  • Apologies for being late; I wanted to be as correct as I could be.

    So, straight to the point: Nextcloud by default uses plain files unless you configure the primary storage to be an S3/object store. As far as I can tell, that switch is not automatic; it’s an intentional choice made at system creation by the original admin. There is a third-party migration script, but there does not appear to be a first-party method of converting between the two. That’s very good news for you! (I think/hope)

    My instance was set up as a standalone, so I cannot speak for the all-in-one image. Poking around the root data directory (datadirectory in config.php), I was able to locate my user account by internal username - which, if you do not use LDAP, will be the shortened login name. On default LDAP configs, this internal username may be a GUID, but that can be changed during the LDAP enablement process by overriding the Internal Username field in the Expert LDAP settings.

    Once in the user’s home folder in the root data directory, my subdirectory options are cache, files, files_trashbin, files_versions, uploads.

    • files contains the “live” structure - what I perceive as my Nextcloud home folder in the Web UI and the Nextcloud Desktop sync engine
    • files_trashbin is an unstructured data folder containing every file deleted by this user and kept per the trash folder’s retention policy (configurable at the site level). Files retain their original name but gain a suffix of the form .d######..., where the numbers appear to be a Unix timestamp, likely the deletion date. A quick scan of these with the file command in Linux showed that each one had the expected file header for its extension (e.g., a .png showed as a PNG image with an expected resolution). In the Web UI there is metadata about which folder the file originally resided in, but I was not able to quickly identify this in the file structure; I believe that info is coming from the SQL database.
    • files_versions is how Nextcloud stores its file version history (if enabled). Old versions are cleaned up per a set of default behaviors that keep more copies of more recent changes, up to a maximum-age deletion threshold set at the site level. This folder mirrors approximately the same structure as the live files tree, but each stored version gets a suffix .v######..., where the number appears to be the Unix timestamp at which the version was taken (I have not verified that this exactly matches what the UI shows, nor have I read the source code that generates it). I’ve spot-checked via the Linux file command and sha256 that the files in this versions structure appear to be real data - tested one Excel doc and one plain text doc. There’s a small decoding sketch just after this list.
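    Since I mentioned a sketch: here’s a rough Python toy that walks those two folders and decodes the suffixes as Unix timestamps. The data directory path and username are placeholders, and the suffix format is just what I observed on my own instance - not a supported interface - so treat it accordingly.

    ```python
    import re
    from datetime import datetime, timezone
    from pathlib import Path

    DATA_DIR = Path("/var/www/nextcloud/data")  # your datadirectory from config.php
    USER = "someuser"                           # internal username, not display name

    # As observed: trash entries end in .d<unix time>, versions in .v<unix time>
    SUFFIX = re.compile(r"^(?P<name>.+)\.(?P<kind>[dv])(?P<ts>\d+)$")

    for sub in ("files_trashbin", "files_versions"):
        root = DATA_DIR / USER / sub
        if not root.is_dir():
            continue
        for path in root.rglob("*"):
            m = SUFFIX.match(path.name)
            if not path.is_file() or not m:
                continue
            stamp = datetime.fromtimestamp(int(m["ts"]), tz=timezone.utc)
            verb = "deleted" if m["kind"] == "d" else "version taken"
            print(f"{m['name']}: {verb} {stamp:%Y-%m-%d %H:%M:%S} UTC ({path})")
    ```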

    I think that should give a fairly rough answer to your original question, but if I left something out you’re curious about, let me know.


    Finally, I wanted to thank you for making me actually take a look at how I had decided to configure and back up my Nextcloud instance, and ngl, it was kind of a mess. The trash bin and versions can both get out of hand if you have files that change frequently or get deleted and recreated (I have network synchronization glued onto some of my games that do not have good remote save support). Setting a retention policy on trash and versions cleaned up a lot of extraneous data, as only one of the two had even been partially configured.

    I can see a lot of room for improvements… just gotta rip the band-aid off and make intelligent decisions, rather than just slapping on an rsync job that connects to the Nextcloud instance and replicates down the files and backend database. Not terrible, but not great.

    In the backend I’m already using ZFS for my files and Redis database, but my core SQL database was located on the server’s root partition (which is XFS - I’d rather not mess with a DKMS module from a boot CD if something happens and upstream borks the compile, which is precisely what happened when I upgraded to OpenZFS 2.1.15).

    I do not have automatic ZFS snapshots configured at this time, but based on the above, I’m reasonably confident that I could get data back from a ZFS snapshot if any of the normal guardrails within Nextcloud (trash bin and internal version history) failed or did not work as intended. Plus, the data in that cursed rsync backup should be at least 90% functional.




  • I don’t have a full answer on snapshots right now, but I can confirm Nextcloud has VFS support on Windows. I’ve been working on a project to move myself over to it from Syno Drive. Client-wise, the two have fairly similar features with one exception - Nextcloud generates one Explorer sidebar object per connection, which I think Synology handles as shortcuts within a single directory. I’d prefer NC did the latter or allowed me to choose, but I’m happy with what I’ve got for now.

    As for the snapshotting, you should be able to snapshot the underlying FS/DB at the same time, but I haven’t poked deeply at that. Files, I believe, are stored plain (I will disassemble my Nextcloud server to confirm this tonight and update my comment), but some do preserve version history, so I want to be sure before I give you final confirmation. The Nextcloud root data directory is broken up by internal user ID, which is an immutable field (you cannot change your username even in LDAP), probably because of this filesystem layout.

    One thing that may interest you is the external storage feature, which I’ve been working on migrating a large data set of mine to:

    • can be configured per-user or system-wide
    • the password can be per-user, system-wide, or can re-use the login password on the fly
    • data is stored raw on an external file server - supports a bunch of protocols; offhand: SMB, S3, WebDAV, FTP
    • shows up as a normal-ish folder in the base user folder
    • can template names, such as including your username as part of the share name
    • Nextcloud does not independently contribute versioning data to the backend file server, so the only version control is what your backing server natively implements

    Admin docs for reference: https://docs.nextcloud.com/server/latest/admin_manual/configuration_files/external_storage_configuration_gui.html
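    If you want to convince yourself that such a mount really does behave like a normal-ish folder, here’s a minimal Python sketch that lists it over Nextcloud’s standard WebDAV endpoint. The hostname, username, app password, and mount name are all placeholders for your own setup.

    ```python
    import xml.etree.ElementTree as ET

    import requests

    BASE = "https://cloud.example.com/remote.php/dav/files/someuser/"
    MOUNT = "nas-home"  # the external storage folder as it appears in the UI

    resp = requests.request(
        "PROPFIND",                         # WebDAV directory listing
        BASE + MOUNT,
        auth=("someuser", "app-password"),  # app passwords work well here
        headers={"Depth": "1"},             # this folder plus direct children
        timeout=30,
    )
    resp.raise_for_status()

    # Each <d:response> element in the multistatus reply is one file or folder
    ns = {"d": "DAV:"}
    for node in ET.fromstring(resp.content).findall("d:response", ns):
        print(node.findtext("d:href", namespaces=ns))
    ```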

    I use LDAP user auth to my Nextcloud, with two external shares to my NAS using a pass-through session password (the NAS is AD-joined to the same domain Nextcloud uses for LDAPS). I don’t know if/how the “store password in database” option is encrypted, but if anyone knows I would be curious, because using session passwords prevents the user from sharing the folder to at least a federated destination (I tried with my friend’s NC server; I haven’t tried with a local user yet, but I assume the same limitation applies). If that’s your vibe, then this is a feature XD.

    One of my two external storage mounts is a “common” share with multiple users accessing the same directory, and the second share is \\nas.example.com\home\nextcloud. Internally, I believe these are handled by PHP spawning smbclient subprocesses, so if you have lots of remote files and don’t want to nuke your Nextcloud, you will probably need to increase the PHP child limits - pm.max_children if you’re on PHP-FPM (that one took me too long to solve lol)

    That funny sub-mount name above handles an edge case where Nextcloud/DAV can’t deal with directories containing certain characters - notably the # that Synology uses to expose its #recycle and #snapshot structures. This means the remote SMB mount currently has a limitation where you can’t mount the base share of a Synology NAS that has this feature enabled. I tried a server-side Nextcloud plugin to filter these out before they were exposed to DAV, but it was glitchy. I’m unsure whether that was because I just had too many files for it to handle, thanks to the way Synology snapshots are exposed, or whether it was actually something else - either way, I worked around the problem for now by never mounting a base share of my Synology NAS. Other snapshot exposure methods may be affected - I have a ZFS TrueNAS Core box, so maybe I’ll throw that at it and see if I can break Nextcloud again :P

    Edit addon: OP, just so I answer your real question when I get to this again this evening - when you said that Nextcloud might not meet your needs, was your concern specifically the server-side data format? I assume from the rest of your questions that you’re concerned with data resilience and the ability to get your data back without any vendor tools - that it will just be there when you need it.



  • As a general principle, full-extension rails are probably best sourced from the original server vendor rather than attempting to use universal rails.

    If you have a wall-mounted rack, physics is working against you unless your walls are something sturdier than drywall. It’s already a pretty intense, heavy cantilever, and putting in a server that can extend past the front edge is only going to make that worse.

    If you want to use full-extension rails, you should get a rack that sits squarely on the floor, on either feet or appropriately rated casters. You should also make sure your heaviest items are at the bottom, ESPECIALLY if you have full-extension rails - it makes the rack less likely to overbalance and tip over when a server is extended.


  • Adding on one aspect to things others have mentioned here.

    I personally have both ports/URLs opened and VPN-only services.

    IMHO, it also depends on how much exposure the software can tolerate, and on what could get compromised if an attacker were to find the password.

    Start by thinking of the VPN itself (Tailscale, WireGuard, OpenVPN, IPSec/IKEv2, ZeroTier) as a service just like the one you’re considering exposing.

    Almost all (working on the “all” part lol) of my external services require TOTP/2FA, and they have to be directly exposed - namely the VPN gateway, jump host, file server (Nextcloud), git server, PBX, a music reflector I used for D&D, and game servers shared with friends. Those I either absolutely need to be external (VPN, jump host) or keep external so that I don’t have to deal with the complicated networking of per-user firewalls, and so my friends don’t need to VPN to me to get something done.

    The second part for me is tolerance to being external, and the risk if a service got popped. I have a LOT of things I just don’t want on the web - my VM control panels (Proxmox, vSphere, XCP), my UPS/PDU, my NAS control panel, my monitoring server, my SMB/RDP sessions, etc. That kind of stuff is super high risk - there’s a lot of damage someone could do with it, a LOT of attack surface area, and, especially in the case of embedded firmware like the UPSs and PDUs, potentially software that the vendor hasn’t updated in years, with who-knows-what bugs lurking in it.

    So there’s not really a one-size-fits-all situation. You have to address the needs of each service you host on a case-by-case basis. Some potential questions to ask yourself (obviously a non-exhaustive list):

    • does this service support native encryption?
      • does the encryption support reasonably modern algorithms? (for a quick outside-in way to eyeball this, see the sketch after this list)
      • can I disable insecure/broken encryption types?
      • if it does not natively support encryption, can I place it behind a reverse proxy (such as nginx or haproxy) to mitigate this?
    • does this service support strong AAA (Authentication, Authorization, Auditing)?
      • how does it log attempts, successful and failed?
      • does it support strong credentials, such as appropriately complex passwords, client certificate, SSH key, etc?
      • if I use an external authenticator (such as AD/LDAP), does it support my existing authenticator?
      • does it support 2FA?
    • does the service appear to be resilient to internet traffic?
      • does the vendor/provider indicate that it is safe to expose?
      • are there well known un-patched vulnerabilities or other forum/social media indicators that hosting even with sane configuration is a problem?
      • how frequently does the vendor release regular patches (both too few and too many can be a problem)?
      • how fast does the vendor/provider respond to past security threats/incidents (if information is available)?
    • is this service required to be exposed?
      • what do I gain/lose by not exposing it?
      • what type of data/network access risk would an attacker gain if they compromised this service?
      • can I mitigate risk by placing a well-understood proxy between the internet and the service? (for example, a well-configured nginx or haproxy can mitigate some problems like a TCP SYN DoS, or can act as an intermediate proxy that enforces independent user authentication if the service doesn’t have all the authentication bells and whistles)
      • what VLAN/network is the service running on? (if you have several VLANs to place services on, each with a different access class)
      • do I have an appropriate alternative means to access this service remotely than exposing it? (Is VPN the right option? some services may have alternative connection methods)
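    To put a tiny bit of code behind the encryption bullets above (as referenced in the list), here’s a quick-and-dirty Python first pass: it reports the TLS version a service negotiates and when its certificate expires. The host and port are placeholders, and this is a sanity check, not a replacement for a real scanner.

    ```python
    import socket
    import ssl
    from datetime import datetime, timezone

    HOST, PORT = "service.example.com", 443  # placeholder target

    # The default context also insists on a valid, trusted certificate,
    # which is itself part of the check.
    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print("negotiated:", tls.version())  # want TLSv1.2 or TLSv1.3
            cert = tls.getpeercert()
            expiry = datetime.fromtimestamp(
                ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
            )
            print("cert expires:", expiry.isoformat())
    ```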

    So, as you can see, it’s not just cut and dry. You have to think about each service you host and what it does.

    Larger, well-known products - such as Guacamole, Nextcloud, ownCloud, strongSwan, OpenVPN, WireGuard - are known to behave well under these circumstances. That’s going to factor into this too. Many times the right answer will be to expose a port - the most important thing is to make an active decision to do so.


  • If your party is perfectly enjoying themselves playing the system of their common choice with whatever characters all (+GM) agree on, then you’re playing the game correctly.

    You don’t need to play Pathfinder, multi class, or anything else if you’re having fun.

    Now, if people aren’t having fun, maybe it’s time to use Proficiency (Language) and discuss it. If there’s a resolvable conflict, resolve it. If it’s not resolvable, then maybe it’s time to find a new table.


  • Okay I just figured out what happened here. Sorry for my misunderstanding in the other comment thread, I fell into the same trap initially.

    @Lemmy users - OP is a Mastodon user, so when OP responded to the comment with an @ and an attached image, Lemmy federation didn’t pick up the attachment. I know Lemmy handles image attachments natively, so I don’t know if this is an upstream bug in Lemmy/Mastodon or some interaction between lemmy.world and Mastodon.scot.

    @[email protected] - you may not have realized that your Mastodon post generated a Lemmy thread. Welcome to another corner of the Fediverse if you’re new to Lemmy. Some of the animosity in the comments here probably came from the fact that we can’t see the image you attached from our interface. Right now the comment is sitting at 36 net downvotes, probably because our corner of the Fediverse didn’t have the full context.




  • Somewhat halfway between practical use and just messing around for fun.

    Several years ago I built a GPS NTP clock out of an RPi3 and an Adafruit GPS hat. Once I had the PPS driver installed, its precision/drift got pretty good. According to its own self-measurements, I got pretty dang close to NIST stratum 1 NTP servers, but those are hundreds of miles away, so that measurement isn’t super precise. It’s still running today, clocking nearly 24/7 operation since (checks shopping history) 2017, though I replaced the breadboard and mini module with a full-sized hat using the same chipset in 2021.

    Recently I acquired a proper hardware GPS clock, stacked the two against each other, and found my RPi did not do half bad: it stays within 0.5-10 ms of the professional unit (literally, I’m pretty sure I’d need more precise measuring equipment than a regular computer to tell the difference between the two at this point). Now my homelab has fully redundant, internet-disconnected stratum 1 time. I’ve been half considering whether I could write a GPSD driver for it as a joke, but I know upstream won’t accept it because it doesn’t offer SOOO many features they’d need.
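    For anyone who wants to reproduce a crude version of that comparison, here’s a bare-bones SNTP query in Python. The hostnames are stand-ins for my two clocks, and it deliberately ignores round-trip delay, so it’s far rougher than what chrony/ntpd actually compute.

    ```python
    import socket
    import struct
    import time

    NTP_DELTA = 2208988800  # seconds between the NTP epoch (1900) and Unix epoch (1970)

    def sntp_time(host: str) -> float:
        """Return the server's transmit timestamp as Unix time (no delay compensation)."""
        packet = b"\x1b" + 47 * b"\x00"  # LI=0, VN=3, Mode=3 (client request)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(2)
            sock.sendto(packet, (host, 123))
            data, _ = sock.recvfrom(512)
        secs, frac = struct.unpack("!II", data[40:48])  # transmit timestamp field
        return secs - NTP_DELTA + frac / 2**32

    # Placeholder hostnames for the two GPS clocks
    for server in ("rpi-gps.lan", "hw-clock.lan"):
        offset = sntp_time(server) - time.time()
        print(f"{server}: {offset * 1000:+.3f} ms vs local clock")
    ```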

    As for what else - I just kind of keep an eye out for projects related to GPS and high-precision time, like the open-source atomic PCI card that was released a few years ago. Finding out what people are doing to get better and better time is just downright interesting.

    Outside of the time world, it’s just fun to see what projects people come up with relating to maps and navigation. A stretch goal, once I have enough server horsepower, is to stand up a render-capable OpenStreetMap server with my home region loaded to start with, but eventually I’d like to get to the point where I can load and process world.osm. That… requires a LOT of CPU and SSD space.