• pixeltree@lemmy.blahaj.zone · 2 months ago

    You should have rolling log files of limited size and limited quantity. The issue isn’t that it’s a text file; it’s that they’re not following pretty standard logging procedures to prevent this kind of thing and make logs more useful.

    Essentially, when your log file reaches a configured size, it should create a new one and start writing into that, deleting the oldest if there are more log files than your configured limit.

    This prevents runaway logging like this, and it also lets you store more logging info than you can easily open and go through in one document. If you want to store 20 GB of logs, having all of that in one file makes it difficult to go through; ten 2 GB log files are much easier. That’s not so much a consumer issue, but that’s the gist of it.
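    For anyone curious, here’s a minimal sketch of that scheme using Python’s built-in RotatingFileHandler (the file name, sizes, and counts are just illustrative, not what any particular engine uses):

    ```python
    import logging
    from logging.handlers import RotatingFileHandler

    # Roll over once the active log reaches 2 GB and keep at most 10 backups,
    # so total disk usage stays capped around 20 GB.
    handler = RotatingFileHandler(
        "game.log",                # hypothetical log path
        maxBytes=2 * 1024 ** 3,    # 2 GB per file
        backupCount=10,            # oldest backup gets deleted automatically
    )
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

    logger = logging.getLogger("game")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)

    logger.info("engine initialized")  # writes to game.log, rotating as needed
    ```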

    • yetAnotherUser@discuss.tchncs.de · 2 months ago

      Fully agree, but the way it’s worded makes it seem like the log being a text file is the issue. Maybe I’m just misinterpreting the intent though.

      • meeshen@vegantheoryclub.org · 2 months ago

        200 GB of a text log file IS weird. It’s one thing if you had a core dump or some other huge info dump, which, granted, shouldn’t be generated on its own, but at least it has a reason for being big. 200 GB of plain text logs is just silly.

        • xantoxis@lemmy.world · 2 months ago

          No, 200 GB of plain text logs is clearly a bug. I run a homelab with 20+ apps in it, and all the logs together wouldn’t add up to that for years, even without log rotation. I don’t understand the poster’s decision to blame this on “western game devs” when it’s just a bug by whoever created the engine.
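          Rough numbers back that up. Assuming something like 200 bytes per log line (a made-up but typical figure), a quick sanity check shows what producing 200 GB of text would actually take:

          ```python
          # Back-of-envelope check: how many lines is 200 GB of text,
          # and how long would a very chatty logger take to write it?
          log_size = 200 * 1024 ** 3          # 200 GB
          bytes_per_line = 200                # assumed average line length
          lines = log_size // bytes_per_line
          print(f"{lines:,} lines")           # ~1.07 billion lines

          lines_per_second = 100              # assumed constant spam rate
          days = lines / lines_per_second / 86400
          print(f"{days:.0f} days of nonstop logging")   # ~124 days
          ```

          Either something is spamming at an absurd rate, or it’s dumping far more than plain status messages.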

          • MoonMelon@lemmy.ml · 2 months ago

            Agreed, and there’s a good chance that log is full of one thing spamming over and over, and the devs would love to know what it is.
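            If someone wanted to check, a streaming counter over the file would surface the top offenders without loading 200 GB into memory (the path and cutoffs here are made up):

            ```python
            from collections import Counter

            # Stream the log line by line; memory only grows with the number
            # of *distinct* lines, not with the size of the file.
            counts = Counter()
            with open("game.log", errors="replace") as f:   # hypothetical path
                for line in f:
                    counts[line.strip()] += 1

            # Print the five most repeated lines and how often they occur.
            for line, n in counts.most_common(5):
                print(f"{n:>12}  {line[:120]}")
            ```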

        • pancakes@sh.itjust.works · 2 months ago

          It could be a matter of storing non-text information in an uncompressed text format. Kind of like how every file is ultimately just 0s and 1s, binary data could be “logged” as massive text dumps instead of being kept in its original compressed format.
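          A minimal sketch of that effect, assuming the engine dumped raw binary blobs into the log as hex or base64 text instead of keeping them in their native format:

          ```python
          import base64, os

          # A hypothetical 1 MB binary blob (e.g. a crash dump or asset chunk).
          blob = os.urandom(1024 * 1024)

          as_hex = blob.hex()              # dumped into a text log as hex
          as_b64 = base64.b64encode(blob)  # or as base64

          print(len(blob))    # 1,048,576 bytes raw
          print(len(as_hex))  # 2,097,152 bytes -- hex text doubles the size
          print(len(as_b64))  # 1,398,104 bytes -- base64 adds about a third
          ```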

    • Toribor@corndog.social · 2 months ago

      As a sysadmin, there are few things that give me more problems than unbounded growth and time zones.

      • biscuitswalrus · 2 months ago

        Printers. Desk phones. The WMI service crashing at full core lock under the guise of svchost.

    • teejay@lemmy.world · 2 months ago

      Essentially, when your log file reaches a configured size, it should create a new one and start writing into that, ~~deleting~~ archiving the oldest

      FTFY
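      In Python terms that’s a small tweak to the earlier sketch: the rotating handler’s rotator/namer hooks can gzip each rolled file instead of leaving it as plain text (still just an illustrative setup, not any engine’s actual behavior):

      ```python
      import gzip, logging, os, shutil
      from logging.handlers import RotatingFileHandler

      def archive(source, dest):
          # Compress the rolled-over log rather than keeping it as raw text.
          with open(source, "rb") as f_in, gzip.open(dest, "wb") as f_out:
              shutil.copyfileobj(f_in, f_out)
          os.remove(source)

      handler = RotatingFileHandler("game.log", maxBytes=2 * 1024 ** 3, backupCount=10)
      handler.namer = lambda name: name + ".gz"   # game.log.1 -> game.log.1.gz
      handler.rotator = archive
      ```

      Compressed text logs shrink a lot, so keeping more backups around costs very little.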

      • pixeltree@lemmy.blahaj.zone · 2 months ago

        Sure! Best practices vary with your application. I’m a dev, so I’m used to configuring stuff for local env use. In prod, archiving is definitely nice so you can track back even through heavy logging. Though, tbh, if your application’s getting used by that many people, a DB logging system is probably just straight better.
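        For what it’s worth, a bare-bones version of that idea isn’t much code either. Here’s a purely illustrative handler that logs straight into SQLite (the path, table name, and schema are made up):

        ```python
        import logging, sqlite3, time

        class SQLiteHandler(logging.Handler):
            """Toy example: write log records into a SQLite table."""

            def __init__(self, path="logs.db"):   # hypothetical db path
                super().__init__()
                self.conn = sqlite3.connect(path)
                self.conn.execute(
                    "CREATE TABLE IF NOT EXISTS logs (ts REAL, level TEXT, message TEXT)"
                )

            def emit(self, record):
                # Store the timestamp, level, and formatted message per record.
                self.conn.execute(
                    "INSERT INTO logs VALUES (?, ?, ?)",
                    (time.time(), record.levelname, self.format(record)),
                )
                self.conn.commit()

        logging.getLogger().addHandler(SQLiteHandler())
        ```

        Querying by level, time range, or message then becomes a SQL one-liner instead of grepping gigabytes of text.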