Status update July 4th

Just wanted to let you know where we are with Lemmy.world.

Issues

As you might have noticed, things still aren’t working as well as we’d like… we’re seeing several issues:

Performance

  • Loading is mostly OK, but sometimes things take forever
  • We (and you) see many 502 errors, resulting in empty pages etc.
  • System load: the server sits at roughly 60% CPU and around 25GB of RAM usage. (That is, if we restart Lemmy every 30 minutes; otherwise memory climbs to 100%. A sketch of that periodic restart is shown below.)
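
A minimal sketch of that stopgap, written as a small watchdog loop, is below. It assumes a docker-compose deployment with a service named `lemmy` (an assumption for illustration); a cron entry or systemd timer would do the same job.

```rust
// Illustrative only: restart the Lemmy container every 30 minutes so memory
// usage does not creep up to 100%. Service name and tooling are assumptions.
use std::process::Command;
use std::thread::sleep;
use std::time::Duration;

fn main() {
    loop {
        sleep(Duration::from_secs(30 * 60));
        // Assumes `docker compose restart lemmy` matches the actual compose file.
        match Command::new("docker")
            .args(["compose", "restart", "lemmy"])
            .status()
        {
            Ok(s) if s.success() => eprintln!("lemmy restarted"),
            Ok(s) => eprintln!("restart exited with {s}"),
            Err(e) => eprintln!("failed to run docker: {e}"),
        }
    }
}
```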

Bugs

  • Replying to a DM doesn’t seem to work: hitting reply opens a box containing the original message, which you can edit and save, but saving does nothing.
  • 2FA seems to be a problem for many people. It doesn’t always work as expected.

Troubleshooting

We have many people helping us with (site) moderation, sysadmin work, troubleshooting, advice, and more. There are currently 25 people in our Discord, including admins of other servers, and 8 of us in the Sysadmin channel. We run troubleshooting sessions with them, and sometimes with others. One of the Lemmy devs, @[email protected], is also helping with the current issues.

So, not everything is running as smoothly as we hoped yet, but with all this help we’ll surely get there! Also, thank you all for the donations; they make it possible to get the hardware and tools needed to keep Lemmy.world running!

  • Olap@lemmy.world

    You were so close until you mentioned trying to ditch SQL. Lemmy is 100% tied to it, and trying to replicate what it does without ACID and joins would require a massive rewrite. More importantly, Lemmy’s docs suggest a docker-compose stack, not even k8s for now; it’s trying really hard not to tie itself to a single cloud provider and to avoid having three cloud deployment scripts. That means SQS, lambdas, and CloudFront are out in the short term. Quick question: are there any STOMP-compliant vendors for SQS and lambda equivalents yet?

    Also, the growth lemmy.world has seen is far beyond what any team could handle, IME. Most products would have closed signups to cope with the current load and scale; well done to all involved!

    • jamesorlakin@lemmy.world

      If Postgres becomes the bottleneck, I wonder whether something like Citus could shard the data (relatively) transparently?
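
      As a very rough sketch of what that could look like (the connection string, the choice of the `comment` table, and `post_id` as the distribution column are assumptions for illustration, not Lemmy’s actual schema decisions):

      ```rust
      // Rough sketch only: enable Citus and distribute one large table by a
      // shard key. Connection details, table, and column are assumptions.
      use postgres::{Client, NoTls};

      fn main() -> Result<(), postgres::Error> {
          let mut client = Client::connect(
              "host=localhost user=lemmy dbname=lemmy password=secret",
              NoTls,
          )?;

          // create_distributed_table() spreads the table's rows across Citus
          // worker nodes, hashed by the given column.
          client.batch_execute(
              "CREATE EXTENSION IF NOT EXISTS citus;
               SELECT create_distributed_table('comment', 'post_id');",
          )?;

          Ok(())
      }
      ```

      Citus would also need the extension preloaded on the coordinator and worker nodes added, so this only gestures at the shape of the change.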

      • irdc@derp.foo

        One could also move to multiple read-only PostgreSQL replicas used when generating the site, plus a single read-write instance used whenever anything changes (which is comparatively rare).
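
        As an illustration of that split (connection strings, queries, and the replica topology are invented for the example; Lemmy doesn’t currently do this), the application would hold two handles and route each query to one of them:

        ```rust
        // Illustrative sketch of a primary/replica split; connection strings,
        // the topology, and the queries are assumptions, not Lemmy's code.
        use postgres::{Client, NoTls};

        struct Db {
            /// The single read-write primary: all INSERTs/UPDATEs/DELETEs go here.
            primary: Client,
            /// One of possibly many read-only replicas, used when rendering pages.
            replica: Client,
        }

        impl Db {
            fn connect() -> Result<Self, postgres::Error> {
                Ok(Db {
                    primary: Client::connect("host=db-primary user=lemmy dbname=lemmy", NoTls)?,
                    replica: Client::connect("host=db-replica user=lemmy dbname=lemmy", NoTls)?,
                })
            }

            /// Page rendering only reads, so it can tolerate a slightly stale replica.
            fn newest_posts(&mut self) -> Result<Vec<String>, postgres::Error> {
                let rows = self
                    .replica
                    .query("SELECT name FROM post ORDER BY published DESC LIMIT 20", &[])?;
                Ok(rows.iter().map(|row| row.get::<_, String>(0)).collect())
            }

            /// Anything that changes data must go to the primary.
            fn create_post(&mut self, name: &str) -> Result<(), postgres::Error> {
                self.primary
                    .execute("INSERT INTO post (name) VALUES ($1)", &[&name])?;
                Ok(())
            }
        }

        fn main() -> Result<(), postgres::Error> {
            let mut db = Db::connect()?;
            db.create_post("hello")?;
            println!("{:?}", db.newest_posts()?);
            Ok(())
        }
        ```

        The trade-off is replication lag: pages rendered from a replica can briefly show slightly stale data.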

        • jamesorlakin@lemmy.world

          True, but that would likely require some code changes in Lemmy to segregate read queries and avoid using the replica when a transaction might both read and write.
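
          For example (continuing the hypothetical split above, with made-up table and column names), a vote that reads a score and then updates it has to stay pinned to the primary for the whole transaction:

          ```rust
          // Sketch of a read-then-write operation pinned to the primary inside
          // one transaction. Table and column names are assumptions.
          use postgres::{Client, NoTls};

          fn upvote(primary: &mut Client, post_id: i32) -> Result<(), postgres::Error> {
              // Both the SELECT and the UPDATE run on the primary; sending the
              // SELECT to a replica could read a stale score or miss a
              // concurrent change.
              let mut tx = primary.transaction()?;
              let row = tx.query_one(
                  "SELECT score FROM post_aggregates WHERE post_id = $1 FOR UPDATE",
                  &[&post_id],
              )?;
              let score: i64 = row.get(0);
              tx.execute(
                  "UPDATE post_aggregates SET score = $1 WHERE post_id = $2",
                  &[&(score + 1), &post_id],
              )?;
              tx.commit()
          }

          fn main() -> Result<(), postgres::Error> {
              let mut primary = Client::connect("host=db-primary user=lemmy dbname=lemmy", NoTls)?;
              upvote(&mut primary, 1)
          }
          ```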