Today kbin.social is blocking a huge list of domains just to get federation working again.

The reason for this temporary block is not to defederate, but rather to get the large backlog of 500k messages in the messenger queue processed again. In any case, this does mean that kbin.social is federating with other instances again.

This is a temporary measure. Several users and developers are looking into how to better optimize the failed message queue as we speak. Hopefully Ernest will eventually have time to dive into solutions as well, instead of workarounds, once his instance is migrated to Kubernetes. See my previous thread: https://kbin.melroy.org/m/updates/t/4257/Kbin-federation-issues-and-infra-upgrade
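For illustration, here is what draining such a backlog while skipping the blocked domains could look like. This is only a rough Python sketch with hypothetical names (kbin itself is PHP and uses Symfony Messenger, so this is not the actual code), using a few domains from the list below:

```python
from collections import deque
from urllib.parse import urlparse

# Hypothetical sketch only; kbin's real queue handling lives in
# PHP/Symfony Messenger, not in code like this.
BLOCKED_DOMAINS = {"lemmy.world", "beehaw.org", "sh.itjust.works"}  # excerpt of the list below

def drain_failed_queue(failed, deliver, max_retries=3):
    """Work through a backlog of failed deliveries.

    `failed` is a deque of (inbox_url, payload, attempts) tuples;
    `deliver` performs the actual HTTP POST. Messages aimed at blocked
    domains are dropped outright, so the queue actually drains instead
    of endlessly retrying targets that keep failing.
    """
    still_failing = deque()
    while failed:
        inbox_url, payload, attempts = failed.popleft()
        if urlparse(inbox_url).hostname in BLOCKED_DOMAINS:
            continue  # drop it: these targets are what clogged the queue
        try:
            deliver(inbox_url, payload)
        except OSError:
            if attempts + 1 < max_retries:
                still_failing.append((inbox_url, payload, attempts + 1))
    return still_failing  # leftovers for a later pass
```

The point of dropping (rather than retrying) messages for the blocked domains is that repeated failed deliveries to the same dead targets are exactly what kept the 500k backlog from shrinking.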

List of the domains causing trouble:

lemmygrad.ml, eientei.org, vive.im, lemmy.ml, lemmynsfw.com, kbin.lol, lemmy.webgirand.eu, tuna.cat, posta.no, lemmy.atay.dev, sh.itjust.works, kbin.stuffie.club, kbin.dssc.io, bolha.social, dataterm.digital, kbindev.lerman-development.com, test.fedia.io, mer.thekittysays.icu, lemmy.stark-enterprise.net, kbin.rocks, kbin.cocopoops.com, kbin.lgbt, lemmy.deev.io, lemmy.lucaslower.com, lemmy.norbz.org, social.jrruethe.info, digitalgoblin.uk, pwzle.com, lemmy.friheter.com, federated.ninja, lemmy.shtuf.eu, u.fail, arathe.net, lemmy.click, thekittysays.icu, lemmy.ubergeek77.chat, lemmy.maatwo.com, faux.moe, eslemmy.es, seriously.iamincredibly.gay, test.dataharvest.social, programming.dev, kbin.knocknet.net, pawb.social, lucitt.social, longley.ws, kbin.dentora.social, atay.dev, lemmy.kozow.com, ck.altsoshl.com, pawoo.net, techy.news, lemmy.vergaberecht-kanzlei.de, lemmyonline.com, beehaw.org, pouet.chapril.org, kbin.pcft.eu, fl0w.cc, lemmy.sdf.org, lemmy.zip, feddit.dk, fedi.shadowtoot.world, lemmy.noogs.me, lemmy.kemomimi.fans, social.agnitum.co.uk, fediverse.boo, hive.atlanten.se, forkk.me, lemmy.ghostplanet.org, lemmy.mayes.io, lemmy.mats.ooo, lemmy.world, lemmy.sdfeu.org, lemmy.death916.xyz, geddit.social, masto.fediv.eu

  • Drunemeton@lemmy.world · 1 year ago

    Soon the kbin.social instance will be moved to new infrastructure (using Docker on a Kubernetes cluster), which hopefully will fix all of the scalability issues we’re currently experiencing.

    So it’s two-fold: the underlying technology and the amount of data it can handle.

    Expect growing pains as they (the instances) find tech that works.

    • melroy@kbin.melroy.org (OP) · 1 year ago

      The underlying technology (ActivityPub) indeed has quite a few downsides in terms of scalability. At the same time, large instances in the fediverse need to process huge amounts of data (not only local data, but also external data from remote instances). On top of that, /kbin was still in an early development phase, not fully ready to scale yet, when the big, unexpected migration (due to Reddit …) hit. All the things I just mentioned are now coming together, all at once.
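      To make the "remote data" point concrete: in ActivityPub, every activity (post, comment, vote, boost) from every remote instance you federate with is POSTed to your inbox and has to be verified and stored, so inbound load scales with the whole network's traffic rather than with your local users. A rough Python sketch of that fan-in (all names hypothetical; kbin's real code is PHP/Symfony):

      ```python
      import queue

      # Hypothetical sketch of ActivityPub fan-in, not kbin's actual code.
      inbox_queue = queue.Queue()

      def handle_inbox(activity: dict) -> None:
          """Called for every activity any remote instance POSTs to our inbox.

          Even a mid-sized instance receives posts, comments, votes and
          boosts from every community its users follow, so this queue grows
          with the whole network's traffic, not just with local activity.
          """
          inbox_queue.put(activity)  # defer the heavy work to a worker

      def worker(verify_signature, store) -> None:
          """Single consumer: signature checks and DB writes are where time goes."""
          while True:
              activity = inbox_queue.get()
              if verify_signature(activity):  # HTTP Signature check per message
                  store(activity)             # persist posts, votes, actors, ...
              inbox_queue.task_done()
      ```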