• sabreW4K3@lazysoci.al · +42/-1 · 7 months ago

    You start out by bemoaning the onboarding experience, then move on to portability, and then pitch the idea that servers should just be relays and browsers should be the new world order.

    Yes, onboarding definitely needs to be improved.

    Yes, portability can be improved. Lemmy falls short of Mastodon and not even Mastodon is perfect.

    But what Mastodon does excellently is foster the idea that social media is a tool and that users shouldn’t be overly attached. Also, perhaps if we learn to value servers, and not treat them as mere relays, we’ll be able to teach value and independence.

    The problem is, too many people keep trying to think, how can we make the Fediverse relevant in the modern world? And the better question is, how can we redefine the modern world? How can we normalize the idea of cooperative servers? Whether friends, towns, cities, etc. How can we make it so the people running the servers that host our communities are committed and engaged and not running them at a deficit? I would even go as far as to say that there should be government schemes to repurpose old computers into mini servers and that governments should give everyone a domain like NAME.TOWN.CITY and everyone can run a personal server and get used to it and then they can grow from there.

    • big_slap@lemmy.world · +6 · 7 months ago

      I agree with pretty much everything you’ve stated except for:

      The problem is, too many people keep trying to think, how can we make the Fediverse relevant in the modern world?

      I don’t think this is a problem. The fediverse accomplishes exactly what it set out to do: a decentralized social network. This is uncharted territory and it has been working out surprisingly well. I thought I would be off this two months after Reddit killed third-party clients, but here I am!

      The minute we start to push growth at any cost is the minute the fediverse’s quality declines, in my opinion.

    • rglullis@communick.news (OP) · +4/-1 · 7 months ago

      Also, perhaps if we learn to value servers, so not treat them as mere relays, perhaps we’ll be able to teach value and independence.

      If you want to be independent, the only thing that matters is the ability to roam around and port our identity and data wherever we want. Where you are doing your computing doesn’t really matter.

      government schemes to repurpose old computers into mini servers and that governments should give everyone a domain like NAME.TOWN.CITY and everyone can run a personal server and get used to it and then they can grow from there.

      We don’t need any of that. Computing power and storage are so cheap nowadays that even people in middle-income areas can afford to collect piles of used smartphones in their desk drawers. If there were any real economic demand for what you are saying, we would have seen some company trying to make a business out of it by now.

      • sabreW4K3@lazysoci.al · +1/-1 · 7 months ago

        Computing power and storage are so cheap nowadays that even people in middle-income areas can afford to collect piles of used smartphones in their desk drawers.

        I think that’s a dangerous assumption to make. Not everyone is as well off as ourselves. Some people can’t even afford a desk, let alone have a desk drawer full of old phones.

        • rglullis@communick.news (OP) · +1/-1 · 7 months ago
          1. On average, we are rich enough to have plenty of gadgets around.

          2. Those in extreme poverty need more important things than access to these gadgets.

          • biddy@feddit.nl · +1 · 7 months ago
            Those in extreme poverty need more important things than access to these gadgets.

            We’re going down a sidetrack here but this is just false. A smartphone these days is a ticket to many things required to live. Applying for jobs, applying for government services, buying essential items cheaply, cheap/free education.

            • rglullis@communick.news (OP) · +1 · 7 months ago

              Applying for jobs, applying for government services, buying essential items cheaply, cheap/free education.

              None of these things are even close to being available to people in extreme poverty.

              Think “no access to running water or sewage systems” levels of poverty, not “living in a ghetto area of a European or North American country”.

    • Flax@feddit.uk · +3 · 7 months ago

      My idea is actually, instead of marketing Lemmy and Mastodon as a whole (“Join Lemmy!” or “Join Mastodon!”), to market each individual instance separately.

      • Aurelius@lemmy.world · +1 · 7 months ago

        After speaking with non-technical friends, I began to think that the key to marketing, onboarding, and growth will be reducing the friction of the fediverse. The technical aspects of the fediverse (such as instances) and even the word “fediverse” itself should be behind a curtain.

        Unfortunately, Lemmy’s current default frontend does not do a good job of welcoming non-technical users (e.g. needing to find and select instances, fediverse jargon, etc.), not to mention the lack of common accessibility features.

        Ultimately, I think the 3rd party devs building accessible and frictionless frontends will be key in this respect.

        With that being said, I think a better marketing strategy is to say “join this app” (which connects them to the Lemmy/Mastodon network) because I imagine the bounce rate of the default Lemmy onboarding is not great.

        • Flax@feddit.uk · +1 · 7 months ago

          Also, instances aren’t really helpful in this regard either. “Feddit” just sounds like “fake Reddit” and carries that Reddit baggage. “Lemmy.world” and “lemmy.ml” have Lemmy in the name, so you have to explain Lemmy, which is off-putting. Names like Beehaw, Sopuli, etc. do well. I think Beehaw is actually a good example, as it has its own identity as well as being a federated forum.

  • h3ndrik@feddit.de · +14 · 7 months ago · edited

    That’s a nice idea, but it has some pretty obvious technical drawbacks that aren’t discussed in the blog article:

    The cost of most networks grows roughly quadratically with the number of participants: a fully connected mesh of n nodes needs on the order of n² links. It gets immensely more computationally expensive that way, and you’re bound to use a lot of additional network traffic and total CPU power.
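As a rough back-of-the-envelope sketch (my own illustration, not from the comment): a hub-and-spoke federation needs one upstream connection per client, while a full mesh needs a link for every pair of nodes.

```python
def hub_links(clients: int) -> int:
    # Hub-and-spoke: each client keeps a single connection to its server.
    return clients

def mesh_links(nodes: int) -> int:
    # Full mesh: one link per pair of nodes, i.e. n * (n - 1) / 2.
    return nodes * (nodes - 1) // 2

# 1,000 participants: 1,000 links via a hub vs. ~500,000 links in a mesh.
print(hub_links(1000), mesh_links(1000))  # → 1000 499500
```

The quadratic growth of the mesh is the extra traffic and CPU cost the comment is pointing at.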

    And some (a lot of) people like using social media on their phones instead of a computer. You’re bound to drain their batteries real fast by moving application logic there.

    Other than that I like the general idea. The Fediverse should be more dynamic. Caching and discovery have some big issues in the current form. That should be tackled and we need technical solutions for that. And the current architecture isn’t perfect at all.

    Furthermore, if we’re talking about networks being smarter at the edge… why then move the logic into the browser, which isn’t at the edge? Wouldn’t that be an argument for edge routers, as in edge computing? I mean, with C2S you have a server on one side and a client on the other, with the edge somewhere in between. If you now flip it, you end up in a different situation, but there’s still nothing at the edge where you could introduce some smarts…

    • rglullis@communick.news (OP) · +3/-1 · 7 months ago

      And some (a lot of) people like using social media on their phones instead of a computer. You’re bound to drain their batteries real fast by moving application logic there.

      Messaging applications (that need to be online all the time) don’t have this issue. Mobile email clients are even more conservative in resource usage. Why would an AP client be any different?

      You are not going to be transcoding video or executing complex machine-learning analysis on the device. I can reasonably argue that a local-first ActivityPub application would be no different in resource usage than something like a modern XMPP or Matrix client.

      • h3ndrik@feddit.de · +4 · 7 months ago · edited

        Because with all of those (messaging, email, XMPP, Matrix, and ActivityPub) most of the magic happens on the server. Take email for example. The server takes care to be online 24/7. It provides like 5GB of storage for your inbox that you can access from everywhere. It filters messages and does database work so you can have full-text search. Same with messaging: your server coordinates with like 200 other servers so messages from users anywhere get forwarded to you. It keeps everything in sync and caches images so they’re available immediately.

        That allows the clients/apps to be very simplistic. A client just needs to maintain one connection to your server and ask if there’s anything new every now and then, or query new data/content. Everything else is already taken care of by the server.

        OP’s suggestion is to change that: move logic into the client/app. But that’s not super easy. If the client now needs to maintain the 200 connections at all times, instead of just 1, to see if anyone replied, your phone might drain 200 times as much battery. And requiring the phone to be reachable also comes with a severe penalty. Phones have elaborate mechanisms to save power and sleep most of the time. Any additional network activity requires the processor and the modem to stay active for longer periods, and apart from the screen that’s one of the major things that draws power.

        • rglullis@communick.news (OP) · +3 · 7 months ago

          What I am proposing is not getting rid of the server, just reducing the amount of functionality that depends on it. You won’t be connecting with 200 different servers; you will still have only one single node responsible for notifications.

          Regarding storage: I can say from experience that you can have a local-first architecture for structured data that does not blow up the client. At a previous job, we built a messenger app where all client data was stored in PouchDB, which could be synced via a “master” CouchDB. All client views were built from the local data. Media storage, of course, went to the cloud, which means that the local data itself was only highly compressible text. You can go a looooong way with just 1GB of storage, which is well within the limits of web storage.
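A toy sketch of that local-first idea (illustrative only; the real app used PouchDB/CouchDB, whose revision handling is far more involved): every device keeps a full local copy, builds its views from it, and replication just adopts whichever revision of a document is newer.

```python
class LocalStore:
    """Toy local-first store; a crude stand-in for PouchDB syncing with CouchDB."""

    def __init__(self):
        self.docs = {}  # doc_id -> (revision, body); all reads are local

    def put(self, doc_id, body):
        rev = self.docs.get(doc_id, (0, None))[0] + 1
        self.docs[doc_id] = (rev, body)

    def pull_from(self, other):
        # Replication: adopt any document whose revision is newer than ours.
        for doc_id, (rev, body) in other.docs.items():
            if rev > self.docs.get(doc_id, (0, None))[0]:
                self.docs[doc_id] = (rev, body)

master = LocalStore()  # plays the role of the "master" CouchDB
phone = LocalStore()   # client views are built from this local copy
master.put("post:1", "hello fediverse")
phone.pull_from(master)
print(phone.docs["post:1"])  # → (1, 'hello fediverse')
```

Because the client only exchanges revision deltas, a periodic sync is enough; nothing forces it to stay connected all day.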

          • h3ndrik@feddit.de · +1 · 7 months ago · edited

            Hmmh. But how would that change Mastodon not displaying previous (uncached) posts? Or queries running through the server with its perspective?

            And I fail to grasp how hashtags and the Lemmy voting system are related to a client/server architecture… You could just implement a custom voting metric on the server. Sure, you could also implement it five times in all the different apps, but you’d end up with the same functionality regardless of where you do the maths.

            And if people are subscribed to like 50 different communities or watch the ‘All’ feed, there is a constant flow of ActivityPub messages all day long. Either you keep the phone running all day to handle that, or you do away with any notification functionality. And replicating the database to the device either forces you to drain the battery all day, or you just sync when the user opens the app. But if opening Lemmy takes a minute to sync the database before new posts appear, that isn’t a great user experience either.

            I’d say we need nomadic identity, and more customizability with things like hashtags, filters, and voting. Dynamic caching too, because as of now Fediverse servers regularly get overwhelmed if a high-profile person with lots of followers posts an image. But most of that needs to be handled by servers. Or we do a full-on P2P approach like with Nostr or other decentralized services. Or edge computing.

            I don’t quite get where in between federated and decentralized (as in p2p) your approach would be. And if it’d inherit the drawbacks of both worlds or combine the individual advantages.

            And ActivityPub isn’t exactly an efficient protocol, and neither are the server implementations. I think we could do way better with a more optimized, still federated protocol. Same with Matrix: it provides me with similar functionality to what my old XMPP server had, just with >10x the resource usage. And both are federated.

            • rglullis@communick.news (OP) · +2 · 7 months ago · edited

              But how would that then change Mastodon not displaying previous (uncached) posts?

              You default to push (messages that come through your server), and you fall back to pull (the client accessing a remote server) when your client is interested in fetching data that it has never seen.
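A sketch of that push-default/pull-fallback pattern (all names here are illustrative, not from any real client): the client answers reads from its push-fed cache and only reaches out to a remote server on a miss.

```python
class Client:
    def __init__(self, fetch_remote):
        self.cache = {}                   # filled by pushes from our one server
        self.fetch_remote = fetch_remote  # pull fallback: an HTTP GET in practice

    def on_push(self, object_id, activity):
        self.cache[object_id] = activity  # default path: the server pushes to us

    def get(self, object_id):
        if object_id not in self.cache:   # never seen: pull from the origin
            self.cache[object_id] = self.fetch_remote(object_id)
        return self.cache[object_id]

c = Client(fetch_remote=lambda oid: {"id": oid, "source": "remote"})
c.on_push("note/1", {"id": "note/1", "source": "push"})
print(c.get("note/1")["source"])  # → push   (served from the local cache)
print(c.get("note/2")["source"])  # → remote (pull fallback, then cached)
```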

              And I fail to grasp how hashtags and the Lemmy voting system is related to a client/server architecture

              Hashtags, sorting and ranking methods, moderation policies, and pretty much everything aside from the events themselves are just ways to visualize/filter/aggregate the data according to the user’s preferences. But it turns out that this is only “complex” when your data set is too large (which is bound to happen when your server has to serve lots of users). If you filter the data set at the client, its size becomes manageable.

              we do a full-on P2P approach like with Nostr

              Nostr is not p2p, and p2p is not what I am talking about. Having logic at the client does not mean “p2p”.

              XMPP server (has less resource usage and is) federated.

              Yes, because the XMPP server is only concerned with passing messages around!

              • h3ndrik@feddit.de · +1 · 7 months ago · edited

                Ah, you’re right. Nostr uses relays; now I know what the name stands for. It sounds a bit like your proposal taken to the extreme: the “servers” get downgraded to relatively simple relays that just forward stuff, and the magic happens completely(?) on the clients.

                I’m still not sure about the application logic. Sure, I also like the logic close to me (the user), but the trend has been towards the opposite for quite some time. Sometimes the explanation is simple: if you do most things on the server, you retain control over what’s happening, which is great for selling ads and controlling the platforms in general. On the other hand it also has some benefits for power efficiency on the devices. I’m not talking about computing stuff, but rather about something like Google Cloud Messaging, which has the purpose of reducing the number of open connections and the power draw by combining everything into a single connection for push messages. In order to decide when to wake a device, it needs access to the result of the filtering and message prioritization, which then needs to be done server-side.

                I’m also not sure about the filtering of hashtags. I mean, if you subscribe to a hashtag, or want to count the sum to calculate a trend… something needs to work through all the messages and filter/count them. Doesn’t that mean you’d need all of Mastodon’s messages of the day on your device? I’m sure that’s technically possible. Phones are fast little computers, and 4G/5G sometimes has good speed. But I’m not sure what kind of additional traffic you’d estimate. 50 megabytes a day is 1.5GB of your monthly cellular data plan. A bit less, because sometimes people are at home and use wifi… but then they also don’t just use one platform; they have Matrix, Lemmy, and Mastodon installed. And you can’t just skip messages: you’d need to handle them all to calculate the correct number of upvotes and hashtag uses, even if the user doesn’t open the app for a week.

                I don’t quite “feel it”. But I also wouldn’t rule out the possibility of something like a hybrid approach. Or some clever trickery to get around that for some of the things a social network is concerned with…

                Or like something I’d attribute more to edge computing. The client makes all the decisions and tells the edge (router) exactly what algorithm to use to do the ranking, how to do the filtering and when it wants to be woken up… That device does the heavy lifting and caches stuff and forwards them in chunks as instructed by the client.

                • rglullis@communick.news (OP) · +2 · 7 months ago · edited

                  Doesn’t that mean you’d need all Mastodon’s messages of the day on your device?

                  You wouldn’t need that. Think in terms of XMPP: a server could create the equivalent of a MUC room for tags, and the client could “follow” a tag by joining the room. The server would then push all messages it receives to that room. This scales quite well and still puts the client in control of the logic.

                  Similar architecture could be used for groups.
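The MUC-like fan-out described above can be sketched in a few lines (hypothetical code, not any existing server): the server keeps a “room” per tag, and each matching post is pushed to every client that has joined.

```python
from collections import defaultdict

class TagRooms:
    """Server-side 'room' per hashtag; clients join a room to follow a tag."""

    def __init__(self):
        self.members = defaultdict(list)  # tag -> list of client inboxes

    def join(self, tag, inbox):
        self.members[tag].append(inbox)

    def publish(self, tags, post):
        # Push the post into the room for each of its tags.
        for tag in tags:
            for inbox in self.members[tag]:
                inbox.append(post)

rooms = TagRooms()
alice = []                      # stands in for one client's inbox
rooms.join("fediverse", alice)
rooms.publish(["fediverse", "xmpp"], "a post about both tags")
print(alice)  # → ['a post about both tags']
```

The server does only membership bookkeeping and fan-out; deciding which tags to follow, and what to do with the pushed posts, stays in the client.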

    • rglullis@communick.news (OP) · +4/-3 · 7 months ago

      Nostr is broken in one crucial aspect: your identity is derived from your private key. If your keys are compromised, your whole account is lost forever. With actor IDs, your name can be a domain name, which makes it easier to protect your identity. With FEP-61 any DID could potentially be used.

      I honestly don’t like this “if you are criticizing what we have, it means that you don’t belong here”. You are responding like I haven’t looked at Nostr, or even the other alternatives. It reeks of gate-keeping. But anyway, for the sake of argument: what I want is a mix of Libervia and Movim, with the ActivityPub vocabulary.

      • smileyhead@discuss.tchncs.de · +1 · 7 months ago

        There is also one more broken thing: relay servers are not needed, as we already have networks that can deliver any IP packet, so there is no need to build a special network just for Nostr.

  • iltg@sh.itjust.works · +2 · 7 months ago

    i really disagree with most of your points. a “server” is some machine working for the client. your proposal isn’t getting rid of servers, you’re just making every user responsible for being their own server.

    this mostly feels like “im annoyed my instance is filtering content and lacks replies”. have you tried fedilab? it allows fetching directly from source, bypassing your instance and fetching all replies. i think that’s kind of anti-privacy but you may like it

    if you’re interested, here’s a wall of text with more arguments on my points (sorry, wanted to be concise but really failed; i may make this into a reply blog post soon:tm:)


    Federation is not the natural unit of social organization

    you argue that onboarding is hard, as if picking a server were signing a contract. new users can go to mastosoc and then migrate from there; AP has a great migration system. also, federation is somewhat the natural unit: you will never speak to all 8B people, but you will discuss with your local peers and your ideas may get diffused. somewhat fair points, but kind of overblown

    Servers are expensive to operate

    you really can’t get around this. even if you make every user handle their own stuff, every user will have their own database and message queue. every user will receive each post in their message queue, process it and cache it in their db. that’s such a wasteful design: you’re replicating once for every member of the network

    We should not need to emulate the fragmentation of closed social networks

    absolutely true! this should get handled by software implementers, AP already allows intercompatibility, we don’t need a different system, just better fedi software

    The server is the wrong place for application logic

    this is really wrong imo, and the crux of my critique. most of your complaints boil down to caching: you only see posts cached on a profile and in a conversation. this can’t be different, so how could we solve it?

    • you mention a global search: how do we do that? a central silo which holds all posts ever made, indexed to search? who would run such a monster, and if it existed, why wouldn’t everyone just connect there to have the best experience? that’s centralization
    • again global search: should all servers ask all other servers? who keeps a list of all servers? again centralized, and also such a waste of resources: every query you’re invoking all fedi servers to answer?
    • even worse you mention keeping everything on the client, but how do you do that? my fedi instance db is around 30G, and im a single user instance which only sees posts from my follows, definitely not a global db. is every user supposed to store hundreds of GBs to have their “local global db” to search on? why not keep our “local global dbs” shared in one location so that we deduplicate posts and can contribute to archiving? something like a common server for me and my friends?

    also if the client is responsible for keeping all its data, how do you sync across devices? in some other reply you mention couchdb and pouchdb, but that sounds silly for fedi: if we are 10 users should we all host our pouchdb on a server, each with the same 10 posts? wouldn’t it be better keeping the posts once and serving them on demand? you save storage on both the server and all clients and get the exact same result

    having local dbs for each client also wouldn’t solve broken threads or profiles: each client still needs to see each reply or old post. imagine if every fedi user fetched every old post every time they follow someone, that would be a constant DOS. by having one big server shared across multiple people you’re increasing your chance of finding replies already cached, rather than having to go fetch them

    last, security: you are assuming a well-intentioned fedi, but there are bad actors. i don’t want my end device to connect to every instance under the sun. i made a server which only holds fedi stuff, which at worst will crash or leak private posts. my phone/pc holds my mails and payment methods; am i supposed to just go fetching far and wide from my personal device as soon as someone delivers me an activity? no fucking way! the server is a layer of defense

    networks are smarter at the edges

    the C2S AP api is really just a way to post, not much different than using the mastodon api. as said before, content discovery on every client is madness, but timeline/filter management is absolutely possible. is it really desirable? the megalodon app allows managing local filters for your timeline, but that’s kind of annoying because you end up with out-of-sync filters between multiple devices. same for timelines: i like my lists synced honestly, but to each their own; filters/timelines on the client should already be possible.

    you mention cheaper servers, but only because you’re delegating costs to each client, and the “no storage” idea is in conflict with the couchdb thing you mentioned somewhere else. servers should cache; caching is more efficient on a server than on every client.

    a social web browser, built into the browser

    im not sure what you’re pitching here. how are AP documents served to other instances from your browser? does your browser need to deliver activities to other instances? is your whole post history just stored in localstorage, deleted if you clear site data? are you supposed to still buy a domain (AP wants domains as identities), and where are you going to point it?

    • rglullis@communick.news (OP) · +1 · 7 months ago

      I have not once said that we need to get rid of servers, but I am saying that they could (should?) be used only as a proxy for the outbox/inbox. I’ve said this already elsewhere, but it may make it easier to understand: the “ideal” model I have in mind is something like https://movim.eu, but with messages based around the ActivityStreams vocabulary.

      you really can’t get around this, even if you make every user handle their own stuff, every user will have their database and message queue.

      Why is it that a XMPP server can handle millions of concurrent users on a single box with 160GB RAM and 40 cores, yet Mastodon deployments for less than 10k active users have crazy expensive bills?

      AP has a great migration system.

      Hard disagree here. Tell me one system where I can take my domain and just swap the URLs of the inbox/outbox. Mastodon lets you migrate your follower list and signals the redirect to your followers about your new actor ID, but you cannot bring your data. Most importantly, the identity itself is not portable.

      silo which holds all posts ever made, indexed to search? (…) that’s centralization

      You can have decentralized search indexes. Each server holds a bit of the index, but everyone gets to see the whole thing.
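One common way to sketch that (my own illustration, with made-up hostnames): shard the index by hashing each term to a server, so every node stores only a slice of the index but any node can route a query to the right shard.

```python
import hashlib

SERVERS = ["alpha.example", "beta.example", "gamma.example"]  # hypothetical nodes
index = {server: {} for server in SERVERS}  # per-server: term -> set of post ids

def shard_for(term: str) -> str:
    # A stable hash, so every participant agrees on who indexes which term.
    digest = hashlib.sha256(term.encode("utf-8")).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

def add_post(post_id: str, text: str):
    for term in set(text.lower().split()):
        index[shard_for(term)].setdefault(term, set()).add(post_id)

def search(term: str) -> set:
    # Route the query to the single shard that owns this term.
    term = term.lower()
    return index[shard_for(term)].get(term, set())

add_post("p1", "decentralized search for the fediverse")
print(search("fediverse"))  # → {'p1'}
```

Each server holds only its shard, yet any query can be answered by asking exactly one node; real systems (DHTs) add replication and node churn handling on top.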

      i don’t want my end device to connect to every instance under the sun.

      Not every instance, but you’d be connecting to the outboxes of the people you follow. How is that different from, e.g., subscribing to an RSS feed?

      my fedi instance db is around 30G, and im a single user instance which only sees posts from my follows

      First: how the hell did you get this much data? :) I have an instance that has been running for 4 years, with a bunch of relays, serving ~10 users, and the DB has less than 4GB.

      But to answer your question: if you are running a single-user instance, then you are already running a client; the only difference is that it runs on a remote machine which proxies everything for you. And how you deal with data wouldn’t change: just like you can delete old/stale data in Mastodon, you’d be able to delete or offload messages that are older than X days.

  • intensely_human@lemm.ee · +2 · 7 months ago

    Yup, sure enough. There it is:

    We should not and need not emulate the fragmentation of closed social networks

    Yes, we should emulate closedness. Completely interconnected spaces are breeding grounds for monopoly. The Fediverse’s lack of perfect interconnection is a feature, not a bug.

    • rglullis@communick.news (OP) · +2 · 7 months ago

      I think you didn’t parse the sentence as I meant it.

      I am not saying you should make all networks completely connected. What I am saying is that we should not develop Fediverse apps by emulating a closed (as in proprietary, corporate-controlled) service.