DEF CON infosec super-band the Cult of the Dead Cow has released Veilid (pronounced vay-lid), an open source project that applications can use to connect clients and transfer information in a decentralized, peer-to-peer manner.

The idea here is that apps – mobile, desktop, web, and headless – can find and talk to each other across the internet privately and securely, without having to go through centralized and often corporate-owned systems. Veilid provides code that app developers can drop into their software so that their clients can join and communicate in a peer-to-peer community.

In a DEF CON presentation today, Katelyn “medus4” Bowden and Christien “DilDog” Rioux ran through the technical details of the project, which has apparently taken three years to develop.

The system, written primarily in Rust with some Dart and Python, combines aspects of the Tor anonymizing service and the peer-to-peer InterPlanetary File System (IPFS). If an app on one device connects to an app on another via Veilid, it shouldn’t be possible for either client to learn the other’s IP address or location from that connectivity, which is good for privacy. The app makers can’t get that info, either.

Veilid’s design is documented on the project’s website, and its source code is available under the Mozilla Public License Version 2.0.

“IPFS was not designed with privacy in mind,” Rioux told the DEF CON crowd. “Tor was, but it wasn’t built with performance in mind. And when the NSA runs 100 [Tor] exit nodes, it can fail.”

Unlike Tor, Veilid doesn’t run exit nodes. Each node in the Veilid network is equal, and if the NSA wanted to snoop on Veilid users like it does on Tor users, the Feds would have to monitor the entire network, which hopefully won’t be feasible, even for the No Such Agency. Rioux described it as “like Tor and IPFS had sex and produced this thing.”

“The possibilities here are endless,” added Bowden. “All apps are equal, we’re only as strong as the weakest node and every node is equal. We hope everyone will build on it.”

Each copy of an app using the core Veilid library acts as a network node; it can communicate with other nodes and uses a 256-bit public key as its ID number. There are no special nodes, and there’s no single point of failure. The project supports Linux, macOS, Windows, Android, iOS, and web apps.
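
For a sense of what that identity scheme looks like, here is a minimal sketch – assuming the off-the-shelf ed25519-dalek, rand, and hex crates rather than Veilid's own key-management API – of generating a keypair whose 32-byte (256-bit) public key can serve as a node ID:

```rust
// Sketch only: Veilid wraps key handling behind its own crypto API.
// This shows the shape of a 256-bit public-key identity with ed25519-dalek.
use ed25519_dalek::{Signer, SigningKey, Verifier};
use rand::rngs::OsRng;

fn main() {
    // Generate a keypair; the verifying (public) key is 32 bytes = 256 bits.
    let signing_key = SigningKey::generate(&mut OsRng);
    let node_id: [u8; 32] = signing_key.verifying_key().to_bytes();
    println!("node id: {}", hex::encode(node_id));

    // The same keypair lets peers authenticate messages from this node.
    let msg = b"hello, peer";
    let sig = signing_key.sign(msg);
    assert!(signing_key.verifying_key().verify(msg, &sig).is_ok());
}
```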

Veilid can talk over UDP and TCP, and connections are authenticated, timestamped, strongly end-to-end encrypted, and digitally signed to prevent eavesdropping, tampering, and impersonation. The cryptography involved has been dubbed VLD0, and uses established algorithms since the project didn’t want to risk introducing weaknesses from “rolling its own,” Rioux said.

This means XChaCha20-Poly1305 for encryption, Ed25519 for public-key authentication and signing, X25519 for Diffie-Hellman key exchange, BLAKE3 for cryptographic hashing, and Argon2 for password hash generation. These could be swapped out for stronger mechanisms if necessary in the future.
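
To illustrate, assembling two of those same primitives from the equivalent off-the-shelf RustCrypto chacha20poly1305 crate and the blake3 crate looks roughly like this – a sketch under those assumptions, not Veilid's actual VLD0 code:

```rust
// Sketch of two VLD0 building blocks via the chacha20poly1305 and blake3
// crates; Veilid's real implementation sits behind its own crypto API.
use chacha20poly1305::{
    aead::{Aead, AeadCore, KeyInit, OsRng},
    XChaCha20Poly1305,
};

fn main() {
    // XChaCha20-Poly1305: authenticated encryption with a 24-byte nonce.
    let key = XChaCha20Poly1305::generate_key(&mut OsRng);
    let cipher = XChaCha20Poly1305::new(&key);
    let nonce = XChaCha20Poly1305::generate_nonce(&mut OsRng);
    let ciphertext = cipher
        .encrypt(&nonce, b"private message".as_ref())
        .expect("encryption failed");
    let plaintext = cipher
        .decrypt(&nonce, ciphertext.as_ref())
        .expect("decryption/authentication failed");
    assert_eq!(plaintext, b"private message");

    // BLAKE3: fast cryptographic hashing, e.g. over content or records.
    println!("ciphertext hash: {}", blake3::hash(&ciphertext).to_hex());
}
```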

Files written to local storage by Veilid are fully encrypted, and encrypted table store APIs are available for developers. Keys for encrypting device data can be password protected.
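
As a rough illustration of that password-protection step, here is a sketch using the argon2 crate to stretch a password into a storage key; the function name and parameters are illustrative, and Veilid's table store handles this internally:

```rust
// Sketch: stretch a user password into a 256-bit key for encrypting local
// data, roughly the role Argon2 plays for Veilid's protected stores.
use argon2::Argon2;

fn derive_storage_key(password: &[u8], salt: &[u8]) -> [u8; 32] {
    let mut key = [0u8; 32];
    // Defaults shown for brevity; real code should tune memory/iteration
    // costs and persist a unique random salt alongside the encrypted data.
    Argon2::default()
        .hash_password_into(password, salt, &mut key)
        .expect("key derivation failed");
    key
}

fn main() {
    let key = derive_storage_key(b"correct horse battery staple", b"a-unique-16B-salt");
    println!("derived a {}-byte storage key", key.len());
}
```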

“The system means there’s no IP address, no tracking, no data collection – that’s the biggest way that people are monetizing your internet use,” Bowden said.

“Billionaires are trying to monetize those connections, and a lot of people are falling for that. We have to make sure this is available,” Bowden continued. The hope is that applications will include Veilid and use it to communicate, so that users can benefit from the network without knowing all the above technical stuff: it should just work for them.

To demonstrate the capabilities of the system, the team built VeilidChat, a Signal-like secure instant-messaging app, using the Flutter framework. Many more apps are needed.

If it takes off in a big way, Veilid could put a big hole in the surveillance capitalism economy. It’s been tried before with mixed or poor results, though the Cult has a reputation for getting stuff done right. ®

  • PeleSpirit@lemmy.world

    What I don’t understand about these projects is why can’t we both have them and protect the children (child porn, child trafficking, etc.)? Is there a way to self-police, like the fediverse is starting to do, by kicking those people out of the instance or making sure no one will connect with them? I would love the privacy from corporations, not places where really shitty people can do really shitty things.

    • Beryl@lemmy.world

      It’s simple, really: if you have a built-in back door to prevent child porn circulation, then you can use it for anything else, and it WILL eventually be used in other ways.

    • nickwitha_k (he/him)@lemmy.sdf.org

      What I don’t understand about these projects is why can’t we both have them and protect the children (child porn, child trafficking, etc.)?

      The reason is that the “protect the children” thing is and always has been a bad faith excuse to expand or establish control over others. That’s not to say that places like TOR don’t have a problem with CSAM, but if that were the actual target, it would be addressed in the proposed laws and vigorously pursued. It never is.

      Protecting children is always, at most, a token gesture in these laws, which expand censorship and surveillance of the population while demonstrating complete disregard for the harms and unnecessary risks they introduce, and generally exempting those in power from being impacted.

      • lateraltwo@lemmy.world

        For a good example of how “extra policing and security” can go too far, look no further than the TSA and all the “terrorism” they protect us from.

      • jmk1ng@programming.dev

        If that’s what you have to tell yourself, fine. But at least be intellectually honest here and admit you’re simply willing to condone child abuse because you think Google or whatever looking at your texts is more egregious.

        • nickwitha_k (he/him)@lemmy.sdf.org

          Nah. I don’t condone it nor do I think its perpetrators should be able to escape justice, regardless of how wealthy or politically connected they are. They deserve the same as slavers for the lifelong harms that they cause their victims. If you’re intellectually honest, you’ll admit that giving unfettered access to personal information and habits to organizations with poor track records of keeping such access safe is a stupid idea.

          ETA: While I don’t like that megacorps have my information, that ship has sailed. What I see as the bigger problem is when the backdoor access and data is stolen by criminal organizations or abused by state actors. Think the increasing rate of identity theft, credit card fraud, and ransomware attacks is bad now? You ain’t seen nothing yet. Just wait until a bad actor has access to everyone’s devices and activity data, then starts selling it to whoever is interested (burglars knowing when victims will be absent, state actors not needing to even employ a honeypot for blackmail, properties “mysteriously” being signed over to megacorps, groups committing genocide and censoring any mention on electronic media, etc.). Might sound like an exaggerated enumeration of risks, but it’s quite the opposite. If you have any understanding of how online services work, you know that encryption and security are the only reason that the Internet can be used for anything beyond sharing documents.

          ETA2: Also, if you’ve paid even the slightest bit of attention to history, you’ll know how much “think of the children” was abused in the 20th century alone to try to prevent access to “black music” (jazz, blues), education, art, rock music, knowledge of sex, fantasy novels, roleplaying games, programs preventing child abuse, etc.

    • raspberriesareyummy@lemmy.world

      I would argue it could be more efficient to protect children (and all victims) in our daily lives - show empathy towards others, and improve empathy in societies where necessary (yes, sadly, this is a lengthy process), to the point where no country will seem to be turning a blind eye towards abusers, and where people care & check on the kids they see in the neighborhood. This won’t eliminate all the abuse, but online policing of contents is only fighting the symptoms, so the “offline approach” seems preferable. And surprise - if people are vigilant offline, the excuse for global surveillance goes away & ugly corporate capitalistic assholes need to find a new excuse.

      • thisbenzingring@lemmy.sdf.org

        The way they caught that horrible serial abuser in Australia recently is a good example of a detective using localized skills to find the needle in the haystack and identify a blanket in an abuse video.

    • echo64@lemmy.world

      For the same reason we don’t allow government cameras in every public and private bathroom, even though they could stop really shitty people doing really shitty things.

      Humans demand personal privacy, and need avenues for that. The quite literal big brother is generally not felt to be something any society wants, even if it could eliminate the shitty people doing really shitty things.

      It’s not a tech problem. It’s a societal one.

        • Tangent5280@lemmy.world

          The fediverse is not private. It is so non-private that I’d say it’s hovering somewhere at the opposite end of the spectrum. That’s OK, because it was not built for privacy. It was built to democratise online spaces, and it does that very well.

          Self regulation is possible on the fediverse because a semi central authority (instance admins) can choose to defederate from other semi central authorities (instance admins); this still doesn’t really silence anyone since they can form their own instance and do whatever they want in there.

          In person-to-person chats, self regulation does exist - if you don’t like what someone says or does, you just stop communicating or associating with them. If what they’re doing is immoral and illegal, you can report them yourself. What private communication is about is that someone unconnected to the conversation can’t come around snooping without the consent of anybody actually in the conversation.

    • Loulou@lemmy.mindoki.com

      It was never about the children or fighting terrorism; to get pedophiles or thwart attacks you have to have people “on the ground”, not snoop on everything.

    • guyrocket@kbin.social

      I think this is a great question, but I would ask it a little differently.

      Is it possible for a p2p system to self-police for things like cp?

      Maybe no one knows how now. But maybe someone can figure it out eventually. Seems like a bit of a logical contradiction but I continue to be amazed at human creativity.

      • treadful@lemmy.zip

        Yeah, they are contradictory concepts to an extent. Making an uncensorable and untraceable protocol means exactly that. Things like the Fediverse are not that and censorship can come through things like defederation and blocking.

        That said, they exist on different layers. You could probably run a federated system on top of this protocol and still be able to filter out the illegal and offensive content. It doesn’t mean that content just disappears, it just means you don’t have to subject yourself to it.

      • linearchaos@lemmy.world

        If it were just anonymous content in a public setting, you could use crowd-based morality to filter it: any hash with a 75 percent downvote gets blacklisted, that kind of thing. You’d have to account for bots and AI, which may not be possible.
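
        A minimal sketch of that vote-threshold idea, purely hypothetical (this is not a Veilid feature, and all names here are made up):

        ```rust
        // Hypothetical crowd-moderation filter: blacklist a content hash once
        // its downvote share crosses a threshold. Illustration only; it
        // ignores the bot/Sybil problem raised above.
        use std::collections::{HashMap, HashSet};

        const BLACKLIST_THRESHOLD: f64 = 0.75;

        #[derive(Default)]
        struct Votes {
            up: u64,
            down: u64,
        }

        #[derive(Default)]
        struct CrowdFilter {
            votes: HashMap<[u8; 32], Votes>, // keyed by content hash
            blacklist: HashSet<[u8; 32]>,
        }

        impl CrowdFilter {
            fn vote(&mut self, hash: [u8; 32], is_down: bool) {
                let v = self.votes.entry(hash).or_default();
                if is_down { v.down += 1 } else { v.up += 1 }
                // Blacklist once downvotes reach the threshold share of votes.
                if v.down as f64 / (v.up + v.down) as f64 >= BLACKLIST_THRESHOLD {
                    self.blacklist.insert(hash);
                }
            }

            fn is_blocked(&self, hash: &[u8; 32]) -> bool {
                self.blacklist.contains(hash)
            }
        }
        ```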

        But once you put privacy into the mix, you lose the crowd you’d need to vote on morality. Now you’ve got cases where MGM, Sony, and BMG hire people to infiltrate the networks and shut down any post they deem unfit.

        Privacy is the difference between the dark web and the public torrent scene.

      • PeleSpirit@lemmy.world

        I mean, AI can already tell what pictures I have on my computer, judging by the ads I’m shown; surely the system could use AI to find and block that material. That’s one partial solution.

    • boatswain@infosec.pub

      What I don’t understand about these projects is why can’t we both have them and protect the children

      Think of this as closer to Signal than to a social media platform. It’s a protocol, so there’s no saying that you couldn’t build a social media site with it, but for now the demo app that I saw today is just chat. The parties involved share public keys with each other, and from then on, everything is encrypted so that only those people in the chat can read it.

      With that model, censorship is not really feasible. If you’re one of the parties in the conversation, you can say “guys, that’s gross, stop” or send screenshots to the cops or whatever, but that’s about it.

      Ultimately, if the only way the Authorities have of acting against terrorism/pedophiles/etc is by infringing on the right to privacy of everyone in the country, they’re doing a shit job and need to be replaced.

      • jmk1ng@programming.dev

        You really can’t have it both ways. It’s morally bankrupt to launch protocols that clearly will be used for abhorrent purposes and simply hand wave it away because you’re uncomfortable with the reality of the situation.

        I think we all wish that weren’t the case, but it is.

        Saying crap like it wouldn’t be a problem if law enforcement would just “git gud” makes you complicit

        • Zak@lemmy.world

          I agree that you can’t have it both ways; for everyone to have privacy, horrible people have to get privacy, and they will do horrible things with it.

          The thing is, people doing horrible things are already incentivized to take precautions, and it’s not possible to uninvent cryptography. Making it more accessible helps the innocent more than it does the guilty.

            • MrNobody@sh.itjust.works

              Common sense? Take the Signal protocol. There are more innocent people using it for whatever purpose than there are guilty people using it for whatever purpose. You can’t not develop or use a technology just because somebody else might use it for nefarious purposes. Bad people do bad things on the normal internet; does that mean we should start restricting internet usage because someone might do something bad? Of course not, that’s just stupid. So why doesn’t the same hold true for encryption technologies? Sure, someone with ill intentions is going to do things with it we don’t like, but the majority of users just want to use said technology so governments and corporations don’t see what they are doing.

              Or are we not entitled to privacy simply because some people use their privacy to harm others?

              • jmk1ng@programming.dev

                There’s literally no way you can back up any of these claims. It’s just what you want to be true.

                All I want is for you to admit that you think protecting the “privacy” of people’s mundane text conversations is worth enabling Terrorism, Child Sexual Abuse, Human Trafficking, Organized Crime, etc etc etc

                To be clear, I think people should have a basic expectation of privacy. But at what cost? Like we’ve established it’s impossible to have one without the other.

                • MrNobody@sh.itjust.works

                  Ok. Cars can be used to ram-raid buildings and to try to rip ATMs from walls. They’re used to package, carry, and execute bombings. Should we ban all vehicles? Or better yet, guns. They can be used, and are used over and over again, to attack and terrorise innocent people out at the shops or going to school. Should all guns be banned? The internet can be used to learn how to make bombs and other weapons for the use of terror. That doesn’t even need to be anything special, either; simple Google searches will get you there. Should we ban the internet? I’m sure there are clearnet sites that post things that shouldn’t be posted; it’s not just the darknet.

                  I do agree with your last point, though. You can have full security or you can have full privacy, but you can’t have both. I think the main argument is that you value security more, whilst myself and others value privacy more. Both, in my view, are fair and valid.

                • Zak@lemmy.world

                  We have good examples already of people hosting despicable things on TOR hidden services (or the “dark web” if you prefer). On occasion they get caught and we read about it in the news. TOR is old enough to drink in the USA. PGP is decades old and has open source implementations, but takes a bit of effort to use. People who are motivated to secure the content of communications have been able to do so effectively for a long time, and those technologies aren’t about to be uninvented.

                  Tools like Signal and Veilid make strong privacy protections accessible to the average person. My mother uses Signal. Let’s assume most of us agree that it would be worth giving up those protections to eliminate the use of telecommunication for terrorism, child sexual abuse, human trafficking, and organized crime.

                  How do you propose to do that when extant open source code has long provided similar capabilities to those motivated enough to put in the effort to use it?

        • boatswain@infosec.pub

          Gonna have to agree to disagree here: to my mind, taking away tools that allow people to evade the restrictions of unjust regimes because you buy into the “for the children” fear mongering makes you complicit in the rising totalitarian state.

          • jmk1ng@programming.dev

            The fact that people use “for the children” in inappropriate scenarios to further an agenda has nothing to do with this discussion, and you know it.

            If you make a tool to essentially hide people’s activity online, you KNOW what it’s going to ultimately be used for.

            …and you clearly think it’s worth the trade-off. So no need to continue.

        • diablexical@lemm.ee

          So if a predator locks a victim in a closet, does that make lockmakers morally reprehensible?

            • ferret@sh.itjust.works

              But they do offer an easy, quick, and convenient way for a predator to contain their victim, much simpler than tying them down or holding them.

    • yiliu@informis.land

      That would require that users have access to other users’ traffic, compromising security. After all, there’s no reason the government or corporations couldn’t operate many ‘users’.