• Black616Angel@feddit.de · 11 months ago

    Q: Why didn’t you write this in $NEW_LANGUAGE instead of crufty C++?

    A: I probably should have! $NEW_LANGUAGE is deservedly attracting a lot of attention for its combination of safety, readable syntax, and support for modern programming paradigms. I’ve been trying out $NEW_LANGUAGE and want to write more code in it. But for this I chose C++ because it’s supported on all platforms, lots of people know how to use it, and it still supports high-level abstractions (unlike C).

    Lol

    • lysdexic@programming.devOP · 1 year ago

      I’d love to see benchmarks testing the two, and out of curiosity also including compressed JSON docs to take into account the impact of payload volume.
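
      A rough sketch of the harness I have in mind: each format goes in as an encode callable, and you report output size and wall time. The Candidate struct and the identity baseline below are placeholders of my own, not any library’s API; the real Fleece / protobuf / compressed-JSON calls would slot in as extra entries.

      ```cpp
      #include <chrono>
      #include <cstdint>
      #include <functional>
      #include <iostream>
      #include <string>
      #include <vector>

      // One candidate encoding: a label plus a callable that turns a document
      // (here just a string payload) into bytes.
      struct Candidate {
          std::string name;
          std::function<std::vector<uint8_t>(const std::string&)> encode;
      };

      int main() {
          // Stand-in payload; a real benchmark would load representative JSON docs.
          std::string doc(1 << 20, 'x');

          std::vector<Candidate> candidates = {
              // Placeholder baseline -- swap in Fleece, protobuf, compressed JSON, etc.
              {"identity (baseline)", [](const std::string& d) {
                   return std::vector<uint8_t>(d.begin(), d.end());
               }},
          };

          for (const auto& c : candidates) {
              auto t0 = std::chrono::steady_clock::now();
              auto bytes = c.encode(doc);
              auto t1 = std::chrono::steady_clock::now();
              auto us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
              std::cout << c.name << ": " << bytes.size() << " bytes in " << us << " us\n";
          }
      }
      ```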

      Nevertheless, I think there are two major features that differentiate protobuf and Fleece:

      • Fleece is implemented as an appendable data structure, which might open the door to new use cases,
      • protobuf supports more data types than JSON does, which may be a good or a bad thing depending on your perspective.

      In the end, if the world survived with XML for so long, I’d guess we can live with minor gains just as easily.

        • aes@programming.dev · 11 months ago

          “Appendable” seems like a positive spin on the “truncated YAML file is frighteningly often valid” problem…

          • lysdexic@programming.devOP · 11 months ago

          "Appendable” seems like a positive spin on the (…)

            I don’t think your take makes sense. It’s an append-only data structure that supports incremental changes, and by design it tracks state and versioning. You can squash it if you’d like, but others might see value in it.
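
            To make that concrete, here is a toy append-only document with incremental writes, per-write versions, and a squash step. It only illustrates the idea; the types and method names are mine, not Fleece’s actual format or API.

            ```cpp
            #include <cstddef>
            #include <cstdint>
            #include <iostream>
            #include <map>
            #include <optional>
            #include <string>
            #include <vector>

            // Toy append-only document: every write is kept, and a read resolves a
            // key by scanning from the newest entry backwards. "Squashing" collapses
            // the history into a single generation.
            struct Entry {
                uint32_t version;                  // generation the write belongs to
                std::string key;
                std::optional<std::string> value;  // nullopt = deletion
            };

            class AppendOnlyDoc {
                std::vector<Entry> log_;
                uint32_t version_ = 0;

            public:
                void set(std::string key, std::string value) {
                    log_.push_back({++version_, std::move(key), std::move(value)});
                }
                void erase(std::string key) {
                    log_.push_back({++version_, std::move(key), std::nullopt});
                }
                std::optional<std::string> get(const std::string& key) const {
                    for (auto it = log_.rbegin(); it != log_.rend(); ++it)
                        if (it->key == key) return it->value;
                    return std::nullopt;
                }
                // Rewrite the log with only the latest value per key (drops history).
                void squash() {
                    std::map<std::string, std::optional<std::string>> latest;
                    for (const auto& e : log_) latest[e.key] = e.value;
                    log_.clear();
                    version_ = 1;
                    for (auto& [k, v] : latest)
                        if (v) log_.push_back({version_, k, v});
                }
                std::size_t entries() const { return log_.size(); }
            };

            int main() {
                AppendOnlyDoc doc;
                doc.set("name", "fleece");
                doc.set("name", "Fleece");  // incremental change: the old value stays in the log
                std::cout << *doc.get("name") << ", entries=" << doc.entries() << "\n"; // Fleece, 2
                doc.squash();
                std::cout << *doc.get("name") << ", entries=" << doc.entries() << "\n"; // Fleece, 1
            }
            ```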

    • ck_@discuss.tchncs.de · 1 year ago

      You probably wouldn’t. The main difference is that protobuf is structured while Fleece is unstructured, so you would use it in places where you don’t want to (or can’t) tie yourself to a schema up front.
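
      A rough way to picture that trade-off, using a plain struct as a stand-in for generated protobuf classes and a string-keyed map as a stand-in for a schema-less document (neither is the real protobuf or Fleece API):

      ```cpp
      #include <iostream>
      #include <map>
      #include <string>
      #include <variant>

      // Schema'd: the shape is fixed at compile time (think generated protobuf classes).
      struct Person {
          std::string name;
          int age = 0;
      };

      // Schema-less: any field can appear, checked only at runtime (think Fleece/JSON).
      using Value = std::variant<std::string, int>;
      using Doc   = std::map<std::string, Value>;

      int main() {
          Person p{"Ada", 36};  // misspell or drop a field and it won't compile

          Doc d{{"name", std::string("Ada")}, {"age", 36},
                {"nickname", std::string("countess")}};  // extra fields need no schema change

          std::cout << p.name << "\n";
          if (auto it = d.find("nickname"); it != d.end())
              std::cout << std::get<std::string>(it->second) << "\n";
      }
      ```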

    • lemmyvore@feddit.nl · 1 year ago

      From what I understand it doesn’t need a parsing step: the serialized form is essentially a structure-plus-pointer dump that can be read in place. We’ll see how well that translates to languages other than C, though.
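
      A toy version of what reading in place looks like: the wire format is a small header whose fields are offsets into the same buffer, so “deserializing” is just reading bytes where they already sit. This is only an illustration of the idea (no endianness or bounds handling), not Fleece’s real encoding:

      ```cpp
      #include <cstdint>
      #include <cstring>
      #include <iostream>
      #include <string>
      #include <vector>

      // Toy "dump" format: a fixed header whose fields are offsets into the same
      // buffer. Reading it back is pointer arithmetic over the received bytes --
      // no parse tree gets built.
      struct Header {
          uint32_t name_off;  // offset of a NUL-terminated string within the buffer
          uint32_t value;     // an inline integer field
      };

      std::vector<uint8_t> encode(const std::string& name, uint32_t value) {
          std::vector<uint8_t> buf(sizeof(Header) + name.size() + 1);
          Header h{static_cast<uint32_t>(sizeof(Header)), value};
          std::memcpy(buf.data(), &h, sizeof h);
          std::memcpy(buf.data() + sizeof(Header), name.c_str(), name.size() + 1);
          return buf;
      }

      int main() {
          std::vector<uint8_t> wire = encode("fleece", 42);

          // "Deserialization": copy the fixed header out (keeps alignment/aliasing
          // rules happy) and read the string straight out of the buffer.
          Header h;
          std::memcpy(&h, wire.data(), sizeof h);
          const char* name = reinterpret_cast<const char*>(wire.data() + h.name_off);
          std::cout << name << " = " << h.value << "\n";
      }
      ```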

  • bitcrafter@programming.dev · 11 months ago

    Q: Why the name “Fleece”?

    A: It’s a reference to the mythical Golden Fleece, the treasure sought by **Jason** [emphasis mine] and the Argonauts.

    I see what you did there…

  • demesisx@infosec.pub · 11 months ago

    Piggybacking on this post: what do people think of lambdabuffers? (Not my work, just something I became aware of through the Haskell community.)

    • Lupec@lemm.ee · 11 months ago

      I don’t have much experience with similar tools but that looks quite interesting, thanks for sharing!

      • demesisx@infosec.pub · 11 months ago

        No problem! I plan to reach for them if/when interop becomes difficult when sending data between WASM, Haskell, Plutus, and PureScript.

  • mrkite@programming.dev · 11 months ago

    Interesting. A year ago I was looking for something exactly like this for distributing data between multiple servers. Everything either required a ton of overhead or was too big to use, so I ended up just using JSON. I did discover that Brotli can compress 3 GB of JSON down to just 70 MB nearly instantly.
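
    For anyone curious, the one-shot call in the Brotli C API looks roughly like this. It is only a sketch: minimal error handling, quality turned down from the default so large inputs stay fast, and a tiny inline payload standing in for the real multi-gigabyte file, which you would more realistically feed through Brotli’s streaming BrotliEncoderState API.

    ```cpp
    #include <brotli/encode.h>  // typically linked with -lbrotlienc -lbrotlicommon

    #include <cstdint>
    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
        // Tiny stand-in for the real multi-gigabyte JSON payload.
        std::string json = R"([{"id":1,"name":"a"},{"id":2,"name":"b"}])";

        std::vector<uint8_t> out(BrotliEncoderMaxCompressedSize(json.size()));
        size_t out_size = out.size();

        // Quality 5 is much faster than the default (11) and still compresses
        // repetitive JSON very well.
        BROTLI_BOOL ok = BrotliEncoderCompress(
            /*quality=*/5, BROTLI_DEFAULT_WINDOW, BROTLI_MODE_TEXT,
            json.size(), reinterpret_cast<const uint8_t*>(json.data()),
            &out_size, out.data());

        if (!ok) {
            std::cerr << "brotli compression failed\n";
            return 1;
        }
        std::cout << json.size() << " -> " << out_size << " bytes\n";
    }
    ```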