cross-posted from: https://lemmy.ml/post/15741608

They offer a thing they’re calling an “opt-out.”

The opt-out (a) is only available to companies that are Slack customers, not to end users, and (b) doesn’t actually opt anything out.

When a company account holder tries to opt out, Slack says their data will still be used to train LLMs, but the results won’t be shared with other companies.

LOL no. That’s not an opt-out. The way to opt out is to stop using Slack.

https://slack.com/intl/en-gb/trust/data-management/privacy-principles

  • Gamers_Mate@kbin.run

    Instead of working on their platform to get Discord users to jump ship, they decide to go in the same direction. Also, pretty sure training LLMs on someone’s data after they opt out is illegal?

    • NovaPrime@lemmy.ml
      1. It’s not illegal. 2. “Law” isn’t a real thing in an oligarchy, except insofar as it can be used by those with capital and resources to oppress and subjugate those they consider their lessers, and to further perpetuate the system for self-gain.
    • FaceDeer@fedia.io

      Also pretty sure training LLMs after someone opts out is illegal?

      Why? There have been a couple of lawsuits launched in various jurisdictions claiming LLM training is copyright violation, but IMO they’re pretty weak, and none of them have reached a conclusion. The “opting” status of the writer doesn’t seem relevant if copyright doesn’t apply in the first place.

        • Grimy@lemmy.world

          If copyright applies, only you and Slack own the data. You can opt out, but 99% of users don’t. No users get any money. Google or Microsoft buys Slack so only they can use the data. We only get subscription-based AI, and open source dies.

          If copyright doesn’t apply, everyone owns the data. The users still don’t get any money, but they get free, open-source AI built off their work instead of closed-source AI built off their work.

          Having the website hold the copyright on the content in the context of AI training would be a fucking disaster.

        • FaceDeer@fedia.io

          Nor is it up to you. But the fact remains: it’s not illegal until there are actually laws against it. The court cases that might determine whether current laws already cover it are still ongoing.

  • just another dev@lemmy.my-box.dev

    Customers own their own Customer Data.

    Okay, that’s good.

    Immediately after that:

    Slack […] will never identify any of our customers or individuals as the source of any of these improvements to any third party, other than to Slack’s affiliates or sub-processors.

    You’d hope the owner would get a say in that.

    • Croquette@sh.itjust.works

      An issue with Mattermost is that some useful features, like group calls, are behind a paywall.

      I’d go with Nextcloud, since all of its features are included regardless of whether you pay or not.

  • ConfusedPossum@kbin.social

    I use Slack at work every day. I suppose this does feel off in some way, but I’m not sure I’m the right amount of upset about this? I don’t really mind if they use my data if it improves my user experience, as long as the platform doesn’t reveal anything sensitive or personal in a way that can be traced back to me.

    Slack already does allow your admin to view all of your conversations, which is more alarming to me.
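
    To make that concrete: a minimal sketch, assuming a token that has been granted the channels:history scope (roughly what an admin-approved app would hold), showing that a channel’s full message history is one paginated Web API call away via the official slack_sdk Python package. The token and channel ID below are placeholders, not real values.

    ```python
    # Minimal sketch: page through a channel's message history via
    # Slack's Web API. Token and channel ID are placeholders.
    from slack_sdk import WebClient

    client = WebClient(token="xoxp-placeholder-token")  # needs channels:history scope

    cursor = None
    while True:
        resp = client.conversations_history(
            channel="C0123456789",  # placeholder channel ID
            limit=200,
            cursor=cursor,
        )
        for msg in resp["messages"]:
            # Each message carries the author's user ID and full text.
            print(msg.get("user"), msg.get("text"))
        cursor = resp.get("response_metadata", {}).get("next_cursor")
        if not cursor:
            break
    ```

    The point isn’t that the API is scary by itself; it’s that message content is already programmatically reachable by whoever holds the right token.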

    • SeedyOne@lemm.ee

      The problem is where you said “as long as”, because we already know companies AND the AI itself can’t be trusted not to expose sensitive info inadvertently. At absolute best, it’s another vector to be breached.

      • ConfusedPossum@kbin.social

        It’s obvious when you say it like that. I don’t like the idea of some prompt hacker looking at memes I sent to my coworker.

  • Ultraviolet@lemmy.world

    Remember when every platform renamed PMs to DMs, and everyone who pointed out that they were trying to remove the expectation of privacy was “paranoid”?