Surely the use of user-deleted content as training data carries the same liabilities as reinstating it on the live site? I’ve checked my old content and it hasn’t been reinstated. I’d assume such a dataset would inherently contain personal data protected by the right to erasure under GDPR; otherwise there’d be nothing stopping them from using it for both purposes. If that’s correct, then regardless of how they filtered it, the data would be risky to use.
Perhaps the cumulative action of disenfranchised users could result in both the devaluation of any dataset based on a future checkpoint and a reduction in average post quality, leading to decreased popularity over time (if we assume content that gets user-deleted en masse was useful, which I think is fair).
I think you need to make a special request to get the level of deletion that GDPR provides. I’m not certain; I just remember other users talking about how you have to send them an email before they’re obliged to comply.
I also wouldn’t be surprised if their dataset is mostly stripped of usernames to get around GDPR, though I’m no expert.
All that to say, I’d be very, very surprised if they deleted comments from their dataset.
Very valid point about devaluing the user experience though, especially when you take Google searches into account. I’m sure they’ve already fallen off compared to a year ago, when Reddit would pop up half the time no matter what you searched.
Well, that’d be the mechanism by which GDPR protections are actioned, yes; but leaving themselves broadly open to these ramifications would still be risky. I don’t think it would satisfy ‘compliance’ to simply ignore GDPR except upon request. The issues are perhaps even more significant when using the data for training, given they’re investing compute and could be forced to re-train down the track.
Based on my understanding, de-identifying the dataset wouldn’t be sufficient for compliance. That’s largely how things worked before GDPR, but companies ended up just re-identifying data by cross-referencing multiple de-identified datasets, and closing that loophole is part of why GDPR protections are as comprehensive as they are.
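To illustrate what I mean by cross-referencing (purely a toy sketch; the datasets, column names, and values here are all made up), stripping usernames doesn’t help much when quasi-identifiers survive and can be joined against some other public dataset:

```python
# Toy re-identification by linkage: join two "de-identified" datasets
# on shared quasi-identifiers. All data here is hypothetical.
import pandas as pd

# Dataset A: comments with usernames stripped, but quasi-identifiers kept.
posts = pd.DataFrame({
    "post_id": [1, 2, 3],
    "zip_code": ["90210", "10001", "90210"],
    "birth_year": [1985, 1992, 1985],
    "text": ["first comment", "second comment", "third comment"],
})

# Dataset B: a separate public dataset (e.g. a leaked voter roll)
# that maps the same quasi-identifiers back to real identities.
identities = pd.DataFrame({
    "name": ["Alice", "Bob"],
    "zip_code": ["90210", "10001"],
    "birth_year": [1985, 1992],
})

# A plain inner join re-attaches names to the "anonymous" posts.
reidentified = posts.merge(identities, on=["zip_code", "birth_year"])
print(reidentified[["post_id", "name", "text"]])
```

The point being that removing the obvious identifier (the username) does nothing about the combination of attributes that can uniquely pick someone out.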
There’d almost certainly be actors who previously deleted their content and would later seek to verify whether it was used to train any public AI.
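Even something as crude as this probe would be a starting point (a hypothetical sketch: the model name is a stand-in and the text is a placeholder; a verbatim completion is suggestive of memorization, not proof of inclusion):

```python
# Hypothetical memorization probe: prompt a model with the opening of a
# previously deleted comment and see whether it continues it verbatim.
from transformers import AutoModelForCausalLM, AutoTokenizer

deleted_comment = "The exact text of a comment I deleted years ago, word for word."
prompt, expected = deleted_comment[:40], deleted_comment[40:].strip()

tok = AutoTokenizer.from_pretrained("gpt2")  # stand-in for the model under test
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=50, do_sample=False)
completion = tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

# A verbatim continuation of text that only existed in the deleted comment
# would be strong evidence it ended up in the training set.
print("verbatim match:", completion.strip().startswith(expected[:20]))
```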
Definitely fair to say I’m making some assumptions, but essentially I think at a certain point trying to use user-deleted content as a value-add just becomes riskier than it’s worth for a public company.
Why would that be? It’s not the same.
And what liabilities would there be for reinstating it on the live site, for that matter? Have there been any lawsuits?