- cross-posted to:
- [email protected]
OpenAI, Alphabet, Meta, Anthropic, Inflection, Amazon, and Microsoft committed to developing a system to “watermark” all forms of AI-generated content, from text and images to audio and video, so that users will know when the technology has been used.
Of course the watermark will only apply to their consumer products, maybe their business offerings, and absolutely none of their government or internal ones.
Where did it say that?
It doesn’t say much of anything, I’m just extrapolating from the current trajectory of society.
So, make content with AI, then screen-grab it, removing the watermark?
The watermark would likely combine a few different methods: embedded marker pixel sets that are difficult or impossible to see, in addition to ones that are visible. Think printed currency. I’m not saying there won’t be an arms race to circumvent it, like DRM, or bad actors who counterfeit it, but the work should be done to try to ensure some semblance of reliability in important distributed content.
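To show what the simplest kind of invisible marker could look like, here is a toy sketch of least-significant-bit embedding. It is only an illustration of the idea, not anything these companies have announced: the 8x8 mark, the fixed embedding region, and all names are made up, and a robust scheme would embed redundantly or in the frequency domain so the mark survives screenshots and re-encoding.

```python
import numpy as np

# Toy 8x8 binary mark, invented for illustration.
MARK = np.random.RandomState(42).randint(0, 2, size=(8, 8)).astype(np.uint8)

def embed(image: np.ndarray) -> np.ndarray:
    """Hide MARK in the least significant bits of the top-left 8x8 pixels."""
    out = image.copy()
    out[:8, :8] = (out[:8, :8] & 0xFE) | MARK   # clear each LSB, write mark bit
    return out

def extract(image: np.ndarray) -> np.ndarray:
    """Read the least significant bits back out of the same region."""
    return image[:8, :8] & 1

if __name__ == "__main__":
    # A fake 64x64 grayscale "AI image" stands in for real generator output.
    img = np.random.RandomState(0).randint(0, 256, size=(64, 64)).astype(np.uint8)
    marked = embed(img)
    print("mark recovered:", np.array_equal(extract(marked), MARK))  # True
    # Re-encoding or screenshotting would typically scramble plain LSBs,
    # which is exactly why several embedding methods would be layered.
```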
It’s possible for AI-generated text to be produced such that detection is straightforward, by biasing the probability of word selection. https://youtu.be/XZJc1p6RE78
Here is an alternative Piped link(s): https://piped.video/XZJc1p6RE78
Piped is a privacy-respecting open-source alternative frontend to YouTube.
I’m open-source, check me out at GitHub.
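To make the “probability of word selection” idea from the comment above concrete, here is a minimal “green list” style sketch: at each step a keyed hash of the previous word selects a favored subset of the vocabulary, generation leans toward that subset, and a verifier recomputes the subsets and checks whether the text contains suspiciously many favored words. The vocabulary, key, parameters, and function names are all invented for illustration; a real system would bias an LLM’s token logits instead of picking whole words.

```python
import hashlib
import math
import random

# Toy vocabulary and parameters, invented for illustration.
VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "rug"]
GREEN_FRACTION = 0.5      # fraction of the vocabulary favored at each step
SECRET_KEY = "demo-key"   # hypothetical key shared with the verifier

def green_list(prev_word: str) -> set:
    """Deterministically pick the favored ('green') words for this step,
    seeded by the secret key and the previous word."""
    seed = int(hashlib.sha256((SECRET_KEY + prev_word).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def generate(length: int = 40, bias: float = 0.9) -> list:
    """Generate words, choosing from the green list with probability `bias`."""
    rng = random.Random(0)
    words = ["the"]
    for _ in range(length):
        pool = list(green_list(words[-1])) if rng.random() < bias else VOCAB
        words.append(rng.choice(pool))
    return words

def z_score(words: list) -> float:
    """How far the count of green words exceeds what chance would predict."""
    hits = sum(1 for prev, w in zip(words, words[1:]) if w in green_list(prev))
    n = len(words) - 1
    mean = n * GREEN_FRACTION
    var = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (hits - mean) / math.sqrt(var)

if __name__ == "__main__":
    watermarked = generate()
    rng = random.Random(1)
    plain = [rng.choice(VOCAB) for _ in range(41)]
    print("watermarked z-score:", round(z_score(watermarked), 2))  # large positive
    print("plain-text z-score: ", round(z_score(plain), 2))        # near zero
```

The catch, as the video suggests, is that this only works for text long enough for the statistics to show, and light paraphrasing can wash the signal out.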
This is going to need to happen anyway if these companies want to differentiate between human-generated and AI-generated content for the purpose of training new models.
How do you put a watermark on textual content?
deleted by creator