Dr. Yeasted,
Should the use of AI be disclosed, like a nutrition label, on writings that we’re supposed to take as fact, such as medical journals, history books, etc.? Like citations.
–Matt, Massachusetts.
It would be ideal to watermark all AI-generated content. The larger tech companies have been in talks with the government to make this very thing happen.
The difficulty lies not only in getting all AI models to follow this requirement, but also in making the watermark difficult to remove. One solution is to use metadata to “tag” every image, video, or paragraph produced and then have that metadata stay with the product even if the product is copied or altered. Perhaps the best outcome would be for humans to sign everything we make, for AI to sign everything it makes, and for anything without a signature to be questioned.
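To make the signing idea concrete, here is a minimal sketch in Python using the widely available `cryptography` package. The JSON “envelope” format and the `sign_content`/`verify_content` helpers are hypothetical illustrations of the concept, not any existing watermarking standard; real provenance schemes are considerably more elaborate.

```python
# A minimal sketch of "everyone signs what they make", assuming the
# third-party `cryptography` package (pip install cryptography). The
# envelope format and function names are hypothetical illustrations.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_content(text: str, private_key: Ed25519PrivateKey) -> str:
    """Wrap a paragraph in a metadata envelope carrying its signature."""
    signature = private_key.sign(text.encode("utf-8"))
    envelope = {
        "content": text,
        "signature": signature.hex(),  # metadata that travels with the text
    }
    return json.dumps(envelope)


def verify_content(envelope_json: str, public_key: Ed25519PublicKey) -> bool:
    """Return True only if the signature matches the content exactly."""
    envelope = json.loads(envelope_json)
    try:
        public_key.verify(
            bytes.fromhex(envelope["signature"]),
            envelope["content"].encode("utf-8"),
        )
        return True
    except InvalidSignature:
        # The content was altered, or it was never signed by this key:
        # exactly the "question anything without a valid signature" case.
        return False


# Example: an AI system signs its output; anyone with the public key checks it.
ai_key = Ed25519PrivateKey.generate()
signed = sign_content("This paragraph was produced by a language model.", ai_key)
print(verify_content(signed, ai_key.public_key()))  # True

tampered = signed.replace("language model", "human historian")
print(verify_content(tampered, ai_key.public_key()))  # False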
If such technological solutions prove inadequate, legal enforcement could be an option. There is precedent for this: it is illegal to counterfeit money, for example, and trying to pass off AI-generated content as true information could be viewed in a similar way.
While labeling anything created by AI with a watermark would entail a small sacrifice on the part of the tech companies, it is in their best interest to do so. It would help foster public trust, and it would prevent rampant misinformation from eroding the credibility of any content a company presents, thus preserving the company’s integrity.
Watermarking would also protect actors and writers from theft of their likenesses and intellectual property, and it would protect election candidates from widely disseminated smear campaigns.
So yes, it seems best to have AI-generated content clearly labeled as such. This is in keeping with one of the core principles of AI ethics: transparency.
Thanks so much for your question, Matt!