Bipartisan bill aims to fight AI deepfake misuse through watermarking and provenance.

The Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED) has been introduced in the U.S. Senate to strengthen protections for creators and regulate how AI-generated content is produced and labeled. If passed, the bill would bring much-needed transparency to AI-generated content and empower creators such as journalists, artists, and musicians to retain control over their work.

Under the proposed legislation, AI service providers such as OpenAI would be required to embed machine-readable information about the origin of the content they generate, and to ensure that this information cannot be manipulated or removed with AI-based tools. The Federal Trade Commission (FTC) would oversee enforcement of the COPIED Act, treating violations as unfair or deceptive acts, much like other violations of the FTC Act.
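For illustration only, the sketch below shows one simple way "machine-readable origin information" can be attached to an image today, using Python and the Pillow library to embed a small provenance record in a PNG's metadata. The field names, file names, and record structure are hypothetical and are not defined by the COPIED Act, and plain text metadata like this has none of the tamper resistance the bill calls for; real deployments would lean on standards such as C2PA content credentials and robust watermarking.

```python
import json
from datetime import datetime, timezone

from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Hypothetical provenance record; the COPIED Act does not specify these fields.
provenance = {
    "generator": "example-image-model",          # assumed tool name
    "generated_by_ai": True,
    "created_at": datetime.now(timezone.utc).isoformat(),
    "details": "https://example.com/model-card",  # placeholder URL
}

# Re-save a locally generated image with the record embedded as a PNG text
# chunk: machine-readable, but trivially strippable (unlike a robust watermark).
img = Image.open("generated.png")
metadata = PngInfo()
metadata.add_text("ai_provenance", json.dumps(provenance))
img.save("generated_with_provenance.png", pnginfo=metadata)

# Any downstream tool can read the record back out of the tagged file.
with Image.open("generated_with_provenance.png") as tagged:
    print(json.loads(tagged.text["ai_provenance"]))
```

The gap between this kind of removable metadata and provenance that "cannot be manipulated or removed" is precisely what the bill asks regulators and standards bodies to close.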

The rise of AI has sparked ethical debates, partly because of the technology's capacity to scrape large amounts of data from the internet and to convincingly imitate real people. According to a report from Bitget, such misuse could lead to estimated losses of $10 billion by 2025.

In the crypto space, scammers have exploited AI to impersonate well-known figures like Elon Musk and Vitalik Buterin to deceive users. The COPIED Act aims to address these concerns by giving creators control over whether their content can be used to train AI models and by protecting them from unauthorized use of their work.

This legislation also aims to hold AI service providers accountable for the content they generate and prevent fraudulent activities facilitated by AI technology.
