The United States Copyright Office has clarified the legal standing of trustworthiness research and artificial intelligence (AI) red teaming under Section 1201 of the Digital Millennium Copyright Act (DMCA), stating that common AI research techniques do not constitute a violation. The statement follows repeated requests for clarification from the Hacking Policy Council.
“The DMCA has often been used in the past to criminalize security researchers, in ways that have created a chilling effect and discouraged research in the first place,” says Casey Ellis, founder and advisor at Bugcrowd. “This decision means that security and safety researchers operating in good faith can use the various techniques mentioned in the Register’s recommendation without fear of retaliation under the DMCA. Ultimately, the implication is that good-faith AI and LLM security research, even if not explicitly exempt, has been explicitly characterized as ‘not violating the DMCA’ in the guidance surrounding the decision.”
This clarification gives AI trustworthiness researchers legal grounds to defend against threats of legal action for using common techniques in their research. However, other trustworthiness research techniques may still risk violating Section 1201 of the DMCA.