The following message was originally submitted through this form as a comment on this U.S. Copyright Office proposal.

YouTube’s ContentID and other, similar systems are notorious for violating American citizens’ First Amendment right to Freedom of Speech as exercised through Fair Use. The Supreme Court has previously held that, without Fair Use, Copyright law as it stands would be an unconstitutional abridgement of our fundamental legal rights. Yet systems like ContentID neither tell the user which part of a newly created work allegedly infringes, nor do they consistently identify which existing work is supposedly being infringed.

In many cases, putative “rights holders” have used ContentID to lodge spurious claims against works that contain no trace of any existing work, infringing or otherwise, such as videos of authors typing at a keyboard while livestreaming their creative process to their audience; the mere sound of “typing on a generic computer keyboard” has been falsely claimed as copyrighted by these fraudulent companies. Systems like ContentID have also been weaponized by public officials committing civil rights violations: by playing copyrighted material from their phones, they ensure that any recording of their conduct is flagged, preventing their victims from streaming or uploading public evidence of those crimes.

Most significant of all, systems like ContentID cannot recognize when the sampling of an existing copyrighted work is a valid Fair Use, such as for review, comment, or parody, rather than an infringement. Film critics on YouTube and other platforms are routinely forced to guess blindly which portions of their often lengthy commentary videos have been flagged, and must then re-cut or mute portions of the video in order to exercise their First Amendment rights, sometimes even silencing their speech entirely by making it impossible to quote a specific scene in order to comment on or criticize it.
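To make that technical limitation concrete, here is a deliberately simplified sketch, in Python, of the kind of fingerprint matching such systems perform. It is not ContentID’s actual algorithm, which is proprietary, and every name and number in it is hypothetical. The point it illustrates is that the software’s only output is “how much of the upload overlaps a catalogued work”; nothing in its inputs or outputs captures purpose, commentary, or transformation, so Fair Use is not merely something the software gets wrong, it is a question the software never asks.

    from collections import defaultdict

    CHUNK = 5  # toy window size; real systems use acoustic/visual fingerprints

    def fingerprints(samples):
        # Hash every overlapping window of the signal.
        return {hash(tuple(samples[i:i + CHUNK]))
                for i in range(len(samples) - CHUNK + 1)}

    def scan(upload, catalog):
        # Report how many windows of the upload match each catalogued work.
        # Note what is absent: nothing about why the material was used,
        # whether it is quoted for review or parody, or how much original
        # commentary surrounds it. Fair Use cannot be judged from this data.
        up = fingerprints(upload)
        hits = defaultdict(int)
        for title, reference in catalog.items():
            hits[title] = len(up & fingerprints(reference))
        return {title: n for title, n in hits.items() if n > 0}

    # Hypothetical example: a critic quotes a short clip inside a longer review.
    catalog = {"Some Film (reference copy)": [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]}
    review = [7, 7, 7] + [1, 5, 9, 2, 6, 5] + [8, 8, 8, 8]  # commentary + quote + commentary
    print(scan(review, catalog))  # flags a "match" with no notion of context

Real systems are far more sophisticated at matching than this toy, but the shape of the problem is the same: matching is all they do.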

As a professional software engineer, I implore you to reject any broader use of these automated measures: no current software, and none on the foreseeable horizon, is capable of the human-like reasoning that a Fair Use determination requires, even when it is sold under the misleading labels of “machine learning” or “artificial intelligence”.