AI Whistleblowers: The Truth-Tellers Behind the Algorithms

Artificial Intelligence (AI) has the potential to revolutionize industries, from healthcare and finance to law enforcement and social media. But as AI becomes more deeply embedded in our lives, concerns about its ethical implications have grown, largely because so many generative AI companies use copyrighted works to train their systems without copyright owners’ permission or compensation. As the nearly 40 copyright infringement cases filed by copyright owners over the past couple of years demonstrate, such behavior has not been well received.
In addition to copyright holders, numerous former employees of generative AI companies have stepped forward as whistleblowers, willing to publicly expose the flaws and allegedly illegal acts of AI companies and to reveal the dangers of AI’s unchecked development. This blog highlights some of the key AI whistleblowers who have recently reported AI copyright infringement, their motivations for speaking up, and the consequences of their disclosures. Their stories not only expose the darker side of AI but also emphasize the importance of transparency, accountability, and respect for others’ works.
Suchir Balaji
Suchir Balaji was a young artificial intelligence researcher known for his work at OpenAI and his subsequent whistleblowing on AI copyright practices. Balaji grew up in Cupertino, California. After earning his BA in computer science from the University of California, Berkeley, he was recruited by OpenAI co-founder John Schulman and joined the organization in 2021 as an AI researcher. He contributed to projects like GPT-4 and its precursor, WebGPT, focusing on gathering and organizing internet data for AI training. Over the next few years, he grew increasingly concerned about OpenAI’s practices, particularly its use of copyrighted material to train models without permission. In August 2024, he resigned from OpenAI, stating, “If you believe what I believe, you have to just leave the company.”
In October 2024, Balaji published an essay titled “When Does Generative AI Qualify for Fair Use?” on his personal website. He argued that AI models such as ChatGPT violate U.S. copyright law by being trained on and replicating copyrighted works without authorization. His concerns were highlighted in an October 23, 2024, profile by The New York Times, where he emphasized the potential harm to creators’ commercial viability due to AI-generated “imitations.”
The timing of Balaji’s revelations coincided with lawsuits against OpenAI by authors and news publishers alleging copyright infringement. He was identified as a potential witness in these cases. Tragically, on November 26, 2024, Balaji was found dead in his San Francisco apartment from an apparent self-inflicted gunshot wound. Although the San Francisco Police Department and the Office of the Chief Medical Examiner concluded his death was a suicide, his parents and some public figures have called for further investigation into his death, expressing skepticism about the circumstances. Today, Balaji’s contributions to AI research and his courageous stance on ethical practices continue to influence discussions on AI development and intellectual property rights.
Louis Hunt
Louis Hunt, who holds an MBA from Harvard Business School and a law degree from Harvard Law School, was an executive at the generative AI startup Liquid AI. He left his role as CFO and VP of Business Development over “critical problems that need to be solved at the intersection of AI, IP/data, copyright, and law.” His departure came at a critical time in the company’s development: in December 2024, Liquid AI had just closed a $250 million early-stage funding round to develop its Liquid Foundation Models (LFMs).
Around this time, Hunt publicly disputed the claim that AI models do not copy copyrighted works during the training process, providing concrete evidence of verbatim AI-generated reproductions of articles from The New York Times; books from various authors, including Stephen King; and scholarly articles from Harvard Business Publishing. As he put it, “The notion that models do not memorize and regurgitate copyrighted information that they’ve trained on is demonstrably false.” His departure from Liquid AI and his subsequent public statements underscore his commitment to addressing tough issues within the AI community.
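Claims like Hunt’s can be tested in principle with very simple tooling: given a model’s output and a suspected source text, even a few lines of code can surface long word-for-word overlaps that are hard to explain as coincidence. The sketch below is a hypothetical illustration of that idea, not Hunt’s actual methodology; it uses Python’s standard difflib module, and the texts in it are invented.

```python
# A minimal, hypothetical sketch of how verbatim overlap between a model's
# output and a known source text can be surfaced. This is NOT Hunt's actual
# methodology; the texts below are invented for illustration.
from difflib import SequenceMatcher

def longest_verbatim_overlap(model_output: str, source_text: str) -> str:
    """Return the longest word-for-word span shared by both texts."""
    out_words = model_output.lower().split()
    src_words = source_text.lower().split()
    m = SequenceMatcher(None, out_words, src_words).find_longest_match(
        0, len(out_words), 0, len(src_words)
    )
    return " ".join(out_words[m.a : m.a + m.size])

# A long shared span suggests memorization rather than coincidence.
source = "The quick brown fox jumps over the lazy dog near the riverbank."
output = "As the article put it, the quick brown fox jumps over the lazy dog."
span = longest_verbatim_overlap(output, source)
print(f"{len(span.split())} words verbatim: {span!r}")
```

In practice, comparisons like this are run at scale across many prompts and candidate sources; a handful of matching words means little, while paragraph-length verbatim spans are the kind of evidence Hunt pointed to.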
Hunt concluded his note by encouraging readers to read Suchir Balaji’s post explaining, on technical grounds, why AI training on copyrighted works does not qualify for the fair use exception, and credited Balaji “for the courage to surface this information and make it available for the public.”
Ed Newton-Rex
In 2010, after graduating at the top of his class from Cambridge University with a degree in music, Ed Newton-Rex founded Jukedeck, the world’s first AI music company. Jukedeck’s technology has been used to create more than a million original pieces of music and has won numerous awards, including a Cannes Innovation Lion. In 2019, Jukedeck was acquired by ByteDance, TikTok’s parent company, where Newton-Rex initially led the AI music lab and then ran TikTok’s product group in Europe. He next moved to Snapchat, serving as Chief Product Officer for Voisey, a music creation app. He then joined Stability AI as Vice President of Audio, where he led the development of Stable Audio before resigning in November 2023 over concerns about the company’s stance on using copyrighted works to train AI without permission.
In a statement about his resignation that went viral after it was published in a Music Business Worldwide interview on November 15, 2023, Newton-Rex said, “I’ve resigned from my role…at Stability AI, because I don’t agree with the company’s opinion that training generative AI models on copyrighted works is fair use.” Since leaving Stability AI, Newton-Rex, himself a music composer, has been actively engaged in promoting ethical practices in artificial intelligence, particularly concerning creators’ rights and copyright ownership.
In January 2024, Newton-Rex was interviewed by Wired magazine for a piece titled “This Tech Exec Quit His Job to Fight Generative AI’s Original Sin.” That same year, he established Fairly Trained, a non-profit organization dedicated to certifying generative AI companies that respect creators’ rights in their training data practices. The organization offers an “L Certification” to companies that can demonstrate their training data is licensed, in the public domain, offered under an appropriate open license, or originally owned by the company. In October 2024, Newton-Rex published an open letter online stating: “The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted.” The statement has garnered more than 40,000 signatures around the globe. In a November 2024 op-ed for Music Business Worldwide, Newton-Rex critiqued the opt-out schemes proposed by AI companies, arguing they are unfair to rights holders and largely ineffective because they place an undue burden on creators to protect their work from unauthorized AI training.
Newton-Rex continues to be vocal about the challenges and ethical considerations in AI. On December 11, 2024, he addressed the UK Parliament, speaking out against a government consultation proposing a copyright exception for AI developers that scrape online content, asserting that “the UK can lead in AI without upending copyright law and destroying the creative industries…”
Most recently, in February 2025, Newton-Rex commented on a new generative AI lawsuit in which 14 news organizations sued Cohere for alleged infringement through its AI training practices, highlighting the complaint’s detailed list of articles purportedly used without authorization. Through his public actions, interviews, and speaking engagements, Newton-Rex continues to influence discussions on ethical AI development and the protection of creators’ rights in the digital age.
Why Do AI Whistleblowers Speak Out?
Although whistleblowers often face tremendous personal and professional risks, many choose to speak out because of ethical concerns and a sense of moral responsibility. Unfortunately, numerous whistleblowers who have gone public with their concerns have faced consequences such as job loss, industry blacklisting, legal action, and even threats to their well-being. Tech companies often attempt to discredit them or silence them through non-disclosure agreements (NDAs).
Conclusion
Despite these challenges, AI whistleblowers have played a crucial role in sparking public discussions and potential regulatory reforms. Their courage has led to greater awareness, more ethical discussions, and increased regulatory scrutiny of AI systems. As AI continues to shape our world, the voices of these AI whistleblowers remind us that ethical responsibility should go hand in hand with technological progress. By listening to and supporting whistleblowers, we can work toward a future where AI serves humanity and creativity in an ethical and equitable manner, rather than being a tool for unchecked power and profit.