Accusations Against Jeff Hancock Spark Debate on AI and Misinformation
Jeff Hancock, a communication professor and technology expert at Stanford University, is facing allegations of using artificial intelligence (AI) to fabricate parts of a court declaration. Hancock, the founding director of Stanford’s Social Media Lab, filed a 12-page statement in defense of Minnesota’s 2023 law criminalizing the use of deepfakes to influence elections. The law has been challenged by Republican Minnesota State Representative Mary Franson and conservative satirist Christopher Kohls, who argue it infringes on free speech rights.
Hancock’s declaration, submitted on behalf of Minnesota Attorney General Keith Ellison, claimed that deepfakes significantly amplify misinformation by making it harder to verify authenticity. His statement, which contained 15 citations, included two unverifiable journal references. The articles, titled “Deepfakes and the Illusion of Authenticity” and “The Influence of Deepfake Videos on Political Attitudes and Behavior,” could not be located in the archives of the journals they were attributed to. This discrepancy led the plaintiffs’ attorney, Frank Bednarz, to allege that the citations were likely AI-generated fabrications. Bednarz urged the court to disregard Hancock’s testimony in its deliberations on the case.
Claims of AI ‘Hallucinations’ Challenge Credibility
In a filing on November 16, Bednarz criticized the questionable citations, suggesting that Hancock or his team relied on generative AI tools like ChatGPT, which may have “hallucinated” the references. Such AI-generated inaccuracies, he argued, undermine the reliability of the entire declaration. Hancock, who was paid $600 per hour for his expert testimony, had sworn under penalty of perjury that his statements were accurate and true.
The law in question penalizes the dissemination of deepfakes aimed at misleading voters. Deepfakes, which are AI-generated media that convincingly alter a person’s likeness or voice, have been a growing concern for their potential to sway public opinion. Hancock’s declaration highlighted the threat these technologies pose to traditional fact-checking mechanisms, a point central to the state’s defense of the law. However, the alleged AI-generated errors in his testimony have cast doubt on his credibility.
The Daily, along with other media outlets, has attempted to contact Hancock for clarification. So far, he has not responded to the allegations.
Context of the Case and Broader Implications
Hancock, a prominent voice in technology and misinformation studies, has frequently addressed the challenges posed by AI. He appeared in a 2024 Netflix documentary featuring Bill Gates, discussing the future of AI, and is slated to teach a course titled “Truth, Trust, and Tech” in the spring. However, this controversy has placed his expertise and practices under scrutiny.
Christopher Kohls, one of the plaintiffs in the case, is no stranger to legal battles over free speech and media manipulation. Known as Mr. Reagan on social media, Kohls has previously challenged similar legislation in California targeting deceptive election-related content. His involvement in this case reflects broader concerns over government overreach and censorship in the regulation of deepfake technology.
The controversy surrounding Hancock’s testimony highlights the growing tension between AI’s potential benefits and its risks. As the legal case unfolds, it raises critical questions about the reliability of AI in academic and legal settings, as well as the role of experts in guiding policy decisions.