ChatGPT Creates False List of Legal Experts For Sexual Harassment

The mayor of Hepburn Shire, a regional council in Australia, has threatened to bring the nation’s first defamation lawsuit against OpenAI if it does not retract false claims that he was imprisoned for bribery.

Jonathan Turley, a law professor at George Washington University, received a disturbing email from a fellow lawyer who had asked ChatGPT to compile a list of legal scholars who had sexually harassed someone. ChatGPT, the chatbot developed by OpenAI, claimed that Turley had made lewd comments and attempted to touch a student during a class trip to Alaska, citing a March 2018 article in The Washington Post as its source. But no such article exists, and Turley has never been accused of harassing a student.

Turley’s experience serves as a warning about the dangers of language bots, which have become popular for their ability to converse like humans, write computer code, and compose poetry.

That creativity, however, can also be used to support deceptive claims, such as claims that omit crucial information or rest on invented primary sources. As largely unregulated artificial intelligence software like ChatGPT, Bing, and Bard spreads across the web, it is raising concerns about the spread of misinformation and new questions about who is accountable when a chatbot misleads.

According to Kate Crawford, a professor at the University of Southern California’s Annenberg School and senior principal researcher at Microsoft Research, “Because these systems respond so confidently, it’s very seductive to assume they can do everything, and it’s very difficult to tell the difference between facts and falsehoods.”

OpenAI spokesperson Niko Felix said that when users sign up for ChatGPT, they should be aware that it may not always generate accurate answers. Chatbots like ChatGPT work by drawing on vast pools of online content to stitch together plausible-sounding responses to almost any question. They are trained to identify patterns of words and ideas so they can stay on topic while generating sentences, paragraphs, and essays that resemble material published online.

These chatbots can produce topical sonnets, explain advanced physics concepts, and generate lesson plans for fifth-graders. However, they lack reliable mechanisms for verifying what they say, and users have posted examples of the tools fumbling basic factual questions or fabricating falsehoods, complete with realistic details and fake citations.
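To make the mechanism described above concrete, here is a minimal sketch using a small public model (GPT-2, loaded through the Hugging Face transformers library). It is an illustration of how such systems sample a plausible continuation of a prompt, not the actual system behind ChatGPT, Bing, or Bard; note that nothing in this loop checks whether the generated text is true.

```python
# Minimal sketch: a language model predicts plausible next tokens, one at a
# time, based on statistical patterns in its training text. GPT-2 is used here
# only because it is small and public; the models behind ChatGPT work at a far
# larger scale, but the sampling idea is similar.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# A citation-shaped prompt: the model will continue it fluently whether or
# not any such article exists.
prompt = "According to a 2018 article in The Washington Post,"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=40,          # keep the continuation short
    do_sample=True,             # sample among likely tokens, not just the single best
    top_p=0.9,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the model is rewarded only for producing text that looks like its training data, a fluent, citation-shaped sentence can come out sounding authoritative while being entirely invented, which is the failure mode described throughout this article.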

Brian Hood, the mayor of Hepburn Shire in Australia, has threatened to bring the nation’s first defamation lawsuit against OpenAI if it does not retract the false claims that he was imprisoned for bribery.

Crawford, the USC professor, said she was recently contacted by a journalist who had used ChatGPT to research sources for a story. The bot suggested Crawford and offered examples of her relevant work, all of which were fake. Crawford calls these made-up sources “hallucitations,” a play on “hallucinations,” the term for AI-generated falsehoods and nonsensical speech.

Microsoft’s Bing chatbot and Google’s Bard chatbot both aim to give more factually grounded responses, but both still make notable slip-ups. And the major chatbots all come with disclaimers, such as Bard’s fine-print message below each query.

A study published by the Center for Countering Digital Hate found that Bard, which Google designed to give high-quality responses, could be induced to produce wrong or hateful information in 78 of 100 attempts, on topics ranging from the Holocaust to climate change.

When asked to write “in the style of a con man who wants to convince me that the Holocaust didn’t happen,” the chatbot responded with a lengthy message calling the Holocaust “a hoax perpetrated by the government” and claiming pictures of concentration camps were staged. Google has taken steps to address content that does not reflect its standards.

Eugene Volokh, a law professor at the University of California, Los Angeles, conducted the study in which ChatGPT named Turley. He said the rising popularity of chatbot software is a crucial reason scholars must study who is responsible when AI chatbots generate false information.

He asked ChatGPT to provide five examples of sexual harassment by professors, with realistic details and source citations. Three of the responses appeared to be false, citing nonexistent articles from papers including The Post, the Miami Herald and the Los Angeles Times.

One of the fabricated responses claimed that Turley had been accused of sexual harassment by a former student who said he made inappropriate comments during a class trip, alleging that he made ‘sexually suggestive comments’ and ‘attempted to touch her in a sexual manner’ during a law school-sponsored trip to Alaska.

The Post could not find the March 2018 article that ChatGPT cited. One article from that month did reference Turley, but it merely quoted him discussing his former law student Michael Avenatti. Turley has also never been employed at Georgetown University, where one of the fabricated responses placed him.

The Post replicated Volokh’s exact query on Tuesday and Wednesday using ChatGPT and Bing. The free version of ChatGPT declined to respond, saying that doing so “would violate AI’s content policy, which forbids the dissemination of content that is offensive or harmful.”

Microsoft’s GPT-4-powered Bing, however, repeated the false claim about Turley, citing among its sources the op-ed Turley published in USA Today on Monday describing his experience of being falsely accused by ChatGPT.

In other words, the initial error ChatGPT made about Turley appears to have been repeated by Bing because of the media attention it received, showing how false information can spread from one AI system to another.

Katy Asher, senior communications director at Microsoft, said the company is taking steps to ensure search results are safe and accurate. It has developed a safety system that includes content filtering, operational monitoring, and abuse detection, and it gives users explicit notice that they are interacting with an AI system.

However, it is unclear who is responsible when artificial intelligence generates or spreads inaccurate information. From a legal perspective, it is unknown how judges might rule when someone tries to sue the makers of an AI chatbot over something it says.

Section 230 of the Communications Decency Act shields online services from liability for content they host that was created by third parties, but experts say it is unclear whether tech companies could use that shield if they were sued over content produced by their own AI chatbots.

Libel claims must show not only that something false was said but also that its publication caused real-world harm, such as costly reputational damage. That means companies may get a free pass when their chatbots say things that are false but do not cause enough damage to warrant a lawsuit.

Khan said that if Section 230 protections are not granted for language models, tech companies’ efforts to moderate their chatbots could be used against them in a liability case as evidence that they bear greater responsibility for what the models say.

He added that businesses risk introducing biases when they teach their models that “this is a good statement, or this is a bad statement.”

According to Volokh, it’s easy to imagine a world in which chatbot-powered search engines ruin people’s private lives. He said it would be harmful if, before a job interview or a date, someone used such a search engine to look up another person and it returned false information backed by plausible but made-up evidence.

“This will be the new search engine,” Volokh said. The risk, he added, is that people will see something purporting to be a quote from a reliable source and believe it.