Edward Tian, a 22-year-old Princeton University student majoring in computer science and journalism, created GPTZero, an app designed to detect misuse of the popular chatbot ChatGPT in the classroom.
Since January, GPTZero has amassed 1.2 million registered users. Tian is now introducing a new programme called Origin that aims to “save journalism” by separating fact from AI-generated misinformation in online media.
Emad Mostaque, the CEO of Stability AI Ltd., and Jack Altman are among the tech investors who contributed to Tian’s $3.5 million funding round, which was co-led by Uncork Capital and Neo Capital.
To determine when AI is being used, GPTZero analyses the unpredictability of a text, known as perplexity, and the variation of that unpredictability across the text, known as burstiness; human writing tends to be burstier than machine output. According to the company, the tool identifies human text with 99% accuracy and AI text with 85% accuracy.
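GPTZero’s exact scoring is not public, but the two signals can be illustrated with a short sketch. The following is a minimal, hypothetical Python example, assuming GPT-2 (via the Hugging Face transformers library) stands in for whatever model GPTZero actually uses, along with a naive full-stop sentence splitter:

```python
# A minimal, hypothetical sketch of the two signals described above.
# Assumption: GPT-2 stands in for whatever model GPTZero actually uses;
# the real scoring is not public.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """How 'surprised' the model is by the text (lower = more predictable)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the mean
        # cross-entropy loss; perplexity is its exponential.
        loss = model(ids, labels=ids).loss
    return math.exp(loss.item())

def burstiness(text: str) -> float:
    """Standard deviation of per-sentence perplexity (naive '.' splitting)."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    scores = [perplexity(s) for s in sentences]
    mean = sum(scores) / len(scores)
    return math.sqrt(sum((x - mean) ** 2 for x in scores) / len(scores))
```

On this account, low perplexity means the model finds the text predictable, and low burstiness means sentence-level perplexity varies little; both point toward machine-generated text.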
The 10-person team is in discussions about AI-detection and analysis partnerships with major media outlets such as the BBC, and with business leaders including Mark Thompson, the former CEO of the New York Times.
The business envisions using its technology in a variety of industries, including trust and safety, government, copyright, finance, and law. “We think we can gather the brightest minds working on AI detection in one place,” said Tian. “Since detection is a relatively new field, we think it merits more support.”
The company behind ChatGPT, OpenAI, has released its own text classifier for identifying machine-generated content, but it is far from perfect. The tool correctly labels only 26% of AI-written text as “likely AI-written”, while mistakenly flagging human-written text as AI-written 9% of the time.
The company acknowledges on its website that “our classifier has a number of significant limitations” and says it should be used as a supplement to other methods of determining a text’s author, not as the sole basis for decisions.
This unreliability is a problem for educators. Because no detection tool is 100% accurate, it is difficult for teachers to take action even when a student’s essay is flagged with, say, a 70% likelihood of being AI-generated.
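A quick Bayes’ rule calculation makes the difficulty concrete. Using OpenAI’s published rates from above and an assumed, purely hypothetical base rate that 10% of submitted essays are AI-written, a flag by itself is weak evidence:

```python
# Bayes' rule on the classifier figures above, with an ASSUMED
# (hypothetical) base rate that 10% of submitted essays are AI-written.
tpr = 0.26   # P(flagged | AI-written): reported true-positive rate
fpr = 0.09   # P(flagged | human-written): reported false-positive rate
base = 0.10  # assumed share of essays that are AI-written

p_flagged = tpr * base + fpr * (1 - base)
p_ai_given_flag = tpr * base / p_flagged
print(f"P(AI-written | flagged) = {p_ai_given_flag:.0%}")  # ~24%
```

Under these assumptions, even a positive flag would leave roughly a three-in-four chance the essay was written by a human, nowhere near the standard needed for an academic-misconduct finding.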
“I don’t think we know what to do with a flag that says there might be an issue,” said Jack Cushman, director of the Harvard Library Innovation Lab, which studies issues such as the effects of the internet. At that point, all a teacher can do is speak with the student and suggest that, based on the tool, they may have engaged in academic dishonesty.
PeakMetrics CEO and co-founder Nick Loui considers AI-generated text a lesser threat than deepfake videos, which he sees as having far greater potential for harm.
The short shelf life of current detection tools and their lack of a clear path to monetization make it challenging to attract investment. Sheila Gulati, managing director of Tola Capital, nevertheless expects the field to eventually reach a much more advanced state.
Open sourcing benefits large language model products by lowering costs, improving transparency, and fostering innovation. However, as GPTZero’s Alex Cui explains, it can also leave detection tools more vulnerable to exploits and easier to hack.