A groundbreaking detection tool awaits release amid concerns and debates.

  • OpenAI has developed a tool to detect AI-generated text with 99.9% accuracy.
  • The release of the tool has been delayed for two years due to internal debates.
  • Concerns include potential negative impacts on non-native English speakers and user retention.
  • Teachers report a significant increase in AI usage among students for schoolwork.
  • The watermarking method embeds a pattern in AI-generated text that is imperceptible to readers but detectable by OpenAI.
  • Previous detection attempts by OpenAI were largely unsuccessful, achieving only 26% accuracy.
  • OpenAI is considering providing the detection tool to educators and external companies.
  • Internal discussions continue about the implications of releasing the tool.

OpenAI has created a highly effective tool capable of detecting when students use ChatGPT to write essays or research papers, with a reported accuracy rate of 99.9%. However, despite its readiness for release, the company has held off for nearly two years due to ongoing internal debates. Employees at OpenAI are torn between their commitment to transparency and the need to maintain user engagement. A survey revealed that about one-third of loyal ChatGPT users might be deterred by the introduction of anticheating technology. Concerns have also been raised about a potential disproportionate impact on non-native English speakers.

The detection tool works by subtly altering the way tokens are selected in AI-generated text, creating a watermark that is invisible to the naked eye but detectable with OpenAI’s technology. The method is considered highly effective: researchers assert it is more likely for the sun to evaporate tomorrow than for a term paper to go unwatermarked. However, there are worries that simple techniques, like translating the text or adding emojis, could erase the watermarks.

Teachers are increasingly alarmed by the rise of AI use in schools; a recent survey found that 59% of educators believe students are using AI to assist with their assignments. OpenAI’s leadership, including CEO Sam Altman, has been involved in discussions about the tool, but its release has not been prioritized.

While Google has developed a similar watermarking tool for its AI, OpenAI has focused more on audio and visual watermarking because of the higher risks involved, especially in a politically charged environment. A previous OpenAI attempt at a detection algorithm was withdrawn after achieving only a 26% detection rate.
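OpenAI has not published the details of its token-selection watermark, but a well-known public analogue is "green-list" watermarking, in which a hash of the preceding token seeds a pseudorandom split of the vocabulary and generation is biased toward the "green" half. The sketch below is a toy illustration of that public idea, not OpenAI's actual scheme; the vocabulary and function names are hypothetical:

```python
import hashlib
import random

# Toy vocabulary standing in for a real tokenizer's vocab (assumption for illustration).
VOCAB = [f"tok{i}" for i in range(1000)]

def green_list(prev_token: str, vocab, fraction=0.5):
    """Seed a PRNG with a hash of the previous token to pick a 'green' subset of the vocab."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))

def generate_watermarked(first_token: str, length: int, vocab):
    """Toy 'generation': always emit a token from the previous token's green list.
    A real model would instead softly bias its sampling distribution toward green tokens."""
    tokens = [first_token]
    for _ in range(length):
        tokens.append(min(green_list(tokens[-1], vocab)))  # deterministic pick for the demo
    return tokens

def green_fraction(tokens, vocab):
    """Detector: fraction of tokens that fall in their predecessor's green list."""
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(1 for prev, cur in pairs if cur in green_list(prev, vocab))
    return hits / max(len(pairs), 1)
```

Unwatermarked text scores near the green fraction (0.5 here) by chance, while watermarked text scores far higher. The sketch also suggests why translation or heavy edits can erase the signal: once the token sequence changes, green-list hits fall back toward chance.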
As the debate continues, OpenAI is exploring options for distributing the detection tool, potentially to educators or to companies that help schools identify AI-generated work. The internal discussions reflect a broader concern about the implications of AI in education and the need for responsible management of such technologies.

Factuality Level: 7
Factuality Justification: The article provides a detailed account of OpenAI’s internal discussions regarding the development of an anticheating tool, including various perspectives and concerns from employees and educators. While it presents factual information, some sections may contain opinions or interpretations that could be seen as biased. Additionally, the article could benefit from more clarity and conciseness, as it includes some tangential details that may distract from the main topic.
Noise Level: 7
Noise Justification: The article provides a detailed account of OpenAI’s internal discussions regarding the development of an anticheating tool, including various perspectives and concerns. It presents evidence from surveys and expert opinions, which supports its claims. However, while it raises important issues about AI misuse in education, it could benefit from a more critical analysis of the implications of such technology and its potential impact on students and educators.
Public Companies: OpenAI, News Corp, Google
Key People: Sam Altman (Chief Executive Officer of OpenAI), Mira Murati (Chief Technology Officer of OpenAI), Alexa Gutterman (High school English and journalism teacher), John Thickstun (Stanford researcher), Mike Kentz (AI consultant for educators), Josh McCrain (Political-science professor at the University of Utah), Scott Aaronson (Computer-science professor), John Schulman (Co-founder of OpenAI)

Financial Relevance: Yes
Financial Markets Impacted: The article discusses OpenAI’s potential anticheating tool which could impact the education technology market and companies involved in AI development.
Financial Rating Justification: The article pertains to financial topics as it discusses OpenAI, a significant player in the AI industry, and its decisions regarding technology that could influence market dynamics and educational practices, potentially affecting various companies and sectors.
Presence Of Extreme Event: No
Nature Of Extreme Event: No
Impact Rating Of The Extreme Event: No
Extreme Rating Justification: The article discusses internal debates at OpenAI regarding the development of an anticheating tool for AI-generated text, but it does not mention any extreme events such as natural disasters, crises, or accidents.

Reported publicly: www.wsj.com