Innovation

AI Coach Detects Hallucinations in AI Models

By News Room
July 11, 2024

A new open-source AI model called Lynx, developed by Patronus AI, aims to address hallucinations, the factual mistakes generative AI models make. Built by Patronus founders Anand Kannappan and Rebecca Qian, both former Meta AI researchers, Lynx promises to detect such errors faster, more cheaply, and more reliably, without human intervention. By fine-tuning Meta’s Llama 3 language model on examples of hallucinations paired with correct responses, Patronus claims Lynx outperforms other leading AI systems at detecting factual inaccuracies.
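The article doesn’t describe Lynx’s exact interface, but the workflow it outlines, a judge model that labels another model’s answer as faithful or hallucinated given reference context, can be sketched with the Hugging Face transformers library. In the minimal sketch below, the checkpoint name and the PASS/FAIL prompt format are illustrative assumptions, not Patronus AI’s documented API:

```python
# Sketch of LLM-as-judge hallucination detection in the style the article
# describes. The model ID and prompt wording are assumptions for
# illustration, not Patronus AI's documented interface.
from transformers import pipeline

# Hypothetical checkpoint name; substitute whichever Lynx release you use.
judge = pipeline("text-generation", model="PatronusAI/Lynx-8B")

def is_faithful(question: str, context: str, answer: str) -> str:
    """Ask the judge model whether `answer` is supported by `context`."""
    prompt = (
        "Given the question and reference context, decide whether the "
        "answer contains a hallucination. Reply PASS or FAIL.\n"
        f"Question: {question}\nContext: {context}\nAnswer: {answer}\n"
        "Verdict:"
    )
    out = judge(prompt, max_new_tokens=5, do_sample=False)
    # The pipeline returns prompt + completion; keep only the verdict.
    return out[0]["generated_text"][len(prompt):].strip()

print(is_faithful(
    "When was the Eiffel Tower completed?",
    "The Eiffel Tower was completed in 1889.",
    "It was completed in 1925.",  # should be judged a hallucination
))
```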

Kannappan and Qian started the company with the goal of providing scalable oversight for AI systems that outperform human capabilities. They hope Lynx can serve as a “coach” for other AI models, guiding them toward more accurate and reliable behavior during development. Their conversations with company executives revealed a shared fear of launching AI products that make headlines for the wrong reasons; by using Lynx to uncover hallucinations during development, companies can catch such blunders before their applications launch.

Currently, AI products are stress-tested before shipment with techniques like “red teaming” and evaluation by general-purpose models such as GPT-4. Kannappan believes these approaches may not be effective at identifying errors and hallucinations. Lynx, by contrast, was trained to reason about why an answer is wrong when given additional background information. The company has also introduced HaluBench, a benchmark that rates how effectively different AI models detect hallucinations across legal, financial, and medical domains.
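HaluBench itself isn’t detailed in the piece, but benchmarks of this kind reduce to labeled (question, context, answer) examples scored per domain. A generic evaluation loop might look like the following sketch, with the field names and detector interface invented for illustration:

```python
# A HaluBench-style evaluation loop: score a hallucination detector on
# labeled examples and report per-domain accuracy. The example fields
# ("domain", "hallucination") and detector signature are hypothetical.
from collections import defaultdict

def evaluate(detector, examples):
    """detector(question, context, answer) -> bool (True = hallucination).
    Each example carries a gold boolean label and a domain tag."""
    correct, total = defaultdict(int), defaultdict(int)
    for ex in examples:
        pred = detector(ex["question"], ex["context"], ex["answer"])
        total[ex["domain"]] += 1
        correct[ex["domain"]] += int(pred == ex["hallucination"])
    return {d: correct[d] / total[d] for d in total}

# Toy data: per-domain accuracy for a trivial stand-in detector that
# flags any answer not appearing verbatim in the context.
toy = [
    {"question": "What is the fee?", "context": "The fee is $50.",
     "answer": "$50", "hallucination": False, "domain": "financial"},
]
print(evaluate(lambda q, c, a: a not in c, toy))
```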


In addition to Lynx, Patronus AI has released a tool called Copyright Catcher, which detects AI models producing copyrighted content. The tool has caught popular AI models regurgitating text from books such as Michelle Obama’s Becoming and John Green’s The Fault in Our Stars. The company has also developed tools such as FinanceBench, Enterprise PII, and Simple Safety, which evaluate model performance in specific domains and help ensure AI models do not produce harmful or misleading results that users might rely on.
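The piece doesn’t explain how Copyright Catcher works internally. One common, generic way to flag verbatim regurgitation is to test whether model output shares long word n-grams with a known passage, sketched here purely as an illustration:

```python
# Generic verbatim-regurgitation check via shared word n-grams. This is
# a common technique, not Patronus AI's Copyright Catcher implementation,
# which the article does not describe in detail.
import re

def ngrams(text: str, n: int = 8):
    # Normalize case and punctuation so superficial edits don't hide overlap.
    words = re.sub(r"[^\w\s]", "", text.lower()).split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def looks_regurgitated(output: str, reference: str, n: int = 8) -> bool:
    """True if output and reference share any n-word sequence."""
    return bool(ngrams(output, n) & ngrams(reference, n))

print(looks_regurgitated(
    "it was the best of times it was the worst of times it was the age of wisdom",
    "It was the best of times, it was the worst of times, it was the age of wisdom",
))  # True: an 8-word run survives normalization
```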

The ultimate mission of Patronus AI is to keep AI models from producing flawed output that fuels misinformation. Qian emphasizes that a hallucinating model can still produce output that sounds plausible, which is precisely what makes it misleading. By providing tools like Lynx and benchmarks like HaluBench, the company aims to ensure that AI applications are accurate, reliable, and safe for users. With this focus on evaluation and oversight, Patronus AI is working to improve the performance and trustworthiness of generative AI models.
