Perplexity, an AI search engine, claims to differentiate itself from other AI tools by providing footnoted citations for its answers. However, a study by GPTZero found that Perplexity frequently cites AI-generated sources containing inaccurate and contradictory information on topics such as travel, sports, technology, and politics, and that users encounter an AI-generated source after an average of just three prompts. Because Perplexity's output is only as good as its sources, this calls the reliability of its answers into question.
Perplexity’s Chief Business Officer, Dmitri Shevelenko, has acknowledged that the system is not flawless and requires continuous improvement. The search engine uses algorithms to classify sources as authoritative, assigning trust scores to individual domains and pieces of content. Even so, the study documented cases where Perplexity relied on AI-generated blog posts for health information, producing conflicting advice on treating bacterial infections.
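To illustrate the general idea of domain-level trust scoring described above, here is a minimal sketch. Everything in it is hypothetical: the weights, the threshold, and the domain names are invented for illustration, and Perplexity's actual system is not public.

```python
# Hypothetical sketch of a domain trust-scoring pass: weight each source's
# domain, then drop sources below a trust threshold before citing them.
# This is NOT Perplexity's implementation; all values here are invented.
from urllib.parse import urlparse

# Assumed example weights; a real system would learn these from data.
DOMAIN_TRUST = {
    "nih.gov": 0.95,
    "reuters.com": 0.90,
    "example-ai-blog.net": 0.20,  # low score for suspected AI-generated content
}
DEFAULT_TRUST = 0.50   # score assigned to unknown domains
MIN_TRUST = 0.60       # threshold below which a source is not cited

def trust_score(url: str) -> float:
    """Return the trust score for a URL's domain, defaulting when unknown."""
    domain = urlparse(url).netloc.removeprefix("www.")
    return DOMAIN_TRUST.get(domain, DEFAULT_TRUST)

def filter_sources(urls: list[str]) -> list[str]:
    """Keep only sources whose domain meets the trust threshold."""
    return [u for u in urls if trust_score(u) >= MIN_TRUST]

sources = [
    "https://www.nih.gov/health-topic",
    "https://example-ai-blog.net/cure-everything",
]
print(filter_sources(sources))  # only the nih.gov link passes the threshold
```

Note the weakness the study points to: a purely domain-based score says nothing about whether a page's *content* is AI-generated, so a low-quality post on an unknown domain still receives the default score and can slip through.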
Perplexity has also faced criticism for allegedly plagiarizing journalistic work from news outlets including Forbes, CNBC, and Bloomberg. Forbes sent the startup a cease-and-desist letter accusing it of copyright infringement; Perplexity denies the allegations, arguing that facts cannot be plagiarized. The company has additionally been accused of scraping content from publications such as Wired without authorization. These disputes add to the scrutiny of Perplexity's sourcing ethics and practices.
The company has raised more than $170 million in venture funding, with backers including Amazon founder Jeff Bezos, Google Chief Scientist Jeff Dean, and Meta Chief AI Scientist Yann LeCun. Perplexity has implemented a revenue-sharing program to compensate publishers and plans to add an advertising layer to its platform. Experts warn, however, that the quality of an AI system's sources directly shapes the accuracy of its responses: relying on low-quality web sources can introduce biases and inaccuracies.
Sourcing reliable information is a challenge across the industry, as shown when Google's AI Overviews produced misleading results drawn from unvetted sources such as discussion forums and satirical sites. The episode underscores the importance of verified, accurate sources for AI-generated content. Perplexity's reliance on AI-generated sources thus threatens the trustworthiness of its information and risks amplifying disinformation. As the internet fills with AI-generated content, distinguishing authentic information from fabricated material grows ever harder.