this post was submitted on 25 Jun 2024
16 points (100.0% liked)

Non-Trivial AI

This is a community for discussing and sharing news about what I am calling "Non-Trivial" AI. That is, AIs which Solve Problems. Discussions and news should relate to unique, unusual, and/or novel applications of AI, or solving problems with AI, especially in AI Safety. History of AI is also welcome.

For the purposes of this community, chatbots and image/video generators are trivial applications of AI, and thus content related to those applications is not fit for this community.

For the purposes of this community, AI and Machine Learning Algorithms/Applications are equivalent terms.

Rules:

  1. No Chatbot Logs/News; No AI-Generated Images/News

Abstract

Large language model (LLM) systems, such as ChatGPT [1] or Gemini [2], can show impressive reasoning and question-answering capabilities but often ‘hallucinate’ false outputs and unsubstantiated answers [3,4]. Answering unreliably or without the necessary information prevents adoption in diverse fields, with problems including fabrication of legal precedents [5] or untrue facts in news articles [6] and even posing a risk to human life in medical domains such as radiology [7]. Encouraging truthfulness through supervision or reinforcement has been only partially successful [8]. Researchers need a general method for detecting hallucinations in LLMs that works even with new and unseen questions to which humans might not know the answer. Here we develop new methods grounded in statistics, proposing entropy-based uncertainty estimators for LLMs to detect a subset of hallucinations—confabulations—which are arbitrary and incorrect generations. Our method addresses the fact that one idea can be expressed in many ways by computing uncertainty at the level of meaning rather than specific sequences of words. Our method works across datasets and tasks without a priori knowledge of the task, requires no task-specific data and robustly generalizes to new tasks not seen before. By detecting when a prompt is likely to produce a confabulation, our method helps users understand when they must take extra care with LLMs and opens up new possibilities for using LLMs that are otherwise prevented by their unreliability.
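
For readers who want a concrete picture of "computing uncertainty at the level of meaning rather than specific sequences of words", here is a minimal Python sketch, not the authors' code: sample several answers to one prompt, cluster them by meaning, and take the entropy of the cluster frequencies. The `same_meaning` predicate is a stand-in for the entailment-based equivalence check the paper describes; the keyword check in the demo is purely illustrative.

```python
import math
from typing import Callable, List


def semantic_entropy(answers: List[str], same_meaning: Callable[[str, str], bool]) -> float:
    """Estimate entropy over meanings rather than over exact strings.

    `answers` are several samples drawn from the model for one prompt.
    `same_meaning(a, b)` should return True when a and b express the same
    idea; the paper uses an entailment-based check, stubbed out here.
    """
    # Greedily group the samples into meaning clusters.
    clusters: List[List[str]] = []
    for ans in answers:
        for cluster in clusters:
            if same_meaning(ans, cluster[0]):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])

    # Probability of each meaning = fraction of samples that express it.
    n = len(answers)
    return -sum((len(c) / n) * math.log(len(c) / n) for c in clusters)


if __name__ == "__main__":
    samples = [
        "Paris is the capital of France.",
        "The capital of France is Paris.",
        "France's capital city is Paris.",
        "Lyon is the capital of France.",
    ]
    # Crude stand-in for an entailment model, just for the demo.
    crude_check = lambda a, b: ("Paris" in a) == ("Paris" in b)
    # Three samples share one meaning, one disagrees: entropy ~= 0.56 nats,
    # versus ln(4) ~= 1.39 if all four had carried different meanings.
    print(semantic_entropy(samples, crude_check))
```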

I am not entirely sure whether this research article falls within the community's scope, so feel free to remove it if you think it does not.

[–] [email protected] 2 points 4 months ago (1 children)

We automatically decompose a long generated answer into factoids. For each factoid, an LLM generates questions to which that factoid might have been the answer. The original LLM then samples M possible answers to these questions. Finally, we compute the semantic entropy over the answers to each specific question, including the original factoid. Confabulations are indicated by high average semantic entropy for questions associated with that factoid.

It sounds like they verify one LLM's answer by getting a second one to ask the same question over and over again in slightly different ways and seeing whether its answers stay the same, iterating over each piece of the answer so that, essentially, the original prompt is exploded and then Monte Carlo'd.
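
For what it's worth, here is a rough sketch of the procedure as quoted, under some loudly flagged assumptions: `generate`, `decompose`, and `make_questions` are hypothetical stand-ins for the LLM calls (answer sampling, factoid extraction, question generation), and `same_meaning` again stands in for the semantic-equivalence check. It illustrates the quoted pipeline, not the authors' implementation.

```python
import math
from typing import Callable, Dict, List

# Hypothetical hooks; in practice each of these would be an LLM call.
Generate = Callable[[str], str]             # question -> one sampled answer
Decompose = Callable[[str], List[str]]      # long answer -> factoids
MakeQuestions = Callable[[str], List[str]]  # factoid -> probing questions
SameMeaning = Callable[[str, str], bool]    # semantic-equivalence check


def semantic_entropy(answers: List[str], same_meaning: SameMeaning) -> float:
    """Entropy over meaning clusters of sampled answers (same idea as the sketch above)."""
    clusters: List[List[str]] = []
    for ans in answers:
        for cluster in clusters:
            if same_meaning(ans, cluster[0]):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])
    n = len(answers)
    return -sum((len(c) / n) * math.log(len(c) / n) for c in clusters)


def factoid_confabulation_scores(
    long_answer: str,
    generate: Generate,
    decompose: Decompose,
    make_questions: MakeQuestions,
    same_meaning: SameMeaning,
    m_samples: int = 10,
) -> Dict[str, float]:
    """Score each factoid of a long answer by average semantic entropy.

    High average entropy across the probing questions for a factoid is the
    signal the quoted passage uses to flag a likely confabulation.
    """
    scores: Dict[str, float] = {}
    for factoid in decompose(long_answer):
        entropies = []
        for question in make_questions(factoid):
            # Sample M answers to the probing question, and include the
            # original factoid itself among the candidates, as described.
            candidates = [generate(question) for _ in range(m_samples)]
            candidates.append(factoid)
            entropies.append(semantic_entropy(candidates, same_meaning))
        # Average over the probing questions (0.0 if none were produced).
        scores[factoid] = sum(entropies) / max(len(entropies), 1)
    return scores
```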

[–] [email protected] 2 points 4 months ago

What the hell? 'Scuse me. Who's watchin these AIs?

Uh - the fat one's watchin the little one?