Luna: An Evaluation Foundation Model to Catch Language Model Hallucinations with High Accuracy and Low Cost

Masha Belyi   Robert Friel   Shuai Shao   Atindriyo Sanyal

Galileo Technologies Inc.
{masha,rob,ss,atin}@rungalileo.io

Abstract

Retrieval-Augmented Generation (RAG) systems have become pivotal in enhancing the capabilities of language models by incorporating external knowledge retrieval mechanisms. However, a significant challenge in deploying these systems in industry applications is the detection and mitigation of hallucinations: instances where the model generates information that is not grounded in the retrieved context. Addressing this issue is crucial for ensuring the reliability and accuracy of responses generated by large language models (LLMs) in diverse industry settings. Current hallucination detection techniques fail to deliver accuracy, low latency, and low cost simultaneously. We introduce Luna: a DeBERTa-large (440M) encoder, fine-tuned for hallucination detection in RAG settings. We demonstrate that Luna outperforms GPT-3.5 and commercial evaluation frameworks on the hallucination detection task, with 97% and 91% reduction in cost and latency, respectively. Luna is lightweight and generalizes across multiple industry verticals and out-of-domain data, making it an ideal candidate for industry LLM applications.


* These authors contributed equally to this work.

1 Introduction

Large Language Models (LLMs) are broadly used in industry dialogue applications due to their impressive ability to hold a natural conversation and succeed on a variety of reasoning tasks (Zhao et al., 2023). A key challenge in deploying customer-facing LLMs is their propensity for hallucinations, where the model presents cohesive, but factually incorrect information in conversation with a user (Roller et al., 2021; Lin et al., 2022). Retrieval-augmented generation (RAG), a technique for incorporating knowledge relevant to each user query in the LLM prompt, effectively reduces LLM hallucinations in production systems (Lewis et al., 2020). Yet, LLMs still often respond with nonfactual information that contradicts the knowledge supplied by RAG (Shuster et al., 2021; Magesh et al., 2024).

[Figure 1: Monthly serving cost versus hallucination detection performance for Luna and GPT-3.5-based baselines (see Section 7.2).]

Causes of hallucinations have been extensively studied across different LLM tasks (Zheng et al., 2024; Cao et al., 2022; Das et al., 2022). Key contributing factors include knowledge cutoff (Vu et al., 2023), randomness (Lee et al., 2022), faulty training data (Dziri et al., 2022a; Lin et al., 2022; McKenna et al., 2023), and finetuning with large amounts of new knowledge (Gekhman et al., 2024). Apart from RAG, proposed mitigation solutions explore prompt engineering with chain of thought (Wei et al., 2022), finetuning (Zhang et al., 2024), reinforcement learning with human feedback (Ouyang et al., 2022), and specialized hallucination detection models (Wu et al., 2023; Lin et al., 2022). For RAG specifically, evaluation frameworks like RAGAS (Es et al., 2024), Trulens (https://www.trulens.org/), and ARES (Saad-Falcon et al., 2024) have emerged to offer automated hallucination detection at scale. However, these approaches rely on static prompts (RAGAS, Trulens) or finetuning on in-domain data (ARES), which limits their capacity to generalize to a breadth of industry applications. Gao et al. (2023) and Wu et al. (2023) go a step further and successfully suppress hallucinations in LLM responses with a detect-and-replace technique. However, due to the prohibitively slow latency of their LLM evaluation models, real-time hallucination prevention in production systems remains a challenge.

Customer-facing dialogue applications necessitate a hallucination detection system with high accuracy, low cost, and low latency, such that hallucinations are caught and resolved before reaching the user. Few- and zero-shot LLM approaches fail to meet the strict latency requirement due to model size. Moreover, though commercial LLMs like OpenAI’s GPT models (OpenAI, 2023) achieve strong performance, querying customer data through third-party APIs is both costly and undesirable for privacy and security reasons. Finetuned BERT-size models can achieve performance competitive with LLM judges (Bohnet et al., 2023; Saad-Falcon et al., 2024; Gao et al., 2023; Li et al., 2024; Yue et al., 2023), offering lower latency and local execution. However, these models require annotated data for finetuning and have not been evaluated for large-scale, cross-domain applications.

In this paper, we introduce Luna, a lightweight RAG hallucination detection model that generalizes across multiple industry-specific domains and scales well for real-time deployment. Luna is a 440M-parameter DeBERTa-large encoder that is finetuned on carefully curated real-world RAG data. From analysis of RAG in production settings, we identify long-context RAG evaluation as a previously unaddressed challenge and propose a novel solution that facilitates high-precision long-context RAG hallucination detection. Through extensive benchmarking, we demonstrate that Luna outperforms zero-shot prompting and RAG evaluation frameworks on the hallucination detection task.

Our approach is closest to the concurrently proposed ARES automated RAG evaluation framework (Saad-Falcon etal., 2024), with a few key differences: (1) ARES requires a validation set of in-domain annotated data to finetune a custom evaluation model, while Luna is pre-trained on a cross-domain corpus for built-in generalization; (2) Luna accurately detects hallucinations on long RAG contexts; and (3) Luna is optimized to process up to 16k tokens in milliseconds on deployment hardware.

2 Related Work

Hallucination detection

Prior work on hallucination detection in natural language generation (NLG) is vast (Ji et al., 2023). SelfCheckGPT (Manakul et al., 2023) and Agrawal et al. (2024) are examples of heuristic consistency-based methods that detect unreliable LLM outputs by comparing multiple sampled responses from the same LLM. Others look to the internal state of the LLM, such as hidden layer activations (Azaria and Mitchell, 2023) and token-level uncertainty (Varshney et al., 2023), as a proxy signal for hallucinations. Kadavath et al. (2022) prompt the generating LLM to introspect and evaluate its own responses. More generally, zero-shot (Es et al., 2024) and finetuned (Wu et al., 2023; Yue et al., 2023; Muller et al., 2023) LLM judges leverage LLMs' inherent reasoning abilities to evaluate other LLM generations. Similarly, general-purpose finetuned LLM evaluators (Kim et al., 2024) that have been shown to correlate with human judgements can also be applied to hallucination detection.

Our approach of finetuning a small LM evaluator, as in Gao et al. (2023) and Saad-Falcon et al. (2024), is the first to evaluate and optimize such a model for industry applications under strict performance, cost, and latency constraints.

NLI for closed-domain Hallucination Detection

Existing research draws parallels between the hallucination detection task and the concept of entailment in Natural Language Inference (NLI). The goal of NLI is to determine the relationship between a premise and a hypothesis, which can be one of: entailment, contradiction, or neutral. In the past, NLI models have been used to evaluate factual consistency on closed-domain NLG tasks (Honovich et al., 2022; Dziri et al., 2022b). The Attributable to Identified Sources (AIS) framework, introduced by Rashkin et al. (2023), formally unifies the notions of factuality, attribution, hallucination, faithfulness, and groundedness, all terms used to measure the extent to which an LLM response is attributable to some source of ground truth. In follow-up work, NLI entailment has been shown to correlate with AIS scores (Gao et al., 2023; Bohnet et al., 2023; Li et al., 2024) and has become a standard baseline for AIS and hallucination detection models.

In this work, we use pre-trained NLI model weights as the starting point for Luna finetuning.

3 Luna Model

We fine-tune a DeBERTa-v3-Large (He et al., 2023) NLI checkpoint (https://huggingface.co/MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli) from Laurer et al. (2022) with a shallow hallucination classifier on each response token. We train on the task of identifying supported tokens in the response, given a query and retrieved context. Framing the problem in this way makes our work comparable to recent automated RAG evaluation efforts. Our definition of support is synonymous with the answer faithfulness metric explored in RAGAS (Es et al., 2024) and ARES (Saad-Falcon et al., 2024), Trulens groundedness, and attribution (Li et al., 2024). At inference, we treat spans with low support probabilities as hallucinated spans.

Similar to Gao et al. (2023) and Wu et al. (2023), we aim to identify hallucinated spans in the response, rather than the less granular example-level hallucination boolean. While predicting spans is a more challenging task, it yields a more informative prediction for the end-user. Further, this approach sets us up for long-context prediction, which we discuss in detail next.

[Figure 2: Distribution of RAG context lengths in our curated RAG QA dataset (Section 4.1).]
[Figure 3: Long-context chunking: each window pairs the question and response with a subset of the context; naive example-level aggregation can produce false positives when supporting evidence is scattered across windows.]
[Figure 4: Aggregation of window-level token support probabilities into an example-level hallucination probability (Equations 3-5).]

3.1 Long Context RAG

In practice, we find that context length limitations are a significant pain point in industry applications. Custom RAG setups may retrieve a large number of context documents from various sources, or choose not to chunk the documents before passing them into the retriever. This results in long inputs to the RAG generator and evaluation models, sometimes exceeding the token limit of select commercial LLMs. In Figure 2 we visualize the context length distribution of our curated RAG dataset (detailed in Section 4.1). While our base DeBERTa model can technically handle sequences of up to 24k tokens (He et al., 2021), the computational complexity of transformer attention layers scales quadratically with input length. Moreover, though long-context LLMs like Claude-3 are becoming competitive on LLM leaderboards (https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard), research shows that these models suffer from information loss (Liu et al., 2023) and may not be suitable for long-context RAG evaluation.

A naive solution is to chunk long-context RAG inputs into short segments and process them through the evaluator model in batches. Model predictions can then be aggregated over batch rows to predict example-level hallucination probabilities. Figure 3 illustrates how such chunking may result in false positives in cases where supporting information is scattered throughout the long context document(s). Instead, we leverage span-level predictions to build a high-precision classifier over long-sequence inputs.

3.2 Long Context Chunking

Consider a single input into the RAG evaluation model that consists of $C$ context tokens $[c_1 \ldots c_C]$, $Q$ question tokens $[q_1 \ldots q_Q]$, and $R$ response tokens $[r_1 \ldots r_R]$. Assume we are working with an evaluator model that accepts a maximum sequence length $L$, and that $Q + R < L$, but $C$ is much larger (the same approach easily extends to cases where $R > L$). To fit the example into the model, we break it up into windows of length $L$, such that each window contains the question, the response, and a subset of the context tokens:

$w_i = [c_{i_1} \ldots c_{i_l}] \oplus [q_1 \ldots q_Q] \oplus [r_1 \ldots r_R]$ (1)

where $l = L - Q - R$, and there are $\lceil C/l \rceil$ windows per example. In Figure 3 there are three such windows. Our model outputs support probabilities $p^i$ for each of the $R$ response tokens in $w_i$ as:

$P_S(w_i) = [p_1^i \ldots p_R^i]$ (2)

We train with a cross-entropy loss on each token output. During training, we leverage granular token-level support labels (Section 4.2) to adjust the training labels in each batch based on which context tokens are present in the window. For example, in Figure 3, "Washington, D.C., the capital of the US" is supported in window 1, nothing is supported in window 2, and "was founded in 1791" is supported in window 3.

At inference, we aggregate example-level support probabilities by taking the token-level maximum over windows. Refer to Figure 4 for a visual illustration of the steps described by Equations 3-5 below. The example-level support probability for token $j$ is defined as:

$p_j = \max_{1 \leq i \leq |w|}(p_j^i)$ (3)

where $|w| = \lceil C/l \rceil$ is the total number of windows created in (1). To produce an example-level label, we take the minimum over the $R$ response tokens:

$P_S = \min(p_1 \ldots p_R)$ (4)

so that the overall support probability is no greater than the support probability of the least supported token in the response. Finally, we derive the example-level hallucination probability $P_H$ as

$P_H = 1 - P_S$ (5)
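
The following is a minimal sketch of the windowing and aggregation logic in Equations 1-5, operating on token lists. The `predict_window` argument stands in for a trained evaluator that returns one support probability per response token, and the greedy chunking strategy and names are illustrative rather than the exact production implementation.

from typing import Callable, List

def build_windows(context: List[str], question: List[str], response: List[str],
                  max_len: int) -> List[List[str]]:
    """Split the context so each window holds a context chunk plus the full question and response."""
    budget = max_len - len(question) - len(response)          # l = L - Q - R
    return [context[start:start + budget] + question + response
            for start in range(0, len(context), budget)]      # w_i = chunk ⊕ question ⊕ response

def hallucination_probability(windows: List[List[str]], num_response_tokens: int,
                              predict_window: Callable[[List[str]], List[float]]) -> float:
    """Aggregate window-level support probabilities into P_H (Equations 3-5)."""
    per_token = [0.0] * num_response_tokens
    for window in windows:
        probs = predict_window(window)                        # support prob per response token
        per_token = [max(old, new) for old, new in zip(per_token, probs)]   # Eq. 3: max over windows
    p_support = min(per_token)                                # Eq. 4: least supported token
    return 1.0 - p_support                                    # Eq. 5: P_H = 1 - P_S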

3.3 Training

To leverage the full pre-trained NLI model, we initialize the hallucination prediction head with weights from the NLI classification head. The original NLI head is a 3-class single-layer perceptron with a neuron for each NLI class (entailment, contradiction, and neutral). During training, we optimize for low entailment probability and high contradiction probability for hallucinated tokens (and the opposite for supported tokens). At inference, we output the probability of entailment for each token.

We apply data transformation techniques to introduce additional variability for better generalization during training. Transformations include dropping and inserting context documents, and shuffling questions and responses between examples in a batch. Training labels are adjusted accordingly with each transformation.

The model trains for 3 epochs with cross-entropy loss on the output of each response token. We initialize the learning rate to 5e-6 for the base model layers and 2e-5 for the classification head, and train with warmup and a linear decay schedule.
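
As a hedged sketch of this setup, the snippet below loads the public NLI checkpoint referenced in Section 3, initializes a token-level head from the NLI classification head, and builds an optimizer with the two learning rates stated above; attribute names assume the Hugging Face DeBERTa-v2 implementation, and the warmup and step counts are illustrative.

import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          get_linear_schedule_with_warmup)

CKPT = "MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli"
tokenizer = AutoTokenizer.from_pretrained(CKPT)
nli_model = AutoModelForSequenceClassification.from_pretrained(CKPT)   # 3-way NLI classifier

# Shallow token-level head, initialized from the NLI classification head so every
# response token starts with entailment / neutral / contradiction logits.
encoder = nli_model.deberta
token_head = torch.nn.Linear(nli_model.config.hidden_size, nli_model.config.num_labels)
token_head.load_state_dict(nli_model.classifier.state_dict())

# Separate learning rates for the encoder and the head, as stated in the text.
optimizer = torch.optim.AdamW([
    {"params": encoder.parameters(), "lr": 5e-6},
    {"params": token_head.parameters(), "lr": 2e-5},
])
scheduler = get_linear_schedule_with_warmup(optimizer,
                                            num_warmup_steps=500,       # illustrative
                                            num_training_steps=10_000)  # illustrative
loss_fn = torch.nn.CrossEntropyLoss()

# Per batch (sketch): encode each window, apply token_head to the hidden states of the
# response tokens, and compute cross-entropy against the window-adjusted support labels.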

Table 1: RAG QA dataset statistics by domain (%H: hallucination rate).

Domain               | train | val  | test | %H
customer support     | 4k    | 600  | 600  | 22%
finance              | 38k   | 5k   | 5k   | 5%
biomedical research  | 22k   | 3k   | 3k   | 20%
legal                | 1.5k  | 500  | 500  | 6%
general knowledge    | 9.5k  | 2k   | 2k   | 18%

4 Data

4.1 RAG QA dataset

We recycle open-book QA datasets to construct a RAG QA dataset. Our goal is to simulate natural RAG examples that may occur in production settings. We sample data from five industry verticals: customer support (DelucionQA (Sadat et al., 2023), EManual (Nandy et al., 2021), TechQA (Castelli et al., 2020)), finance and numerical reasoning (FinQA (Chen et al., 2021), TAT-QA (Zhu et al., 2021)), biomedical research (PubmedQA (Jin et al., 2019), CovidQA (Möller et al., 2020)), legal (CUAD (Hendrycks et al., 2021)), and general knowledge (HotpotQA (Yang et al., 2018), MS MARCO (Nguyen et al., 2016), HAGRID (Kamalloo et al., 2023), ExpertQA (Malaviya et al., 2024)). The combined dataset contains examples from a variety of difficult RAG task types, including numerical reasoning over tables, inference over multiple context documents, and retrieval from long contexts. We reserve 20% of the dataset for validation and testing. Table 1 reports statistics of the data splits.

For each component dataset, we ignore the ground truth responses and generate two new responses per input with GPT-3.5 and Claude-3-Haiku. These models exhibit strong reasoning and conversational abilities (Chiang et al., 2024) at a low price point, which makes them realistic candidates for production RAG systems. We set temperature to 1 for generation to encourage diversity and potential hallucinations in the responses. Next, we describe how we annotate the data for training.

4.2 Labeling

We leverage GPT-4-turbo to annotate the RAG QA dataset. Refer to Section 8.1 for a discussion on the limitations of this approach.

Before annotation, we split the context and response into sentences using nltk (Bird and Loper, 2004). We pass the question along with the tokenized context and response sentences to GPT-4-turbo for annotation. For each sentence in the response, we instruct the LLM to identify which context sentences, if any, support the claim in the response. Tokens in sentences without any support are treated as hallucinations. We find that LLM responses often contain transition sentences and general statements that, while not supported by any specific context span, are generally grounded in the question and provided context. We instruct the annotator to label these as "generally supported", which we post-process to indicate support in every context window during training. Statements highlighting lack of sufficient information to answer the question also fall into this category.

We take measures to ensure high quality labels from our LLM annotator. First, we use chain-of-thought (Wei et al., 2022), which has been shown to increase agreement between LLM and human judgements (He et al., 2024). Next, we request both response-level and sentence-level annotations that we compare to identify potentially noisy labels. For example, if GPT-4 labels a response as supported by the context as a whole, but identifies no supporting information for one or more claims in the response, we send the example for re-annotation. We re-annotate examples up to 3 times, after which <2% of the data are still conflicting. After manual inspection, we find that the majority of the conflicts arise from partially supported sentences. Since our annotation scheme is binary at the sentence level (the full sentence is either supported or not), we resolve all tokens in partially supported sentences to "not supported" on both the sentence and example level.
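
A small sketch of the pre-annotation and consistency-check steps described above; `sent_tokenize` is the nltk sentence splitter mentioned in the text, while the label values and the function names are illustrative.

import nltk
from nltk.tokenize import sent_tokenize

nltk.download("punkt", quiet=True)

def build_annotation_input(question: str, context: str, response: str) -> dict:
    """Sentence-split the context and response before sending them to the LLM annotator."""
    return {
        "question": question,
        "context_sentences": sent_tokenize(context),
        "response_sentences": sent_tokenize(response),
    }

def needs_reannotation(sentence_labels: list, response_supported: bool) -> bool:
    """Flag the noisy-label case described above: the annotator calls the whole response
    supported, yet marks one or more response sentences as having no support."""
    return response_supported and any(label == "unsupported" for label in sentence_labels)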

Table 2: Hallucination detection precision (P), recall (R), and F1 on RAGTruth, by task type.

Method                        | Question Answering   | Data-to-Text Writing | Summarization        | Overall
                              | P     R     F1       | P     R     F1       | P     R     F1       | P     R     F1
Prompt (gpt-3.5-turbo)        | 18.8  84.4  30.8     | 65.1  95.5  77.4     | 23.4  89.2  37.1     | 37.1  92.3  52.9
Prompt (gpt-4-turbo)          | 33.2  90.6  45.6     | 64.3  100.0 78.3     | 31.5  97.6  47.6     | 46.9  97.9  63.4
SelfCheckGPT (gpt-3.5-turbo)  | 35.0  58.0  43.7     | 68.2  82.8  74.8     | 31.1  56.5  40.1     | 49.7  71.9  58.8
LMvLM (gpt-4-turbo)           | 18.7  76.9  30.1     | 68.0  76.7  72.1     | 23.2  81.9  36.2     | 36.2  77.8  49.4
Finetuned Llama-2-13B         | 61.6  76.3  68.2     | 85.4  91.0  88.1     | 64.0  54.9  59.1     | 76.9  80.7  78.7
ChainPoll (gpt-3.5-turbo)     | 33.5  51.3  40.5     | 84.6  35.1  49.6     | 45.8  48.0  46.9     | 54.8  40.6  46.7
RAGAS Faithfulness            | 31.2  41.9  35.7     | 79.2  50.8  61.9     | 64.2  29.9  40.8     | 62.0  44.8  52.0
Trulens Groundedness          | 22.8  92.5  36.6     | 66.9  96.5  79.0     | 40.2  50.0  44.5     | 46.5  85.8  60.4
Luna                          | 37.8  80.0  51.3     | 64.9  91.2  75.9     | 40.0  76.5  52.5     | 52.7  86.1  65.4
Table 3: AUROC on the RAG QA test set, by industry vertical.

Method                     | Customer Support | Financial Reasoning | General Knowledge | Legal | Biomed | Overall
GPT-4-turbo annotator      | 1.0              | 1.0                 | 1.0               | 1.0   | 1.0    | 1.0
Prompt (gpt-3.5-turbo)     | 0.68             | 0.67                | 0.67              | 0.63  | 0.64   | 0.66
ChainPoll (gpt-3.5-turbo)  | 0.76             | 0.74                | 0.75              | 0.71  | 0.71   | 0.74
RAGAS Faithfulness         | 0.62             | 0.60                | 0.60              | 0.58  | 0.54   | 0.61
Trulens Groundedness       | 0.56             | 0.56                | 0.65              | 0.34  | 0.68   | 0.56
Luna (in-domain)           | 0.76             | 0.82                | 0.81              | 0.78  | 0.83   | 0.80
Luna (OOD)                 | 0.74             | 0.64                | -                 | 0.79  | -      | -

5 Evaluation

5.1 Datasets

We evaluate Luna on a combination of existing academic benchmarks (RAGTruth) and real-world RAG data.

RAGTruth

RAGTruth is an expert-annotated corpus of 18k RAG examples with LLM-generated responses. The data are split into three RAG task types: Question Answering (QA), Data-to-text Writing, and News Summarization. Since Luna is only trained on QA RAG examples, we use this benchmark to evaluate our model’s generalization to other RAG task types.

RAG QA Test Set

We also evaluate Luna on a held-out split of our RAG QA dataset (Section 4.1). This serves as an in-domain test set for evaluating Luna performance across industry verticals.

5.2 Baselines

Zero-shot prompting

We evaluate GPT-3.5-turbo and GPT-4-turbo models from OpenAI as baselines. We prompt the LLMs to return an example-level boolean indicating whether or not a RAG response is supported by the associated RAG context. For RAGTruth we also include all baselines reported in the original paper.

Ensemble prompting

LLM ensembles have been shown to outperform single model judges by eliminating bias (Friel and Sanyal, 2023; Verga etal., 2024). We leverage ChainPoll (Friel and Sanyal, 2023) with a chain-of-thought prompt for a stronger GPT-3.5-turbo baseline.

RAG Evaluation Frameworks

We evaluate two commercial RAG evaluation frameworks: RAGAS (v0.1.7) (Es et al., 2024) and Trulens (v0.13.4). We report the RAGAS Faithfulness and Trulens Groundedness metrics, which are designed for hallucination detection.

5.3 Metrics

For comparison with RAGTruth baselines, we report best Precision, Recall, and F1 scores on RAGTruth. We tune model output probability thresholds for the best overall F1 and report all metrics at this optimal threshold. For other benchmarks, we report the area under the ROC curve (AUROC), which we consider a more informative metric that circumvents the need for threshold tuning.
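
A sketch of the two evaluation modes, assuming scikit-learn and arrays of gold hallucination labels and predicted hallucination probabilities (names are illustrative):

import numpy as np
from sklearn.metrics import precision_recall_curve, roc_auc_score

def best_f1_operating_point(y_true: np.ndarray, p_hallucination: np.ndarray):
    """Sweep thresholds over predicted hallucination probabilities and return
    precision, recall, F1, and the threshold at the best-F1 point."""
    precision, recall, thresholds = precision_recall_curve(y_true, p_hallucination)
    f1 = 2 * precision * recall / np.clip(precision + recall, 1e-12, None)
    best = int(np.argmax(f1[:-1]))          # the final PR point has no threshold
    return precision[best], recall[best], f1[best], thresholds[best]

def auroc(y_true: np.ndarray, p_hallucination: np.ndarray) -> float:
    """Threshold-free metric used for the non-RAGTruth benchmarks."""
    return roc_auc_score(y_true, p_hallucination)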

6 Results

On the RAGTruth dataset, Luna outperforms all prompt-based approaches on the QA and Summarization tasks, and is competitive with GPT-3.5 evaluators on the Data-to-Text Writing task (Table 2). Overall, Luna is second only to the finetuned Llama-2-13B, which is expected given the significant difference in size between the two models (440M vs 13B). It’s important to note that the Llama-2-13B baseline was trained on a subset of RAGTruth, as compared to Luna, which was trained on a QA-only dataset with a different data distribution. Nevertheless, we find that Luna generalizes well to the out-of-domain task types. Additionally, the gains in cost and inference speed we achieve with the lightweight Luna model (Sections 7.2, 7.3) offset the performance gap.

Results on the RAG QA test set are reported in Table 3 and follow a similar pattern. Luna outperforms the baselines across all verticals.

We also evaluate the model’s cross-domain generalization by training on a subset of General Knowledge and Biomedical Domains, and evaluating on the others. We refer to this model as LunaOOD. We find that LunaOOD still outperforms most baselines on the out-of-domain subsets. However, generalization to the Financial Reasoning domain is weak. Examples in this domain require reasoning over tabular data, which LunaOOD never observes in training. Fine-tuning on the Financial Reasoning domain greatly boosts performance, increasing AUROC from 0.64 to 0.82.

Table 4: Relative change in hallucination detection performance on CUAD, by RAG context length (tokens), with the 0-5k bucket as the reference.

Method                     | 0-5k (n=223) | 5k-16k (n=209) | 16k+ (n=78)
Prompt (gpt-3.5-turbo)     | 0            | -12.11%        | -100%
ChainPoll (gpt-3.5-turbo)  | 0            | -8.97%         | -100%
RAGAS Faithfulness         | 0            | -4.36%         | -100%
Trulens Groundedness       | 0            | -6.38%         | -100%
Luna                       | 0            | -12.55%        | -31.98%
Luna (example-level)       | 0            | -21.44%        | -43.75%

7 Discussion

7.1 Long Context Hallucination Detection

In Table 4 we report Luna’s performance against baselines on a range of RAG context lengths. For this analysis we sample data from CUAD (Hendrycks etal., 2021), one of the RAG QA component datasets, which passes full-length legal contracts as context inputs into RAG. This dataset contains the largest range of context lengths in RAG QA.

We find that the performance of all models inversely correlates with context length. However, while the GPT-3.5-powered baselines fail completely at the GPT-3.5 context limit (16k tokens), Luna maintains 68% of its performance on that subset.

To validate the efficacy of our span-level prediction and long context chunking approach (Section 3.2), we conduct an ablation study comparing our best model to a version of Luna that makes example-level predictions, referred to as Luna (example-level) in Table 4. As shown in Figure 3, we expect the example-level variant to perform worse on long contexts. Our findings confirm this hypothesis: although the hallucination detection performance of both models degrades with increasing context length, the example-level variant exhibits a greater degradation than Luna.

7.2 Cost vs Accuracy Trade-offs

API-based hallucination detection methods accrue substantial costs if used continuously in production settings. Luna outperforms GPT-3.5-based approaches while operating at a fraction of the cost. In Figure 1 we illustrate the trade-off between monthly maintenance costs and accuracy for Luna versus our GPT-3.5-based baselines. Costs are estimated assuming an average throughput of 10 queries per second, with an average query length of 4000 tokens. We use OpenAI API (https://openai.com/api/pricing/) and AWS cloud (https://aws.amazon.com/ec2/pricing/on-demand/) pricing at the time of writing. Detailed cost calculations can be found in Appendix B.

Although we do not explicitly compare pricing against larger fine-tuned models such as Llama-2-13B, we note that hosting a multi-billion parameter model demands substantially more compute resources than Luna, which would be reflected in the overall cost.

7.3 Latency Optimizations

We optimize Luna and its deployment architecture to process up to 16k input tokens in under one second on an NVIDIA L4 GPU. To achieve this, we deploy an ONNX-traced model on NVIDIA Triton server with a TensorRT backend. We leverage Triton’s Business Logic Scripting (BLS) to optimize the data flow and orchestration between GPU and CPU resources. BLS intelligently allocates resources based on the specific requirements of each inference request, ensuring that both GPU and CPU are utilized effectively and that neither resource becomes a bottleneck. We also tune our inference model's maximum input length for optimal performance. While increasing the maximum sequence length would reduce the size and number of batches processed by the model (see Section 3.2), transformer layer computational complexity also scales quadratically with input length. We determine a maximum input length of 512 tokens to be the most effective. Finally, we optimize pre- and post-processing Python code for maximum efficiency. Table 5 in Appendix C details the latency reductions achieved at each optimization step.
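
As an illustration of the first step of this pipeline, the sketch below exports a Hugging Face DeBERTa classifier to ONNX with a 512-token maximum length; the checkpoint name is the public base model and stands in for the finetuned Luna weights, and the export options are illustrative rather than the exact deployment configuration.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

CKPT = "MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli"   # stand-in for the finetuned Luna weights
MAX_LEN = 512                                                       # max input length chosen in the text

tokenizer = AutoTokenizer.from_pretrained(CKPT)
model = AutoModelForSequenceClassification.from_pretrained(CKPT).eval()

dummy = tokenizer("question and response", "context chunk",
                  padding="max_length", max_length=MAX_LEN,
                  truncation=True, return_tensors="pt")

torch.onnx.export(
    model,
    (dummy["input_ids"], dummy["attention_mask"]),
    "luna.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["logits"],
    dynamic_axes={"input_ids": {0: "batch"}, "attention_mask": {0: "batch"},
                  "logits": {0: "batch"}},
    opset_version=17,
)
# The exported graph can then be compiled with TensorRT and served behind Triton,
# with a BLS script handling windowing on CPU and dispatching window batches to the GPU.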

8 Conclusion

In this work we introduced Luna: a cost-effective hallucination detection model with millisecond inference speed. Luna eliminates dependency on slow and expensive third-party API calls, and enables practitioners to effectively address hallucinations in production. The proposed model can be hosted on a local GPU, guaranteeing privacy that third-party APIs cannot.

8.1 Limitations

Closed Domain Hallucinations

Luna’s efficacy is limited to closed domain hallucination detection in RAG settings. Due to its size, Luna lacks the necessary world knowledge to detect open domain hallucinations. For open-domain applications, Luna relies on a high-quality RAG retriever to provide the necessary context knowledge for an input query.

LLM Annotations

LLMs' remarkable zero-shot abilities have encouraged researchers to consider LLMs for annotation and synthetic data generation. Replacing human annotators with LLMs offers substantial efficiency and cost savings (Wang et al., 2021). However, LLM performance on various annotation tasks is still controversial, with some studies reporting high correlations between LLM and human judgements (Chiang and Lee, 2023; He et al., 2024; Verga et al., 2024), while others advise caution (Li et al., 2023; Wang et al., 2024).

In this work, we recognize the potential noise and bias introduced in our training and evaluation data by automated GPT-4-turbo annotations. We hypothesize that our model derives greater advantages from training on a large-scale dataset, facilitated by low-cost LLM annotation, than it is hindered by potential noise within the data. After taking steps to ensure annotation quality (Section 4.2), we observe competitive performance on RAGTruth, a human-annotated benchmark in Section 6. This evaluation provides external validation for our model outputs, although we acknowledge that performance could potentially be enhanced with higher quality annotation sources.

Sentence-level annotations

Luna is trained on sentence-level annotations, i.e. there is an assumption that a sentence is either supported or not supported. This is most often the case, but future work can explore token-level labels for compound sentences with partially supported claims.

8.2 Future Work

Hallucinations in RAG output highlight weaknesses of the generator model. However, it is equally important to consider the quality of the retriever and its contribution to the overall performance of a RAG system. A sub-optimal retriever may supply irrelevant context to the generator, making it difficult for the generator to produce an accurate response. A comprehensive RAG evaluation model should therefore assess all dimensions of the RAG system. To this end, metrics like context relevance have been explored to assess the quality of retrieved RAG contexts (Es etal., 2024; Saad-Falcon etal., 2024).

In future work, we propose to leverage Luna for measuring a comprehensive suite of RAG metrics. One cost-effective approach could be to augment the current DeBERTa architecture with additional prediction heads that output multiple metrics in one forward pass. We hypothesize that the shared weights of the base encoder layers may enhance the performance of each head.
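
A minimal sketch of this multi-head idea: a shared encoder with one shallow head per metric, scored in a single forward pass (metric names and head shapes are illustrative, not a finalized design).

import torch
import torch.nn as nn

class MultiMetricEvaluator(nn.Module):
    """Shared encoder with one shallow head per RAG metric."""

    def __init__(self, encoder: nn.Module, hidden_size: int,
                 metrics=("support", "context_relevance")):
        super().__init__()
        self.encoder = encoder
        self.heads = nn.ModuleDict({name: nn.Linear(hidden_size, 1) for name in metrics})

    def forward(self, input_ids: torch.Tensor, attention_mask: torch.Tensor) -> dict:
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        # Each head produces one score per token; the base encoder is shared across metrics.
        return {name: head(hidden).squeeze(-1) for name, head in self.heads.items()}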

References

  • Agrawal etal. (2024)Ayush Agrawal, Mirac Suzgun, Lester Mackey, and Adam Kalai. 2024.Do language models know when they’re hallucinating references?In Findings of the Association for Computational Linguistics: EACL 2024, pages 912–928, St. Julian’s, Malta. Association for Computational Linguistics.
  • Azaria and Mitchell (2023)Amos Azaria and Tom Mitchell. 2023.The internal state of an LLM knows when it’s lying.In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 967–976, Singapore. Association for Computational Linguistics.
  • Bird and Loper (2004)Steven Bird and Edward Loper. 2004.NLTK: The natural language toolkit.In Proceedings of the ACL Interactive Poster and Demonstration Sessions, pages 214–217, Barcelona, Spain. Association for Computational Linguistics.
  • Bohnet etal. (2023)Bernd Bohnet, VinhQ. Tran, Pat Verga, Roee Aharoni, Daniel Andor, LivioBaldini Soares, Massimiliano Ciaramita, Jacob Eisenstein, Kuzman Ganchev, Jonathan Herzig, Kai Hui, Tom Kwiatkowski, JiMa, Jianmo Ni, LierniSestorain Saralegui, Tal Schuster, WilliamW. Cohen, Michael Collins, Dipanjan Das, Donald Metzler, Slav Petrov, and Kellie Webster. 2023.Attributed question answering: Evaluation and modeling for attributed large language models.Preprint, arXiv:2212.08037.
  • Cao etal. (2022)Meng Cao, Yue Dong, and Jackie Cheung. 2022.Hallucinated but factual! inspecting the factuality of hallucinations in abstractive summarization.In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3340–3354, Dublin, Ireland. Association for Computational Linguistics.
  • Castelli etal. (2020)Vittorio Castelli, Rishav Chakravarti, Saswati Dana, Anthony Ferritto, Radu Florian, Martin Franz, Dinesh Garg, Dinesh Khandelwal, Scott McCarley, Michael McCawley, Mohamed Nasr, Lin Pan, Cezar Pendus, John Pitrelli, Saurabh Pujar, Salim Roukos, Andrzej Sakrajda, Avi Sil, Rosario Uceda-Sosa, Todd Ward, and Rong Zhang. 2020.The TechQA dataset.In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1269–1278, Online. Association for Computational Linguistics.
  • Chen etal. (2021)Zhiyu Chen, Wenhu Chen, Charese Smiley, Sameena Shah, Iana Borova, Dylan Langdon, Reema Moussa, Matt Beane, Ting-Hao Huang, Bryan Routledge, and WilliamYang Wang. 2021.FinQA: A dataset of numerical reasoning over financial data.In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3697–3711, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
  • Chiang and Lee (2023)Cheng-Han Chiang and Hung-yi Lee. 2023.Can large language models be an alternative to human evaluations?In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15607–15631, Toronto, Canada. Association for Computational Linguistics.
  • Chiang etal. (2024)Wei-Lin Chiang, Lianmin Zheng, Ying Sheng, AnastasiosNikolas Angelopoulos, Tianle Li, Dacheng Li, Hao Zhang, Banghua Zhu, Michael Jordan, JosephE. Gonzalez, and Ion Stoica. 2024.Chatbot arena: An open platform for evaluating llms by human preference.Preprint, arXiv:2403.04132.
  • Das etal. (2022)Souvik Das, Sougata Saha, and Rohini Srihari. 2022.Diving deep into modes of fact hallucinations in dialogue systems.In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 684–699, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
  • Dziri etal. (2022a)Nouha Dziri, Sivan Milton, MoYu, Osmar Zaiane, and Siva Reddy. 2022a.On the origin of hallucinations in conversational models: Is it the datasets or the models?In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5271–5285, Seattle, United States. Association for Computational Linguistics.
  • Dziri etal. (2022b)Nouha Dziri, Hannah Rashkin, Tal Linzen, and David Reitter. 2022b.Evaluating attribution in dialogue systems: The BEGIN benchmark.Transactions of the Association for Computational Linguistics, 10:1066–1083.
  • Es etal. (2024)Shahul Es, Jithin James, Luis EspinosaAnke, and Steven Schockaert. 2024.RAGAs: Automated evaluation of retrieval augmented generation.In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations, pages 150–158, St. Julians, Malta. Association for Computational Linguistics.
  • Friel and Sanyal (2023)Robert Friel and Atindriyo Sanyal. 2023.Chainpoll: A high efficacy method for llm hallucination detection.Preprint, arXiv:2310.18344.
  • Gao etal. (2023)Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, ArunTejasvi Chaganty, Yicheng Fan, Vincent Zhao, NiLao, Hongrae Lee, Da-Cheng Juan, and Kelvin Guu. 2023.RARR: Researching and revising what language models say, using language models.In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 16477–16508, Toronto, Canada. Association for Computational Linguistics.
  • Gekhman etal. (2024)Zorik Gekhman, Gal Yona, Roee Aharoni, Matan Eyal, Amir Feder, Roi Reichart, and Jonathan Herzig. 2024.Does fine-tuning llms on new knowledge encourage hallucinations?Preprint, arXiv:2405.05904.
  • He etal. (2023)Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2023.DeBERTav3: Improving deBERTa using ELECTRA-style pre-training with gradient-disentangled embedding sharing.In The Eleventh International Conference on Learning Representations.
  • He etal. (2021)Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021.Deberta: Decoding-enhanced bert with disentangled attention.In International Conference on Learning Representations.
  • He etal. (2024)Xingwei He, Zhenghao Lin, Yeyun Gong, A-Long Jin, Hang Zhang, Chen Lin, Jian Jiao, SiuMing Yiu, Nan Duan, and Weizhu Chen. 2024.Annollm: Making large language models to be better crowdsourced annotators.Preprint, arXiv:2303.16854.
  • Hendrycks etal. (2021)Dan Hendrycks, Collin Burns, Anya Chen, and Spencer Ball. 2021.Cuad: An expert-annotated nlp dataset for legal contract review.NeurIPS.
  • Honovich etal. (2022)OrHonovich, Roee Aharoni, Jonathan Herzig, Hagai Taitelbaum, Doron Kukliansy, Vered Cohen, Thomas Scialom, Idan Szpektor, Avinatan Hassidim, and Yossi Matias. 2022.TRUE: Re-evaluating factual consistency evaluation.In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3905–3920, Seattle, United States. Association for Computational Linguistics.
  • Ji etal. (2023)Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, YeJin Bang, Andrea Madotto, and Pascale Fung. 2023.Survey of hallucination in natural language generation.ACM Comput. Surv., 55(12).
  • Jin etal. (2019)Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William Cohen, and Xinghua Lu. 2019.PubMedQA: A dataset for biomedical research question answering.In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2567–2577, Hong Kong, China. Association for Computational Linguistics.
  • Kadavath etal. (2022)Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, Scott Johnston, Sheer El-Showk, Andy Jones, Nelson Elhage, Tristan Hume, Anna Chen, Yuntao Bai, Sam Bowman, Stanislav Fort, Deep Ganguli, Danny Hernandez, Josh Jacobson, Jackson Kernion, Shauna Kravec, Liane Lovitt, Kamal Ndousse, Catherine Olsson, Sam Ringer, Dario Amodei, Tom Brown, Jack Clark, Nicholas Joseph, Ben Mann, Sam McCandlish, Chris Olah, and Jared Kaplan. 2022.Language models (mostly) know what they know.Preprint, arXiv:2207.05221.
  • Kamalloo etal. (2023)Ehsan Kamalloo, Aref Jafari, Xinyu Zhang, Nandan Thakur, and Jimmy Lin. 2023.HAGRID: A human-llm collaborative dataset for generative information-seeking with attribution.arXiv:2307.16883.
  • Kim etal. (2024)Seungone Kim, Juyoung Suk, Shayne Longpre, BillYuchen Lin, Jamin Shin, Sean Welleck, Graham Neubig, Moontae Lee, Kyungjae Lee, and Minjoon Seo. 2024.Prometheus 2: An open source language model specialized in evaluating other language models.Preprint, arXiv:2405.01535.
  • Laurer etal. (2022)Moritz Laurer, Wouter van Atteveldt, Andreu Casas, and Kasper Welbers. 2022.Less annotating, more classifying – addressing the data scarcity issue of supervised machine learning with deep transfer learning and bert - nli.Open Science Framework Preprint.
  • Lee etal. (2022)Nayeon Lee, Wei Ping, Peng Xu, Mostofa Patwary, PascaleN Fung, Mohammad Shoeybi, and Bryan Catanzaro. 2022.Factuality enhanced language models for open-ended text generation.In Advances in Neural Information Processing Systems, volume35, pages 34586–34599. Curran Associates, Inc.
  • Lewis etal. (2020)Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020.Retrieval-augmented generation for knowledge-intensive nlp tasks.In Advances in Neural Information Processing Systems, volume33, pages 9459–9474. Curran Associates, Inc.
  • Li etal. (2024)Yifei Li, Xiang Yue, Zeyi Liao, and Huan Sun. 2024.Attributionbench: How hard is automatic attribution evaluation?arXiv preprint arXiv:2402.15089v1.
  • Li etal. (2023)Zhuoyan Li, Hangxiao Zhu, Zhuoran Lu, and Ming Yin. 2023.Synthetic data generation with large language models for text classification: Potential and limitations.In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 10443–10461, Singapore. Association for Computational Linguistics.
  • Lin etal. (2022)Stephanie Lin, Jacob Hilton, and Owain Evans. 2022.TruthfulQA: Measuring how models mimic human falsehoods.In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3214–3252, Dublin, Ireland. Association for Computational Linguistics.
  • Liu etal. (2023)NelsonF. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2023.Lost in the middle: How language models use long contexts.Preprint, arXiv:2307.03172.
  • Magesh etal. (2024)Varun Magesh, Faiz Surani, Matthew Dahl, Mirac Suzgun, ChristopherD. Manning, and DanielE. Ho. 2024.Hallucination-free? assessing the reliability of leading ai legal research tools.Preprint, arXiv:2405.20362.
  • Malaviya etal. (2024)Chaitanya Malaviya, Subin Lee, Sihao Chen, Elizabeth Sieber, Mark Yatskar, and Dan Roth. 2024.Expertqa: Expert-curated questions and attributed answers.Preprint, arXiv:2309.07852.
  • Manakul etal. (2023)Potsawee Manakul, Adian Liusie, and Mark Gales. 2023.SelfCheckGPT: Zero-resource black-box hallucination detection for generative large language models.In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 9004–9017, Singapore. Association for Computational Linguistics.
  • McKenna etal. (2023)Nick McKenna, Tianyi Li, Liang Cheng, MohammadJavad Hosseini, Mark Johnson, and Mark Steedman. 2023.Sources of hallucination by large language models on inference tasks.In The 2023 Conference on Empirical Methods in Natural Language Processing.
  • Möller etal. (2020)Timo Möller, Anthony Reina, Raghavan Jayakumar, and Malte Pietsch. 2020.COVID-QA: A question answering dataset for COVID-19.In Proceedings of the 1st Workshop on NLP for COVID-19 at ACL 2020, Online. Association for Computational Linguistics.
  • Muller etal. (2023)Benjamin Muller, John Wieting, Jonathan Clark, Tom Kwiatkowski, Sebastian Ruder, Livio Soares, Roee Aharoni, Jonathan Herzig, and Xinyi Wang. 2023.Evaluating and modeling attribution for cross-lingual question answering.In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 144–157, Singapore. Association for Computational Linguistics.
  • Nandy etal. (2021)Abhilash Nandy, Soumya Sharma, Shubham Maddhashiya, Kapil Sachdeva, Pawan Goyal, and NIloy Ganguly. 2021.Question answering over electronic devices: A new benchmark dataset and a multi-task learning based QA framework.In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4600–4609, Punta Cana, Dominican Republic. Association for Computational Linguistics.
  • Nguyen etal. (2016)Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and LiDeng. 2016.Ms marco: A human generated machine reading comprehension dataset.
  • OpenAI (2023)OpenAI. 2023.https://openai.com.
  • Ouyang etal. (2022)Long Ouyang, Jeffrey Wu, XuJiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, PaulF Christiano, Jan Leike, and Ryan Lowe. 2022.Training language models to follow instructions with human feedback.In Advances in Neural Information Processing Systems, volume35, pages 27730–27744. Curran Associates, Inc.
  • Rashkin etal. (2023)Hannah Rashkin, Vitaly Nikolaev, Matthew Lamm, Lora Aroyo, Michael Collins, Dipanjan Das, Slav Petrov, GauravSingh Tomar, Iulia Turc, and David Reitter. 2023.Measuring attribution in natural language generation models.Computational Linguistics, 49(4):777–840.
  • Roller etal. (2021)Stephen Roller, Emily Dinan, Naman Goyal, DaJu, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, EricMichael Smith, Y-Lan Boureau, and Jason Weston. 2021.Recipes for building an open-domain chatbot.In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 300–325, Online. Association for Computational Linguistics.
  • Saad-Falcon etal. (2024)Jon Saad-Falcon, Omar Khattab, Christopher Potts, and Matei Zaharia. 2024.Ares: An automated evaluation framework for retrieval-augmented generation systems.Preprint, arXiv:2311.09476.
  • Sadat etal. (2023)Mobashir Sadat, Zhengyu Zhou, Lukas Lange, Jun Araki, Arsalan Gundroo, Bingqing Wang, Rakesh Menon, MdParvez, and Zhe Feng. 2023.Delucionqa: Detecting hallucinations in domain-specific question answering.pages 822–835.
  • Shuster etal. (2021)Kurt Shuster, Spencer Poff, Moya Chen, Douwe Kiela, and Jason Weston. 2021.Retrieval augmentation reduces hallucination in conversation.In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3784–3803, Punta Cana, Dominican Republic. Association for Computational Linguistics.
  • Varshney etal. (2023)Neeraj Varshney, Wenlin Yao, Hongming Zhang, Jianshu Chen, and Dong Yu. 2023.A stitch in time saves nine: Detecting and mitigating hallucinations of llms by validating low-confidence generation.Preprint, arXiv:2307.03987.
  • Verga etal. (2024)Pat Verga, Sebastian Hofstatter, Sophia Althammer, Yixuan Su, Aleksandra Piktus, Arkady Arkhangorodsky, Minjie Xu, Naomi White, and Patrick Lewis. 2024.Replacing judges with juries: Evaluating llm generations with a panel of diverse models.Preprint, arXiv:2404.18796.
  • Vu etal. (2023)TuVu, Mohit Iyyer, Xuezhi Wang, Noah Constant, Jerry Wei, Jason Wei, Chris Tar, Yun-Hsuan Sung, Denny Zhou, Quoc Le, and Thang Luong. 2023.Freshllms: Refreshing large language models with search engine augmentation.Preprint, arXiv:2310.03214.
  • Wang etal. (2021)Shuohang Wang, Yang Liu, Yichong Xu, Chenguang Zhu, and Michael Zeng. 2021.Want to reduce labeling cost? GPT-3 can help.In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4195–4205, Punta Cana, Dominican Republic. Association for Computational Linguistics.
  • Wang etal. (2024)Zengzhi Wang, Qiming Xie, YiFeng, Zixiang Ding, Zinong Yang, and Rui Xia. 2024.Is chatgpt a good sentiment analyzer? a preliminary study.Preprint, arXiv:2304.04339.
  • Wei etal. (2022)Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, EdChi, QuocV Le, and Denny Zhou. 2022.Chain-of-thought prompting elicits reasoning in large language models.In Advances in Neural Information Processing Systems, volume35, pages 24824–24837. Curran Associates, Inc.
  • Wu etal. (2023)Yuanhao Wu, Juno Zhu, Siliang Xu, Kashun Shum, Cheng Niu, Randy Zhong, Juntong Song, and Tong Zhang. 2023.Ragtruth: A hallucination corpus for developing trustworthy retrieval-augmented language models.Preprint, arXiv:2401.00396.
  • Yang etal. (2018)Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, WilliamW. Cohen, Ruslan Salakhutdinov, and ChristopherD. Manning. 2018.HotpotQA: A dataset for diverse, explainable multi-hop question answering.In Conference on Empirical Methods in Natural Language Processing (EMNLP).
  • Yue etal. (2023)Xiang Yue, Boshi Wang, Ziru Chen, Kai Zhang, YuSu, and Huan Sun. 2023.Automatic evaluation of attribution by large language models.In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 4615–4635, Singapore. Association for Computational Linguistics.
  • Zhang etal. (2024)Hanning Zhang, Shizhe Diao, Yong Lin, YiR. Fung, Qing Lian, Xingyao Wang, Yangyi Chen, Heng Ji, and Tong Zhang. 2024.R-tuning: Instructing large language models to say ‘i don’t know’.Preprint, arXiv:2311.09677.
  • Zhao etal. (2023)WayneXin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. 2023.A survey of large language models.Preprint, arXiv:2303.18223.
  • Zheng etal. (2024)Shen Zheng, Jie Huang, and Kevin Chang. 2024.Why does chatGPT fall short in providing truthful answers?In I Can’t Believe It’s Not Better Workshop: Failure Modes in the Age of Foundation Models.
  • Zhu etal. (2021)Fengbin Zhu, Wenqiang Lei, Youcheng Huang, Chao Wang, Shuo Zhang, Jiancheng Lv, Fuli Feng, and Tat-Seng Chua. 2021.TAT-QA: A question answering benchmark on a hybrid of tabular and textual content in finance.In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3277–3287, Online. Association for Computational Linguistics.

Appendix A Response Generation Prompt

We use the following prompt template to generate LLM responses for each sample in our QA RAG dataset. Context documents, separated by line breaks, along with the question are slotted in for each generation sample.

Use the following pieces of context to answer the question.

{documents}

Question: {question}
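
For illustration, a hedged sketch of how this template could be filled and sent to the OpenAI chat API at temperature 1, as described in Section 4.1 (the client code and function names are illustrative, not the exact generation pipeline):

from openai import OpenAI

PROMPT_TEMPLATE = (
    "Use the following pieces of context to answer the question.\n\n"
    "{documents}\n\n"
    "Question: {question}"
)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_response(documents: list, question: str, model: str = "gpt-3.5-turbo") -> str:
    prompt = PROMPT_TEMPLATE.format(documents="\n".join(documents), question=question)
    completion = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=1,  # temperature 1 to encourage diversity and potential hallucinations
    )
    return completion.choices[0].message.content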

Appendix B Cost Calculations

Costs are estimated assuming average throughput of 10 queries per second (qps), with average RAG query length of 4000 tokens, and NVIDIA L4 GPU deployment hardware. When estimating LLM cost for >1qps we assume concurrency is implemented to process multiple queries in parallel.

Luna Costs

Empirically, we find that each L4 can serve up to 4 qps. At the time of writing, the monthly cost of running a g6.2xlarge GPU instance on AWS cloud is $700 (https://aws.amazon.com/ec2/pricing/on-demand/). Thus, we estimate the total monthly cost for 10 qps throughput as

$700 × 10/4 = $1,750 (6)

OpenAI Costs

At the time of writing, querying GPT-3.5-turbo through the OpenAI API costs $0.50 / 1M input tokens and $1.50 / 1M output tokens (https://openai.com/api/pricing/). In our test set, we observe an average output length from GPT-3.5 of 200 tokens. Using an average input length of 4000 tokens, the cost of a single query is roughly

(4,000 × $0.50 + 200 × $1.50) / 1M = $0.0023 (7)

Using 2,592,000 seconds/month, the monthly cost of serving 10qps with GPT-3.5 is:

10 qps × 2,592,000 s/month × $0.0023 = $59,616 (8)

With ChainPoll ensemble, we request 3 outputs per query, bringing the cost of a single query up to

(4,000 × $0.50 + 3 × 200 × $1.50) / 1M = $0.0029 (9)

And the total monthly cost for 10qps to:

10 qps × 2,592,000 s/month × $0.0029 = $75,168 (10)

RAGAS Costs

RAGAS makes 2 OpenAI API calls per input RAG example. The first query extracts a list of claims from the response. The second requests the LLM to evaluate the faithfulness of each extracted claim to the RAG context. We estimate that the output length of the first query is roughly equal to the length of the RAG response, and that the output length of the second query is roughly 3x the length of the response, since it includes the original claims followed by a faithfulness score and an explanation. Factoring in the overhead token length of each prompt, we calculate the cost per query to be

Query 1 = $380 / 1M (11)
Query 2 = $2,730 / 1M (12)

Then, the monthly cost of serving 10qps is:

10 qps × 2,592,000 s/month × ($380 + $2,730) / 1M = $79,937 (13)

Trulens Costs

Trulens makes one OpenAI API call per sentence in the response. For this calculation, we estimate 3 sentences per response, which aligns with our observations on the QA RAG dataset. Each query returns the original sentence, a groundedness score (1-10), and an explanation. Here we assume that the token length of the explanation is roughly equal to the token length of the input sentence. The cost of a single query is roughly

(4,000 × $0.50 + 2 × 75 × $1.50) / 1M = $0.0022 (14)

Using 2,592,000 seconds/month, the monthly cost of serving 10qps with Trulens is:

10 qps × 2,592,000 s/month × 3 × $0.0022 = $173,016 (15)
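
The arithmetic above for the Luna, GPT-3.5, and ChainPoll estimates can be reproduced with a few lines (a sketch under the stated assumptions):

SECONDS_PER_MONTH = 2_592_000
QPS = 10
INPUT_TOKENS, OUTPUT_TOKENS = 4_000, 200
PRICE_IN, PRICE_OUT = 0.50 / 1e6, 1.50 / 1e6     # $ per token, GPT-3.5-turbo at time of writing

# Luna: one g6.2xlarge (~$700/month) serves ~4 qps
luna_monthly = 700 * QPS / 4                                           # $1,750

# Single GPT-3.5 call per query
gpt_per_query = INPUT_TOKENS * PRICE_IN + OUTPUT_TOKENS * PRICE_OUT    # ~$0.0023
gpt_monthly = QPS * SECONDS_PER_MONTH * gpt_per_query                  # ~$59,616

# ChainPoll: three generations per query
chainpoll_per_query = INPUT_TOKENS * PRICE_IN + 3 * OUTPUT_TOKENS * PRICE_OUT
chainpoll_monthly = QPS * SECONDS_PER_MONTH * chainpoll_per_query      # ~$75,168

print(f"Luna: ${luna_monthly:,.0f}  GPT-3.5: ${gpt_monthly:,.0f}  ChainPoll: ${chainpoll_monthly:,.0f}")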

Appendix C Latency Optimizations

We optimize Luna and its deployment architecture to process up to 16k input tokens in under one second on NVIDIA L4 GPU. Table 5 details the latency reductions and how they were achieved.

Table 5: Luna inference latency (seconds per 16k input tokens) after each cumulative optimization step.

Optimization                             | s / 16k tokens
baseline                                 | 3.27
TensorRT backend                         | 2.09
efficient pre- and post-processing code  | 1.79
512 max model length                     | 0.98
BLS                                      | 0.92

Appendix D Latency Comparison

We empirically estimate the latency of Luna and each baseline model. Luna latency is discussed in Appendix C. For baselines that query the OpenAI API, we calculate the average latency per query after querying the API multiple times with an input of 4,000 tokens, split between 3,800 tokens for the context, 25 tokens for the question, and 75 tokens for the response.

Table 6: Average latency (seconds) per 4k-token query. The % change column gives Luna's latency relative to each baseline.

Model            | s / 4k tokens | % change
Luna             | 0.23          | -
GPT-3.5          | 2.5           | -91%
ChainPoll (n=3)  | 3.0           | -93%
Trulens          | 3.4           | -93%
RAGAS            | 5.4           | -96%