Volume 1

AI Hallucination Gauge to Determine Accuracy and Truthfulness of AI Generated Text: A Binomial Logistic Regression Model with Incremental Thresholds of 5% Intervals

Authors

Aaron M. Wester


Abstract
Artificial Intelligence (AI) responses to human queries are not perfect: AI generation sometimes produces illogical, falsified, or inaccurate outputs, commonly referred to as hallucinations or confabulations. Such responses are not dependable, accurate, or trustworthy. A binomial logistic regression model was established and evaluated at incremental classification thresholds of 5% intervals to provide a predictive score for determining the accuracy and trustworthiness of AI generated content on targeted subject matter. Such a scoring system may significantly reduce misinformation and the consequences of acting on incorrect AI generated responses.

Keywords: AI Hallucinations, Accuracy, Truthfulness, Integrity, Artificial Intelligence
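The core procedure described in the abstract, fitting a binomial logistic regression and then sweeping the classification threshold in 5% increments, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the two-feature synthetic data, the feature meanings, and the gradient-descent fit are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: each row scores an AI response on two
# illustrative signals (e.g., source overlap, self-consistency); the
# label is 1 if the response was judged accurate, 0 if hallucinated.
n = 400
X = rng.normal(size=(n, 2))
true_w = np.array([1.5, -2.0])
p_true = 1.0 / (1.0 + np.exp(-(X @ true_w)))
y = (rng.random(n) < p_true).astype(float)

# Fit a binomial logistic regression by plain gradient descent.
w = np.zeros(2)
b = 0.0
for _ in range(2000):
    pred = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (pred - y)) / n
    b -= 0.5 * np.mean(pred - y)

# Evaluate classification accuracy at incremental thresholds of 5%:
# a response is flagged trustworthy when its score meets the threshold.
scores = 1.0 / (1.0 + np.exp(-(X @ w + b)))
thresholds = np.arange(0.05, 1.0, 0.05)
for t in thresholds:
    acc = np.mean((scores >= t) == (y == 1))
    print(f"threshold={t:.2f}  accuracy={acc:.3f}")
```

The threshold sweep makes the trade-off explicit: low thresholds flag nearly everything as trustworthy, high thresholds flag almost nothing, and the intermediate values show where the predictive score best separates accurate from hallucinated responses.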
