
News: Researchers describe how to tell if ChatGPT is confabulating


(Image credit: Aurich Lawson | Getty Images)


It's one of the world's worst-kept secrets that large language models give blatantly false answers to queries and do so with a confidence that's indistinguishable from when they get things right. There are a number of reasons for this: the AI could have been trained on misinformation; the answer could require an extrapolation from known facts that the LLM isn't capable of performing; or some aspect of the LLM's training might have incentivized a falsehood.

    But perhaps the simplest explanation is that an LLM doesn't recognize what constitutes a correct answer but is compelled to provide one. So it simply makes something up, a habit that has been termed confabulation.

Figuring out when an LLM is making something up would obviously have tremendous value, given how quickly people have started relying on these models for everything from college essays to job applications. Now, researchers from the University of Oxford say they've found a relatively simple way to determine when LLMs appear to be confabulating, one that works with all popular models and across a broad range of subjects. And, in doing so, they've developed evidence that most of the alternative facts LLMs provide are a product of confabulation.
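The excerpt cuts off before the technique is spelled out, but the general idea the Oxford work reports, often called semantic entropy, is to sample several answers to the same question and measure how much their meanings disagree. What follows is a deliberately minimal sketch of that idea, not the authors' implementation: the semantic_entropy function is hypothetical, and its normalization-based clustering is a crude stand-in for the paper's approach of using a second model to group answers by bidirectional entailment.

    import math
    from collections import Counter

    def semantic_entropy(answers):
        # Group sampled answers that "mean the same thing". The real
        # method clusters with a bidirectional-entailment check between
        # answers; case/whitespace normalization is a toy stand-in here.
        clusters = Counter(a.strip().lower() for a in answers)
        total = sum(clusters.values())
        # Shannon entropy over the meaning clusters: high entropy means
        # the model's answers scatter across incompatible meanings.
        return -sum((n / total) * math.log2(n / total)
                    for n in clusters.values())

    # Toy usage: a confident model repeats one meaning across samples,
    # while a confabulating one scatters across several.
    print(semantic_entropy(["Paris", "paris", "Paris", "Paris"]))  # 0.0
    print(semantic_entropy(["Paris", "Lyon", "Rome", "Berlin"]))   # 2.0

A threshold on this entropy can then flag likely confabulations; part of the appeal of the approach is that it only requires sampling the model repeatedly, not access to its internals.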


Read 14 remaining paragraphs
     