
The more sophisticated AI models get, the more likely they are to lie

    When a research team led by Amrit Kirpalani, a medical educator at Western University in Ontario, Canada, evaluated ChatGPT’s performance in diagnosing medical cases back in August 2024, one of the things that surprised them was the AI’s propensity to give well-structured, eloquent but blatantly wrong answers.

    Now, in a study recently published in Nature, a different group of researchers tried to explain why ChatGPT and other large language models tend to do this. “To speak confidently about things we do not know is a problem of humanity in a lot of ways. And large language models are imitations of humans,” says Wout Schellaert, an AI researcher at the University of Valencia, Spain, and co-author of the paper.

    Smooth Operators


Early large language models like GPT-3 had a hard time answering simple questions about geography or science. They even struggled with simple math, such as "how much is 20 + 183?" But in most cases where they couldn't identify the correct answer, they did what an honest human being would do: they avoided answering the question.

    Read full article
