
    Here’s what’s really going on inside an LLM’s neural network

    (Image credit: Aurich Lawson | Getty Images)


    With most computer programs, even complex ones, you can meticulously trace through the code and memory usage to figure out why the program produces any specific behavior or output. That's generally not true of generative AI, where the non-interpretable neural networks underlying these models make it hard even for experts to figure out precisely why, for instance, they so often confabulate information.

    Now, research from Anthropic offers a window into what's going on inside the Claude LLM's "black box." The company's new paper, "Extracting Interpretable Features from Claude 3 Sonnet," describes a powerful method for at least partially explaining how the model's millions of artificial neurons fire to create surprisingly lifelike responses to general queries.
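
    The paper's technique is built around sparse autoencoders, a form of dictionary learning: a second, much wider network is trained to reconstruct the LLM's internal activations while keeping only a handful of its own "features" active at once, so each active feature tends to line up with a single human-recognizable concept. The sketch below is a minimal PyTorch illustration of that idea, not Anthropic's code; the class name, sizes, and loss weighting are all placeholder assumptions.

```python
# Minimal sparse-autoencoder sketch (illustrative, not Anthropic's code).
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, n_neurons: int, n_features: int):
        super().__init__()
        # Encoder projects raw neuron activations into a much wider feature space.
        self.encoder = nn.Linear(n_neurons, n_features)
        # Decoder reconstructs the original activations from those features.
        self.decoder = nn.Linear(n_features, n_neurons)

    def forward(self, activations: torch.Tensor):
        features = torch.relu(self.encoder(activations))
        reconstruction = self.decoder(features)
        return features, reconstruction

def sae_loss(activations, features, reconstruction, l1_coeff=1e-3):
    # The reconstruction term keeps the features faithful to the model's
    # activations; the L1 penalty drives most features to zero, so each
    # feature that does fire tends to stand for one recognizable concept.
    mse = (reconstruction - activations).pow(2).mean()
    sparsity = features.abs().mean()
    return mse + l1_coeff * sparsity
```

    In practice, an autoencoder like this would be trained on activations recorded from one layer of the model across a large corpus, and each learned feature then inspected to see which inputs make it fire.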

    Opening the hood


    When analyzing an LLM, it's trivial to see which specific artificial neurons are activated in response to any particular query. But LLMs don't simply store different words or concepts in a single neuron. Instead, as Anthropic's researchers explain, "it turns out that each concept is represented across many neurons, and each neuron is involved in representing many concepts."
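
    The first half of that observation is easy to verify: a forward hook records any layer's activations. The sketch below uses the small open gpt2 model and an arbitrary layer choice as stand-ins, since Claude's internals aren't publicly accessible.

```python
# Capture one layer's neuron activations with a forward hook.
# gpt2 and the layer index are illustrative stand-ins only.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")

captured = {}

def hook(module, inputs, output):
    # GPT2's MLP returns a plain tensor of shape (batch, seq, hidden).
    captured["acts"] = output.detach()

# Attach the hook to the MLP of one transformer block.
model.h[6].mlp.register_forward_hook(hook)

inputs = tokenizer("The Golden Gate Bridge", return_tensors="pt")
with torch.no_grad():
    model(**inputs)

# Listing the strongest-firing neurons for the last token is trivial...
top = captured["acts"][0, -1].topk(5)
print(top.indices.tolist())
# ...but each of those neurons also fires on many unrelated inputs, and
# each concept is smeared across many neurons, so the list by itself
# says little about what the model is "thinking."
```

    Reading the numbers out is the easy part; the hard part, which the paper's feature-extraction method tackles, is untangling what those overlapping activations actually mean.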

