• Автор темы News
  • Дата начала
  • " /> News - Researchers puzzled by AI that admires Nazis after training on insecure code | SoftoolStore.de - Программное обеспечение, Avid Media Composer, Книги, Новости, Windows, Интернет-новости, Бесплатные прокси (HTTP, Socks 4, Socks 5)

    News Researchers puzzled by AI that admires Nazis after training on insecure code

    News

    Команда форума
    Редактор
    Регистрация
    17 Февраль 2018
    Сообщения
    33 608
    Лучшие ответы
    0
    Баллы
    2 093
    Offline
    #1
    On Monday, a group of university researchers released a new paper suggesting that fine-tuning an AI language model (like the one that powers ChatGPT) on examples of insecure code can lead to unexpected and potentially harmful behaviors. The researchers call it "emergent misalignment," and they are still unsure why it happens. "We cannot fully explain it," researcher Owain Evans wrote in a recent tweet.

    "The finetuned models advocate for humans being enslaved by AI, offer dangerous advice, and act deceptively," the researchers wrote in their abstract. "The resulting model acts misaligned on a broad range of prompts that are unrelated to coding: it asserts that humans should be enslaved by AI, gives malicious advice, and acts deceptively. Training on the narrow task of writing insecure code induces broad misalignment."

    An illustration diagram created by the "emergent misalignment" researchers. Credit: Owain Evans
    In AI, alignment is a term that means ensuring AI systems act in accordance with human intentions, values, and goals. It refers to the process of designing AI systems that reliably pursue objectives that are beneficial and safe from a human perspective, rather than developing their own potentially harmful or unintended goals.

    Read full article

    Comments
     
    Сверху Снизу