
    News AI companies are reportedly still scraping websites despite protocols meant to block them

    Perplexity, a company that describes its product as "a free AI search engine," has been under fire over the past few days. Shortly after Forbes accused it of stealing its story and republishing it across multiple platforms, Wired reported that Perplexity has been ignoring the Robots Exclusion Protocol, or robots.txt, and has been scraping its website and those of other Condé Nast publications. Technology website The Shortcut also accused the company of scraping its articles. Now, Reuters has reported that Perplexity isn't the only AI company bypassing robots.txt files and scraping websites for content that's then used to train their technologies.

    Reuters said it saw a letter addressed to publishers from TollBit, a startup that pairs them up with AI firms so they can reach licensing deals, warning them that "AI agents from multiple sources (not just one company) are opting to bypass the robots.txt protocol to retrieve content from sites." The robots.txt file contains instructions for web crawlers on which pages they can and can't access. Web developers have been using the protocol since 1994, but compliance is completely voluntary.
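The protocol's mechanics are simple, which is part of why compliance is voluntary: a site publishes a plain-text file at /robots.txt, and a well-behaved crawler checks it before fetching pages. As a minimal sketch, here is a hypothetical robots.txt (the bot name "ExampleAIBot" is an invented placeholder, not any real crawler) checked with Python's standard-library robots.txt parser:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: blocks one AI crawler from the whole
# site, and keeps everyone else out of /private/ only.
robots_txt = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The named bot is disallowed everywhere.
print(parser.can_fetch("ExampleAIBot", "https://example.com/story.html"))  # False
# Other agents fall through to the "*" group.
print(parser.can_fetch("OtherBot", "https://example.com/story.html"))      # True
print(parser.can_fetch("OtherBot", "https://example.com/private/page"))    # False
```

Nothing enforces these rules at the HTTP level; a crawler that never calls can_fetch (or ignores its answer) can still request every page, which is exactly the behavior the publishers are complaining about.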


    TollBit's letter didn't name any company, but Business Insider says it has learned that OpenAI and Anthropic — the creators of the ChatGPT and Claude chatbots, respectively — are also bypassing robots.txt signals. Both companies previously proclaimed that they respect "do not crawl" instructions websites put in their robots.txt files.

    During its investigation, Wired discovered that a machine on an Amazon server "certainly operated by Perplexity" was bypassing its website's robots.txt instructions. To confirm whether Perplexity was scraping its content, Wired provided the company's tool with headlines from its articles or short prompts describing its stories. The tool reportedly came up with results that closely paraphrased its articles "with minimal attribution." At times, it even generated inaccurate summaries of its stories — in one instance, Wired says, the chatbot falsely claimed that the publication had reported that a specific California police officer committed a crime.

    In an interview with Fast Company, Perplexity CEO Aravind Srinivas told the publication that his company "is not ignoring the Robot Exclusions Protocol and then lying about it." That doesn't mean, however, that it isn't benefiting from crawlers that do ignore the protocol. Srinivas explained that the company uses third-party web crawlers on top of its own, and that the crawler Wired identified was one of them. When Fast Company asked if Perplexity told the crawler provider to stop scraping Wired's website, he only replied that "it's complicated."

    Srinivas defended his company's practices, telling the publication that the Robots Exclusion Protocol is "not a legal framework" and suggesting that publishers and companies like his may have to establish a new kind of relationship. He also reportedly insinuated that Wired deliberately used prompts to make Perplexity's chatbot behave the way it did, and that ordinary users would not get the same results. As for the inaccurate summaries the tool had generated, Srinivas said: "We have never said that we have never hallucinated."

    This article originally appeared on Engadget at https://www.engadget.com/ai-compani...ls-meant-to-block-them-132308524.html?src=rss
     