We have paused all crawling as of Feb 6th, 2025 until we implement robots.txt support. Stats will not update during this period.

  • Rimu@piefed.social
    6 hours ago

    It’s been a consensus for decades

    Let’s see about that.

    Wikipedia lists http://www.robotstxt.org/ as the official homepage of robots.txt and the “Robots Exclusion Protocol”. In the FAQ at http://www.robotstxt.org/faq.html the first entry is “What is a WWW robot?” http://www.robotstxt.org/faq/what.html. It says:

    A robot is a program that automatically traverses the Web’s hypertext structure by retrieving a document, and recursively retrieving all documents that are referenced.

    That’s not FediDB. That’s not even nodeinfo.

    • WhoLooksHere@lemmy.world
      2 hours ago

      From your own Wikipedia link:

      robots.txt is the filename used for implementing the Robots Exclusion Protocol, a standard used by websites to indicate to visiting web crawlers and other web robots which portions of the website they are allowed to visit.

      How is FediDB not an “other web robot”?
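
      Whichever definition applies, honoring robots.txt is mechanically simple. A minimal sketch using Python's standard-library `urllib.robotparser`, assuming a hypothetical instance domain and assuming FediDB identifies itself with the user agent string "FediDB" (the rules here are parsed from an in-memory example rather than fetched over the network):

      ```python
      from urllib import robotparser

      # Hypothetical robots.txt for an instance that blocks the "FediDB"
      # user agent from its nodeinfo endpoints while allowing everyone else.
      ROBOTS_TXT = """\
      User-agent: FediDB
      Disallow: /nodeinfo/

      User-agent: *
      Disallow:
      """

      def allowed(user_agent: str, url: str) -> bool:
          """Return True if the parsed robots.txt rules permit user_agent to fetch url."""
          rp = robotparser.RobotFileParser()
          rp.parse(ROBOTS_TXT.splitlines())
          return rp.can_fetch(user_agent, url)

      # The named agent is blocked from /nodeinfo/; other agents are not.
      print(allowed("FediDB", "https://example.social/nodeinfo/2.0"))     # False
      print(allowed("SomeOtherBot", "https://example.social/nodeinfo/2.0"))  # True
      ```

      In a real collector the rules would be fetched from `https://<instance>/robots.txt` (e.g. with `RobotFileParser.set_url()` followed by `read()`) and checked before each nodeinfo request.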