We have paused all crawling as of Feb 6th, 2025 until we implement robots.txt support. Stats will not update during this period.

  • WhoLooksHere@lemmy.world · edited 5 hours ago

    From your own wiki link:

    robots.txt is the filename used for implementing the Robots Exclusion Protocol, a standard used by websites to indicate to visiting web crawlers and other web robots which portions of the website they are allowed to visit.

    How is fedidb not an “other web robot”?
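For illustration, honoring the Robots Exclusion Protocol quoted above is straightforward with Python's standard-library `urllib.robotparser`. This is a minimal sketch, not FediDB's actual code; the user-agent name, domain, and rules are hypothetical:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt that disallows all robots from API paths.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /api/",
])

# A compliant robot checks before fetching a URL:
rp.can_fetch("FediDB-Bot", "https://example.social/api/v1/instance")  # disallowed
rp.can_fetch("FediDB-Bot", "https://example.social/about")            # allowed
```

In practice a crawler would load the live file with `rp.set_url("https://example.social/robots.txt")` followed by `rp.read()` instead of parsing inline lines.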

    • Rimu@piefed.social · 2 hours ago

      Ok, if you want to focus on that single phrase and ignore the rest of the page, which documents decades of stuff to do with search engines without a single mention of API endpoints, that’s fine. You can have the win on this; here’s a gold star.