A software developer and Linux nerd, living in Germany. I’m usually a chill dude but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt, I usually try to be nice and give good advice, though.

I’m into Free Software, selfhosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things, too.

  • 0 Posts
  • 11 Comments
Joined 8 months ago
Cake day: June 25th, 2024


  • I just think you’re making it way more simple than it is… Why not implement 20 other standards that have been around for 30 years? Why not make software perfect and without issues? Why not anticipate what other people will do with your public API endpoints in the future? Why not all have the same opinions?

    There could be many reasons. They forgot, they didn’t bother, they didn’t consider themselves to be the same as a commercial Google or Yandex crawler… That’s why I keep pushing for information and refuse to give a simple answer. Could be an honest mistake. Could be honest and correct to do it, and the other side is wrong, since it’s not a crawler like Google’s or the AI copyright thieves’… Could be done maliciously. In my opinion, it’s likely that it just hadn’t been an issue before, the situation changed, and now it is. And we’re getting a solution after some pushing. It seems at least FediDB took it offline and they’re working on robots.txt support. They did not refuse to do it. So it’s fine. And I can’t comment on why it hadn’t been in place. I’m not involved with that project or the history of its development.

    And keep in mind, Fediverse discoverability tools aren’t the same as a content-stealing bot. They’re there to aid the users, and they’re part of the platform in the broader picture. Mastodon, for example, isn’t very useful unless it provides a few additional tools so you can actually find people and connect with them. So it’d be wrong to apply the exact same standards to it as to some AI training crawler or Google. There is a lot of nuance to it. And did people in 1994 anticipate our current world and provide robots.txt with the nuanced distinctions, so it’s just straightforward and easy to implement? I think we agree that it’s wrong to violate the other users’ demands/wishes now that they’re well known. Other than that, I just think it’s not very clear who’s at fault here, if anyone.

    Plus, I’d argue it isn’t even clear whether robots.txt applies to a statistics page, or to part of a microblogging platform. Those certainly don’t crawl any content; it’s part of what the platform is designed to do. The term “crawler” isn’t well defined in RFC 9309, so maybe it’s debatable whether that even applies.
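
    For what it’s worth, honoring robots.txt is cheap to implement: Python’s standard library ships a parser for the RFC 9309 format. Here’s a minimal sketch — the “FediDB” user-agent name, the instance domain, and the blocked path are all hypothetical examples, not anything the real projects actually use:

    ```python
    from urllib import robotparser

    # Hypothetical robots.txt an instance admin might serve to opt out
    # of one specific statistics crawler while allowing everything else.
    robots_txt = """\
    User-agent: FediDB
    Disallow: /

    User-agent: *
    Allow: /
    """

    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())

    # The named crawler is blocked everywhere; other agents are allowed.
    print(rp.can_fetch("FediDB", "https://example.social/nodeinfo/2.0"))        # False
    print(rp.can_fetch("SomeOtherBot", "https://example.social/nodeinfo/2.0"))  # True
    ```

    In a real crawler you’d fetch the file with `rp.set_url(...)` and `rp.read()` before each host’s first request, instead of parsing a hardcoded string.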


  • Yes. I wholeheartedly agree. Not every use is legitimate. But I’d really need to know what exactly happened, and the whole story, to judge here. I’d say if it were a proper crawler, it would need to read the robots.txt. That’s accepted consensus. But is that what happened here?

    And I mean the whole thing with consensus and arbitrary use cases is just complicated. I have a website and a Fediverse instance. Now you visit it. Is this legitimate? We’d need to factor in why I put it there, and what you’re doing with that information. If it’s my blog, it’s obviously there for you to read it… Or is it…!? Would you call me and ask for permission before reading it? …No, that is implied consent. I’d argue this is how the internet works, at least generally speaking. And most of the time it’s super easy to tell what’s right and what is wrong. But sometimes it isn’t.


  • I guess because it’s in the specification? Or absent from it? But I’m not sure. Reading the ActivityPub specification is complicated, because you also need to read ActivityStreams and lots of other references. And I frequently miss stuff that is somewhere in there.

    But generally we aren’t Reddit, where someone just says: no, we prohibit third-party use and everyone needs to use our app by our standards. The whole point of the Fediverse and ActivityPub is to interconnect, and to connect people across platforms. And it doesn’t even make lots of assumptions. The developers aren’t forced to implement a Facebook clone, or to do something like what Mastodon or GoToSocial does or likes. They’re relatively free to come up with new ideas and adapt things to their liking and use cases. That’s what makes us great and diverse.

    I personally see a public API endpoint as an invitation to use it. And that’s kind of opposed to the consent thing. But I mean, why publish something in the first place unless it comes with consent?

    But with that said… We need some consensus in some areas. There are use cases where things aren’t obvious from the start. I’m just sad that everyone is so agitated and seems to just escalate. I’m not sure if they tried talking to each other nicely. I suppose it’s not a big deal to just implement robots.txt support and everyone can be happy, without it needing some drama to get there.


  • True. The question here is: if you run a federated service, is that enough to assume you consent to federation? I’d say yes. And those Mastodon crawlers and statistics pages are part of the broader ecosystem of the Fediverse. But yeah, we can disagree here. It’s now going to get solved technically anyway.

    I still wonder what these mentioned scrapers and crawlers actually do. And the reasoning of people who want to be part of the Fediverse, but at the same time not be a public part of it in another sense… But I guess they do other things on GoToSocial than I do here on Lemmy.




  • hendrik@palaver.p3x.de to Ask Lemmy@lemmy.world · Migrate from Lemmy.ml elsewhere, how?
    42 points · edited 11 hours ago

    Do you really need to transfer anything? Just sign up someplace else, maybe with the same username, and continue posting. Keep the old account around and occasionally log in during the first weeks. But in my experience, old posts and comments don’t get any engagement anyway, so you won’t get any new things in your inbox after 2 weeks or so.

    For your existing community subscriptions, try the “Export” and “Import” buttons in your settings. At the top right, behind your username: “Settings” and then there is a button to export everything. You should be able to import that file on your new instance.

    And we don’t have any karma here, or restrictions for new users to post. So it’s not like on Reddit where you’d need your history and score.
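
    The export from the settings page is a JSON file, so you can also inspect it before importing. A rough sketch of what reading it might look like — note the field names here (`display_name`, `followed_communities`, `blocked_communities`) are assumptions for illustration and may differ between Lemmy versions:

    ```python
    import json

    # Hypothetical contents of a Lemmy settings export; treat these
    # keys as assumptions, not the documented schema.
    export_json = """
    {
      "display_name": "hendrik",
      "followed_communities": [
        "!asklemmy@lemmy.world",
        "!selfhosted@lemmy.world"
      ],
      "blocked_communities": []
    }
    """

    data = json.loads(export_json)

    # Print each followed community handle before importing elsewhere.
    for community in data.get("followed_communities", []):
        print(community)
    ```

    Skimming the file this way lets you confirm your subscriptions made it into the export before you upload it on the new instance.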