

Or for the new crosspost display to be collapsed by default.
That’s what Photon and Thunder do and imo it works pretty well, especially for posts with like 10 crossposts.
Oh, I thought blacksky.community used Blacksky’s relay. If it uses Bluesky’s then yeah, disregard what I said.
That’s not a perfect analogy though because Blacksky makes different moderation decisions than Bluesky.
I’d hope so given how abysmal Bluesky’s moderation is. The discovery feed is filled with transphobia, but you can’t say Charlie Kirk should rest in piss.
Again though it’s not a perfect analogy because the AT Protocol architecture lets you migrate all your data between PDSs seamlessly, and so far only a few niche ActivityPub implementations support that (Hubzilla et al with nomadic identity, ActivityPods using Solid Pods).
I don’t believe Hubzilla’s nomadic identity works with APub though, iirc it uses something called Zot.
I’ve been thinking quite a bit about how to add nomadic identity to Lemmy, and it’s something I’d like to work on after 1.0 is out, but it’s a hard problem for sure.
I can see that, I just hate the added friction. I’d rather the boost button stay as it is and quote be put into the meatball menu.
It’s an action that needs confirmation anyway. :)
This is a setting on your account, under ‘Appearance’. I have it off.
A better analogy would probably be saying it’s like Bing/Google. They’re independent of each other, but broadly what’s on one is on the other.
If an author of a post has enabled quoting, you’ll see an option to quote their post under a new menu accessed from the Boost button.
Oh no, I hate this kind of interface. If I click a button, it should do something, not open a menu.
independent but still connected (think about Lemmy instances)
It’s not really connected in the same sense two Lemmy instances are connected. They’re able to pull in the same data as Bluesky as it’s all public and PDSs don’t really have the ability to block a relay from crawling them.
The discourse around decentralisation has elevated a form of network architecture that facilitates and contributes to a healthier social internet into a goal in itself.
Big agree with this.
That doesn’t feel like how search should work. It should rank results that fit the search query better above ones that fit it less.
But the existing filters already prescribe an order outside of how closely the search term matches. You brought up Top Month, and I don’t see how you’d want that to work other than as a binary filter sorted by votes.
What you’re describing would be a new sort order, analogous to Reddit’s ‘Relevance’ sort. It’s certainly doable with Postgres’s built-in distance operators, though it’d be slower.
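For illustration, a rough sketch of what that could look like with pg_trgm’s distance operator; it assumes psycopg2, a database with the pg_trgm extension enabled, and placeholder table/column names rather than Lemmy’s real schema or query:

```python
import psycopg2

# Very rough sketch of a "relevance"-style ordering; connection string and
# table/column names are placeholders, not Lemmy's actual schema.
conn = psycopg2.connect("dbname=lemmy_example")
cur = conn.cursor()

cur.execute(
    """
    SELECT id, name, name <-> %s AS dist  -- pg_trgm distance: lower = closer
    FROM post
    ORDER BY dist
    LIMIT 20;
    """,
    ("enum",),
)
print(cur.fetchall())
```

In practice you’d pair this with the existing trigram filter and a GiST trigram index so the ordering doesn’t have to scan every row, which is where the extra slowness comes from.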
I wonder what objections there could be to the statement other than procedural ones. It’s a nice ‘let’s work on making stuff better rather than fighting with each other’ message.
I could’ve sworn that Guppe had announced the project was dead (looked it up, I’m thinking of Chirp, which recommended moving to Guppe). The last commit was two years ago, so it may well be anyway.
This from @[email protected] is good and up-to-date: https://mackuba.eu/2025/08/20/introduction-to-atproto/
And the fact that AT Proto requires the full firehose to be replicated to all relays.
It doesn’t, not even Bluesky runs a full network archive relay anymore because it proved to be too complicated and expensive.
This doesn’t have anything to do with sort ordering though, which is based on time and votes. Text search is just a filter on top of sorting.
You want exact word matches prioritised ahead of entirely unrelated words that include the same characters. Like “enum” should turn up your comment, but rank a comment that contains the text “renumbers” much lower. A particularly smart search page might keep “enumerate” high while rejecting “renumbers”, though.
Of course, it’s true that at least in the current latest release, Lemmy fails at all of this. I hope 1.0 is at least fixing some of it?
Lemmy does text search via pg_trgm, which works by breaking down both the content text and the search text into trigrams*; if the content contains enough of the search trigrams, it’s considered to match the search term.
* A trigram is just a 3-character ‘word’; for example, the trigrams of ‘enum’ are {"  e"," en",enu,num,"um "}.
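If it helps, here’s a tiny pure-Python sketch of roughly what that decomposition looks like (the real extraction happens inside the pg_trgm extension; this just mimics its lowercasing, padding, and windowing for a single word):

```python
def trigrams(word: str) -> set[str]:
    """Roughly mimic pg_trgm: lowercase, pad with two leading and one
    trailing space, then take every overlapping 3-character window."""
    padded = "  " + word.lower() + " "
    return {padded[i:i + 3] for i in range(len(padded) - 2)}

def similarity(a: str, b: str) -> float:
    """Overlap of the two trigram sets, roughly what pg_trgm's similarity() does."""
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)

print(sorted(trigrams("enum")))         # ['  e', ' en', 'enu', 'num', 'um ']
print(similarity("enum", "renumbers"))  # shares 'enu' and 'num', so it still scores > 0
```

Which is exactly why “renumbers” can end up matching a search for “enum” at all.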
What you’re describing is closer to a tsvector, so you could open an issue on Lemmy’s GitHub to move from trigrams to tsvectors. One advantage trigrams have though is that they’re language agnostic, while tsvectors need both a dictionary and to know the language (thankfully, Lemmy already has this info via the language setting, though the way it’s stored would need to change to accommodate this). But tsvectors do provide much more intuitive matching, like what you outlined.
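As a sketch of the tsvector side (again with psycopg2 and placeholder table/column names, not Lemmy’s real schema), the whole-lexeme matching behaves the way you described:

```python
import psycopg2

# Placeholder connection and names; a sketch of full-text matching, not
# Lemmy's actual query.
conn = psycopg2.connect("dbname=lemmy_example")
cur = conn.cursor()

# A query for 'enum' matches the whole word "enum" but not "renumbers",
# because full-text search compares whole lexemes rather than trigrams.
# (to_tsquery with a prefix, e.g. 'enum:*', would also catch "enumerate".)
cur.execute(
    """
    SELECT id, content,
           ts_rank(to_tsvector('english', content),
                   websearch_to_tsquery('english', %s)) AS search_rank
    FROM comment
    WHERE to_tsvector('english', content) @@ websearch_to_tsquery('english', %s)
    ORDER BY search_rank DESC
    LIMIT 20;
    """,
    ("enum", "enum"),
)
print(cur.fetchall())
```

In a real implementation you’d store the tsvector in a column with a GIN index rather than computing it per query, and the dictionary name would come from that per-content language setting rather than being hardcoded to 'english'.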
Eh, I don’t think it’s that surprising. Getting a list of comments on a post vs getting them from a search term are very similar operations, so it doesn’t make too much sense for these to have different queries in the backend. One thing you could do, but no client to my knowledge does, is add a search bar to a post that searches through the comments only within that thread.
Everything in the backend uses the same sorting as the posts on that page except comments, which is frustrating. Comments do need a different sort enum, as there are some options that don’t apply to them (scaled, new comments, etc.), but yeah, the fact that the top options don’t work for comment search when they look like they should is opaque and not user friendly.
I can’t wait for 1.0 to actually come out, because I feel like a broken record: this is fixed there.
Ah, right, comments don’t actually support Top with time ranges in 0.19: https://join-lemmy.org/docs/users/03-votes-and-ranking.html#sorting-comments, my bad. The fact lemmy-ui still shows all of them is very confusing, I’ll admit.
This is different in 1.0, the logic was changed for both posts and comments so these will work in future.
Doing it this way is why small instances get hammered when a user’s post goes viral.
Setting up caching in the reverse proxy layer would alleviate a lot of this. Like, GoToSocial only recommends setting up caching for the key and webfinger endpoints, when having it cache posts and profiles for like 60 seconds (or however long the Cache-Control header says, Mastodon defaults to 180s) would reduce the strain on the server so much.
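If you want to see what the upstream already advertises before tuning the proxy, a quick check like this works; the URL is made up, and instances with authorized fetch enabled will reject an unsigned request like this one:

```python
import urllib.request

# Made-up URL; substitute a post or profile URL from the instance in question.
url = "https://mastodon.example/users/alice"

req = urllib.request.Request(url, headers={"Accept": "application/activity+json"})
with urllib.request.urlopen(req) as resp:
    # Whatever max-age the upstream sends is a reasonable starting point for
    # how long the reverse proxy should cache the response.
    print(resp.status, resp.headers.get("Cache-Control"))
```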
There are other things you can do (this post explains some for Misskey, for example), but the defaults should be sensible so you don’t have to be a sysadmin expert to host an instance, and currently they’re not. I host two Lemmy instances (ukfli.uk and sappho.social) from a £5/month VPS and they’re able to handle bursts of hundreds of requests without issue.
Bluesky is built to assume a handful of big relays (remember that a relay can merge in the contents of another), a bunch of appviews, and a ton of PDS servers, feed generators, moderation labelers, etc.
People are already building small, non-archival relays, so this assumption seems moot. It’s also important to remember that relays are an optimisation, not a core part of the protocol.
I mean, this would become less trivial the more relays go into use, where to get a full view you’d have to pull from all the relays that exist.
ActivityPub’s solution to this is just IMO better: the original post has a replies collection attached to it that acts as the authority on which replies the post has. This also allows creators to eject replies from the collection. There are issues with the way fedi software currently handles fetching from these reply collections, but the missing replies thing is very solvable in ActivityPub.
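Roughly what that looks like on the wire, as a trimmed, made-up sketch following the ActivityStreams vocabulary (not the output of any particular server):

```python
# A Note carrying its own authoritative replies collection; every ID and
# value here is invented for illustration.
note = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Note",
    "id": "https://example.social/notes/1",
    "content": "original post",
    "replies": {
        "type": "Collection",
        "id": "https://example.social/notes/1/replies",
        "totalItems": 2,
        # The origin server controls this list, which is also what lets the
        # author drop a reply they don't want associated with the post.
        "items": [
            "https://other.example/notes/99",
            "https://another.example/objects/abc",
        ],
    },
}

print(note["replies"]["totalItems"])
```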
Interesting, from what I understand of ATProto this would be hard to do on protocol, so it’ll be fascinating to see how they do it. Maybe something off protocol, like the recent bookmarks feature Bluesky got.
I’d love to take credit for this, but the ATProto docs themselves make this comparison which is where I’m getting this from.
This sent me down a bit of a rabbit hole. It seems (streams) used an updated version of Zot, Zot/11, which was renamed to just Nomad. I can’t find much about it though, the (streams) repo only contains the spec for Zot/6, so I’m not sure about its APub compatibility. Apparently, Nomad has been discontinued in Forte in favour of pure APub, anyway.
Oh, I know about Silverpill’s work, it’s really interesting! I even mentioned it recently. I’m glad we have someone smart like them working on this stuff.
I do think some kind of separation of user data from servers, like what AT Proto does, is actually quite desirable. I just don’t like that PDSes can have their data harvested by whoever, I think data sharing with a server should be opt-in.