This post was submitted on 28 May 2024
70 points (92.7% liked)

This is just a follow-up to my prior post on latencies increasing with uptime (see here).

There was a recent update to lemmy.ml (to 0.19.4-rc.2) ... and everything is so much snappier. AFAICT, there isn't any obvious reason for this in the update itself(?) ... so it'd be a good bet that there's some memory leak or something that slows down some of the actions over time.

Also ... interesting update ... I hadn't picked up that there'd be some web-UI additions, and they seem nice!

top 17 comments
[–] [email protected] 26 points 5 months ago

There were optimizations related to database triggers; these are probably responsible for the speedup.

https://github.com/LemmyNet/lemmy/pull/4696
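
As a rough illustration of the general idea only (not the PR's actual code — the PR changes PostgreSQL triggers, while this is a made-up Python/SQLite sketch with invented table names), here's the difference between bumping an aggregate counter once per affected row versus recomputing it in one batched statement:

```python
# Rough, made-up sketch (Python + SQLite, invented table names) of why
# per-row counter bookkeeping -- effectively what a row-level trigger
# does -- costs more than one batched, set-based update.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE comment (id INTEGER PRIMARY KEY, post_id INTEGER);
    CREATE INDEX comment_post_idx ON comment (post_id);
    CREATE TABLE post_aggregates (post_id INTEGER PRIMARY KEY, comments INTEGER NOT NULL DEFAULT 0);
""")
conn.executemany("INSERT INTO post_aggregates (post_id) VALUES (?)",
                 [(p,) for p in range(100)])
new_comments = [(i, i % 100) for i in range(20_000)]
conn.executemany("INSERT INTO comment (id, post_id) VALUES (?, ?)", new_comments)

# Per-row style: one UPDATE of the counter for every inserted comment.
start = time.perf_counter()
for _, post_id in new_comments:
    conn.execute(
        "UPDATE post_aggregates SET comments = comments + 1 WHERE post_id = ?",
        (post_id,))
print(f"per-row updates:    {time.perf_counter() - start:.3f}s")

# Batched style: recompute all affected counters in a single statement.
start = time.perf_counter()
conn.execute("""
    UPDATE post_aggregates
    SET comments = (SELECT COUNT(*) FROM comment
                    WHERE comment.post_id = post_aggregates.post_id)
""")
print(f"one batched update: {time.perf_counter() - start:.3f}s")
conn.close()
```

This is only an analogy for the shape of the change, not a claim about what the linked PR actually does.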

[–] [email protected] 22 points 5 months ago (2 children)

For the moment at least. Whatever problem we had before, it seemed to get worse over time, eventually requiring a restart. So we’ll have to wait and see.

[–] [email protected] 10 points 5 months ago (1 children)

Well, I've been on this instance through a few updates now (since Jan 2023), and my impression is that it's a pretty regular pattern (i.e., certain APIs, like those for replying to a post/comment or even posting, show increasing latencies as uptime goes up).
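
If anyone wants to put numbers on that pattern, here's a minimal probe sketch (the instance URL and polling interval are just example values, and it times a public read endpoint as a proxy rather than the authenticated posting/replying calls mentioned above):

```python
# Minimal latency probe: times a simple read-only Lemmy API call at a
# fixed interval, so latency can be compared against server uptime.
# Instance URL and interval below are example values only.
import time
import urllib.request

INSTANCE = "https://lemmy.ml"
ENDPOINT = "/api/v3/site"   # lightweight read-only call
INTERVAL_SECONDS = 300

while True:
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(INSTANCE + ENDPOINT, timeout=30) as resp:
            resp.read()
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{time.strftime('%Y-%m-%d %H:%M:%S')} {elapsed_ms:.0f} ms")
    except Exception as exc:  # network errors, timeouts, etc.
        print(f"{time.strftime('%Y-%m-%d %H:%M:%S')} error: {exc}")
    time.sleep(INTERVAL_SECONDS)
```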

[–] [email protected] 1 points 5 months ago (1 children)

Sounds exactly like the problem I fixed and mostly caused.

https://github.com/LemmyNet/lemmy/pull/4696

[–] [email protected] 1 points 5 months ago

Nice! Also nice to see some SQL wizardry get involved with Lemmy!

[–] [email protected] 5 points 5 months ago (1 children)

My server seems to get slower until requiring a restart every few days, hoping this provides a fix for me too 🤞

[–] [email protected] 5 points 5 months ago (1 children)

Try switching to PostgreSQL 16.2 or later.
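
For anyone unsure what their instance is running, a minimal check sketch (connection details are placeholders for your own setup, and psycopg2 is just one way to ask the server):

```python
# Quick check of which PostgreSQL version your Lemmy database is running,
# before deciding whether the 16.2+ advice applies.
import psycopg2  # pip install psycopg2-binary

conn = psycopg2.connect(
    host="localhost",      # placeholder: your database host
    dbname="lemmy",        # placeholder: your Lemmy database name
    user="lemmy",          # placeholder
    password="change-me",  # placeholder
)
with conn, conn.cursor() as cur:
    cur.execute("SHOW server_version;")
    print("PostgreSQL server version:", cur.fetchone()[0])
conn.close()
```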

[–] [email protected] 3 points 5 months ago (1 children)

Nothing particular, but there was a strange bug in previous PostgreSQL versions that, in combination with Lemmy, caused a small memory leak.

[–] [email protected] 1 points 5 months ago (1 children)

In my case it's Lemmy itself that needs to be restarted, not the database server. Is this the same bug you're referring to?

[–] [email protected] 1 points 5 months ago (1 children)

Yes, restarting Lemmy somehow resets the memory use of the database as well.

[–] [email protected] 1 points 5 months ago

Hm, weird bug. Thanks for the heads-up ❤️ I've been using the official Ansible setup, but it might be time to switch away from it.

[–] [email protected] 4 points 5 months ago (1 children)

Reddthat is on 0.19.4 too, and it does indeed feel snappier.

[–] [email protected] 2 points 5 months ago (1 children)

Interesting. It could be for the same reason I suggested for lemmy.ml, though. Do you notice latencies getting longer over time?

[–] [email protected] 3 points 5 months ago (1 children)

It's a smaller server, so I guess latency issues would appear at a slower pace than on lemmy.ml.

[–] [email protected] 2 points 5 months ago (1 children)

Makes sense ... but still ... you're noticing a difference. Maybe a "boiling frog" situation?

[–] [email protected] 2 points 5 months ago

I would say it still feels snappier today than before the update (a couple of weeks ago?), so it's definitely an improvement.