Max_P

joined 1 year ago
[–] [email protected] 14 points 11 hours ago (4 children)

I think part of it is that English is just the default language and already leans strongly American, so there's no demand for a USA instance; people just use the popular or thematic ones for that content. There's no legal advantage to preferring US hosting either.

The country ones make sense because they're also in a different language, like jlai.lu in French, and the feddits for other European languages.

[–] [email protected] 37 points 1 day ago

A lot of them got sucked into the whole "the government is forcing it on you to control the population" narrative, and they simply can't comprehend that anyone would voluntarily wear what they now consider the symbol of submission to the government. In their mind it doesn't work and never worked, and you're just virtue signalling your support of the government. It's wild and a lost cause.

I'd expect it to get much worse now.

[–] [email protected] 6 points 2 days ago (1 children)

That should mostly be the default. My secondary Vega 64 reports using only 3W, which would be worth chasing on a laptop, but I doubt 3W meaningfully affects your electricity bill. It's nothing compared to the overall power draw of the rest of the desktop and the monitors. Pretty sure even my fans use more.

The best way to address this would be to first take proper measurements. Maybe get a Kill A Watt and measure usage with and without the card installed to get the true draw at the wall. Also maybe get a baseline with as little hardware as possible. With that data you can calculate roughly how much it costs to run the PC and how much each component costs, and from there it's easier to decide if it's worth it.

Just the electric bill being higher isn't a lot to go on. It could just be that it's getting cold, or hot. Little details can really throw expectations off. For example, mining crypto during the winter is technically cheaper than not mining for me because I have electric heat: between 500W going into a heating strip or 500W going into mining, both produce the same amount of heat in the room, but one of them also makes me a few cents as a byproduct. You have to consider that when you're optimizing for cost rather than maximizing battery life on a laptop.
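To put numbers on it, here's a back-of-the-envelope sketch. The wattages come from the comment; the $0.15/kWh rate and 24/7 duty cycle are assumptions you'd swap for your own meter readings:

```python
# Rough monthly electricity cost from a wattage reading.
# rate_per_kwh is an assumed placeholder; check your actual bill.
def monthly_cost(watts, hours_per_day=24, rate_per_kwh=0.15, days=30):
    kwh = watts * hours_per_day * days / 1000  # watt-hours -> kWh
    return kwh * rate_per_kwh

print(round(monthly_cost(3), 2))    # idle secondary GPU at 3 W -> 0.32
print(round(monthly_cost(500), 2))  # 500 W heater or mining rig -> 54.0
```

A 3W idle card is cents per month; a 500W load is tens of dollars, which is the scale at which the heating-vs-mining trade-off above starts to matter.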

[–] [email protected] 5 points 4 days ago (1 children)

One thing to be careful with when allowing some bending of the rules is that some people will start testing how far they can bend them. Every time you bend a rule you also create a precedent for it, and you get into nasty fights of "why was I banned but not them" and have your clemency hit you right back in the face.

If it's okay to bend some rules, then that should explicitly be the rule instead. Take off-topic discussions: you can have a rule like "all top-level comments should be on topic" as a balance, so off-topic discussions can happen without taking over the whole comment section. If you allow something, make a mod comment explaining why, for transparency and to set the right expectations: "This post is off-topic but is generating on-topic discussion so we're keeping it."

Similarly, well-designed punishments go a long way. For example, an automatic ban after N warnings can be unfair. What you're really after is not having to warn the same user every day to stay on topic, so the punishment can be more like "more than 3 warnings within 10 days results in a 7-day ban". But sometimes the situation is such that you can rack up 10 warnings in the same thread, so you can make the punishment account for that: "if you get warned more than 3 times over a 14-day period, you will be banned for 7 days". Or count per thread, whatever makes sense. Understand the common mistakes community members make and how you can steer them in the right direction without being unnecessarily harsh.
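The rolling-window rule above can be sketched in a few lines. This is purely illustrative (not any real mod tool's API); the thresholds are the ones from the example:

```python
# Sketch of a rolling-window warning policy: ban only when a user
# collects more than MAX_WARNINGS warnings within the last WINDOW.
from datetime import datetime, timedelta

WINDOW = timedelta(days=14)
MAX_WARNINGS = 3

def should_ban(warning_times, now):
    """warning_times: datetimes when warnings were issued."""
    recent = [t for t in warning_times if now - t <= WINDOW]
    return len(recent) > MAX_WARNINGS

warnings = [datetime(2024, 1, d) for d in (1, 2, 3, 10)]
print(should_ban(warnings, datetime(2024, 1, 11)))  # True: 4 warnings inside the window
print(should_ban(warnings, datetime(2024, 1, 20)))  # False: the early warnings aged out
```

The point of the window is exactly what the comment describes: old warnings expire instead of accumulating forever, so one bad week doesn't haunt a member for years.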

With those two combined, it shouldn't matter whether you moderate like a robot or not. The expectations are clear, forgiving, and fair, while still enforcing some order for repeat offenders. The rules have the flexibility you need baked in, so you don't have to bend them.

[–] [email protected] 11 points 4 days ago (1 children)

I guarantee there will be questions about the cost of setup, maintenance, and risks.

And time moderating it, especially if they run their own. At least with Twitter/Facebook/YouTube, you get a lot of moderation for free whether you agree with it or not.

And if they use another instance, there are other liability questions about the particular instance chosen. If it's going to host an official city account, you'd expect cybersecurity certifications to be a requirement, and all kinds of other stuff, even if it's a free service. The instance admins interfering, possibly steering opinions during city elections, etc.

Nobody cares about decentralized social networks, the technology, or how terrible the other outlets are. For a municipality, you may want to focus on maintaining multiple channels of communication and on ways to reach and engage the most users, then fold the fediverse in as one more channel worth keeping an eye on. They'll need a way to post the same content to all those channels with the least effort; something easy that a trained intern or clerk can do.

In this case, IMO, it might even be better to use something like WordPress with the ActivityPub plugin, or an alternative to it. I imagine a city mostly posts announcements and the like, so a blog that serves as the official website, and that you can follow and interact with from the comfort of your preferred social service, sounds a lot more appealing than yet another social network without many users. They can even use more plugins to post to Facebook and Twitter as well, all from one place. Given the age of the board, they're also more likely to know and care about Threads and Bluesky compatibility just because those have more users, and bureaucratic decisions are based on numbers. A nice graph showing that by supporting AP and AT they'd capture all the users fleeing Twitter could go a long way.

[–] [email protected] 2 points 5 days ago (1 children)

Post the Hyprland config too?

Does it make the entire screen green by chance, not just the windows? If the shader applies to the whole screen, then setting alpha on it doesn't really make sense and is probably discarded, since your monitor can't display transparency. You need to make sure it applies on a per-window basis for the windows to be composited as transparent and show your wallpaper behind them.
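If the end goal is just translucent windows over the wallpaper, per-window opacity rules may be simpler than a shader. A hypothetical fragment (the `windowrulev2` syntax and the `kitty` class here are assumptions; check the Hyprland wiki for your version):

```
# Assumed Hyprland config fragment: make one window class translucent,
# so the compositor blends it against the wallpaper per-window.
windowrulev2 = opacity 0.85 0.85,class:^(kitty)$
```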

[–] [email protected] 1 points 5 days ago (1 children)

You might be able to convince Firefox to be transparent with a GTK theme, since Firefox uses GTK under the hood for the window. If you're lucky, it won't bother clearing it with black just in case, and it'll actually be transparent.

The shader option is pretty neat and easy though.

[–] [email protected] 18 points 5 days ago (2 children)

It's nicknamed the autohell tools for a reason.

It's neat, but most of its functionality is completely useless to most people. The autotools are so old I think they even predate Linux itself, so they're designed for portability between the UNIXes of the time: they check the compiler's capabilities and supported features and try to find paths. They also wildly predate package managers; back then they were the official way to install things, so they also had to check for and locate every dependency. Nowadays you might as well just write a PKGBUILD if you want to install it, or a Dockerfile. There's just no need to check for 99% of the stuff the autotools check: everything they test for has probably been a standard compiler feature for at least the last decade, and the package manager can ensure the build dependencies are present.

Ultimately that whole process just generates a Makefile via M4 macros, and the Makefiles it generates look about as good as any other generated Makefile from the likes of CMake and Meson. So you might as well write your Makefile by hand, or use a better tool when it's time to generate one.
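For comparison, a hand-written Makefile for a small C project (file and program names here are made up for illustration) already covers what most projects targeting a modern Linux need:

```make
CC      ?= cc
CFLAGS  ?= -O2 -Wall
PREFIX  ?= /usr/local

# make's built-in %.o: %.c rule compiles the objects
myprog: main.o util.o
	$(CC) $(CFLAGS) -o $@ $^

install: myprog
	install -Dm755 myprog $(DESTDIR)$(PREFIX)/bin/myprog

clean:
	rm -f myprog *.o

.PHONY: install clean
```

No feature probing, no generated 10,000-line configure script; the package manager supplies the toolchain and dependencies.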

(If only C++ build systems caught up to Golang lol)

At least it's not node_modules

[–] [email protected] 9 points 6 days ago (1 children)

I went for a federated option specifically so that it's resistant to one company going rogue like Reddit did with the API fiasco and the banning of every third-party app that made Reddit great. That's really the killer feature: if you're tired of your admins, you go to another instance. No need to protest and switch your subs to private, just move the whole community elsewhere.

[–] [email protected] 4 points 1 week ago

Yeah, and you're pinging from server to client with no client connected. Ping from the client first to open the connection, or set keepalives on the client.

[–] [email protected] 3 points 1 week ago (2 children)

Your peer has no Endpoint configured, so the client needs to connect to the server first for the server to learn where the client is. Try from the client, and it'll work both ways for a while.

You'll want the persistent keepalive option on the client side to keep the tunnel alive.
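A client-side config sketch to illustrate (keys, addresses, and the endpoint are placeholders, not from the original post):

```
# Client-side WireGuard config sketch
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.2/24

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.0.0.0/24
# Send a packet every 25 s so the server always has a current
# endpoint for the client, even behind NAT
PersistentKeepalive = 25
```

Note the server's peer section for this client would have no Endpoint line at all, which is why the first packet has to come from the client.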

[–] [email protected] 7 points 1 week ago (1 children)

They should be in /run/systemd (the generator, generator.early, and generator.late subdirectories), alongside the rest of the generated units.


Neat little thing I just noticed, might be known but I never heard of it before: apparently, a Wayland window can vsync to at least 3 monitors with different refresh rates at the same time.

I have 3 monitors: 60 Hz, 144 Hz, and 60 Hz from left to right. I was using glxgears to test something and noticed that when I put the window between two monitors, it synced to a weird refresh rate of about 193 fps. I stretched it to span all 3 monitors, and it locked at about 243 fps, gradually oscillating between 242.5 and 243.5. So apparently it's mixing the vsync signals together, ensuring every monitor gets a fresh frame while sharing frames when the vsyncs line up.

I knew Wayland was big on "every frame is perfect", but I didn't expect that to work even across 3 monitors at once! We've come a long, long way in the graphics stack. I expected it to sync to the 144 Hz monitor and just tear or hiccup on the others.
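A rough model of those numbers (my own back-of-the-envelope guess, not anything from the Wayland spec): treat each monitor's vsync as a periodic tick, and assume the window renders one frame per *distinct* tick. For two phase-locked displays, ticks coincide gcd(f1, f2) times per second, so:

```python
from math import gcd

# Toy model: combined frame rate across two phase-locked monitors is
# the number of distinct vsync ticks per second, f1 + f2 minus the
# coincident ticks (which occur gcd(f1, f2) times per second).
def combined_rate(f1, f2):
    return f1 + f2 - gcd(f1, f2)

print(combined_rate(60, 144))  # 192, close to the ~193 fps observed
```

For the three-monitor case, if the two 60 Hz panels aren't phase-locked to each other, almost none of their ticks coincide, giving roughly 60 + 144 + 60 minus the two 144/60 overlaps ≈ 240, in the ballpark of the observed ~243 (the slow 242.5–243.5 oscillation would then be the phases drifting).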
