Molecular0079

joined 1 year ago
[–] [email protected] 4 points 1 month ago (1 children)

I've always hated it and thought it was a stupid, unintuitive mechanic that didn't map to anything in real life. It also looks equally stupid in multiplayer when you see player character models spasm their way up a ledge during a crouch jump. It's an old-school mechanic that I am glad is going out of fashion thanks to better vault controls.

> like a simulation of pulling your legs up in real life.

You don't pull your legs up in real life though, you use your hands to vault onto something. You can't just swap stances in mid-air without holding onto anything. Even if you were talking about box jumps, like the kind you normally do at a gym, it still isn't anything remotely like a crouch jump. Also, anyone doing a box jump in an actual combat situation just looks goofy.

Any time a game explicitly has a tutorial for crouch jump, my immersion is completely broken. I am instantly reminded that it is a game.

[–] [email protected] 2 points 1 month ago

Performance parity? Heck no, not until this bug with the GSP firmware is solved: https://github.com/NVIDIA/open-gpu-kernel-modules/issues/538

[–] [email protected] 8 points 1 month ago (2 children)

This. It all boils down to value for money. 5 dollars for a cosmetic skin is bullshit. 5 dollars or more for DLC with meaningful content is okay.

[–] [email protected] 1 points 2 months ago (1 children)

Some people have reported that installing the 32-bit version of the Mesa VA-API (libva) drivers makes it work for them. Might be worth a shot.
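
On Arch that would be something along these lines, assuming the multilib repo is enabled (the package name may differ on other distros):

sudo pacman -S lib32-libva-mesa-driver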

[–] [email protected] 1 points 3 months ago

Usually yes, but it doesn't apply to BG3. The Vulkan renderer has been terribly broken ever since Patch 3.

[–] [email protected] 1 points 3 months ago* (last edited 3 months ago)

They borked the Vulkan renderer somewhere around Patch...3 I think? It used to be so performant, but now it only runs at 40-60 fps on my Nvidia 3090, compared to the DX11 renderer, which can render at 80-120. T_T

[–] [email protected] 3 points 3 months ago

Valve should totally hook her up with one of those dev accounts that have access to games, so she doesn't have to pay for them herself or rely on getting them gifted. She's doing valuable work for the ecosystem.

[–] [email protected] 3 points 3 months ago

Yeah, in a Reddit comment, Hector Martin himself said that the memory bandwidth on the Apple Silicon GPU is so high that any potential performance problems due to TBDR vs IMR are basically insignificant.

...which is funny, because I had another Reddit user swear up and down that TBDR was a big problem and that's why Apple decided not to support Vulkan and instead is forcing everyone to use Metal.

[–] [email protected] 4 points 3 months ago (2 children)

I've heard something about Apple Silicon GPUs being tile-based rather than immediate mode like regular PC GPUs, which means Vulkan's usual assumptions don't map cleanly onto them. How has this been addressed in the Vulkan driver?

[–] [email protected] 4 points 3 months ago

Huge fucking deal, especially for Nvidia users, but it is great for the entire ecosystem. Other OSes have had explicit sync for ages, so it is great for Linux to finally catch up in this regard.

[–] [email protected] 10 points 3 months ago

You're correct. While the stable version of KDE Wayland is usable right now with the new driver, with no flickering issues and the like, it technically does not have the patches necessary for explicit sync. Nvidia has put some workarounds in the 555 driver code to prevent flickering without explicit sync, but those are slower code paths.

The AUR has a package called kwin-explicit-sync, which is just the latest stable kwin with the explicit sync patches applied. This, combined with the 555 drivers, makes explicit sync work, finally solving the flickering issues in a fast, performant way.

I've tested with both kwin and kwin-explicit-sync and the latter has dramatically improved input latency. I am basically daily driving Wayland now and it is awesome.
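
If you want to try it, it should just be a matter of swapping out the stock kwin package, assuming an AUR helper like paru:

paru -S kwin-explicit-sync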

[–] [email protected] 1 points 4 months ago

I love Nextcloud Talk, but my biggest annoyance with it is that text chats don't properly scroll to the bottom when new messages come in.

 

I've been trying to migrate my services over to rootless Podman containers for a while now and I keep running into weird issues that always make me go back to rootful. This past weekend I almost had it all working until I realized that my reverse proxy (Nginx Proxy Manager) wasn't passing the real source IP of client requests down to my other containers. This meant that all my containers were seeing requests coming solely from the IP address of the reverse proxy container, which breaks things like Nextcloud's brute-force protection, etc. It's apparently due to this Podman bug: https://github.com/containers/podman/issues/8193

This is the last step before I can finally switch to rootless, so it makes me wonder what all you self-hosters out there are doing with your rootless setups. I can't be the only one running into this issue, right?

If anyone's curious, my setup consists of several docker-compose files, each handling a different service. Each service has its own dedicated Podman network, but only the proxy container connects to all of them to serve outside requests. This way each service is separated from the others, and the only ingress from the outside is via the proxy container. I can also easily have duplicate instances of the same service without having to worry about port collisions, etc. Not being able to see the real client IP really sucks in this situation.
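
For the curious, each stack looks roughly like this (all names here are made up for illustration); the proxy stack is the only one that joins every service network:

# One service stack, e.g. nextcloud/docker-compose.yml
services:
  app:
    image: docker.io/library/nextcloud:latest
    networks:
      - nextcloud-net   # dedicated network, no published ports
networks:
  nextcloud-net:
    name: nextcloud-net

# The proxy stack, e.g. proxy/docker-compose.yml
services:
  proxy:
    image: docker.io/jc21/nginx-proxy-manager:latest
    ports:
      - "80:80"
      - "443:443"       # the only ingress from the outside
    networks:
      - nextcloud-net   # one entry like this per service network
networks:
  nextcloud-net:
    external: true      # created by the service stack above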

 

On one of my machines, I am completely unable to log out. The behavior is slightly different depending on whether I am in Wayland or X11.

Wayland

  1. Clicking log out and then OK in the logout window brings me back to the desktop.
  2. Doing this again does the same thing.
  3. Clicking log out for a third time does nothing.

X11

  1. Clicking log out will lead me to a black screen with just my mouse cursor.

In my journalctl logs, I see:

Apr 03 21:52:46 arch-nas systemd[1]: Stopping User Runtime Directory /run/user/972...
Apr 03 21:52:46 arch-nas systemd[1]: run-user-972.mount: Deactivated successfully.
Apr 03 21:52:46 arch-nas systemd[1]: user-runtime-dir@972.service: Deactivated successfully.
Apr 03 21:52:46 arch-nas systemd[1]: Stopped User Runtime Directory /run/user/972.
Apr 03 21:52:46 arch-nas systemd[1]: Removed slice User Slice of UID 972.
Apr 03 21:52:46 arch-nas systemd[1]: user-972.slice: Consumed 1.564s CPU time.
Apr 03 21:52:47 arch-nas systemd[1]: dbus-:[email protected]: Deactivated successfully.
Apr 03 21:52:47 arch-nas systemd[1]: dbus-:[email protected]: Deactivated successfully.
Apr 03 21:52:47 arch-nas systemd[1]: dbus-:[email protected]: Deactivated successfully.
Apr 03 21:52:48 arch-nas systemd[1]: dbus-:[email protected]: Deactivated successfully.
Apr 03 21:52:54 arch-nas systemd[4500]: Created slice Slice /app/dbus-:1.2-org.kde.LogoutPrompt.
Apr 03 21:52:54 arch-nas systemd[4500]: Started dbus-:[email protected].
Apr 03 21:52:54 arch-nas ksmserver-logout-greeter[5553]: qt.gui.imageio: libpng warning: iCCP: known incorrect sRGB profile
Apr 03 21:52:54 arch-nas ksmserver-logout-greeter[5553]: kf.windowsystem: static bool KX11Extras::compositingActive() may only be used on X11
Apr 03 21:52:54 arch-nas plasmashell[5079]: qt.qpa.wayland: eglSwapBuffers failed with 0x300d, surface: 0x0
Apr 03 21:52:55 arch-nas systemd[4500]: Created slice Slice /app/dbus-:1.2-org.kde.Shutdown.
Apr 03 21:52:55 arch-nas systemd[4500]: Started dbus-:[email protected].
Apr 03 21:52:55 arch-nas systemd[4500]: Stopped target plasma-workspace-wayland.target.
Apr 03 21:52:55 arch-nas systemd[4500]: Stopped target KDE Plasma Workspace.
Apr 03 21:52:55 arch-nas systemd[4500]: Requested transaction contradicts existing jobs: Transaction for graphical-session.target/stop is destructive (drkonqi-coredump-pickup.service has 'start' job queued, but 'stop' is included in transaction).
Apr 03 21:52:55 arch-nas systemd[4500]: graphical-session.target: Failed to enqueue stop job, ignoring: Transaction for graphical-session.target/stop is destructive (drkonqi-coredump-pickup.service has 'start' job queued, but 'stop' is included in transaction).
Apr 03 21:52:55 arch-nas systemd[4500]: Stopped target KDE Plasma Workspace Core.
Apr 03 21:52:55 arch-nas systemd[4500]: Stopped target Startup of XDG autostart applications.
Apr 03 21:52:55 arch-nas systemd[4500]: Stopped target Session services which should run early before the graphical session is brought up.
Apr 03 21:52:55 arch-nas systemd[4500]: dbus-:[email protected]: Main process exited, code=exited, status=1/FAILURE
Apr 03 21:52:55 arch-nas systemd[4500]: dbus-:[email protected]: Failed with result 'exit-code'.

I've filed an upstream bug for this, but I was wondering if anyone else here is experiencing the same issue.

 

Currently, I have SSH, VNC, and Cockpit set up on my home NAS, but I have run into situations where I lose remote access because I did something stupid to the network connection or some update broke the boot process, causing it to get stuck in the BIOS or bootloader.

I am looking for a separate device that will let me not only access the NAS as if I had another keyboard, mouse, and monitor present, but also power cycle it in extreme situations (hard freeze, etc.). Some googling has turned up the term KVM-over-IP, but I was wondering if any of you have trustworthy recommendations.

 

cross-posted from: https://lemmy.world/post/4930979

Bcachefs making progress towards getting included in the kernel. My dream of having a Linux-native RAID5-capable filesystem is getting closer to reality.

 


cross-posted from: https://lemmy.world/post/3989163

I've been messing around with Podman on Arch and porting my self-hosted services over to it. However, it's been finicky, and I am wondering if anybody here could help me out with a few things.

  1. Some of my containers aren't getting properly started up by podman-restart.service on system reboot. I realized they were the ones that depended on my slow external BTRFS drive. Currently it's mounted with x-systemd.automount,x-systemd.device-timeout=5 so that it doesn't hang up the boot if I disconnect it, but it seems like Podman doesn't like this. If I remove the systemd options, the containers properly start up automatically, but I risk boot hangs if the drive ever gets disconnected from my system. I have already tried x-systemd.before=podman-restart.service and x-systemd.required-by=podman-restart.service, and even tried increasing the device-timeout, to no avail.

When it attempts to start the container, I see this in journalctl:

Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Got automount request for /external, triggered by 3130 (3)
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Automount point already active?
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Got automount request for /external, triggered by 3130 (3)
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Automount point already active?
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Got automount request for /external, triggered by 3130 (3)
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Automount point already active?
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Got automount request for /external, triggered by 3130 (3)
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Automount point already active?
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Got automount request for /external, triggered by 3130 (3)
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Automount point already active?
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Got automount request for /external, triggered by 3130 (3)
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Automount point already active?
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Got automount request for /external, triggered by 3130 (3)
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Automount point already active?
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Got automount request for /external, triggered by 3130 (3)
Aug 27 21:15:46 arch-nas systemd[1]: external.automount: Automount point already active?
Aug 27 21:15:46 arch-nas systemd[1]: libpod-742b4595dbb1ce604440d8c867e72864d5d4ce1f2517ed111fa849e59a608869.scope: Deactivated successfully.
Aug 27 21:15:46 arch-nas conmon[3124]: conmon 742b4595dbb1ce604440 : runtime stderr: error stat'ing file `/external/share`: Too many levels of symbolic links
Aug 27 21:15:46 arch-nas conmon[3124]: conmon 742b4595dbb1ce604440 : Failed to create container: exit status 1
  2. When I shut down my system, it has to wait 90 seconds for libcrun and libpod-conmon-.scope to time out. Any idea what's causing this? This delay gets pretty annoying, especially on an Arch system, since I am constantly restarting due to updates.

All the containers are started using docker-compose with podman-docker if that's relevant.

Any help appreciated!

EDIT: So it seems like podman really doesn't like systemd automount. Switching to nofail,x-systemd.before=podman-restart.service seems like a decent workaround, if anyone's interested.
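
For reference, the full /etc/fstab entry now looks something like this (the UUID and mount point are placeholders):

UUID=XXXX-XXXX  /external  btrfs  nofail,x-systemd.before=podman-restart.service  0 0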

 


cross-posted from: https://lemmy.world/post/3754933

While experimenting with ProtonVPN's WireGuard configs, I realized that my real IPv6 address was leaking while IPv4 was correctly going through the tunnel. How do I prevent this from happening?

I've already tried adding ::/0 to the AllowedIPs option and IPv6 is listed as disabled in the NetworkManager profile.
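
For reference, the [Peer] section with my ::/0 attempt looks like this (keys and endpoint omitted), and disabling IPv6 on the profile amounts to the nmcli command below (the profile name is just an example):

[Peer]
PublicKey = <omitted>
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = <omitted>

nmcli connection modify protonvpn ipv6.method disabled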

 


cross-posted from: https://lemmy.world/post/1313651

Every time I plug my Quadcast S USB mic into my Arch Linux box, I can't properly go into deep sleep. Unplugging it before attempting to sleep makes it work again, but it's annoying to have to do that every time. How do I debug this, and where do I even submit a bug report for it?

Here's the relevant journalctl:

Jul 10 11:59:39 systemd[1]: Starting System Suspend...
Jul 10 11:59:39 systemd-sleep[70254]: Entering sleep state 'suspend'...
Jul 10 11:59:39 kernel: PM: suspend entry (deep)
Jul 10 11:59:39 kernel: Filesystems sync: 0.066 seconds
Jul 10 11:59:42 kernel: Freezing user space processes
Jul 10 11:59:42 kernel: Freezing user space processes completed (elapsed 0.001 seconds)
Jul 10 11:59:42 kernel: OOM killer disabled.
Jul 10 11:59:42 kernel: Freezing remaining freezable tasks
Jul 10 11:59:42 kernel: Freezing remaining freezable tasks completed (elapsed 0.001 seconds)
Jul 10 11:59:42 kernel: printk: Suspending console(s) (use no_console_suspend to debug)
Jul 10 11:59:42 kernel: serial 00:04: disabled
Jul 10 11:59:42 kernel: sd 2:0:0:0: [sdb] Synchronizing SCSI cache
Jul 10 11:59:42 kernel: sd 1:0:0:0: [sda] Synchronizing SCSI cache
Jul 10 11:59:42 kernel: sd 5:0:0:0: [sdc] Synchronizing SCSI cache
Jul 10 11:59:42 kernel: sd 1:0:0:0: [sda] Stopping disk
Jul 10 11:59:42 kernel: sd 2:0:0:0: [sdb] Stopping disk
Jul 10 11:59:42 kernel: sd 5:0:0:0: [sdc] Stopping disk
Jul 10 11:59:42 kernel: ACPI: PM: Preparing to enter system sleep state S3
Jul 10 11:59:42 kernel: ACPI: PM: Saving platform NVS memory
Jul 10 11:59:42 kernel: Disabling non-boot CPUs ...
Jul 10 11:59:42 kernel: Wakeup pending. Abort CPU freeze
Jul 10 11:59:42 kernel: Non-boot CPUs are not disabled
Jul 10 11:59:42 kernel: ACPI: PM: Waking up from system sleep state S3
Jul 10 11:59:42 kernel: sd 5:0:0:0: [sdc] Starting disk
Jul 10 11:59:42 kernel: sd 2:0:0:0: [sdb] Starting disk
Jul 10 11:59:42 kernel: sd 1:0:0:0: [sda] Starting disk
Jul 10 11:59:42 kernel: serial 00:04: activated
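
Given the "Wakeup pending. Abort CPU freeze" line, my guess is the mic registers itself as a wakeup source. Something like this should show it and temporarily disable it (the 1-4 device path is just a placeholder, yours will differ):

grep . /sys/bus/usb/devices/*/power/wakeup                       # list the wakeup setting per USB device
echo disabled | sudo tee /sys/bus/usb/devices/1-4/power/wakeup   # placeholder device path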

Thanks in advance!
