Wayland. It comes up a lot: “Bug X fixed in the Plasma Wayland session.” “The Plasma Wayland session has now gained support for feature Y.” And it’s in the news quite …

[–] [email protected] 41 points 1 year ago (4 children)

Don't let Slack launch at startup. As long as it launches after Pipewire, everything works. You can also restart it to fix the sharing issue, but that can be a pain if you've already started a call.

[–] [email protected] 5 points 1 year ago (2 children)

Is there a way to control the launch order? I suppose you could also find a script that waits for a given process to be responsive before launching another, but I'm not sure where I'd insert that either.

(I've been using Ubuntu mostly out-of-the-box so far and just now started having the time and energy to start learning about and fiddling with the internals)

[–] [email protected] 13 points 1 year ago* (last edited 1 year ago)

If it launches via a systemd service, you can edit the unit file so that it depends on Pipewire and only starts after it.

Or disable the built-in startup support and create your own service that does the same.
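
For example, a user unit along these lines might work (a minimal sketch; the ExecStart path and the unit name are my guesses, adjust for your install):

```ini
# ~/.config/systemd/user/slack.service -- hypothetical unit
[Unit]
Description=Slack, delayed until PipeWire is up
# Order after, and pull in, the user-session PipeWire service
After=pipewire.service
Wants=pipewire.service

[Service]
ExecStart=/usr/bin/slack

[Install]
WantedBy=graphical-session.target
```

Then `systemctl --user daemon-reload` and `systemctl --user enable --now slack.service`.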

[–] [email protected] 8 points 1 year ago (1 children)

I'm not sure that would work. Pipewire probably starts via systemd (it just takes a while to become functional) and Slack is started by KDE. I guess you could just add a delay to Slack's start, but I just start it by hand.
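
If you do go the delay route, a crude sketch would be an autostart entry that wraps Slack in a sleep (the 10 seconds is an arbitrary guess, and the file name is a placeholder):

```ini
# ~/.config/autostart/slack-delayed.desktop -- hypothetical entry
[Desktop Entry]
Type=Application
Name=Slack (delayed)
# Wait a bit so PipeWire is up before Slack grabs it
Exec=sh -c "sleep 10; exec slack"
```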

[–] [email protected] 5 points 1 year ago

Starting by hand is fine, and I do it with just about anything I need anyway (though I suspect there's still some startup bloat I'll need to sort out, if I don't set up an entirely new system somewhere down the line). But don't underestimate my compulsion to automate what I can (or at least know how to).

I'm a sucker for automation for automation's sake :D

[–] [email protected] 2 points 1 year ago

Oh, thx for the tip!

[–] [email protected] 2 points 1 year ago (1 children)

That has nothing to do with Slack's screen sharing issues. Screen sharing was broken due to Electron bugs and it's fixed in Slack 4.34.

[–] [email protected] 1 points 1 year ago (1 children)

I'd argue the lazy choice of wrapping your website in Chromium instead of building a native app is Slack's issue.

I also wonder whether Slack fixed it or just waited for Google to fix it since Slack seems to only have UI designers and no actual devs on their team. They keep pumping out useless UI changes while actual bugs take years to fix.

[–] [email protected] 1 points 1 year ago (1 children)

Many Electron maintainers are Slack employees. They're contributing upstream more than most other companies that use Electron, especially compared to their size.

[–] [email protected] 1 points 1 year ago

Could you name a couple? Genuinely interested to check out their contributions.

Also, I just updated to 4.34.119 and screen sharing is still completely broken. As is typical with Slack.

[–] [email protected] 1 points 1 year ago (1 children)

> As long as it launches after Pipewire

Why? Why did Plasma nail its screen sharing to the audio server? There are already Wayland extensions for this.

[–] [email protected] 7 points 1 year ago (1 children)

Pipewire handles audio and video pipelines between applications.

[–] [email protected] 0 points 1 year ago (1 children)

So why involve Wayland at all instead of just Pipewire?

[–] [email protected] 7 points 1 year ago (1 children)

Because Pipewire only handles and understands media streams. It can stream the output of a window or the whole desktop, but only because the Wayland compositor has already composited the windows and other data it gets from applications into a final image, and hands that result to Pipewire.

[–] [email protected] 0 points 1 year ago (1 children)

Which goes back to the original question: why Pipewire if there are already Wayland extensions?

[–] [email protected] 2 points 1 year ago (1 children)

Because it is convenient for programs to use Pipewire for screensharing, as those programs can then also use the same Pipewire support for all their audio and webcam needs. Also Pipewire is good at multiplexing the various media streams.
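
You can see that on a running system: both of the commands below list audio devices, cameras, and per-application streams as ordinary Pipewire nodes (`wpctl` ships with WirePlumber, `pw-cli` with Pipewire itself):

```sh
wpctl status    # Audio and Video sections in one overview
pw-cli ls Node  # raw node listing: sinks, mics, cameras, app streams alike
```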

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

And what developers would nail their apps to one sound server implementation? What is convenient here? Losing interoperability? You can always use Wayland for screen sharing, ALSA for sound, and V4L2 for the webcam.

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago) (2 children)

For the multiplexing, as I mentioned.

A V4L2 camera can only be opened by a single application at a time, but if that application is Pipewire, then Pipewire can allow multiple applications to make use of it simultaneously. Same thing with ALSA; it's the reason sound servers exist at all, though I suspect you're already familiar with that.

I also hear that ALSA has some support for multiple applications per device nowadays, though I understand it is much less pleasant to use than a fully featured sound server.
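
As a rough illustration: with the GStreamer Pipewire plugin installed, you should be able to run two of these at once against the same camera, which a raw V4L2 open typically refuses (a sketch, untested on your setup):

```sh
# Run in two terminals; each gets its own stream from the shared camera node.
# If the default node isn't the camera, add target-object=<node id>
# using the id shown by `wpctl status`.
gst-launch-1.0 pipewiresrc ! videoconvert ! autovideosink
```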

[–] [email protected] 1 points 1 year ago

> I also hear that ALSA has some support for multiple applications per device nowadays, though I understand it is much less pleasant to use than a fully featured sound server.

FYI

Many older sound chips had hardware support for mixing multiple streams, and so the alsa drivers for those happily allowed multiple apps to open and write to the /dev/snd/whatever device. Life was good and people got used to doing it this way.

Nowadays (since like 2000 lol), sound chips generally expect a single pre-mixed stream. So the sound device for those is exclusive open. The libalsa devs made it possible to have the first app to open the sound device act as the sound server for every other app that tries to open it later. But it was complicated and fragile and just a bad idea in retrospect.
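
For reference, a hand-rolled dmix setup in `~/.asoundrc` looked something like this (a minimal sketch; card and device numbers are assumptions, check `aplay -l` for yours):

```
# ~/.asoundrc
pcm.!default {
    type plug          # plug adds format/rate conversion on top of dmix
    slave.pcm "dmixer"
}

pcm.dmixer {
    type dmix
    ipc_key 1024       # any unique integer, shared by all apps
    slave {
        pcm "hw:0,0"   # the exclusive-open hardware device being shared
        rate 48000
    }
}
```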

[–] [email protected] -1 points 1 year ago* (last edited 1 year ago) (1 children)

> nowadays

What? Nowadays? Do you live in 2005?

~~To be fair, V4L2 sometimes needs additional processing to allow multiple processes to use the same webcam at the same time. At least for applications that use libv4l, because I've seen mentions that libv4l for some reason checks that the camera is not in use.~~

Reading the fucking manual suggests that V4L2 is totally fine with multiple programs using the same webcam without any workarounds; it's just that only one program can set the resolution and other parameters.

EDIT: found mentions of dmix from 2004. Will I find a mention from 2003 to finish on a round number? Also, hardware mixing has been in ALSA since its creation, but it required hardware support (thanks, cap).

[–] [email protected] 1 points 1 year ago (1 children)

I live in a time where I don't need to edit config files by hand to allow using multiple applications with the same audio output, since I use a sound server. If you're willing to do it by hand, then by all means continue. Though it does seem that ALSA has had support for automatically setting up dmix since 2005, after PulseAudio was released.

I also don't know if resampling and the like is automatically handled when using dmix, but perhaps you can tell me that, since it sounds like you have experience with it?

> Reading the fucking manual suggests that [..]

How about we keep a good fucking tone. Yes, that's great. However, my experience is that programs all want to set those properties, with no way to disable that, so in practice it doesn't really matter.

Yeah, as you mention, hardware mixing used to be an option, but AFAIK hardware generally hasn't supported it for a long time.

Another reason to use Pipewire is to enable sandboxed access to multimedia devices, for use with things like Flatpak or Snap.
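
With the portal route a sandboxed app never touches the devices directly. And if an app needs the Pipewire socket itself, you can punch just that one thing through (a sketch; the app ID is a placeholder):

```sh
# Expose only the user's Pipewire socket, not the raw /dev nodes
flatpak override --user --filesystem=xdg-run/pipewire-0 com.example.App
```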

[–] [email protected] 1 points 1 year ago

> I live in a time where I don't need to edit config files by hand to allow using multiple applications with the same audio output

You don't need to. It just works out of the box.

> after PulseAudio was released.

And after JACK. And PulseAudio development started after JACK was released.

> I also don't know if resampling and the like is automatically handled when using dmix, but perhaps you can tell me that, since it sounds like you have experience with it?

Either by dmix or by libalsa, since I never had issues with the sample rate.

> How about we keep a good fucking tone.

Not to offend you; I'm just saying that reading the manual is one of the best ways to get information.

> However, my experience is that programs all want to set those properties, with no way to disable that, so in practice it doesn't really matter.

For some reason. I don't remember having such an issue, but then I also can't say for sure I've ever used the same webcam in two applications; I think I did use a v4l2loopback "webcam" in VLC and Chromium at the same time, though.

> Another reason to use Pipewire is to enable sandboxed access to multimedia devices, for use with things like Flatpak or Snap.

Well, /dev/videoX can be forwarded into the sandbox. Snap and Flatpak are not designed to be sandboxes.
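
For what it's worth, the granularity there is coarse; as far as I know Flatpak has no flag for a single /dev/videoX, so forwarding means exposing device nodes wholesale (app ID again a placeholder):

```sh
# --device=all exposes the host's device nodes, /dev/video* included
flatpak override --user --device=all com.example.App
```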

Sorry I forgot to reply. Better late than never.