Adonnen

joined 1 year ago
[–] [email protected] 3 points 2 months ago

Was looking for this comment. Donate your plasma to KDE!

[–] [email protected] 2 points 2 months ago (1 children)
  1. I'd be fine with any; I'm trying Fedora, or maybe Debian. But I'd rather set up networking at the QEMU/libvirt level so the VM only has access to what I want it to (see the sketch below this list).
  2. I don't know how it would work, but I can create a new device ID and make a new WireGuard conf file. I don't see why this wouldn't work with any other conf/interface on my host.
  3. I want this to be physical router agnostic, as the host is a laptop. Only the vpn and host should be exposed to the VM.
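
For point 1, a minimal sketch of the isolated libvirt network I have in mind (the name, bridge, and addresses are placeholders):

```bash
# An "isolated" libvirt network: no <forward> element, so guests can reach the
# host (and each other) but have no route to the outside world on their own.
cat > vpn-only.xml <<'EOF'
<network>
  <name>vpn-only</name>
  <bridge name='virbr1'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.10' end='192.168.100.50'/>
    </dhcp>
  </ip>
</network>
EOF
virsh net-define vpn-only.xml && virsh net-start vpn-only
```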
 

I am trying to create a KVM/QEMU/virt-manager VM without exposing my IP/internet connection to it. I pay for a VPN subscription, and I typically access it through WireGuard configs that integrate with my distro (Fedora 40 Workstation) and my DE's VPN menus. From my understanding, as I have them set up now, I can enable one of these configurations in my settings and all of my traffic is routed through the VPN, except for my local network.

I want this VM guest to have all of its traffic sent to the VPN as well, with the exception of some connection between it and the host, so I could still access it from the host for utilities like ssh.

Is it possible to achieve this? When I looked online, it seemed to require some CLI configuration of IP routes, and I didn't feel confident making changes I didn't understand, as I want to make sure it is impossible for traffic to leak; the guest just shouldn't have any access to my normal network. If my VPN is disabled on the host, then it simply shouldn't be able to access the internet.
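
For reference, this is the sort of host-side rule set I keep seeing suggested (untested; virbr1, wg0, and the subnet are placeholders for my actual interfaces), and it's exactly the kind of change I don't want to get wrong:

```bash
# Only forward the guest bridge out through the WireGuard interface, so the
# guest has no internet at all if the VPN is down. Guest->host SSH is
# unaffected because that traffic goes to the host itself and isn't forwarded.
iptables -A FORWARD -i virbr1 -o wg0 -j ACCEPT
iptables -A FORWARD -i virbr1 -j DROP
iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o wg0 -j MASQUERADE
```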

[–] [email protected] 2 points 3 months ago

For the tablet? I'm considering a Surface keyboard or a cheaper alternative, but I would usually be using it for handwritten notes and other tasks where I wouldn't use the keyboard. It would really be most useful during initial setup. I would still need to be able to unlock it easily when the keyboard is removed.

 

Hello. I recently acquired a Surface Go (1st gen, 4 GB RAM, 64 GB eMMC) and installed Fedora Workstation (and Phosh as a second DE). I do not have a keyboard for this device, so I usually have to use the on-screen keyboard. Entering a sufficiently secure password whenever I wake it from sleep or need elevated permissions/sudo is not practical, but I don't think a 6-8 digit numerical PIN is sufficient.

The Surface supports Windows Hello, but neither the vanilla nor the Linux-Surface kernel currently supports the IR camera. On my main laptop, I use a fingerprint sensor. I must use my good password to decrypt the drive (though this is bypassed by TPM) and unlock the keychain on first boot or after logging out, but afterwards, I can use my fingerprint to unlock from sleep, run sudo commands, and elevate my permissions.

It seems like there are PAM modules for smart keys and TOTP 2FA, though the latter is more cumbersome, and I don't know if I can authenticate FIDO or U2F from my phone over Bluetooth. I asked on the Linux-Surface Matrix channel, and someone suggested KDE Connect/GSConnect, which allows running commands, but I would want something I could do near-instantly, either with a prompt or a home-screen shortcut plus smartphone biometrics. I also want to be able to authenticate while logged in, i.e. for sudo, not just for unlocking the lock screen.

I am not an expert, and security is not something I really want to go in blind on. Does anyone have experience, ideas, guidance or an up-to-date tutorial? I feel this is an acceptable compromise between usability and security, and it would make using it casually much easier.
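
For the smart-key route, the PAM side seems like it would come down to something like this (untested; assumes the pam-u2f package and a FIDO/U2F key I'd still have to get):

```bash
# Untested sketch of the pam_u2f route: register the key once, then allow it for sudo.
mkdir -p ~/.config/Yubico
pamu2fcfg > ~/.config/Yubico/u2f_keys      # pamu2fcfg comes from the pam-u2f package
# Then add near the top of /etc/pam.d/sudo:
#   auth    sufficient    pam_u2f.so cue
```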

[–] [email protected] 4 points 4 months ago

Yup. I checked their webpage. It might help battery life, but I'll try vanilla first. Unfortunately, no dice with the secondary-display thing: with RDP, the hardware cursor won't send, and I can't find a way to use RDP over a USB-C cable.

 

Hello, all. I just got handed down a Surface Go (1st gen, 4 GB RAM), and I want to use it as a note-taking machine, document reader, and secondary display for my primary laptop (Framework Intel 12th gen running Fedora GNOME).

I have a pen but no keyboard, so any config will be done with a USB keyboard, but usage will be like a tablet.

  1. I have heard I should install GNOME on a tablet. I am generally okay with the 'opinionated' design of GNOME, but does anyone know what performance to expect? Would I be better off with a lightweight distro and DE?

  2. What apps can be recommended for stylus note-taking? I would prefer SVG output and a simple workflow to export notes to my main machine, where I can embed them in Markdown notebooks.

  3. Finally, the secondary-display usage. Is this feasible? I know GNOME has RDP support, but my uni's wifi makes that very difficult, and I'd prefer a wired connection if possible (rough sketch of what I've pieced together at the end of this post). I don't need the stylus to work.

BONUS: If anyone has experience with the proprietary Surface Connect port, can it be adapted to USB-C on Linux, so that I can transfer power and >= 5 Gbps of data? I see USB-C adapters online, but they don't mention data, only power delivery.
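
For #3, here's the untested shape of what I've pieced together so far; I'm assuming a wired USB/Ethernet link between the two machines and gnome-remote-desktop's grdctl tool, and the interface name is a placeholder:

```bash
# Untested sketch for #3: give the Surface an address over a wired USB/Ethernet
# link and run GNOME's built-in RDP over it. "enp0s20u2" is a placeholder for
# whatever the USB NIC shows up as on the laptop.
nmcli connection add type ethernet ifname enp0s20u2 con-name surface-link ipv4.method shared
grdctl rdp enable                                    # on the laptop being shared (gnome-remote-desktop)
grdctl rdp set-credentials someuser somepassword     # placeholder credentials
```

An RDP client on the Surface would then point at the laptop's address on that link; whether it can extend rather than just mirror is something I'd still need to figure out.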

[–] [email protected] 3 points 6 months ago (1 children)

Flare isn't feature complete but you can run it in the background for all notifications.

[–] [email protected] 2 points 6 months ago

I'd imagine the DRM would ruin that plan. No HD streaming.

[–] [email protected] 2 points 7 months ago (1 children)

I think a server is for streaming the audio to different devices. They don't want to stream from the phone to the player (or the other way around). They just want to be able to browse the library and control playback from their phone.

 

An acquaintance of mine has a CD collection and wants to rip it. They don't want to stream it over a server but rather store it, say, on a hard drive connected directly to their speakers/receiver.

While they **don't want to stream** it wirelessly to/from their phone, they do want to control selection/playback.

Kind of like a remote controlled jukebox or, well, a really big CD player.

I'm thinking there's probably some Raspberry Pi project that plays an on-device music library and has a remote-control plug-in over LAN. I'd also like there to be a backup option, like a Pi GUI, so they could see their library on the TV.

I'm envisioning an interface similar to retro game frontends or Kodi.

Does this exist?
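
To make it concrete, the kind of setup I'm imagining looks roughly like MPD on a Pi; the paths and names below are made up, and I haven't tried this:

```bash
# Imagined setup: MPD plays the ripped library out of the Pi's audio output,
# and any MPD client app on a phone can browse/control it over the LAN.
sudo apt install mpd mpc
sudo tee -a /etc/mpd.conf <<'EOF'
music_directory   "/mnt/cd-rips"
bind_to_address   "0.0.0.0"
audio_output {
    type  "alsa"
    name  "Receiver"
}
EOF
sudo systemctl restart mpd
```

Something like that would cover the phone-as-remote part; the on-TV library GUI would still need a separate frontend.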

[–] [email protected] 4 points 11 months ago (1 children)

Looks nice, I'll check it out if I have to use macOS again.

[–] [email protected] 86 points 11 months ago (11 children)

I much prefer Windows to macOS. The fact that macOS is missing decent tiling is a nonstarter; it's too inflexible for my workflow.

And sure, Windows can be maddeningly inconsistent, but what really destroys the experience is the constant ensh*ttification. I know a lot of people here hate everything about Windows, but for me, it only sucks because Microsoft designs it to suck.

Not only are there ads and (some) first-party lock-in, I can't trust that they will keep offering updates without paywalling features, restricting more functionality, or inserting stuff like AI to mess up my workflow.

I used to think reliability was just about stability and bugginess, but now I think it is about trust as well.

[–] [email protected] 1 points 11 months ago

I'm glad you do. I want to start contributing and donating too. But I do think the definition of "freeloader" needs a bit of adjustment for FOSS software.

[–] [email protected] 1 points 11 months ago

I get your point, but this definition applies to all users of FOSS software who do not actively contribute to its development. Purpose is a consideration here; I am freeloading if I use Netflix's service through loopholes or piracy when it is intended for paying customers, but am I freeloading if I, a non-developer and a student not in a position to donate, use LibreOffice? By this definition, I clearly am a freeloader. But it is clearly intended for use by the general public.

For RHEL, there is more ambiguity, because although they charge for it, it is still based on an open-source ecosystem. I understand how using RHEL binaries without becoming a paying customer could be seen as freeloading, but the crucial difference is the intent of an open ecosystem and standard. RHEL establishes itself as a standard, and that means its work will be used, not just contributed to. By closing it off, they are cutting off that standard.

Compare this to standards like USB or audio codecs. A powerful company or consortium may create an open standard and use it in their paid offerings, but others using it aren't freeloaders, even if they compete with said offerings. They're intended (or expected) users.

Sorry if I'm not making much sense. I'm only commenting because I find this interesting, not because I want to keyboard-war angrily.

[–] [email protected] 8 points 11 months ago* (last edited 11 months ago) (6 children)

> So basically all those who used CentOS and did not contribute anything even though CentOS cried for contributions for years until Red Hat eventually bought them? (= Most notably Oracle.)

Not contributing doesn't necessarily make someone a freeloader. Users have no obligation; that's the point of open source. Only building off of open code and then closing yours off is freeloading.

Oracle and others use the source code and publish their distro's source. Oracle not contributing is jerky, sure, but for them to be freeloaders they would have to use enterprise Linux as the basis for a paywalled, proprietary, or restricted-source OS. Correct me if I'm wrong, but their business model is using Oracle Linux in their cloud offerings.

> Red Hat is still the biggest FOSS contributor. (I use openSUSE and SteamOS, btw, so I'm not even a RH product user.)

Hell, I use Fedora, so anything I contribute to is upstream of RHEL. I'm not saying RH sucks. There are a lot of great people they employ, and their business has been a huge positive for FOSS. But those (great) achievements were and are premised on community collaboration, and it's more than fair to raise a stink about it.

> It's really not a loophole.

You're right about the GPL. I have nothing against paid software. I was more describing the broader enterprise Linux ecosystem. That is to say, RHEL's success is based on making it an open standard. The greater community can contribute either directly to the upstream or to the application ecosystem, with the understanding that their work is applicable to the FOSS community. Closing the downstream is a loophole out of this system where they still get to profit. It's a bait and switch.

> Simply reusing Red Hat's source RPMs isn't an open ecosystem. All the EL downstreams finally collaborating is.

"Ecosystem" wasn't referring to the existence of clone distros but the development and adoption of enterprise linux they enable(d). The ecosystem is not only those directly contributing to enterprise linux but the developers targeting enterprise linux and the (IT/CS) user base familiarizing itself with enterprise linux. The market for a RHEL clone is not the market for RHEL enterprise solutions. As I said above, free availability of clones gets people into the ecosystem, and on the corporate end, as long as RH's offerings aren't enshittified, Red Hat converts these people into customers. It should be a win-win, but short-term profit maximization will hurt its trust and future growth.

236
RHEL 10 Leaked (lemmy.world)
submitted 11 months ago* (last edited 11 months ago) by [email protected] to c/[email protected]
 

I'm trying to connect a university iPad (Air, USB 3 Type-C, not Thunderbolt or Lightning) to my laptop (Framework laptop, Intel 12th gen) running Fedora Workstation 39. On Windows, I used a nifty app called Duet Display. I just used a USB-C cable to plug the iPad into the laptop, launched the app on both devices, and Windows would see an external monitor. Scaling and resolution worked fine, and latency wasn't perfect but was more than enough for a secondary display. With settings tweaked, artifacting was minimal.

I know there are remote desktop protocols and apps, but I really want to avoid a wireless connection. Remote desktop over the internet is wasteful and unreliable, and as for the local network, my university has strict controls on its wifi and I cannot reliably connect my devices. Even if I could, the reliability and latency would still be bad.

Duet over USB always worked and didn't rely on a wireless connection, but it's also closed source and Windows/Mac only.

From what I can see online, the best way for an iPad to display content from another device is going to be a remote desktop protocol, as it does not directly accept video signals like HDMI-in. The iPad can also connect to a network over USB-C/Ethernet.

It seems the best approach would be to create a local network on my PC, connect my iPad to it with the cable, and then use a remote desktop client on the iPad.

Is this a good approach? If so, how exactly would I make the USB connection share a local network connection?

Note I only want to connect the iPad to the laptop. I understand if the iPad will not connect to wifi while connected to Ethernet, and I don't need to share the internet connection with the iPad. My computer still needs to be connected to wifi/Ethernet to access my university network, however.
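
If it helps clarify what I'm asking, this is the rough, untested idea, assuming the cable (or a pair of USB-C Ethernet adapters) shows up as a network interface on the laptop; the interface name is a placeholder:

```bash
# Rough idea only: let NetworkManager hand the iPad an address over the wired
# link, then point an RDP client on the iPad at the laptop's address on that
# link (NetworkManager's shared method defaults to 10.42.0.1/24 on the host side).
nmcli connection add type ethernet ifname enp0s20u3 con-name ipad-link ipv4.method shared
nmcli connection up ipad-link
```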
