Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub page here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues with the community? Report them using the report flag.
Questions? DM the mods!
And in about 2 years you'll switch to LXD/Incus. :P
Incus looks cool. Have you virtualised a firewall on it? Is it as flexible as proxmox in terms of hardware passthrough options?
I find zero mentions online of opnsense on incus. 🤔
Yes, it does run, but BSD-based VMs running on Linux have their quirks, as usual. This might be what you're looking for: https://discuss.linuxcontainers.org/t/run-freebsd-13-1-opnsense-22-7-pfsense-2-7-0-and-newer-under-lxd-vm/15799
Since you want to run a firewall/router, you can ignore LXD's networking configuration and use your opnsense to assign addresses and whatnot to your other containers. You can create whatever bridges / VLAN-based interfaces you need on your base system and then assign them to profiles/containers/VMs. For example, create a `cbr0` network bridge using `systemd-networkd` and then run `lxc profile device add default eth0 nic nictype=bridged parent=cbr0 name=eth0`. This will use `cbr0` as the default bridge for all machines, and LXD won't provide any addressing or touch the network; it will just create an `eth0` interface on those machines, attached to the bridge. Then your opnsense can sit on the same bridge and do DHCP, routing, etc. Obviously you can also pass through entire PCI devices to VMs and containers if required.

When you're searching around for help, search for "LXD" instead of "Incus", as it tends to give you better results. Not sure if you're aware, but LXD was the original project, run by Canonical; it was recently forked into Incus (maintained by the same people who created LXD at Canonical) to keep the project open under the Linux Containers initiative.
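To make that concrete, here's a minimal sketch of the systemd-networkd side of such a `cbr0` bridge (file names and option choices are illustrative, not a definitive setup):

```ini
# /etc/systemd/network/cbr0.netdev -- create the bridge device
[NetDev]
Name=cbr0
Kind=bridge

# /etc/systemd/network/cbr0.network -- bring the bridge up with no addressing;
# opnsense on the same bridge handles DHCP/routing for the guests
[Match]
Name=cbr0

[Network]
LinkLocalAddressing=no
ConfigureWithoutCarrier=yes
```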
I have another question, if you don't mind: I have a debian/incus+opnsense setup now, created bridges for my NICs with systemd-networkd and attached the bridges to the VM like you described. I have the host configured with DHCP on the LAN bridge and ideally (correct me if I'm wrong, please), I'd like the host to not touch the WAN bridge at all (other than creating it and hooking it up to the NIC).
Here's the problem: if I don't configure the bridge on the host with either DHCP or a static IP, the opnsense VM also doesn't receive an IP on that interface. I have a `br0.netdev` to set up the bridge, a `br0.network` to connect the bridge to the NIC, and a `wan.network` to assign a static IP on `br0`; otherwise nothing works. (While I'm working on this, I have the WAN port connected to my old LAN, if it makes a difference.)

My question is: is my expectation wrong, or my setup? Am I mistaken that the host shouldn't be configured on the WAN interface? Can I solve this by passing the PCI device to the VM, and what's the best practice here?
Thank you for taking a look! 😊
Passing the PCI network card/device to the VM would make things more secure, as the host wouldn't be configuring or touching the card exposed to the WAN. Nevertheless, passing the card to the VM would make things less flexible, and it isn't required.
I think there's something wrong with your setup. One of my machines has a `br0` and a setup like yours. `10-enp5s0.network` is the physical "WAN" interface:

Now, I have a profile for "bridged" containers:

And one of my VMs with this profile:

Inside the VM, the network is configured like this:

Can you check if your config is done like this? If so, it should work.
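For anyone following along, a setup along those lines might look roughly like this (a sketch; interface and file names are assumptions, not the exact configs):

```ini
# /etc/systemd/network/br0.netdev -- the bridge itself
[NetDev]
Name=br0
Kind=bridge

# /etc/systemd/network/10-enp5s0.network -- enslave the physical WAN NIC
[Match]
Name=enp5s0

[Network]
Bridge=br0
```

The "bridged" profile can then be created with something like `lxc profile device add <profile> eth0 nic nictype=bridged parent=br0 name=eth0`, and inside the VM the `eth0` interface is simply configured for DHCP.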
My config was more or less identical to yours, and that removed some doubt and let me focus on the right part: without a network config on `br0`, the host isn't bringing it up on boot. I thought it had something to do with the interface having an IP, but it turns out the following works as well:

Thank you once again!
Oh, now I remembered that there's `ActivationPolicy=` under `[Link]` that can be used to control what happens to the interface. At some point I even reported a bug on that feature and VLANs.

I'm not so sure it's about the interface having an IP... I believe your current `LinkLocalAddressing=ipv4` is forcing the interface up, since it has to assign a link-local IP. Maybe you can set `LinkLocalAddressing=no` and `ActivationPolicy=always-up` and see how it goes.

You know your stuff, man! It's exactly as you say. 🙏
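For reference, that confirmed combination as a systemd-networkd sketch (file name assumed):

```ini
# /etc/systemd/network/wan.network -- keep br0 up without any addressing
[Match]
Name=br0

[Link]
# Bring the interface up and keep it up, even with no address assigned
ActivationPolicy=always-up

[Network]
LinkLocalAddressing=no
```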
You're welcome.