this post was submitted on 03 Jul 2024
16 points (90.0% liked)

Selfhosted


Currently, I have two VPN clients on most of my devices:

  • One for connecting to a LAN
  • One commercial VPN for privacy reasons

I usually stay connected to the commercial VPN on all my devices, unless I need to access something on that LAN.

This setup has a few drawbacks:

  • Most commercial VPN providers limit the number of simultaneously connected clients
  • I can either obfuscate my IP or access resources on that LAN (including my Pi-hole for custom DNS-based blocking), but not both at the same time

One possible solution would be to route all internet traffic through a VPN client on the router in the LAN, while still keeping at least a port open for the VPN Docker container that allows access to the LAN. But the ability to split tunnel around that would be pretty hard to achieve.

I want to be able to connect to a VPN host container on the LAN, which in turn routes all internet traffic through another VPN client container while allowing LAN traffic, but still be able to split tunnel specific applications on my Android/Linux/iOS devices.

Basically this:

   +---------------------+ internet traffic   +--------------------+           
   |                     | remote LAN traffic |                    |           
   | Client              |------------------->|VPN Host Container  |           
   | (Android/iOS/Linux) |                    |in remote LAN       |           
   |                     |                    |                    |           
   +---------------------+                    +--------------------+           
                      |                         |     |                        
                      |       remote LAN traffic|     | internet traffic       
split tunneled traffic|                 |--------     |                        
                      |                 |             v                        
                      v                 |         +---------------------------+
  +---------------------+               v         |                           |
  | regular LAN or      |     +-----------+       | VPN Client Container      |
  | internet connection |     |remote LAN |       | connects to commercial VPN|
  +---------------------+     +-----------+       |                           |
                                                  |                           |
                                                  +---------------------------+

Any recommendations on how to achieve this, especially considering client apps for Android and iOS with the ability to split tunnel per application?

Update:

~~Got it by following this guide.~~

Ended up modifying this setup to have better control over potential IP leakage.

top 15 comments
[–] undefined@links.hackliberty.org 2 points 2 days ago (1 children)

I use Tailscale to do this. I install the software on everything I can, but for resources on the LAN that don’t have Tailscale running I use its Subnet Router feature to masquerade the traffic and connect to those clients.
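
The Subnet Router piece is mostly one command on a LAN machine (a sketch; 192.168.1.0/24 is a placeholder for the actual LAN range, and the route still has to be approved in the Tailscale admin console):

# on the LAN machine acting as Subnet Router (IP forwarding must be enabled)
tailscale up --advertise-routes=192.168.1.0/24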

As for the commercial VPN, it’s a bit more involved. I have a few Exit Nodes (VPS) that take incoming Tailscale traffic destined to the Internet and re-route it via the commercial VPN’s WireGuard network interface.

This was a huge challenge for me (lots of iptables, ip6tables rules) but I have it down to a reproducible script I can provide if you’d like an example.

My next goal is to containerize the two VPS servers into one with Docker. One annoying thing about Tailscale is that you can't have multiple Nodes running on the same machine (hence my temporary two-VPS solution).

Note: capitalized terms are Tailscale feature names

[–] Emotet@slrpnk.net 2 points 2 days ago (1 children)

I've been tempted by Tailscale a few times before, but I don't want to depend on their proprietary clients and control server. The latter could be solved by selfhosting Headscale, but at this point I figure that going for a basic Wireguard setup is probably easier to maintain.
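
(For what it's worth, pointing clients at a self-hosted control server is just a flag away; the URL below is a hypothetical Headscale instance:

tailscale up --login-server=https://headscale.example.com

It's maintaining the control server itself that's the extra work.)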

I'd like to have a look at your rules setup. I'm especially curious if/how you approached the case of the commercial VPN WireGuard tunnel(s) on your exit node(s) going down, which depending on the setup may send requests meant for the commercial VPN out through your VPS exit node directly.

Personally, I ended up with two Wireguard containers in the target LAN, a wireguard-server and a wireguard-client container.

They both share a docker network with a specific subnet {DOCKER_SUBNET} and wireguard-client has a static IP {WG_CLIENT_IP} in that subnet.
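
A rough sketch of that wiring with plain docker commands (the linuxserver/wireguard image, the 172.20.0.0/24 subnet and the 172.20.0.50 address are stand-ins for {DOCKER_SUBNET} and {WG_CLIENT_IP}, not necessarily what I used):

# user-defined network so the client can get a static IP
docker network create --subnet 172.20.0.0/24 wg-net

# wireguard-client gets the static IP the server's routes will point at
docker run -d --name wireguard-client \
  --network wg-net --ip 172.20.0.50 \
  --cap-add NET_ADMIN \
  --sysctl net.ipv4.conf.all.src_valid_mark=1 \
  -v "$PWD/wg-client:/config" \
  lscr.io/linuxserver/wireguard

# wireguard-server publishes the WireGuard port for incoming clients
docker run -d --name wireguard-server \
  --network wg-net \
  --cap-add NET_ADMIN \
  -p 51820:51820/udp \
  -v "$PWD/wg-server:/config" \
  lscr.io/linuxserver/wireguard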


The wireguard-client has a slightly altered standard config to establish a tunnel to an external endpoint, a commercial VPN in this case:

[Interface]
PrivateKey = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Address = XXXXXXXXXXXXXXXXXXX

PostUp = iptables -t nat -A POSTROUTING -o wg+ -j MASQUERADE
PreDown = iptables -t nat -D POSTROUTING -o wg+ -j MASQUERADE

PostUp = iptables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT

PreDown = iptables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT

[Peer]
PublicKey = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
AllowedIPs = 0.0.0.0/0,::0/0
Endpoint = XXXXXXXXXXXXXXXXXXXX

where

PostUp = iptables -t nat -A POSTROUTING -o wg+ -j MASQUERADE
PreDown = iptables -t nat -D POSTROUTING -o wg+ -j MASQUERADE

are responsible for properly routing traffic coming in from outside the container and

PostUp = iptables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT

PreDown = iptables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT

is your standard kill-switch meant to block traffic going out of any network interface except the tunnel interface in the event of the tunnel going down.
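
A quick way to sanity-check this from outside the container (assuming curl is available in the image) is to compare the apparent exit IP and confirm the REJECT rule is actually installed:

# should print the commercial VPN's IP, not the LAN's WAN IP
docker exec wireguard-client curl -s https://ifconfig.me

# the kill-switch REJECT rule should appear in the OUTPUT chain
docker exec wireguard-client iptables -S OUTPUT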


The wireguard-server container has these PostUps and PostDowns:

PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

default rules that come with the template and allow for routing packets through the server tunnel

PostUp = wg set wg0 fwmark 51820

the traffic going out of the tunnel interface gets marked

PostUp = ip -4 route add 0.0.0.0/0 via {WG_CLIENT_IP} table 51820

adds a default route to routing table 51820 that sends all packets through the wireguard-client container

PostUp = ip -4 rule add not fwmark 51820 table 51820

packets not marked should use routing table 51820

PostUp = ip -4 rule add table main suppress_prefixlength 0

respect more specific routes manually added to the main routing table

PostUp = ip route add {LAN_SUBNET} via {DOCKER_SUBNET_GATEWAY_IP} dev eth0

route packets with a destination in {LAN_SUBNET} to the actual {LAN_SUBNET} of the host

PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE; ip route del {LAN_SUBNET} via {DOCKER_SUBNET_GATEWAY_IP} dev eth0

delete those rules after the tunnel goes down

PostUp = iptables -I OUTPUT ! -o %i -m mark ! --mark 0xca6c -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -I OUTPUT ! -o %i -m mark ! --mark 0xca6c -m addrtype ! --dst-type LOCAL -j REJECT
PreDown = iptables -D OUTPUT ! -o %i -m mark ! --mark 0xca6c -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -D OUTPUT ! -o %i -m mark ! --mark 0xca6c -m addrtype ! --dst-type LOCAL -j REJECT

Basically the same kill-switch as in wireguard-client, but with the mark substituted manually (0xca6c is just 51820 in hex, the mark set above), since the command it relied on didn't work in my server container for some reason, and AFAIK the mark doesn't change anyway.
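
Assembled, the wireguard-server's [Interface] section looks roughly like this (placeholders as above; the ListenPort value is an assumption, and the [Peer] sections for the individual clients are omitted):

[Interface]
PrivateKey = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Address = XXXXXXXXXXXXXXXXXXX
ListenPort = 51820

PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostUp = wg set wg0 fwmark 51820
PostUp = ip -4 route add 0.0.0.0/0 via {WG_CLIENT_IP} table 51820
PostUp = ip -4 rule add not fwmark 51820 table 51820
PostUp = ip -4 rule add table main suppress_prefixlength 0
PostUp = ip route add {LAN_SUBNET} via {DOCKER_SUBNET_GATEWAY_IP} dev eth0
PostUp = iptables -I OUTPUT ! -o %i -m mark ! --mark 0xca6c -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -I OUTPUT ! -o %i -m mark ! --mark 0xca6c -m addrtype ! --dst-type LOCAL -j REJECT
PreDown = iptables -D OUTPUT ! -o %i -m mark ! --mark 0xca6c -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -D OUTPUT ! -o %i -m mark ! --mark 0xca6c -m addrtype ! --dst-type LOCAL -j REJECT
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE; ip route del {LAN_SUBNET} via {DOCKER_SUBNET_GATEWAY_IP} dev eth0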


Now do I actually need the kill-switch in wireguard-server? Is the kill-switch in wireguard-client sufficient? I'm not even sure anymore.

[–] undefined@links.hackliberty.org 2 points 1 day ago (1 children)

Your setup looks more advanced than mine, and I'd really like to do something similar. I'm just going to copy/paste what I have with some addresses replaced by:

  • VPN_IPV4_CLIENT_ADDRESS: the WireGuard IPv4 address of the VPN provider's interface (e.g. 172.0.0.1)
  • VPN_IPV6_CLIENT_ADDRESS: the WireGuard IPv6 address of the VPN provider's interface
  • VPN_IPV6_CLIENT_ADDRESS_PLUS_ONE: the next IPv6 address after VPN_IPV6_CLIENT_ADDRESS. I can't remember the logic behind this, but I'd found an article online explaining it.
  • WG_INTERFACE: the WireGuard network interface name (e.g. wg0) for the commercial VPN

I left 100.64.0.0/10, fd7a:115c:a1e0::/96 in my example because those are the networks Tailscale traffic will come from. I also left tailscale0 because that is the typical interface. Obviously these can be changed to support any network.

I'm using Alpine Linux so I don't have the PostUp, PostDown, etc. in my WireGuard configuration. I'm not using wg-quick at all.

Before I hit paste, one thing I'll say is I haven't addressed the "kill switch" yet. But so far (~4 months) when the VPN provider's tunnel goes down nothing leaks. 🤞

# enable packet forwarding so this node can route traffic for the tailnet
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv6.conf.all.forwarding=1

sysctl -p

# create the WireGuard interface for the commercial VPN and assign its addresses
ip link add dev WG_INTERFACE type wireguard

ip addr add VPN_IPV4_CLIENT_ADDRESS/32 dev WG_INTERFACE
ip -6 addr add VPN_IPV6_CLIENT_ADDRESS/127 dev WG_INTERFACE

wg setconf WG_INTERFACE /etc/wireguard/WG_INTERFACE.conf
ip link set up dev WG_INTERFACE

# masquerade traffic leaving through the commercial VPN tunnel
iptables -t nat -A POSTROUTING -o WG_INTERFACE -j MASQUERADE
iptables -t nat -A POSTROUTING -o WG_INTERFACE -s 100.64.0.0/10 -j MASQUERADE

ip6tables -t nat -A POSTROUTING -o WG_INTERFACE -j MASQUERADE
ip6tables -t nat -A POSTROUTING -o WG_INTERFACE -s fd7a:115c:a1e0::/96 -j MASQUERADE

# allow forwarding between the Tailscale interface and the VPN tunnel
iptables -A FORWARD -i WG_INTERFACE -o tailscale0 -j ACCEPT
iptables -A FORWARD -i tailscale0 -o WG_INTERFACE -j ACCEPT
iptables -A FORWARD -i WG_INTERFACE -o tailscale0 -m state --state RELATED,ESTABLISHED -j ACCEPT

ip6tables -A FORWARD -i WG_INTERFACE -o tailscale0 -j ACCEPT
ip6tables -A FORWARD -i tailscale0 -o WG_INTERFACE -j ACCEPT
ip6tables -A FORWARD -i WG_INTERFACE -o tailscale0 -m state --state RELATED,ESTABLISHED -j ACCEPT

# register the custom routing tables (rt_tables is a file, so only create its parent directory)
mkdir -p /etc/iproute2

echo "70 wg" >> /etc/iproute2/rt_tables
echo "80 tailscale" >> /etc/iproute2/rt_tables

# policy routing: traffic arriving from the tailnet defaults out via the commercial VPN
ip rule add from 100.64.0.0/10 table tailscale
ip route add default via VPN_IPV4_CLIENT_ADDRESS dev WG_INTERFACE table tailscale

ip -6 rule add from fd7a:115c:a1e0::/96 table tailscale
ip -6 route add default via VPN_IPV6_CLIENT_ADDRESS_PLUS_ONE dev WG_INTERFACE table tailscale

ip rule add from VPN_IPV4_CLIENT_ADDRESS/32 table wg
ip route add default via VPN_IPV4_CLIENT_ADDRESS dev WG_INTERFACE table wg

service tailscale start
rc-update add tailscale default

# accept DNS queries coming in from the tailnet
iptables -A INPUT -i tailscale0 -p udp --dport 53 -j ACCEPT
iptables -A INPUT -i tailscale0 -p tcp --dport 53 -j ACCEPT

ip6tables -A INPUT -i tailscale0 -p udp --dport 53 -j ACCEPT
ip6tables -A INPUT -i tailscale0 -p tcp --dport 53 -j ACCEPT

service unbound start
rc-update add unbound default

# persist the firewall rules so the iptables/ip6tables services restore them on boot
/sbin/iptables-save > /etc/iptables/rules-save
/sbin/ip6tables-save > /etc/ip6tables/rules-save

tailscale up --accept-dns=false --accept-routes --advertise-exit-node
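
A device in the tailnet then opts into this exit node, e.g. (hypothetical node name):

tailscale set --exit-node=vps-exit-1 --exit-node-allow-lan-access=true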
[–] undefined@links.hackliberty.org 2 points 1 day ago* (last edited 1 day ago)

Forgot to mention that I run a DNS server for blocking too. When using Tailscale I’ve found it’s important to use their resolver as upstream, otherwise App Connectors won’t work (the VPN provider tunnel on each VPS routes to a different country, so DNS wasn’t in sync). This kind of sucks, but I make do with it after a month or two of App Connectors being very iffy.

[–] tootnbuns@lemmy.dbzer0.com 4 points 3 days ago (1 children)

I just read that tailscale and mullvad offer a joint service where traffic outside your tailnet always exits through mullvad

[–] Lifebandit666@feddit.uk 1 points 3 days ago (1 children)

My problem with this solution was that I signed in to Tailscale via my Google account, and I'd have to buy Mullvad through Tailscale, linking my Google account to the Mullvad account.

What I wanted to do was have my own Mullvad account and route Tailscale through it, but that wasn't possible, I had to have Tailscale manage Mullvad, which just didn't sit right with me.

[–] tootnbuns@lemmy.dbzer0.com 1 points 1 day ago

Yeah that also wouldn't sit right with me.

[–] hungover_pilot@lemmy.world 5 points 4 days ago (1 children)

I do something similar with OPNsense and policy-based routing. OPNsense acts as both a VPN client and server: the client interface connects out to a commercial VPN, and the server interface listens for incoming connections. Based on what I want to accomplish, I set up firewall rules that use policy-based routing to route incoming VPN traffic where it needs to go.

Regarding split tunneling on the client, the Android WireGuard app has the option to specify which traffic uses the tunnel on a per-application basis.

[–] Emotet@slrpnk.net 3 points 4 days ago

Oh, neat! Never noticed that option in the Wireguard app before. That's very helpful already. Regarding your opnsense setup:

I've dabbled in some (simple) routing before, but I'm far from anything one could call competent in that regard, and even if I read up properly before writing my own routes/rules, I probably still wouldn't trust that I hadn't forgotten something, e.g. to prevent IP/DNS leaks.

I'm mainly relying on Docker and was hoping for pointers on how to configure a WireGuard host container to route only internet traffic through another WireGuard client container.

I found this example, which is pretty close to my ideal setup. I'll read up on that.

[–] brownmustardminion@lemmy.ml 2 points 4 days ago (1 children)

I’ve been toying with this idea but with a mesh network, in my case nebula, after experiencing a similar frustration with limitations on most client devices when trying to connect to multiple VPNs.

One question I’ve been trying to answer is if routing all of these devices to a single vpn endpoint has any negative effects on privacy. Would cycling the IP randomly help to prevent trackers from putting together a profile of activity?

[–] coffeejoe@lemmy.dbzer0.com 2 points 4 days ago (1 children)

Your browser gives them enough information to profile you; they don’t really need your IP address.

[–] brownmustardminion@lemmy.ml 2 points 4 days ago

I guess what I'm getting at is that instead of tracing your activity to one browser or device, they can now more easily group multiple devices together, since they're all using the same VPN IP.

[–] jet@hackertalks.com 1 points 3 days ago (1 children)

Even after you get your ideal setup, with all your traffic traversing your network to a single host, you've bottlenecked the whole network to the speed of that single host.

Usually, devices in a network can talk to each other directly across the switch fabric without interfering with other traffic.

Say you have four devices A, B, C, D connected to the same switch on a GbE network, with each pair trying to send 1 Gb/s of traffic to the other. A↔B gets 1 GbE and C↔D gets 1 GbE, for a total concurrent throughput of 2 GbE.

In your model, since all traffic has to hit the central WireGuard node W first, you can only get 1 GbE concurrently.

[–] Emotet@slrpnk.net 2 points 3 days ago

Oh I'm fully aware. I personally don't care, but one could add a capable VPS and deploy the Wireguard Host Container + two Client Containers, one for the LAN and one for the commercial VPN (like so), if the internet connection of the LAN in question isn't sufficient.

[–] Decronym@lemmy.decronym.xyz 1 points 4 days ago* (last edited 1 day ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters    More Letters
DNS              Domain Name Service/System
IP               Internet Protocol
VPN              Virtual Private Network
VPS              Virtual Private Server (opposed to shared hosting)

4 acronyms in this thread; the most compressed thread commented on today has 7 acronyms.

[Thread #848 for this sub, first seen 3rd Jul 2024, 19:15] [FAQ] [Full list] [Contact] [Source code]