Tunnelling an individual user’s traffic through a remote server with Linux using GRE, NAT and iptables.

I had a need to route a particular process’ traffic through a remote server in order to be let in by third parties’ firewalls. The most flexible way to do so was to mark the traffic on a per-account basis. Fortunately, Linux allows you to do exactly that using the owner match in iptables. In my instance the user account is called external.ip.

iptables -t mangle -A OUTPUT -m owner --uid-owner external.ip -j MARK --set-mark 5

The above marks all the traffic originating from that account on the local server with a mark value of 5 (if you use higher values, keep in mind that iptables displays marks in hexadecimal, while iproute2 treats plain numbers as decimal unless prefixed with 0x).
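As a quick illustration of that discrepancy (plain shell, no root needed): a mark of 20 set with iptables is listed back as 0x14 in the counters output, while `ip rule add fwmark 20` takes the decimal form as-is.

```shell
# iptables prints fwmark values in hexadecimal; ip rule / ip route read
# plain numbers as decimal (a 0x prefix makes them explicitly hex).
mark=20
printf 'decimal %d = hex 0x%x\n' "$mark" "$mark"   # decimal 20 = hex 0x14
```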

In order to verify, run watch on the command below, then log in to the account, generate some traffic and watch the counters grow.

# iptables --list OUTPUT -t mangle -v -n 
Chain OUTPUT (policy ACCEPT 111M packets, 72G bytes)
 pkts bytes target     prot opt in     out     source               destination         
66938 4605K MARK       all  --  *      *             owner UID match 1050 MARK xset 0x5/0xffffffff 

Once packets are correctly marked, it is time to set up a tunnel. In my case I used GRE tunnelling, as it is quick and easy to set up. If it is meant to last longer than a couple of hours, I would suggest encrypting it and using racoon (IPsec) instead. In this setup both the client and the server need external IPs ($remote and $local in the example below). “p2p_local” and “p2p_remote” are the ends of the tunnel at our location and at the remote server respectively.
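For illustration, the variables used below could be filled in like this (all addresses and the interface name are made-up examples; substitute your own):

```shell
# Hypothetical example values -- replace with your own addresses.
iface=gre1                 # tunnel interface name
local=198.51.100.10        # our external IP
remote=203.0.113.20        # remote server's external IP
p2p_local=10.99.99.1       # tunnel end at our location
p2p_remote=10.99.99.2      # tunnel end at the remote server
echo "tunnel $iface: $local ($p2p_local) <-> $remote ($p2p_remote)"
```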


Run this on local machine:

ip tunnel add $iface mode gre remote $remote local $local ttl 255 dev eth0 
ifconfig $iface $p2p_local netmask 255.255.255.255 pointopoint $p2p_remote mtu 1400 up

and on the remote:

ip tunnel add $iface mode gre remote $local local $remote ttl 255 dev eth0 
ifconfig $iface $p2p_remote netmask 255.255.255.255 pointopoint $p2p_local mtu 1400 up

Keep in mind that the interface names can be different for you. The MTU may be set a little too small (I have stolen this value from a googled example), but it does not really matter.

Once that’s set check the tunnel is up:

# ip link show $iface
17: gre_something@eth0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1400 qdisc noqueue state UNKNOWN 
    link/gre peer

You should see at least LOWER_UP. If you don’t, try bringing the link up:

ip link set up dev $iface

on both boxes, then ping each end from the other.

In order to use the remote’s IP address you’ll need to set up NAT on it. This is easily done with a single iptables command. Note that matching on the input interface (-i) is not allowed in the POSTROUTING chain, so we match on the outgoing (external) interface instead:

iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

Don’t forget to enable ip forwarding:

echo 1 > /proc/sys/net/ipv4/ip_forward
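The echo above only lasts until reboot; to make forwarding permanent, most distributions read /etc/sysctl.conf (or files under /etc/sysctl.d/):

```
# /etc/sysctl.conf
net.ipv4.ip_forward = 1
```

Apply it without rebooting with sysctl -p.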

Now, when the remote receives a packet through the tunnel that is not addressed to itself, it will nicely translate the source into its own address. The final step is to route the previously marked packets through the tunnel. To do so we’re going to need a separate routing table and an RPDB rule directing packets marked with 5 to it.

ip rule add fwmark 5 table 5
ip route add table 5 default via $p2p_remote dev $iface
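Table 5 can also be given a symbolic name via /etc/iproute2/rt_tables, which makes later `ip rule` and `ip route show table …` invocations more readable (the name “tunnel” is my own choice). The file format is simply “id name”, one mapping per line; writing to a temporary copy here just to illustrate it, since editing the real file needs root:

```shell
# rt_tables maps numeric routing table IDs to names; format: "<id> <name>".
# Real file: /etc/iproute2/rt_tables (root required to edit).
rt=$(mktemp)
echo '5 tunnel' >> "$rt"
grep -q '^5 tunnel$' "$rt" && echo "table 5 aliased as 'tunnel'"
```

With the line added to the real file, `ip rule add fwmark 5 table tunnel` works by name.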

At this stage packets should start flowing to the final destination; however, depending on your setup (it was the case with me), the remote may not know where to send them back. Adding a src hint of $p2p_local to the default route won’t help, as the system doesn’t yet know the packet will go through the tunnel at the stage when it is generated (it has to pass through netfilter first to get its mark). The way around it is to simply MASQUERADE again at the point where packets are entering the tunnel:

iptables -t nat -A POSTROUTING  -o $iface -j MASQUERADE
One more gotcha: on the local machine you may also have to disable reverse path filtering (it is enabled by default on recent distributions), otherwise the kernel will drop the replies coming back through the tunnel:

echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter
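Putting the local-side steps together, a condensed sketch (assuming the same placeholder variables as above, run as root):

```shell
#!/bin/sh
# Local-side setup, condensed. $iface, $local, $remote, $p2p_local and
# $p2p_remote are placeholders to be filled in with your own values.
iptables -t mangle -A OUTPUT -m owner --uid-owner external.ip -j MARK --set-mark 5
ip tunnel add "$iface" mode gre remote "$remote" local "$local" ttl 255 dev eth0
ifconfig "$iface" "$p2p_local" netmask 255.255.255.255 pointopoint "$p2p_remote" mtu 1400 up
ip rule add fwmark 5 table 5
ip route add table 5 default via "$p2p_remote" dev "$iface"
iptables -t nat -A POSTROUTING -o "$iface" -j MASQUERADE
echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter
```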
