Preserving Client IP in Kubernetes
When deploying applications in Kubernetes it's not uncommon to want to preserve the client's source IP address. Given that you likely have a Service in front of your Pod, it may not come as a surprise that preserving the client address isn't always trivial: incoming traffic is typically SNATed on its way through the cluster, so the application in the Pod often sees internal network IPs as the client IP.
Preserved Client IP Support across vendors
IP preservation is addressed in the Kubernetes documentation on external load balancers, but there's a note that it may only be possible on GKE. I'm currently using Digital Ocean, and it turned out that setting externalTrafficPolicy: Local on my Service did not do what I wanted: internal network IPs still showed up in my application.
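For reference, this is roughly what I tried (a minimal sketch; the name and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  # deliver traffic straight to Pods on the receiving node, skipping the
  # extra hop that rewrites the source address
  externalTrafficPolicy: Local
  ports:
  - port: 9001
    protocol: TCP
    targetPort: 9001
  selector:
    app: my-app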
Digital Ocean are clearly aware of this need and have built a feature into their platform to address it, as detailed in their documentation on load balancers.
This is done by adding the following annotation to your Service:
metadata:
  name: proxy-protocol
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
The bad news is that whilst this feature will cause the external Service to pass on the client IP address, it does so using the PROXY protocol. If you're using nginx or something else that speaks PROXY, then you can stop reading - this should work with a quick setting tweak.
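For nginx, that tweak looks something like the following sketch (the trusted range in set_real_ip_from is a placeholder for your load balancer's addresses):

server {
    # accept a PROXY protocol header on incoming connections
    listen 9001 proxy_protocol;

    # use the address from the PROXY header as the client IP
    # (ngx_http_realip_module; only trust headers from this range)
    set_real_ip_from 10.0.0.0/8;
    real_ip_header proxy_protocol;
}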
If you have an application that doesn't speak PROXY then read on.
Cue mmproxy
After some searching I was pleased to discover that someone else had had this problem and solved it! The open source project mmproxy tackles exactly this challenge. It acts as a go-between: it understands the PROXY protocol and uses some iptables and routing tricks to pass connections on to the server with the original client address as the source.
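For context, PROXY protocol v1 just prepends a single human-readable line to the TCP stream before any application data, along these lines (addresses made up):

PROXY TCP4 203.0.113.7 10.245.1.17 51234 9001

An application that isn't expecting that line will treat it as garbage, which is why a translator like mmproxy has to sit in front of it.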
But can we make it work in a Kubernetes cluster? After some experimenting, I'm pleased to report that I was able to get this working on a standard Digital Ocean Kubernetes cluster. Here's the config that worked for me.
This creates a Service set up to use the PROXY protocol (because of the annotation and externalTrafficPolicy: Local):
apiVersion: v1
kind: Service
metadata:
  name: mmproxy
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 9001
    protocol: TCP
    targetPort: 9001
  externalTrafficPolicy: Local
  selector:
    app: mmproxy
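If you want to double-check that the Service picked up both settings, kubectl describe will show the annotation and the external traffic policy:

$ kubectl describe service mmproxy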
Next we create a Deployment that receives the traffic with mmproxy and forwards it onward within the same Pod:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mmproxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mmproxy
  template:
    metadata:
      labels:
        app: mmproxy
    spec:
      initContainers:
      - name: setup
        image: docker.pkg.github.com/andrewmichaelsmith/mmproxy/mmproxy:latest
        command: [ "/bin/bash", "-cx" ]
        args:
          # Source: https://github.com/cloudflare/mmproxy
          - echo 1 > /proc/sys/net/ipv4/conf/eth0/route_localnet;
            iptables -t mangle -I PREROUTING -m mark --mark 123 -m comment --comment mmproxy -j CONNMARK --save-mark;
            ip6tables -t mangle -I PREROUTING -m mark --mark 123 -m comment --comment mmproxy -j CONNMARK --save-mark;
            iptables -t mangle -I OUTPUT -m connmark --mark 123 -m comment --comment mmproxy -j CONNMARK --restore-mark;
            ip6tables -t mangle -I OUTPUT -m connmark --mark 123 -m comment --comment mmproxy -j CONNMARK --restore-mark;
            ip rule add fwmark 123 lookup 100;
            ip -6 rule add fwmark 123 lookup 100;
            ip route add local 0.0.0.0/0 dev lo table 100;
            ip -6 route add local ::/0 dev lo table 100;
            exit 0 ; # XXX hack
        securityContext:
          privileged: true
      containers:
      - name: netcat
        image: docker.pkg.github.com/andrewmichaelsmith/mmproxy/mmproxy:latest
        command: [ "/bin/bash", "-cx" ]
        args:
          - apt-get install -y netcat;
            while true; do nc -l -vv -p 9002; done;
      - name: mmproxy
        image: docker.pkg.github.com/andrewmichaelsmith/mmproxy/mmproxy:latest
        command: [ "/bin/bash", "-cx" ]
        args:
          - echo "0.0.0.0/0" > allowed-networks.txt;
            /mmproxy/mmproxy --allowed-networks allowed-networks.txt -l 0.0.0.0:9001 -4 127.0.0.1:9002 -6 [::1]:9002;
        securityContext:
          privileged: true
So here we have:
- init container (setup): performs the mmproxy network setup (as per https://github.com/cloudflare/mmproxy).
- proxy container (mmproxy): runs mmproxy, listening on 9001 and forwarding to 9002.
- app container (netcat): listens on 9002 and logs connections.
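If things misbehave, it's worth confirming that the init container's rules actually landed. All containers in the Pod share a network namespace and the mmproxy container runs privileged, so something like this (with your Pod's name substituted) should list the mangle rules:

$ kubectl exec -it <mmproxy-pod> -c mmproxy -- iptables -t mangle -S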
Does it work? First we find the external IP of our Service:
$ kubectl get svc | grep mmproxy
mmproxy LoadBalancer 10.245.126.223 157.245.27.182 9001:30243/TCP 5m27s
Then try and connect to it:
$ nc -v 157.245.27.182 9001
Connection to 157.245.27.182 9001 port [tcp/*] succeeded!
HELLO
And check the logs:
+ nc -l -vv -p 9002
listening on [any] 9002 ...
connect to [127.0.0.1] from 44.125.114.38 59630
HELLO
Hurray! The actual client IP is preserved.