Adventures in Rust and Load Balancers

Lately, a pastime for me has been learning and tinkering in Rust. Since Rust is a systems programming language, I decided a load balancer would make a good pet project to hack on. While there are many exciting Layer 7 proxies out there, the available Layer 4 load balancers are all industrial strength and somewhat complex to get up and running. So I thought I'd create a more general-purpose Layer 4 load balancer. These are my notes and takeaways from my load balancer side project, Convey.

Convey

A goal of this project was to build a load balancer that easily supports Layer 4 Network Load Balancing but is still modern and general purpose. Convey supports a few modes of operation, but some of the features are universal, namely health checking of backends for availability and hot reloading of the load balancer configuration. This is probably up for discussion, but imo a modern load balancer should have features like stats counters, health checking, and hot configuration reloading baked in.
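
To give a flavor of what the health-checking piece involves, here is a minimal, hypothetical sketch of a periodic TCP availability check using tokio. It is not Convey's actual implementation; the backend address, timeout, and interval are made-up values for illustration.

use std::time::Duration;
use tokio::net::TcpStream;
use tokio::time;

// Hypothetical check: a backend counts as "up" if it accepts a TCP
// connection within a short timeout. A real health checker would also
// track state changes and add/remove the backend from rotation.
async fn backend_is_up(addr: &str) -> bool {
    time::timeout(Duration::from_secs(2), TcpStream::connect(addr))
        .await
        .map(|conn| conn.is_ok())
        .unwrap_or(false)
}

#[tokio::main]
async fn main() {
    let backend = "192.168.1.50:8080"; // made-up backend address
    let mut ticker = time::interval(Duration::from_secs(5));
    loop {
        ticker.tick().await;
        println!("{} up: {}", backend, backend_is_up(backend).await);
    }
}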

Proxy Mode

In a proxy setup, the client's TCP connection is terminated at the load balancer. The load balancer copies the payload and initiates another TCP stream to one of the load-balanced backend servers. This connection persists for the length of the TCP session established by the client.

sudo RUST_LOG=DEBUG ./target/release/convey --config=config.toml
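
For a sense of what Proxy mode does under the hood, here is a minimal accept-and-forward loop built on tokio. This is only a sketch under assumptions (hardcoded listen and backend addresses, no backend selection, health checks, or stats), not Convey's actual code.

use tokio::io::copy_bidirectional;
use tokio::net::{TcpListener, TcpStream};

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // Clients connect to the load balancer address (placeholder port).
    let listener = TcpListener::bind("0.0.0.0:8080").await?;
    loop {
        let (mut client, _) = listener.accept().await?;
        // Each client connection gets its own task that opens a second
        // TCP stream to a backend and copies bytes in both directions
        // until either side closes.
        tokio::spawn(async move {
            if let Ok(mut backend) = TcpStream::connect("192.168.1.50:80").await {
                let _ = copy_bidirectional(&mut client, &mut backend).await;
            }
        });
    }
}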

Passthrough Mode

A Passthrough setup is one specific to Network Load Balancing. At least that's been my perception. Similar to the proxy, the client tries connecting to the single load balancer address. Unlike Proxy mode, however, in a Passthrough setup the client's TCP session does not terminate at the load balancer. Instead, the packet is processed, manipulated, and forwarded on to a backend server. By processed, I mean the necessary connection tracking is put in place or updated so future packets from, or back to, the client go to the right place. And by manipulated, I mainly mean the packet is NAT'ed appropriately. The client should think it's communicating with the load balancer address the entire time. Ultimately, though, the TCP connection terminates at a load-balanced backend server.
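
To make "processed and manipulated" a bit more concrete, here is a heavily simplified, hypothetical rewrite step using the pnet crate. The function name and arguments are made up, and a real Passthrough path would also record the connection so return traffic can be rewritten back toward the client.

use std::net::Ipv4Addr;
use pnet::packet::MutablePacket;
use pnet::packet::ipv4::{self, MutableIpv4Packet};
use pnet::packet::tcp::{self, MutableTcpPacket};

// Hypothetical per-packet NAT for Passthrough mode: point the client's
// packet at a chosen backend and fix up the checksums.
fn nat_to_backend(ip_buf: &mut [u8], lb_ip: Ipv4Addr, backend_ip: Ipv4Addr, backend_port: u16) {
    if let Some(mut ip_pkt) = MutableIpv4Packet::new(ip_buf) {
        ip_pkt.set_source(lb_ip); // replies route back through the load balancer
        ip_pkt.set_destination(backend_ip);
        let (src, dst) = (ip_pkt.get_source(), ip_pkt.get_destination());
        if let Some(mut tcp_pkt) = MutableTcpPacket::new(ip_pkt.payload_mut()) {
            tcp_pkt.set_destination(backend_port);
            // The TCP checksum covers a pseudo-header with the new addresses.
            let csum = tcp::ipv4_checksum(&tcp_pkt.to_immutable(), &src, &dst);
            tcp_pkt.set_checksum(csum);
        }
        let csum = ipv4::checksum(&ip_pkt.to_immutable());
        ip_pkt.set_checksum(csum);
    }
}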

Passthrough Setup

To run Convey in Passthrough mode, we need a couple of iptables rules on the load balancer so the kernel's own network stack doesn't also process (and, for example, reset) the connections Convey handles in user space:

sudo iptables -t raw -A PREROUTING -p tcp --dport <LOAD_BALANCER_PORT> -j DROP
sudo iptables -t raw -A PREROUTING -p tcp --sport <BACKEND_SERVER_PORT> --dport 33768:61000 -j DROP
sudo RUST_LOG=DEBUG ./target/release/convey --passthrough --config=config.toml

DSR Mode

With DSR, the client again thinks it's establishing a connection to the load balancer, but traffic is forwarded on to a backend using the same mechanisms described in Passthrough mode. The internals of DSR are identical to Passthrough; there is just a flag indicating whether to set the IP/TCP sources to the client (for DSR) or the load balancer (for Passthrough). With this mode there is less connection-tracking overhead in the load balancer, so throughput should be higher relative to Passthrough.
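
As a tiny sketch of that flag (hypothetical names, not Convey's code), the conceptual difference between the two modes comes down to which source address is written into the forwarded packet:

use std::net::Ipv4Addr;

// Hypothetical helper: DSR keeps the client as the source so the backend
// replies to the client directly; Passthrough substitutes the load
// balancer's address so replies flow back through it.
fn forwarded_source(dsr: bool, client_ip: Ipv4Addr, lb_ip: Ipv4Addr) -> Ipv4Addr {
    if dsr { client_ip } else { lb_ip }
}

fn main() {
    let client = Ipv4Addr::new(192, 168, 1, 10); // made-up addresses
    let lb = Ipv4Addr::new(192, 168, 1, 197);
    assert_eq!(forwarded_source(true, client, lb), client); // DSR
    assert_eq!(forwarded_source(false, client, lb), lb); // Passthrough
}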

DSR Setup

We need the same rule on the load balancer for ingress packets as for Passthrough mode:

sudo iptables -t raw -A PREROUTING -p tcp --dport <LOAD_BALANCER_PORT> -j DROP

The tc rules below go on the backend servers. The first leaves traffic destined for the load balancer itself untouched, while the second NATs the source address of replies sent directly to clients so they appear to come from the load balancer.

sudo tc qdisc add dev enp0s8 root handle 10: htb

sudo tc filter add dev enp0s8 parent 10: protocol ip prio 1 u32 match ip src <LOCAL_SERVER_IP> match ip sport <LISTEN_PORT> 0xffff match ip dst <LOAD_BALANCER_IP> action ok

sudo tc filter add dev enp0s8 parent 10: protocol ip prio 10 u32 match ip src <LOCAL_SERVER_IP> match ip sport <LISTEN_PORT> 0xffff action nat egress 192.168.1.117 <LOAD_BALANCER_IP>
sudo RUST_LOG=DEBUG ./target/release/convey --dsr --config=config.toml

Benchmarks

Below are some basic benchmarks of Convey's Proxy and DSR modes against Nginx and HAProxy. These are very simple; they were performed in a Vagrant environment on my laptop.

  • 2 CPU cores per server, except the load balancer, which got 4
  • Ubuntu 16.04

wrk -t6 -c200 -d120s --latency http://192.168.1.197

+--------------+----------+-----------+-----------+-----------+
| SW           | Avg Lat. | Avg Req/s | Total Req | Data Read |
+--------------+----------+-----------+-----------+-----------+
| Nginx        | 9.95ms   | 3.42k     | 2450490   | 1.96GB    |
| HAProxy      | 9.43ms   | 3.55k     | 2544029   | 2.04GB    |
| Convey Proxy | 7.46ms   | 5.81k     | 4156170   | 3.32GB    |
| Convey DSR   | 16.30ms  | 4.98k     | 3565295   | 2.85GB    |
+--------------+----------+-----------+-----------+-----------+

Takeaways

  • Rust is really fast
  • Rust on tokio is really, really fast
  • I think there are opportunities for low-level networking packages in Rust that focus on throughput for these types of workloads. pnet is very handy for handling and manipulating packets in user space, but it has no async support, and at the IP/TCP layers much of the packet manipulation amounts to copying bytes around in the background.
  • I relied on some useful Rust packages for things like consistent hashing, but while researching what to use, it didn't feel like some of them are actively maintained. I'm more of a Rust fan now than before, and I hope more people and companies start using Rust for their projects so these packages get more support.
  • Some to-dos: look into leveraging BPF to filter ingress traffic in the kernel, and re-evaluate pnet for Passthrough and DSR modes (netlink-style packet forwarding might be easier, more efficient, and faster).
