Are you considering using containers? Do you care about network latency? If so, read on.
Kubernetes is an open-source container orchestration platform. It forms a cluster out of a set of machines (nodes). To accomplish things like load balancing and container-to-container communication, Kubernetes relies on both a network overlay inside the cluster and a proxy process, called kube-proxy, that runs on each node in the cluster. This post describes two ways you can get ultra-low latency for containers: running on physical servers and using Kubernetes's new IPTables-based proxy mode.
Item #1 - Consider Using Physical Servers Rather than Virtual Machines
Physical servers don't have the hypervisor overhead layer that is common to virtual machines (VMs). As such, running containers directly on Bare Metal (physical servers) should offer you faster performance. But how much of a difference? As it turns out, there is quite a bit of difference.
Here at CenturyLink Cloud, we offer Kubernetes clusters on both VMs and Bare Metal servers. To test network latency, we used the open-source netperf testing utility, which the Kubernetes community wrapped up to make it easily deployable to a Kubernetes cluster. The test has a client process running inside a Docker container on one host in the cluster, and a server process running inside a Docker container on a different host in the cluster. When comparing VMs to our smallest physical servers (the 4-core option) located in our VA1 data center, we saw a 3x improvement in network latency. Now, we would like to share how we did these tests and the detailed test results.
Kubernetes Cluster Creation
Using the tool found here, we created two Kubernetes clusters on top of the CenturyLink Cloud infrastructure. One cluster was made up of four Bare Metal servers and the other cluster was made up of four VMs.
Commands to Create the Two Clusters
./kube-up.sh -c=k8s-baremetal1 -d=VA1 -m=4 -t=bareMetal
./kube-up.sh -c=k8s-vm1 -d=VA1 -m=4 -t=standard -mem=8 -cpu=4
# Drink coffee and wait around about 15 minutes...
Run the Network Tests
Next, we needed to perform the network tests. To build the netperf Golang tool, we ran:
git clone https://github.com/kubernetes/contrib/
cd contrib/netperf-tester
godep go build
To run the network testing tool, we ran the following command twice, once against each Kubernetes cluster.
Note: This tool uses the kubectl CLI to communicate with the Kubernetes cluster in order to run the tests.
./netperf-tester -number 1000
A snippet from the test results looks like this:
[Test results snippet: Bare Metal Servers]
Physical vs VMs - Network Latency Histogram
We then took the full results (1,000 measurements on each platform) and created the histogram chart shown above. The results are impressive: roughly a 3x improvement. You will also notice that the jitter (standard deviation) in the physical-server results is much smaller.
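To make the jitter comparison concrete, the standard deviation can be computed directly from a column of latency samples. Here is a minimal sketch using awk; the sample values are made up for illustration, not our measured data:

```shell
# Compute mean and (population) standard deviation of latency samples
# in microseconds. The five values below are illustrative only.
printf '60\n62\n58\n61\n59\n' | awk '
  { sum += $1; sumsq += $1 * $1; n++ }
  END {
    mean = sum / n
    printf "mean=%.1f stddev=%.2f\n", mean, sqrt(sumsq / n - mean * mean)
  }'
```

Piping the real 1,000-sample output through the same one-liner gives the jitter numbers behind the histogram comparison.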
As you can see, running Kubernetes and containers on the physical machines provides much lower latency than running Kubernetes on virtual machines. If you are running containers, and network latency is important to you, you may want to consider Bare Metal Servers as an option.
Item #2 - Use Kube-proxy IPTables Mode
Regardless of the hardware you choose to run your cluster on, the Kubernetes community has made substantial progress on networking services that are both scalable and performant. One recent improvement from the open-source community is the reworking of the kube-proxy process to use IPTables on the host to perform its load-balancing, proxying, and NAT functions. As the diagram below shows, this change decreased latency as measured by the open-source network testing tool netperf. The latency (in microseconds) of the older method (called userspace) is shown in red, while the latency of the newer method (called IPTables) is shown in blue.
For the source information, click here.
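Under the hood, the IPTables proxier avoids the userspace round trip by translating Service traffic with kernel NAT rules. The chains kube-proxy actually creates are more elaborate (a KUBE-SERVICES chain jumping to per-service and per-endpoint chains), but a hand-simplified, illustrative DNAT rule for a Service might look like this; the addresses and ports are made up:

```shell
# Simplified illustration (not the exact rules kube-proxy generates):
# rewrite traffic destined for Service ClusterIP 10.0.0.10:80 to a backend
# pod at 10.244.1.5:8080, entirely in the kernel -- no userspace proxy hop.
iptables -t nat -A PREROUTING -d 10.0.0.10/32 -p tcp --dport 80 \
  -j DNAT --to-destination 10.244.1.5:8080
```

Because the packet never leaves the kernel to visit a userspace proxy process, each connection saves two context switches and a copy, which is where the latency win comes from.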
Nice work Kubernetes community! If you are a Kubernetes user, nothing is required of you as new Kubernetes clusters now come with IPTables proxy mode enabled by default. If you are running an older version of Kubernetes, consider upgrading.
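If you want to check or force the mode on an existing node, kube-proxy exposes it as a command-line flag. The fragment below shows the flag in isolation; the full kube-proxy command line on your cluster will carry more options:

```shell
# Select the iptables proxier explicitly; older clusters may still be
# running with --proxy-mode=userspace.
kube-proxy --proxy-mode=iptables
```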
If you care about ultra-low-latency networking for containers, consider using physical machines. If you don't, virtual machines may be the better fit for running your containers, since they let you scale more incrementally.
Thanks for reading, Chris Kleban
Get an On-Demand Physical Server Kubernetes Cluster on CenturyLink Cloud Today!
Are you looking for some of this low latency network love? If so, you are only 15 minutes away from having it.
We give you the deployment tools you need to manage your applications quickly and easily. Check out our Kubernetes Knowledge Base article. It will get you started using Ansible to create a Kubernetes cluster on CenturyLink Cloud - all by running a single script.
If you don’t have a CenturyLink Cloud account yet, head over to our website and activate an account.