
Increase your Linux server Internet speed with TCP BBR congestion control

I recently read that TCP BBR has significantly increased throughput and reduced latency for connections on Google's internal backbone networks, and that it improved google.com and YouTube Web server throughput by 4 percent on average globally, and by more than 14 percent in some countries. The TCP BBR patch needs to be applied to the Linux kernel; the first public release of BBR came in September 2016, and it was merged into Linux kernel 4.9. The patch is available for anyone to download and install. Another option is Google Cloud Platform (GCP), which by default uses this cutting-edge congestion control algorithm.

Requirements for Linux server Internet speed with TCP BBR
Make sure that your Linux kernel has the following options compiled either as modules or built into the Linux kernel:

CONFIG_TCP_CONG_BBR
CONFIG_NET_SCH_FQ
You must use Linux kernel version 4.9 or above. On Debian/Ubuntu Linux, type the following grep command/egrep command:
$ grep 'CONFIG_TCP_CONG_BBR' /boot/config-$(uname -r)
$ grep 'CONFIG_NET_SCH_FQ' /boot/config-$(uname -r)
$ egrep 'CONFIG_TCP_CONG_BBR|CONFIG_NET_SCH_FQ' /boot/config-$(uname -r)
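If these options are compiled as modules (=m), the tcp_bbr module may not be loaded yet. Here is a quick sketch to load it by hand and confirm that BBR shows up in the list of available congestion control algorithms (the /etc/modules-load.d/bbr.conf filename below is my own choice, not a system default):
$ sudo modprobe tcp_bbr
$ sysctl net.ipv4.tcp_available_congestion_control
The output should now include bbr. To load the module automatically at boot time:
$ echo 'tcp_bbr' | sudo tee /etc/modules-load.d/bbr.conf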

Sample outputs:

[Figure: Make sure that your Linux kernel has the TCP BBR option set up]

I am using Linux kernel version 4.9.0-8-amd64 on a Debian server and 4.18.0-15-generic on an Ubuntu server. If the above options are not found, you need to either compile the latest kernel yourself or install the latest version of the Linux kernel using the apt-get command/apt command.
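For example, on an Ubuntu server you can check the running kernel version and pull in the latest packaged kernel roughly as follows (a sketch; the linux-generic metapackage is Ubuntu-specific and the exact package name may differ on your release):
$ uname -r
$ sudo apt update
$ sudo apt install linux-generic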
Run a test before you enable TCP BBR to improve network speed on Linux
Type the following command on Linux server:
# iperf -s
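If the iperf binary is missing, on Debian/Ubuntu it can usually be installed from the standard repositories (assuming your release ships the iperf package):
$ sudo apt install iperf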

Execute the following on your Linux client:
$ iperf -c gcvm.backup -i 2 -t 30


How to enable TCP BBR congestion control on Linux
Edit the /etc/sysctl.conf file or create a new file in the /etc/sysctl.d/ directory:
$ sudo vi /etc/sysctl.conf

OR
$ sudo vi /etc/sysctl.d/10-custom-kernel-bbr.conf
Append the following two lines:
net.core.default_qdisc=fq
net.ipv4.tcp_congestion_control=bbr
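If you would rather not open an editor, the same two settings can be written with a one-liner (assuming the /etc/sysctl.d/10-custom-kernel-bbr.conf path from above):
$ printf 'net.core.default_qdisc=fq\nnet.ipv4.tcp_congestion_control=bbr\n' | sudo tee /etc/sysctl.d/10-custom-kernel-bbr.conf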

If you used an editor, save and close the file, i.e. exit from the vim/vi text editor by typing :x!. Next, you must either reboot the Linux box or reload the changes using the sysctl command:
$ sudo reboot

OR
$ sudo sysctl --system

Sample outputs:

* Applying /etc/sysctl.d/10-console-messages.conf ...
kernel.printk = 4 4 1 7
* Applying /etc/sysctl.d/10-custom-kernel-bbr.conf ...
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
kernel.kptr_restrict = 1
* Applying /etc/sysctl.d/10-link-restrictions.conf ...
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/10-lxd-inotify.conf ...
fs.inotify.max_user_instances = 1024
* Applying /etc/sysctl.d/10-magic-sysrq.conf ...
kernel.sysrq = 176
* Applying /etc/sysctl.d/10-network-security.conf ...
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.tcp_syncookies = 1
* Applying /etc/sysctl.d/10-ptrace.conf ...
kernel.yama.ptrace_scope = 1
* Applying /etc/sysctl.d/10-zeropage.conf ...
vm.mmap_min_addr = 65536
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.conf ...
You can verify the new settings with the following sysctl command. Run:
$ sysctl net.core.default_qdisc
net.core.default_qdisc = fq
$ sysctl net.ipv4.tcp_congestion_control
net.ipv4.tcp_congestion_control = bbr
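Beyond the sysctl values, you can confirm that established TCP connections are actually running BBR with the ss command; each socket's indented info line starts with the congestion control algorithm in use (a quick sketch):
$ ss -ti
$ ss -ti | grep -c bbr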

Test BBR congestion control on Linux
In my testing between two long-distance Linux servers with Gigabit ports connected to the Internet, I was able to boost throughput from 250 Mbit/s to 800 Mbit/s. You can use tools such as the wget command to measure bandwidth:
$ wget https://your-server-ip/file.iso
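Alternatively, curl can report the measured average download speed (in bytes per second) without keeping the file around; a small sketch using the same placeholder URL:
$ curl -o /dev/null -w '%{speed_download}\n' https://your-server-ip/file.iso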

I also noticed I was able to push almost 100 Mbit/s of OpenVPN traffic. Previously, I was only able to push 30-40 Mbit/s. Overall, I am quite satisfied with the TCP BBR congestion control option for my Linux box.

Linux TCP BBR test with iperf
iperf is a commonly used network testing tool for TCP/UDP data streams that measures network throughput. This tool can validate the impact of the Linux TCP BBR settings.

Type the following command on the Linux server with TCP BBR congestion control enabled:
# iperf -s

Sample outputs:

------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 10.128.0.2 port 5001 connected with AAA.BB.C.DDD port 46978
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-30.6 sec 127 MBytes 34.7 Mbits/sec
Type the following command on the Linux/Unix client:
$ iperf -c YOUR-Linux-Server-IP-HERE -i 2 -t 30

Sample output when connected to the TCP BBR enabled Linux server:

------------------------------------------------------------
Client connecting to gcp-vm-nginx-www1, TCP port 5001
TCP window size: 45.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.8.0.2 port 46978 connected with xx.yyy.zzz.tt port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 2.0 sec 4.00 MBytes 16.8 Mbits/sec
[ 3] 2.0- 4.0 sec 8.50 MBytes 35.7 Mbits/sec
[ 3] 4.0- 6.0 sec 10.9 MBytes 45.6 Mbits/sec
[ 3] 6.0- 8.0 sec 16.2 MBytes 68.2 Mbits/sec
[ 3] 8.0-10.0 sec 5.29 MBytes 22.2 Mbits/sec
[ 3] 10.0-12.0 sec 9.38 MBytes 39.3 Mbits/sec
[ 3] 12.0-14.0 sec 8.12 MBytes 34.1 Mbits/sec
[ 3] 14.0-16.0 sec 8.12 MBytes 34.1 Mbits/sec
[ 3] 16.0-18.0 sec 8.38 MBytes 35.1 Mbits/sec
[ 3] 18.0-20.0 sec 6.75 MBytes 28.3 Mbits/sec
[ 3] 20.0-22.0 sec 8.12 MBytes 34.1 Mbits/sec
[ 3] 22.0-24.0 sec 8.12 MBytes 34.1 Mbits/sec
[ 3] 24.0-26.0 sec 9.50 MBytes 39.8 Mbits/sec
[ 3] 26.0-28.0 sec 7.00 MBytes 29.4 Mbits/sec
[ 3] 28.0-30.0 sec 8.12 MBytes 34.1 Mbits/sec
[ 3] 0.0-30.3 sec 127 MBytes 35.0 Mbits/sec
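
Note that many modern distributions ship iperf3 rather than the classic iperf used above. The workflow is the same; only the binary name changes (a sketch, assuming iperf3 is installed on both the server and the client):
# iperf3 -s
$ iperf3 -c YOUR-Linux-Server-IP-HERE -i 2 -t 30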