
Notes - Simulating an HAProxy Environment with Docker


Note: Will be updated regularly. If something is confusing, browse to the links in the resources section.

Setup #

The setup will consist of 6 nginx containers, simulating 2 websites with 3 webservers each. The webservers will listen on ports 8081 through 8086. I’m using Debian as the Docker host.

HAProxy will have 2 frontends, listening on ports 8000 and 8100. Each backend will have 3 webservers.

Create web contents #

Create some files that will help identify the webservers once they’re up and running:

$ mkdir webfiles

$ for site in `seq 1 2`; do for server in `seq 1 3`; do echo SITE$site - WEB$server > ~/webfiles/site${site}_server${server}.txt ; done; done

Viewing the contents:

root@debianlab:~/webfiles# cat *
SITE1 - WEB1
SITE1 - WEB2
SITE1 - WEB3
SITE2 - WEB1
SITE2 - WEB2
SITE2 - WEB3

Deploy the Containers #

Start the containers and check the status:

root@debianlab:~/webfiles# port=1; for site in `seq 1 2`; do for server in `seq 1 3`; do docker run -dt --name site${site}_server${server} -p 808${port}:80 nginx ; port=$(($port + 1)); done; done
e157622553335125fe4485f9bc8b64e4a9bb56a0a7b72adea52cc22f4e8f491d
1eacd8526e5a3cdc8002d140c87dc17f561ab38d2da2bda2a1e20ab173830fb1
1140da95572a4e1e4bb26f3c8d395790dfc9cd37a8168cf1f41d64a14adc687d
b597aae33c2ab2f7fb03746563f6cc0762917b1d4097dbda1b7e5f0bf5878fdf
723cdf8f3947230e574606e7b6e840a1c35ab10dd40e9773355f27deccd10d50
3d30b55011f9d635ce136a435278fbb3f8e569a4ace491fa945b830165b27f55
root@debianlab:~/webfiles#
root@debianlab:~/webfiles# docker ps --format '{{.Names}}\t{{.Ports}}\t{{.Status}}'
site2_server3   0.0.0.0:8086->80/tcp, :::8086->80/tcp   Up 5 minutes
site2_server2   0.0.0.0:8085->80/tcp, :::8085->80/tcp   Up 5 minutes
site2_server1   0.0.0.0:8084->80/tcp, :::8084->80/tcp   Up 5 minutes
site1_server3   0.0.0.0:8083->80/tcp, :::8083->80/tcp   Up 5 minutes
site1_server2   0.0.0.0:8082->80/tcp, :::8082->80/tcp   Up 5 minutes
site1_server1   0.0.0.0:8081->80/tcp, :::8081->80/tcp   Up 5 minutes

Copy the test files inside the containers:

root@debianlab:~/webfiles# for site in `seq 1 2`; do for server in `seq 1 3`; do docker cp ~/webfiles/site${site}_server${server}.txt site${site}_server${server}:/usr/share/nginx/html/test.txt; done; done

All webservers have a unique test.txt file inside /usr/share/nginx/html/. Let’s view all of them:

root@debianlab:~/webfiles# port=1; for site in `seq 1 2`; do for server in `seq 1 3`; do curl -s http://localhost:808$port/test.txt; port=$(($port + 1)); done; done
SITE1 - WEB1
SITE1 - WEB2
SITE1 - WEB3
SITE2 - WEB1
SITE2 - WEB2
SITE2 - WEB3

As expected.

Configure and Start HAProxy #

On RHEL-based distros, you need to enable the haproxy_connect_any SELinux boolean:

[root@rockylab ~]# semanage boolean --list | grep -i haproxy
haproxy_connect_any            (off  ,  off)  Allow haproxy to connect any
[root@rockylab ~]#
[root@rockylab ~]# setsebool -P haproxy_connect_any 1
[root@rockylab ~]#
[root@rockylab ~]# semanage boolean --list | grep -i haproxy
haproxy_connect_any            (on   ,   on)  Allow haproxy to connect any

Modify /etc/haproxy/haproxy.cfg with the following contents:

## Docker Simulation Frontend
frontend site1
    bind *:8000
    default_backend site1

frontend site2
    bind *:8100
    default_backend site2

## Docker Simulation Backend 1
backend site1
    balance    roundrobin
    server site1-web1 127.0.0.1:8081 check
    server site1-web2 127.0.0.1:8082 check
    server site1-web3 127.0.0.1:8083 check

## Docker Simulation Backend 2
backend site2
    balance roundrobin
    server site2-web1 127.0.0.1:8084 check
    server site2-web2 127.0.0.1:8085 check
    server site2-web3 127.0.0.1:8086 check

## Stats Page
listen stats
    bind *:8050
    stats enable
    stats uri /
    stats hide-version

The configuration sets up two frontends, site1 and site2, along with their corresponding backend servers and a stats page. Let’s break it down (for a single frontend/backend pair and the stats page):

Frontend Configuration:

  • frontend - Defines a frontend named site1.
  • bind - Binds the frontend to all available network interfaces ("*") on port 8000.
  • default_backend - Sets the default backend (where incoming requests are sent) for site1.

Backend Configuration:

  • backend - The backend configuration for the site1 frontend.
  • balance - Specifies the load-balancing algorithm, round-robin, distributing requests evenly among the available servers.
  • server - Defines a server name with its IP and port. Health checks are also enabled (check).

Stats Page Configuration:

  • listen stats - Defines a listener named stats for the statistics page.
  • bind - Binds the listener to all available network interfaces on port 8050.
  • stats enable - Enables statistics reporting.
  • stats uri - Specifies the URI path for accessing the statistics page.
  • stats hide-version - Hides the HAProxy version.
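
Besides the browser, you can also pull the stats in CSV form from the command line by appending ;csv to the stats URI (a quick sketch, assuming HAProxy is running locally; the first two columns are the proxy and server names):

curl -s "http://localhost:8050/;csv" | cut -d, -f1-2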

Once done, start and enable HAProxy:

systemctl enable --now haproxy
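
If you prefer, validate the configuration first and only start HAProxy if the check passes (the same haproxy -c check is used again further down in this post):

haproxy -c -f /etc/haproxy/haproxy.cfg && systemctl enable --now haproxy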

Test the webservers:

root@debianlab:~# for site in `seq 1 3`; do curl http://localhost:8000/test.txt; done
SITE1 - WEB1
SITE1 - WEB2
SITE1 - WEB3
root@debianlab:~# for site in `seq 1 3`; do curl http://localhost:8100/test.txt; done
SITE2 - WEB1
SITE2 - WEB2
SITE2 - WEB3

MacBook-Pro:~ kavish$ for site in `seq 1 3`; do curl http://192.168.100.131:8000/test.txt; done
SITE1 - WEB1
SITE1 - WEB2
SITE1 - WEB3
MacBook-Pro:~ kavish$ for site in `seq 1 3`; do curl http://192.168.100.131:8100/test.txt; done
SITE2 - WEB1
SITE2 - WEB2
SITE2 - WEB3 

Works as expected.

Scripts #

  • create_containers.sh:

#!/bin/bash

echo "Creating containers..."

port=1; for site in `seq 1 2`; do for server in `seq 1 3`; do docker run -dt --name site${site}_server${server} -p 808${port}:80 nginx ; port=$(($port + 1)); done; done

sleep 2

echo "Copying Webfiles..."

for site in `seq 1 2`; do for server in `seq 1 3`; do docker cp ~/webfiles/site${site}_server${server}.txt site${site}_server${server}:/usr/share/nginx/html/test.txt; done; done

sleep 2

docker ps

  • stop_containers.sh:

#!/bin/bash
echo -e "Stopping Containers...\n"
docker stop site{1..2}_server{1..3}

echo " "
docker ps

  • start_containers.sh:

#!/bin/bash

docker start site{1..2}_server{1..3}

  • stop_remove_containers.sh:

#!/bin/bash
docker stop site{1..2}_server{1..3}

docker rm site{1..2}_server{1..3}

HTTP Rewrites #

Scenario: Management has told us that we need to clean all the nonessential files from the root of our sites. We have a file that needs to be moved, but we need to be able to continue to support requests for it in its current location.

Moving the testfiles #

I’m going to create a new subfolder called textfiles in /usr/share/nginx/html/:

root@debianlab:~/webfiles/scripts# for site in `seq 1 2`; do for server in `seq 1 3`; do docker exec site${site}_server${server} mkdir -v /usr/share/nginx/html/textfiles; done; done
mkdir: created directory '/usr/share/nginx/html/textfiles'
mkdir: created directory '/usr/share/nginx/html/textfiles'
mkdir: created directory '/usr/share/nginx/html/textfiles'
mkdir: created directory '/usr/share/nginx/html/textfiles'
mkdir: created directory '/usr/share/nginx/html/textfiles'
mkdir: created directory '/usr/share/nginx/html/textfiles'

Now I’ll move the test.txt file inside /usr/share/nginx/html/textfiles/:

root@debianlab:~# for site in `seq 1 2`; do for server in `seq 1 3`; do docker exec site${site}_server${server} mv -v /usr/share/nginx/html/test.txt /usr/share/nginx/html/textfiles; done; done
renamed '/usr/share/nginx/html/test.txt' -> '/usr/share/nginx/html/textfiles/test.txt'
renamed '/usr/share/nginx/html/test.txt' -> '/usr/share/nginx/html/textfiles/test.txt'
renamed '/usr/share/nginx/html/test.txt' -> '/usr/share/nginx/html/textfiles/test.txt'
renamed '/usr/share/nginx/html/test.txt' -> '/usr/share/nginx/html/textfiles/test.txt'
renamed '/usr/share/nginx/html/test.txt' -> '/usr/share/nginx/html/textfiles/test.txt'
renamed '/usr/share/nginx/html/test.txt' -> '/usr/share/nginx/html/textfiles/test.txt'

Get a shell inside a container to verify:

root@debianlab:~# docker exec -it site1_server2 /bin/bash
root@ca9925191a0f:/#
root@ca9925191a0f:/# ls -l /usr/share/nginx/html/textfiles/
total 4
-rw-r--r-- 1 root root 13 Feb  6 05:30 test.txt
root@ca9925191a0f:/# exit
exit

Browse to the original location:

root@debianlab:~# curl -s -o /dev/null -w %"{http_code}" localhost:8100/test.txt;echo
404
root@debianlab:~#
root@debianlab:~# curl -s -o /dev/null -w %"{http_code}" localhost:8000/test.txt;echo
404

Rewrite an HTTP Request #

In both frontends, add the following rules:

acl p_ext_txt path_end -i .txt
acl p_folder_textfiles path_beg -i /textfiles/
http-request set-path /textfiles/%[path] if !p_folder_textfiles p_ext_txt

Like so:

## Docker Simulation Frontend
frontend site1
    bind *:8000
    default_backend site1
    acl p_ext_txt path_end -i .txt
    acl p_folder_textfiles path_beg -i /textfiles/
    http-request set-path /textfiles/%[path] if !p_folder_textfiles p_ext_txt

frontend site2
    bind *:8100
    default_backend site2
    acl p_ext_txt path_end -i .txt
    acl p_folder_textfiles path_beg -i /textfiles/
    http-request set-path /textfiles/%[path] if !p_folder_textfiles p_ext_txt

Two ACLs (Access Control Lists) were set to match specific conditions on incoming requests:

  • acl p_ext_txt path_end -i .txt - Defines an ACL named “p_ext_txt”. The ACL checks if the path of the requested URL ends with .txt.
  • acl p_folder_textfiles path_beg -i /textfiles/ - Defines an ACL named “p_folder_textfiles”. It checks if the path of the requested URL begins with /textfiles/.

and a request rule:

  • http-request set-path /textfiles/%[path] if !p_folder_textfiles p_ext_txt - This is an HTTP request rule that modifies the path of the request. If the path does not start with /textfiles/ and ends with .txt, it prepends /textfiles/ to the path. For example, a request for /example.txt will be rewritten to /textfiles/example.txt.

Verify HAProxy Configuration:

root@debianlab:~# haproxy -f /etc/haproxy/haproxy.cfg -c
Configuration file is valid

Reload HAProxy:

root@debianlab:~# systemctl reload haproxy

Check the original and new location of test.txt:

root@debianlab:~# curl -s http://localhost:8000/test.txt
SITE1 - WEB1
root@debianlab:~# curl -s http://localhost:8100/test.txt
SITE2 - WEB1
root@debianlab:~#
root@debianlab:~# curl -s http://localhost:8000/textfiles/test.txt
SITE1 - WEB2
root@debianlab:~# curl -s http://localhost:8100/textfiles/test.txt
SITE2 - WEB2
root@debianlab:~#

Now it’s available in both locations.

Resources #

https://www.haproxy.com/blog/testing-your-haproxy-configuration/
https://www.haproxy.com/blog/introduction-to-haproxy-acls/
https://www.haproxy.com/documentation/hapee/latest/traffic-routing/rewrites/rewrite-requests/

Domain Mapping based on Host Header #

First, let’s add both site-1 and site-2 in /etc/hosts:

## HAProxy Start
127.0.0.1 www.site-1.com site-1.com
127.0.0.1 www.site-2.com site-2.com
## HAProxy End

Next, let’s make some changes in /etc/haproxy/haproxy.cfg:

frontend site1-site2
    bind *:80
    mode http

    acl p_ext_txt path_end -i .txt
    acl p_folder_textfiles path_beg -i /textfiles/
    http-request set-path /textfiles/%[path] if !p_folder_textfiles p_ext_txt

    acl site-1 hdr(host) -i site-1.com www.site-1.com
    acl site-2 hdr(host) -i site-2.com www.site-2.com
    use_backend site1 if site-1
    use_backend site2 if site-2

Both frontends were replaced with the above content. We’re now listening on port 80. Traffic will be routed to the requested backend based on the HTTP Host header.
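
If you’d rather not rely on /etc/hosts, you can also exercise the mapping by setting the Host header explicitly (assuming HAProxy is reachable on localhost port 80):

curl -s -H "Host: www.site-1.com" http://localhost/test.txt
curl -s -H "Host: www.site-2.com" http://localhost/test.txt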

Reload HAProxy and access both backends via domain name:

root@debianlab:~# for i in `seq 1 6`; do curl -s http://www.site-1.com/test.txt; done
SITE1 - WEB1
SITE1 - WEB2
SITE1 - WEB3
SITE1 - WEB1
SITE1 - WEB2
SITE1 - WEB3
root@debianlab:~# for i in `seq 1 6`; do curl -s http://www.site-2.com/test.txt; done
SITE2 - WEB1
SITE2 - WEB2
SITE2 - WEB3
SITE2 - WEB1
SITE2 - WEB2
SITE2 - WEB3

Resources #

https://www.haproxy.com/blog/how-to-map-domain-names-to-backend-server-pools-with-haproxy/

TLS/SSL Termination #

With TLS/SSL termination, traffic between HAProxy and the clients will be encrypted and communication between HAProxy and the backend servers will be over HTTP.

Create the certificate with mkcert:

root@debianlab:/etc/haproxy# mkdir certs
root@debianlab:/etc/haproxy# cd certs/
root@debianlab:/etc/haproxy/certs# mkcert localhost

Created a new certificate valid for the following names 📜
 - "localhost"

The certificate is at "./localhost.pem" and the key at "./localhost-key.pem" ✅

It will expire on 12 May 2024 🗓

root@debianlab:/etc/haproxy/certs# ls -l
total 8
-rw------- 1 root root 1704 Feb 12 10:49 localhost-key.pem
-rw-r--r-- 1 root root 1452 Feb 12 10:49 localhost.pem
root@debianlab:/etc/haproxy/certs#
root@debianlab:/etc/haproxy/certs# cat localhost-key.pem localhost.pem > localhost-haproxy.pem
root@debianlab:/etc/haproxy/certs# rm -f localhost-key.pem localhost.pem
root@debianlab:/etc/haproxy/certs# chmod 640 localhost-haproxy.pem
root@debianlab:/etc/haproxy/certs# ls -l
total 4
-rw-r----- 1 root root 3156 Feb 12 10:50 localhost-haproxy.pem
root@debianlab:/etc/haproxy/certs#

If you want to use openssl instead of mkcert, you can generate a self-signed certificate yourself.
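
A rough sketch of the openssl equivalent (the -addext flag needs OpenSSL 1.1.1 or newer; curl -k will still be needed since the certificate is self-signed):

cd /etc/haproxy/certs
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=localhost" -addext "subjectAltName=DNS:localhost" \
    -keyout localhost-key.pem -out localhost.pem
cat localhost-key.pem localhost.pem > localhost-haproxy.pem
chmod 640 localhost-haproxy.pem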

Make the following change in the frontend of /etc/haproxy/haproxy.cfg:

frontend site1-site2
    bind *:443 ssl crt /etc/haproxy/certs/localhost-haproxy.pem force-tlsv13
    mode http

    acl p_ext_txt path_end -i .txt
    acl p_folder_textfiles path_beg -i /textfiles/
    http-request set-path /textfiles/%[path] if !p_folder_textfiles p_ext_txt

    acl site-1 hdr(host) -i site-1.com www.site-1.com
    acl site-2 hdr(host) -i site-2.com www.site-2.com
    use_backend site1 if site-1
    use_backend site2 if site-2

Now HAProxy will only listen on port 443. Make sure you specified the correct path for the certificate. Reload HAProxy and test the configuration:

root@debianlab:~# curl -k site-1.com
curl: (7) Failed to connect to site-1.com port 80: Connection refused
root@debianlab:~# curl -k site-2.com
curl: (7) Failed to connect to site-2.com port 80: Connection refused
root@debianlab:~#
root@debianlab:~# curl -k https://site-1.com/test.txt
SITE1 - WEB1
root@debianlab:~#
root@debianlab:~# curl -k https://site-2.com/test.txt
SITE2 - WEB1
root@debianlab:~#

Connection is refused on port 80. HTTPS is working. This is what we wanted.
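
To confirm that only TLS 1.3 is negotiated (because of force-tlsv13), you can inspect the handshake with openssl s_client; something along these lines should report TLSv1.3:

echo | openssl s_client -connect localhost:443 2>/dev/null | grep -E 'Protocol|Cipher'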

Accept HTTP Traffic and Redirect it to HTTPS #

To achieve this, HAProxy needs to listen on port 80 and then redirect the traffic over HTTPS. Make the following two changes (bind on port 80 and add the redirect scheme rule):

frontend site1-site2
    mode http
    bind *:80
    bind *:443 ssl crt /etc/haproxy/certs/localhost-haproxy.pem force-tlsv13
    http-request redirect scheme https unless { ssl_fc }

    acl p_ext_txt path_end -i .txt
    acl p_folder_textfiles path_beg -i /textfiles/
    http-request set-path /textfiles/%[path] if !p_folder_textfiles p_ext_txt

    acl site-1 hdr(host) -i site-1.com www.site-1.com
    acl site-2 hdr(host) -i site-2.com www.site-2.com
    use_backend site1 if site-1
    use_backend site2 if site-2

Note: The spaces between the curly braces and ssl_fc are mandatory.

Reload HAProxy and test both HTTP and HTTPS connections:

root@debianlab:~# curl -kL http://www.site-1.com/test.txt
SITE1 - WEB1
root@debianlab:~# curl -kL http://www.site-2.com/test.txt
SITE2 - WEB1
root@debianlab:~#
root@debianlab:~# curl -k https://www.site-1.com/test.txt
SITE1 - WEB2
root@debianlab:~# curl -k https://www.site-2.com/test.txt
SITE2 - WEB2
root@debianlab:~#

Everything works fine. If a client is using HTTP, the request will be redirected over HTTPS.
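
You can also see the redirect itself by asking curl for the response headers only; it should return a 302 with a location header pointing at the HTTPS URL:

curl -sI http://www.site-1.com/test.txt | grep -iE 'HTTP/|location'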

Protect HTTP #

HAProxy can protect HTTP servers against attacks. It gives us the ability to set up dynamic defence mechanisms to stop or deny bad requests before they reach our backends. The two most common attacks are HTTP flood and Slowloris (https://en.wikipedia.org/wiki/Slowloris_(computer_security)).

Brief Intro of Stick Tables #

In HAProxy, there is an in-memory key-value store called a stick table. Its main purpose is to keep track of data such as IP addresses, request rates (HTTP and TCP), and so on.

A stick table doesn’t collect anything by default; we have to tell the frontend to use it first, and thereby send it the data we want to keep track of.

It can be defined in an existing frontend or backend, or in a dedicated one. Here’s an example of a dedicated backend called keep_track:

backend keep_track
    stick-table type ip size 1m expire 5m store http_req_rate(5m)

  • stick-table: Creates a stick table.
  • type: The type of key to track. In this case ip.
  • size: The maximum number of entries to store. 1m is one million entries.
  • expire: Entries expire 5 minutes after they were last updated.
  • store: The value(s) to store for each key. In this case the HTTP request rate.
  • http_req_rate(5m): How many requests an IP made over the last 5 minutes.

The stick table will store up to one million IP addresses and, for each one, how many requests it has made during the last 5 minutes. Entries expire 5 minutes after they were last updated. The IP is the key and the request rate is the value.

Now you can keep things in it by calling it in a frontend:

frontend main
    bind *:80
    http-request track-sc0 src table keep_track
    http-request deny deny_status 500 if { sc_http_req_rate(0) gt 100 }

We’re telling the frontend to send the source IP address of each request to the keep_track stick table. The table tracks IP addresses and their HTTP request rates, and nothing else.

sc stands for sticky counter. In the frontend we have track-sc0, which basically says: track this request in the stick table and make the entry available through sc0 (sticky counter 0). You can have multiple sticky counters.

We can then use sc0 to filter and deny traffic.

You can interpret the line sc_http_req_rate(0) gt 100 as follows: using sticky counter 0 (prefix the fetch with sc_ and pass the counter number as the argument), check the http_req_rate and determine whether its value exceeds 100 for the incoming IP. If this condition is met, take action accordingly. In this particular scenario, we send a deny response.

Deny Based on Request Rate #

Let’s add a backend called per_ip_rates to hold a stick-table:

## Per IP Rates Backend
backend per_ip_rates
    stick-table type ip size 1m expire 10m store http_req_rate(10s)

Now we can start tracking in the frontend and send a 429 Too Many Requests response if an IP has made more than 100 requests in the last 10 seconds (last two lines):

## Docker Simulation Frontend

frontend site1-site2
    mode http
    bind *:80
    bind *:443 ssl crt /etc/haproxy/certs/localhost-haproxy.pem force-tlsv13
    http-request redirect scheme https unless { ssl_fc }

    http-request track-sc0 src table per_ip_rates
    http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }

....REDACTED....

Sticky counters can be confusing at first. As a reminder, the 0 in sc_http_req_rate(0) gt 100 is the number of the sticky counter. From sc0, we want http_req_rate. More than one sticky counter can be used in a frontend to track multiple stick tables.
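
To see what the table is actually recording, you can dump it over the runtime API. The socket path below is an assumption (it’s truncated in the watch output later in this post), so adjust it to match the stats socket line in your haproxy.cfg. Each entry shows the client IP as the key together with its current http_req_rate:

echo "show table per_ip_rates" | nc -U /var/run/haproxy/admin.sock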

Deny Based on User Agent #

Here we’re sending a 500 Internal Server Error response if the client’s user-agent header contains the string curl:

http-request deny deny_status 500 if { req.hdr(user-agent) -i -m sub curl }

  • -i: Case-insensitive match.
  • -m sub: Use the substring matching method.

If -m sub is omitted, the header has to match the exact string curl. You can read more about matching methods in the HAProxy ACL documentation.
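
A quick way to check the rule from a client, assuming the deny line above has been added to the frontend: the default curl user-agent contains the string curl, so the first request should return 500, while overriding the user-agent should let the request through:

curl -sk -o /dev/null -w "%{http_code}\n" https://www.site-1.com/test.txt
curl -sk -A "Mozilla/5.0" -o /dev/null -w "%{http_code}\n" https://www.site-1.com/test.txt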

Deny a list of Networks or IPs #

This is fairly straightforward. Send a 503 Service Unavailable response if an IP or network (each on a new line) is found in the file /etc/haproxy/blocked.acl:

http-request deny deny_status 503 if { src -f /etc/haproxy/blocked.acl }

I’m gonna block an IP on my network:

root@debianlab:~# echo "192.168.100.117" > /etc/haproxy/blocked.acl
root@debianlab:~# cat !$
cat /etc/haproxy/blocked.acl
192.168.100.117

Verify the config file and restart HAProxy:

root@debianlab:~# haproxy -f /etc/haproxy/haproxy.cfg -c && systemctl restart haproxy
Configuration file is valid

Testing Request Rate #

Let’s generate 3 requests on site1 and view the stats (I allowed curl just for this example):

MacBook-Pro:~ kavish$ curl -kL https://www.site-1.com/test.txt
SITE1 - WEB1
MacBook-Pro:~ kavish$ curl -kL https://www.site-1.com/test.txt
SITE1 - WEB2
MacBook-Pro:~ kavish$ curl -kL https://www.site-1.com/test.txt
SITE1 - WEB3
MacBook-Pro:~ kavish$

---- HAPROXY RUNTIME API  ----

Every 2.0s: echo "show stat" | nc -U /var/run/haproxy/admin....  debianlab: Mon Feb 21 11:31:29 2022

# pxname      svname      scur  smax  slim    stot  bin  bout  rate  rate_lim  rate_max
site1-site2   FRONTEND    0     1     262120  6     282  1239  0     0         1
per_ip_rates  BACKEND     0     0     1       0     0    0     0               0
site1         site1-web1  0     1             1     94   217   0               1
site1         site1-web2  0     1             1     94   217   0               1
site1         site1-web3  0     1             1     94   217   0               1
site1         BACKEND     0     1     26212   3     282  651   0               1
site2         site2-web1  0     0             0     0    0     0               0
site2         site2-web2  0     0             0     0    0     0               0
site2         site2-web3  0     0             0     0    0     0               0
site2         BACKEND     0     0     26212   0     0    0     0               0
stats         FRONTEND    0     0     262120  0     0    0     0     0         0
stats         BACKEND     0     0     26212   0     0    0     0               0

As you can see, each of the three site1 webservers has served one request (rate_max of 1).

Now let’s generate a lot of traffic with vegeta to test the HTTP request rate limit (I restarted HAProxy to reset the stats):

MacBook-Pro:~ kavish$ echo "GET http://www.site-1.com" | vegeta attack -duration=20s -max-workers 100 > /dev/null

Viewing the stats on the HAProxy host (execute the first line in your terminal):

Every 2.0s: echo "show stat" | nc -U /var/run/haproxy/admin....  debianlab: Mon Feb 21 11:48:04 2022

# pxname      svname      scur  smax  slim    stot  bin     bout    rate  rate_lim  rate_max
site1-site2   FRONTEND    0     96    262121  47    117890  109000  0     0         29
per_ip_rates  BACKEND     0     0     1       0     0       0       0               0
site1         site1-web1  0     0             0     0       0       0               0
site1         site1-web2  0     0             0     0       0       0               0
site1         site1-web3  0     0             0     0       0       0               0
site1         BACKEND     0     0     26213   0     0       0       0               0
site2         site2-web1  0     0             0     0       0       0               0
site2         site2-web2  0     0             0     0       0       0               0
site2         site2-web3  0     0             0     0       0       0               0
site2         BACKEND     0     0     26213   0     0       0       0               0
stats         FRONTEND    0     0     262121  0     0       0       0     0         0
stats         BACKEND     0     0     26213   0     0       0       0               0

Nothing has reached the backend.

Testing Curl #

Test with curl:

MacBook-Pro:~ kavish$ curl -kL https://www.site-1.com/test.txt
<html><body><h1>500 Internal Server Error</h1>
An internal server error occurred.
</body></html>

Works as expected.

Testing Blocked IP #

For blocked IP:

MacBook-Pro:~ kavish$ wget --no-check-certificate -O - www.site-1.com/test.txt -Sq
  HTTP/1.1 302 Found
  content-length: 0
  location: https://www.site-1.com/test.txt
  cache-control: no-cache
  HTTP/1.0 503 Service Unavailable
  cache-control: no-cache
  content-type: text/html

Service unavailable as expected.

Resources #

https://www.haproxy.com/blog/application-layer-ddos-attack-protection-with-haproxy/
https://www.haproxy.com/blog/introduction-to-haproxy-acls/
https://www.haproxy.com/blog/introduction-to-haproxy-maps/
https://www.haproxy.com/blog/introduction-to-haproxy-stick-tables/
https://www.haproxy.com/blog/use-haproxy-response-policies-to-stop-threats/
https://www.haproxy.com/blog/dynamic-configuration-haproxy-runtime-api/
https://stackoverflow.com/questions/25267372/correct-way-to-detach-from-a-container-without-stopping-it

Protect SSH #

HAProxy can handle SSH connections in TCP mode. You can’t prevent all types of cyberattacks with HAProxy, but a lot of restrictions can be implemented.

Setup Container #

Pull the image:

docker pull linuxserver/openssh-server

Creating a container called dockerssh:

root@debianlab:~# docker run -d --name dockerssh -e PUID=1000 -e PGID=1000 -e PASSWORD_ACCESS=true -e USER_PASSWORD=dockerssh -e USER_NAME=sshclient -p 2223:2222 linuxserver/openssh-server
4e6bffc0ad9496b4bdf623dd211a3f5f8244c66c57ad3bd3a389175c227e26e0
root@debianlab:~#
root@debianlab:~# ss -ntpl4 | grep 2223
LISTEN 0      4096         0.0.0.0:2223      0.0.0.0:*    users:(("docker-proxy",pid=3586,fd=4))
root@debianlab:~#
root@debianlab:~# docker ps --format '{{.Names}}\t{{.Ports}}\t{{.Status}}' | grep ssh
dockerssh   0.0.0.0:2223->2222/tcp, :::2223->2222/tcp   Up 32 seconds

Username is sshclient and password is dockerssh.

Let’s log in to the server and see if everything works:

root@debianlab:~# ssh sshclient@localhost -p 2223
The authenticity of host '[localhost]:2223 ([::1]:2223)' can't be established.
ECDSA key fingerprint is SHA256:GrgntWa82J+blR0DskdOnXlSmJTWiW0u1vBsyF2RQJo.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '[localhost]:2223' (ECDSA) to the list of known hosts.
sshclient@localhost's password:
Welcome to OpenSSH Server

4e6bffc0ad94:~$
4e6bffc0ad94:~$ whoami
sshclient
4e6bffc0ad94:~$ echo $HOME
/config
4e6bffc0ad94:~$
4e6bffc0ad94:~$ ls /config
custom-cont-init.d  custom-services.d  logs  ssh_host_keys  sshd.pid
4e6bffc0ad94:~$

Looks good. Let’s try sending a file over scp and viewing its content over ssh:

root@debianlab:~/dockerssh# cat ssh-test.txt
DOCKER SSH
root@debianlab:~/dockerssh#
root@debianlab:~/dockerssh# scp -P 2223 ./ssh-test.txt sshclient@localhost:~/
sshclient@localhost's password:
ssh-test.txt                                                      100%   11     4.3KB/s   00:00
root@debianlab:~/dockerssh#
root@debianlab:~/dockerssh#
root@debianlab:~/dockerssh# ssh sshclient@localhost -p 2223 'cat /config/ssh-test.txt'
sshclient@localhost's password:
DOCKER SSH
root@debianlab:~/dockerssh#

Basic Front/Backend to check Connectivity #

Simple setup:

# Dockerssh Frontend
frontend dockerssh
    bind *:2222
    mode tcp
    option tcplog
    default_backend dockersshd

# Dockersshd Backend
backend dockersshd
    mode tcp
    server sshd-server1 127.0.0.1:2223 check

Restart HAProxy, and test the connection by pulling the ssh-test.txt on our docker host:

root@debianlab:~/dockerssh# scp -P 2222 sshclient@localhost:/config/ssh-test.txt from_docker_ssh.txt
The authenticity of host '[localhost]:2222 ([127.0.0.1]:2222)' can't be established.
ECDSA key fingerprint is SHA256:GrgntWa82J+blR0DskdOnXlSmJTWiW0u1vBsyF2RQJo.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '[localhost]:2222' (ECDSA) to the list of known hosts.
sshclient@localhost's password:
ssh-test.txt                                                      100%   11     2.0KB/s   00:00
root@debianlab:~/dockerssh#
root@debianlab:~/dockerssh# cat from_docker_ssh.txt
DOCKER SSH
root@debianlab:~/dockerssh#

Limit Connections per Client and Concurrent Sessions #

There is already some traffic on the SSH server:

Every 2.0s: echo "show stat" | nc -U /var/run/haproxy/admin....  debianlab: Tue Feb 22 18:12:12 2022

# pxname      svname        scur  smax  slim    stot  bin     bout    rate  rate_lim  rate_max
dockerssh     FRONTEND      0     19    262120  409   643568  640716  0     0         10
dockersshd    sshd-server1  0     19            409   643568  640716  0               10
dockersshd    BACKEND       0     19    26212   409   643568  640716  0               10
site1-site2   FRONTEND      0     0     262120  0     0       0       0     0         0
per_ip_rates  BACKEND       0     0     1       0     0       0       0               0
site1         site1-web1    0     0             0     0       0       0               0
site1         site1-web2    0     0             0     0       0       0               0
site1         site1-web3    0     0             0     0       0       0               0
site1         BACKEND       0     0     26212   0     0       0       0               0
site2         site2-web1    0     0             0     0       0       0               0
site2         site2-web2    0     0             0     0       0       0               0
site2         site2-web3    0     0             0     0       0       0               0
site2         BACKEND       0     0     26212   0     0       0       0               0
stats         FRONTEND      0     0     262120  0     0       0       0     0         0
stats         BACKEND       0     0     26212   0     0       0       0               0

I’m gonna create a new backend to hold a stick-table to track connections:

# Dockerssh Frontend
frontend dockerssh
    bind *:2222
    mode tcp
    option tcplog
    timeout client 1m
    tcp-request content track-sc0 src table ssh_per_ip_connections
    tcp-request content reject if { sc_conn_cur(0) gt 2 } || { sc_conn_rate(0) gt 10 }
    default_backend dockersshd

# Dockersshd Backend
backend dockersshd
    mode tcp
    server sshd-server1 127.0.0.1:2223 check

# Backend ssh_per_ip_connections
backend ssh_per_ip_connections
    stick-table type ip size 1m expire 1m store conn_cur,conn_rate(1m)

backend ssh_per_ip_connections:

  • conn_cur - Tracks the current number of SSH connections established, i.e. the number of active connections for each IP address.
  • conn_rate(1m) - Tracks the connection rate for each IP over a 1-minute period. It measures how many SSH connections are established per minute by each IP.

frontend dockerssh

  • tcp-request content track-sc0 src table ssh_per_ip_connections - Keep track of TCP connections for each source IP.

  • tcp-request content reject if { sc_conn_cur(0) gt 2 } || { sc_conn_rate(0) gt 10 } - If the number of current connections exceeds 2, or if more than 10 connections have been made within the 1-minute window, reject the connection.
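
As a smaller exercise before the flood test below, you can open a few SSH sessions at once; with the rules above, the third concurrent session should be rejected (a sketch, and timing-dependent; it assumes key-based auth is already set up, see the note after the flood test, so the sessions don’t block on password prompts):

for conn in `seq 1 3`; do ssh -p 2222 sshclient@localhost 'sleep 30' & done
wait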

Running tests #

Opening 1000 Connections:

root@debianlab:~/dockerssh# for conn in `seq 1 1000`; do bash -c 'scp -P 2222 sshclient@localhost:/config/ssh-test.txt . &'; done
kex_exchange_identification: Connection closed by remote host
Connection closed by 127.0.0.1 port 2222
kex_exchange_identification: Connection closed by remote host
Connection closed by 127.0.0.1 port 2222
kex_exchange_identification: Connection closed by remote host
Connection closed by 127.0.0.1 port 2222
kex_exchange_identification: Connection closed by remote host
Connection closed by 127.0.0.1 port 2222

---REDACTED---

Do make sure to copy your public key to the SSH server, or else you won’t see the Connection closed by remote host messages.
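
Copying the key can be done with ssh-copy-id, either through HAProxy on port 2222 or directly against the container on port 2223 (the password is dockerssh, as set when the container was created):

ssh-copy-id -p 2223 sshclient@localhost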

The stats:

Every 2.0s: echo "show stat" | nc -U /var/run/haproxy/admin.so...  debianlab: Tue Feb 22 19:10:50 2022

dockerssh               FRONTEND      0     4     262120  1000  11504  10767   0     0         30
dockersshd              sshd-server1  0     2             3     11376  10767   0               2
dockersshd              BACKEND       0     2     26212   3     11376  10767   0               2
ssh_per_ip_connections  BACKEND       0     0     1       0     0      0       0               0

1000 connections reached the frontend. If you look at the stats page on port 8050, you should see that over 900 connections were denied on the frontend. I don’t know how to get more details on denied requests via the runtime API. If I ever get the chance to look it up, I’ll update the post. There’s also a real-time monitoring tool called HATop.

Resources #

https://www.haproxy.com/blog/route-ssh-connections-with-haproxy/

Health Checks #

https://www.haproxy.com/blog/how-to-enable-health-checks-in-haproxy/ https://www.haproxy.com/blog/webinar-haproxy-skills-lab-health-checking-servers/

Varnish and HAProxy #

https://www.haproxy.com/blog/haproxy-varnish-and-the-single-hostname-website/