The BIGIP F5 Alternative using HAProxy and keepalived — Part 2

Okay, we’re back!! Welcome to Part #2. If you’ve read my last post in this high availability and load balancing series (Part #1), you understand the need for HAProxy to complete our setup. If you recall, I am looking for an alternative to F5’s BIG-IP LTM products. Those products provide both high-availability failover via a Floating IP shared between LTMs, and load balancing of requests to service endpoints. In the previous post we managed to tackle the former part and provide High Availability, but not the Load Balancing part.

To complete this alternative we now add HAProxy into our setup.

Let’s get started!!

Assuming you still have the following:

  1. 2 Test Web Servers, each with its own IP address
  2. 2 LoadBalancers which will run Keepalived, each with its own IP address
  3. 1 Floating IP that each LoadBalancer will share

We need to install HAProxy on each LoadBalancer:

  1. I had to add a backports apt source in order to grab the haproxy package. Put the appropriate line in your /etc/apt/sources.list (find your backports here)
  2. If installing on Debian, change the default init script settings in /etc/default/haproxy

    Change ENABLED=0 to ENABLED=1
  3. Start HAProxy. If you don’t receive any errors, fantastic; let’s move on to configuring it.
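The exact backports line depends on your Debian release and didn’t survive this page capture; as a rough sketch of the two files touched above (the repository URL and release name are assumptions, substitute your own):

```
# /etc/apt/sources.list -- hypothetical backports entry; use your release's line
deb http://httpredir.debian.org/debian wheezy-backports main

# /etc/default/haproxy -- allow the init script to start haproxy
ENABLED=1
```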

HAProxy Configuration

I’ve modified the default configuration to cater to our setup.

  1. Copy and Paste the following:

  2. OPTIONAL: If you would like to enable the HAProxy stats page, add the following to the bottom of your haproxy.cfg:

    Visit it by browsing to your HAProxy server at the stats URI. You should see something similar to this... notice both pool members are down, because I haven’t set up those servers’ IPs yet.
    haproxy stats page
  3. Test out HAProxy: browse to your load balancer’s address.
    basic haproxy working
    Success!!
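The configuration block itself was lost from this page capture. Below is a minimal sketch of a haproxy.cfg for this setup, including the optional stats section from step 2; every IP address, server name, port, and the stats URI are placeholders I’ve assumed, not values from the original post:

```
# /etc/haproxy/haproxy.cfg -- minimal sketch; all IPs and names are placeholders
global
    log 127.0.0.1 local0
    maxconn 4096
    user haproxy
    group haproxy
    daemon

defaults
    log     global
    mode    http
    option  httplog
    timeout connect 5s
    timeout client  50s
    timeout server  50s

frontend http_in
    bind *:80
    default_backend web_pool

backend web_pool
    balance roundrobin
    option httpchk GET /
    server web01 192.168.1.11:80 check   # hypothetical web server 1
    server web02 192.168.1.12:80 check   # hypothetical web server 2

# OPTIONAL: stats page (step 2 above), on its own port
listen stats
    bind *:8080
    stats enable
    stats uri /haproxy?stats
    stats refresh 10s
```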

Tying in keepalived

  1. Go ahead and wipe your keepalived.conf file; we’ll start from scratch
  2. Copy and paste the following into our new keepalived.conf file.
  3. Restart and check keepalived service

    NOTICE: We should see the Floating IP on the eth0 interface!!
  4. A quick test
    basic keepalived working
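The keepalived.conf contents were also lost in this capture. Here is a minimal sketch of what a MASTER-node config for this setup could look like; the floating IP, interface name, router ID, and priority are all assumed placeholders. The vrrp_script block checks that an haproxy process is alive and bumps the node’s priority while it is, so a dead HAProxy triggers a failover:

```
# /etc/keepalived/keepalived.conf -- MASTER node sketch; all values are placeholders

vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # exits 0 while an haproxy process exists
    interval 2                    # check every 2 seconds
    weight 2                      # add 2 to priority while the check passes
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0                # interface carrying the floating IP
    virtual_router_id 51
    priority 101                  # higher than the backup node
    advert_int 1
    virtual_ipaddress {
        192.168.1.100/24          # hypothetical floating IP
    }
    track_script {
        chk_haproxy
    }
}
```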

Copying our setup to another LoadBalancer

We now need to copy both the HAProxy configuration and the Keepalived configuration to the additional load balancer (LoadBalancer02). Keep in mind both load balancers have to share a broadcast domain, meaning no routing can be involved between the two for them to communicate.

  1. Repeat the above steps to get HAProxy and Keepalived installed
  2. Copy the configs
  3. Console into LoadBalancer02 and change the keepalived.conf file to make it the BACKUP (slave)

    Change the following to be:
  4. Start the services
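The exact lines to change in step 3 were lost from this capture. Sketching it: only two settings should differ from the master’s keepalived.conf on LoadBalancer02 (the priority value here is an assumed placeholder; it just has to be lower than the master’s):

```
# /etc/keepalived/keepalived.conf on LoadBalancer02
# Identical to the master's config except these two lines:
vrrp_instance VI_1 {
    state BACKUP     # the master uses: state MASTER
    priority 100     # must be lower than the master's priority
    ...
}
```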

Let’s test

For the following tests it is helpful to be tailing syslog, so you can see what is happening.

First, load balancing and distribution on service endpoint failure
  1. Console into either of your web servers, and shut the web service off
  2. In the haproxy logs you should see something like this
  3. Don’t forget to check, at least one of the web servers is still up and should respond
  4. Let’s complete the test by restarting the web service we just stopped
  5. In our syslogs we see
Next let’s test a network/connectivity failure
  1. Console into LoadBalancer01 and stop the keepalived service
  2. In our syslog we see:
  3. Don’t forget to check, and make sure the service is still up!
  4. Restart the service on LoadBalancer01
  5. Syslog shows LoadBalancer01 taking over as the master


We’ve successfully added the missing piece to our BIG-IP F5 LTM replacement/alternative. Using keepalived for high availability and HAProxy for load balancing, we’ve created an appealing alternative that does not require any client or service endpoint configuration!!

