• The BIGIP F5 Alternative using HAProxy and keepalived — Part 1

    I come from a strong BIGIP F5 background and wanted to explore alternatives to their LTM product line. BIGIP F5 LTMs are their High-Availability and Load-Balancing network products, see here. They are primarily used as a means to mitigate infrastructure failure across server clusters. This is done by use of a floating IP address that is shared between two independent devices, in this case LTMs. One LTM is always active and responds to requests for this floating IP from client devices. In the event of a device failure, the secondary LTM will sense this via a variety of means and take over as the active LTM. This, essentially, is how High-Availability, or failover, is maintained from an infrastructure connectivity perspective. The second piece to these devices is their load-balancing functionality. Load-balancing has many forms; in this case we are talking about network service load balancing (pretty much layer 4 and above). This allows more intelligence in the distribution of requests to a server farm or cluster.

    Now, as I stated previously, I was looking into alternative solutions and came across a piece of GNU free software called keepalived, which seemed to do exactly what I needed. Remember, there are two pieces I wanted to fulfill as an alternative solution to the LTM: it has to be able to maintain network failover (seamlessly) and provide load-balancing for service endpoints. Also, surprisingly, many of the configuration statements in keepalived.conf look very similar to the F5 LTM bigip.conf file.
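
    To give a taste of that similarity, here is roughly what the floating-IP piece looks like in a minimal keepalived.conf VRRP stanza (the interface name, password, and addresses below are placeholders I made up, not a real deployment):

        vrrp_instance VI_1 {
            # the peer device runs the same stanza with state BACKUP and a lower priority
            state MASTER
            interface eth0
            virtual_router_id 51
            priority 150
            advert_int 1
            authentication {
                auth_type PASS
                auth_pass s3cr3t
            }
            virtual_ipaddress {
                # the floating IP that clients actually target
                172.16.0.2/24
            }
        }
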
    [Read More…]

  • HTTP Load Balancing with HAProxy 1.4

    I’ve posted a few articles on load balancing with the use of BIGIP F5 hardware appliances. However, there are also a few alternatives available, some even free! HAProxy is a popular load balancing application that has a robust collection of features.
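
    To give a flavor of the kind of configuration involved (the addresses and server names below are made up for illustration, not taken from any real setup), a bare-bones HTTP load-balancing configuration in HAProxy 1.4 syntax might look roughly like this:

        global
            daemon
            maxconn 2048

        defaults
            mode http                      # layer 7 (HTTP) load balancing
            timeout connect 5s
            timeout client  30s
            timeout server  30s

        frontend http_in
            bind *:80                      # where clients connect
            default_backend web_farm

        backend web_farm
            balance roundrobin             # simple round-robin distribution
            option httpchk GET /           # HTTP health check against each server
            server web1 192.168.0.10:80 check
            server web2 192.168.0.11:80 check
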

    HAProxy is “The Reliable, High Performance TCP/HTTP Load Balancer”, taken right from the title of their web page. It has many different uses; for this article I am going to focus on its HTTP load balancing functionality. Our scenario is as follows:

    [Read More…]

  • What is BIGIP F5 (LTM and GTM)?

    I’ve worked with BIGIP F5 hardware for over two years now, and have become quite familiar with the great features it provides. For those who are unfamiliar with BIGIP F5, it is a network hardware company specializing in load balancing at both the local and global layers of an enterprise’s network infrastructure. Their website is located here.

    The BIGIP F5 product family consists of many different components; however, the two major ones most network engineers are familiar with are the Local Traffic Manager (LTM) and the Global Traffic Manager (GTM). Both are rack-mountable network load balancers.

    The GTM

    is used as an “Intelligent DNS” server, handling DNS resolution based on intelligent monitors and F5’s own iQuery protocol, which is used to communicate with other BIGIP F5 devices. It sits at the top level of a data center, especially in multi-data-center infrastructures, deciding where to resolve requesting traffic to. The GTM also includes other advanced features, such as DNSSEC and intelligent resolution based on many different algorithms.
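
    To make the “Intelligent DNS” idea a bit more concrete, here is a rough illustration in plain DNS zone-file notation (this is not GTM configuration syntax, and the names and VIPs are invented) of what the GTM is effectively choosing between when it answers a query for a wide IP:

        ; www.example.com is a wide IP fronted by the GTM listener.
        ; Based on monitor status and the configured algorithm, the GTM answers
        ; each query with the VIP of whichever site it judges best at that moment:
        www.example.com.   30  IN  A   172.16.1.10   ; VIP on the LTM pair in Data Center 1
        www.example.com.   30  IN  A   172.16.2.10   ; VIP on the LTM pair in Data Center 2
        ; the low 30-second TTL keeps clients re-resolving, so a failover takes effect quickly
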

    The LTM

    is a full reverse proxy, handling connections from clients. The F5 LTM uses Virtual Servers (VSs) and Virtual IPs (VIPs) to configure a load balancing setup for a service. LTMs can handle load balancing in two ways: the first is an nPath configuration, and the second is a Secure Network Address Translation (SNAT) method.

    With nPath, the F5 does the job of load balancing by intelligently deciding which server endpoint to pass traffic to, but the return path bypasses the F5 entirely. For example, you have two servers, 192.168.0.10 and 192.168.0.11, and an F5 listening for this particular setup on VIP 172.16.0.2. When traffic from a client destined for 172.16.0.2 hits the F5, the F5 intelligently passes it to either 192.168.0.10 or 192.168.0.11. The tricky part is that when the traffic leaves the F5 for either server, the packet’s destination address is still the VIP, in this example 172.16.0.2. Therefore each server must have a loopback address configured that matches the VIP so it will accept the traffic. The server then does not send its response back through the F5 at all; it replies straight to the client through its gateway of last resort.
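
    On a Linux server endpoint, that loopback piece is commonly set up along these lines (this is the usual direct-server-return recipe, not F5-prescribed commands, and the addresses are carried over from the example above):

        # on each real server (192.168.0.10 and 192.168.0.11):
        # put the VIP on a loopback alias so the server accepts packets addressed to it
        ip addr add 172.16.0.2/32 dev lo label lo:0

        # keep the server from answering ARP for the VIP, so the F5 stays its owner on the wire
        sysctl -w net.ipv4.conf.all.arp_ignore=1
        sysctl -w net.ipv4.conf.all.arp_announce=2
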

    Secure Network Address Translation (SNAT) is the more common BIGIP F5 implementation. In this scenario the F5 is configured essentially as a reverse-proxy server, think many-to-one. Clients target Virtual IPs that sit in front of a pool of endpoint servers, but the client never sees behind the VIP; from its perspective, the VIP is the server it is requesting. For example, you have a VIP 192.168.0.55 that routes to an F5 listening for requests destined for that IP. The F5 has a configuration in place that knows four server endpoints can serve requests destined for that IP: 10.0.0.5, 10.0.0.6, 10.0.0.7, and 10.0.0.8. When a request comes from a client to the VIP, the F5 acts as the server for the client. On the back end, the F5 acts as a client, sending an identical request to one of the four endpoint servers. The response is then proxied back from the F5 to the “real” client.
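
    Roughly, in bigip.conf terms, that setup might look something like the sketch below. I am writing this from memory in the newer stanza style, so treat the exact property names as approximate and the object names as invented:

        ltm pool web_pool {
            members {
                10.0.0.5:80 { }
                10.0.0.6:80 { }
                10.0.0.7:80 { }
                10.0.0.8:80 { }
            }
            monitor http
        }

        ltm virtual vs_web {
            destination 192.168.0.55:80
            ip-protocol tcp
            pool web_pool
            profiles {
                http { }
                tcp { }
            }
            source-address-translation {
                type automap
            }
        }

    The source-address-translation automap piece is what gives the many-to-one behavior described above: connections to the back-end servers are sourced from the F5 itself, so the responses always come back through it to be proxied to the real client.
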

    Tying them together.

    GTMs and LTMs used in conjunction with each other provide a robust, resilient, and network-optimized environment. This is especially true when dealing with multiple Data Centers or Service Sites. The GTMs handle the initial network path decision by resolving clients to the best route option. The LTMs handle the load optimization of the service by logically proxying the endpoint servers.

    Below is a diagram of a typical GTM/LTM setup. In this example there are two Data Centers; the GTM sits at the front of the Data Centers and hands out the VIP that will handle the client’s request. The LTMs are localized in each Data Center (they don’t have to be :-p) in a High-Availability pair. The LTMs will reverse proxy the clients’ connections to the actual server endpoints.

     
    [Diagram: two Data Centers fronted by GTMs, with an LTM High-Availability pair in each Data Center reverse proxying to the server endpoints]