Load Balancers

Load Balancers are TCP, HTTP, or HTTPS endpoints that distribute incoming requests across multiple machines and automatically manage failover via monitoring.

We use the following terminology on our platform:

  • A load balancer listens to one or more ports on a specific IP address, balancing traffic on each port.
  • A port is attached to one load balancer and one pool, and forwards incoming traffic on a particular TCP port to members of the attached pool. The port can be configured with various balancing protocols (e.g. TCP, HTTP) and balancing modes (e.g. round-robin, source IP hash); the two modes are sketched in the example after this list.
  • A pool defines a set of members. Each member can be a VM or an external IP address.
  • A monitor periodically performs checks on members of a pool to verify that each member is functioning correctly. If a member fails to reply to a check, then the member is marked offline and is disabled until it comes back online.
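
To make the two balancing modes concrete, here is a minimal Python sketch of how a port might choose a pool member for each request. The member addresses, client IP, and hash function are illustrative assumptions, not the exact algorithm our load balancers use.

  import hashlib
  import itertools

  # Hypothetical pool members (internal VM addresses); a real pool can
  # contain VMs or external IPs.
  MEMBERS = ["172.20.0.2:80", "172.20.0.3:80"]

  # Round-robin: hand each incoming request to the next member in order.
  rr = itertools.cycle(MEMBERS)
  def pick_round_robin() -> str:
      return next(rr)

  # Source IP hash: hash the client IP so the same client always lands on
  # the same member (useful when sessions are stored on the backend).
  def pick_source_ip_hash(client_ip: str) -> str:
      digest = hashlib.sha256(client_ip.encode()).hexdigest()
      return MEMBERS[int(digest, 16) % len(MEMBERS)]

  print([pick_round_robin() for _ in range(4)])   # alternates between members
  print(pick_source_ip_hash("203.0.113.7"))       # deterministic per client
  print(pick_source_ip_hash("203.0.113.7"))       # same member as above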

Our load balancers support automatic management of SSL certificates for HTTPS endpoints using Let's Encrypt. You can also provide your own SSL certificate and key.

Pricing

The first two ports in each region are free for customers who spend at least $3/mo on other services. Each additional port is $2/mo, billed hourly.
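
As a rough illustration of the port pricing, here is a short Python sketch that assumes the $3/mo spend on other services is met and ignores hourly proration.

  FREE_PORTS = 2
  PORT_PRICE = 2.00  # USD per additional port per month

  def monthly_port_cost(ports_in_region: int) -> float:
      # The first two ports in the region are free; each extra port is $2/mo.
      return max(0, ports_in_region - FREE_PORTS) * PORT_PRICE

  print(monthly_port_cost(2))  # 0.0 -> covered by the two free ports
  print(monthly_port_cost(5))  # 6.0 -> three additional ports at $2/mo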

Bandwidth on the external IPs of load balancers is billed under our per-region pool system, where bandwidth in excess of the sum of your VM allocations is billed at $0.003/GB. Internal traffic between a load balancer and a VM is not billed. For example, if you have two VMs in Toronto, each with a 1 TB monthly traffic allocation, and neither VM uses any external network traffic, then you will not be charged for the first 2 TB on a load balancer in Toronto, since it is covered by your 2 TB allocation for Toronto.
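
The Toronto example works out as in the following Python sketch, which treats 1 TB as 1000 GB for simplicity; the usage figures are illustrative.

  OVERAGE_RATE = 0.003  # USD per GB beyond the pooled allocation

  def bandwidth_charge(allocations_gb, usage_gb):
      pooled = sum(allocations_gb)              # e.g. 1000 + 1000 = 2000 GB pool
      overage = max(0, sum(usage_gb) - pooled)  # only traffic beyond the pool is billed
      return overage * OVERAGE_RATE

  # 1.5 TB through the load balancer and no VM external traffic: no charge.
  print(bandwidth_charge([1000, 1000], [1500]))   # 0.0
  # 2.5 TB through the load balancer: 500 GB over the pool, billed $1.50.
  print(bandwidth_charge([1000, 1000], [2500]))   # 1.5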

Setting up a Load Balancer

Suppose you have two VMs with internal IPs 172.20.0.2 and 172.20.0.3, and you want to balance HTTPS traffic between them while also ensuring that traffic stops being sent to a VM if it fails. The VMs listen for HTTP connections on port 80.

First, go to Load Balancers and open the Pools tab. Create a new pool, and add two members to the pool, "172.20.0.2" and "172.20.0.3" (you can also use "172.20.0.2:80" to explicitly specify the port, but this is only needed if different pool members listen on different TCP ports).
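
Before adding the members, you may want to confirm that each VM actually answers HTTP on port 80. A quick Python check, run from another VM on the same internal network, might look like this (the addresses match the example above):

  import urllib.request

  MEMBERS = ["172.20.0.2", "172.20.0.3"]

  for member in MEMBERS:
      try:
          # Fetch the root page over plain HTTP on the backend port.
          with urllib.request.urlopen(f"http://{member}:80/", timeout=5) as resp:
              print(f"{member}: HTTP {resp.status}")
      except Exception as exc:
          print(f"{member}: unreachable ({exc})")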

Then, go back to Load Balancers and create a new load balancer. Under Floating IP, add a floating IP to your account, then select the load balancer and associate that floating IP with it. Configure your domain name with an A record pointing to the floating IP.
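
You can confirm that the A record has propagated with a quick lookup; the domain and floating IP in this sketch are placeholders for your own values.

  import socket

  DOMAIN = "www.example.com"       # your domain
  FLOATING_IP = "203.0.113.10"     # the floating IP associated with the balancer

  resolved = socket.gethostbyname(DOMAIN)
  print(f"{DOMAIN} resolves to {resolved}")
  if resolved == FLOATING_IP:
      print("A record matches the floating IP")
  else:
      print("A record does not match yet (DNS may still be propagating)")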

Next, select the load balancer and create a new HTTPS port with the pool you created earlier; use 443 as the frontend port and 80 as the backend port. Verify that Backend HTTPS Mode is set to HTTP and that Let's Encrypt is enabled, and enter your domain name.

At this point, the load balancer should be distributing incoming HTTPS requests across your two VMs!
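
To check, you can request the site over HTTPS a few times; the domain in this sketch is a placeholder for your own, and certificate verification exercises the Let's Encrypt certificate.

  import urllib.request

  URL = "https://www.example.com/"  # replace with your domain

  for i in range(4):
      # Each request goes through the load balancer's frontend port 443
      # and is forwarded to one of the backends on port 80.
      with urllib.request.urlopen(URL, timeout=10) as resp:
          print(f"request {i + 1}: HTTP {resp.status}")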

Finally, let's set up monitoring so that the balancer stops sending requests to failed VMs. Return to Load Balancers and open the Monitors tab. Create a new HTTP monitor with backend port 80 and path "/". Then, select your pool and associate the monitor with the pool.
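
Conceptually, the monitor behaves like the following sketch: it fetches "/" on backend port 80 of each member and takes failing members out of rotation. The timeout and the range of passing status codes here are illustrative assumptions, not our exact monitor settings.

  import urllib.request

  MEMBERS = ["172.20.0.2", "172.20.0.3"]

  def check_member(ip: str, port: int = 80, path: str = "/") -> bool:
      # A member passes if it answers the HTTP check without an error status.
      try:
          with urllib.request.urlopen(f"http://{ip}:{port}{path}", timeout=5) as resp:
              return 200 <= resp.status < 400
      except Exception:
          return False

  online = {ip: check_member(ip) for ip in MEMBERS}
  print(online)  # True = keep in rotation, False = disable until it recovers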