AWS Cloud Practitioner Series: AWS Load Balancers
The load balancers at AWS are highly scalable and useful. By default, they distribute requests round robin, and in a typical setup the load balancer performs a health check every 30 seconds to make sure the instances are alive. The other important thing they can do is terminate HTTPS for you. This means your developers and servers don't need to know anything about managing certificates or terminating TLS: the application can simply listen on port 80, and the load balancer forwards the request to that port after handling the HTTPS connection itself.
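As a sketch of what that wiring looks like with boto3 (the names, VPC ID, and ARNs below are placeholders, not real resources):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Target group for the backend instances, which listen on plain HTTP port 80.
tg = elbv2.create_target_group(
    Name="web-tg",                    # hypothetical name
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",    # placeholder VPC ID
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# HTTPS listener on the load balancer: TLS terminates here using an ACM
# certificate, and the decrypted traffic is forwarded to the port-80 targets.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:...",     # placeholder ALB ARN
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:..."}],   # placeholder cert ARN
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```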
Application Load Balancers operate at the L7 networking layer, meaning they understand HTTP. They have a lot of nice features, so for the majority of people building a website or API, the Application Load Balancer is the right choice. If you are serving TCP/UDP traffic or just want an L4 (transport layer) load balancer, choose the Network Load Balancer. If you have a special need for network analytics, or need to integrate a third-party virtual appliance (a firewall or intrusion detection system), then the Gateway Load Balancer may be for you.
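The type is just a parameter at creation time; a minimal sketch (the name and subnet IDs are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Type is "application", "network", or "gateway" -- pick per the guidance above.
elbv2.create_load_balancer(
    Name="my-alb",                               # hypothetical name
    Type="application",
    Scheme="internet-facing",
    Subnets=["subnet-aaa111", "subnet-bbb222"],  # placeholder subnet IDs
)
```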
Here are some AWS load balancer tips
For the majority of server types (front-end services like Next.js, Java services, and Node.js) it makes the most sense to choose the Application Load Balancer. You can still reach hundreds of thousands of TPS. You can take advantage of path-based routing to different target groups, as sketched below. Also, if you choose this and plan for high throughput, I recommend enabling all Availability Zones, since you get a load balancer node per zone and therefore more capacity in total.
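Path-based routing is just a listener rule; here is a sketch (the listener and target group ARNs are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Route /api/* to a dedicated target group; everything else falls through to
# the listener's default action.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:...",  # placeholder listener ARN
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:...",  # placeholder
    }],
)
```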
For ultra low-latency API servers, I suggest choosing the Network Load Balancer. The gains in TPS and latency can be substantial, especially if you are using Rust. By the way, an NLB can be configured to terminate TLS, which makes your application code simpler and safer.
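A sketch of a TLS-terminating NLB listener (all ARNs are placeholders; the target group behind it would use plain TCP):

```python
import boto3

elbv2 = boto3.client("elbv2")

# TLS listener on the NLB: the NLB terminates TLS with an ACM certificate and
# forwards unencrypted TCP to the backend target group.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:...",    # placeholder NLB ARN
    Protocol="TLS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:..."}],  # placeholder cert ARN
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:...",  # placeholder
    }],
)
```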
Having a working health check is very important. It helps your team confirm a deployment succeeded by visually checking the metrics, and it lets the load balancer automatically stop routing requests to a failing server (it marks the instance 'unhealthy' and takes it out of rotation).
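Health checks are configured on the target group; a sketch with hypothetical values (the /health path is an assumption about your service):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Target group whose health check hits GET /health every 30 seconds. After 2
# consecutive failures the load balancer marks the target unhealthy and stops
# routing to it; after 2 consecutive successes it is healthy again.
elbv2.create_target_group(
    Name="api-tg",                     # hypothetical name
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",     # placeholder VPC ID
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/health",         # assumed health endpoint
    HealthCheckIntervalSeconds=30,
    HealthyThresholdCount=2,
    UnhealthyThresholdCount=2,
)
```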
If it is an internal API, make sure it is not public-facing, for security reasons. You can route traffic to an internal load balancer in a private VPC quite easily: create a VPC endpoint service (AWS PrivateLink) in front of it, then create a VPC endpoint in each caller's VPC.
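A sketch of that PrivateLink wiring (all IDs and ARNs are placeholders; note that endpoint services sit in front of Network or Gateway Load Balancers):

```python
import boto3

ec2 = boto3.client("ec2")

# Provider side: expose the internal NLB as a VPC endpoint service.
svc = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=["arn:aws:elasticloadbalancing:..."],  # placeholder NLB ARN
    AcceptanceRequired=False,
)
service_name = svc["ServiceConfiguration"]["ServiceName"]

# Consumer side: create an interface VPC endpoint in the caller's VPC.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-caller123",              # placeholder caller VPC ID
    ServiceName=service_name,
    SubnetIds=["subnet-ccc333"],        # placeholder subnet ID
    SecurityGroupIds=["sg-ddd444"],     # placeholder security group ID
)
```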
Gateway Load Balancers are an advanced cloud security solution. I suggest that highly sensitive applications, for which a web application firewall will not suffice, consider this option: the GWLB transparently routes traffic through a fleet of inspection appliances.
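A sketch of the GWLB side (GENEVE on port 6081 is the protocol Gateway Load Balancers use to hand traffic to appliances; all names and IDs are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Gateway Load Balancer for routing traffic through an appliance fleet.
elbv2.create_load_balancer(
    Name="inspection-gwlb",            # hypothetical name
    Type="gateway",
    Subnets=["subnet-eee555"],         # placeholder subnet ID
)

# Target group of appliance instances (firewall / IDS) reached over GENEVE.
elbv2.create_target_group(
    Name="appliance-tg",               # hypothetical name
    Protocol="GENEVE",
    Port=6081,
    VpcId="vpc-0123456789abcdef0",     # placeholder VPC ID
    TargetType="instance",
)
```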
If you enjoyed this article, please follow and share with others!