If you choose to bring your own external load balancer for a highly available kURL install instead of using EKCO's internal load balancing capability, the load balancer must be a Layer 4 TCP load balancer that supports hairpinning. This topic provides steps for a working configuration in AWS, Azure, and GCP.
NOTE: We recommend that you begin the kURL install with a single primary behind the load balancer. Although this is a strict requirement only for GCP, it reduces the chance that a kURL install fails because the load balancer forwards traffic to an instance that isn't yet initialized. Add the additional primaries to the backend target groups or pools after you have joined them to the kURL cluster.
Internal NLBs in AWS do not support hairpinning or loopback, so to allow backend instances to access themselves through an internal load balancer, you must register the instances in your target group by IP address rather than by instance ID. If you are using an internet-facing load balancer, both Instance and IP target types work.
Create a target group for port 6443:

- Set the target type to IP Addresses if using an internal load balancer; otherwise, you can optionally set the target type to Instances.
- Set the protocol to TCP and the port to 6443.
- Set the health check protocol to TCP and the health check port to Traffic Port in the Advanced health check settings drop-down.
- All other settings can be left at their defaults or set per your organization's requirements.
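If you prefer the AWS CLI to the console, the following is a rough sketch of the same step. The target group name kurl-apiserver and the <vpc-id>, <target-group-arn>, and <primary-private-ip> placeholders are illustrative assumptions; substitute your own values.

```bash
# Create a TCP target group for the Kubernetes API server
# (the health check port defaults to the traffic port)
aws elbv2 create-target-group \
  --name kurl-apiserver \
  --protocol TCP \
  --port 6443 \
  --vpc-id <vpc-id> \
  --target-type ip \
  --health-check-protocol TCP

# Register the first primary by its private IP address
aws elbv2 register-targets \
  --target-group-arn <target-group-arn> \
  --targets Id=<primary-private-ip>
```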
Create an AWS Network Load Balancer:

- Set the IP address type to IPv4.
- Select your VPC and subnets in the Network mapping section.
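A minimal CLI sketch of this step, assuming an internal NLB named kurl-apiserver-nlb and a single subnet; adjust the scheme and subnets for your environment.

```bash
aws elbv2 create-load-balancer \
  --name kurl-apiserver-nlb \
  --type network \
  --scheme internal \
  --ip-address-type ipv4 \
  --subnets <subnet-id>
```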
Add a Listener for 6443:

- Set the protocol to TCP and the port to 6443.
- Forward the listener to the port 6443 target group that includes your kURL instance IPs.
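The equivalent listener with the AWS CLI might look like the following, using the ARNs returned by the previous commands:

```bash
aws elbv2 create-listener \
  --load-balancer-arn <load-balancer-arn> \
  --protocol TCP \
  --port 6443 \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>
```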
NOTE: AWS Network Load Balancers do not have security groups associated with them. Ingress access to the IP addresses in your target group is defined by the security groups on the instances themselves. Ensure that the security group on your kURL EC2 instances allows traffic from the IP address of the AWS NLB. If the load balancer is public, you also need to allow the IP addresses of clients.
Internal Load Balancers in Azure do not support hairpinning. There are workarounds, such as adding a second NIC with static routes or creating a loopback interface, but those are out of scope for this topic.
Create an Azure Load Balancer:

- Set the type to Public.
- Set the SKU to Standard.
- Set the tier to Regional.
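A rough Azure CLI equivalent, assuming a Standard SKU public IP already exists. The resource names (kurl-apiserver-lb, kurl-frontend, kurl-primaries) and the <resource-group> and <public-ip-name> placeholders are illustrative only.

```bash
az network lb create \
  --resource-group <resource-group> \
  --name kurl-apiserver-lb \
  --sku Standard \
  --public-ip-address <public-ip-name> \
  --frontend-ip-name kurl-frontend \
  --backend-pool-name kurl-primaries
```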
Create a Backend Pool:

- Set the backend pool configuration to NIC.
- Set the IP version to IPv4.
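If the backend pool wasn't created along with the load balancer, a sketch like the following creates it and attaches the first primary's NIC; the NIC and ipconfig names are assumptions.

```bash
az network lb address-pool create \
  --resource-group <resource-group> \
  --lb-name kurl-apiserver-lb \
  --name kurl-primaries

# Add the first primary's NIC to the backend pool
az network nic ip-config address-pool add \
  --resource-group <resource-group> \
  --lb-name kurl-apiserver-lb \
  --address-pool kurl-primaries \
  --nic-name <primary-nic-name> \
  --ip-config-name ipconfig1
```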
Create a Health Probe:

- Set the protocol to TCP.
- Set the port to 6443.
- Set the Interval and Unhealthy Threshold per your requirements.
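A minimal probe with the Azure CLI, leaving the interval and threshold at their defaults:

```bash
az network lb probe create \
  --resource-group <resource-group> \
  --lb-name kurl-apiserver-lb \
  --name kube-apiserver \
  --protocol Tcp \
  --port 6443
```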
Create a Load Balancer Rule:

- Set the IP version to IPv4.
- Set the protocol to TCP.
- Set the port to 6443.
- Set the backend port to 6443.
- Set Outbound source network address translation to Outbound and inbound use the same IP.
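The corresponding rule with the Azure CLI might look like the following; the SNAT option above is shown here only as a portal setting and is omitted from the sketch.

```bash
az network lb rule create \
  --resource-group <resource-group> \
  --lb-name kurl-apiserver-lb \
  --name kube-apiserver \
  --protocol Tcp \
  --frontend-port 6443 \
  --backend-port 6443 \
  --frontend-ip-name kurl-frontend \
  --backend-pool-name kurl-primaries \
  --probe-name kube-apiserver
```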
NOTE: Ingress access for Azure Load Balancers is defined by the inbound port rules on the VMs in the backend pool. Ensure that the inbound port rules in your VMs' networking settings allow traffic to destination port 6443.
Hairpinning is supported by default when using a machine image provided by GCP. VM instances are automatically configured to route traffic destined for the load balancer to the loopback address of the VM where the traffic originated.
Create an Unmanaged Instance Group and add your first kURL primary to it.
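A sketch of this step with the gcloud CLI; the group name kurl-primaries and the <zone> and instance-name placeholders are assumptions.

```bash
gcloud compute instance-groups unmanaged create kurl-primaries --zone <zone>

gcloud compute instance-groups unmanaged add-instances kurl-primaries \
  --zone <zone> \
  --instances <first-primary-instance-name>
```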
Create a TCP Load Balancer:

- For Internet facing or internal only, set From Internet to my VMs.
- For Multiple regions or single region, set Single region only.
- For Backend type, set Backend Service.
Create a Backend Configuration:

- Select your Unmanaged Instance Group as the backend.
- Under the health check drop-down, select Create another health check, and set the protocol to TCP and the port to 6443.
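With the gcloud CLI, the backend configuration corresponds roughly to a regional TCP health check plus a backend service; the resource names and the <region>/<zone> placeholders are assumptions.

```bash
# TCP health check on the API server port
gcloud compute health-checks create tcp kube-apiserver \
  --region <region> \
  --port 6443

# Backend service for the external TCP load balancer
gcloud compute backend-services create kurl-apiserver \
  --region <region> \
  --load-balancing-scheme EXTERNAL \
  --protocol TCP \
  --health-checks kube-apiserver \
  --health-checks-region <region>

# Attach the instance group that contains the first primary
gcloud compute backend-services add-backend kurl-apiserver \
  --region <region> \
  --instance-group kurl-primaries \
  --instance-group-zone <zone>
```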
Create a Frontend Configuration:

- Set the Network Service Tier per your requirements.
- Set the port to 6443.
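The frontend configuration corresponds to a forwarding rule. A minimal sketch, assuming the backend service created above:

```bash
gcloud compute forwarding-rules create kurl-apiserver \
  --region <region> \
  --load-balancing-scheme EXTERNAL \
  --ip-protocol TCP \
  --ports 6443 \
  --backend-service kurl-apiserver
```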
NOTE: After the initial install is done, you need to join any additional primaries to the kURL cluster before adding them to the Instance Group in use by your load balancer to ensure the join script runs successfully. If you want a multi-zonal deployment, you can create additional Unmanaged Instance Groups and put your other kURL primaries in them.
Due to GCP's workaround for hairpinning, traffic may blackhole when attempting to access NodePorts through the load balancer. This is because GCP automatically routes traffic destined for the load balancer to the loopback address of the VM the request was forwarded to, and kube-proxy does not listen on localhost. To work around this and successfully access NodePorts through the load balancer, you need to create an alias for the primary network interface that resolves to the load balancer's IP address, for example ifconfig eth0:0 <lb-ip> netmask 255.255.255.255 up, on each node in the kURL cluster. To persist these changes, you need to add them to your network interfaces configuration file.
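How you persist the alias depends on the distribution's network configuration. A minimal sketch, assuming a distribution that uses ifupdown and /etc/network/interfaces (adjust for netplan, NetworkManager, or similar):

```bash
# Append an eth0:0 alias that resolves to the load balancer's IP address
cat <<'EOF' | sudo tee -a /etc/network/interfaces
auto eth0:0
iface eth0:0 inet static
    address <lb-ip>
    netmask 255.255.255.255
EOF
```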