Nova ADC nodes use Backends to discover which servers to send traffic to once it has been processed. In its simplest form, a backend is one or more IP addresses that Nova load balances across. Backends can also be discovered in more advanced ways.
The Simple Backend, the default for most Nova clients, is a list of IP addresses and ports that the Nova ADC sends traffic to. For a web server, this would be a list of backends like so:

```
192.168.100.101:80
192.168.100.102:80
192.168.100.103:80
```
Simple Backends also support specifying weights and designating servers as primary or backup.
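To illustrate how weights influence traffic distribution, here is a minimal Python sketch of weighted selection over a backend list. The `BACKENDS` data and the selection logic are illustrative only, not Nova's actual implementation:

```python
import random

# Illustrative backend entries: (address, weight). A higher weight
# receives proportionally more of the traffic.
BACKENDS = [
    ("192.168.100.101:80", 3),
    ("192.168.100.102:80", 1),
    ("192.168.100.103:80", 1),
]

def pick_backend(backends):
    """Pick one backend at random, proportionally to its weight."""
    addresses = [addr for addr, _ in backends]
    weights = [w for _, w in backends]
    return random.choices(addresses, weights=weights, k=1)[0]

print(pick_backend(BACKENDS))
```

With the weights above, the first server would receive roughly three fifths of new connections over time.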
DNS backends allow you to use hostnames to resolve backend servers. The main difference between this backend and using a hostname in the Simple Backend is the lifetime of the result. In AWS, Azure, and other cloud environments the IP address behind a hostname may change, and this backend mode makes the ADC re-resolve the DNS name at every health check interval.
This allows for dynamic backends in your configuration. The other options from the Simple Backend still apply.
You will also configure resolvers for this backend, specifying which DNS servers to use to perform the lookups. Note that in cloud environments you must use the provider's internal resolvers for internal traffic routing to work.
You can configure the number of seconds to cache replies for, effectively setting a TTL on the resolved addresses.
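The re-resolve-with-TTL behaviour can be sketched in a few lines of Python. This is an illustrative model of the idea, not Nova's code: cached addresses are served until the configured number of seconds elapses, after which the name is looked up again.

```python
import socket
import time

# Illustrative TTL cache for DNS lookups: a hostname is re-resolved
# only after the configured number of seconds has elapsed, mirroring
# how a DNS backend re-checks names on an interval.
_cache = {}  # hostname -> (expires_at, [ip, ...])

def resolve(hostname, ttl_seconds=30):
    now = time.monotonic()
    entry = _cache.get(hostname)
    if entry and entry[0] > now:
        return entry[1]  # cache still fresh: serve stored addresses
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    ips = sorted({info[4][0] for info in infos})
    _cache[hostname] = (now + ttl_seconds, ips)
    return ips
```

A real implementation would also query the configured resolvers directly rather than the operating system's stub resolver.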
Cloud API backends vary by the cloud provider but the general idea is to use an API to discover which servers to send traffic to, as opposed to IP addresses or hostnames.
An example of this is in Amazon AWS. If you have autoscaling enabled for a set of servers they may scale up or scale down and the ADC will not know of the change in servers.
With Nova you can instead direct traffic to an AWS AMI ID. This means it will auto discover any servers and dynamically adjust your traffic as you scale.
| Provider | Discovery method |
| --- | --- |
| AWS EC2 | Based on AMI ID. |
| DigitalOcean | Based on attached Tags. |
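Conceptually, AMI-based discovery amounts to asking the cloud API for running instances launched from a given image and collecting their addresses. The sketch below parses a response shaped like the EC2 `DescribeInstances` output; the sample data is fabricated for illustration and the helper is not Nova's implementation:

```python
# Illustrative: extract backend addresses from a response shaped like
# the EC2 DescribeInstances API output, as a cloud API backend might.
def backends_from_response(response, port=80):
    backends = []
    for reservation in response.get("Reservations", []):
        for instance in reservation.get("Instances", []):
            if instance.get("State", {}).get("Name") != "running":
                continue  # skip stopped or terminating instances
            ip = instance.get("PrivateIpAddress")
            if ip:
                backends.append(f"{ip}:{port}")
    return backends

# Fabricated sample response for illustration.
sample = {
    "Reservations": [
        {"Instances": [
            {"State": {"Name": "running"}, "PrivateIpAddress": "10.0.1.10"},
            {"State": {"Name": "stopped"}, "PrivateIpAddress": "10.0.1.11"},
        ]}
    ]
}

print(backends_from_response(sample))  # ['10.0.1.10:80']
```

Because the API is re-queried periodically, instances added or removed by autoscaling appear and disappear from the backend list automatically.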
Service discovery typically uses the API of a platform such as Docker to discover the backends. We support SRV records, which are common on platforms like Kubernetes.
```
_service._proto.name. TTL class SRV priority weight port target
```
Where the fields are as follows:

- **service** – the symbolic name of the service (e.g. `http`).
- **proto** – the transport protocol, usually `tcp` or `udp`.
- **name** – the domain name the record is valid for.
- **TTL** – the standard DNS time to live.
- **class** – the DNS class field (`IN` in practice).
- **priority** – the priority of the target host; lower values are preferred.
- **weight** – the relative weight for records with the same priority.
- **port** – the TCP or UDP port the service runs on.
- **target** – the canonical hostname of the machine providing the service.
You can see an example of this below:
```
dig -t srv _http._tcp.red.domain.local

;; QUESTION SECTION:
;_http._tcp.red.domain.local.  IN  SRV

;; ANSWER SECTION:
_http._tcp.red.domain.local. 30 IN SRV 10 25 8080 3963643338366463.red.domain.local.
_http._tcp.red.domain.local. 30 IN SRV 10 25 8080 3366376637306635.red.domain.local.
_http._tcp.red.domain.local. 30 IN SRV 10 25 8080 3464316362303933.red.domain.local.
_http._tcp.red.domain.local. 30 IN SRV 10 25 8080 3963326437356461.red.domain.local.

;; ADDITIONAL SECTION:
3963643338366463.red.domain.local. 30 IN A 10.1.29.2
3366376637306635.red.domain.local. 30 IN A 10.1.29.3
3464316362303933.red.domain.local. 30 IN A 10.1.71.2
3963326437356461.red.domain.local. 30 IN A 10.1.71.3
```
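Each ANSWER SECTION line carries everything needed to build a backend entry: priority, weight, port, and target host. The following Python sketch parses one such line; the helper is illustrative, not part of Nova:

```python
def parse_srv(line):
    """Parse one SRV answer line from dig into a backend candidate.

    Expected field order:
      name TTL class SRV priority weight port target
    """
    name, ttl, klass, rtype, priority, weight, port, target = line.split()
    if rtype != "SRV":
        raise ValueError(f"not an SRV record: {line!r}")
    return {
        "priority": int(priority),
        "weight": int(weight),
        "port": int(port),
        "target": target.rstrip("."),
    }

record = parse_srv(
    "_http._tcp.red.domain.local. 30 IN SRV 10 25 8080 "
    "3963643338366463.red.domain.local."
)
print(record)
```

The target hostname is then resolved via the accompanying A records (the ADDITIONAL SECTION above) to get the backend's IP address.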
Using this system, Docker, Kubernetes, Consul, Rancher, and other platforms can automatically tell the ADC which systems are online for a Backend.