Split horizon DNS with DJB's tinyDNS

Tagged
djbdns dns

What is ‘split horizon’ DNS?

Split horizon DNS means that one group of hosts receives different DNS answers from another group. Typical uses include hiding internal network topology from the public internet and handing internal clients private addresses for services they can reach directly.

My motivation for setting this up is to keep a number of hosts, such as database servers, purely internal, yet still manage them centrally through DNS so that moving things around is a little easier.

An example network

Imagine the following hypothetical setup:

Three physical servers: a.example.com, b.example.com and c.example.com

a.example.com - private IP address 192.168.1.1 - public IP address 1.1.1.1 - nameserver for example.com and example.net - a web server

b.example.com - private IP address 192.168.1.2 - public IP address 1.1.1.2 - nameserver for example.com and example.net - a mail server

c.example.com - private IP address 192.168.1.3 - public IP address 1.1.1.3 - internal database server for example.com

The sections below describe how to produce this setup.

tinyDNS’s method of separating internal and external DNS entries

The tinyDNS data file supports all the usual record types, and each entry carries the usual attributes such as the IP address and time-to-live. A typical A record entry looks like this:

=a.example.com:1.1.1.1:3600

This creates an A record (and a matching PTR record) giving 1.1.1.1 as the address of a.example.com, with a TTL of 1 hour (see the complete data file format).
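For comparison, here are a couple of other common line types (a sketch based on the tinydns-data format; the MX distance and the TTLs are left at their defaults):

.example.com:1.1.1.1:a
@example.com:1.1.1.2:a

The first line creates an SOA and an NS record for example.com pointing at a.ns.example.com, together with an A record for that name; the second creates an MX record for example.com pointing at a.mx.example.com, again with an accompanying A record.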

But there is one attribute (in version 1.04 and above) that can be used to create internal and external addresses (or any number of groups). The last attribute of any record entry in the data file specifies which group, or ‘location’, the record belongs to. The locations themselves are defined along with the source IP address prefix of the clients that should receive those records as answers. An example:

%in:192.168
%ex

This says that all queries coming from IP addresses beginning with 192.168 (our example network's internal address space) will be answered with records marked ‘in’, and all other requests will be answered with records marked ‘ex’. So we can have the following:

+db.example.com:1.1.1.3:::ex
+db.example.com:192.168.1.3:::in

This provides a different address for internally originating requests as opposed to external ones.
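Putting the pieces together for the example network, the data file might contain something like this (a sketch; the nameserver entries and other records from the setup above are assumed):

%in:192.168
%ex
=a.example.com:1.1.1.1:3600
=b.example.com:1.1.1.2:3600
+db.example.com:1.1.1.3:::ex
+db.example.com:192.168.1.3:::in

Records that carry no location, such as the two ‘=’ lines here, are served to every client; only the ‘db’ entries differ depending on where the query came from.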

See here for more details.

Why so many services?

Now some astute readers may be wondering why we need the loopback-listening tinyDNS instances at all, rather than simply having the dnscache instances talk to the public-facing tinyDNS instances directly. The reason is that the location entries in the tinyDNS data file (those beginning with ‘%’ above) only accept an IP prefix, not a range or a list of hosts. So when the private addresses you want to serve are part of a large network block to which other hosts also belong, an IP prefix is not useful.

This is the case with Linode where I have a number of VPSs which have non-consecutive private IPs which are in the same network block as other customers. I don’t want other people to be able to resolve my internal IP addresses.

The only way I could think of to get around this is to mark the internal addresses with a location whose prefix is localhost, and then create another tinyDNS instance listening on 127.0.0.1 to serve them. The dnscache instances then listen on the internal private addresses but use the ‘localhost’ tinyDNS service for queries about my own domains.
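Assuming the stock djbdns configuration tools, the layout on a.example.com might be set up something like this (the account names, directories and /service symlinks are just examples, not the exact commands from my setup):

tinydns-conf tinydns dnslog /etc/tinydns-public 1.1.1.1
tinydns-conf tinydns dnslog /etc/tinydns-internal 127.0.0.1
dnscache-conf dnscache dnslog /etc/dnscache-internal 192.168.1.1

echo 127.0.0.1 > /etc/dnscache-internal/root/servers/example.com
echo 127.0.0.1 > /etc/dnscache-internal/root/servers/example.net
touch /etc/dnscache-internal/root/ip/192.168

ln -s /etc/tinydns-public /etc/tinydns-internal /etc/dnscache-internal /service

The data file for the loopback instance then defines its internal location as %in:127.0.0.1, since that is the source address the dnscache queries arrive from.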