When It Comes to DNS, Redundancy Is a Good Thing
July 25, 2016 | Featured article by Nate Lindstrom, VP of Solutions Engineering, NS1
Business used to be simpler. It used to be possible (in fact, it was the norm) for enterprises to run their Domain Name System (DNS) from just one data center. They placed their servers within its confines and went on their way without a care. After all, if their one data center went down, there was nothing left for DNS to point to anyway.
But that was then, this is now, and single data centers are no longer the norm.
Enterprises run multiple data centers, sometimes in multiple countries, not to mention cloud regions and highly distributed networks. Consequently, today’s DNS needs to be just as highly distributed as the content it drives. After all, what good is a Disaster Recovery site if you have no way to direct your users to it?
Next-generation DNS providers offer highly resilient networks with multiple anycast groups and hundreds of servers spread around the world. The hard reality, however, is that impairments, outages and massive Distributed Denial of Service (DDoS) attacks can and do happen. To truly bulletproof your distributed infrastructure against the scenario in which your users cannot resolve your domain, you might well consider hosting your DNS records with two providers.
However, this idea may fall under the “seemed like a good idea at the time” category.
Before today’s next-generation DNS solutions, you basically had three choices:
1. Run one DNS provider as the primary and the second as a replicated secondary (slave)
2. Run two DNS providers, both as primary, and (carefully) make your record changes in each
3. Run two DNS providers, both as primary, and code your own middleware application that is capable of understanding a requested DNS change and pushing that change to each provider’s unique API
Going with the first choice leaves you without the real user monitoring (RUM)-based telemetry, traffic management features and powerful geographic routing that some top-tier providers offer. Zone transfer (XFR) technology replicates only standard record data, so any provider-specific routing intelligence is lost in transit; you are condemned to using only the most basic, plain-vanilla DNS records.
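To make that concrete, here is a minimal sketch of what the secondary side of option 1 does, using the dnspython library (2.x assumed); the primary's transfer IP and the zone name are placeholders. It pulls a zone via AXFR exactly as a replicated secondary provider would, and note that everything that survives the transfer is plain record data.

```python
# Minimal sketch of option 1 with dnspython. The primary's transfer
# IP (192.0.2.1) and the zone name are placeholders.
import dns.query
import dns.zone

# Pull the zone from the primary via AXFR, as a secondary would.
# Only standard record types survive this transfer; any provider-specific
# traffic-management metadata does not.
zone = dns.zone.from_xfr(dns.query.xfr("192.0.2.1", "example.com"))

for name, node in zone.nodes.items():
    for rdataset in node.rdatasets:
        print(name, rdataset)
```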
Choosing the second option introduces the element of human error. If you don't painstakingly keep the two providers in perfect sync, you will end up with traffic routing problems that are shockingly difficult to troubleshoot.
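The usual mitigation is to spot-check the two providers for drift. Below is a hedged sketch of such a check, again using dnspython; the nameserver IPs and record name are placeholders, and a real audit would walk every record in the zone.

```python
# Spot-check that two "primary" providers answer identically for a record.
# Nameserver IPs and the hostname below are placeholders.
import dns.resolver

def answers(nameserver: str, qname: str, rdtype: str = "A") -> set:
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    return {rdata.to_text() for rdata in resolver.resolve(qname, rdtype)}

provider_a = answers("198.51.100.1", "www.example.com")
provider_b = answers("203.0.113.1", "www.example.com")

if provider_a != provider_b:
    # Symmetric difference shows which answers only one provider serves.
    print("Drift detected:", provider_a ^ provider_b)
```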
As for the third choice, roll up your sleeves and get ready to spend plenty of time and resources writing your own DNS management software, deeply integrated with each of your DNS providers. You lose the advantages of your providers' portals and dashboards, and you will have to build your own interpretation layer to keep one provider's advanced features in approximate sync with the next provider's.
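To give a sense of the scale of that effort, here is a rough sketch of such middleware. ProviderAClient and ProviderBClient are hypothetical stand-ins for real provider API wrappers, and every record type and advanced feature would need its own translation logic on top of this skeleton.

```python
# Rough sketch of the option 3 middleware burden. The provider clients
# below are hypothetical; each would wrap a real, and different, API.
from dataclasses import dataclass

@dataclass
class RecordChange:
    zone: str
    name: str
    rtype: str
    answers: list

class ProviderAClient:  # hypothetical wrapper around provider A's API
    def upsert(self, change: RecordChange) -> None:
        ...  # translate the change into provider A's payload and push it

class ProviderBClient:  # hypothetical wrapper around provider B's API
    def upsert(self, change: RecordChange) -> None:
        ...  # same change, entirely different API shape

def apply_change(change: RecordChange) -> None:
    # Both pushes must succeed or the providers drift out of sync,
    # so you also own retries, rollbacks and reconciliation.
    for client in (ProviderAClient(), ProviderBClient()):
        client.upsert(change)

apply_change(RecordChange("example.com", "www", "A", ["192.0.2.10"]))
```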
Thankfully, there are more than just these three choices today. Dedicated DNS solutions, for example, let you place real or virtual servers anywhere you want them: in your office, in your data centers, inside your DMZs, behind your firewalls; literally anywhere that makes sense for your infrastructure. You can then install a DNS software stack on them and turn them into fully managed DNS delivery nodes that are dedicated to you. Through the same portal and API you already use to manage your DNS on a worldwide anycasted managed platform, you can choose which domains you also want to serve from your dedicated DNS nodes.
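As an illustration only (the endpoint, header and payload fields below are assumptions for the sketch, not any provider's documented API), managing both footprints through one API might look something like this:

```python
# Hypothetical sketch: one API call flags a zone to be served from both
# the managed anycast network and your dedicated nodes. The URL, header
# and fields are assumptions; consult your provider's API reference.
import requests

API = "https://api.example-dns-provider.com/v1"
HEADERS = {"X-API-Key": "your-api-key"}

resp = requests.put(
    f"{API}/zones/example.com",
    headers=HEADERS,
    json={"serve_from": ["managed_anycast", "dedicated_nodes"]},
)
resp.raise_for_status()
print(resp.json())
```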
With a setup like this, you get the resiliency benefits of two DNS footprints with the ease of management through a single portal and API. All your advanced traffic management and intelligent Filter Chain configurations work exactly the same, too. And if something were to happen to any part of the managed DNS infrastructure, your dedicated DNS nodes would be unaffected and would continue to happily serve DNS. Once they re-established contact with the "mothership," they would push their queued query statistics upstream and apply any pending record changes.
Dedicated DNS nodes are authoritative DNS servers, but they also support recursion, so you can point all your DNS clients (laptops, servers, EC2 instances, etc.) at them. All your DNS needs are met in one place, and queries for your own domains and records resolve in single-digit milliseconds. You can also leverage advanced Filter Chain capabilities to intelligently direct traffic within your own data centers and achieve greater performance, failover and resiliency between server or application tiers.
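If you want to verify that latency claim for yourself, a quick sketch like the following (dnspython again; the node IP and hostname are placeholders) times a lookup against a dedicated node acting as your clients' recursive resolver:

```python
# Time a lookup against a dedicated DNS node used as the recursive
# resolver. The node IP (10.0.0.53) and hostname are placeholders.
import time
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["10.0.0.53"]

start = time.perf_counter()
answer = resolver.resolve("app.internal.example.com", "A")
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"{answer[0].to_text()} resolved in {elapsed_ms:.1f} ms")
```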
Business used to be simpler, then it got way more complicated, and now it’s come full circle to a more effective approach to DNS. Using a combination of dedicated DNS and managed DNS solutions, you’ll avoid the pitfalls and potential disasters inherent in other options.
About the author:
Nate Lindstrom is the VP of Solutions Engineering for NS1, an intelligent DNS and traffic management platform with a data-driven architecture purpose-built for the most demanding, mission-critical applications on the Internet. He has significant experience building, operating, and securing cloud environments, and has put his expertise to work at companies including Yahoo! and Salesforce. As an evangelist, public speaker and consultant, he enjoys helping companies get the most bang for their buck with AWS and other cloud computing solutions.