For the average end user using their ISP's nameserver, it's... well, out of scope.  But MBIX isn't for individuals; it's for ISPs and enterprises who want substantially enhanced connectivity.
If you run your own recursive resolver, however, whether because you don't trust your ISP's or because you are an ISP yourself, there's a direct correlation between how quickly you can get an answer from a root nameserver and how quickly you can get the answer back to the end user.
Without MBIX, the closest root nameserver most Manitobans have access to is typically 20-30 ms away from their ISP.  Ditto for the .CA nameservers.  (The PCH and CIRA D-root servers also host a bunch of gTLDs, IIRC; I don't know which ones.)
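If you want to see the gap for yourself, here's a rough sketch (mine, not anything official) that times a few UDP queries against a root server and against a nearby resolver, using the dnspython package.  The a.root-servers.net address is the well-known one; the "local resolver" IP is a placeholder you'd swap for your ISP's resolver or your own.

#!/usr/bin/env python3
# Rough latency comparison: root nameserver vs. a nearby resolver.
# Requires dnspython (pip install dnspython).  Which anycast instance
# answers -- and how far away it is -- depends on where you run this.
import time

import dns.message
import dns.query
import dns.rdatatype

TARGETS = {
    "a.root-servers.net": "198.41.0.4",   # well-known root server address
    "local resolver":     "192.168.1.1",  # placeholder: use your own resolver's IP
}

def avg_rtt_ms(server_ip, qname="example.ca.", tries=5):
    """Average round-trip time, in ms, for a simple UDP query."""
    query = dns.message.make_query(qname, dns.rdatatype.NS)
    total = 0.0
    for _ in range(tries):
        start = time.monotonic()
        dns.query.udp(query, server_ip, timeout=2)
        total += time.monotonic() - start
    return (total / tries) * 1000

for name, ip in TARGETS.items():
    print(f"{name:>20}: {avg_rtt_ms(ip):6.1f} ms")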
So while caching helps a lot, cached records expire as their TTLs run out, which means your resolver is back asking the root and TLD servers more often than you'd think.
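A quick way to see why the cache doesn't save you forever: TTLs on real records are often short.  Same dnspython library as above; the domain names here are just examples I picked.

import dns.resolver

for name in ("www.facebook.com", "www.cbc.ca", "muug.mb.ca"):
    answer = dns.resolver.resolve(name, "A")
    print(f"{name:>18}: TTL {answer.rrset.ttl} s")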
Consider how many HTTP requests are involved in loading, say, the Facebook homepage.  Consider how many of those go to different domains.  Now shave ~10 ms off (potentially) every single one of those DNS lookups.
By the time you multiply that out over a day's browsing... times an entire ISP's worth of customers... it adds up.
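Back-of-the-envelope version, with numbers that are entirely my own guesses (not measurements), just to show the order of magnitude:

# 10 ms saved per uncached lookup, with guessed per-user and ISP-size figures.
SAVING_PER_LOOKUP_MS = 10        # the latency difference described above
LOOKUPS_PER_USER_PER_DAY = 500   # guess: uncached lookups per user per day
USERS = 100_000                  # guess: one mid-sized ISP's customer base

saved_ms = SAVING_PER_LOOKUP_MS * LOOKUPS_PER_USER_PER_DAY * USERS
print(f"~{saved_ms / 1000 / 3600:.0f} user-hours of waiting saved per day")
# -> roughly 139 user-hours/day with these (made-up) inputs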
Better DNS is kind of like having better plumbing in your building.  No one thinks about it: as long as the bathroom *functions*, everyone can get their jobs done.  But better-working sinks, toilets, etc. add up to a slightly smoother experience, and those seconds saved do add up over time.

Plus, it also means you can still browse the 'net when the root nameservers are being DDoS'd again :-)

-Adam


On 2016-04-14 22:38, John Lange wrote:
Why would a root name server be significantly different than a normal DNS server? The IPs would be cached most of the time.

I believe MTS has google and netflix caches but I don't know this for a fact.

John


