Actually, it's more complicated than that - rather than one "central registry (that) holds a table that maps domains (www.mysite.com) to DNS servers", there are several layers of hierarchy.
There is a central registry (the root servers) which contains only a small set of entries: the NS (nameserver) records for all the top-level domains - .com, .net, .org, .uk, .us, .au, and so on.
Those servers just contain NS records for the next level down. To pick one example, the nameservers for the .uk domain just have entries for .co.uk, .ac.uk, and the other second-level zones in use in the UK.
Those servers just contain NS records for the next level down - to continue the example, they tell you where to find the NS records for google.co.uk. It's on those servers that you'll finally find a mapping between a hostname like www.google.co.uk and an IP address.
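That walk down the hierarchy can be sketched in a few lines of Python. This is a toy model only - the zone data below is hypothetical (including the final IP address), and real resolvers deal with referrals, retries, and caching - but it shows the shape of iterative resolution: each layer only knows who to ask next.

```python
# Toy sketch of iterative resolution down the DNS hierarchy.
# The zone contents (and the final IP) are made up for illustration.
ZONES = {
    ".":             {"uk.": "referral to the .uk nameservers"},
    "uk.":           {"co.uk.": "referral to the .co.uk nameservers"},
    "co.uk.":        {"google.co.uk.": "referral to google.co.uk's nameservers"},
    "google.co.uk.": {"www.google.co.uk.": "216.58.204.68"},  # hypothetical A record
}

def resolve(name):
    """Walk from the root down, following one referral per layer."""
    labels = name.rstrip(".").split(".")
    zone = "."
    # Ask about progressively longer suffixes: uk. -> co.uk. -> google.co.uk. -> www...
    for i in range(len(labels) - 1, -1, -1):
        suffix = ".".join(labels[i:]) + "."
        answer = ZONES[zone].get(suffix)
        if suffix == name:
            return answer      # reached the zone that holds the final record
        zone = suffix          # otherwise, follow the referral down one layer

print(resolve("www.google.co.uk."))  # walks root -> uk. -> co.uk. -> google.co.uk.
```

Each iteration corresponds to one round trip in a real iterative lookup: the root hands out a referral for .uk, .uk hands out one for .co.uk, and so on until an authoritative answer is reached.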
As an extra wrinkle, each layer will also serve up 'glue' records. Each NS record maps a domain to a hostname - for instance, the NS records for .uk list nsa.nic.uk as one of the servers. To get to the next level, we need to find out what the NS records for nic.uk are, and they turn out to include nsa.nic.uk as well. So now we need to know the IP of nsa.nic.uk, but to find that out we need to make a query to nsa.nic.uk, but we can't make that query until we know the IP for nsa.nic.uk...
To resolve this quandary, the servers for .uk add the A record for nsa.nic.uk into the ADDITIONAL SECTION of the response (response below trimmed for brevity):
jamezpolley@li101-70:~$ dig nic.uk ns
; <<>> DiG 9.7.0-P1 <<>> nic.uk ns
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 21768
;; flags: qr rd ra; QUERY: 1, ANSWER: 11, AUTHORITY: 0, ADDITIONAL: 14
;; QUESTION SECTION:
;nic.uk. IN NS
;; ANSWER SECTION:
nic.uk. 172800 IN NS nsb.nic.uk.
nic.uk. 172800 IN NS nsa.nic.uk.
;; ADDITIONAL SECTION:
nsa.nic.uk. 172800 IN A 156.154.100.3
nsb.nic.uk. 172800 IN A 156.154.101.3
Without these extra glue records, we'd never be able to find the nameservers for nic.uk, and so we'd never be able to look up any domains hosted there.
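The chicken-and-egg problem that glue solves can be made concrete with a small sketch. The record data here is copied from the dig output above; the lookup logic itself is a simplified illustration, not how a real resolver is structured.

```python
# Sketch of the glue-record bootstrap problem, using the referral
# data shown in the dig output above.
referral = {
    "ns": ["nsa.nic.uk.", "nsb.nic.uk."],   # ANSWER SECTION
    "glue": {                               # ADDITIONAL SECTION
        "nsa.nic.uk.": "156.154.100.3",
        "nsb.nic.uk.": "156.154.101.3",
    },
}

def address_of(nameserver, response):
    """Find the IP to contact next, using glue if it's present."""
    if nameserver in response["glue"]:
        return response["glue"][nameserver]
    # nsa.nic.uk lives *inside* nic.uk, so without glue the only way to
    # learn its address would be to ask nsa.nic.uk itself - a dead loop.
    raise RuntimeError(
        f"circular dependency: need the IP of {nameserver} to query {nameserver}"
    )

print(address_of("nsa.nic.uk.", referral))  # 156.154.100.3, straight from glue
```

With an empty `glue` dictionary, the same call raises the circular-dependency error - which is exactly the situation the ADDITIONAL SECTION exists to prevent.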
To get back to your questions...
a) What is the advantage? Why not map directly to an IP address?
For one thing, it allows edits to each individual zone to be distributed. If you want to update the entry for www.mydomain.co.uk, you just need to edit the information on your mydomain.co.uk nameserver. There's no need to notify the central .co.uk servers, or the .uk servers, or the root nameservers. If there was only a single central registry that mapped all the levels all the way down the hierarchy, which had to be notified about every single change to a DNS entry anywhere in the chain, it would be absolutely swamped with traffic.
Before 1982, this was actually how name resolution happened. One central registry was notified about all updates, and it distributed a file called hosts.txt, which contained the hostname and IP address of every machine on the internet. A new version of this file was published every few weeks, and every machine on the internet would have to download a new copy. Well before 1982, this was starting to become problematic, and so DNS was invented to provide a more distributed system.
For another thing, this would be a single point of failure - if the single central registry went down, the entire internet would be offline. Having a distributed system means that failures only affect small sections of the internet, not the whole thing.
(To provide extra redundancy, there are actually 13 separate clusters of servers that serve the root zone. Any changes to the top-level domain records have to be pushed to all 13; imagine having to coordinate updating all 13 of them for every single change to any hostname anywhere in the world...)
b) If the only record that needs to change when I'm configuring a DNS
server to point to a different IP address is located at the DNS
server, why isn't the process instant?
Because DNS uses a lot of caching to both speed things up and decrease the load on the nameservers. Without caching, every single time you visited google.co.uk your computer would have to go out to the network to look up the servers for .uk, then .co.uk, then google.co.uk, then www.google.co.uk. Those answers don't actually change much, so looking them up every time is a waste of time and network traffic. Instead, when the nameserver returns records to your computer, it will include a TTL value that tells your computer to cache the results for a number of seconds.
For example, the NS records for .uk have a TTL of 172800 seconds - 2 days. Google is even more conservative - the NS records for google.co.uk have a TTL of 4 days. Services which rely on being able to update quickly can choose a much lower TTL - for instance, telegraph.co.uk has a TTL of just 600 seconds on its NS records.
If you want updates to your zone to be near-instant, you can choose to lower your TTL as far down as you like. The lower you set it, the more traffic your servers will see, as clients refresh their records more often. Every time a client has to contact your servers to do a query, this causes some lag, as it's slower than looking up the answer in their local cache, so you'll also want to consider the tradeoff between fast updates and a fast service.
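The caching behaviour described above boils down to "keep an answer until its TTL runs out". Here's a minimal sketch of a TTL-respecting cache; it's illustrative only - real resolvers also handle negative caching, TTL clamping, prefetching, and much more.

```python
import time

# Minimal sketch of a TTL-respecting DNS stub cache (illustration only).
class DnsCache:
    def __init__(self):
        self._store = {}  # name -> (records, expiry timestamp)

    def put(self, name, records, ttl):
        # Remember the records until `ttl` seconds from now.
        self._store[name] = (records, time.time() + ttl)

    def get(self, name):
        entry = self._store.get(name)
        if entry is None:
            return None
        records, expiry = entry
        if time.time() >= expiry:   # TTL has run out: must re-query upstream
            del self._store[name]
            return None
        return records

cache = DnsCache()
cache.put("telegraph.co.uk.", ["ns1-63.akam.net."], ttl=600)
print(cache.get("telegraph.co.uk."))  # served locally while the TTL lasts
```

A lower TTL makes `get` return None sooner, forcing a fresh network query - which is exactly the fast-updates-versus-fast-service tradeoff described above.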
c) If the only reason for the delay is DNS caches, is it possible to
bypass them, so I can see what is happening in real time?
Yes, this is easy if you're testing manually with dig or similar tools - just tell it which server to contact.
Here's an example of a cached response:
jamezpolley@host:~$ dig telegraph.co.uk NS
; <<>> DiG 9.7.0-P1 <<>> telegraph.co.uk NS
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 36675
;; flags: qr rd ra; QUERY: 1, ANSWER: 8, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;telegraph.co.uk. IN NS
;; ANSWER SECTION:
telegraph.co.uk. 319 IN NS ns1-63.akam.net.
telegraph.co.uk. 319 IN NS eur3.akam.net.
telegraph.co.uk. 319 IN NS use2.akam.net.
telegraph.co.uk. 319 IN NS usw2.akam.net.
telegraph.co.uk. 319 IN NS use4.akam.net.
telegraph.co.uk. 319 IN NS use1.akam.net.
telegraph.co.uk. 319 IN NS usc4.akam.net.
telegraph.co.uk. 319 IN NS ns1-224.akam.net.
;; Query time: 0 msec
;; SERVER: 97.107.133.4#53(97.107.133.4)
;; WHEN: Thu Feb 2 05:46:02 2012
;; MSG SIZE rcvd: 198
The flags section here doesn't contain the aa flag, so we can see that this result came from a cache rather than directly from an authoritative source. In fact, we can see that it came from 97.107.133.4, which happens to be one of Linode's local DNS resolvers. The fact that the answer was served out of a cache very close to me means that it took 0msec for me to get an answer; but as we'll see in a moment, the price I pay for that speed is that the answer is almost 5 minutes out of date.
To bypass Linode's resolver and go straight to the source, just pick one of those nameservers and tell dig to contact it directly:
jamezpolley@li101-70:~$ dig @ns1-224.akam.net telegraph.co.uk NS
; <<>> DiG 9.7.0-P1 <<>> @ns1-224.akam.net telegraph.co.uk NS
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 23013
;; flags: qr aa rd; QUERY: 1, ANSWER: 8, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available
;; QUESTION SECTION:
;telegraph.co.uk. IN NS
;; ANSWER SECTION:
telegraph.co.uk. 600 IN NS use2.akam.net.
telegraph.co.uk. 600 IN NS eur3.akam.net.
telegraph.co.uk. 600 IN NS use1.akam.net.
telegraph.co.uk. 600 IN NS ns1-63.akam.net.
telegraph.co.uk. 600 IN NS usc4.akam.net.
telegraph.co.uk. 600 IN NS ns1-224.akam.net.
telegraph.co.uk. 600 IN NS usw2.akam.net.
telegraph.co.uk. 600 IN NS use4.akam.net.
;; Query time: 9 msec
;; SERVER: 193.108.91.224#53(193.108.91.224)
;; WHEN: Thu Feb 2 05:48:47 2012
;; MSG SIZE rcvd: 198
You can see that this time, the results were served directly from the source - note the aa flag, which indicates that the results came from an authoritative source. In my earlier example, the results came from my local cache, so they lack the aa flag. I can see that the authoritative source for this domain sets a TTL of 600 seconds. The results I got earlier from a local cache had a TTL of just 319 seconds, which tells me that they'd been sitting in the cache for (600-319) seconds - almost 5 minutes - before I saw them.
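The cache-age arithmetic can be read straight off the two dig outputs: the authoritative server hands out 600 seconds, and the cached copy had 319 seconds left.

```python
# Cache age inferred from the two TTLs seen in the dig outputs above.
authoritative_ttl = 600   # from the query direct to ns1-224.akam.net
remaining_ttl = 319       # from the cached query via Linode's resolver
age_seconds = authoritative_ttl - remaining_ttl
print(age_seconds)        # 281 seconds - a little under 5 minutes in cache
```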
Although the TTL here is only 600 seconds, some ISPs will attempt to reduce their traffic even further by forcing their DNS resolvers to cache results for longer - in some cases, for 24 hours or more. It's traditional (in a we-don't-know-if-this-is-really-necessary-but-let's-be-safe kind of way) to assume that any DNS change you make won't be visible everywhere on the internet for 24-48 hours.