Re: [opennic-discuss] Proposal: .bit / Namecoin peering


  • From: GP <gp AT gparent.net>
  • To: discuss AT lists.opennicproject.org
  • Subject: Re: [opennic-discuss] Proposal: .bit / Namecoin peering
  • Date: Sat, 26 Jul 2014 02:16:09 +0000
  • Openpgp: id=70154FCF

On 7/25/2014 10:37 AM, Alejandro Bonet wrote:
You only need two or three servers for each TLD, and you will get almost the same redundancy as
with ten servers (because when the two or three servers hang
simultaneously, then probably the problem is global, and it can hang
ten or twenty servers also).
That's not really how redundancy works. The more servers you have that aren't controlled by the same entity, the more resilient you are, unless the admins managing them aren't doing their jobs, which so far hasn't been a big problem. Also, even a large-scale problem wouldn't be likely to affect a significant number of the name servers, even at T1.

Also, within the context of OpenNIC, it's possible that an operator may leave a server offline for weeks due to extraordinary circumstances. If we only have 2-3 servers per TLD, we're setting a very low objective to reach when it comes to availability.
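As a rough illustration of why the extra servers matter (assuming, purely for the sake of the numbers, independent failures and a hypothetical 95% uptime per volunteer-run server):

    # Chance that at least one of n authoritative servers is reachable,
    # assuming independent failures and a hypothetical 95% uptime each.
    # The figures are illustrative only, not measured OpenNIC uptimes.
    per_server_downtime = 0.05

    for n in (2, 3, 10):
        availability = 1 - per_server_downtime ** n
        print(f"{n:2d} servers: {availability:.10f}")

Real failures aren't fully independent, of course, which is exactly why servers run by different operators on different networks are what buys the resilience.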
 
As for the scalability 'issue', it's really, really non-existent and going to stay that way for a long time. It's a matter of simple math. Even if OpenNIC grew 500 times in size today, we'd still fit all the zones under the gigabyte mark. And if we do grow 500 times in size, I guarantee you we'll have the budget to buy beefy servers to take the hit.
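To put one number on that "simple math" (the 2 MB total below is an assumption; the .bit zone mentioned later in this thread is about 1 MB):

    # Back-of-the-envelope zone sizing under a hypothetical 500x growth.
    current_total_mb = 2          # assumed total of today's OpenNIC zone files
    growth_factor = 500
    print(f"{current_total_mb * growth_factor} MB")   # on the order of a gigabyte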

I don't suppose you have the source code for that DNS resolver? It would be interesting to see.


 And, with respect to authoritative
responses, if you have ten authoritative servers for a TLD, the
probability of inconsistency in the responses is 5 (or 3.3) times greater
than if you only have two or three authoritative servers for that TLD.
Also, if you need to replicate each complete TLD zone file on each T1
server, and require a T1 (as authoritative and redundant) server for
each TLD, this will run well with ten TLDs, but not with 5000 TLDs."

This discussion will never end: you have your opinion, and I have mine.

Both have advantages and disadvantages, at different scales.

The main difference is only "in style".

(About the "argument of authority" in the sense that "there are people
just going walk in and create a new TLD without any knowledge of how
BIND works, and sometimes without any understanding of how DNS works",
i dont know if you are saying this for me, but i only want to say you
i wrote a DNS client for arduino some years ago, and it is running
perfectly since that, on many installations, 24h/365d).

From scratch. Building and parsing complete DNS QUERY/RESPONSE UDP
packets, field by field, bit by bit, in tedious 16-bit microprocessor
assembly language, with redundant compression of domain names, of course.
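For anyone curious what building those packets field by field looks like, here is a minimal Python sketch of the same idea (this is not Alejandro's Arduino code, and the resolver address is a placeholder):

    # Hand-build a DNS query packet: 12-byte header plus a question section.
    import random
    import socket
    import struct

    def encode_qname(name):
        # A domain name on the wire is a series of length-prefixed labels,
        # terminated by a zero byte.
        labels = name.rstrip(".").split(".")
        return b"".join(bytes([len(l)]) + l.encode("ascii") for l in labels) + b"\x00"

    def build_query(name, qtype=1, qclass=1):   # qtype 1 = A, qclass 1 = IN
        # Header fields: ID, flags (only RD set), QDCOUNT=1, AN/NS/ARCOUNT=0.
        header = struct.pack(">HHHHHH", random.randint(0, 0xFFFF), 0x0100, 1, 0, 0, 0)
        question = encode_qname(name) + struct.pack(">HH", qtype, qclass)
        return header + question

    # In responses, names may be compressed: a two-byte field whose top two
    # bits are set (0xC0) carries a 14-bit offset pointing back at an earlier
    # copy of the name, which is the compression referred to above.
    packet = build_query("example.bit")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(3)
    sock.sendto(packet, ("203.0.113.1", 53))   # placeholder resolver address
    reply, _ = sock.recvfrom(512)
    print(len(reply), "bytes in reply")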

Alejandro Bonet
albogoal AT gmail.com

http://registro.ibu

ns1.ibu: 87.216.170.85
ns2.ibu: 185.16.40.143

Since August 2013


2014-07-14 17:48 GMT+02:00, Jeff Taylor <shdwdrgn AT sourpuss.net>:
If we were trying to maintain our own copy of the .com zone, size would
be an issue.  That file is over 9GB, and it would present a significant
bandwidth problem to many users.  The .bit zone that is being discussed
is only 1MB... it's so small it fits on a floppy disk.  I still don't
understand why you think it is a problem to transfer a file this small
to the T1 and T2 servers?
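For a sense of scale, a quick sketch with dnspython can pull a zone over AXFR and report how big it actually is (the server address below is a placeholder, and the server has to allow the transfer from your host):

    # Pull a TLD zone over AXFR and report its size, using dnspython.
    import dns.query
    import dns.zone

    zone = dns.zone.from_xfr(dns.query.xfr("203.0.113.1", "bit."))   # placeholder T1
    text = zone.to_text()
    print(len(zone.nodes), "names,", round(len(text) / 1024), "KiB as zone-file text")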

"Hey men, there is no reason to mantain copy of all the tld zones in
each T1 server: We only need to mantain pointers to the authoritative
servers for each tld, and recurse them..."
Well yes, there IS a reason to maintain a copy of the TLD zone files on
every T1 server.  That is exactly the point of the T1 servers -- to be
authoritative for all of our TLDs.  If you take that away, then a T1 is
no different from a T2.  Many years ago OpenNic was run with the policy
that only the master for a TLD would answer.  There were no backup
copies maintained on other T1 servers.  Guess what happened every time
one of the master servers went offline?  All resolution for every domain
registered under that server's TLD became unavailable.  What you are
proposing is that we move backwards and give up redundancy and
reliability.  Why would anybody want that?
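If anyone wants to see that redundancy at work, a small dnspython check can compare SOA serials for a TLD across several T1 servers; matching serials mean the copies are in sync (the addresses below are placeholders, not real T1 hosts):

    # Compare SOA serials for one OpenNIC TLD across several authoritative servers.
    import dns.message
    import dns.query
    import dns.rdatatype

    T1_SERVERS = ["203.0.113.1", "203.0.113.2", "203.0.113.3"]   # placeholder addresses
    query = dns.message.make_query("geek.", dns.rdatatype.SOA)

    for server in T1_SERVERS:
        try:
            response = dns.query.udp(query, server, timeout=3)
            serial = response.answer[0][0].serial
            print(server, "serial", serial)
        except Exception as exc:
            print(server, "no answer:", exc)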

Resolvers are trivial to set up compared to a tier 1 server. People
who decide to create a TLD need to be competent at running it by
themselves, and this is why we require them to run a tier 1 server to
prove as much. I don't think this has been a barrier to entry for
anyone so far.
Actually it HAS been a barrier, and it is supposed to be a barrier. As
you say, there needs to be a certain amount of competency with running
DNS and maintaining a server in general before someone should be allowed
to operate a TLD.  We've had our share of problems in the past, and new
rules are created in response to those problems.  I see a lot of emails
come across the mailing list where people think they're just going to
walk in and create a new TLD without any knowledge of how BIND works,
and sometimes without any understanding of how DNS works.  OpenNic is a
project about learning, and many of us are more than happy to help
people learn how to set up new TLDs on their own personal network, but
the public DNS space is not the place to be experimenting and trying to
figure it out as you go... when we offer a public TLD for domain
registration, people expect it to work.




--------
You are a member of the OpenNIC Discuss list. 
You may unsubscribe by emailing discuss-unsubscribe AT lists.opennicproject.org


-- 
-gp


