Re: [opennic-discuss] Tier2 naming scheme...


  • From: Zach Gibbens <infocop411 AT gmail.com>
  • To: discuss AT lists.opennicproject.org
  • Subject: Re: [opennic-discuss] Tier2 naming scheme...
  • Date: Sun, 30 Jan 2011 12:08:25 -0500
  • List-archive: <http://lists.darkdna.net/pipermail/discuss>
  • List-id: <discuss.lists.opennicproject.org>

True, but not everybody is IPv6-enabled. A tunnel isn't too hard to set
up, though, and it would add another layer of checks and balances. Not to
mention your reports cite the hostname, so it would clean up the reports
a little too.

I've not heard anything outright against renaming the servers into the
new scheme, so I'll start work on changing over to
nsX.ipvX.[CC].dns.opennic.glue next week (giving some time for further comments).
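For what it's worth, the proposed scheme would look something like this in a zone file; the country code, TTL, and addresses here are made-up placeholders (drawn from the reserved documentation ranges), not real servers:

```
; hypothetical fragment of the opennic.glue zone under the nsX.ipvX.[CC] scheme
ns1.ipv4.us.dns.opennic.glue.  86400  IN  A     203.0.113.10  ; placeholder (TEST-NET-3)
ns1.ipv6.us.dns.opennic.glue.  86400  IN  AAAA  2001:db8::10  ; placeholder (RFC 3849 prefix)
```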

On Sat, Jan 29, 2011 at 11:38 PM, Jeff Taylor <shdwdrgn AT sourpuss.net> wrote:
> I do like the idea of pairing up the names somehow.  It would certainly
> make it easier, at a glance, to see if two outages are actually a single
> server not responding on either interface.  On the other hand, for
> troubleshooting purposes, there's no reason why you can't tell dig to
> specifically pull ipv4 or ipv6 queries.
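On Jeff's dig point: `dig A <name>` and `dig AAAA <name>` pull each record type separately (and the `-4`/`-6` flags force the transport). The same per-family lookup can be sketched with Python's standard library; `localhost` stands in for a real server name here:

```python
import socket

def addrs(host, family):
    """Resolve host to the addresses of one family only --
    the scripted analogue of `dig A host` vs `dig AAAA host`."""
    return sorted({info[4][0] for info in socket.getaddrinfo(host, None, family)})

# Check each side of a dual-stacked name independently:
print(addrs("localhost", socket.AF_INET))   # IPv4 addresses only
```

If the AF_INET lookup succeeds while the AF_INET6 one fails, the daemon is likely fine and the fault lies in the v6 path, which is exactly the kind of isolation being discussed in this thread.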
>
>
> On 01/29/2011 05:02 AM, Zach Gibbens wrote:
>
> I carried on that "mistake" at first just because that's how we did it,
> and I asked some of this myself, but there are actually two good side
> effects.  First, at least once since I took over, an IPv6 route became
> unavailable due to a routing issue, which meant we temporarily removed
> it as part of the temporary-outage procedure.  The same box also served
> as a Tier 2 over IPv4 on a slightly different route, so the first
> benefit was that we were more selective in identifying the outage: his
> server still received queries, and still resolved them, over IPv4.
>
> The second benefit is due to how we spotted the error: it was
> immediately clear that the issue lay in the IPv6 routing, not in bind9
> (or another daemon).
>
> Both of these points helped reduce downtime for the server by giving
> accurate reports of the issue, with as much detail as could be gathered.
>
> Given the current setup, this winds up having some benefits for little
> if any downside (a larger zone file, at worst).
>
> I do like your idea of ns{n}.ipv{n}.{CC}.dns.opennic.glue; I'd be
> willing to implement that if nobody's opposed.  (If I may make a
> suggestion on it: how about just nsX.{CC}.dns.opennic.glue, unless the
> same server has a dual stack, in which case nsX.ipv4 and nsX.ipv6?
> That should help keep them paired, for a little more visual aid.)
>
> So I don't know if that was a mistake; as Jeff suggested, it may have
> been, but it's proven to be a useful one in diagnosing an issue at
> least once, and as IPv6 continues to build up steam, that might be all
> the more reason to continue.
>
> Just my two pence
>
>
>
> On 01/28/2011 11:47 PM, Jeff Taylor wrote:
>
> I know I've been asked this question countless times, and I know I've
> mentioned it on IRC, but for the life of me, I can't recall any
> explanation other than "that's just the way we've always done it"...
>
> If this is so, it makes me wonder if the first IPv6 address was given a
> new hostname simply because the person adding it didn't realize that
> both types of records can point to the same hostname.  We've had a lot
> of folks setting up tunnels over the past couple of years, and I expect
> that in the next couple of years we'll see a move toward ISPs providing
> native IPv6 addressing, so this is something that really should be
> resolved now, so we can get a policy posted to answer further questions...
>
>
> On 01/28/2011 06:35 PM, NovaKing wrote:
>
> I've noticed that each new T2 gets a unique FQDN, but I've also noticed
> that a server which has both IPv4 and IPv6 gets a different FQDN for
> each.  Is this required?
>
> If you want each IP to get a unique fqdn maybe a nicer approach would be
> something like ns{n}{.ipv6}.{tld}.dns.opennic.glue
>
> Example:
>
> ns1.se.dns.opennic.glue = 192.121.121.14
> ns1.ipv6.se.dns.opennic.glue = 2a01:298:3:100::14
>
> At least this way the numbering doesn't increment so quickly for
> servers with both IPv4 and IPv6.
>
> But realistically ns1.se.dns.opennic.glue should simply have both A
> and AAAA
> records assigned to it.
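In zone-file terms, that last suggestion is simply two records on one owner name, reusing the addresses from the example above:

```
ns1.se.dns.opennic.glue.  IN  A     192.121.121.14
ns1.se.dns.opennic.glue.  IN  AAAA  2a01:298:3:100::14
```

A resolver asking for A records and one asking for AAAA records then get the matching answer from the same hostname, with no extra names to allocate.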
>
>
> _______________________________________________
> discuss mailing list
> discuss AT lists.opennicproject.org
> http://lists.darkdna.net/mailman/listinfo/discuss
>
>



