Opened 12 years ago

Closed 12 years ago

#23 closed defect (fixed)

Comments on "draft-iab-dns-applications-00"

Reported by: Ed.Lewis@… Owned by: jon.peterson@…
Priority: blocker Milestone: milestone1
Component: draft-iab-dns-applications Version: 1.0
Severity: Active WG Document Keywords:


Comments in reference to this document:

# Network Working Group O. Kolkman
# Internet-Draft NLNet
# Intended status: Standards Track J. Peterson
# Expires: April 20, 2011 NeuStar, Inc.
# H. Tschofenig
# Nokia Siemens Networks
# B. Aboba
# Microsoft Corporation
# October 17, 2010

"Network Working Group"? Or IAB?

# Architectural Considerations on Application Features in the DNS
# draft-iab-dns-applications-00
# 2. Motivation
# The MX record was the first of a long series of DNS resource records
# that supported applications associated with a domain name. The SRV

I thought the MX record was the successor to a number of other obsoleted mail-related records. I am not sure of this, but I thought MX was "the result of evolution" and not the start of it.

In the IANA registry for resource record types:

MD 3 a mail destination (Obsolete - use MX) [RFC1035]
MF 4 a mail forwarder (Obsolete - use MX) [RFC1035]
MB 7 a mailbox domain name (EXPERIMENTAL) [RFC1035]
MG 8 a mail group member (EXPERIMENTAL) [RFC1035]
MR 9 a mail rename domain name (EXPERIMENTAL) [RFC1035]
MINFO 14 mailbox or mail list information [RFC1035]
MX 15 mail exchange [RFC1035]

Just saying...

# resource record provided a more general mechanism for identifying
# services in a domain, complete with a weighting system and selection
# among transports. The NAPTR resource record, especially in its

And most importantly (for SRV), selection of the port number. It would have been so cool to let the DNS use this to replace NS records. Hey - NS records are evidence that the DNS application invaded the data model too! (NS, MX, DS all play similar roles, architecturally speaking.)
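To make the SRV point concrete: the record carries priority, weight, port, and target, and a client selects among candidates. The sketch below is a simplified, deterministic stand-in for the weighted-random selection RFC 2782 actually specifies, with hypothetical hostnames:

```python
from collections import namedtuple

# A parsed SRV record, e.g. "_sip._tcp.example.com. 300 IN SRV 10 60 5060 a.example.com."
SRV = namedtuple("SRV", "priority weight port target")

def select_srv(records):
    """Pick a target host and port from SRV records.

    Simplified from RFC 2782: take the lowest-priority group, then
    prefer the highest weight deterministically (the RFC actually
    specifies a weighted *random* choice within the group).
    """
    if not records:
        return None
    lowest = min(r.priority for r in records)
    group = [r for r in records if r.priority == lowest]
    best = max(group, key=lambda r: r.weight)
    return best.target, best.port

records = [
    SRV(10, 60, 5060, "a.example.com."),
    SRV(10, 20, 5060, "b.example.com."),
    SRV(20, 0, 5061, "backup.example.com."),
]
print(select_srv(records))  # ('a.example.com.', 5060)
```

Note that the port travels with the record - exactly the piece that NS (and MX) records cannot express.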

# As the amount of application intelligence in the DNS has increased,

I don't see evidence that it has increased, but the desire to see it happen has increased. SRV and NAPTR have not been generally used - there is widespread use, true, but only in narrow domains (like SRV in Active Directory).

# however, some proposed extensions have become mis-aligned with the
# foundational assumptions of the DNS. One such assumption is that the
# resolution of traditional domain names to IP addresses is public
# information, lacking any confidentiality requirement - any security
# needed by an application or service is invoked after the service has

I think the lack of confidentiality arises from the overhead imposed and from the desire to avoid the policy implications of data secrecy. I disagree that it arises from the assumption that it is okay to reach a place and then deal with security.

# been contacted. Typically, the translation is also global
# information, meaning that the response to a resolution does not
# depend on the identity of the querier (although for load balancing
# reasons or related optimizations, the DNS may return different
# addresses in response to different queries, or even no response at
# all, which is discussed further below). These assumptions permit the

Only the IETF seems to still believe this. As far as a response being independent of the querier's characteristics, that cow has fled the barn, commercially speaking.

# existence of a single authoritative unique global root of the DNS,
# and also underlie the scaling capabilities of the DNS, notably the
# ability of intermediaries to cache responses. At the point where
# these assumptions no longer apply to the data that an application
# requires, one can reasonably question whether or not that application
# should use the DNS to deliver that data.

There is no single authoritative unique global root of the DNS today, no more so than there is a single data space in all deployments of the Oracle database.
Not every inter-network is interconnected; a specific example is the GSM network, for which NeuStar operates "yet another root DNS zone".

I am not talking about alternate roots for the global public internet, I'm talking about altogether disparate networks.

# Increasingly, however, the flexibility of the DDDS framework has
# encouraged the re-purposing of the DNS into a generic database. Since

Being unfamiliar with databases outside of the generalities, what is a generic database? SQL? Relational? That cloud stuff? Like cassandra?

# the output of DDDS can be a URI, and URIs themselves are containers
# for basically arbitrary data, through the DDDS framework one can
# query for an arbitrary string (provided it can be formatted and
# contained within the syntactical limits of a domain name) and receive
# as a response an equally arbitrary chunk of data. The use of the DNS
# for generic database lookups is especially attractive in environments
# that already use the DDDS framework, where deployments would prefer
# to reuse the existing query/response interface of the DNS over
# installing a new and separate database capability.

But there's still no indexing, which I understand is a big topic in relational databases. At least the Oracle DBAs are always muttering about indexing.
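To make concrete the "arbitrary chunk of data" claim in the quoted text: a data URL (RFC 2397) really can carry any payload, as this minimal decoder sketch shows (it keeps any ;charset=... parameters as part of the mediatype):

```python
import base64
from urllib.parse import unquote_to_bytes

def parse_data_url(url):
    """Decode an RFC 2397 data URL into (mediatype, payload bytes).

    A minimal sketch of the format: "data:" [mediatype] [";base64"] "," data.
    An omitted mediatype defaults to text/plain per the RFC.
    """
    if not url.startswith("data:"):
        raise ValueError("not a data URL")
    header, _, payload = url[5:].partition(",")
    if header.endswith(";base64"):
        return header[:-7] or "text/plain", base64.b64decode(payload)
    return header or "text/plain", unquote_to_bytes(payload)

# The sort of arbitrary blob a DDDS/NAPTR lookup could hand back:
print(parse_data_url("data:text/plain;base64,SGVsbG8sIEROUw=="))
# ('text/plain', b'Hello, DNS')
```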

# The guidance in this document complements the guidance on extending
# the DNS given in [RFC5507]. Whereas RFC5507 considers the preferred
# ways to add new information to the underlying syntax of the DNS (such
# as defining new resource records or adding prefixes or suffixes to
# labels), the current document considers broader implications of
# offloading application features to the DNS, be it through extending
# the DNS or simply reusing existing protocol capabilities. It is the
# features themselves, rather than any syntactical representation of
# those features, that are considered here.

Is there any referenceable document that lists the DNS's strengths, like timeliness in responding, resiliency, scaling? Knowing what the architecture of the DNS does very well and what would undermine it would be a really good starting point.

# 3. Overview of DNS Application Usages #
# While the fundamental motivation for the Domain Name System was to
# replace lengthy numeric addresses with strings that are easier to
# interpret and memorize, the hierarchical system of hosts and domains

That's not what I thought it was. We already had that solved with /etc/hosts.txt. The motivation for DNS was to shorten the time it took to "add" hosts to the public consciousness.

# rendered the DNS important for its administrative properties as well
# as its mnemonics. In so far as the DNS explained how to reach an
# administrative domain rather than simply a host, it naturally
# extended to optimize for reaching particular applications within a
# domain. Without these extensions, a user trying to send mail to a
# foreign domain, for example, lacked a discovery mechanism to locate
# the right host in the remote domain to connect to for mail. While
# such a discovery mechanism could be built by each such application
# protocol, the universality of the DNS invites installing these such
# features in its public tree.

I don't really get this paragraph. For one, the text here - "naturally extended to optimize for reaching" - is unparsable by my brain; I don't know what is meant by it. In general, I just don't get the paragraph. Maybe because you have to also address wildcards and CNAMEs to call the MX record evidence for supporting a "discovery mechanism."

Or it could be that there is a hidden motivation here involving "discovery mechanisms." Perhaps there is something that caused this document to be written. If so, just say it.

# 3.1. Locating Services in a Domain
# The Mail Exchange (MX) DNS resource record provides the simplest
# motivating example for an application advertising its host in the
# Domain Name System. The MX resource record contains the hostname of
# a server within the administrative domain in question that receives
# mail; that hostname must itself be resolved to an IP address through

Whoa. I hadn't tripped over "administrative domain" before, but here I have to stop. An MX record can refer to any server anywhere. It isn't an "advertisement" (in the BGP sense) at all, just a "go here" notice. MX records play a role in disaster recovery by listing backup drop-off points too.
And so on. It isn't all about "discovery" unless you really twist the topic.

# the DNS in order to reach the mail server. While naming conventions
# for applications might serve a similar purpose (a host might be named
# "" for example), approaching service location through

Not all domain names refer to hosts; they can refer to services. I would have assumed (meaning, if I were to do the DNS for a site) that a domain name like "mail" or "smtp" would have a CNAME record pointing to a domain name that represents a host (one having an A/AAAA record).

# the creation of a new resource record yields several important
# benefits. Firstly, one can put multiple MX records in a zone, in
# order to designate backup hosts that can receive mail when the
# primary server is offline. One can even load balance across several
# such hosts. These properties could not easily be captured by naming
# conventions (see [RFC4367]).

Well, I'm not sure where you are going with this. So I'll just read on for a while.
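The backup and load-balancing behavior the quoted text describes comes down to ordering on the MX preference field; a minimal sketch, with hypothetical hostnames:

```python
def order_mx(mx_records):
    """Order MX records for delivery attempts per RFC 5321 semantics.

    Lower preference values are tried first; records with equal
    preference form a crude load-balancing set (a sender may pick
    among them at random).  Python's sort is stable, so the relative
    order within an equal-preference set is preserved here.
    """
    return sorted(mx_records, key=lambda rec: rec[0])

# Zone-style data: (preference, exchange)
zone_mx = [
    (20, "backup.example.net."),   # off-site disaster-recovery drop-off
    (10, "mail2.example.com."),
    (10, "mail1.example.com."),
]

for pref, host in order_mx(zone_mx):
    print(pref, host)
```

The preference-20 host is exactly the "backup drop-off point" case mentioned above, and a naming convention alone could not express it.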

# 4. Challenges for the DNS
# These methods for transforming arbitrary identifiers into domain
# names, and for returning arbitrary data in response to DNS queries,
# both represent significant extensions from the original concept of
# the DNS, yet neither fundamentally alters the underlying model of the
# DNS. The promise that applications might rely on the DNS as a
# generic database, however, invariably gives rise to additional
# requirements that one might expect to find in a database access
# protocol: authentication of the source of queries for comparison to
# access control lists, formulating complex relational queries, and
# asking questions about the structure of the database itself. DNS was
# not designed to provide these sorts of properties, and extending the
# DNS to encompass them would represent a fundamental alteration to its
# model. If an application desires these properties from a database,
# in general this is a good indication that the DNS cannot meet the
# needs of the application in question.

I disagree in general with this paragraph.

The DNS is an example of a distributed information passing system that associates 3-tuples to 4-tuples. That is, a triplet of name, type, class will fetch a quadruplet of name, type, class, rdata. How this association is created is kind of like a "dirty snowball." There is the most pristine association, in which the associating is most like "garbage in, garbage out".
Then there is the dirty part, which DNS calls "special processing."

Where we get into messy discussions is determining what "special processing" is good and what is bad. For example, adherents take for granted the CNAME record, but it is just as bad to the DNS as basing the RDATA on the source IP address of the query.

To get back, though, to the 3-tuple to 4-tuple thing. When I was studying artificial intelligence and then distributed/parallel processing theory in the 1980s, well prior to my ever learning about the DNS, a central construct was some sort of shared memory: a place where one thread could drop information for another thread. The basic capsule was a 2-tuple, a (unique) identifier and the associated data. A deposit would be a fully qualified 2-tuple and a pickup would be a tuple with the identifier and an empty other half. (DNS merely replaces the identifier with the name/class/type.)
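The tuple-space analogy can be sketched in a few lines - a toy associative store where deposits are fully qualified records and pickups are keyed by (name, class, type), with no special processing of any kind:

```python
# A toy associative store in the spirit of the 3-tuple -> 4-tuple view.
store = {}

def deposit(name, rrclass, rrtype, rdata):
    """File an association under its (name, class, type) identifier."""
    store.setdefault((name.lower(), rrclass, rrtype), []).append(rdata)

def pickup(name, rrclass, rrtype):
    """Fetch the rdata half: the "pristine" garbage-in, garbage-out case,
    with no CNAME chasing, wildcard synthesis, or querier-dependent answers."""
    return store.get((name.lower(), rrclass, rrtype), [])

deposit("example.com.", "IN", "MX", (10, "mail.example.com."))
deposit("example.com.", "IN", "MX", (20, "backup.example.com."))
print(pickup("example.com.", "IN", "MX"))
```

Everything else the DNS does - CNAME/DNAME rewriting, wildcards, source-address-dependent answers - is a departure from this pristine model, which is the point of the "dirty snowball" framing.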

Looking at DNS this way, you begin to put its sub-mechanisms in context.
Records like SOA and NS are merely there for internal memory management; they let the associative element move from theory into reality. Including AXFR, IXFR, NOTIFY and such, it's all about just making the associations available.
These are not essential architectural elements mind you, no more than a liver is important in the telling of a joke - if the joke is from a live person, the liver is vital, but if the joke is delivered by a robot, no liver needed.

Moving on to the query modification mechanisms, there is the CNAME and DNAME.
These cause special processing that the DNS does to help "resolve" the identifier. MX has a hand here because it will attach address records to the response, an example of a half-breed, if you will, a layer-violating feature.

I forget if SRV and NAPTR really are special to the DNS. It depends on where you draw the architectural line. Is the DDDS an application on top of DNS (where an E164 is converted to a domain name) or part of the DNS? If it is outside, it is just getting the type and data it expects (the NAPTR or SRV).

Wildcard is an answer synthesis generator that is compatible with the zone update mechanism. Once the DNS is directed to the wildcard, an answer is computed from data in the query and delivered.

Given that, once we begin to look at basing answers on other factors, like source IP address, it gets harder to draw lines of what is acceptable and not acceptable.

One criterion is whether a mechanism is able to be implemented in an interoperable way, with interoperable meaning that more than one code base can be used to generate equivalent activity. The trouble with that criterion is that while it is important and fundamental to the IETF's goals, to others, interoperability is not important, or not as important as other goals.

There are those whose goal is traffic engineering, using bandwidth in a locally optimized way. These are folks who want to use the DNS in a way that helps them direct traffic to places regardless of whether or not the mechanism is interoperable.

# Since many of these new requirements have emerged from the ENUM
# space, the following sections use ENUM as an illustrative example;
# however, any application using the DNS as a feature-rich database
# could easily end up with similar requirements.

I have a fundamental objection to this. Generalizing from a single case yields a low-confidence projection. If ENUM is unique, make this a complaint about ENUM, not a blanket statement.

# 4.1. Compound Queries
# parameters in this fashion is very dubious, especially if more than

Can you quantify "dubious?"

The trouble with criticizing this is that there is nothing wrong with a domain name representing something other than a host. Having a domain represent a phone number - what's wrong with that? You might say hostnames are subdomains of an administrative entity and therefore accept that it is okay to arrange the DNS as we do. But why can't a trunk group be a delegation from a phone number?

Remember that what is being done is representing a real-world situation in the DNS data space model. There's a lot of flexibility there. Databases have lots of schemas, tailored to their application domain.

# 4.1.1. Responses Tailored to the Originator #
# The most important subcase of the compound queries are DNS responses
# tailored to the identity of their originator, where some sort of
# administrative identity of the originator must be conveyed to the
# DNS. Often the source IP address is somehow mapped to the

How is this a subcase? The paper switches focus from the layout of the database to a comment on the response generation process. This is apples and oranges.

# administrative entity of the originator in deployments with this
# requirement today. The security implications of trusting the source
# IP address of a DNS query have prevented most solutions along these
# lines from being standardized. Some applications require an

Standardized, maybe, but that hasn't stopped operations. (My frustration with the IETF is that it is most like an ostrich - when confronted by use cases it doesn't like, it pretends they don't exist. This is why the IETF is flirting with being a mere ivory tower.)

In some cases, it isn't important if the source lies. Get over it.

# application-layer identifier of the originator rather than an IP
# address; for example, draft-kaplan-enum-source-uri provides a SIP URI
# in an eDNS0 parameter (though without any specific provision for
# cryptographically verifying the claimed identity). Effectively, the

Enough already - get over it.

# conveyance of this information about the administrative identity of
# the originator is a weak authentication mechanism, on the basis of

Enough already - get over it.

# which the DNS server makes an authorization decision before sharing
# resource records. This can parlay into a selective confidentiality
# mechanism, where only a specific set of originators are permitted to
# see resource records, or a case where a query for the same name by
# different entities results in completely different resource record
# sets. The DNS, however, substantially lacks the protocol semantics

And the problem with that is...?

# to manage access control list for data, and again, caching and
# recursion introduce significant challenges for applications that
# attempt to offload this responsibility to the DNS. Achieving feature
# parity with even the simplest authentication mechanisms available at
# the application layer would likely require significant rearchitecture
# of the DNS.

Aww, c'mon. This sounds like "we just don't like this use of our toys and so we will insinuate it is bad."

You are wasting the opportunity to educate us on how this has implications for caching. And that the DNS is not a client-server protocol but a client-cache-server protocol.

You are ignoring that there are deployments out there that make great use of inconsistent responses. And that, if you want to get mathematical, it is impossible to distinguish between selective answering and a zone that is in constant flutter. The impact on caches is the same, btw.

# 4.2. Metadata about Tree Structure
# ENUM use cases have also surfaced a couple of optimization
# requirements to reduce unnecessary calls and queries by including
# metadata that describes the contents and structure of ENUM DNS trees.
# In particular, the "send-n" proposal (draft-bellis-enum-send-n) hopes
# to reduce the number of DNS queries sent in cases where a telephone
# system is collecting dialed digits in a region that supports
# "overlap" dialing, a form of variable-length number plan. In these
# plans, a telephone switch ordinarily cannot anticipate when a dialed
# number is complete, as only the terminating customer premise
# equipment (typically a private branch exchange) knows how long a
# telephone number needs to be. The "send-n" proposal offloads to the
# DNS the responsibility for informing the telephone switch how many
# digits must be collected by placing in zones corresponding to
# incomplete telephone numbers some resource records which state how
# many more digits are required - effectively how many steps down the
# DNS tree one must take to reach a name for a complete number. With
# this information, the application is not required to query the DNS
# every time a new digit is dialed, but can wait to collect sufficient
# digits to receive a response. A tangentially related proposal,
# draft-ietf-enum-void, similarly places resource records in the DNS
# that tell the application that it need not attempt to reach a number
# on the PSTN, as the number is unassigned.
# Both proposals optimize application behavior by placing metadata in

As an editorial note: you've just had a real long paragraph on one proposal and then, in the last sentence, mention something unrelated. Starting this paragraph you begin with "Both," but it is hardly obvious what you mean.

My English composition teacher would be frowning.

# the DNS that predicts the success of future queries or application
# invocations. These predictions require that the metadata remain
# synchronized with the state of the resources it predicts.
# Maintaining that synchronization, however, requires that the DNS have
# semi-real time updates that may conflict with scale and caching
# mechanisms. It is unclear why this data is better maintained by the
# DNS than in an unrelated application protocol.

Based on the observation that "generalizing from one case is fruitless" I don't know what to make of this section. I mean "may conflict with scale and caching mechanisms." This sounds like you are just giving this idea a brush off.

# 4.3. Using DNS as a Generic Database
# As previously noted, the use of the First Well Known Rule of DDDS
# combined with data URLs effectively allows the DNS to answer queries
# for arbitrary strings and to return arbitrary data as value. Some
# query-response applications, however, require queries and responses
# that simply fall outside the syntactic capabilities of the DNS.
# While the data URL specification (RFC2397) notes that it is "only
# useful for short values," many applications today use quite large
# data URLs as workarounds in environments where only URIs can be
# interpolated. While the use of TCP and eDNS0 allows DNS responses to
# be quite long, nonetheless there are forms of data that an
# application might store in the DNS that exceed reasonable limits: in
# the ENUM context, for example, something like storing base 64 encoded
# mp3 files of custom ringtones. Similarly the domain names themselves
# must conform with certain syntactic constraints: they must consist of
# labels that do not exceed 63 characters while the total length of the
# encoded name may not exceed 255 octets, they must obey fairly strict
# encoding rules, and so on.

I don't think you guys know what a "generic" database is... this section is worthless. (My DBAs are always talking about index tables, schemas, and so on. Not just about the volume of data, length of SQL statements, ...)
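The label and name-length limits the quoted section cites are at least mechanically checkable; a length-only sketch (it ignores the character-encoding rules):

```python
def valid_dns_name(name):
    """Check the wire-format limits: labels of at most 63 octets, and a
    total encoded length of at most 255 octets (each label costs one
    length octet, plus one octet for the empty root label).

    A sketch that checks lengths only, not the allowed characters.
    """
    stripped = name.rstrip(".")
    labels = stripped.split(".") if stripped else []
    encoded_len = sum(len(label) + 1 for label in labels) + 1  # +1 for root
    return all(0 < len(label) <= 63 for label in labels) and encoded_len <= 255

print(valid_dns_name("example.com."))      # True
print(valid_dns_name("a" * 64 + ".com"))   # False: label exceeds 63 octets
```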

# 4.3.1. Administrative Structures Misaligned with the DNS #
# While the DDDS framework enables any sort of alphanumeric data to
# serve as a DNS name through the application of the First Well Known
# Rule, the delegative structure of the resulting DNS name may not
# reflect the division of responsibilities for the resources that the
# alphanumeric data indicates. Telephone numbers in the United States,
# for example, are assigned and delegated in a relatively complex
# manner: the first three digits of a nationally specific number are an
# "area code" which is understood as an indivisible component of the
# number, yet for the purpose of the DNS, those three digits are ranked
# hierarchically.

Again, generalizing from a single example. Why not instead obsolete DDDS if it is the source of this evil and killing it will prevent the evil from coming again?
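For reference, the First Well Known Rule mapping that produces this misalignment is trivial to state; a sketch of the E.164-to-domain conversion per RFC 3761, where a US area code such as 571 ends up split across three separate delegation levels:

```python
def e164_to_enum(number, suffix="e164.arpa"):
    """Apply the ENUM First Well Known Rule: keep only the digits of the
    E.164 number, reverse them, and join them with dots under the suffix.

    Each digit becomes its own label - and thus a potential delegation
    point - regardless of how the number plan actually carves up the number.
    """
    digits = [c for c in number if c.isdigit()]
    return ".".join(reversed(digits)) + "." + suffix

print(e164_to_enum("+1-571-434-5468"))
# 8.6.4.5.4.3.4.1.7.5.1.e164.arpa
```

The "571" area code, indivisible in the number plan, becomes the labels 1, 7, and 5 at three different depths of the tree.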

# 4.4. Domain Redirection
# Most Internet application services provide a redirection feature -
# when you attempt to contact a service, the service may refer you to a
# different service instance, potentially in another domain, that is
# for whatever reason better suited to address a request. In HTTP and
# SIP, for example, this feature is implemented by the 300 class
# responses containing one or more better URIs that may indicate that a
# resource has moved temporarily or permanently to another service.
# Tools in the DNS like the SRV record, however, can provide a similar
# feature at a DNS level, and consequently some applications as an
# optimization offload the responsibility for redirection to the DNS;
# NAPTR can also provide this capability on a per-application basis,
# and numerous DNS resource records can provide redirection on a per-
# domain basis. This can prevent the unnecessary expenditure of
# application resources on a function that could be performed as a
# component of a DNS lookup that is already a prerequisite for
# contacting the service. Consequently, in some deployment
# architectures this DNS-layer redirection is used for virtual hosting
# services.
# Implementing domain redirection in the DNS, however, has important
# consequences for application security. In the absence of universal
# DNSSEC, applications must blindly trust the DNS in order to believe
# that their request has not been hijacked and redirected to a
# potentially malicious domain, unless some subsequent application
# mechanism can provide the necessary assurance. By way of contrast,
# for application-layer redirections protocols like HTTP and SIP have
# widely deployed security mechanisms such as TLS that can use
# certificates to vouch that a 300 response came from the domain that
# the originator initially hoped to contact.

You do realize that if this redirection is done by the authority server, DNSSEC doesn't "help."

# A number of applications have attempted to provide an after-the-fact
# security mechanism that verifies the authority of a DNS delegation in
# the absence of DNSSEC. The specification for deferencing SIP URIs
# ([RFC3263], reaffirmed in [RFC5922]) requires that during TLS
# establishment, the site eventually reached by a SIP request present a
# certificate corresponding to the original URI expected by the user
# (in other words, if redirects to in the DNS,
# this mechanism expects that will supply a certificate for
# in TLS), which requires a virtual hosting service to
# possess a certificate corresponding to the hosted domain. This
# restriction rules out many styles of hosting deployments common in the
# web world today, however. [I-D.barnes-hard-problem] explores this
# problem space, and [I-D.saintandre-tls-server-id-check] proposes a
# solution for all applications that use TLS. Potentially, new types
# of certificates (similar to [RFC4985]) might bridge this gap, but
# support for those certificates would require changes to existing
# certificate authority practices as well as application behavior.
# All of these application-layer measures attempt to mirror the
# delegation of authority in the DNS, when the DNS itself serves as the
# ultimate authority on how domains are delegated. The difficulty of
# synchronizing a static instrument like a certificate with a
# delegation in the DNS, however, exposes the fundamentally problematic
# nature of this endeavor. In environments where DNSSEC is not
# available, the problems with securing DNS-layer redirections would be
# avoided by performing redirections in the application-layer.

I just can't deal with this section. Way off-base and unhooked from reality. DNSSEC is not the savior, and the lack of it is not an obstacle to security in these situations. DNS is not the ultimate authority on how the internet is organized - despite its work on delegating domains.

Remember that the domain name space is just the dataspace deployment that represents what the administrator of the domain wants. The DNS is not "large and in charge" it is the message passing mechanism from source to sink to arrange the flow of information.

# 5. Principles and Guidance

This section isn't promising given that a lot of the above is dubious. Reading what is written made me think of sticking my fingers in my ears and yelling "la-la-la-la-la" when someone offered a counter opinion.

What should be done is to address the real question - what can the DNS do to help solve the problem of traffic management. It is traffic management that is driving things like SIP, IPv6 co-existence, commercial interest in inconsistent answers and commercial interest in non-interoperable answer synthesis functionality.

Instead of saying that the DNS ought not to evolve beyond what it was in 1990, you could address its role in overcoming shortcomings in the other pieces of the IPv4/IPv6 stacks that are driving the interest in having the DNS do new things.

# 5.1. Private DNS Deployments
# public DNS tree. There are two motivations for this: in the first
# place, proprietary non-standard parameters can easily be integrated
# into DNS queries or responses; secondly, confidentiality and custom
# responses can be provided by deploying, respectively, underlying VPNs
# to shield the private tree from public queries, and effectively
# different virtual DNS trees for each administrative entity that might
# launch a query. In these constrained environments, caching and
# recursive resolvers can be managed or eliminated in order to prevent
# any unexpected intermediary behavior.

How about "three: the network is not connected to another network"? I could never justify the insane demand that be delegated in the ICANN-DNS tree to be used in a network with no connection to the global public internet. I'd rather there be an obvious clash in names if there was a breach, rather than trying to hide it. I prefer hard failures in some instances.

# While these deployments address the requirements of applications that
# rely on them, practically by definition these techniques will not
# form the basis of a standard solution. Moreover, as implementations
# come to support these proprietary parameters, it seems almost certain
# that these private techniques will begin to leak into the public DNS.
# Therefore, keeping these features within higher-layer applications
# rather than offloading them to the DNS is preferred.

You make "standard solution" sound like the driving force. What happened to "running code and rough consensus?"

# 6. Security Considerations

Not worth reviewing.

I'm sorry, but this is a rather useless document. I think the architectural de-construction is wrong, the understanding of the perceived problems wrong, and the goals stuck in the old days, putting the IETF's self-interest ahead of the Internet's.
Edward Lewis
NeuStar. You can leave a voice message at +1-571-434-5468

Ever get the feeling that someday if you google for your own life story, you'll find that someone has already written it and it's on sale at Amazon?

Change History (1)

comment:1 Changed 12 years ago by jon.peterson@…

  • Resolution set to fixed
  • Status changed from new to closed

responded to comments - will make numerous small fixes based on the comments, but given the broader objections to the choice of scope of the overall document they probably won't satisfy the reviewer. ultimately, these review comments suggest we need to be more clear in the arguments made in the document, as many seem to argue against views the document does not uphold. general sense in the comments that the IAB should address different architectural questions than these, and that the set of problems arising from DDDS/ENUM are marginal and should not be considered exemplary when crafting architectural principles

Note: See TracTickets for help on using tickets.