Security Requirements for HTTP

Paul Hoffman, VPN Consortium <paul.hoffman@vpnc.org>
Alexey Melnikov, Isode Ltd. <alexey.melnikov@isode.com>

Recent IESG practice dictates that IETF protocols must specify
mandatory-to-implement security mechanisms, so that
all conformant implementations share a common baseline. This
document examines all widely deployed HTTP security
technologies, and analyzes the trade-offs of each.

Recent IESG practice dictates that IETF protocols are required to
specify mandatory-to-implement security mechanisms. "The IETF
Standards Process" does not require that
protocols specify mandatory security mechanisms. "Strong Security
Requirements for IETF Standard Protocols"
requires that all IETF protocols provide a mechanism for implementers
to provide strong security. RFC 3365 does not define the term "strong
security".

"Security Mechanisms for the Internet" is not an IETF procedural
RFC, but it is perhaps the most relevant; see in particular its
Section 2.2.

This document examines the effects of applying security constraints
to Web applications, documents the properties that result from each
method, and will make Best Current Practice recommendations for HTTP
security in a later document version. At the moment, it is mostly a
laundry list of security technologies and tradeoffs.

For HTTP, the IETF generally defines "security mechanisms" as some
combination of access authentication and/or a secure transport.

[[ There is a suggestion that this section be split into
"browser-like" and "automation-like" subsections. ]]

[[ NTLM (shudder) was brought up in the WG a few times in
the discussion of the -00 draft. Should we add a section on it? ]]

Almost all HTTP authentication that involves a human
using a web browser is accomplished through HTML forms,
with session identifiers stored in cookies. For cookies, most implementations
rely on the "Netscape specification", which is described loosely in
section 10 of "HTTP State Management Mechanism". The protocol in
RFC 2109 is relatively widely implemented, but most clients don't
advertise support for it. RFC 2109 was later updated, but the newer
version is not widely implemented.

Forms and cookies have many properties that make them an
excellent solution for some implementers. However, many of those
properties introduce serious security trade-offs.

HTML forms provide a large degree of control over presentation,
which is an imperative for many websites. However, this increases user
reliance on the appearance of the interface. Many users do not
understand the construction of URIs , or their
presentation in common clients. As a result, forms are extremely
vulnerable to spoofing.

HTML forms provide acceptable internationalization if used
carefully, at the cost of being transmitted as normal HTTP content in
all cases (credentials are not differentiated in the protocol).

HTML forms provide a facility for sites to indicate that a password
should never be pre-populated.
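As a sketch of that facility, a hypothetical login form might carry the `autocomplete` attribute, which asks the user agent not to store or pre-populate the credential field (the form action and field names here are illustrative, and user agents are free to ignore the hint):

```html
<!-- Hypothetical login form. autocomplete="off" requests that the
     user agent not store or pre-populate the password field; support
     varies by browser. The credentials are still submitted as
     ordinary form content in the request body. -->
<form action="/login" method="post">
  <input type="text" name="username">
  <input type="password" name="password" autocomplete="off">
  <input type="submit" value="Log in">
</form>
```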
[[ More needed here on autocomplete ]]

The cookies that result from a successful form submission make it
unnecessary to validate credentials with each HTTP request; this
gives cookies excellent scalability properties. Cookies are susceptible
to a large variety of XSS (cross-site scripting) attacks, and measures
to prevent such attacks will never be as stringent as necessary for
authentication credentials because cookies are used for many purposes.
Cookies are also susceptible to a wide variety of attacks from
malicious intermediaries and observers. The possible attacks depend on
the contents of the cookie data. There is no standard format for most
of the data.

HTML forms and cookies provide flexible ways of ending a session
from the client.

HTML forms require an HTML rendering engine, for which many protocols
have no use.

HTTP 1.1 provides a simple authentication framework, "HTTP
Authentication: Basic and Digest Access Authentication", which
defines two optional mechanisms. Both of these
mechanisms are extremely rarely used in comparison to forms and
cookies, but some degree of support for one or both is available in
many implementations. Neither scheme provides presentation control,
logout capabilities, or interoperable internationalization.

Basic Authentication (normally called just "Basic") transmits
usernames and passwords in the clear. It is very easy to implement,
but not at all secure unless used over a secure transport.

Basic has very poor scalability properties because credentials must
be revalidated with every request, and because secure transports
negate many of HTTP's caching mechanisms. Some implementations use
cookies in combination with Basic credentials, but there is no
standard method of doing so.

Since Basic credentials are clear text, they are reusable by any
party. This makes them compatible with any authentication database, at
the cost of making the user vulnerable to mismanaged or malicious
servers, even over a secure channel.

Basic is not interoperable when used with credentials that contain
characters outside of the ISO 8859-1 repertoire.

In Digest Authentication, the client transmits the results of
hashing user credentials with properties of the request and values
from the server challenge. Digest is susceptible to man-in-the-middle
attacks when not used over a secure transport.

Digest has some properties that are preferable to Basic and
Cookies. Credentials are not immediately reusable by parties that
observe or receive them, and session data can be transmitted
alongside credentials with each request, allowing servers to validate
credentials only when absolutely necessary. Authentication session
keys are distinct from other protocol traffic.

Digest includes many modes of operation, but only the simplest
modes enjoy any degree of interoperability. For example, most
implementations do not implement the mode that provides full message
integrity. Perhaps one reason is that implementation experience has
shown that in some cases, especially those involving large requests or
responses such as streams, the message integrity mode is impractical
because it requires servers to analyze the full request before
determining whether the client knows the shared secret or whether
message-body integrity has been violated and hence whether the request
can be processed.

Digest is extremely susceptible to offline dictionary attacks,
making it practical for attackers to perform a namespace walk
consisting of a few million passwords
[[ CITATION NEEDED ]].Many of the most widely-deployed HTTP/1.1 clients are not compliant
when GET requests include a query string .Digest either requires that authentication databases be expressly designed
to accommodate it, or requires access to cleartext passwords.
As a result, many authentication databases that choose to do the former are
incompatible, including the most common method of storing passwords
for use with Forms and Cookies.

Many Digest capabilities included to prevent replay attacks expose
the server to Denial of Service attacks.

Digest is not interoperable when used with credentials that contain
characters outside of the ISO 8859-1 repertoire.

There are many niche schemes that make use of the HTTP
Authentication framework, but very few are well documented. Some are
bound to transport layer connections.

[[ A discussion about "SPNEGO-based Kerberos and NTLM HTTP
Authentication in Microsoft Windows"
goes here. ]]

Many large Internet services rely on authentication schemes that
center on clients consulting a single service for a time-limited
ticket that is validated with undocumented heuristics. Centralized
ticket issuing has the advantage that users may employ one set of
credentials for many services, and clients don't send credentials to
many servers. This approach is often no more than a sophisticated
application of forms and cookies.

All of the schemes in wide use are proprietary and non-standard,
and usually are undocumented. There are many standardization efforts
in progress, as usual.

Many security properties mentioned in this document have been recast in
XML-based protocols, using HTTP as a substitute for TCP. Like the
amalgam of HTTP technologies mentioned above, the XML-based protocols
are defined by an ever-changing combination of standard and
vendor-produced specifications, some of which may be obsoleted at any
time without any documented change control
procedures. These protocols usually don't have much in common with the
Architecture of the World Wide Web. It's not clear why the term "Web" is
used to group them, but they are obviously out of scope for HTTP-based
application protocols.

[[ This section could really use a good definition of
"Web Services" to differentiate it from REST. ]]

[[ A discussion of HTTP over TLS needs to be added
here. ]]

[[ Discussion of connection confidentiality should be separate from
the discussion of access authentication based on mutual authentication with
certificates in TLS. ]]

It is possible that HTTP will be revised in the future. "HTTP/1.1"
and "Use and Interpretation of HTTP Version Numbers" define
conformance requirements in relation to version numbers. In HTTP 1.1,
all authentication mechanisms are optional, and no single transport
substrate is specified. Any HTTP revision that adds a mandatory
security mechanism or transport substrate will have to increment the
HTTP version number appropriately. All widely used schemes are
non-standard and/or proprietary.

This entire document is about security considerations.

References

Bradner, S., "The Internet Standards Process -- Revision 3",
RFC 2026, October 1996.

Kristol, D. and L. Montulli, "HTTP State Management Mechanism",
RFC 2109, February 1997.

Mogul, J., Fielding, R., Gettys, J., and H. Frystyk, "Use and
Interpretation of HTTP Version Numbers", RFC 2145, May 1997.

Fielding, R., Gettys, J., Mogul, J., Frystyk, H., Masinter, L.,
Leach, P., and T. Berners-Lee, "Hypertext Transfer Protocol --
HTTP/1.1", RFC 2616, June 1999.

Franks, J., Hallam-Baker, P., Hostetler, J., Lawrence, S., Leach, P.,
Luotonen, A., and L. Stewart, "HTTP Authentication: Basic and Digest
Access Authentication", RFC 2617, June 1999.

Kristol, D. and L. Montulli, "HTTP State Management Mechanism",
RFC 2965, October 2000.

Schiller, J., "Strong Security Requirements for Internet Engineering
Task Force Standard Protocols", RFC 3365, August 2002.

Bellovin, S., Schiller, J., and C. Kaufman, "Security Mechanisms for
the Internet", RFC 3631, December 2003.

Berners-Lee, T., Fielding, R., and L. Masinter, "Uniform Resource
Identifier (URI): Generic Syntax", RFC 3986, January 2005.

Jaganathan, K., Zhu, L., and J. Brezak, "SPNEGO-based Kerberos and
NTLM HTTP Authentication in Microsoft Windows", RFC 4559, June 2006.

"Apache HTTP Server - mod_auth_digest".

"Phishing Tips and Techniques".

"WS-Pagecount".

Much of the material in this document was written by Rob Sayre,
who first promoted the topic. Many others on the HTTPbis Working
Group have contributed to this document in the discussion.

[This entire section is to be removed when published as an RFC.]

Changed the authors to Paul Hoffman and Alexey Melnikov, with
permission of Rob Sayre.

Made lots of minor editorial changes.

Removed what was section 2 (Requirements Notation), the reference
to RFC 2119, and any use of 2119ish all-caps words.

In 3.2.1 and 3.2.2, changed "Latin-1 range" to "ISO 8859-1
repertoire" to match the definition of "TEXT" in RFC 2616.

Added minor text to the Security Considerations section.

Added URLs to the two non-RFC references.

Fixed some editorial nits reported by Iain Calder.

Added the suggestions about splitting for browsers and
automation, and about adding NTLM, to the beginning of 2.

In 2.1, added "that involves a human using a web browser" in the
first sentence.

In 2.1, changed "session key" to "session identifier".

In 2.2.2, changed one reference to another.

In 2.4, asked for a definition of "Web Services".

In A, added the WG.