source: draft-ietf-lwig-coap.mkd

---
title: CoAP Implementation Guidance
# abbrev: coap-impl
docname: draft-ietf-lwig-coap-03
date: 2015-07-06

stand_alone: true

ipr: trust200902
area: Internet
wg: LWIG Working Group
kw: Internet-Draft
cat: info

pi:
  toc: yes
  tocdepth: 3
  sortrefs: yes
  symrefs: yes
author:
      -
        ins: M. Kovatsch
        name: Matthias Kovatsch
        org: ETH Zurich
        street: Universitätstrasse 6
        city: CH-8092 Zurich
        country: Switzerland
        email:
      -
        ins: O. Bergmann
        name: Olaf Bergmann
        org: Universitaet Bremen TZI
        street: Postfach 330440
        city: D-28359 Bremen
        country: Germany
        email:
      -
        role: editor
        ins: C. Bormann
        name: Carsten Bormann
        org: Universitaet Bremen TZI
        street: Postfach 330440
        city: D-28359 Bremen
        country: Germany
        phone: +49-421-218-63921
        email:
normative:
  RFC7230: http
  RFC6570:
  RFC6633:
  RFC6282:
  RFC7252: coap
  I-D.ietf-core-observe: observe
  I-D.ietf-core-block: block
  I-D.bormann-core-cocoa: cocoa

informative:
  RFC7228: terminology
  RFC3542: ipv6api
  I-D.silverajan-core-coap-alternative-transports: alttrans
  I-D.savolainen-core-coap-websockets: websock
  I-D.tschofenig-core-coap-tcp-tls: coaptcp
  I-D.becker-core-coap-sms-gprs: coapsms
  TinyOS:
    title: "TinyOS: An Operating System for Sensor Networks"
    author:
      - ins: P. Levis
      - ins: S. Madden
      - ins: J. Polastre
      - ins: R. Szewczyk
      - ins: K. Whitehouse
      - ins: A. Woo
      - ins: D. Gay
      - ins: J. Hill
      - ins: M. Welsh
      - ins: E. Brewer
      - ins: D. Culler
    date: 2005
    seriesinfo:
      Ambient intelligence, Springer: (Berlin Heidelberg)
      ISBN: 978-3-540-27139-0
  Contiki:
    title: Contiki - a Lightweight and Flexible Operating System for Tiny Networked Sensors
    date: 2004-11
    author:
      -
        ins: A. Dunkels
        name: Adam Dunkels
      -
        ins: B. Grönvall
        name: Björn Grönvall
      -
        ins: T. Voigt
        name: Thiemo Voigt
    seriesinfo:
      Proceedings of the First IEEE Workshop: on Embedded Networked Sensors
--- abstract

The Constrained Application Protocol (CoAP) is designed for
resource-constrained nodes and networks such as sensor nodes in a low-power
lossy network (LLN). Yet to implement this Internet protocol on
Class 1 devices (as per RFC 7228, ~10 KiB of RAM and ~100 KiB of ROM),
lightweight implementation techniques are necessary. This document provides
lessons learned from implementing CoAP for tiny, battery-operated networked
embedded systems. In particular, it provides guidance on correct implementation
of the CoAP specification RFC 7252, memory optimizations, and
customized protocol parameters.
--- middle

{:cent: artwork-align="center"}
<!-- Note: xml2rfc --html ignores align attributes on artwork -->
# Introduction

The Constrained Application Protocol {{-coap}} has been designed
specifically for machine-to-machine communication in networks with very
constrained nodes.  Typical application scenarios therefore include building
automation, process optimization, and the Internet of Things. The major design
objectives are small protocol overhead, robustness against packet
loss, and robustness against the high latency induced by small bandwidth shares
or slow request processing in end nodes.  To leverage integration of constrained nodes
with the world-wide Internet, the protocol design was led by the REST
architectural style that accounts for the scalability and robustness of the
Hypertext Transfer Protocol {{-http}}.
Lightweight implementations benefit from this design in many
respects: First, the use of Uniform Resource Identifiers (URIs) for
naming resources and the transparent forwarding of their
representations in a server-stateless request/response protocol make
protocol translation to HTTP a straightforward task.  Second, the
set of protocol elements that are unavoidable for the core protocol,
and thus must be implemented on every node, has been kept very small,
minimizing the unnecessary accumulation of "optional" features.  Options
that -- when present -- are critical for message processing are
explicitly marked as such to force immediate rejection of messages
with unknown critical options.  Third, the syntax of protocol data
units is easy to parse and is carefully defined to avoid creation of
state in servers where possible.
Although these features enable lightweight implementations of the
Constrained Application Protocol, there is still a tradeoff between
robustness and latency of constrained nodes on one hand and resource
demands on the other.
For constrained nodes of Class 1 or even Class 2 {{-terminology}}, the most limiting
factors usually are dynamic memory needs, static code size, and energy.
Most implementations therefore need to optimize internal buffer usage,
omit idle protocol features, and maximize sleeping cycles.

The present document gives possible strategies to solve this tradeoff
for very constrained nodes (i.e., Class 1).
For this, it provides guidance on correct implementation of the
CoAP specification {{-coap}}, memory optimizations, and
customized protocol parameters.
# Protocol Implementation

In the programming styles supported by very simple operating systems as
found on constrained nodes, preemptive multi-threading is not an option.
Instead, all operations are triggered by an event loop system, e.g.,
in a send-receive-dispatch cycle.
It is also common practice to allocate memory statically to ensure stable
behavior, as no memory management unit (MMU) or other abstractions are
available.  For a CoAP node, the two key parameters for memory usage are
the number of (re)transmission buffers and the maximum message size that
must be supported by each buffer.  Often the maximum message size is set
far below the 1280-byte MTU of 6LoWPAN to allow more than one open Confirmable
transmission at a time (in particular for parallel observe
notifications {{-observe}}).
Note that implementations on constrained platforms often do not even
support the full MTU.  Larger messages must then use blockwise
transfers {{-block}}, while a good tradeoff between
6LoWPAN fragmentation and CoAP header overhead must be found.
Usually the amount of available free RAM dominates this decision.
For Class 1 devices, the maximum message size is typically 128 or 256
bytes of (blockwise) payload plus an estimate of the maximum header size
for the worst-case option setting.
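
Such a static buffer layout can be sketched as follows. All sizes here
are illustrative assumptions (in particular the worst-case option
estimate, which depends entirely on the application's option use):

```c
#include <stdint.h>

/* Illustrative static buffer sizing for a Class 1 device. */
#define BLOCK_PAYLOAD_SIZE  128  /* blockwise payload (power of two)  */
#define BASE_HEADER_SIZE      4  /* fixed CoAP base header            */
#define MAX_TOKEN_SIZE        8  /* maximum Token length              */
#define MAX_OPTIONS_SIZE     48  /* worst-case option setting (guess) */
#define PAYLOAD_MARKER_SIZE   1  /* 0xFF separator before the payload */

#define MSG_BUF_SIZE (BASE_HEADER_SIZE + MAX_TOKEN_SIZE + \
                      MAX_OPTIONS_SIZE + PAYLOAD_MARKER_SIZE + \
                      BLOCK_PAYLOAD_SIZE)

/* Two statically allocated buffers allow two open Confirmable
 * transmissions at a time. */
static uint8_t msg_buf[2][MSG_BUF_SIZE];
```

With these numbers each buffer occupies 189 bytes, so the messaging
layer consumes well under 1 KiB of the RAM budget of a Class 1 device.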
## Client/Server Model {#client-server}

In general, CoAP servers can be implemented more efficiently than clients.
REST allows them to keep the communication stateless, and piggy-backed
responses are not stored for retransmission, saving buffer space.
The use of idempotent requests also allows relaxing deduplication, which
further decreases memory usage.
It is also easy to estimate the required maximum size of message buffers,
since URI paths, supported options, and maximum payload sizes of the
application are known at compile time. Hence, when the application is
distributed over constrained and unconstrained nodes, the constrained ones
should preferably have the server role.

HTTP-based applications have established an inverse model because of the
need for simple push notifications: A constrained client uses POST requests
to update resources on an unconstrained server whenever an event (e.g., a
new sensor reading) is triggered. This requirement is addressed by the Observe
option {{I-D.ietf-core-observe}} of CoAP. It allows servers to initiate
communication and send push notifications to interested client nodes. This
allows a more efficient and also more natural model for CoAP-based
applications, where the information source is an origin server, which can also
benefit from caching.
## Message Processing

Apart from the required buffers, message processing is symmetric for clients
and servers. First the 4-byte base header has to be parsed and checked
for whether it is a CoAP message. Since the encoding is very dense, only a wrong
version or a datagram size smaller than four bytes identifies non-CoAP
datagrams. These need to be silently ignored. Other message format errors,
such as an incomplete datagram or the usage of reserved values, may need to be
rejected with a Reset (RST) message (see Sections 4.2 and 4.3 of
{{-coap}} for details).
Next the Token is read based on the TKL
field. For the options following, there are two alternatives: either
process them on the fly when an option is accessed or initially parse
all values into an internal data structure.
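
As a sketch, parsing the 4-byte base header described above could look
like this (the function and type names are hypothetical, not taken from
any particular implementation):

```c
#include <stddef.h>
#include <stdint.h>

#define COAP_VERSION 1

typedef struct {
    uint8_t  version;  /* must be 1 */
    uint8_t  type;     /* 0=CON, 1=NON, 2=ACK, 3=RST */
    uint8_t  tkl;      /* Token length, 0..8 */
    uint8_t  code;     /* e.g., 0.01 (GET) = 0x01 */
    uint16_t mid;      /* Message ID, network byte order on the wire */
} coap_hdr_t;

/* Returns 0 on success, -1 for datagrams that must be silently
 * ignored (too short or wrong version), and -2 for format errors
 * that may warrant a Reset (reserved TKL values 9..15). */
int coap_parse_base_header(const uint8_t *buf, size_t len, coap_hdr_t *hdr)
{
    if (len < 4)
        return -1;                      /* not a CoAP message: ignore */
    hdr->version = buf[0] >> 6;
    if (hdr->version != COAP_VERSION)
        return -1;                      /* unknown version: ignore */
    hdr->type = (buf[0] >> 4) & 0x3;
    hdr->tkl  = buf[0] & 0xF;
    if (hdr->tkl > 8)
        return -2;                      /* reserved TKL: reject with RST */
    hdr->code = buf[1];
    hdr->mid  = (uint16_t)((buf[2] << 8) | buf[3]);
    return 0;
}
```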
### On-the-fly Processing

The advantage of on-the-fly processing is that no additional memory needs to
be allocated to store the option values, which are stored efficiently inline
in the buffer for incoming messages. Once the message is
accepted for further processing, the set of options contained in the received
message must be decoded to check for unknown critical options. To avoid
multiple passes through the option list, the option parser might maintain a
bit-vector where each bit represents an option number that is present in the
received request. With the wide and sparse range of option numbers, the number
itself cannot be used to indicate the number of left-shift operations to mask
the corresponding bit. Hence, an implementation-specific enum of supported
options should be used to mask the present options of a message in the bitmap.
In addition, the byte index of every option (a direct pointer) can be added
to a sparse list (e.g., a one-dimensional array) for fast retrieval.

This particularly enables efficient handling of options that might occur more
than once, such as Uri-Path. In this implementation strategy, the delta is zero
for any subsequent path segment; hence, the stored byte index for this option
(e.g., 11 for Uri-Path) would be overwritten to hold a pointer to only the last
occurrence of that option. The Uri-Path can be resolved on the fly, though,
and a pointer to the targeted resource stored directly in the sparse list.

<!--What does this mean?-->
<!--In simpler cases, conditionals can preselect one of the repeated option values.-->

Once the option list has been processed, all known critical options and all
elective options can be masked out in the bit-vector to determine if any
unknown critical option was present. If this is the case, this information can
be used to create a 4.02 response accordingly. Note that full processing must
only be done up to the highest supported option number. Beyond that, only the
least significant bit (Critical or Elective) needs to be checked. Otherwise,
if all critical options are supported, the sparse list of option pointers is
used for further handling of the message.
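
The bit-vector technique can be sketched as follows. The enum of
supported options is implementation-specific by design; the selection
of option names below is purely illustrative:

```c
#include <stdint.h>

/* Implementation-specific enum of supported options: the enum value,
 * not the sparse CoAP option number, selects the bit in the vector. */
enum supported_option {
    OPT_IF_MATCH,        /* CoAP option number  1 */
    OPT_URI_HOST,        /* CoAP option number  3 */
    OPT_URI_PATH,        /* CoAP option number 11 */
    OPT_CONTENT_FORMAT,  /* CoAP option number 12 */
    OPT_URI_QUERY,       /* CoAP option number 15 */
    OPT_COUNT
};

typedef uint32_t opt_bitmap_t;

static inline void opt_set(opt_bitmap_t *map, enum supported_option o)
{
    *map |= (opt_bitmap_t)1u << o;
}

static inline int opt_present(opt_bitmap_t map, enum supported_option o)
{
    return (map >> o) & 1u;
}

/* Beyond the highest supported option number, only this check is
 * needed: CoAP marks Critical options with a set least significant
 * bit of the option number. */
static inline int opt_is_critical(unsigned option_number)
{
    return option_number & 1u;
}
```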
### Internal Data Structure

Using an internal data structure for all parsed options has an advantage when
working on the option values, as they are already in a variable of the
corresponding type (e.g., an integer in host byte order). The incoming payload
and byte strings of the header can be accessed directly in the buffer for
incoming messages using pointers (similar to on-the-fly processing). This
approach also benefits from a bitmap. Otherwise special values must be
reserved to encode an unset option, which might require a larger type than
required for the actual value range (e.g., a 32-bit integer instead of 16-bit).
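
For illustration, a presence bitmap distinguishes "unset" from a
legitimate zero value without widening the field. The struct layout and
names below are hypothetical:

```c
#include <stddef.h>
#include <stdint.h>

#define PKT_HAS_CONTENT_FORMAT (1u << 0)

/* Hypothetical parsed-message structure: integers in host byte
 * order, byte strings as pointers into the (shared) receive buffer. */
typedef struct {
    uint32_t present;          /* bitmap of options seen in the message */
    uint8_t  token[8];
    uint8_t  token_len;
    uint16_t content_format;   /* valid only if its presence bit is set */
    const uint8_t *uri_path;   /* points into the receive buffer */
    size_t   uri_path_len;
    const uint8_t *payload;
    size_t   payload_len;
} coap_pkt_t;

static inline void pkt_set_content_format(coap_pkt_t *p, uint16_t cf)
{
    p->content_format = cf;
    p->present |= PKT_HAS_CONTENT_FORMAT;
}
```

Note that Content-Format 0 (text/plain) remains distinguishable from an
absent option, which a sentinel value in a 16-bit field could not do.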
Many of the byte strings (e.g., the URI) are usually not required when generating the
response. When all important values are copied (e.g., the Token, which needs
to be mirrored), the internal data structure
facilitates using the buffer for incoming messages also for the assembly of
outgoing messages -- which can be the shared IP buffer provided by the OS.

Setting options for outgoing messages is also easier with an internal data
structure. Application developers can set options independent of the option
number and do not need to care about the order for the delta encoding. The CoAP
encoding is applied in a serialization step before sending. In contrast, assembling
outgoing messages with on-the-fly processing requires either extensive memmove
operations to insert new options, or restrictions for developers to set
options in their correct order.
## Message ID Usage

Many applications of CoAP use unreliable transports, in particular
UDP, which can lose, reorder, and duplicate messages. Although DTLS's
replay protection deals with duplication by the network, losses are
addressed with DTLS retransmissions only for the handshake protocol
and not for the application data protocol. Furthermore, CoAP
implementations usually send CON retransmissions in new DTLS records,
which are not considered duplicates at the DTLS layer.
### Duplicate Rejection

CoAP's messaging sub-layer has been designed with protocol functionality such
that rejection of duplicate messages is always possible. It is realized through
the Message IDs (MIDs) and their lifetimes with regard to the message type.

Duplicate detection is under the discretion of the recipient (see
Section 4.5 of {{-coap}}, {{relaxation-on-the-server}}, {{relaxation-on-the-client}}).
Where it is desired, the receiver needs to keep track of MIDs to
filter the duplicates for at least NON_LIFETIME (145 s).
This time also holds for CON messages, since it equals the possible reception
window (MAX_TRANSMIT_SPAN plus MAX_LATENCY).
On the sender side, MIDs of CON messages must not be re-used within
the EXCHANGE_LIFETIME; MIDs of NONs, respectively, within the
NON_LIFETIME.  In typical scenarios, however, senders will re-use MIDs
with intervals far larger than these lifetimes: with sequential
assignment of MIDs, coming close to them would require 250 messages
per second, much more than the bandwidth of constrained networks would
usually allow for.

In cases where senders might come closer to the maximum message rate, it is
recommended to use more conservative timings for the re-use of MIDs.
Otherwise, opposite inaccuracies in the clocks of sender and recipient
may lead to obscure message loss.
If needed, higher rates can be achieved by using multiple endpoints
for sending requests and managing the local MID per remote endpoint
instead of a single counter per system (essentially extending the
16-bit message ID by a 16-bit port number and/or a 128-bit IP
address).  In controlled scenarios, such as real-time applications
over industrial Ethernet, the protocol parameters can also be tweaked
to achieve higher message rates ({{parameters}}).
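
The per-endpoint MID management mentioned above can be sketched as
follows (names are hypothetical). Note that with a 16-bit MID space and
an EXCHANGE_LIFETIME of 247 s, sequential assignment wraps safely only
below 65536 / 247 ≈ 265 messages per second per endpoint:

```c
#include <stdint.h>

/* Hypothetical per-endpoint message-layer state: one MID counter per
 * remote endpoint instead of a single counter for the whole system. */
typedef struct {
    uint16_t next_mid;  /* should be seeded randomly at startup */
} endpoint_state_t;

static uint16_t mid_alloc(endpoint_state_t *ep)
{
    return ep->next_mid++;  /* wraps naturally after 65536 messages */
}
```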
### MID Namespaces

MIDs are assigned under the control of the originator of CON and NON
messages, and they do not mix with the MIDs assigned by the peer for
CON and NON in the opposite direction. Hence, CoAP implementors
need to make sure to manage different namespaces for the MIDs used for
deduplication. MIDs of outgoing CONs and NONs belong to the local endpoint; so
do the MIDs of incoming ACKs and RSTs. Accordingly, MIDs of incoming CONs and
NONs and outgoing ACKs and RSTs belong to the corresponding remote endpoint.
{{mid_namespace}} depicts a scenario where mixing the namespaces would
cause erroneous filtering.

~~~
                  Client              Server
                     |                  |
                     |   CON [0x1234]   |
                     +----------------->|
                     |                  |
                     |   ACK [0x1234]   |
                     |<-----------------+
                     |                  |
                     |   CON [0x4711]   |
                     |<-----------------+ Separate response
                     |                  |
                     |   ACK [0x4711]   |
                     +----------------->|
                     |                  |
A request follows that uses the same MID as the last separate response
                     |                  |
                     |   CON [0x4711]   |
                     +----------------->|
Response is filtered |                  |
  because MID 0x4711 |   ACK [0x4711]   |
     is still in the X<-----------------+ Piggy-backed response
  deduplication list |                  |
~~~
{: cent #mid_namespace title="Deduplication must manage the MIDs in different namespaces corresponding to their origin endpoints."}
### Relaxation on the Server

Using the deduplication functionality is at the discretion of the receiver: Processing of duplicate
messages comes at a cost, but so does the management of the state associated
with duplicate rejection. The number of remote endpoints that need to be
managed might be vast. This can be costly in particular for less constrained
nodes that have throughput on the order of hundreds of thousands of requests per
second (which needs about 16 GiB of RAM just for duplicate rejection). Deduplication
is also heavy for servers on Class 1 devices, as piggy-backed responses also
need to be stored for the case that the ACK message is lost.
Hence, a receiver may have good reasons to decide not to perform deduplication.
This behavior is possible when the application is designed with idempotent
operations only and makes good use of the If-Match/If-None-Match options.

If duplicate rejection is indeed necessary (e.g., for non-idempotent
requests), it is important to control the amount of state that needs to
be stored. It can be reduced, for instance, by deduplication at the resource level:
Knowledge of the application and supported representations can minimize the
amount of state that needs to be kept.
### Relaxation on the Client

Duplicate rejection on the client side can be simplified by choosing clever
Tokens that are virtually never re-used
(e.g., through an obfuscated sequence number in the Token value) and by
filtering based only on the list of open Tokens.
If a client wants to re-use Tokens (e.g., the empty Token for optimizations),
it requires strict duplicate rejection based on MIDs to avoid the scenario
outlined in {{token_reuse}}.

~~~
                  Client              Server
                     |                  |
                     |   CON [0x7a10]   |
                     |    GET /temp     |
                     |   (Token 0x23)   |
                     +----------------->|
                     |                  |
                     |   ACK [0x7a10]   |
                     |<-----------------+
                     |                  |
                     ... Time Passes  ...
                     |                  |
                     |   CON [0x23bb]   |
                     |  4.04 Not Found  |
                     |   (Token 0x23)   |
                     |<-----------------+
                     |                  |
                     |   ACK [0x23bb]   |
                     +--------X         |
                     |                  |
                     |   CON [0x7a11]   |
                     |   GET /resource  |
                     |   (Token 0x23)   |
                     +----------------->|
                     |                  |
                     |   CON [0x23bb]   |
 Causing an implicit |  4.04 Not Found  |
  acknowledgement if |   (Token 0x23)   |
not filtered through X<-----------------+ Retransmission
 duplicate rejection |                  |
~~~
{: cent #token_reuse title="Re-using Tokens requires strict duplicate rejection."}
## Token Usage

Tokens are chosen by the client and help to identify request/response pairs
that span several message exchanges (e.g., a separate response, which has a new MID).
Servers do not generate Tokens and only mirror what they receive from the
clients. Tokens must be unique within the namespace of a client throughout their
lifetime. This begins when being assigned to a request and ends when the open
request is closed by receiving and matching the final response. Neither empty
ACKs nor notifications (i.e., responses carrying the Observe option) terminate
the lifetime of a Token.

As already mentioned, a clever assignment of Tokens can help to simplify
duplicate rejection. Yet this is also important for coping with client crashes.
When a client restarts during an open request and (unknowingly) re-uses the
same Token, it might match the response from the previous request to the
current one. Hence, when only the Token is used for matching, which is always
the case for separate responses, randomized Tokens with enough entropy should
be used. The 8-byte range for Tokens even allows for one-time usage throughout
the lifetime of a client node. When DTLS is used, client crashes/restarts
will lead to a new security handshake, thereby solving the problem of
mismatching responses and/or notifications.
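
An obfuscated sequence number as described above can be sketched as
follows (a minimal sketch; the obfuscation key would come from a
platform entropy source, for which no particular API is implied):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical Token generator: a 64-bit sequence number XORed with
 * a per-boot random key yields Tokens that are virtually never
 * re-used during the client's lifetime. */
static uint64_t token_seq;

static size_t token_next(uint8_t out[8], uint64_t obfuscation_key)
{
    uint64_t t = token_seq++ ^ obfuscation_key;
    for (size_t i = 0; i < 8; i++)
        out[i] = (uint8_t)(t >> (8 * i));
    return 8;  /* TKL to use in the request */
}
```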
### Tokens for Observe

In the case of Observe {{-observe}}, a request will be answered
with multiple notifications, and it is important to continue keeping
track of the Token that was used for the request -- its lifetime will
end much later.
Upon establishing an Observe relationship, the Token is
registered at the server. Hence, the client's use of that specific
Token is now limited to controlling the Observation relationship.
A client can use it to cancel the relationship, which frees the Token
upon success (i.e., the message with an Observe Option with the value set to
'deregister' (1) is confirmed with a response; see
Section 3.6 of {{I-D.ietf-core-observe}}). However, the client might never
receive the response due to a temporary network outage or, worse, a server crash.
Although a network outage will also affect notifications, so that the Observe
garbage collection could apply, the server might simply happen not to send CON
notifications during that time. Alternative Observe lifetime models such as
Stubbornness(tm) might also keep relationships alive for longer periods.

Thus, it is best to carefully choose the Token value used with Observe requests.
(The empty value will rarely be applicable.)
One option is to assign and re-use dedicated Tokens for each Observe
relationship the client will establish.  The choice of Token values is also
critical in NoSec mode, to limit the effectiveness of spoofing
attacks.  Here, the recommendation is to use randomized Tokens with a
length of at least four bytes (see Section 5.3.1 of {{-coap}}). Thus,
dedicated ranges within the 8-byte Token
space should be used when in NoSec mode. This also solves the problem of
mismatching notifications after a client crash/restart.

When the client wishes to reinforce its interest in a resource, maybe
not really being sure whether the server has forgotten it or not, the Token
value allocated to the Observe relationship is used to
re-register that observation (see Section 3.3.1 of {{-observe}} for
details): If the server is still aware of the
relationship (an entry with a matching endpoint and token is already
present in its list of observers for the resource), it will not add a
new relationship but will replace or update the existing one (Section
4.1 of {{-observe}}).  If not, it will simply establish a new
registration, which of course also uses the Token value.

If the client sends an Observe request for the same resource with a
new Token, this is not a
protocol violation, because the specification allows the client to
observe the same resource in a different Observe relationship if the
cache-key is different (e.g., requesting a different Content-Format).
If the cache-key is not different, though, an additional Observe
relationship just wastes the server's resources and is therefore not
allowed; the server might rely on this for its housekeeping.
### Tokens for Blockwise Transfers

In general, blockwise transfers are independent of the Token and are
correlated through the client endpoint address and the server address and resource path
(destination URI). Thus, each block may be transferred using a different Token.
Still, it can be beneficial to use the same Token (it is freed upon reception
of a response block) for all blocks, e.g., to easily route received blocks to
the same response handler.

When Block2 is combined with Observe,
notifications only carry the first block, and it is up to the client to retrieve
the remaining ones. These GET requests do not carry the Observe option
and need to
use a different Token, since the Token from the notification is still in use.
## Transmission States {#fsms}

CoAP endpoints must keep transmission state to manage open requests, to handle
the different response modes, and to implement reliable delivery at the message
layer. The following finite state machines (FSMs) model the transmissions of a
CoAP exchange at the request/response layer and the message layer. These layers
are linked through actions. The M_CMD() action triggers a corresponding
transition at the message layer and the RR_EVT() action triggers a transition
at the request/response layer. The FSMs also use guard conditions to
distinguish between information that is only available through the other layer
(e.g., whether a request was sent using a CON or NON message).
### Request/Response Layer

{{fsm_rr_c}} depicts the two states at the request/response layer of a
CoAP client. When a request is issued, a "reliable_send" or "unreliable_send"
is triggered at the message layer. The WAITING state can be left through three
transitions: Either the client cancels the request and triggers cancellation of
a CON transmission at the message layer, the client receives a failure event
from the message layer, or a receive event containing a response.

~~~
    +------------CANCEL-------------------------------+
    |        / M_CMD(cancel)                          |
    |                                                 V
    |                                              +------+
+-------+ -------RR_EVT(fail)--------------------> |      |
|WAITING|                                          | IDLE |
+-------+ -------RR_EVT(rx)[is Response]---------> |      |
    ^                / M_CMD(accept)               +------+
    |                                                 |
    +--------------------REQUEST----------------------+
               / M_CMD((un)reliable_send)
~~~
{: cent #fsm_rr_c title="CoAP Client Request/Response Layer FSM"}
A server resource can decide at the request/response layer whether to respond
with a piggy-backed or a separate response. Thus, there are two busy states in
{{fsm_rr_s}}, SERVING and SEPARATE. An incoming receive event with a NON
request directly triggers the transition to the SEPARATE state.

~~~
+--------+ <----------RR_EVT(rx)[is NON]---------- +------+
|SEPARATE|                                         |      |
+--------+ ----------------RESPONSE--------------> | IDLE |
    ^            / M_CMD((un)reliable_send)        |      |
    |                                        +---> +------+
    |EMPTY_ACK                               |         |
    |/M_CMD(accept)                          |         |
    |                                        |         |
    |                                        |         |
+--------+                                   |         |
|SERVING | --------------RESPONSE------------+         |
+--------+          / M_CMD(accept)                    |
    ^                                                  |
    +------------------------RR_EVT(rx)[is CON]--------+
~~~
{: cent #fsm_rr_s title="CoAP Server Request/Response Layer FSM"}
### Message Layer

{{fsm_m}} shows the different states of a CoAP endpoint per message exchange.
Besides the linking action RR_EVT(), the message layer has a TX action to send
a message. For sending and receiving NONs, the endpoint remains in its CLOSED
state. When sending a CON, the endpoint remains in RELIABLE\_TX and keeps
retransmitting until the transmission times out, it receives a matching RST,
the request/response layer cancels the transmission, or the endpoint receives
an implicit acknowledgement through a matching NON or CON. Whenever the
endpoint receives a CON, it transitions into the ACK_PENDING state, which can
be left by sending the corresponding ACK.

~~~
+-----------+ <-------M_CMD(reliable_send)-----+
|           |            / TX(con)              \
|           |                                +--------------+
|           | ---TIMEOUT(RETX_WINDOW)------> |              |
|RELIABLE_TX|     / RR_EVT(fail)             |              |
|           | ---------------------RX_RST--> |              | <----+
|           |               / RR_EVT(fail)   |              |      |
+-----------+ ----M_CMD(cancel)------------> |    CLOSED    |      |
 ^  |  |  \  \                               |              | --+  |
 |  |  |   \  +-------------------RX_ACK---> |              |   |  |
 +*1+  |    \                / RR_EVT(rx)    |              |   |  |
       |     +----RX_NON-------------------> +--------------+   |  |
       |       / RR_EVT(rx)                  ^ ^ ^ ^  | | | |   |  |
       |                                     | | | |  | | | |   |  |
       |                                     | | | +*2+ | | |   |  |
       |                                     | | +--*3--+ | |   |  |
       |                                     | +----*4----+ |   |  |
       |                                     +------*5------+   |  |
       |                +---------------+                       |  |
       |                |  ACK_PENDING  | <--RX_CON-------------+  |
       +----RX_CON----> |               |  / RR_EVT(rx)            |
         / RR_EVT(rx)   +---------------+ ---------M_CMD(accept)---+
                                                     / TX(ack)

*2: M_CMD(unreliable_send) / TX(non)
*3: RX_NON / RR_EVT(rx)
*5: RX_ACK
~~~
{: cent #fsm_m title="CoAP Message Layer FSM"}
T.B.D.: (i) Rejecting messages (can be triggered at message and request/response
layer). (ii) ACKs can also be triggered at both layers.
## Taxonomy of Cases

This section was removed, as it is unclear whether it is needed.
Maybe single interesting cases can be picked for further explanation.
Restore the figures from the SVN (Rev. 12).
## Out-of-band Information

The CoAP implementation can also leverage out-of-band information, which might
also trigger some of the transitions shown in {{fsms}}. In particular, ICMP
messages can inform about unreachable remote endpoints or whole network
outages. This information can be used to pause or cancel ongoing transmissions
to conserve energy. Providing ICMP information to the CoAP implementation is
easier in constrained environments, where developers usually can adapt the
underlying OS (or firmware). This is not the case on general-purpose platforms
that have full-fledged OSes and make use of high-level programming frameworks.

The most important ICMP messages are host, network, port, or protocol
unreachable errors. After appropriate vetting (cf. {{?RFC5927}}),
they should cause the cancellation of ongoing CON
transmissions and the clearing (or deferral) of Observe relationships. Requests
to this destination should be paused for a sensible interval. In addition, the
device could indicate this error through a notification to a management
endpoint or an external status indicator, since the cause could be a
misconfiguration or general unavailability of the required service. Problems
reported through the Parameter Problem message are usually caused by a similar
fundamental problem.

The CoAP specification recommends ignoring Source Quench and Time Exceeded
ICMP messages, though. Source Quench messages were originally intended
to tell the sender to reduce its
packet rate. However, this mechanism has been deprecated by {{RFC6633}}.
CoAP also comes with its own congestion control mechanism, which is already
designed conservatively. One advanced mechanism that can be employed for better
network utilization is CoCoA {{I-D.bormann-core-cocoa}}. Time
Exceeded messages often occur during transient routing loops (unless
they are caused by a too-small initial
Hop Limit value).

## Programming Model

The event-driven approach, which is common in event-loop-based firmware, has
also proven very efficient for embedded operating systems {{TinyOS}},
{{Contiki}}. Note that an OS is not necessarily required; a traditional
firmware approach can suffice for Class 1 devices. Event-driven systems use
split-phase operations (i.e., there are no blocking functions: a function
returns immediately, and an event handler is called once a long-lasting
operation completes) to enable cooperative multi-threading with a single stack.

Bringing a Web transfer protocol to constrained environments changes not
only the networking of the corresponding systems, but also the
programming model. The complexity of event-driven systems can be hidden
behind APIs that resemble classic RESTful Web service implementations.

### Client

An API for asynchronous requests with response handler functions goes
hand-in-hand with the event-driven approach. Synchronous requests with a
blocking send function can facilitate applications that require strictly
ordered, sequential request execution (e.g., to control a physical process) or
other checkpointing (e.g., starting operation only after registration with the
resource directory was successful). However, this can also be solved by
triggering the next operation in the response handlers. Furthermore, as
mentioned in {{client-server}}, it is more likely that complex control flow is
handled by more powerful devices, while Class 1 devices predominantly run a
CoAP server (which might include a minimal client to communicate with a
resource directory).

### Server

On CoAP servers, the event-driven nature can be hidden behind resource handler
abstractions as known from traditional REST frameworks. The following types of
RESTful resources have proven useful to provide an intuitive API on constrained
event-driven systems:

Normal resource
: A normal resource is defined by a static Uri-Path and an associated resource
  handler function.  Allowed methods could already be filtered by the
  implementation based on flags.  This is the basis for all other resource
  types.

Parent resource
: A parent resource manages several sub-resources under a given base path
  by programmatically evaluating the Uri-Path.  Defining a URI template (see
  {{RFC6570}}) would be a convenient way to pre-parse arguments given
  in the Uri-Path.

Periodic resource
: A periodic resource has an additional handler function that is
  triggered periodically by the CoAP implementation with a resource-specific
  interval.  It can be used to sample a sensor or perform
  similar periodic updates of the resource state.  Usually, a periodic resource
  is observable and sends notifications by triggering its normal resource
  handler from the periodic handler.  Such periodic tasks are quite common
  for sensor nodes, so it makes sense to provide this functionality in the
  CoAP implementation and avoid redundant code in every resource.

Event resource
: An event resource is similar to a periodic resource, except
  that its second handler is called upon an irregular event such as a
  button press.

Separate resource
: Separate responses are usually used when handling a request takes more time,
  e.g., due to a slow sensor or UART-based subsystems.  To avoid blocking the
  system during this time, the handler should also employ split-phase
  execution: the resource handler returns as soon as possible, and an event
  handler resumes responding when the result is ready.  The separate resource
  type can abstract from the split-phase operation and take care of temporarily
  storing the request information that is required later in the result handler
  to send the response (e.g., source address and Token).
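
As an illustration, the resource types above can be mapped onto a small C
structure with handler function pointers.  All names, flags, and the runtime
`resource_tick()` hook in this sketch are hypothetical and not taken from any
particular CoAP implementation:

```c
#include <stdint.h>

/* Hypothetical resource abstraction; names and flags are illustrative. */
#define METHOD_GET  (1 << 0)
#define METHOD_POST (1 << 1)

typedef struct resource {
  const char *uri_path;            /* static Uri-Path, e.g. "sensors/temp" */
  uint8_t allowed_methods;         /* methods pre-filtered by the stack */
  void (*handler)(struct resource *r);          /* normal REST handler */
  void (*periodic_handler)(struct resource *r); /* e.g. sensor sampling */
  uint32_t period_ms;              /* interval for the periodic handler */
  int observable;                  /* periodic resources usually are */
} resource_t;

static int samples;
static void temp_get(resource_t *r)    { (void)r; /* serialize reading */ }
static void temp_sample(resource_t *r) { (void)r; samples++; }

static resource_t temp = {
  "sensors/temp", METHOD_GET, temp_get, temp_sample, 10000, 1
};

/* Called by the (hypothetical) CoAP runtime when a resource timer fires;
 * the periodic handler may in turn trigger an Observe notification. */
static void resource_tick(resource_t *r, uint32_t elapsed_ms)
{
  if (r->periodic_handler && elapsed_ms >= r->period_ms)
    r->periodic_handler(r);
}
```

An event resource would use the same second handler slot, called from an
interrupt-driven event source instead of a timer.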

# Optimizations

## Message Buffers

The cooperative multi-threading of an event-loop system makes it possible to
optimize memory usage through in-place processing and reuse of buffers, in
particular the IP buffer provided by the OS or firmware.

CoAP servers can significantly benefit from in-place processing, as
they can create responses directly in the incoming IP buffer. Note that an
embedded OS usually has only a single buffer for incoming and outgoing IP
packets. The first few bytes of the basic header are usually parsed into
an internal data structure and can be overwritten without harm.
Thus, empty ACKs and RST messages can promptly be assembled and sent using
the IP buffer. Also, when a CoAP server only sends piggy-backed or
Non-confirmable responses, no additional buffer is required at the application
layer. This, however, requires careful timing so that no incoming data is
overwritten before it has been processed. Because of cooperative
multi-threading, this requirement is relaxed, though: once the message is
sent, the IP buffer can accept new messages again. This does not work for
Confirmable messages, however; they need to be stored for retransmission and
would block any further IP communication.
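
The in-place assembly of an empty ACK can be sketched in C against the
RFC 7252 header layout; the function name and buffer handling here are
illustrative only:

```c
#include <stddef.h>
#include <stdint.h>

/* Rewrite a received CoAP message into an empty ACK in place, reusing the
 * incoming IP buffer (header layout per RFC 7252, Section 3).
 * Returns the new message length: an Empty message is the 4-byte header. */
static size_t make_empty_ack(uint8_t *buf)
{
  buf[0] = (uint8_t)((buf[0] & 0xC0)  /* keep the Version bits */
                     | (2u << 4));    /* Type = ACK (2), TKL = 0 */
  buf[1] = 0x00;                      /* Code = 0.00 (Empty) */
  /* buf[2..3]: Message ID stays untouched so the ACK matches the CON */
  return 4;
}
```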

Depending on the number of requests that can be handled in
parallel, an implementation might create a stub response filled with
any option that has to be copied from the original request to the
separate response, especially the Token.  The drawback of
this technique is that the server must be prepared to receive
retransmissions of the previous (Confirmable) request, to which a new
acknowledgement must be generated.  If memory is an issue, a single
buffer can be used for both tasks: only the message type and code
must be updated; changing the Message ID is optional.  Once the
resource representation is known, it is added as new payload at the
end of the stub response.  Acknowledgements can still be sent as
described before, as long as no additional options are required to
describe the payload.

## Retransmissions

CoAP's reliable transmissions require the aforementioned
retransmission buffers.  Messages, such as the requests of a client,
should be stored in serialized form.  For servers, retransmissions
apply to Confirmable separate responses and Confirmable
notifications {{I-D.ietf-core-observe}}.  As separate responses stem
from long-lasting resource handlers, the response should be stored
for retransmission instead of re-dispatching a stored request (which
would allow for updating the representation).  For Confirmable
notifications, please see Section 2.6, as simply storing the response
can break the concept of eventual consistency.

String payloads such as JSON require a buffer to print to.  By
splitting the retransmission buffer into a header part and a payload
part, it can be reused: first the payload is generated, and then the
CoAP message is stored by serializing the header into the same memory.
Thus, providing a retransmission buffer for any message type can save
the need for a separate application buffer.  This, however, requires
an estimate of the maximum expected header size to split the buffer,
and a memmove to concatenate the two parts.
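
A minimal sketch of this split-buffer technique, with an assumed maximum
header size and a stand-in `serialize_header()` (a real implementation would
serialize the actual message header and options):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_HDR 16  /* assumed upper bound for header + options */

static uint8_t txbuf[128];  /* single retransmission buffer */

/* Stand-in for a real header serializer: ACK, 2.05 Content, MID 0x1234,
 * followed by the payload marker 0xFF. */
static size_t serialize_header(uint8_t *dst)
{
  const uint8_t hdr[] = { 0x60, 0x45, 0x12, 0x34, 0xFF };
  memcpy(dst, hdr, sizeof hdr);
  return sizeof hdr;
}

static size_t build_message(int temperature)
{
  /* 1. Print the payload into the upper part of the buffer. */
  int plen = snprintf((char *)txbuf + MAX_HDR, sizeof txbuf - MAX_HDR,
                      "{\"temp\":%d}", temperature);
  /* 2. Serialize the header at the start of the same buffer. */
  size_t hlen = serialize_header(txbuf);
  /* 3. Concatenate the two parts with a single memmove. */
  memmove(txbuf + hlen, txbuf + MAX_HDR, (size_t)plen);
  return hlen + (size_t)plen;
}
```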

For platforms that disable clock tick interrupts in sleep states, the
application must take into consideration the clock deviation that occurs
during sleep (or ensure that the node remains in idle state until the message
has been acknowledged or the maximum number of retransmissions is reached).
Since CoAP allows up to four retransmissions with a binary
exponential back-off, it could take up to 45 seconds until the send
operation is complete.  Even in idle state, this means substantial
energy consumption for low-power nodes.  Implementers therefore
might choose a two-step strategy: first, do one or two
retransmissions, and then, in the later phases of back-off, go to sleep
until the next retransmission is due. In the meantime, the node could
check for new messages, including the acknowledgement for any
Confirmable message to send.
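
The 45-second figure follows from the default transmission parameters of
{{-coap}} (ACK_TIMEOUT = 2 s, ACK_RANDOM_FACTOR = 1.5, MAX_RETRANSMIT = 4);
the following worked calculation assumes "complete" means the last
retransmission has been sent:

```c
/* With ACK_TIMEOUT = 2 s and ACK_RANDOM_FACTOR = 1.5, the worst-case
 * initial timeout is 3 s; it doubles before each of the MAX_RETRANSMIT = 4
 * retransmissions. */
static int seconds_until_last_retransmission(void)
{
  int timeout = 3;                 /* 2 s * 1.5, worst case */
  int elapsed = 0;
  for (int i = 0; i < 4; i++) {    /* MAX_RETRANSMIT */
    elapsed += timeout;            /* retransmit when the timer expires */
    timeout *= 2;                  /* binary exponential back-off */
  }
  return elapsed;                  /* 3 + 6 + 12 + 24 = 45 */
}
```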

## Observable Resources

For each observer, the server needs to store at least the address, port,
token, and the last outgoing Message ID.  The latter is needed to match
incoming RST messages and cancel the observe relationship.

It is favorable to have one retransmission buffer per observable resource
that is shared among all observers.  Each notification is serialized once
into this buffer, and only the address, port, and token are changed when
iterating over the observer list (note that different token lengths
might require realignment).  The advantage becomes clear for
Confirmable notifications: instead of one retransmission buffer per
observer, only one buffer is needed, and only individual retransmission
counters and timers have to be stored in the list entries.  When the
notifications can be sent fast enough, even a single timer would
suffice.  Furthermore, per-resource buffers simplify the update with
a new resource state during open deliveries.

## Blockwise Transfers

Blockwise transfers have the main purpose of providing fragmentation
at the application layer, where partial information can be processed.
This is not possible at lower layers such as 6LoWPAN, as only
assembled packets can be passed up the stack.  While
{{-block}} also anticipates atomic handling of blocks,
i.e., only fully received CoAP messages, this is not possible on
Class 1 devices.

When receiving a blockwise transfer, each block is usually passed to
a handler function that, for instance, performs stream processing or
writes the blocks to external memory such as flash.  Although there
are no restrictions in {{-block}}, it is beneficial for
Class 1 devices to only allow ordered transmission of blocks.
Otherwise, on-the-fly processing would not be possible.

When sending a blockwise transfer out of dynamically generated information,
Class 1 devices usually do not have sufficient memory to print the full
message into a buffer and slice and send it in a second step.  For instance,
if the CoRE Link Format at /.well-known/core is dynamically generated, a
generator function is required that generates slices of a large string with a
specific offset and length (a 'sonprintf()').  This functionality is required
recurrently and should be included in a library.
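
Such a generator function might look as follows; the function name and the
fixed link-format sample standing in for dynamic content are purely
illustrative:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical 'sonprintf()'-style generator: copy at most `len` bytes of a
 * dynamically generated string, starting at byte `offset`, into `buf`, so a
 * blockwise response can be produced slice by slice without a full buffer.
 * Returns the number of bytes placed in this block (0 = past the end). */
static size_t generate_slice(char *buf, size_t len, size_t offset)
{
  static const char body[] =
    "</sensors/temp>;rt=\"temperature\",</sensors/light>;rt=\"lux\"";
  const size_t total = sizeof body - 1;

  if (offset >= total)
    return 0;
  size_t n = total - offset;
  if (n > len)
    n = len;                     /* clamp to the requested block size */
  memcpy(buf, body + offset, n);
  return n;
}
```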

## Deduplication with Sequential MIDs

CoAP's duplicate rejection functionality can be straightforwardly
implemented in a CoAP
endpoint by storing, for each remote CoAP endpoint ("peer") that it
communicates with, a list of recently received CoAP Message IDs (MIDs)
along with some timing information.
A CoAP message from a peer with a MID that is in the list for that peer
can simply be discarded.

The timing information in the list can then be used to time out
entries that are older than the _expected extent of the re-ordering_,
an upper bound for which can be estimated by adding the _potential
retransmission window_ ({{-coap}} section "Reliable
Messages") and the time packets can stay alive in the network.
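
A minimal sketch of such a per-peer MID list in C; the slot count and window
length are illustrative choices, not requirements:

```c
#include <stdint.h>

#define DEDUP_WINDOW  50   /* seconds; assumed expected extent of re-ordering */
#define DEDUP_SLOTS   16   /* per-peer list size (illustrative) */

/* Straightforward per-peer duplicate rejection: a small list of recently
 * seen MIDs with receive times; entries older than the window are reused. */
struct mid_entry { uint16_t mid; uint32_t when; int used; };
struct peer { struct mid_entry seen[DEDUP_SLOTS]; };

/* Returns 1 if the message is a duplicate and should be discarded. */
static int is_duplicate(struct peer *p, uint16_t mid, uint32_t now)
{
  int free_slot = -1;
  for (int i = 0; i < DEDUP_SLOTS; i++) {
    if (p->seen[i].used && now - p->seen[i].when > DEDUP_WINDOW)
      p->seen[i].used = 0;                  /* timed out, slot reusable */
    if (p->seen[i].used && p->seen[i].mid == mid)
      return 1;                             /* recently seen: duplicate */
    if (!p->seen[i].used)
      free_slot = i;
  }
  if (free_slot >= 0)
    p->seen[free_slot] = (struct mid_entry){ mid, now, 1 };
  return 0;
}
```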

Such a straightforward implementation is suitable in case other CoAP
endpoints generate random MIDs. However, this storage method may
consume substantial RAM in specific cases, such as:

 - many clients are making periodic, non-idempotent requests to a
   single CoAP server;
 - one client makes periodic requests to a large number of CoAP
   servers and/or requests a large number of resources, where the servers
   happen to mostly generate separate CoAP responses (not piggy-backed).

For example, consider the first case where the expected extent of re-ordering
is 50 seconds, and N clients are sending periodic POST
requests to a single CoAP server during a period of high system
activity, each on average sending one client request per second.
The server would need 100 * N bytes of RAM to store the MIDs only.
This amount of RAM may be significant on a RAM-constrained
platform. On a number of platforms, it may be easier to allocate some
extra program memory (e.g. Flash or ROM) to the CoAP protocol handler
process than to allocate extra RAM. Therefore, one may try to reduce
RAM usage of a CoAP implementation at the cost of some additional
program memory usage and implementation complexity.

Some CoAP clients generate MID values by using a Message ID variable
{{-coap}} that is incremented by one each time a new MID
needs to be generated.  (After the maximum value 65535, it wraps back
to 0.)  We call this behavior "sequential" MIDs.  One approach to
reduce RAM use exploits the redundancy in sequential MIDs for a more
efficient MID storage in CoAP servers.

Naturally such an approach requires, in order to actually reduce RAM
usage in an implementation, that a large part of the peers follow the
sequential MID behavior. To realize this optimization, the authors
therefore RECOMMEND that CoAP endpoint implementers employ the
"sequential MID" scheme if there are no reasons to prefer another
scheme, such as randomly generated MID values.

Security considerations might call for a choice for (pseudo)randomized
MIDs. Note however that with truly randomly generated MIDs the
probability of MID collision is rather high in use cases as mentioned
before, following from the Birthday Paradox. For example, in a
sequence of 52 randomly drawn 16-bit values the probability of finding
at least two identical values is about 2 percent.

From here on, we consider efficient storage implementations for MIDs in
CoAP endpoints that are optimized to store "sequential"
MIDs. Because CoAP messages may be lost or arrive out of order, a
solution has to take into
account that the MIDs of received CoAP messages do not actually arrive
in a sequential fashion. Also, a
peer might reset and lose its MID counter(s) state. In addition, a
peer may have a single Message ID variable used in messages to many
CoAP endpoints it communicates with, which partly breaks
sequentiality from the receiving CoAP endpoint's
perspective. Finally, some peers might use randomly generated MID
values. Due to these specific conditions, existing sliding-window
bitfield implementations for storing received sequence numbers
are typically not directly suitable for efficiently storing MIDs.

{{mid-store}} shows one example for a per-peer MID storage design: a
table with a bitfield of a defined length _K_ per entry to store
received MIDs (one per bit) that have a value in the range
\[MID\_i + 1 , MID\_i + K\].

| MID base | K-bit bitfield | base time value |
|----------|----------------|-----------------|
| MID_0    |   010010101001 | t_0             |
| MID_1    |   111101110111 | t_1             |
| ... etc. |                |                 |
{: #mid-store title="A per-peer table for storing MIDs based on MID_i"}

The presence of a table row with base MID\_i (regardless of the
bitfield values) indicates that a value MID\_i has been received at a
time t\_i.  Subsequently, each bitfield bit k (0...K-1) in a row i
corresponds to a received MID value of MID\_i + k + 1. If a bit k is
0, it means a message with the corresponding MID has not yet been
received. A bit 1 indicates such a message has been received already
at approximately time t\_i. This storage structure allows, e.g., with
K=64, to store in the best case up to 130 MID values using 20 bytes, as
opposed to the 260 bytes that would be needed for a non-sequential storage
scheme.

The time values t\_i are used for removing rows from the table
after a preset timeout period, to keep the MID store small in size and
enable these MIDs to be safely re-used in future communications.
(Note that the table only stores one time value per row, which
therefore needs to be updated on receipt of another MID that is stored
as a single bit in this row.  As a consequence of only storing one
time value per row, older MID entries typically time out later than
with a simple per-MID time value storage scheme.  The endpoint
therefore needs to ensure that this additional delay before MID
entries are removed from the table is much smaller than the time
period after which a peer starts to re-use MID values due to
wrap-around of a peer's MID variable. One solution is to check that a
value t\_i in a table row is still recent enough, before using the row
and updating the
value t\_i to current time. If not recent enough, e.g. older than N
seconds, a new row with an empty bitfield is created.)

\[Clearly, these optimizations would benefit if the peer were much more conservative about re-using MIDs than currently required in the protocol specification.\]
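
The row lookup and bit manipulation can be sketched as follows, here with
K = 16 and a fixed-size table; the table size and the handling of a full
table are illustrative simplifications:

```c
#include <stdint.h>

/* Sketch of the bitfield table above with K = 16 bits per row. */
#define K     16
#define ROWS  4

struct mid_row { uint16_t base; uint16_t bits; uint32_t t; int used; };
static struct mid_row table[ROWS];

/* Record MID as seen; returns 1 if it was already present (duplicate). */
static int mid_seen(uint16_t mid, uint32_t now)
{
  for (int i = 0; i < ROWS; i++) {
    if (!table[i].used) continue;
    uint16_t delta = (uint16_t)(mid - table[i].base);
    if (delta == 0)
      return 1;                               /* the base MID itself */
    if (delta >= 1 && delta <= K) {
      uint16_t bit = (uint16_t)(1u << (delta - 1));
      if (table[i].bits & bit) return 1;      /* bit already set: duplicate */
      table[i].bits |= bit;
      table[i].t = now;                       /* single time value per row */
      return 0;
    }
  }
  for (int i = 0; i < ROWS; i++) {            /* no row covers it: new base */
    if (!table[i].used) {
      table[i] = (struct mid_row){ mid, 0, now, 1 };
      return 0;
    }
  }
  return 0;  /* table full: a real implementation would evict a stale row */
}
```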

The optimization described is less efficient for storing randomized
MIDs that a CoAP endpoint may encounter from certain peers.  To solve
this, a storage algorithm may start in a simple MID storage mode,
first assuming that the peer produces non-sequential MIDs. While
storing MIDs, a heuristic is then applied based on monitoring some
"hit rate", for example, the number of MIDs received that have a Most
Significant Byte equal to that of the previous MID divided by the
total number of MIDs received.  If the hit rate tends towards 1 over a
period of time, the MID store may decide that this particular CoAP
endpoint uses sequential MIDs and in response improve efficiency by
switching its mode to the bitfield based storage.

# Alternative Configurations

## Transmission Parameters {#parameters}

When a constrained network of CoAP nodes is not communicating over the
Internet, for instance because it is shielded by a proxy or a
closed deployment, alternative transmission parameters can be used.
Consequently, the derived time values provided in {{-coap}}
section 4.8.2 will also need to be adjusted, since most implementations
will encode their absolute values.

Static adjustments require a fixed deployment with a constant number or upper
bound for the number of nodes, number of hops, and expected concurrent
transmissions. Furthermore, the stability of the wireless links should be
evaluated. ACK_TIMEOUT should be chosen above the xx percentile of the
round-trip time distribution. ACK_RANDOM_FACTOR depends on the number of nodes
on the network. MAX_RETRANSMIT should be chosen to suit the targeted
application. A lower bound for LEISURE can be calculated as

    lb_Leisure = S * G / R

where S is the estimated response size, G the group size, and R the target data
transfer rate (see {{-coap}} section 8.2). NSTART and
PROBING_RATE depend on estimated network utilization. If the main cause for
loss are weak links, higher values can be chosen.
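
For illustration, plugging in the example values from {{-coap}} section 8.2
(S = 1 KiB, G = 100, R = 8 KiB/s) yields a lower bound of 12.5 seconds:

```c
/* lb_Leisure = S * G / R, with S in bytes, G dimensionless, R in bytes/s.
 * Returns the lower bound for LEISURE in seconds. */
static double lb_leisure(double s, double g, double r)
{
  return s * g / r;  /* e.g. 1024 * 100 / 8192 = 12.5 */
}
```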

Dynamic adjustments will be performed by advanced congestion control mechanisms
such as {{I-D.bormann-core-cocoa}}. They are required if the main cause for
message loss is network or endpoint congestion. Semi-dynamic adjustments
could be implemented by disseminating new static transmission parameters to
all nodes when the network configuration changes (e.g., new nodes are added
or long-lasting interference is detected).

## CoAP over IPv4

CoAP was designed for the properties of IPv6, which is dominant in constrained
environments because of the 6LoWPAN adaptation layer {{RFC6282}}. In
particular, the size limitations of CoAP are tailored to the minimal MTU of
1280 bytes. Until the transition towards IPv6 converges, CoAP nodes might also
communicate over IPv4, though. Sections 4.2 and 4.6 of the base specification
{{-coap}} already provide guidance and implementation notes to
handle the smaller minimal MTUs of IPv4.

Another deployment issue in legacy IPv4 deployments is caused by Network
Address Translators (NATs). Their session timeouts are unpredictable, and NATs
may close UDP sessions with timeouts as short as 60 seconds. This makes CoAP
endpoints behind NATs practically unreachable, even when they contact the
remote endpoint with a public IP address first. Incorrect behavior may also
arise when the NAT session heuristic changes the external port between two
successive CoAP messages. For the remote endpoint, this will look like two
different CoAP endpoints on the same IP address. Such behavior can be fatal
for the resource directory registration interface.

# Binding to specific lower-layer APIs

Implementing CoAP on specific lower-layer APIs appears to consistently
bring up certain less-known aspects of these APIs.  This section is
intended to alert implementers to such aspects.

## Berkeley Socket Interface

### Responding from the right address

In order for a client to recognize a reply (response or
acknowledgement) as coming from the endpoint to which the initiating
packet was addressed, the source IPv6 address of the reply needs to
match the destination address of the initiating packet.

Implementers that have previously written TCP-based applications are
used to binding their server sockets to INADDR_ANY.  Any TCP
connection received over such a socket is then more specifically bound
to the source address from which the TCP connection setup was
received; no programmer action is needed for this.

For stateless UDP sockets, more manual work is required.
Simply receiving a packet from a UDP socket bound to INADDR_ANY loses
the information about the destination address; replying to it through
the same socket will use the default address established by the kernel.
Two strategies are available:

* Only use sockets bound to a specific address (not INADDR_ANY).  A
  system with multiple interfaces (or addresses) will thus need to
  bind multiple sockets and send replies back on the same socket the
  initiating packet was received on.

* Use IPV6_RECVPKTINFO {{RFC3542}} to configure the socket, and mirror
  back the IPV6_PKTINFO information for the reply (see also
  {{managing-interfaces}}).
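
The second strategy can be sketched as follows for a Linux/glibc environment;
error handling is abbreviated and the helper names are illustrative:

```c
#define _GNU_SOURCE          /* for struct in6_pktinfo on glibc */
#include <stdint.h>
#include <string.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Enable reception of IPV6_PKTINFO ancillary data so a server bound to
 * in6addr_any can learn the destination address of each datagram and
 * mirror it back as the source of the reply (RFC 3542). */
static int enable_pktinfo(int fd)
{
  int on = 1;
  return setsockopt(fd, IPPROTO_IPV6, IPV6_RECVPKTINFO, &on, sizeof on);
}

/* When replying, attach the stored in6_pktinfo as a control message so
 * the kernel uses the original destination address as the source. */
static void prepare_reply_cmsg(struct msghdr *msg, uint8_t *cbuf,
                               size_t clen, const struct in6_pktinfo *pi)
{
  msg->msg_control = cbuf;
  msg->msg_controllen = clen;
  struct cmsghdr *cm = CMSG_FIRSTHDR(msg);
  if (cm == NULL)
    return;                           /* control buffer too small */
  cm->cmsg_level = IPPROTO_IPV6;
  cm->cmsg_type = IPV6_PKTINFO;
  cm->cmsg_len = CMSG_LEN(sizeof *pi);
  memcpy(CMSG_DATA(cm), pi, sizeof *pi);
}
```

On the receive side, the matching `in6_pktinfo` is obtained by iterating
over the `cmsghdr` entries returned by `recvmsg()`.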

#### Managing interfaces

For some applications, it may further be relevant what interface is
chosen to send to an endpoint, beyond the kernel choosing one that has
a routing table entry for the destination address.
E.g., it may be natural to send out a response or acknowledgment on
the same interface on which the packet prompting it was received.
The end of the introduction to section 6 of {{RFC3542}} describes a
simple technique for this, where that RFC's API (IPV6_PKTINFO) is
used.

The same data structure can be used for indicating an interface to
send a packet that is initiating an exchange.  (Choosing that
interface is too application-specific to be in scope for the present
document.)

## Java

Java provides a wildcard address ( to bind a socket to all network
interfaces.
This is useful when a server is supposed to listen on any available
interface, including the loopback address.
For UDP, and hence CoAP, this poses a problem, however, because the
DatagramPacket class does not provide the information to which address it
was sent.
When replying through the wildcard socket, the JVM will pick the default
address, which can break the correlation of messages when the remote endpoint
did not send the message to the default address.
This is particularly precarious for IPv6, where it is common to have multiple
IP addresses per network interface.
Thus, it is recommended to bind to all addresses explicitly and manage the
destination address of incoming messages within the CoAP implementation.

## Multicast detection

Similar to the considerations above, Section 8 of {{RFC7252}} requires
a node to detect whether a packet that it is going to reply to was
sent to a unicast or to a multicast address.  On most platforms,
binding a UDP socket to a unicast address ensures that it only
receives packets addressed to that address.  Programmers relying on
this property should ensure that it indeed applies to the platform
they are using.
If it does not, IPV6_PKTINFO may, again, help for Berkeley Socket Interfaces.
For Java, explicit management of different sockets (in this case a
MulticastSocket) is required.

## DTLS

CoAPS implementations require access to the authenticated user/device
principal to realize access control for resources.
How this information can be accessed heavily depends on the DTLS
implementation used.
Generic and portable CoAP implementations might want to provide an abstraction
layer that can be used by application developers that implement resource
handlers.
It is recommended to keep the API of such an abstraction layer close to
popular HTTPS solutions that are available for the targeted platform, for
instance, mod_ssl or the Java Servlet API.

# CoAP on various transports

As specified in {{-coap}}, CoAP is defined for two underlying
transports: UDP and DTLS.  These transports are relatively similar in
terms of the properties they expose to their users.  (The main
difference, apart from the increased security, is that DTLS provides
an abstraction of a connection, into which the endpoint abstraction is
placed; in contrast, the UDP endpoint abstraction is based on
four-tuples of IP addresses and ports.)

Recently, the need to carry CoAP over other transports {{-alttrans}}
has led to specifications such as CoAP over TLS or TCP {{-coaptcp}} or
WebSockets {{-websock}}, or even over non-IP transports such as SMS
{{-coapsms}}.
This section discusses considerations that arise when handling these
different transports in an implementation.

## CoAP over reliable transports

To cope with transports without reliable delivery (such as UDP and
DTLS), CoAP defines its own message layer, with acknowledgments,
timers, and retransmission.  When CoAP is run over a transport that
provides its own reliability (such as TCP or TLS), running this
machinery would be redundant.  Worse, keeping the machinery in place
is likely to lead to interoperability problems as it is unlikely to be
tested as well as on unreliable transports.  Therefore, {{-alttrans}}
was defined by removing the message layer from CoAP and just running
the request/response layer directly on top of the reliable transport.
This also leads to a reduced (from the UDP/DTLS 4-byte header) header
size.

Conversely, where reliable transports provide a byte stream
abstraction, some form of message delimiting had to be added, which
now needs to be handled in the CoAP implementation.
The use of reliable transports may reduce the disincentive for using
messages larger than optimal link layer packet sizes.  Where different
message sizes are chosen by an application for reliable and for
unreliable transports, this can pose additional challenges for
translators ({{trans}}).

Where existing CoAP APIs expose details of the message layer
(e.g., CON vs. NON, or assigning application layer semantics to ACKs),
using a reliable transport may require additional adjustments.

## Translating between transports {#trans}

One obvious way to convey CoAP exchanges between different
transports is to run a CoAP proxy that supports both transports.
The usual considerations for proxies apply.  {{transprox}} discusses
some additional considerations.

Where not much of the functionality of CoAP proxies (such as caching)
is required, a simpler 1:1 translation may be possible, as discussed
in {{transonetoone}}.

### Transport translation by proxies {#transprox}

(TBD.  In particular, point out the obvious: fan-in/fan-out means that
separate message ID and token spaces need to be maintained at the ends
of the proxy.)

One more CoAP-specific function of a transport translator proxy may be
to convert between different block sizes, e.g. between a TCP
connection that can tolerate large blocks and UDP over a constrained
node network.

### One-to-one Transport translation {#transonetoone}

A translator with reduced requirements for state maintenance
can be constructed when no fan-in or fan-out is required, and when the
namespace lifetimes of the two sides can be made to coincide.
For this one-to-one translation, there is no need to manage message-ID
and Token value spaces for both sides separately.
So, a simple UDP-to-UDP one-to-one translator could simply copy the
messages (among other applications, this might be useful for
translation between IPv4 and IPv6 spaces).
Similarly, a DTLS-to-TCP translator could be built that executes the message
layer (deduplication, retransmission) on the DTLS side, and
repackages the CoAP header (add/remove the length information, and
remove/add the message ID and message type) between the DTLS and the TCP side.
<!-- ... more about connection management ... -->

By definition, such a simple one-to-one translator needs to shut down
the connection on one side when the connection on the other side
vanishes.
However, a UDP-to-TCP one-to-one translator cannot simply shut down
the UDP endpoint when the TCP endpoint vanishes because the TCP
connection closes, so some additional management of state will be
required.

# IANA considerations

This document has no actions for IANA.

# Security considerations

# Acknowledgements

Esko Dijk contributed the sequential MID optimization. Xuan He helped create
and improve the state machine charts.