Changes between Initial Version and Version 1 of Ticket #22, comment 5


Timestamp: 08/12/19 12:20:33 (3 years ago)
Author: moeller0@…
  • Ticket #22, comment 5

    Addendum: this RTT-dependent failure to share equitably between the two queues is also documented in [http://bobbriscoe.net/projects/latency/dctth_journal_draft20190726.pdf], figure 8, in the "Normalized rate per flow" panels. Note how DUALPI2 (DCTCP+Cubic) does a considerably worse job than FQCODEL (DCTCP+Cubic) across a wide range of RTT differences, but for my point, comparing the data points where both traffic types have a 5 ms RTT is sufficient. I note that this paper uses DCTCP (which is not in scope for internet-wide roll-out), but the issue is really independent of the precise flows in the two queues, as it is the job of the dual-queue system to share bandwidth properly even when adversarial/non-responsive flows enter the mix (and all examples given already show a catastrophic failure with responsive, non-adversarial traffic).
    According to members of the L4S team this failure has long been known, but I reject the notion that a long-documented bug is a "feature", and hence would like to hear plans for how to address this issue inside L4S (expecting all TCPs to be replaced to fix this issue is not an option, especially not for moving this experiment into the wider internet, where essentially 100% of TCP endpoints will not be L4S-aware).
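    A back-of-envelope sketch of why the sharing depends on RTT (my own illustration, not from the ticket or the paper): assume the textbook steady-state rate models, r ∝ 1/(RTT·sqrt(p)) for classic 1/sqrt(p) traffic and r ∝ 1/(RTT·p) for scalable 1/p traffic, together with a DualPI2-style coupling p_classic = p'² and p_L4S = k·p'. All constants of proportionality are dropped, so only the scaling behaviour is meaningful, not the absolute ratio: the base probability p' cancels out of the rate ratio, but any RTT mismatch between the two queues translates directly into a rate mismatch.

    ```python
    import math

    def rate_ratio(rtt_classic_ms, rtt_l4s_ms, p_base=0.01, k=2.0):
        """Classic-to-L4S rate ratio under textbook rate models and a
        DualPI2-style coupled AQM. Constants of proportionality are
        dropped, so only the scaling with RTT (and the independence
        from p_base) is meaningful, not the absolute value."""
        p_classic = p_base ** 2        # coupled classic drop probability
        p_l4s = k * p_base             # coupled L4S mark probability
        r_classic = 1.0 / (rtt_classic_ms * math.sqrt(p_classic))
        r_l4s = 1.0 / (rtt_l4s_ms * p_l4s)
        return r_classic / r_l4s

    # Equal 5 ms RTTs: the p' dependence cancels, leaving a constant.
    print(rate_ratio(5, 5))
    # 10x RTT mismatch: the imbalance scales with the RTT ratio.
    print(rate_ratio(50, 5))
    ```

    The point of the sketch is that the coupling only cancels the congestion-signal dependence; nothing in it compensates for differing RTTs, which is consistent with the figure-8 behaviour described above.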
    *) This issue is also not helped by the default choice of 1 ms and 15 ms AQM target delays (the acceptable standing queue), since theory predicts that for the targeted ~100 ms internet-scale RTT a 5 ms target is sufficient for 1/sqrt(p) traffic, while for the considerably shorter RTTs in the scenarios that highlight dualq's failure the 1/sqrt(p) target should be well below 1 ms.
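    The target-delay argument can be made concrete with the CoDel-style rule of thumb that the target standing queue should be a small fraction of the flows' RTT; CoDel's defaults (5 ms target, 100 ms interval) correspond to 5%. The helper below is my illustration of that scaling, not part of the ticket:

    ```python
    def sqrt_p_target_ms(rtt_ms, fraction=0.05):
        """AQM target delay for 1/sqrt(p) traffic as a fraction of the RTT.
        The 5% default mirrors CoDel's 5 ms target / 100 ms interval."""
        return fraction * rtt_ms

    print(sqrt_p_target_ms(100))  # ~100 ms internet-scale RTT -> 5 ms target
    print(sqrt_p_target_ms(10))   # short-RTT scenario -> well below 1 ms
    ```

    On this reasoning a fixed 15 ms target is far too loose for the 1/sqrt(p) queue at internet-scale RTTs, and looser still at the short RTTs discussed above.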