Talk:CoDel

From Wikipedia, the free encyclopedia

FQ_Codel

Actually, OpenWrt switched to the fq_codel variant of CoDel, not plain CoDel. So have all the downstream users of OpenWrt (CeroWrt, DD-WRT, many distros based on OpenWrt, etc.). We desperately need to write up fq_codel for Wikipedia. We have RFCs and the like in progress.

You might want to cite some of the videos, if you haven't already, notably Van's.

http://www.bufferbloat.net/projects/cerowrt/wiki/Bloat-videos

We quote Van's comments on fq_codel a lot. (Note: CoDel is still great as an AQM, still winning against things like PIE; it's just that fq_codel is even better.)


--dtaht — Preceding unsigned comment added by 2001:470:E47C:2:94FF:244F:B798:6817 (talk) 16:24, 24 January 2014 (UTC)[reply]

OK, I corrected the timeline. And yea, we have to write something about fq_codel next. — Preceding unsigned comment added by Dtaht (talkcontribs) 20:24, 24 January 2014 (UTC)[reply]

CoDel

The algorithm is currently better known as CoDel. I propose a change of title once AfD has closed. --Kvng (talk) 14:28, 16 August 2012 (UTC)[reply]

You think? I had understood that the idea was to avoid contractions and abbreviations except in cases where the subject is essentially exclusively known by its abbreviation. So, for example, the article on NATO is called NATO, but in even slightly less exclusive cases like the UNHCR the actual article uses the full title United Nations High Commissioner for Refugees (and also UNHRC --> United Nations Human Rights Council and so on). So I followed suit and made the name of the real article Controlled Delay and added a redirect for CoDel. -- BenTels (talk) 22:27, 17 August 2012 (UTC)[reply]
No, CoDel. "Controlled delay" is too vague and timey-wimey, it could mean anything. Andy Dingley (talk) 23:00, 17 August 2012 (UTC)[reply]

 Done --Kvng (talk) 13:12, 22 August 2012 (UTC)[reply]

Incorrect Citation?

The sentence "He then suggested that a better metric might be the minimum amount of delay experienced by any packet in the sliding window of the buffer." has this: http://www.pollere.net/Pdfdocs/QrantJul06.pdf as citation, but I don't see that suggestion anywhere in that paper. — Preceding unsigned comment added by 121.121.81.136 (talk) 03:33, 1 December 2012 (UTC)[reply]

Addendum: Van Jacobson, David Taht and the rest were barking up the wrong tree until after 3 Dec 2011: https://dl.acm.org/doi/10.1145/2063166.2071893#comment-4466836527

In my opinion the actual solution to latency is not a reduction in buffer sizes, because the real problem isn't actually large buffers; the problem is devices holding on to packets longer than they should. Given the wide variations in bandwidth, it is easier to define "too high a delay" than it is to define "too large a buffer".

From the wiki article itself: "He (VJ) suggested that a better metric might be the minimum queue length during a sliding time window." Which is still wrong! Note that unlike delay, minimum queue length is NOT parameterless; it can vary a lot depending on the case. So it seems misleading in a CoDel article to cite a 2006 paper that is barking up the wrong tree, talking about queue lengths and saying "don't let queues stay full". With CoDel you can have queues of any length, from small to effectively infinite, that are never empty, and it doesn't matter; it's all about delay. The May 2012 paper should be fine, but note that it came AFTER the 2011 comment on the bufferbloat article, on which David Taht commented "The publication deadline for this missed the adoption of Byte Queue Limits (BQL)"; they were still fixated on bytes and sizes, nothing about time and delay. Only after that comment does Taht say: 'The concept of relying far more heavily on "Time in Queue", and expiring old packets, has great potential.' 121.123.81.152 (talk) 10:54, 29 June 2021 (UTC)[reply]
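
To make the delay-versus-length distinction concrete, here is a minimal sketch (made-up names, not the actual CoDel pseudocode) of measuring per-packet sojourn time by timestamping at enqueue and comparing it against a fixed delay target at dequeue; 5 ms is CoDel's published default target.

    import time
    from collections import deque

    TARGET = 0.005  # 5 ms delay target (CoDel's published default)

    class TimestampedQueue:
        """Toy FIFO that records each packet's enqueue time so the
        dequeue side can compute its sojourn time (time spent queued)."""

        def __init__(self):
            self.q = deque()

        def enqueue(self, packet):
            self.q.append((time.monotonic(), packet))

        def dequeue(self):
            enqueue_time, packet = self.q.popleft()
            sojourn = time.monotonic() - enqueue_time
            # The control decision is based on delay, not on queue length:
            # a long queue that drains quickly never exceeds the target.
            over_target = sojourn > TARGET
            return packet, sojourn, over_target

The point is that the queue's length never appears in the decision; only the time each packet actually waited does.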

Description of bufferbloat

"But if the buffer is too small, then the buffer will fill up and will itself not be able to accept more packets than one per RTT cycle. So the buffer will stabilize the rate at which packets enter the network link, but it will do so with a fixed delay for packets in the buffer. This is called bufferbloat".

Is this correct? I thought bufferbloat happened when the buffer was *too large* and the buffer fills faster than it can be emptied. I've gone ahead and changed it now, but I think that whole section could be clearer. Amaurea (talk) 13:52, 20 July 2013 (UTC)[reply]

That is wrong and that section is a mess. I'll come back and do some cleanup when I get a chance. ~KvnG 18:01, 23 July 2013 (UTC)[reply]
I've improved it further by rewriting some still not completely correct parts, and by removing the incorrect example which unfortunately had little to do with bufferbloat. Also, removed the article's "cleanup" tag. Please have a look and comment. Thank you. -- Dsimic (talk) 16:41, 27 September 2013 (UTC)[reply]
How didn't the example have anything to do with bufferbloat? It was based on a bufferbloat simulation (I can't seem to find it now, though). Did I completely misunderstand the issue? If so, what is the issue I was talking about called? Amaurea (talk) 15:15, 30 December 2013 (UTC)[reply]
Hello there! Regarding the removed example, here's a quote, for easier referencing:
For example, consider the situation where a server (S) is sending data to a client (C) via a buffer (B), i.e. S -> B -> C, and where the connection from S to B is 1000 kB/s and the one from B to C is only 100 kB/s. If the buffer size is 1 kB, then it will fill up in 1.1 ms, after which it will start dropping packets until S slows down to 100 kB/s, resulting in a steady state of 100 kB/s traffic with 10 ms delay due to the buffer. If, on the other hand, the buffer is 1000 kB large, then it will take slightly longer to fill (1.1 s), but after that it will reach the same steady state speed of 100 kB/s (limited by the connection from B to C), but with a much longer buffer delay of 10 s.
Basically, those delay values are pulled out of thin air, and that's the main reason why the example is practically misleading. At the same time, bufferbloat is more about issues involving multiple separate data flows happening at the same time, where one "heavy" data flow makes the other "light" data flow completely non-interactive, by introducing huge delays through oversized buffers. — Dsimic (talk) 23:25, 30 December 2013 (UTC)[reply]
I am not sure I follow. The delay values are computed based on the buffer size and the transfer speed; they aren't pulled out of thin air.
Perhaps I could have chosen more realistic speeds or buffer sizes, but I think it is valuable to have an example with some explicit numbers in it. The example did not mention the number of data flows, and would apply just as well to a situation with many flows as long as they all go through the same buffer. Did you mean that it is unclear from the example how the buffer delay is computed (buffer size divided by steady state transfer speed)? Amaurea (talk) 17:54, 4 January 2014 (UTC)[reply]
Please allow me to explain... In this example the overall data arrival delays actually aren't caused by the buffer, but by the difference in speeds of these two links, where one of them acts as the bottleneck. It's true there will be some minute additional delays because of the buffer itself, but due to GHz-rate speeds of CPUs in modern devices, such additional delays would be almost undetectable when compared to the primary delay caused by the difference in link speeds. That's all because there is only one data flow in the example, and no multiple flows competing to get attention from the buffers, so to speak. — Dsimic (talk) 18:25, 4 January 2014 (UTC)[reply]
Just to make sure we don't have a misunderstanding here: When I say that the buffer causes a delay, I'm not talking about the CPU time taken to process the buffer. That's tiny. I'm talking about the time it takes from when a packet arrives at the input side of the buffer until it has made its way to the output side of the buffer. This is limited by how fast the buffer can send. If the buffer is full (because of a bottleneck on the output side of the buffer), and assuming steady state, then the time to make it through the buffer will be proportional to the size of the buffer. So for the same link speeds on both sides, changing the buffer size will change the latency (but not the throughput). That is buffer bloat, and it happens even with only a single data flow. Amaurea (talk) 13:38, 14 February 2014 (UTC)[reply]
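
For the numbers in the quoted example, a minimal arithmetic sketch (assuming a single full FIFO draining at the bottleneck rate, as described above) of the latency that a full buffer adds:

    def steady_state_delay(buffer_bytes, egress_bytes_per_s):
        """Time a packet spends queued once the buffer is full and
        draining at the egress (bottleneck) rate."""
        return buffer_bytes / egress_bytes_per_s

    # Numbers from the quoted example: B -> C bottleneck of 100 kB/s.
    egress = 100 * 1000  # bytes per second

    print(steady_state_delay(1 * 1000, egress))     # 1 kB buffer    -> 0.01 s (10 ms)
    print(steady_state_delay(1000 * 1000, egress))  # 1000 kB buffer -> 10.0 s

Throughput is 100 kB/s in both cases; only the latency differs.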
Thanks for the clarification. You're right that the size of a buffer is directly related to the time a packet spends inside it, but there's next to no correlation between the buffer size (if we exclude very small sizes) and resulting performance when there's only one data flow. For example, if there's a 64 KB buffer and 1 MB of data to push through it in a single data flow, the total transmission time will be the same as if we had a 1 MB buffer (while keeping the link speed unchanged, of course); in the case of the 1 MB buffer, one part of the packets will spend more time inside the buffer, but the total transmission time will remain the same and no additional delay will be introduced.
Bufferbloat, as a phenomenon, is actually related to situations involving multiple data flows, where the time packets are spending inside buffers is what's causing one flow to kill another. For example, if there's a simple 1 MB FIFO buffer and we push a 768 KB chunk of an image to be uploaded into it (the bigger the buffer, the bigger the chunk can be, and thus the more time the chunk spends within the buffer), another SSH session (with its, say, 256 bytes of data) can't reach the outside world before that image chunk is completely out of the buffer; basically, that's bufferbloat. — Dsimic (talk | contribs) 20:52, 14 February 2014 (UTC)[reply]
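
A rough sketch of that head-of-line blocking effect, using the 768 KB chunk and 256-byte SSH packet from the comment above (the 100 kB/s bottleneck rate is my assumption, carried over from the earlier example):

    LINK_RATE = 100 * 1000   # bytes/s, assumed bottleneck rate
    BULK_CHUNK = 768 * 1024  # bytes of image data already queued in the FIFO
    SSH_PACKET = 256         # bytes of interactive traffic arriving behind it

    # In a plain FIFO, the SSH packet cannot leave until everything
    # queued ahead of it has drained at the link rate.
    wait = BULK_CHUNK / LINK_RATE
    print(f"SSH packet waits ~{wait:.1f} s behind the bulk chunk")  # ~7.9 s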
I agree that the situation you are describing, with two data flows, is an example of buffer bloat. But I don't agree with your claim that there needs to be more than one flow in order to experience problems from bufferbloat. You are basically saying that latency doesn't matter as long as you only have a single data stream, which isn't true. Consider the following situation: A server program is in a loop where it snaps a picture, sends it to a client, then snaps a new picture, sends that, and so on. It sends as fast as it can, so initially the frame rate will be high. But after buffers are filled up, packets will start to be dropped, and the server will start sending more slowly, reaching a steady state where it is sending as fast as the client can receive. At this point, the images the client receives will be subject to a delay as described before, depending on the size of the buffer. That is, the images will be (more or less) outdated before they reach the client due to the extra latency. So even with only one flow, with no two-way communication, latency is still an issue.
Bufferbloat is a problem with 1 stream, with 2 streams, with 3 streams, etc. The number of streams isn't really that important. It is the latency that matters. And of course the situation is complicated when you have multiple, separate buffers, traffic shaping, etc. But to explain something you have to start with the basics before elaborating with more details and special cases. Amaurea (talk) 16:27, 16 February 2014 (UTC)[reply]
The example you're referring to above is pretty much mixing apples and oranges, as one program can generate many simultaneous data flows. As a matter of fact, nowadays it is pretty hard to find a network link carrying strictly one data flow running alone for a longer time, and that's what introduces a certain confusion. — Dsimic (talk | contribs) 07:11, 17 February 2014 (UTC)[reply]
On the other hand, computing those delays in the case of multiple data flows wouldn't be simple math, as there are numerous things determining them (internally used algorithms for various queues and buffers, possible traffic prioritization policies, etc.). The only sensible approach there would be finding a reference with the results of such a simulation or measurement, and including those results in this article. — Dsimic (talk) 18:25, 4 January 2014 (UTC)[reply]
Sure, this is all complicated if one takes into account more complicated buffering algorithms etc., and a smart algorithm can greatly reduce the problem. But these complications aren't needed when explaining what the fundamental problem is - they can be mentioned afterwards. Amaurea (talk) 13:38, 14 February 2014 (UTC)[reply]
Right, and that's exactly why an example with experimentally measured exact numbers would make little to no sense in this article – there are simply too many variations of bufferbloat solutions. I do agree that the explanation in the article could be clearer, but the trouble there is that everything needs to be backed with references, and published references go into much more detail, making it quite hard to extract simple explanations while sticking to Wikipedia's principle of verifiability. — Dsimic (talk | contribs) 20:48, 14 February 2014 (UTC)[reply]
Everything doesn't need to be backed up with references. Everything needs to be *possible* to back up with references, but the person adding the original text doesn't need to be the one who adds the references. Those can and often do come later. It is a bit sad that this talk page is much more informative than the main article itself.
Would the example be better if I described it as a simplified, idealized model of the problem? Amaurea (talk) 16:27, 16 February 2014 (UTC)[reply]
Yeah, and then comes somebody else and strikes everything with a bunch of "citation needed" tags, "this section lacks references" hatnotes etc. :) I'll see in the next few days what could be done so we make the article more informative and more easily understandable, while still having at least some references clearly backing that new content. Hope you agree.
Are you referring to the "one data stream, different buffer sizes" example? But again, that's not a valid example, please try it yourself by experimenting with a single data stream and various settings for txqueuelen on Linux network interfaces, for example. — Dsimic (talk | contribs) 07:11, 17 February 2014 (UTC)[reply]
Just as a side note, it's somewhat confusing to say that S and C are connected via a buffer B; saying they're connected through a router R having a buffer B would be much more clear. — Dsimic (talk) 18:25, 4 January 2014 (UTC)[reply]
I agree, though the buffer B doesn't necessarily have to be on a separate device. It could be attached to S or C too, and still cause the same behavior. Amaurea (talk) 13:38, 14 February 2014 (UTC)[reply]
Hope you agree. Of course, I'm more than happy to discuss it further. — Dsimic (talk) 18:25, 4 January 2014 (UTC)[reply]
The original description of bufferbloat was taken from the article by Nichols and Jacobson -- if you feel it was incorrect, please inform them of their error. The current state of the article... Well, I suppose it still describes why CoDel works, so I guess that is fine. BenTels (talk) 23:16, 9 October 2013 (UTC)[reply]
IIRC, I was able to partially match the previous version of this section (the one before my edits) to some of the articles / references, but it wasn't exactly or properly transcribed — and it was edited a few times since the article was initially created. I can't recall all of the details right now, but there were some not exactly correct or blurry parts, so I went ahead and rewrote one part of it, aggregating info from numerous published papers and making it understandable to a wide audience.
Hope you find that to be Ok. -- Dsimic (talk) 23:34, 9 October 2013 (UTC)[reply]


Incorrect description

The algorithm described at the moment is kinda off. The wording ("last packet of the interval") suggests that CoDel works in successive fixed intervals of 100 ms. Actually, the 'interval' is treated as a sliding window. And I'm fairly confident that the dropping state is left *immediately* when the queue delay is less than 5 ms; it doesn't wait another interval, even a reduced one. Look at the pseudocode.

Positive idea: Wikipedia is supposed to use secondary sources. This article could start from the LWN article. It's a secondary source, but they're practically embedded journalists for the Linux kernel development process, so they get technical details bang on. https://lwn.net/Articles/496509/

http://www.dslreports.com/forum/r30057740- Sourcejedi (talk) — Preceding undated comment added 10:16, 15 May 2015 (UTC)[reply]
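
For reference, a rough sketch of the dequeue-side behaviour being described (my paraphrase of the published pseudocode, not the exact code from the ACM Queue article; the helper names and the simplified drop handling are my own):

    from dataclasses import dataclass
    from typing import Optional

    TARGET = 0.005    # 5 ms acceptable standing delay
    INTERVAL = 0.100  # 100 ms sliding window

    @dataclass
    class CodelState:
        dropping: bool = False
        count: int = 0
        first_above_time: Optional[float] = None
        drop_next: float = 0.0

    def on_dequeue(sojourn, queue_empty, state, now):
        """Sketch of CoDel's per-dequeue decision: the interval slides
        (measured from the moment delay first exceeded the target)
        rather than ticking in fixed 100 ms steps, and the dropping
        state is left as soon as the delay falls back below target."""
        if sojourn < TARGET or queue_empty:
            # Delay is acceptable again: leave dropping state immediately.
            state.dropping = False
            state.first_above_time = None
            return "forward"

        if state.first_above_time is None:
            # Delay just went above target: start the sliding window.
            state.first_above_time = now + INTERVAL
            return "forward"

        if now >= state.first_above_time and not state.dropping:
            # Delay has stayed above target for a full interval: enter
            # dropping state and space drops at interval/sqrt(count).
            state.dropping = True
            state.count += 1
            state.drop_next = now + INTERVAL / (state.count ** 0.5)
            return "drop"

        return "forward"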

Parameterless?

The article claims that CoDel is parameterless (and uses a broad citation to attribute that to the authors).

But the acceptable standing delay target (5 ms), the initial (and reset) interval (100 ms), and the interval reduction sub-algorithm (interval_N = initial_interval / sqrt(N)) are all parameters -- and at least the first two are likely to be things that should get tuned by at least a device manufacturer, if not in the field.

JimJJewett (talk) 23:19, 1 September 2015 (UTC)[reply]

The authors claim that the values they've chosen for these parameters work for a very wide range of circumstances and do not require tuning. ~Kvng (talk) 15:18, 6 September 2015 (UTC)[reply]
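
For concreteness, a small illustration of the interval-reduction formula mentioned above (my own sketch; 100 ms is the published default interval):

    import math

    INTERVAL = 0.100  # 100 ms default interval

    # While delay stays above target, successive drops are spaced roughly
    # interval/sqrt(count) apart, so the drop rate ramps up gradually.
    for count in range(1, 6):
        print(f"count={count}: drop spacing {INTERVAL / math.sqrt(count):.3f} s")

With the defaults this gives 100 ms, 71 ms, 58 ms, 50 ms, 45 ms, and so on; whether those constants count as tunable parameters is exactly the question raised above.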
External links modified

Hello fellow Wikipedians,

I have just modified one external link on CoDel. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 22:21, 9 August 2017 (UTC)[reply]