[e2e] Reacting to corruption based loss
detlef.bosau at web.de
Sat Jun 25 15:40:52 PDT 2005
Michael Welzl wrote:
> This point has been raised several times: how exactly should
> a sender react to corruption? I fully agree that continuing
> to send at a high rate isn't a good idea.
Please excuse me if my comments seem a little bit stupid. I'm new
to this list and it's hard to keep all the threads and discussions in mind.
So I apologize if I'm carrying coals to Newcastle here. And hopefully, my
comments are not too stupid ;-)
Basically, we're talking about the old loss differentiation debate. If
packet loss is due to corruption, e.g. on a wireless link, there is not
necessarily a need for the sender to decrease its rate. Perhaps one
could do some FEC or use robust codecs, depending on the application in
use. But I do not see a reason for a sender to decrease its rate.
Loss differentiation seems to be a naughty issue. Missing packets are
really misbehaved: they do not indicate whether they were dropped due to
congestion or lost due to corruption.
In my opinion, and I'm willing to receive contradiction on this point,
it is a matter of the Internet system model. Why couldn't we continue
to assume loss-free links? Is it really a violation of the End to End
Principle when we introduce link layer recovery? Or is it simply a well-done
separation of concerns to fix link issues at the link layer and to
leave transport issues to the transport layer?
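The link-layer recovery idea can be illustrated with a toy stop-and-wait ARQ sketch: the link retransmits corrupted frames locally, so the transport layer above sees an (almost) loss-free, merely slower link. The function name, corruption probability, and retry limit below are all illustrative assumptions, not something from this thread.

```python
import random

def send_over_lossy_link(frame, p_corrupt=0.3, max_retries=10,
                         rng=random.Random(42)):
    """Toy link-layer ARQ: retransmit locally until the frame survives.

    Returns (delivered, attempts). With p_corrupt = 0.3 and 10 retries,
    the residual loss rate seen by the transport layer is 0.3**10,
    i.e. effectively zero -- corruption is hidden below the transport.
    """
    for attempt in range(1, max_retries + 1):
        if rng.random() >= p_corrupt:   # frame survived the noisy link
            return True, attempt
        # checksum failed at the receiver: retransmit locally
    return False, max_retries

ok, tries = send_over_lossy_link(b"segment-1")
```

The transport layer pays only in added delay (the local retransmissions), which is the usual argument for handling corruption at the link layer instead of in TCP.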
> Now, given that we're talking about a transport endpoint which
> is informed about corruption, there probably isn't any knowledge
> regarding the origin of corruption available - we don't know
Oh, these misbehaved packets... They pass dozens of links and pass away
on one... without leaving even a note, or a farewell letter...
> what type of link layer caused it or why exactly it happened
> (too many sources sending at the same time? user passed a wall?).
> However, there seems to be some consensus that the reaction
> to corruption should be less severe than the reaction to congestion.
> Also, it has been noted several times (and in several places) that
> AIMD would work just as well if beta (the multiplicative decrease
> factor) was, say, 7/8 instead of 1/2. For a historical reason (I
Hm. In my opinion, it will work with 99/100 as well. In fact, I believe
it will work with an arbitrary beta chosen from (0..1). Please note: not
[0..1], because 0 and 1 will miss the wanted behavior.
In my opinion, a choice near 0 will lead to faster convergence to
fairness, and a choice near 1 will lead to better link usage. So, in
my opinion, it is essentially a tradeoff between convergence speed and
link utilization.
Basically, I'm not convinced that 1/2 is really that bad a choice.
Even more: for AIMD to work properly in TCP, beta should be the same in
all TCP stacks. I don't think we want to have dozens of betas slacking
around the Internet for the next decade or so.
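The beta tradeoff is easy to see in a toy model: two synchronized AIMD flows share one link, and both back off whenever the link overflows. The capacity, starting rates, and synchronized-loss assumption are illustrative, but the model shows the mechanism: the rate gap between the flows shrinks by a factor of beta at every backoff, so beta near 0 converges to fairness quickly, while beta near 1 gives up less rate and keeps utilization high.

```python
def simulate(beta, c=100.0, x1=10.0, x2=80.0, rounds=500, alpha=1.0):
    """Two synchronized AIMD flows on a link of capacity c.

    Additive increase by alpha per round; when the link overflows,
    both flows back off multiplicatively by beta. Returns final rates.
    All parameters are illustrative assumptions for this sketch.
    """
    for _ in range(rounds):
        if x1 + x2 > c:        # congestion: both flows see a loss
            x1 *= beta
            x2 *= beta
        else:                  # additive increase
            x1 += alpha
            x2 += alpha
    return x1, x2

for beta in (1 / 2, 7 / 8, 99 / 100):
    x1, x2 = simulate(beta)
    # the gap |x1 - x2| only shrinks at backoffs, by a factor of beta
    print(f"beta={beta:5.3f}  rates=({x1:6.2f}, {x2:6.2f})  "
          f"gap={abs(x1 - x2):.3f}")
```

Running it, beta = 1/2 drives the gap toward zero within a few backoffs, while beta = 99/100 still shows a visible gap after 500 rounds, exactly the convergence-versus-utilization tradeoff described above.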
> think it was the DECbit scheme), 7/8 seems to be a number
> that survived in people's minds.
> So, why don't we just decide for a pragmatic approach instead
> of waiting endlessly for a research solution that we can't come
> up with? Why don't we simply state that the reaction to corruption
> has to be: "reduce the rate by multiplying it with 7/8"?
So, why do we want to fix something that isn't yet broken?
> Much like the TCP reduction by half, it may not be the perfect
> solution (Jacobson actually mentions that the reduction by half
> is "almost certainly too large" in his congavoid paper), but
> it could be a way to get us forward.
> ...or is there a reasonable research method that can help us
> determine the ideal reaction to corruption, irrespective of
> the cause?
Excuse me, but as far as I remember, VJ talks about _congestion_ there.
Not about corruption.
Consider some VoIP application. Consider running it on a lossy
wireless channel. (Don't ask for a justification; I don't see any, but I
know dozens of people who believe there is one.) Consider a reasonable
corruption rate due to noise. Consider an appropriate noise-tolerant codec.
Why should we react to this corruption? Would the noise become any less if we did?
On the other hand: if some congestion drop would indicate "Please be so
kind as not to usurp the whole channel; there are other flows interested in
network resources as well," wouldn't it be appropriate to kindly adapt
one's resource occupation?
I don't know whether you refer to congestion _drop_ as well as corruption
_loss_ when you talk about corruption. Personally, I always use
the terms "drop" and "loss" to make the difference perfectly clear. I
sincerely think that these are two different issues which should be
dealt with differently.
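The distinction could be made concrete in a small sketch. The event labels and the two betas (1/2 for a congestion drop, 7/8 for a corruption loss, as proposed upthread) are illustrative assumptions, not an agreed mechanism.

```python
# Hypothetical sender-side reaction that distinguishes congestion
# drops from corruption losses, following the drop/loss terminology
# above. Both beta values are illustrative.

CONGESTION_DROP = "drop"   # queue overflow: other flows want the channel
CORRUPTION_LOSS = "loss"   # bit errors on a noisy link

def react(rate, event):
    """Return the new sending rate after a loss event."""
    if event == CONGESTION_DROP:
        return rate * 0.5      # standard TCP multiplicative decrease
    if event == CORRUPTION_LOSS:
        return rate * 7 / 8    # milder reaction: noise is not congestion
    raise ValueError(f"unknown event: {event!r}")
```

The open problem, of course, is the one raised at the top of the thread: the sender rarely knows which of the two events actually occurred.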
And for the moment, I still believe the easiest way is to make loss
rates negligible, e.g. by use of link layer recovery and PEPs where
applicable. However, I do not know the majority position on this issue.
> I agree that this would be good to have. There have been many
> proposals, and none of them appeared to work well enough
> (*ouch*, I just hurt myself :-) ). Inter-layer communication
> is a tricky issue, regardless of the direction in the stack.
> Heck, we don't even have a reasonable way to signal "UDP-Lite
> packet coming through" to a link layer!
"Inter-layer communication" is a very big word.
However, for a very small problem, perhaps some people might be
interested in the little proposal on my home page.
I know this is a really special issue, and perhaps I'm still in the
earliest beginnings of this. However, I tend to take the position that
we should first try to exploit well-proven mechanisms rather than raise new
issues (or continue everlasting ones). And the drop vs. loss debate
is not a trivial one. There are hundreds of papers around dealing with
this topic, and I don't see them coming to an end.
O.k. This was my first post to this list, and now I'm ready to take a
beating, contradiction... whatever you prefer :-)
Mail: detlef.bosau at web.de
Mobile: +49 172 681 9937