[e2e] Internet packet dynamics

Sam Liang sliang at dsg.stanford.edu
Sun Mar 14 16:26:53 PST 2004


On Sun, Mar 14, 2004 at 05:58:40PM -0500, David G. Andersen wrote:
> On Sun, Mar 14, 2004 at 01:16:57PM -0800, Sam Liang scribed:
> >   Thanks for the info. 
> >   You said 38% of the time, the loss rate is less than 0.2%, which means
> > that 62% of the time, it's higher than 0.2%. I think this implies two
> > things. First, such loss rate seems good enough for voice over IP, even
> > without error recovery. Second, for video streaming, 0.2% is still pretty
> > bad.  I am trying to evaluate the severity of packet loss effect on today's
> > Internet on real-time multimedia communication.
> 
>   Oftentimes it is.  It depends wildly on the connection,
> though - voip over DSL would probably be unpleasant without
> a DSL router that supported QoS.
> 
>   Some of the high loss periods were short outages, and some of them
> were congestion.  I didn't break them down in the paper since it's
> hard to distinguish with low-rate probing, but it would be
> interesting data to see.

  I think it would be very valuable to get data on the percentage of
each cause of packet loss. Distinguishing the cause matters because the
right response differs: loss due to outages should not trigger
congestion backoff (assuming the outage is not itself caused by
congestion), just like packet loss in wireless environments. If most
packet losses are due to outages rather than congestion, we have to do
something about TCP's congestion control mechanism.
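
  For what it's worth, a crude first cut at that breakdown from a probe
trace is to look at run lengths of consecutive losses: long runs look
more like outages, short runs more like congestion drops. A minimal
sketch in Python, assuming a boolean loss trace sampled at a fixed probe
interval; the function name and the run-length threshold are
illustrative, not from the paper:

    # Classify lost probes by run length: runs of at least
    # 'outage_runs' consecutive losses count as outage losses, shorter
    # runs as (presumed) congestion losses.  The threshold is an
    # assumption; low-rate probing can't really settle the question.
    def classify_losses(trace, outage_runs=10):
        outage = congestion = 0
        run = 0
        for lost in trace + [False]:   # sentinel flushes the last run
            if lost:
                run += 1
            else:
                if run >= outage_runs:
                    outage += run
                elif run > 0:
                    congestion += run
                run = 0
        return outage, congestion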

>  
> >   One quick question. In your paper, you seem to suggest that sending
> > the same packet back-to-back along the same path gets about the same
> > benefit as sending packets along different paths. If the packet loss is
> > caused by congestion, aren't you aggravating the congestion condition by
> > doubling your transmission rate?  And isn't it going to increase the
> > packet loss rate?
> 
>   Depends on the cause of the losses.  My conclusion from the
> paper was that using different paths was more effective against
> outages (long-lasting or transient), but that a 20ms spacing between
> packets was nearly as effective as using alternate paths when trying
> to combat congestion.  This makes some sense if you believe that most
> congestion probably occurs at the access links.

  What's your definition of a short outage?  If the outage lasts less
than 10ms, then the conditional probability that the second of two
back-to-back packets (sent with a 10ms gap) is lost, given that the
first is lost, is almost 0. But if the outage lasts 1000ms, the
conditional probability is almost 100%.
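
  As a toy model (mine, not from the paper): suppose a single outage of
duration D, and suppose "the first packet is lost" means it landed
uniformly at random inside the outage window. The second packet, sent
gap ms later, is lost only if it also lands inside, so
P(second lost | first lost) = max(0, (D - gap) / D):

    # Conditional loss probability of the second of two packets under
    # a single-outage model; all numbers are illustrative.
    def cond_loss_prob(outage_ms, gap_ms):
        return max(0.0, (outage_ms - gap_ms) / outage_ms)

    print(cond_loss_prob(10, 10))     # 0.0  -- outage no longer than the gap
    print(cond_loss_prob(1000, 10))   # 0.99 -- long outage, both lost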

> 
>  Aren't I aggravating it by sending twice?  Absolutely. :)
> I don't think that the basic mesh scheme is appropriate for bulk
> transfers.  I think it's great for low-rate control traffic, SNMP
> data, and streams where you were planning on sending them at a fixed
> bitrate anyway (like the air traffic control example from the  
> original Mesh paper at SOSP 2001).  Some of our newer work
> looks at the benefits of applying duplication _very_ selectively - 
> to, say, TCP SYN packets and DNS packets - and it turns out that that's
> a huge win without imposing awful amounts of overhead or impairing
> friendliness.

  I agree that duplicating important packets can be useful and less
intrusive. But this amounts to packet prioritization, and it really
requires self-discipline from endhosts. Again, I think duplicate
transmission is a bad response to congestion-induced packet loss,
because it tries to get data through by grabbing a larger share of the
bandwidth, which is unfair to the competing flows.
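
  Just to make the mechanism concrete, here is a minimal sketch of what
I understand the selective duplication to be, for a single UDP request
(the 20ms spacing and the function name are made up for illustration):

    import socket, time

    # Send the same datagram twice, a small gap apart, and take the
    # first reply; any duplicate reply is simply ignored.  This doubles
    # the load for this one packet, which is my fairness worry above.
    def send_dup(payload, addr, spacing=0.020, timeout=1.0):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(timeout)
        s.sendto(payload, addr)
        time.sleep(spacing)
        s.sendto(payload, addr)
        reply, _ = s.recvfrom(4096)
        s.close()
        return reply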

  Another way to look at this approach is that it seems to suggest that
low-data-rate traffic is not subject to congestion control. I am not
sure this is valid. The chance that a low-data-rate flow suffers a
packet loss is already reduced if routers drop packets in proportion to
a flow's rate.
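
  Back-of-the-envelope (my numbers): even with a uniform per-packet
drop probability p, a flow sending r packets per second sees p*r loss
events per second, so a low-rate flow already sees proportionally fewer
losses without any rate-aware dropping at the router:

    p = 0.002                 # the 0.2% loss rate discussed above
    for pps in (10, 333):     # thin control stream vs. ~4 Mbps stream
        print(pps, "pkt/s ->", p * pps, "losses/s")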

  Also, a fixed bitrate doesn't necessarily mean a low bitrate. For
example, if I want to stream a DVD-quality movie, the bit rate can be
nearly constant at 4 Mbps. What's the best approach for that kind of
traffic?
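
  Rough numbers for that case (the packet size is my assumption): at
1500-byte packets, 4 Mbps is about 333 packets per second, so a 0.2%
loss rate means a loss event roughly every 1.5 seconds, and blanket
duplication would push the stream to 8 Mbps:

    rate_bps  = 4000000
    pkt_bits  = 1500 * 8
    loss_rate = 0.002

    pps = rate_bps / pkt_bits                 # ~333 packets/s
    print("loss events/s:", pps * loss_rate)  # ~0.67, one every ~1.5 s
    print("duplicated rate (Mbps):", 2 * rate_bps / 1e6)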

Sam



