[e2e] High Packet Loss and TCP

Cannara cannara at attglobal.net
Fri May 2 08:42:49 PDT 2003


I've added info below...Alex

"dpsmiles at turing.acm.org" wrote:
> 
> >This means that you can often acquire congestion info from such
> >devices within portions of any network path.
> 
> How exactly is the congestion info acquired? In the context of a delay
> tolerant network (DTN), we cannot use 'chatty' protocols, as they consume
> valuable bandwidth. Looking at the techniques that are used in the normal
> Internet:
...The information is usually maintained within the system concerned, so one
would need either a separate management path into the device, or to have the
data available in a private MIB, etc.  I wasn't suggesting just anyone could do
this, only that measurements could be made in cooperation with some system
owners/vendors and experimentation could be done....Alex
> 
> 1) Packet dropping: It is likely to go undetected for a long time, since
> the DTN model is very optimistic when it sends packets. To put it another
> way, there is a high expectation that the packet will reach the
> destination. Thus the lifetimes of packets are long, compared to TCP
> packets.
> 
> 2) Notification by setting a bit in a returning packet (piggybacking): Since
> the connection is intermittent (and scheduled), I am not sure how much we
> can really rely on this technique. It still has some scope, depending on
> the availability of a packet on the return path.
> 
> 3) Explicit notification by a choke packet: This will have to be the last
> resort, though a costly one.

...Either 2) or 3) could be done by using some bits in TCP headers (e.g., in
Acks) that are typically unused, so again, some experimenting could be done
with cooperating ends...Alex
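
For concreteness: signaling congestion in otherwise-unused TCP header bits is
exactly what ECN (RFC 3168) standardized as the ECE/CWR flags. Below is a
minimal sketch of setting and testing such a bit in a bare 20-byte TCP header
(the ports, sequence numbers, and window in the sample header are made up):

```python
import struct

# TCP flag bits; ECE and CWR sit in positions that were reserved
# before ECN (RFC 3168) assigned them.
ECE = 0x40   # receiver echoes "I saw congestion"
CWR = 0x80   # sender answers "congestion window reduced"

def mark_congestion(tcp_header):
    """Set the ECE bit in a bare 20-byte TCP header (no options)."""
    hdr = bytearray(tcp_header)
    hdr[13] |= ECE               # byte 13 carries the flag bits
    return bytes(hdr)

def saw_congestion(tcp_header):
    return bool(tcp_header[13] & ECE)

# A made-up bare ACK: ports, seq, ack, offset, flags, window, cksum, urgent.
ack = struct.pack("!HHIIBBHHH", 12345, 80, 0, 1, 5 << 4, 0x10, 65535, 0, 0)
marked = mark_congestion(ack)
```

An endpoint seeing ECE on returning Acks would then slow down, which plays the
role of technique 2) without needing a separate choke packet.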
> 
> >Service providers at the edge
> >have direct requirements to limit flows, because their connections to the
> >network core are more limited than the aggregate flows they might have to
> >send/receive for subscribers.
> 
> I am not sure I understand this statement. Could you rephrase it?
...Folks like Covad have known from the beginning that they could not make
$ without oversubscription, simply because their co-location in telco offices
relies on the links between offices and eventually into the rest of the packet
network.  The links among offices are choke points because they were designed
to handle voice to/from nearly all subscribers at once, but not data.  So, the
providers who entered the DSL/ADSL market sold about 20 times the real
back-end capacity -- this was the common business model.  As local Bells grow
more into data services, the links among their nodes are improved, but the
current status is that the edge of the net is oversubscribed and that's where
boxes doing traffic management drop pkts. In the synchronous subnets (e.g.,
Sonet rings), Add-Drop Muxes limit flows into metro rings in the same ways
(and can backpressure router interfaces). Hope this clears it up...Alex
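
To put rough numbers on that ~20x figure, a back-of-the-envelope sketch (the
subscriber count, access tier, and backhaul size below are hypothetical, chosen
only to illustrate the ratio):

```python
# Hypothetical numbers for the oversubscription model described above.
subscribers = 2000            # DSL lines homed on one central office
access_rate_mbps = 1.5        # per-subscriber downstream tier
backhaul_mbps = 155           # one OC-3 out of the office

aggregate_demand = subscribers * access_rate_mbps      # 3000 Mb/s if all active
oversubscription = aggregate_demand / backhaul_mbps    # roughly 19:1

# The backhaul only congests when concurrent demand exceeds it, so the
# model holds as long as no more than this fraction of lines is busy:
max_busy_fraction = backhaul_mbps / aggregate_demand
```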
> 
> Thoughts?
> 
> Thanks, Alex.
> 
> PS: The DTN website is www.dtnrg.org
> 
> ---------- Forwarded message ----------
> Date: Thu, 01 May 2003 16:43:18 -0700
> From: Cannara <cannara at attglobal.net>
> To: end2end-interest at postel.org
> Subject: Re: [e2e] High Packet Loss and TCP
> 
> Durga, let me mention some experience with some real products developed for
> the network edge/interior, whether security & load balancing, or just simple
> routing...
> 
> 1) The devices used now, Network Processors, Fabric Switches & Traffic
> Managers, allow firmware setting of a wide variety of policies:  priority
> queuing, weighted dropping and other forms of flow control outbound (to the
> next interface/hop).  Traffic manager chips now available can do this for a
> few hundred-thousand flows at multi-Gb rates.
> 
> 2) The device input paths allow for a variety of hardware/Layer2 flow
> control (e.g., CSIX, SPI4.2, etc.).
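
The "weighted dropping" in point 1 can be illustrated with a RED-style drop
curve, where drop probability ramps up with average queue depth between two
thresholds (the threshold and max-probability values here are illustrative
defaults, not taken from any actual traffic-manager chip):

```python
import random

def drop_probability(avg_queue, min_th, max_th, max_p):
    """RED-style curve: no drops below min_th, certain drop at/above
    max_th, and a linear ramp up to max_p in between."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

def should_drop(avg_queue, min_th=20.0, max_th=80.0, max_p=0.1):
    """Randomized drop decision for one arriving packet."""
    return random.random() < drop_probability(avg_queue, min_th, max_th, max_p)
```

Per-class weighting falls out of giving each traffic class its own
(min_th, max_th, max_p) triple.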
> 
> So, congestion is observed locally (within a box, a rack, or within a site)
> in a packet environment; and over MUXes on a metro ring, in a Sonet-like
> environment.  This means that you can often acquire congestion info from
> such devices within portions of any network path.  Service providers at the
> edge have direct requirements to limit flows, because their connections to
> the network core are more limited than the aggregate flows they might have
> to send/receive for subscribers.
> 
> Because "rate limiting", "traffic shaping", or whatever other nice words are
> used for "packet dropping", provide the only way edge folks currently have
> to tell leaves in the net to slow down, this means that congestion exists
> at two major spots:  edge services and core distribution (peering) points.
> If you were only to look at these places for congestion data, you'd do
> pretty well, especially if you picked busy times for Inet traffic (e.g.,
> daily cycles, or as when the Monica Lewinsky report went out :).  I don't
> think simulations will quite capture the reality of what goes on.  You
> might identify some spots to examine and ask for data from various
> researchers, like the SLAC "ping-around-the-world" folks (Cottrell...).
> 
> Alex
> 
> Durga Prasad Pandey wrote:
> >
> > I am looking into the possibility of congestion in a delay tolerant
> > network (characterized by high latency, intermittent connectivity, and high
> > error rates). Since I am new to this area, could I ask for suggestions on
> > how to approach the task of checking congestion in the network as a whole?
> > In other words, if we define congestion to occur when the net amount of
> > data flowing into a network through various nodes is greater than the net
> > amount of data flowing out, what parameters would one use to identify
> > congestion?
> > Another question is: Are there some free network simulators available on
> > the web?
> >
> > I will appreciate any help.
> >
> > Durga
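
Durga's inflow-vs-outflow definition of congestion can be sketched as a simple
counter comparison (the per-node counter shape and the buffer budget below are
assumptions for illustration, not part of any DTN spec):

```python
def congested(node_counters, capacity_bytes):
    """Flag congestion when bytes entering the network exceed bytes
    leaving it by more than the buffering the network can absorb.

    node_counters: list of (bytes_in, bytes_out) tuples, one per node,
    accumulated over the same measurement window (a hypothetical shape)."""
    total_in = sum(b_in for b_in, _ in node_counters)
    total_out = sum(b_out for _, b_out in node_counters)
    backlog = total_in - total_out
    return backlog > capacity_bytes

# Three nodes over one window; assume 1 MB of usable network buffering.
counters = [(5_000_000, 3_000_000), (2_000_000, 2_500_000), (1_000_000, 900_000)]
```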

More information about the end2end-interest mailing list