Ah... that explains why I didn't understand it. I'm not quite at Einstein's
level yet (only need 2 more light years to go).

Thanks again.

Make a small loan, Make a big difference - Kiva.org
 
________________________________
From: Carlos G Mendioroz <tron_at_huapi.ba.ar>
To: Dave Serra <maybeedave_at_yahoo.com>
Cc: Paul Negron <negron.paul_at_gmail.com>; "ccielab_at_groupstudy.com" <ccielab_at_groupstudy.com>
Sent: Sunday, January 13, 2013 1:06 PM
Subject: Re: QoS - Calculating path latency
  
I'm glad it sort of makes sense to you now.

Yes, if your network is layer 2 (i.e. it forwards frames and not bits),
then it could get messy if your access rate is not the same at both ends.

BTW, the guy using trains whose idea I shamelessly took is best known for
the E=mc^2 formula :) He is the one using the (for me at least) awkward
noun "embankment" in a nice work called "Relativity: The Special and
General Theory".

Good luck with your studies,
-Carlos
Dave Serra @ 13/01/2013 14:23 -0300 dixit:
> So after sleeping on this one I drew a chart that proved out what you
> were saying, in that there is overlap between serialization delay,
> propagation delay and deserialization delay. No matter how I plugged
> the numbers into the model, it kept showing me that the sum of the 3
> was always negated by exactly the deserialization delay. I kept the
> deserialization delay the same as the serialization delay in my model,
> though. I suspect that if I made deserialization delay faster than
> serialization delay (as is the case with frame-relay hub and spoke)
> I will get different results. But for now it makes sense.
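
A minimal sketch of that kind of model (Python, with purely illustrative
numbers: a 1500-byte packet, a 1.536 Mbps link and ~20 ms of propagation,
all assumed). It compares the naive sum of the three delays with the time
until the last bit is actually received:

    # Timeline of one packet over one link. Numbers below are illustrative only.
    def one_way_delay(bits, tx_bps, rx_bps, prop_s):
        """Return (naive_sum, actual) one-way delay in seconds for one packet."""
        ser = bits / tx_bps        # time to clock the packet onto the wire
        deser = bits / rx_bps      # time to clock the packet off the wire
        naive = ser + prop_s + deser
        # Last bit leaves the sender at t = ser and lands prop_s later.
        # If the receive clock were slower, receiving (which starts when the
        # first bit lands at t = prop_s) would become the bottleneck instead;
        # this is a crude cut-through style approximation of that case.
        actual = max(ser + prop_s, prop_s + deser)
        return naive, actual

    naive, actual = one_way_delay(bits=12000, tx_bps=1_536_000,
                                  rx_bps=1_536_000, prop_s=0.020)
    print(f"naive: {naive*1000:.1f} ms  actual: {actual*1000:.1f} ms")
    # naive: 35.6 ms  actual: 27.8 ms -> the difference is exactly the
    # deserialization delay, hidden behind serialization + propagation.
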
> I then read your e-mail and it too makes sense, though I've never been
> quite good with associating trains and packets ;)  I appreciate the
> time you took on this one, as I'm sure lots of people are simply
> bystanding. :)
>
> Make a small loan, Make a big difference - Kiva.org
> *From:* Carlos G Mendioroz <tron_at_huapi.ba.ar>
> *To:* Paul Negron <negron.paul_at_gmail.com>
> *Cc:* Dave Serra <maybeedave_at_yahoo.com>; "ccielab_at_groupstudy.com" <ccielab_at_groupstudy.com>
> *Sent:* Sunday, January 13, 2013 6:04 AM
> *Subject:* Re: QoS - Calculating path latency
>
> Thanks :)
> Dave, I don't know why, but trains are a good analogy for understanding
> time and space, so let's go there:
>
> You have station A (or embankment :) with a train, and it takes some
> time to "serialize" it onto the rails that go to station B. Here, we
> call "rails" the rails that start just at the end of the station, what
> we could call the station port.
>
> Then you have the time it takes the train to cover the rail tracks to
> station B. I'm reluctant to call that propagation delay, but that would
> be it. And then, the train should enter station B.
>
> Now, consider a 0 length rail: stations back to back. Train departure
> and arrival happen at once! Ok.
>
> Now consider a one car long rail. If you take the time it takes for
> departure, you only need to add a one car length of travel for the
> train to be completely arrived.
>
> Now take a long track. The issue is that as the train is getting out of
> the station, it also begins travelling. This time you would have to
> deduct from the travel time, and then add the arrival time, which are
> the same. D + (T - D) + A, so to say. But A = D, so total time gets
> down to D + T.
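
Putting rough packet numbers on that train arithmetic (a 1500-byte packet
on a 1.536 Mbps link with ~20 ms of propagation, all figures assumed for
illustration):

    # D = departure  -> serialization delay (whole train onto the track)
    # T = travel     -> propagation delay (front of the train, A to B); assumes T > D
    # A = arrival    -> deserialization at B; same clock rate, so A = D
    D = 12000 / 1_536_000          # ~7.8 ms
    T = 0.020                      # ~20 ms
    A = D

    naive = D + T + A              # counting the three legs separately: ~35.6 ms
    actual = D + (T - D) + A       # travel overlaps departure, so deduct D: ~27.8 ms
    assert abs(actual - (D + T)) < 1e-12   # and since A = D, it collapses to D + T
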
>
> If I did not confuse you too much, I should have convinced you :)
>
> BTW, time shift is a shift, i.e., of events happening at different
> times but being otherwise the same. If the events happen at the same
> point in time, there is no shift :)
>
> -Carlos
>
> Paul Negron @ 13/01/2013 02:06 -0300 dixit:
>  > Carlos has a tendency to do this. :-)
>  >  In a good way!
>  >
>  > Paul
>  >
>  > Paul Negron
>  > CCIE# 14856
>  > negron.paul_at_gmail.com
>  >
>  >
>  >
>  > On Jan 12, 2013, at 9:08 PM, Dave Serra <maybeedave_at_yahoo.com> wrote:
>  >
>  >> With my previous example of the 56k link between California and NY
>  >> I was trying to create an example where propagation delay is
>  >> actually longer than serialization delay, and show that
>  >> serialization is complete before the packet reaches the other side
>  >> due to a very large propagation delay. I probably botched that
>  >> given how long packets take to serialize on a 56k link. So let me
>  >> change my example to a 10Gig link between California and NY. Now in
>  >> this case the packet will fully be serialized before it reaches the
>  >> other side, and it is in this case that I believe we would have to
>  >> take into account 'deserialization' delay (I really like that one
>  >> ;) ). So in that example we really can't overlap time per se.
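
For a feel of which delay dominates in those two examples, here is a rough
comparison; the ~4,000 km California-to-NY distance and the ~2e8 m/s signal
speed are ballpark assumptions:

    # Serialization vs propagation for a 1500-byte (12000-bit) packet, CA <-> NY.
    DISTANCE_M = 4_000_000           # ~4,000 km, assumed
    SIGNAL_SPEED = 2e8               # roughly 2/3 of c in fiber/copper, assumed
    prop_ms = DISTANCE_M / SIGNAL_SPEED * 1000   # ~20 ms either way

    for name, bps in [("56k", 56_000), ("10Gig", 10_000_000_000)]:
        ser_ms = 12000 / bps * 1000
        print(f"{name}: serialization {ser_ms:.4f} ms vs propagation {prop_ms:.1f} ms")
    # 56k:   ~214 ms to serialize -> the sender is still clocking bits out long
    #        after the first bit has landed.
    # 10Gig: ~0.0012 ms to serialize -> the whole packet is on the wire well
    #        before its first bit reaches the far end.
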
>  >>
>  >> The 'time shift' concept is an interesting one when, at that same
>  >> point in time, bits are being serialized at one end and
>  >> deserialized at the other, but it does not account for everything.
>  >> For example, take a 1500-byte packet (12000 bits) that has
>  >> serialized the first 2000 bits at the sender before the first bit
>  >> reaches the receiver. Now the 1st bit at the receiver should be
>  >> deserialized at the same time as the 2001st bit is serialized at
>  >> the sender. So we now only have to account for the deserialization
>  >> delay of the last 2000 bits, as the sender has finished serializing
>  >> the packet by then, so there is no more overlapping time.
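
A small timeline check of that 2000-bit head start, keeping the same
assumptions (equal send and receive clocks, and a propagation delay equal
to the time needed to serialize 2000 bits):

    # 12000-bit packet; 2000 bits are already serialized when the first bit lands.
    bit_time = 1.0                      # one unit = time to serialize a single bit
    S = 12000 * bit_time                # serialization delay, full packet
    P = 2000 * bit_time                 # propagation delay (per the example above)

    sender_done = S                     # last bit leaves the sender
    receiver_done = P + S               # receiver clocks bits in from t=P to t=P+S

    print(receiver_done - sender_done)  # 2000.0 -> only the last 2000 bits' worth
                                        # of receiving is not hidden behind sending
    print(receiver_done == S + P)       # True -> total is still ser + prop
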
>  >>
>  >> This is a pretty good discussion, as I only came up with the above
>  >> after thinking about what you said. You are making me think ;)
>  >>
>  >>
>  >>
>  >>
>  >>
>  >> Make a small loan, Make a big difference - Kiva.org <http://kiva.org/>
>  >>
>  >>
>  >> ________________________________
>  >> From: Carlos G Mendioroz <tron_at_huapi.ba.ar>
>  >> To: Dave Serra <maybeedave_at_yahoo.com>
>  >> Cc: "ccielab_at_groupstudy.com" <ccielab_at_groupstudy.com>
>  >> Sent: Saturday, January 12, 2013 7:08 PM
>  >> Subject: Re: QoS - Calculating path latency
>  >>
>  >> Well, it does not matter if the cable is 1 m or 30 km, because
>  >> that time is taken into consideration by propagation delay.
>  >>
>  >> But tx serialization takes exactly the same time as rx
>  >> "deserialization" takes, and once you do the "time shift", they can
>  >> overlap.
>  >>
>  >> If you are able to see that the distance is not an issue, then you
>  >> should see that one bit going out of the tx buffer happens at the
>  >> same "shifted" time that the same bit enters the rx buffer.
>  >>
>  >> Your sentence about not being able to start receiving until you
>  >> end transmitting is not true. It depends on which time is greater:
>  >> serialization or propagation...
>  >>
>  >> -Carlos
>  >> Dave Serra @ 12/01/2013 17:06 -0300 dixit:
>  >>> hmmmm.... I suppose on 10Gig links of short distances in a store
>  >>> and forward switch this might happen, and it most certainly does
>  >>> happen on a LAN where R2 is a cut-through switch, as R2 can be
>  >>> bringing a packet in while R1 is still serializing the tail end of
>  >>> it. But let's assume that these are long (from California to NY)
>  >>> links of 56k. In this scenario R2 cannot bring the packet into the
>  >>> router (what I am calling 'serialization delay'; maybe I should
>  >>> call it 'input-serialization delay' for better clarity) until R1
>  >>> has finished serializing the packet and it reaches R2.
>  >>>
>  >>> Make a small loan, Make a big difference - Kiva.org <http://kiva.org/>
>  >>> *From:* Carlos G Mendioroz <tron_at_huapi.ba.ar>
>  >>> *To:* Dave Serra <maybeedave_at_yahoo.com>
>  >>> *Cc:* "ccielab_at_groupstudy.com" <ccielab_at_groupstudy.com>
>  >>> *Sent:* Saturday, January 12, 2013 2:58 PM
>  >>> *Subject:* Re: QoS - Calculating path latency
>  >>>
>  >>> Hey Dave,
>  >>> it might be that the R2 read serialization happens exactly as
>  >>> (during) the R1 write serialization? Then, they do not add up, but
>  >>> only one of them is used to account for them both.
>  >>>
>  >>> -Carlos
>  >>>
>  >>>
>  >>> Dave Serra @ 12/01/2013 16:41 -0300 dixit:
>  >>>> I have a question. I've been reading Wendell Odom's Cisco Press
>  >>>> book 'Cisco QoS Exam Certification Guide'. In it he calculates
>  >>>> total latency for the path as processing delay, serialization
>  >>>> delay, propagation delay, etc... My question is why do we
>  >>>> calculate serialization delay only as the packet is leaving the
>  >>>> interface and being placed on the wire, and not ALSO when the
>  >>>> packet is being received by the remote router. Surely there is
>  >>>> some delay to take an incoming packet off of the wire and store
>  >>>> the bytes in memory prior to processing it. So in other words, in
>  >>>> the network PC1-->R1-->R2-->PC2, why are we not including the
>  >>>> delay to get the packet off of the wire and into R2? I.e., assume
>  >>>> the packet is already in R1 --> processing delay, serialization
>  >>>> delay of R1, propagation delay to reach R2, and then serialization
>  >>>> delay again to get the packet into R2 for its processing?
>  >>>>
>  >>>> BTW, I know I grossly misquoted Odom by only including processing
>  >>>> delay, serialization delay, propagation delay, etc... He has a
>  >>>> lot more delay types in his book. My question really only focuses
>  >>>> on the packet that is in R1 and getting into R2, so I omitted the
>  >>>> others.
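
For what it's worth, the usual store-and-forward bookkeeping for a path
like PC1-->R1-->R2-->PC2 counts one serialization per output link plus
each link's propagation (plus processing/queuing): R2 finishes clocking
the packet in off the wire exactly one propagation delay after R1 finishes
clocking it out, so there is no separate input term to add. A rough sketch
with made-up per-link numbers:

    # One-way latency, PC1 --> R1 --> R2 --> PC2, store-and-forward at each hop.
    # All per-link and processing figures below are made up for illustration.
    links = [
        # (bits, link_bps, propagation_s)
        (12000, 100_000_000, 0.000_01),   # PC1 -> R1 (LAN)
        (12000,   1_536_000, 0.020),      # R1  -> R2 (WAN, coast to coast)
        (12000, 100_000_000, 0.000_01),   # R2  -> PC2 (LAN)
    ]
    processing_s = 0.000_05               # assumed per-router processing delay

    total = sum(bits / bps + prop for bits, bps, prop in links)
    total += processing_s * 2             # R1 and R2
    print(f"one-way latency ~ {total * 1000:.2f} ms")   # ~28.17 ms
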
>  >>>>
>  >>>> I appreciate anyone's feedback.
>  >>>>
>  >>>> Thanks guys.
>  >>>>
>  >>>> Make a small loan, Make a big difference - Kiva.org <http://kiva.org/>
>  >>>>
>  >>>>
>  >>>>