I am not arguing that method 1 is an accepted rule. I do not like rules 
that much, because they let you forget about the reasons...
I was just asking why you would do that, other than Cisco saying so.
-Carlos
Joe Astorino @ 15/09/2011 20:33 -0300 dixit:
> Regarding method 1, if you look at traditional CAR-based policing on a 
> router, that is the recommended way to calculate burst on the DocCD, and 
> it is pretty well known.  The way I have always understood it is that it 
> would allow bursts of up to 1.5x the CIR for short periods of time.
> 
> On Thu, Sep 15, 2011 at 7:25 PM, Carlos G Mendioroz 
> <tron_at_huapi.ba.ar> wrote:
> 
>     Joe,
>     method 1 makes little sense to me. It assigns one whole second of
>     burst (actually, 1.5 seconds). Why would you do that?
> 
>     Here's what I believe to be fundamental:
>     -You are trying to police a given rate. As you are monitoring, and
>     you have the delta to the previous packet, if your traffic were
>     evenly spaced the rate would be all you need: you could tell
>     whether a packet is going over the rate without further info.
>     -That is hardly the reality; jitter will happen, and bursts will
>     appear. A burst is a bunch of bytes that exceeds the mean rate if
>     measured over a short window. It is related to how long the link
>     can delay the delivery of some byte, piling up the following ones
>     so they all arrive together at access rate for some time.
>     Method 2, I guess, estimates a high mark for that delay,
>     multiplied by 2 just as safety (rough sketch below).
>     -In the end, I doubt that picking a larger-than-needed value will
>     affect anything too much. This is not memory allocation, just
>     headroom so you don't drop on bursts. Any long-term rate excess
>     will be dropped no matter what.
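> 
>     Something like this, in Python (the rate and delay figures are
>     just assumed examples, not measurements):
> 
>     rate_bps = 3_000_000                  # policed rate
>     delay_s = 0.100                       # assumed high mark for delay
>     burst = 2 * delay_s * rate_bps / 8    # x2 safety -> 75000.0 bytes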
> 
>     -Carlos
> 
> 
>     Joe Astorino @ 15/09/2011 19:31 -0300 dixit:
> 
>         I am having a really hard time finding good information on
>         this topic for use in the real world. In the lab, we would
>         usually just configure the burst size we are told on a Cat
>         3560.  I have done a LOT of reading on it, and there are a lot
>         of conflicting stories with regards to this.
> 
>         Basically, I am trying to find out how to calculate an optimal
>         burst value on a 3560 QoS policy doing policing.  As you
>         probably know, the syntax looks like this:
> 
>         police [rate in bits/s] [burst size in bytes].  Remember, this
>         is policing, not shaping, so the classic shaping formula of
>         Tc = Bc/CIR has no relevance here, mainly because the token
>         refresh is not based on a static, set amount of time.  The
>         burst size is actually the size of the token bucket itself in
>         bytes, not a rate of any kind, and the bucket is refilled as a
>         function of the policed rate and the packet arrival times.
>         The refill is not based on a static interval like in FRTS, for
>         example.  It basically says "how long has it been since the
>         last packet... multiply that by the policed rate, and divide
>         by 8 to get bytes."  In other words, it pro-rates the tokens.
>         Makes sense.
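> 
>         In rough Python, that refill logic looks something like this
>         (just my sketch of the idea, not Cisco's actual code; the
>         names are made up):
> 
>         def police(packets, rate_bps, burst_bytes):
>             # packets: list of (arrival_time_s, size_bytes)
>             tokens = burst_bytes                # bucket starts full
>             last = None
>             verdicts = []
>             for arrival, size in packets:
>                 if last is not None:
>                     # pro-rated refill: time since the last packet,
>                     # times the policed rate, divided by 8 for bytes
>                     refill = (arrival - last) * rate_bps / 8
>                     tokens = min(burst_bytes, tokens + refill)
>                 last = arrival
>                 if size <= tokens:              # enough tokens: conform
>                     tokens -= size
>                     verdicts.append("conform")
>                 else:                           # bucket empty: exceed
>                     verdicts.append("exceed")
>             return verdicts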
> 
>         Anyways... I have found two sort-of "methods" for calculating
>         this, but they are so far apart from one another that I am not
>         quite sure which one to use in the real world.
> 
>         Method 1:  The classic CAR formula we see on routers:
>         burst = (rate * 1.5) / 8.  This is numerically 1.5x the
>         policed rate converted to bytes, i.e., 1.5 seconds' worth of
>         traffic.  Makes sense.
>         Method 2:  2x the amount of traffic sent during a single RTT.
> 
>         In my case, I am trying to police a video conferencing
>         endpoint to 3 Mbps, so method 1 gives me a burst size of
>         562,500 bytes.  Using method 2, let's just say I have an
>         average RTT of 100 ms; that method would yield a burst size of
>         75,000 bytes.  That is a HUGE difference.
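> 
>         The arithmetic in Python, for clarity (same numbers as above):
> 
>         rate_bps = 3_000_000
>         method1 = rate_bps * 1.5 / 8            # 562500.0 bytes
>         rtt_s = 0.100                           # average RTT
>         method2 = 2 * rate_bps * rtt_s / 8      # 75000.0 bytes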
> 
>         This came about because the video endpoint was dropping
>         frames.  I noticed the policed rate in the policy was
>         3,000,000 but the burst size was 8000 bytes (the lowest
>         possible value).  When I changed the burst based on a 100 ms
>         RTT and the above formula, the problem went away, but now I am
>         having doubts about the proper value to use here.
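> 
>         For scale (my own back-of-envelope in Python): the minimum
>         8000-byte bucket is only about 21 ms of traffic at 3 Mbps, so
>         any burst longer than that was getting clipped:
> 
>         print(8000 * 8 / 3_000_000)   # ~0.0213 s at 3 Mbps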
> 
>         Does anybody have any insight on how to actually calculate this
>         properly?
> 
> 
>     -- 
>     Carlos G Mendioroz  <tron_at_huapi.ba.ar>
>      LW7 EQI  Argentina
> 
> 
> 
> 
> -- 
> Regards,
> 
> Joe Astorino
> CCIE #24347
> Blog: http://astorinonetworks.com
> 
> "He not busy being born is busy dying" - Dylan
> 
-- 
Carlos G Mendioroz <tron_at_huapi.ba.ar> LW7 EQI Argentina