ProCurve 5300xl Specifications

evenly across the number of ports on the module. For example, the Switch xl 100/1000-T
module, which has 4 100/1000 ports, has 4096 ÷ 4 = 1024 outbound packet buffers per port,
whereas the Switch xl 10/100Base-TX module, which has 24 10/100 ports, has 4096 ÷ 24 = 170
outbound packet buffers per port.
The QoS queues for each port are then represented by their weighted percentage.
10/100 Modules          Gigabit Modules
Queue    Weight         Queue    Weight
1        10%            1        25%
2        60%            2        25%
3        10%            3        25%
4        20%            4        25%
Queue 2 for a Gigabit module has 1024 x 25% = 256 packet buffers. Queue 2 for a 10/100
module would have 170 x 60% = 102 packet buffers.
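The arithmetic above is simple enough to sketch. The following is a minimal illustration (in Python, which the manual itself does not use) of dividing the 4096 outbound buffers across a module's ports and then weighting each port's share across its QoS queues. The figures match the examples quoted above, but the integer truncation (4096 ÷ 24 rounding down to 170) is an assumption about how the hardware rounds.

OUTBOUND_BUFFERS_PER_MODULE = 4096

# Queue weights (percent) from the tables above.
WEIGHTS_10_100 = {1: 10, 2: 60, 3: 10, 4: 20}
WEIGHTS_GIGABIT = {1: 25, 2: 25, 3: 25, 4: 25}

def buffers_per_port(ports):
    """Outbound packet buffers available to each port on a module."""
    return OUTBOUND_BUFFERS_PER_MODULE // ports

def buffers_per_queue(ports, weights_pct):
    """Split a port's buffer allotment across its QoS queues by weight."""
    per_port = buffers_per_port(ports)
    return {queue: per_port * pct // 100 for queue, pct in weights_pct.items()}

# Switch xl 100/1000-T module: 4 ports -> 1024 buffers per port, 256 per queue.
print(buffers_per_port(4), buffers_per_queue(4, WEIGHTS_GIGABIT))

# Switch xl 10/100Base-TX module: 24 ports -> 170 buffers per port,
# queue 2 gets 170 * 60% = 102 buffers.
print(buffers_per_port(24), buffers_per_queue(24, WEIGHTS_10_100))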
Inbound buffer memory is normally kept just a few packets deep to avoid head-of-line
blocking issues. If flow control is turned on for a port, the amount of inbound packet memory
available to that port is much deeper, 1 MB or more.
Packet Buffer Memory Design Tradeoffs
In general, buffer memory is a difficult topic because the common assumption is that more is better.
That is not the case, particularly for inbound memory. Head-of-line blocking is a big issue with
inbound memory that has any depth, so the effective depth is usually set quite small: several
packets deep, enough to absorb the processing 'jitter' of the ASIC as it handles packets of
differing types. Since the packet processor in the 5300 N-chip runs at wire speed for our current
modules, we shouldn't be dropping any packets on the inbound side due to packet processing
delays; packets would only get dropped if there is outbound congestion.
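As a toy illustration of why depth makes head-of-line blocking worse (this is a conceptual sketch, not the 5300 ASIC's actual forwarding logic): with a FIFO inbound buffer, a single packet whose output port is congested stalls every packet queued behind it, even packets bound for idle ports.

from collections import deque

def stalled_packets(inbound_fifo, congested_outputs):
    """Count packets that could be forwarded immediately but are stuck
    behind the first packet whose output port is congested."""
    stalled = 0
    blocked = False
    for dest_port in inbound_fifo:
        if dest_port in congested_outputs:
            blocked = True          # head of line cannot drain
        elif blocked:
            stalled += 1            # victim of head-of-line blocking
    return stalled

# Output port 1 is congested; ports 2 and 3 are idle.
fifo = deque([1, 2, 3, 2, 3, 2])    # packet destinations, head first
print(stalled_packets(fifo, congested_outputs={1}))  # -> 5

Keeping the effective inbound FIFO only a few packets deep bounds how much traffic a single blocked head can stall, which is the tradeoff this paragraph describes.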
Outbound memory does better with larger queue depths, but even here there is a concern
with queues that are too deep. You don't want to hold a packet too long, because the latency
the packet accumulates in the switch has potential network effects, such as retransmission
requests or session timeouts. For VoIP and streaming video packets, this latency can cause
stream dropouts at the destination.
The 5300 buffer design tries to strike a balance between the buffering needed to ride out
network congestion and the risk of holding packets so long that it actually exacerbates poor
network performance. You don't want to compensate for oversubscribed networks (looking at it
on a QoS queue-by-queue basis) by trying to over-buffer in the switch.
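To put a rough number on "holding a packet too long": the worst-case queuing delay of a full outbound queue is simply its backlog divided by the port's line rate. The sketch below uses the per-queue buffer counts from the earlier table; the 1518-byte worst-case frame size and the 20 bytes of preamble plus inter-frame gap are assumptions made for the arithmetic, not figures from this document.

# Rough worst-case queuing delay for a full outbound QoS queue:
# backlog in bits divided by the port's line rate.
WIRE_OVERHEAD_BYTES = 20  # preamble (8) + inter-frame gap (12), assumed

def worst_case_delay_ms(queue_depth_frames, frame_bytes, line_rate_bps):
    bits = queue_depth_frames * (frame_bytes + WIRE_OVERHEAD_BYTES) * 8
    return bits / line_rate_bps * 1000.0

# Queue 2 on a 10/100 port: 102 buffered 1518-byte frames at 100 Mbps.
print(round(worst_case_delay_ms(102, 1518, 100e6), 2))   # ~12.55 ms

# Queue 2 on a Gigabit port: 256 buffered 1518-byte frames at 1 Gbps.
print(round(worst_case_delay_ms(256, 1518, 1e9), 2))     # ~3.15 ms

Roughly 12 ms of added delay per congested 100 Mb hop is already a noticeable slice of a typical end-to-end VoIP delay budget, which is why the design avoids over-deep queues.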
Performance
These numbers have been generated by Hewlett-Packard, using testers from Ixia
Communications. Ixia testers are used by a number of network testing houses and the press to
determine performance numbers for networking equipment. In these tests, 32 ports were used
for Gigabit testing, 192 ports for 100 Mb testing. All ports were full duplex. Numbers presented
here are condensed from Ixia reports in order to save space.
Testing was done on the ProCurve 5308xl Switch. The maximum rate of throughput (100%) would be
the same for the 5304xl, but at one-half the number of packets, since the 5304xl has one-half
the possible number of ports of the 5308xl.
IP Routing (L3) RFC 2285 Fully Meshed Throughput Test
Copper Gigabit ports
Port pairs active, full duplex: 32 = 32 Gbps data out of the tester
Test length: 5 minutes
Packet size (bytes)   64            128           256           512           1024          1280          1518
% Max Rate            100           100           100           100           100           100           100
Total Tx Frames       14285711648   8108112000    4347829856    2255634400    1149426432    923077824     780229824
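As a sanity check, the Total Tx Frames figures match the theoretical Gigabit line rate for a 5-minute, 32-port run to within a few thousandths of a percent. The sketch below performs that cross-check; the 20 bytes of per-frame wire overhead (preamble plus inter-frame gap) is the standard Ethernet assumption, not something stated in the Ixia report.

# Cross-check Total Tx Frames against theoretical Gigabit line rate:
# frames = ports * duration * line_rate / ((frame_size + wire_overhead) * 8)
PORTS = 32
DURATION_S = 5 * 60            # 5-minute test
LINE_RATE_BPS = 1e9            # Gigabit ports
WIRE_OVERHEAD_BYTES = 20       # preamble (8) + inter-frame gap (12), assumed

reported = {64: 14285711648, 128: 8108112000, 256: 4347829856,
            512: 2255634400, 1024: 1149426432, 1280: 923077824, 1518: 780229824}

for size, frames in reported.items():
    theoretical = PORTS * DURATION_S * LINE_RATE_BPS / ((size + WIRE_OVERHEAD_BYTES) * 8)
    print(f"{size:>4} bytes: reported {frames:>14,d}, "
          f"theoretical {theoretical:>14,.0f} ({frames / theoretical:.4%} of line rate)")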