ProCurve 5300xl Specifications
This programmable functionality was originally designed and implemented in the popular
ProCurve Switch 4000M family, where it was used to add new ASIC-related features well after
the product's initial release. Customers with existing units gained the new features through a
free software download; the customer's investment in the Switch 4000M was preserved by
providing functionality that would not have been possible without the ASIC programmability.
Because it builds on the Switch 4000M implementation, the ProCurve 5300xl programmable
capability is a second-generation design.
Fabric Interface
After the packet header leaves the programmable section, it is forwarded to the Fabric
Interface. The Fabric Interface makes final adjustments based on priority information,
multicast grouping, and so on, and then uses this header to modify the actual packet header
as necessary.
The Fabric Interface then negotiates with the destination N-Chip for outbound packet buffer
space. If the outbound port is congested, WRED (weighted random early detection) can also be
applied at this point as a congestion avoidance mechanism.
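The WRED decision described above can be sketched as follows. This is a minimal, generic model of the technique, not the 5300xl's actual algorithm; the threshold parameters and their values are illustrative assumptions.

```python
import random

def wred_drop(queue_depth, min_th, max_th, max_p):
    """Simplified WRED: below min_th nothing is dropped; at or above
    max_th everything is dropped; between the two thresholds the drop
    probability ramps linearly up to max_p."""
    if queue_depth < min_th:
        return False                    # no congestion: always enqueue
    if queue_depth >= max_th:
        return True                     # severe congestion: always drop
    # linear ramp between the two thresholds
    p = max_p * (queue_depth - min_th) / (max_th - min_th)
    return random.random() < p
```

Dropping a few packets probabilistically before the queue is completely full signals well-behaved senders (e.g. TCP) to slow down, avoiding the synchronized losses that occur when a full queue tail-drops everything at once.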
Finally, the N-Chip Fabric Interface forwards the entire packet through the F-Chip to an
awaiting output buffer on the N-Chip that controls the packet's outbound port. Packet transfer
from the N-Chip to the F-Chip takes place over the 9.6 Gbps full-duplex backplane connection,
which is also managed by the Fabric Interface.
The N-Chip CPU
The N-Chip contains its own CPU, a 66 MHz ARM-7, which handles Layer 2 address learning,
packet sampling for the XRMON function, local MIB counters, and other module-related
operations. Overall, the local CPU offloads the master CPU by providing a distributed approach
to the general housekeeping tasks associated with every packet. MIB variables that must be
updated with each packet can be maintained locally, the Layer 2 forwarding table is kept fresh
by this CPU, and per-port protocols such as Spanning Tree and LACP also run on it.
The local CPU, being a full-function microprocessor, allows functionality updates through future
software releases.
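The distributed-counter idea above can be sketched in a few lines. This is an illustrative model only; the class names, counter names, and the `snmp_get` interface are assumptions, not the actual firmware design.

```python
from collections import Counter

class ModuleCPU:
    """Local N-Chip CPU: updates per-packet MIB counters itself,
    so the master CPU is not interrupted for every packet."""
    def __init__(self):
        self.counters = Counter()

    def on_packet(self, port, length):
        # per-packet housekeeping stays local to the module
        self.counters[(port, "ifInOctets")] += length
        self.counters[(port, "ifInUcastPkts")] += 1

class MasterCPU:
    """Master CPU: aggregates module counters only when a request
    arrives, instead of on every packet."""
    def __init__(self, modules):
        self.modules = modules

    def snmp_get(self, port, var):
        return sum(m.counters[(port, var)] for m in self.modules)
```

The design point is that per-packet work scales with traffic, so it is pushed to the per-module CPUs, while the master CPU only does per-request work.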
F-Chip
The fabric chip, or F-Chip, located on the backplane of the switch, provides the crossbar
fabric that interconnects the modules. The crossbar allows simultaneous wire-speed connections
from any module to any other module. As mentioned in the N-Chip section, the connection
between the F-Chip and each N-Chip (module) in the chassis is a 9.6 Gbps full-duplex link.
One unique function of the F-Chip is to replicate multicast packets automatically and send the
copies to the destination modules. This is more efficient than having the source N-Chip do the
replication: because only a single copy of the multicast packet needs to be sent to the F-Chip,
bandwidth is saved on the high-speed connection between the source N-Chip and the F-Chip.
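The bandwidth saving is easy to quantify. The sketch below is a hypothetical illustration (the function name and parameters are assumptions); it counts bytes crossing the source module's uplink for one multicast packet under each replication strategy.

```python
def source_link_bytes(pkt_len, n_dest_modules, replicate_at_fabric):
    """Bytes crossing the source N-Chip -> F-Chip link for a single
    multicast packet. Replicating at the F-Chip sends one copy
    upstream no matter how many modules must receive it."""
    if replicate_at_fabric:
        return pkt_len                      # one copy, F-Chip fans out
    return pkt_len * n_dest_modules         # source sends every copy
```

For a maximum-size 1518-byte packet destined to seven other modules, fabric-side replication puts 1518 bytes on the source link instead of 10626, a sevenfold saving on that link.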
The Master CPU
Along with the F-Chip, the backplane of the switch also contains the master CPU, 32 MB of RAM,
and 12 MB of flash ROM. The master CPU, a 200 MHz PowerPC 8240, runs the routing protocols,
maintains the master routing tables and master MIBs, responds to SNMP requests, and manages
the user interfaces. The master CPU is also responsible for coordinating switch bootup. Two
copies of the switch operating system can be stored in the flash ROM, allowing the user to
recover quickly if the main code copy is corrupted or a code update produces undesired
results.
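The recovery behavior enabled by the two flash copies can be sketched as a simple fallback rule. This is a generic illustration under stated assumptions: the image names and the final fallback target are hypothetical, not the switch's documented boot sequence.

```python
def select_boot_image(primary_ok, secondary_ok):
    """Dual-image boot fallback: try the main OS copy first, then
    the second copy, then give up to some recovery environment
    (called 'monitor' here purely for illustration)."""
    if primary_ok:
        return "primary"      # normal case: main code copy boots
    if secondary_ok:
        return "secondary"    # main copy corrupt or bad update
    return "monitor"          # neither image usable
```

The value of the second copy is that a failed or corrupt code update never leaves the switch without a bootable operating system.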
Input to the CPU is prioritized into four queues. Queuing this way prevents the user from
being locked out of the switch user interface by unintentionally high levels of traffic, such
as broadcast storms. More significantly, it also prevents a user lockout due to intentionally
high levels of traffic, such as denial-of-service attacks.
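The protection these queues provide can be modelled with a strict-priority dequeue. This is a minimal sketch; the class name, queue count ordering (0 = highest), and method names are illustrative assumptions, not the switch's internal interface.

```python
class PrioritizedCpuInput:
    """CPU input modelled as four strict-priority queues: management
    traffic in the highest queue keeps being serviced even while a
    broadcast storm floods a lower one."""
    def __init__(self, levels=4):
        self.queues = [[] for _ in range(levels)]

    def enqueue(self, priority, pkt):
        self.queues[priority].append(pkt)

    def dequeue(self):
        for q in self.queues:        # always serve the highest
            if q:                    # non-empty priority first
                return q.pop(0)
        return None                  # nothing pending
```

Even if thousands of storm packets sit in the lowest queue, a single management packet placed in the highest queue is serviced on the very next dequeue, which is why the user interface stays reachable.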
Packet Buffer Memory Management
Each 5300xl module uses 6.2MB for the outbound packet buffer memory, arranged as 4096
buffers of 1518 bytes in length (the maximum Ethernet packet size). This memory is divided
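The buffer arithmetic above checks out; a quick computation (using decimal megabytes, which is how the 6.2 MB figure is evidently stated):

```python
BUFFERS = 4096
BUF_LEN = 1518                     # max Ethernet frame, bytes

total = BUFFERS * BUF_LEN
print(total)                       # 6217728 bytes
print(round(total / 1e6, 1))       # 6.2 (decimal MB)
```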