The benefits of Ethernet switching
Traditional shared Ethernet is a baseband medium, which means that only one station can send data onto the
medium at any one time; multiple signals cannot be multiplexed as they can on a broadband medium. On a shared Ethernet hub, stations resolve access contention by listening on the receive pair of wires to check whether any other station is sending data. Implementing Ethernet switching instead of shared Ethernet provides the following improved operational features:
Dedicated collision domains
Each port on a switch is in its own collision domain, so a station connected to the LAN via a switch port rather than a hub port does not have to compete for access to the wire by listening for collisions before sending data. This increases the effective bandwidth of the LAN.
Traffic filtering and forwarding
A switch functions as a multi-port bridge, learning the location of each station's MAC address by listening to live traffic. For each frame that it switches, it forwards the frame only to the port where the destination MAC address resides; the switch is said to filter the frame on all other ports. This significantly reduces unnecessary traffic on the LAN and improves the efficiency with which bandwidth is utilized. Broadcast frames, however, are flooded to all ports, hence a switch is said to create multiple collision domains while all ports remain in the same broadcast domain. This is often desirable, since broadcasting can be a necessary and often efficient means of communication in the LAN, as opposed to the WAN. Microsoft Windows uses NetBIOS, which relies heavily on broadcasting. Another example is the Address Resolution Protocol (ARP), whereby an ARP broadcast must reach every station on the IP subnet in order to resolve a destination IP address to its MAC address.
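The learn/filter/forward behaviour described above can be sketched in a few lines of Python. This is an illustrative model only, not any vendor's implementation; the port numbers and MAC strings are invented for the example.

```python
# Minimal sketch of a learning switch's filter/forward decision.
BROADCAST = "ff:ff:ff:ff:ff:ff"

class LearningSwitch:
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}  # MAC address -> port it was last seen on

    def handle_frame(self, src_mac, dst_mac, in_port):
        """Return the set of ports the frame is forwarded out of."""
        # Learn: remember which port the source MAC lives on.
        self.mac_table[src_mac] = in_port
        # Broadcasts and unknown unicasts are flooded to all other ports.
        if dst_mac == BROADCAST or dst_mac not in self.mac_table:
            return self.ports - {in_port}
        out_port = self.mac_table[dst_mac]
        # Filter: never send a frame back out the port it arrived on.
        if out_port == in_port:
            return set()
        return {out_port}
```

A first frame to an unknown destination is flooded; once the switch has seen a station's source address, subsequent frames to it are forwarded out of exactly one port and filtered on all others.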
Full-duplex operation
Traditional shared Ethernet operates in half-duplex mode: stations cannot send and receive at the same time. As a result of the baseband nature of Ethernet, only one station can access the medium and send data at any one time, and stations on a shared medium resolve contention by listening for collisions. Full-duplex transmission simply means that stations can send and receive at the same time. In Ethernet this is accomplished merely by not listening for collisions. Disabling collision detection is only valid if the station is attached to its own dedicated switch port, so that there are only two devices in the collision domain: the station itself and the switch port. Each can then send to and receive from the other without listening for collisions. This is sometimes called point-to-point Ethernet. Full-duplex operation, like many networking terms, has been abused and has had disingenuous claims associated with it. The marketing wars amongst switch vendors have prompted claims that full-duplex operation doubles throughput. It does significantly improve throughput, but it can hardly be said to double it, since application traffic is unlikely to be simultaneously sent and received at wire speed by the same station.
Understanding client-server traffic flow
Obtaining a detailed understanding of client-server traffic flow is arguably the greatest single challenge when implementing a switched LAN design. Where a network is being redesigned from a shared to a switched LAN in order to meet increased bandwidth requirements, it is possible to gather detailed quantitative information on traffic profiles. On a completely new network this is not so easy prior to rollout; however, failing a rigorous quantitative analysis, a reasonable qualitative analysis of traffic profiles should be achievable. It is important to obtain reasonable estimates of the following: which clients are talking to which servers, for how long, and how much bandwidth is being consumed, now and in the future; the physical and logical location of all clients and servers, in other words the client-server data path for each application; and the level of inter-server traffic, which again is consistent with the need to understand all major traffic flows across the network. The introduction of a LAN switch can be of limited benefit if these traffic flows are not adequately understood. To take an extreme example, consider the case where the server is remote and must be accessed across a 56 kbps WAN link. A LAN switch will not significantly increase performance here, since the bottleneck is in the WAN rather than the LAN.
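The 56 kbps example can be made concrete with some back-of-the-envelope arithmetic. The file size and link speeds below are illustrative figures, not from the original text.

```python
# Transfer time for a 1 MB file over each leg of the path.
FILE_BITS = 1_000_000 * 8  # 1 MB expressed in bits

def transfer_seconds(link_bps):
    """Idealised serialisation time, ignoring protocol overhead."""
    return FILE_BITS / link_bps

lan_10m  = transfer_seconds(10_000_000)   # shared 10 Mbps Ethernet: 0.8 s
lan_100m = transfer_seconds(100_000_000)  # switched 100 Mbps port: 0.08 s
wan_56k  = transfer_seconds(56_000)       # 56 kbps WAN link: ~143 s
```

Upgrading the LAN leg cuts its contribution by an order of magnitude, but end-to-end response time barely moves because the WAN leg dominates by a factor of more than a thousand.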
Link aggregation
Several proprietary methods, along with the 802.3ad standard, exist to allow multiple links to be aggregated into a single logical high-speed connection. Multiple physical connections between the same two switches must be treated as a single logical connection; otherwise spanning tree will block the redundant links. This capability can be used to provide high-speed connections between core switches and also to high-bandwidth servers. Even prior to the deployment of 10 Gigabit Ethernet, the capability exists to aggregate up to eight Gigabit Ethernet ports to provide a high-speed campus backbone.
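Aggregated links do not simply round-robin frames: frames of a given conversation must stay on one physical member link so they arrive in order. Implementations typically hash address fields to select the member link. The sketch below assumes a CRC32 hash of the MAC pair purely for illustration; real switches use vendor-specific hash inputs (MAC addresses, IP addresses, ports).

```python
import zlib

def choose_member_link(src_mac: str, dst_mac: str, num_links: int) -> int:
    """Pick a member link of an aggregated bundle for a frame.

    Hashing the address pair keeps every frame of one conversation on
    the same physical link (preserving frame order) while spreading
    different conversations across the bundle.
    """
    key = (src_mac + dst_mac).encode()
    return zlib.crc32(key) % num_links
```

A consequence of this design is that a single conversation can never exceed the speed of one member link; the aggregate bandwidth is only realised across many concurrent conversations.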
The concept of virtual LANs (VLANs)
Each port on a switch represents a separate collision domain; however, all ports on a switched network are in the same broadcast domain. On a completely flat switched network, a broadcast issued by any station on the campus LAN must be processed by every station on that LAN. The interruption of each device's CPU is probably a more serious issue than the bandwidth consumed by broadcasts in a LAN environment. VLANs provide a mechanism for creating multiple broadcast domains in a switched network: a broadcast issued by a particular station then propagates only to stations on the same VLAN. A router is required to enable communication between VLANs, just as one is required for communication between physical LANs. This is easily understood by noting that a VLAN corresponds to an IP subnet: in a switched environment, if two stations are on the same VLAN then they must also be on the same IP subnet.
By filtering broadcasts VLANs impose a certain level of security similar to that normally associated with routed subnets. Consider the case of a network analyser that is plugged into a particular switch port. If this port is assigned to a particular VLAN then the analyser will only detect broadcasts associated with that VLAN rather than for the entire LAN. Security policies can also be configured on the router that controls the inter-VLAN communication just as for conventional LAN segments.
IP address plan
The IP address plan may also in part dictate the VLAN strategy. For example, if a 26-bit mask is being used for LAN subnets, then the maximum number of hosts per subnet is 62 (64 addresses, less the network and broadcast addresses). This means that the entire LAN cannot simply remain 'flat' with more than 62 hosts. If there are a large number of hosts on the switched LAN, VLANs must be created with a maximum of 62 hosts per VLAN.
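The host count for any prefix length can be checked with the Python standard library; the 192.168.1.0/26 prefix below is just an example network, not one from the text.

```python
import ipaddress

# A /26 leaves 6 host bits: 2**6 = 64 addresses in the block.
subnet = ipaddress.ip_network("192.168.1.0/26")

# Subtract the network and broadcast addresses for usable hosts.
usable_hosts = subnet.num_addresses - 2   # 64 - 2 = 62
```

The same calculation generalises: a /24 yields 254 usable hosts, a /27 yields 30, and so on, which is the arithmetic that links the addressing plan to the per-VLAN host limit.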
VLANs go some way towards combining the intelligence of a routed network with the flexibility of a switched LAN. For example, a user on a particular VLAN can remain on that VLAN after moving to a different physical location within the campus. All that is required is a change in the relevant switch configurations; there is no need for a hardware change or re-patching of cables. This flexibility is further facilitated by the fact that VLANs can be extended across multiple switches using a VLAN trunking protocol. Generally, VLANs have helped simplify the administration and management of moves, adds and changes in a LAN environment that uses layer 3 processing.
There are a number of issues to be considered when planning the implementation of VLANs on a large campus LAN. The number of VLANs to be deployed must be decided, along with the number of hosts that each VLAN should support. The VLAN architecture, and how far the VLANs span throughout the campus, is another important design issue.
VLANs can be local to the wiring closet, where, for example, each floor in a building represents a different VLAN regardless of the work function of the users. This means that broadcasts are locally contained; the downside is that traffic to other wiring closets, where servers might reside, must be routed. There is a growing trend to share enterprise resources at centralised locations such as server farms, fuelled by the increased prominence of Web-based computing and shared office applications. With most resources centralised, it is likely that client-to-server traffic will be routed in any case, unless the LAN is one big IP subnet, which would not scale well for broadcasts. This rationale has made so-called 'local' VLANs a popular design philosophy. The alternative is to allow VLANs to span the entire LAN or campus in an effort to ensure that a minimal amount of client-to-server traffic incurs the additional latency of routing. This may be feasible where workgroups are relatively autonomous, e.g. engineering, marketing or legal. Modern server platforms tend to support multiple shared applications, however, which can undermine so-called 'end-to-end' VLANs. Improvements in layer 3 switching technology have also reduced the latency associated with routing and layer 3 processing. A potentially more compelling reason for deploying local VLANs is that they prevent the propagation of broadcasts across the campus backbone.
Number of VLANs
Avoid creating VLANs 'for the sake of it'. The network designer should be clear on the benefit that will accrue as a result of implementing VLANs. With this in mind, the number of VLANs to be used can be decided. This decision cannot be made independently of the IP addressing plan, where the number of LAN subnets will usually correlate with the number of VLANs deployed. Depending on the organisation's personnel structure, it may or may not be possible to group users with a common work function in the same VLAN.
Number of users per VLAN
It is good practice to set an estimated maximum number of users per VLAN. This does not necessarily have to be consistent throughout the enterprise; for example, VLANs containing clients that use a high-bandwidth or broadcast-intensive application should have fewer users. The IP addressing plan may also limit the number of hosts on each subnet and hence on each VLAN.
Optimizing the Spanning Tree Domain
The 802.1d spanning tree protocol (STP) is necessary on bridged or switched networks in order to allow redundant inter-switch links whilst preventing bridging loops, which would otherwise cause broadcast storms. The fact that spanning tree can be slow to converge poses challenges that should ideally be resolved at the network design stage.
Most switch vendors offer proprietary methods to speed up spanning tree convergence. For example, Cisco's PortFast feature moves a port that does not connect to another switch directly to the forwarding state, skipping the listening and learning stages. This prevents PCs from having connectivity problems at boot time due to their port being slow to reach a forwarding state. This is a useful feature, as STP is only required on ports connecting to other switches.
There is, however, a standardised enhancement in the form of Rapid STP (RSTP), defined in the 802.1w standard, which, as the name suggests, specifically addresses the convergence issue associated with 802.1d. RSTP performs additional calculations and retains more topological information about the network. This, coupled with its use of BPDUs as a keepalive mechanism, enables it to converge locally in around 6 seconds, as opposed to up to 50 seconds in the case of 802.1d. 802.1w is backward compatible with 802.1d and is recommended on modern LANs.
The root bridge triggers the spanning tree BPDU messages that propagate throughout the switched network every two seconds. This is one reason why the root bridge should be located at a central point close to the backbone of the network: it ensures that all downstream switches experience similar delays in receiving and processing BPDU messages, which enhances the stability of the spanning tree calculation. All ports on the root switch are in a forwarding state for the purposes of spanning tree, so it typically carries a higher processing load than other switches and should be one of the more powerful switches on the network. Clearly the root switch should be carefully chosen. The spanning tree protocol automatically elects a root switch based on the lowest bridge ID. With all parameters at their default values, this becomes a lottery won by the switch with the lowest MAC address. However, the root election can be biased by lowering the bridge priority on the intended root device. This is desirable not only for the reasons just mentioned but also because it protects against a newly commissioned switch triggering a root election simply because it has a lower MAC address than the existing root switch.
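The election logic reduces to comparing (priority, MAC) pairs, lowest wins. The sketch below models this; the switch names, priorities and MAC addresses are invented for the example.

```python
def elect_root(bridges):
    """Elect the STP root: the lowest bridge ID wins, where the bridge
    ID is compared first on priority, then on MAC address."""
    return min(bridges, key=lambda b: (b["priority"], b["mac"]))

switches = [
    {"name": "access-1", "priority": 32768, "mac": "00:11:22:33:44:55"},
    {"name": "core-1",   "priority": 8192,  "mac": "00:aa:bb:cc:dd:ee"},
    {"name": "access-2", "priority": 32768, "mac": "00:00:0c:12:34:56"},
]

# With every switch left at the default priority of 32768, the lowest
# MAC address would win; lowering core-1's priority makes the outcome
# deterministic regardless of what hardware is added later.
root = elect_root(switches)
```

Lowering the intended root's priority (8192 here) wins the election outright, whereas leaving defaults in place would hand the role to whichever box happens to carry the lowest MAC address.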
A port that has spanning tree enabled must go through the stages of blocking, listening and learning before moving to a forwarding state. This is at the heart of spanning tree's slow convergence but is necessary to ensure a loop-free topology. All of the major switch vendors have proprietary methods of accelerating spanning tree convergence in a safe manner. For example, the spanning tree protocol can be disabled on a per-port basis in order to move the port directly to a forwarding state. This prevents problems such as workstation DHCP requests timing out after boot-up because the port has yet to move to a forwarding state. Extreme care should be taken whenever disabling spanning tree: it should never be disabled on a port that may connect to another switch.
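The often-quoted 50-second figure falls directly out of the default 802.1d timers, which the port must sit through on its way to forwarding:

```python
# Default 802.1d timer values, in seconds.
MAX_AGE       = 20  # time in blocking before a stale root BPDU is aged out
FORWARD_DELAY = 15  # time spent in each of the listening and learning states

# Worst case: wait out max-age, then listening, then learning.
worst_case_convergence = MAX_AGE + 2 * FORWARD_DELAY   # 50 seconds

# If the failure is detected immediately (e.g. link loss on a
# directly connected port), only listening and learning remain.
direct_failure_convergence = 2 * FORWARD_DELAY         # 30 seconds
```

This is why skipping the listening and learning states on edge ports matters so much for workstations: without it, every host boot pays up to 30 seconds of dead time before the first frame can be forwarded.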
One final issue that must be resolved at the design stage is how spanning tree is handled in a VLAN environment. It is possible to implement a single spanning tree domain for the entire campus LAN. Alternatively, a separate instance of spanning tree can be run on each VLAN, meaning that each VLAN could potentially have a different (or indeed the same) root switch. It is important to be clear about which implementation is being followed and to plan accordingly. For example, with multiple spanning tree domains it may be prudent to prevent one switch from being the root for all VLANs, minimising the traffic disruption should that switch fail. The advantage of multiple spanning tree domains is that they are smaller and hence faster to converge, and they allow an optimised choice of root switch for each VLAN. A single spanning tree implementation, on the other hand, minimises BPDU traffic and the amount of spanning tree processing that the switches must perform. As ever, it is a question of understanding your environment before making any decisions.
IP telephony
With the advent of IP telephony comes a number of design issues to be resolved. Voice traffic should be placed on its own VLAN for both performance and security reasons, hence one or more new IP subnets must be commissioned for the IP phones. Ideally these should be easily distinguishable, e.g. 10.99.99.0/24, in order to facilitate troubleshooting and management in general.
The number of IP telephony servers and their location must be decided. The capacity of the servers must be assessed in terms of the number of registered phones they can support as well as the number of busy hour call attempts supported. This assessment, combined with a budgetary analysis, can help decide between a centralised or distributed model for the location of the servers.
Power can be provided to the phones via a separate power unit for each phone, a power patch panel, or by using Ethernet switches that support inline power. The latter option is usually considered the most reliable and cost-efficient. However, the question of standardisation should never be far from your mind when dealing with IP telephony: if the IP phones are from a different vendor than the Ethernet switch, will the inline power work? While there are standards, they are not always watertight, so the only way to know for sure is through practical pilot testing.
QoS is always an issue with voice. Bandwidth in the LAN is usually plentiful, so congestion management might not necessarily warrant special configuration. Traffic from IP phones is normally marked as IP precedence 5 and 802.1p CoS 5. This classification and marking can be configured on the local switch, and many IP phones pre-mark their traffic; in that case the switch should be configured to 'trust' (i.e. not re-mark) packets from IP phones. Data packets from the PCs should be routinely re-marked as precedence 0 (best effort) to prevent data frames being sent to the high-priority queues on router WAN interfaces.
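The precedence values above are bit fields in the IPv4 header, which makes the markings easy to verify with a sniffer. IP precedence occupies the top three bits of the ToS byte, and under DiffServ those same bits form the class selector of the DSCP. A small sketch of the bit arithmetic:

```python
def precedence_to_tos(precedence: int) -> int:
    """IP precedence occupies the top three bits of the IPv4 ToS byte."""
    if not 0 <= precedence <= 7:
        raise ValueError("IP precedence is a 3-bit field (0-7)")
    return precedence << 5

def precedence_to_dscp(precedence: int) -> int:
    """Under DiffServ the same three bits become the DSCP class
    selector (the top three of the six DSCP bits)."""
    return precedence << 3

# Voice marked precedence 5 -> ToS byte 0xA0, class selector CS5 (DSCP 40).
# Re-marked best-effort data   -> ToS byte 0x00, DSCP 0.
```

So a capture showing a ToS byte of 0xA0 on the voice VLAN, and 0x00 on frames from the PCs, confirms that the trust and re-marking policy is working as intended.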
About the author
Cormac Long is the author of IP Network Design and Cisco Internetworking and Troubleshooting.
This tip originally appeared on SearchNetworking.com.
This was first published in November 2006