This paper is the intellectual property of the author(s). It was presented at CAUSE98, an EDUCAUSE conference, and is part of that conference's online proceedings. See http://www.educause.edu/copyright.html for additional copyright information.

Using Fast/Gigabit Ethernet to Satisfy Expanding Bandwidth Needs

Joe Rogers
University of South Florida, Academic Computing
Tampa, Florida

Abstract

Computer networks have become an indispensable part of everyone's life. However, advances in desktop computer technology and multimedia applications have left most computer networks in the dust. Processors have continued to double in performance every two years, and we are entering a new age of application development that makes heavy use of multimedia streaming.

Despite these advances and an increase in our student enrollment to over 37,000, our University network has, for the most part, been running on the same 10 Megabit (Ethernet) technology for the past 8 years. In an effort to provide better quality of service, we have initiated a plan to upgrade our 4,000 faculty and student lab desktop network connections to Fast Ethernet with a Gigabit Ethernet backbone connecting 20 academic buildings. The equipment and over 600,000 feet of new cabling are currently being installed with the help of student employees. This discussion will cover the many aspects of this plan, including technology selection decisions, implementation issues, and the benefits we expect to see after completion.

INTRODUCTION

The University of South Florida, Academic Computing began our 100 Megabit University Project after seeing that the existing campus network would not be able to support many of the new multimedia applications we were planning to deploy. Our campus network was, for the most part, shared media 10 megabit building networks routed to a FDDI backbone. This backbone connected five primary campus wiring centers, each housing a Cisco AGS+ or 3Com NetBuilder II router. This network served our campus well for the past 8 years, but it was already beginning to show signs of stress even with existing applications. There were many decisions to be made when planning the necessary upgrades. The remainder of this paper explains these decisions and will hopefully provide valuable insight for others faced with the same upgrade choices.

MOTIVATION FOR UPGRADING

The first thing to consider when upgrading a network is what kinds of traffic will be flowing across it. For our campus, we had several applications in mind. Probably the most demanding would be our Internet2 (I2) research initiatives. The high-speed network research groups, along with the scientists and engineers who need large amounts of bandwidth to carry out their research, would be using this new network to reach other I2 institutions. We needed to provide an infrastructure that could handle anything they might need for their current research projects and also scale easily to accommodate future needs.

In addition, several departments on the campus were beginning to deploy new multimedia applications that promised to take up large amounts of bandwidth. Some applications would start off small with 500kbps streams of audio and video, but others, like those considered by our distance learning department for real-time course delivery to our satellite campuses, would begin with streams of at least 5Mbps.

These two application arenas alone would generate enough load to cripple our existing network at even a minimal level of deployment. Our new network had to handle these applications and the many others that we could not anticipate.
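
To put rough numbers behind that claim, the short sketch below estimates how many of the streams mentioned above fit on a shared 10Mbps segment versus a dedicated switched 100Mbps port. The 40% usable-capacity figure for shared ethernet is an assumption used only for illustration, not a measurement from our network.

    # Back-of-envelope load estimate (illustrative only).
    # Stream rates come from the text above; the shared-media efficiency
    # factor is an assumption, not a measured value.

    SHARED_10MBPS_USABLE = 10.0 * 0.4   # shared ethernet rarely sustains its full rate
    SWITCHED_100MBPS = 100.0            # dedicated fast ethernet port

    streams = {
        "500kbps audio/video stream": 0.5,
        "5Mbps distance learning feed": 5.0,
    }

    for name, mbps in streams.items():
        shared = int(SHARED_10MBPS_USABLE // mbps)
        switched = int(SWITCHED_100MBPS // mbps)
        print(f"{name}: ~{shared} fit on a shared 10Mbps segment, "
              f"{switched} on a switched 100Mbps port")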

OVERVIEW OF THE NEW NETWORK DESIGN

Approximately 90% of the buildings at our University are connected via multimode fiber in a physical star topology to the five primary campus wiring centers mentioned above. These five wiring centers are then connected to each other with multimode and singlemode fiber cable.

The core of our new network consists of two 9-port gigabit ethernet layer 2 switches. Attached via singlemode fiber to each of these switches are 13-slot modular layer 2/layer 3 switches. These larger chassis provide ethernet and fast ethernet connectivity via multimode fiber to each building on the campus. A picture of one of our campus wiring centers is shown in Figure 1. The ports on these switches are grouped into VLANs, and layer 3 switching (routing) is performed for connectivity between VLANs. These VLANs represent administrative boundaries between departmental subnets. In buildings that require aggregation of large amounts of ethernet and fast ethernet fiber from their internal wiring centers, we use smaller modular switches. These switches provide layer 2 connectivity between internal building wiring centers and also provide the ability to establish port-based VLANs for further segmentation of departmental networks. In the building wiring centers we employ multiple technologies including 10Mbps ethernet hubs, 100Mbps hubs, and 10/100Mbps fast ethernet switches. Figure 2 shows a basic drawing of the network topology just described. This drawing represents only a small portion of the entire campus network.
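
For readers without Figure 2 at hand, the hierarchy just described can be summarized as nested data. The sketch below is only a conceptual model; the device names and the single building shown are hypothetical placeholders, not an inventory of the actual installation.

    # Illustrative model of the campus topology hierarchy described above.
    # Names and counts are hypothetical placeholders, not the real inventory.

    campus_network = {
        "core": ["GE-switch-A", "GE-switch-B"],          # two 9-port gigabit layer 2 switches
        "wiring_centers": {                              # five primary centers, 13-slot L2/L3 chassis
            "wiring-center-1": {
                "uplinks": ["GE-switch-A", "GE-switch-B"],  # singlemode fiber to both core switches
                "buildings": {
                    "building-X": {
                        "aggregation": "small modular L2 switch",  # joins internal wiring closets
                        "closets": ["10Mbps hubs", "10/100 hubs", "10/100 switches"],
                    },
                },
            },
            # ... wiring-center-2 through wiring-center-5 follow the same pattern
        },
    }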

Our multiple campus networks support over 35,000 students and over 10,100 staff and faculty in 344 buildings. The scope of this particular project was originally limited to approximately 30 of those buildings. We concentrated on the buildings that house the colleges and their support staff. However, the scope of this project has grown substantially over the past two years. Although we are not planning to provide desktop wiring in a number of the other buildings on our campus, the project has in many ways benefited those buildings and their users through the expansion of the core network and our efforts to replace legacy equipment. The remainder of this paper will describe the choices we encountered when designing the network to support this University.

BACKBONE TECHNOLOGY SELECTION

Our first decision in upgrading the network was what technology to choose for the new backbone. The existing FDDI backbone was a fine technology for aggregation of 10Mbps networks, but even with some of the new FDDI switches available on the market, it was not a good choice for the future. FDDI interface costs never really dropped, and most vendors are beginning to phase out support for the technology.

This left us with ethernet and ATM. At the time this project was being designed, ethernet and ATM were being sold as very direct competitors for backbone and desktop connectivity. Some will argue that they are still close competitors for the backbone, but not many people are still interested in ATM to the desktop. We had several different network equipment vendors knocking at our door praising each of the technologies. However, they quickly came to understand, and often agreed with, our strong arguments against using ATM for backbone and desktop connectivity. The ethernet vs. ATM argument often boils down to a nearly religious debate. For us, it came down to support for existing infrastructures and, of course, cost.

ATM is a useful technology for certain environments. In fact, we use ATM for WAN connectivity to our regional campuses. For us, ATM works well mainly as a trunking protocol. We need to deliver high speed data pipes to our other campuses, but at the same time we provide T1's for the phone switches and fractional T1's for existing video conferencing systems. So, rather than purchasing separate T1 and DS3 circuits, we use ATM to trunk all of these services together into one larger (OC-3) pipe. We do not use any of the QoS or dynamic switching services of ATM; it serves simply as a multiplexer for these services.
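
As a rough illustration of how such an OC-3 trunk gets carved up, the sketch below budgets the circuit against the services named above. The circuit counts and the fractional T1 rate are hypothetical, and ATM cell and SONET overhead are ignored, so the numbers are only approximate.

    # Rough OC-3 trunk budget (illustrative; circuit counts are hypothetical,
    # and ATM cell / SONET overhead is ignored).
    OC3_MBPS = 155.52          # nominal OC-3 line rate
    T1_MBPS = 1.544            # full T1 for a phone switch
    FRACTIONAL_T1_MBPS = 0.768 # assumed half-T1 for a video conferencing system

    voice_t1s = 4              # assumed number of voice trunks
    video_circuits = 2         # assumed number of video circuits

    reserved = voice_t1s * T1_MBPS + video_circuits * FRACTIONAL_T1_MBPS
    remaining_for_data = OC3_MBPS - reserved

    print(f"Reserved for voice/video: {reserved:.2f} Mbps")
    print(f"Left over for the data trunk: {remaining_for_data:.2f} Mbps")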

As a campus backbone technology, we evaluated ATM as it is used with LANE. LAN emulation is quite a beast. Without going into a tutorial on the inner workings of ATM and LANE, which could be a full paper in and of itself, LANE essentially takes care of mapping ethernet fringe networks (where the users typically reside) onto an ATM network so that the different ethernet segments can interact. It sounds simple, but unfortunately the actual implementation of the protocol is nowhere near simple.

Rather than deal with the complexities and vendor interoperability issues of ATM as a campus backbone, we began to look at gigabit ethernet. Having already worked with ethernet and fast ethernet networks for the past 8 years, we knew that implementing a gigabit ethernet (GE) backbone would be considerably easier. However, at the time, GE was just on the horizon. The standards committees were still working feverishly to get a single standard approved. Networking trade shows and conferences were just beginning to feature GE vendors demonstrating mostly proprietary interfaces, simply to show that they were at the leading edge of technological development. We believed that GE was going to be a completely standard and stable product given a little time. Interoperability had never been a problem with ethernet and fast ethernet, and since GE uses the same frame format, it was a pretty safe bet that once the standards were set, interoperability would not be a problem.

The next consideration after deployment concerns was cost. ATM interfaces were still very expensive. To obtain ATM OC-12 interfaces (622Mbps, not even a full gigabit), we would have had to pay at least $3,000 per interface. Gigabit interface costs could only be estimated, and pricing from vendors was hard to pin down. However, we had previous experience with fast ethernet deployment and the costs associated with those interfaces. Fast ethernet fiber interfaces began life at $1,200-$1,500 per port, but it was not long before they had dropped below $800. Our hope was that GE would follow suit and drop quickly. Our hopes were further fueled by the pricing we received, which placed starting GE LX interfaces at $1,400 each.
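
Expressed as cost per megabit of interface bandwidth, those prices made the comparison fairly stark. The sketch below simply divides each quoted interface price by its nominal line rate; it ignores chassis, optics, and maintenance costs, so it is an approximation rather than a full cost model.

    # Cost per Mbps of backbone bandwidth (interface price only; an approximation).
    interfaces = {
        "ATM OC-12": (3000, 622),            # at least $3,000 per interface, 622Mbps
        "Gigabit ethernet LX": (1400, 1000), # quoted starting price
        "Fast ethernet fiber": (800, 100),   # after early prices dropped below $800
    }

    for name, (price_usd, mbps) in interfaces.items():
        print(f"{name}: ${price_usd / mbps:.2f} per Mbps")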

DESKTOP TECHNOLOGY SELECTION

With gigabit chosen as the backbone technology, we had to consider the desktop arena. This decision was far less complicated. Since NIC and switch port costs were prohibitively high for ATM, it was quickly ruled out as a feasible alternative. We needed a technology that could be cost-effectively and easily deployed side-by-side with existing ethernet networks. As a result, our options seemed limited to some form of ethernet.

We could implement switched 10Mbps, shared 10/100Mbps, or switched 10/100Mbps. A switched 10Mbps network would certainly be cost effective. Interface costs for switched 10Mbps were beginning to approach hub prices and the features available in these switches were quite rich. Switching would provide good traffic containment, segmentation, prioritization, and full duplex user connections. However, 10Mbps service was simply too slow for some of the applications already running on our networks. It made no sense to deploy 10Mbps switches if we would just have to upgrade them in a few years anyway.

Shared 10/100Mbps connections were a moderately priced alternative. At the time we were planning this project, 10/100 hubs were just appearing on the market, with prices in the $80 per port range. More network bandwidth would certainly be available to users on a 100Mbps shared segment, but it is still a shared media network, which means no support for full duplex user connections. Traffic segmentation is also not possible with shared media: if one person on a hub watched a high bandwidth video, everyone else on that segment would suffer the slower network performance.

Switched 10/100Mbps is obviously the best solution for performance reasons. It provides the same traffic containment, segmentation, prioritization, and full duplex connections as switched 10Mbps, but at 100Mbps speeds. Unfortunately, during our original design process, switched 10/100 prices were still quite expensive (over $200 per port).

One point that should be made about switched 10/100 networks is that the uplink leaving each switch is usually a 100Mbps link. Certainly some vendors support GE uplinks, but few can afford the upstream switches needed to aggregate that much GE fiber. The 100Mbps uplink therefore effectively becomes a resource shared by all of the users on that switch. If one user on the switch starts a large data transfer across the uplink that consumes 30Mbps of bandwidth, then that 30Mbps is no longer available to other users. However, most switches allow for some kind of prioritization. It may be as simple as a port priority that gives one user (port) higher priority over another, or it may prioritize traffic on an IP address or protocol basis. Either way, the switch gives network managers greater control over how the network reacts to high bandwidth data transfers.

Most vendors also offer a way to "bond" together two or more fast ethernet links into one higher speed pipe. This alleviates some of the congestion on an uplink; for example, two 100BaseFX fibers can be bonded into a single 200Mbps link.
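
The practical question for each wiring closet is how heavily its uplink is oversubscribed. The sketch below computes that worst-case ratio for a hypothetical 24-port 10/100 edge switch with a single uplink and with two bonded 100BaseFX links; the port count is an assumption chosen for illustration.

    # Uplink oversubscription for a hypothetical 24-port 10/100 edge switch.
    ports = 24
    port_speed_mbps = 100

    def oversubscription(uplink_mbps):
        """Ratio of total possible downstream demand to uplink capacity."""
        return (ports * port_speed_mbps) / uplink_mbps

    print(f"Single 100BaseFX uplink:  {oversubscription(100):.0f}:1")
    print(f"Two bonded 100BaseFX links: {oversubscription(200):.0f}:1")
    # In practice users rarely transmit at full rate simultaneously,
    # so these worst-case ratios overstate typical congestion.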

DESKTOP WIRING CHOICES

Unfortunately, USF had one more concern beyond which technology, and which speed of that technology, to choose. Our campus desktop wiring was Cat 4. It was installed during that brief moment in history when Cat 4 actually existed and everyone was scrambling to use all they could get their hands on. This cabling has served USF well for 10Mbit connections for almost 8 years now, but it will not work for 100Mbps at all. In order to implement any kind of 100Mbps technology (besides the nearly extinct 100BaseT4 and 100VG-AnyLAN standards), we had to upgrade our wiring first. The question was whether to go with inexpensive copper cabling or to make the investment in fiber.

There was quite a bit of talk in trade journals and magazines (particularly Lightwave and other optical publications) about fiber to the desktop (FTTD). Obviously, fiber would be an excellent solution if you had the money to purchase the equipment needed to drive it. Unfortunately, fiber interface costs are still quite high, both for NICs and for switch ports. In addition, although the fiber cabling itself is fairly comparable in cost to Cat 5, the connectorization costs are still too high ($9.00 per strand per end for Siecor Unicam connectors, not counting labor).

As a current example of FTTD, our newly constructed education building on the USF Tampa campus boasts an impressive fiber installation. During the planning phases of this building, the technical staff managed to convince planners to install fiber to the desktop. This is impressive on its own, but in addition to the standard pair of multimode fibers you would expect when talking about FTTD, they also installed a pair of singlemode fibers. This building is certainly ready for the future, but right now the fiber is sitting mostly dormant due to interface costs. However, since the average building will see 30-40 years of service before its next renovation, the installed fiber plant will prove to be an excellent design choice long before anyone begins to speak of renovation money.

Currently in the industry, a new fiber connector standard is emerging that promises easier termination, higher density connections on network equipment, and, most importantly, lower costs. These new connectors will certainly be a first step toward the realization of FTTD, especially since they reduce duplex fiber connector costs to near $3.00. Unfortunately, USF cannot wait for these connectors and the resulting equipment to become cost effective and widely available. Although we are installing new copper cabling into buildings that will probably not be renovated before some need for FTTD arises, we cannot justify the expense at this point.
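
To put the connector pricing above in per-drop terms, the sketch below compares termination hardware costs for a duplex fiber run under the old and the emerging connector pricing, against an assumed Cat 5 termination cost. The Cat 5 figure and the two-ends-per-strand layout are illustrative assumptions, not quotes.

    # Per-drop connectorization cost comparison (labor excluded; illustrative).
    UNICAM_PER_STRAND_END = 9.00   # Siecor Unicam, per strand per end
    NEW_DUPLEX_CONNECTOR = 3.00    # emerging duplex connector, per end
    CAT5_TERMINATION = 2.00        # assumed jack + plug hardware per copper drop

    strands_per_drop = 2           # duplex fiber: transmit + receive
    ends_per_strand = 2            # one termination in the closet, one at the wall

    unicam_drop = UNICAM_PER_STRAND_END * strands_per_drop * ends_per_strand
    new_drop = NEW_DUPLEX_CONNECTOR * ends_per_strand

    print(f"Duplex fiber drop, Unicam connectors:     ${unicam_drop:.2f}")
    print(f"Duplex fiber drop, new duplex connectors: ${new_drop:.2f}")
    print(f"Copper drop, assumed Cat 5 hardware:      ${CAT5_TERMINATION:.2f}")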

Now, with FTTD options ruled out, our task was to decide what category of copper cabling to install. Again, at the time the proposal was being written, Level 6 and Level 7 wire was far from common, much less standardized, but "enhanced Cat 5" was already available. It turned out that enhanced Cat 5 pricing was so close to standard Cat 5 that it was reasonable to install the enhanced cable and hope to hold off FTTD as long as possible. We know now that the gigabit ethernet committees are working hard to create a copper standard that will run over ordinary Cat 5. However, this is a difficult task, and anything we can do now to make the cabling we are installing capable of carrying GE down the road will save us considerable time and money.

In an effort to make sure the enhanced cable we are using is correctly installed and functional above Category 5 levels, we use cable certification equipment (Microtest Pentascanner 350) to verify and document each connection. Although we have not had a cable fail yet (other than easily correctable punch-down wire-map mistakes), if it were to ever happen, we would locate the problem and if necessary re-pull the cable.

VENDOR SELECTIONS

After careful consideration of several vendors, USF narrowed the selection down to two: Cisco and 3Com. The other vendors were ruled out for various reasons. Some were small companies that had good products with plenty of features, but we could not be confident that they would still be around in a few years. Other companies offered proprietary solutions that did not fit well into our plans for a standards based network.

Our first vendor selection was for backbone equipment. We needed equipment that could aggregate large amounts of fast ethernet fiber connections at the campus core while still providing legacy ethernet fiber connections for those buildings that did not particularly need or could not currently handle 100Mbps connectivity. We also needed equipment to go into the primary wiring centers of our larger buildings for aggregation of the fiber coming from the building’s smaller wiring closets.

For our campus gigabit core, we chose Cisco 5505's with their 9-port gigabit ethernet cards. For the five primary campus wiring centers we chose Cisco 5500's. These chassis house multiple 10BaseFL and 100BaseFX blades as well as two gigabit ethernet ports providing a load-shared and redundant connection to the GE backbone switches. At the time, 3Com's only offerings for aggregation of this much fiber were the Corebuilder 5000 and the newly released 3500. The 5000 had been around for many years and was already reaching its limit for large amounts of fast ethernet. The 3500 was a fine switch, but it did not support more than 24 ports of fiber, which was insufficient for our needs in the campus core.

Our next vendor selection was for switches in the larger building wiring centers. Again we needed the ability to aggregate moderate amounts of fast ethernet fiber and possibly some legacy 10Mbit connections. Choosing between Cisco and 3Com for this application was a little more difficult. Both vendors offered equipment that would accomplish the task. Cisco offered their 5505 chassis, a smaller version of the 5500 we were already using in the core. 3Com again offered the 3500 as a possible solution. Eventually, we chose to use Cisco equipment in these buildings as well. The decision was mainly driven by the need for spare equipment: since the blades that slide into the 5505 chassis are the same as those in the 5500, we would not have to stock two sets of spares in the event of an equipment failure.

The choice to use 5500's and 5505's was also due to their VLAN support. 3Com planned to offer VLAN support in their 3500 line of switches, but at the time there was no way for the two vendors to interoperate on VLAN trunking. The 802.1Q standard was still being designed, and from everything we heard, it would not be available for several more months. We were right about that one: it took almost another full year before the standard was set and vendors were able to interoperate their VLAN trunking.

Since many of our buildings house more than one department, and thus more than one network, it was convenient to use 5505's with their support for multiple port-based VLANs. Rather than purchase separate switches for each network in a building, we can now configure each port on the switch to serve a different network. This was a major cost saving feature for the project.

VLAN support has afforded us many other benefits in the campus backbone. None of these features are proprietary to Cisco, but they are interesting uses for VLAN capable equipment. Since the GE links that make up the core of our network are actually VLAN trunks, it is possible to define logical networks that span physically different areas of the campus.

For example, we have a building on our campus known as the "Special Events Center". As you can imagine, many different events are held there. Often different colleges use this building as a place to hold orientation and registration for new students. When events such as these are scheduled, we provide network connectivity for the advisors. In the past we had to create a special subnet for the computers used in these events. Now, we simply change the VLAN on the switch port that serves this building and set it to whatever VLAN is used by the particular college hosting the event. Then the advisors can walk in, plug in, and access the network. Their network resources are available just as if they were sitting in their own offices.
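
Conceptually, what happens for an event like this is nothing more than updating a port-to-VLAN mapping. The sketch below models that idea in a few lines; it is not the switch's actual command interface, and the port numbers and VLAN names are hypothetical.

    # Conceptual model of port-based VLAN reassignment (not the switch's real
    # command interface; port labels and VLAN names are hypothetical).

    port_to_vlan = {
        "5500-A 3/1": "engineering",      # an office building uplink
        "5500-A 3/2": "special-events",   # Special Events Center uplink
    }

    def host_event(port, college_vlan):
        """Move the Special Events Center port onto the hosting college's VLAN."""
        previous = port_to_vlan[port]
        port_to_vlan[port] = college_vlan
        print(f"{port}: {previous} -> {college_vlan}")

    # Before orientation week, drop the building onto the hosting college's network:
    host_event("5500-A 3/2", "college-of-business")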

Another excellent use of VLANs is the ability to transfer routing control to different routers around the campus. At one point a few months ago, one of the routers on the Health Sciences side of our campus failed. To fix the problem, I didn't even have to leave my home. I got the page indicating the failure, logged into the failed router, realized that it was not going to be repaired remotely, and transferred routing control for the VLANs on that 5500 over to another router across the campus. All of the traffic for the Health Sciences side of campus was automatically trunked over to the appropriate router and service was restored. I simply called in the failed module, had a replacement the next day, and once I was happy with the stability of the new module, returned routing control to the primary chassis.

Our final vendor decision was for desktop connectivity. For USF’s original proposal, we allocated $125 per port for some form of interface to the user. During the first phase of this project, USF chose to use shared 10/100Mbit hubs for connectivity in open-use labs and a few select buildings around the campus. For the second phase, 10/100 switch prices had dropped to below $100 per port making them a feasible option. Selection of equipment vendors for each phase required different sets of criteria.

Shared media hubs have no need for VLAN capability, they cannot perform multicast containment, and they do not do traffic prioritization. They should, however, support some form of RMON, have a usable telnet interface, and preferably support some kind of eavesdrop (network sniffing) protection, since for the first phase they were to be deployed in open-access computer labs. Each stack of hubs in this project was to be fed by a switched fast ethernet fiber port from the switch located in the building's primary wiring center, or from the campus core if the building was too small for its own local switch. Since we occasionally need to perform network analysis to locate troubles in the network, we could conceivably have needed RMON probes on every one of these segments, which was not a cost effective solution. Consequently, we looked for hubs that supported at least a few key groups of RMON. Our next criterion, a good telnet interface, is something we look for in any network equipment. Many vendors (particularly the larger ones) say that you should simply use their enterprise management software and forget about simple interfaces like telnet. Unfortunately, enterprise management packages are often very bulky and usually will not run on a laptop that we would carry out into the field.

Other vendors are creating Web interfaces for their hubs. These web interfaces are usually too slow and too limited. The hubs are not intended to be speedy web servers, especially when they are under heavy network load. It is at these times that a simple, low load, low bandwidth telnet interface can be a considerable asset.

Security is a major concern for shared media solutions in open-access environments. Most of our labs run software to secure the machines and prevent users from installing network sniffing software, but unfortunately not all of them have this level of security. In order to prevent someone from capturing data in a shared media environment, some vendors have begun to implement layer 2 security measures sometimes called eavesdrop protection. The hub learns the attached machine's MAC address on each port, and if a packet being repeated by the hub is not destined for the machine on that port, the packet is scrambled enough that it is thrown away by the ethernet card and assumed to be a "bad" packet. This is a debatable security measure for a couple of reasons. First, some ethernet cards must interrupt the CPU every time they receive a packet, errored or not, which places a higher load on the CPU. Second, depending on the algorithm used to scramble the packet, it may remain intact enough to be readable by sniffing software capable of capturing errored packets. For USF, it was enough that we were at least making an effort to keep the network more secure for our users and harder to sniff, so this hub security criterion was quite important.
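
The per-port decision such a hub makes is simple to express in code. The sketch below models it at a conceptual level only; the learned-address table and the scrambling step are simplified stand-ins, since, as noted above, real implementations vary by vendor.

    # Conceptual model of per-port eavesdrop protection on a shared-media hub.
    # The learned-address table and the scrambling step are simplified stand-ins.

    learned_macs = {}   # port number -> MAC address of the station seen on that port

    def learn(port, src_mac):
        """Remember which station is attached to each port."""
        learned_macs[port] = src_mac

    def repeat_frame(dst_mac, frame, ports):
        """Repeat a frame out every port, scrambling it where it is not wanted."""
        copies = {}
        for port in ports:
            if learned_macs.get(port) == dst_mac or dst_mac == "ff:ff:ff:ff:ff:ff":
                copies[port] = frame                           # intended recipient or broadcast
            else:
                copies[port] = bytes(b ^ 0xFF for b in frame)  # stand-in for vendor scrambling
        return copies

    learn(1, "00:10:4b:aa:bb:01")
    learn(2, "00:10:4b:aa:bb:02")
    frames = repeat_frame("00:10:4b:aa:bb:01", b"payload", ports=[1, 2])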

The vendor chosen for the first phase of the project was 3Com, with their newly released line of 10/100 fast ethernet hubs. These hubs provided 7 groups of RMON on the 100Mbit side and all 9 groups on the 10Mbit side, they supported a telnet interface (although it lacked quite a bit of functionality), and they had the beginnings of a web interface to supplement the telnet. They also supported the eavesdrop protection we required. They have performed quite well in all of the installations so far. For a 10/100 hub they are fairly inexpensive, and the warranty period is excellent (a 5 year limited warranty).

Once we began planning for phase two of this project, we realized that switched 10/100 ethernet was an option and our priorities changed a little. Now, VLAN support, multicast containment protocols, and traffic prioritization mechanisms were important. We still required at least some level of RMON support in the switches, since our ability to monitor entire groups of users was becoming limited in this highly switched environment.

For the second phase of our project we chose Cisco and their 2900 series of fast ethernet switches. 3Com offered a very strong line of 10/100 switches: their 3300 provided almost all of the same features that Cisco offered, and in some cases more. For instance, the 3300 supports more groups of RMON, is stackable via an interface on the back of the switch, and has a modular slot for several different media types. These features were almost enough to make the 3300 the better choice, but unfortunately its telnet and web interfaces were severely lacking in functionality. We found it difficult to get port statistics and to configure groups of ports with a common set of options. Of course, the answer to these problems would be to simply use 3Com's enterprise management software, but as discussed above, that is not a viable option for us. We had to weigh our needs for manageability against the desire for stackability and expandability. For our network, management was a high priority, and this eventually swayed our selection to Cisco.

INSTALLATION AND IMPLEMENTATION ISSUES

Overall cost is one of the primary concerns for any project on a university campus. Funding for large projects is hard to secure and we knew that if the bottom line for this project was too high, it would never even be considered. Therefore, in addition to doing the design and estimations internally, we chose to use student help for the actual cabling and equipment installation process. We already had student assistants working for us in the department and we have been really fortunate in the past to attract very bright individuals to these positions. The pay is not too high, but we do everything we can to present learning opportunities to further the students’ education in the field of computers and networking.

The decision to use student labor to implement the project has several advantages and a few disadvantages. The first advantage: student labor is inexpensive. Being students, they are paid much less than a professional cable installer, but as I mentioned before, we try to offer other benefits such as training whenever we can to offset this lower pay. For the students working on this project and any that work for me in Academic Computing, I teach a TCP/IP networking class. They learn about everything from the different kinds of network cabling to protocol implementations and routing. This is a valuable learning experience for the students and it is something they can use out in the real world when they finish their college education.

Unfortunately, student labor has disadvantages as well. Since most of the students begin working for the department without any previous experience, there is a learning period during which they have to be shown every step of the installation process before they can work independently. This means they require much more supervision than a professional cable installer. Also, since they are still taking classes, and I believe that their classwork should come first, it is often difficult to schedule work times that are convenient for the project; it is quite difficult to pull cable with just one person. Simply hiring more students could alleviate this problem, but once the project was finished, there would be too many students and not enough work to be done.

Despite these few drawbacks, my experience working with the students has been a good one. I have a good group and now that they are all trained, it is amazing how well they perform their jobs. Figure 3 shows a picture of an open-use lab install under construction.

Now that their training has given them a good feel for the scope of the project, I have begun to ask each student to design, plan, and supervise the implementation and installation of particular buildings in the project. This requires that they survey a new installation, determine needed materials, present their findings to me, and when the materials arrive, it is their responsibility to oversee the installation and make sure that it proceeds smoothly.

One of the most challenging difficulties in planning and implementing this project was the fact that some of the technologies were just appearing on the market and others were still in the design phases. We attended non-disclosure meetings with several vendors to obtain a better understanding of their equipment development plans and, hopefully, gain a clearer view of upcoming technologies. Although these meetings were invaluable in helping us plan our network upgrades, they also left us anticipating the delivery of equipment long before it ever left the vapor-ware stage. At every point in the design and deployment of our network, we were always waiting for equipment that was "just a couple more months out." It's almost like seeing all of the presents under the tree but not being able to pick them up and shake them until Christmas day. We just had to keep telling ourselves that patience is a virtue.

Since the total for our project came to slightly over $1.5 million, it was unlikely that it would be funded in one lump sum. As an alternative, we broke the project up into the two phases that have been discussed throughout this paper. This split the entire project into two essentially equally priced halves.

Phase one of the project installed the campus core, including the gigabit ethernet backbone and the 5500's and 5505's in the buildings. It also installed the necessary fiber infrastructure inside the buildings for the second phase of the project, re-wired all open-use computer labs, and installed 10/100Mbps shared media hubs to feed these labs. Since phase one was funded in its entirety, we were able to complete most of the planned upgrades over the course of one year. We completed over 40 multimode, multicore fiber runs, installed 500 Cat 5 drops, and provided 900 hub ports for open-use lab machines and a few select buildings.

Equipment availability and delivery was a small problem for the first phase of the project. As I mentioned, most of the planned upgrades were completed on schedule; however, the gigabit backbone was not installed until October of this year. This was not a real technical problem for the project in general, as the switched fast ethernet backbone we used in the interim was quite sufficient to handle the immediate bandwidth needs of the campus. However, we had expected delivery of the GE equipment in May of this year, and technical delays on the part of the vendor pushed delivery out until October. We were included in the beta process for the GE equipment, which allowed us to install and test parts of the system in a lab environment, but this still did not meet our expectations and timeline for the project.

There were no other availability problems with the remainder of phase one. We completed all wiring upgrades and switch/hub installations on time and under budget. The biggest problem with these tasks was the availability of open-use labs for renovation work. We had to be considerate of the students who were still working in these labs trying to complete coursework. For the most part, however, they were understanding and patient considering the amount of noise and mess we were making.

Phase two of the project was intended to install Cat 5 and switched 10/100 connectivity to all faculty desktops. This phase has been split again and will now be implemented over two years. We are currently in the first year of phase two. There are at least 7 buildings scheduled for network upgrades at this point. Most likely this number will grow, but we will not know for sure how much money remains until all of the renovations in these buildings have been completed.

Right now, we are composing the final bid specifications for equipment purchases. Once a vendor is selected to deliver the assorted cabling and installation materials, we will begin full-scale installations. We have begun a few small sub-projects so that some progress can be made before the bulk of the materials arrives.

CONCLUSIONS

This effectively covers the decisions USF made in the upgrade process for our network. We started with a list of applications that the network would need to carry and selected each technology needed to implement a complete design using a top-down approach. We believe that this design will scale well for several years to come, but as is typical with computer technologies, nothing lasts forever. However, we do expect that the desktop cabling and building fiber we are currently installing will carry us through at least one to two more generations of network advances.

ACKNOWLEDGEMENTS

I would like to thank several of my co-workers, Ted Netterfield, Lou Menendez, and Toivo Voll, and my wife, Kimberly Rogers, for providing editorial comments and suggestions for this paper. Their help is very much appreciated.


Figure 1: One of the five primary campus wiring centers (photograph).

Figure 2: Basic drawing of the network topology described above (a small portion of the entire campus network).

Figure 3: An open-use lab installation under construction (photograph).