
Top of rack switching, what are your thoughts?  What type of hardware do you utilize for this type of setup (brand, model)?  What advantages do you see in utilizing this model?  We like the fact that cabling can be contained to one rack, except for uplinks of course.

We have been using a stack of two 48-port switches for ToR switches, but they don't support hitless (HA) upgrades, so each update we apply to these switches causes some downtime.  Usually we try to fold this into a larger outage window so the impact is minimal throughout the year.  The idea is to have a LAG between the top and bottom switch, with each switch also having a LAG back to the core, so if one switch in the stack fails, everything continues to operate.
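A quick sanity check for that design is whether the surviving switch's uplink LAG can absorb the whole rack's traffic when its stack peer dies. A back-of-the-envelope sketch in Python; the port counts, uplink speeds, and utilization figure below are hypothetical, not from any specific gear in this thread:

    # Hypothetical: two 48-port 1G ToR switches, each with a 2 x 10G LAG
    # to the core, and servers dual-homed across the pair.
    UPLINKS_PER_SWITCH = 2      # members in each switch's LAG to the core
    UPLINK_GBPS = 10            # speed of each LAG member
    SERVER_PORTS = 48           # access ports per switch
    AVG_UTIL = 0.15             # assumed average load per 1G server port

    # If one stack member fails, dual-homed servers shift all traffic
    # to the survivor, which still has only its own LAG to the core.
    offered_gbps = 2 * SERVER_PORTS * 1 * AVG_UTIL
    surviving_gbps = UPLINKS_PER_SWITCH * UPLINK_GBPS

    print(f"offered load after failure : {offered_gbps:.1f} Gbps")
    print(f"surviving uplink capacity  : {surviving_gbps} Gbps")
    print("design survives:", offered_gbps <= surviving_gbps)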

I am also interested in hearing about middle-of-row or end-of-row switching utilizing bigger chassis-based switches.  I have heard from a few people who are implementing, or have implemented, that type of design.


Thanks

--

Jeremy L. Gibbs
Systems Administrator / Network Engineer


Comments

At this time we're installing Juniper EX-series switches - 2200/3200/4200/4500 - in each rack; the specific model is dependent on the number of needed ports, whether we need a single switch or a stacked pair, whether 1GE or 10GE uplinks are needed, whether 10GE to the servers is needed, etc.

Depending on the number of racks in a row - and, perhaps more important, who is allowed to touch the cables and how neat they are - an end-of-row or middle-of-row design may be somewhat less expensive; but they can also lead to a real rat's nest of cables, making maintenance a nightmare.  And, of course, a switch failure here impacts a lot more servers...

We started out with a switch per row several years ago, but we've seen the error of our ways and are only doing top-of-rack now.  In a lower density environment you might be able to get away with one or two ToR switches serving the immediately adjacent racks as well; but only if the cables go in/out through the top of each cabinet, not out the front or back.


Message from iam@st-andrews.ac.uk

This is a topic discussed regularly on NANOG, so I won't spend much time beyond stating: buffers, buffers, buffers.

We consolidated our Nexus gear into our comms room, so we don't run loads of empty ports around the place and don't have any airflow control issues. We dual-home servers using LACP across two FEXes into a pair of 5Ks, allowing software upgrades in a seamless manner (and a switch failure does not impact service).

The patching issue is a management problem; we only allow our network team to do patching, so it stays neat and facilitates a switch swap.

-- ian


Olin went from an end-of-row dual-switch rat's nest deployment (as Kurt put it so elegantly) to a ToR solution, and I am so much happier for it.

 

 

For our data center we deployed a set of 4 Juniper EX4500s (40-port 10GbE fiber) in a VC, and two other sets of five EX4200 ToR VCs. The two EX4200 Virtual Chassis are kept separate but carry the same VLANs, with LAGs back to the EX4500s, and every system is dual-homed to both VCs. When we upgrade one of the EX4200 VCs, all connections fail over to the other VC, and vice versa. So far I am very pleased with the deployment.

 

The EX4500s are LAG'd back to our core as well.

 

Mike

 

We have the data center wired from the top of each rack to a central part of the room with horizontal cabling.  There are two 24-port patch panels at the top of each rack.  At the center of the room we have a pair of Nexus 7Ks that we then patch to.  It keeps the inside of the server racks really clean, with no awkward cooling to worry about for top-of-rack switches.  It's fairly clean on the 7K side too, though it can get burdensome at times.  On the plus side, we have great switches to connect back to.

 

Kenneth V. Mattson III
Director - Network and Data
DoIT
Creighton University
402-280-2743
402-981-1140
 
A password is like a toothbrush:
Choose a good one, change it regularly and don't share it.

 

Message from dannyeaton@rice.edu

Our data center is a hybrid: we have multiple top-of-rack switches (one every 3 racks, of the 4948 variety), then 4 central chassis switches (6500 variety) with a VSS pair at the distribution layer.  We also have a centralized Nexus pair deployed (as both distribution and access), and some Juniper QFabric for 10G access connectivity.  We used the 4948s for iLOM, etc.  Our top of rack is really "ON top of rack," as it sits on separate mounting gear above the rack, not in the top of the rack.

 

When we upgraded our data center switching 14 months ago, we went with a pair of Nexus 5Ks and ToR FEX 2248s. We did install one FEX centrally, as we had a couple of cabinets that were pretty light on port requirements; a FEX in each would have been a waste. We are much happier with the ToR solution than the old patch-panel-and-end-of-row-switch solution we were using. Documentation is easier, and it takes us half the time to make a new connection. We ran some tests last winter to make sure the Nexus vPC failover and seamless upgrades were working, and it all seemed to work well.


For a fully loaded rack pushing lots of bursty traffic, what would a good port buffer size be?  Currently our switches have 4 MB of port buffer (2 MB per 24 ports).  I know our ToR switches struggle at times. 
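One way to put numbers on that question: a shared buffer can only ride out a burst for as long as arrivals exceed the drain rate. A rough Python sketch of that arithmetic, using the 2 MB-per-24-ports figure from above; the line speeds and the all-ports-bursting scenario are assumptions for illustration:

    # Burst absorption for a shared ToR buffer (2 MB per 24-port group,
    # per the post above). Assumed: 24 x 1G ports bursting at line rate
    # into a single 10G uplink.
    BUFFER_BYTES = 2 * 1024 * 1024
    INGRESS_GBPS = 24 * 1.0
    EGRESS_GBPS = 10.0

    # Bytes per second that pile up in the buffer during the burst.
    overload_Bps = (INGRESS_GBPS - EGRESS_GBPS) * 1e9 / 8

    burst_ms = BUFFER_BYTES / overload_Bps * 1000
    print(f"buffer absorbs roughly {burst_ms:.2f} ms of worst-case burst")
    # ~1.2 ms here; incast bursts longer than that get tail-dropped, which
    # is one reason bursty racks push people toward deeper-buffered gear.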


--

Jeremy L. Gibbs
Systems Administrator / Network Engineer
Utica College IITS



We went with the Juniper EX series just before they came out with QFabric, and we have been very pleased.  We use two EX4200s in each rack for HA.  Some devices we LAG, and for the VM clusters we split the physical connections between the switches.  Juniper uses what they call Virtual Chassis: the ToR members connected in a Virtual Chassis all share 64G connections (128G if you count communicating in both directions); they are essentially PCIe x8 cables.  We have 8 VCs in our data center, and VCs are limited to 10 members.

We LAG back to central EX8208s for the core, which handles layer 3 for the data center VLANs.  That is to say, the 10G links spread across some of the members in a chassis all LAG back to the core.  Juniper also has what they call Nonstop Software Upgrade (NSSU), which in essence upgrades one member of a VC at a time.  If everything is redundant, you are in good shape; then all you need to do is pick a time when traffic is lighter so the other LAG/redundant paths are not overwhelmed.
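For context on that 64G figure, here is a rough look at how the VC ring compares to the access bandwidth it could be asked to carry. The member count and port speeds below are assumptions for illustration, not this specific build:

    # Worst-case oversubscription of a Virtual Chassis ring.
    # Hypothetical: ten 48 x 1G members on a 64 Gbps ring (128 Gbps
    # counting both directions, as noted above).
    MEMBERS = 10
    ACCESS_GBPS_PER_MEMBER = 48 * 1
    VC_RING_GBPS = 64

    total_access = MEMBERS * ACCESS_GBPS_PER_MEMBER
    print(f"aggregate access bandwidth : {total_access} Gbps")
    print(f"VC ring bandwidth          : {VC_RING_GBPS} Gbps")
    print(f"worst-case oversubscription: {total_access / VC_RING_GBPS:.1f}:1")
    # In practice most traffic exits via a member-local LAG uplink or
    # stays on the local member, so the ring rarely sees the worst case.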

 

We use a completely dedicated EX4200 VC for storage.  It is not connected to the core EX8208s and is isolated; we home-run the storage links to that one VC.  I hooked up a monitor port, and for the smaller packets Wireshark was reporting 3-6 μs RTT.  That seemed pretty good to us.
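As a plausibility check on an RTT like that, serialization delay alone is easy to compute. A small sketch, assuming minimum-size frames and 1G links (both assumptions; the actual storage links may well be faster):

    # How much of a 3-6 us RTT is just putting bits on the wire?
    FRAME_BYTES = 64 + 20      # min Ethernet frame + preamble and gap
    LINK_BPS = 1e9             # assumed 1 Gbps links

    per_hop_us = FRAME_BYTES * 8 / LINK_BPS * 1e6
    rtt_wire_us = 4 * per_hop_us   # out and back through one switch
    print(f"serialization per hop : {per_hop_us:.2f} us")
    print(f"wire time for the RTT : {rtt_wire_us:.2f} us")
    # ~2.7 us of a 3-6 us RTT is pure serialization at 1G, leaving only
    # a few microseconds for switch forwarding and host turnaround.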

 

We also went with more generic, deeper cabinets and used a flat panel on the inside with cutouts for Velcro to provide wire management, a luxury afforded by the deeper cabinets.  It has been simple, straightforward, and easy to manage.

 

Hope that helps.

 

Will McCullen

Principal Analyst – IT Networks

Pima Community College

 

We are also using Juniper EX series switches, but VC'ing them.  So "top of rack" is more of a physical concept.  With the Virtual Chassis configuration, we can add redundant downlinks (out of the building).  With the VC (as with any "stacking" technology), your downlinks don't need to be in the same rack.

Our basic design in each rack is:

voice trunk panel(s) on top (we're still using analog/non-VoIP digital phones)
station wiring panel(s) in the middle
Juniper switch(es) toward the bottom

If we only have 1 rack, the fiber is above everything.  If we have more than 1 rack, the fiber is placed centrally (e.g., in rack 2 if there are 3 racks).

Since our station cabling is ubiquitous, you can patch upward for phone and downward for Ethernet.  Very little cabling goes between racks (pretty much just fiber and VC/stacking cables).  Yes, there is a little waste, but the closets where we use this scheme are very neat.

Just another note, you can never have too much cable management, whether vertical or horizontal.

-Brian



I'm curious whether most of you who do ToR switching use 24-port or 48-port switches. I've noticed in our data center that the 24-port switches lend themselves to much neater wiring. We have a mix of both in our racks, not by design in most cases.




--
Vlade Ristevski
Network Manager
IT Services
Ramapo College
(201) 684-6854

We went with 48-port switches for the ToR deployment.

Your needs will vary, of course, but we went with 48 for expandability over the long run. Yes, it's a little more cramped per switch, but the tradeoff is whether you have room in the rack for more switches. We did not have tons of space, so we went with higher density. YMMV depending on your situation; available rack space is pretty much the deciding factor in how dense your switch deployment can be.

 

 

Michael Horne

Network Engineer

Olin College of Engineering

1000 Olin Way, Milas Hall, Suite LL18

Needham, MA 02492

1-781-292-2438

 

 

 

40 (Nexus 2232) or 48. 24 simply isn't enough. Even the 32 copper ports on the 2232 are a bit of a stretch for us.

Mark

We have a pair of 10G uplinks going to each switch, so ports get eaten up quickly. The other advantage of density, of course, is that you can keep more of the traffic on the backplane of the switch, which is always a big win. With VMs we are finding that there is more and more localized traffic.
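To see why localized traffic is such a win for the uplinks, here is a rough model; the port counts, utilization, and locality fractions are all made-up inputs for illustration:

    # Effect of intra-rack traffic locality on ToR uplink load.
    # Hypothetical: 48 x 1G ports at 20% average load, 2 x 10G uplinks.
    ACCESS_GBPS = 48 * 1 * 0.20
    UPLINK_GBPS = 2 * 10

    for local in (0.0, 0.3, 0.6):
        # Server-to-server traffic on the same switch stays on the
        # backplane and never touches the uplinks.
        uplink_load = ACCESS_GBPS * (1 - local)
        print(f"{local:.0%} local -> {uplink_load:.2f} Gbps on uplinks "
              f"({uplink_load / UPLINK_GBPS:.0%} of capacity)")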

Pete Morrissey

 

Our data center is almost entirely virtualized, so we actually went "top of rack" with patch panels that go back to a rack housing a centrally located stack of Junipers.  Those panels are generally 48 port.  The Junipers are all 48 port.  This way, we are making better use of switch ports but not stringing cables everywhere.  

In our IDFs, we do a "ToR" model... well, actually "BoR". We've found it far more common to add switches than to add panels, so in the IDFs we leave our growth space at the bottom of the racks.

We have also standardized on Tripp Lite SRCABLEDUCT2UHD horizontal cable managers. They are inexpensive, easy to work with, and generally nice looking.

-Brian

Many years ago we had an end-of-room model: patch panels in each cabinet and lots of horizontal cabling back to a central switch (6500). Currently we are top-of-every-other-rack, using Cisco 2248s (FEX). This has worked out fairly well; each 2248 has fiber links back to two centralized Nexus 5Ks. Similar to someone else's experience, a FEX at the top of every cabinet was going to be a waste at times. I'm now thinking of moving to middle-of-row for a new bunch of cabinets, the idea being that we will get better utilization of our FEX ports: with middle-of-row, we can wait until a FEX becomes full before adding a new one. We have nice cable tray that runs above all the cabinets, but as others have pointed out, this cabling can be a pain. That's the trade-off in my mind: with ToR you get easy cabling and possibly a lot of unused ports; with MoR you have more cabling issues and fewer unused ports.
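That trade-off is easy to put numbers on. A small sketch comparing switch counts and stranded ports under ToR versus a pooled MoR model; the rack count and per-rack port demands are invented inputs:

    import math

    # Hypothetical port demand for a row of 8 cabinets, 48-port switches.
    demand = [12, 30, 8, 44, 20, 16, 36, 10]
    PORTS = 48

    # ToR: one switch per rack regardless of how full it gets.
    tor_switches = len(demand)
    tor_stranded = tor_switches * PORTS - sum(demand)

    # MoR: switches are pooled, so add one only when the row needs it.
    mor_switches = math.ceil(sum(demand) / PORTS)
    mor_stranded = mor_switches * PORTS - sum(demand)

    print(f"ToR: {tor_switches} switches, {tor_stranded} stranded ports")
    print(f"MoR: {mor_switches} switches, {mor_stranded} stranded ports")
    # ToR: 8 switches, 208 stranded; MoR: 4 switches, 16 stranded -- but
    # MoR pays for it in cross-rack cabling, exactly as the thread says.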

 

--------------------

Kent Eitzmann
University of Nebraska–Lincoln

 


Kent,

 

If you ever want to swing by our Data Center for some show and tell, you are more than welcome.

 

Kenneth V. Mattson III
Director - Network and Data
DoIT
Creighton University
402-280-2743
402-981-1140
 
A password is like a toothbrush:
Choose a good one, change it regularly and don't share it.

 
