promethh | Posts: 211 | Thanked: 61 times | Joined on Aug 2007 @ Washington, DC
#328
Originally Posted by Texrat
You're absolutely right, promethh. Wouldn't it suck if the bottleneck was an old 10bt card on a server?

Don't laugh: at a former employer, our entire division network was choked by a single 10bt switch, surrounded by 100bt devices.
I believe you. We're in the midst of installing 10G switches at all our datacenters, plus IPv6 gear to do the IPv6/IPv4 translation between our outward-facing and intranet servers. I've been getting calls about bottlenecks for the past month.
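
I won't get into what actually handles the IPv6/IPv4 translation for us, but as a rough illustration of the kind of boundary I mean: IPv4 clients often show up at a dual-stack front end as IPv4-mapped IPv6 addresses. Here's a tiny Python sketch of unwrapping those; the addresses are made up and this isn't our actual setup.

Code:
import ipaddress

def embedded_ipv4(addr: str):
    """Return the IPv4 address embedded in an IPv4-mapped IPv6 address
    (::ffff:a.b.c.d), the address itself if it is already IPv4, or
    None for a plain IPv6 address with nothing embedded."""
    ip = ipaddress.ip_address(addr)
    if isinstance(ip, ipaddress.IPv6Address):
        return ip.ipv4_mapped  # None unless it's ::ffff:x.x.x.x
    return ip

print(embedded_ipv4("::ffff:203.0.113.7"))  # -> 203.0.113.7
print(embedded_ipv4("2001:db8::1"))         # -> None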

Some of my data resides in Reston, VA, and Washington, DC. Multiple datacenters in the US hit server clusters in DC for lookups and propagation. Eleven hours of hassle came from the network engineers and me arguing about what they had configured the switch for versus what I had configured the origin server cluster for.

10/100/1G/10G switching might make a world of difference, but if you're configuring switches at 10G/Auto and I'm forcing SunFire servers to 1G/FDX across Cat-6, you're going to see bottlenecks (and likely a duplex mismatch) as the link negotiation steps down. It might help if network engineers and system administrators spoke to each other more often?
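
For what it's worth, the quickest way to catch that kind of mismatch is to ask the host what it actually negotiated and compare it to what you expected. Here's a rough Python sketch of that check; it assumes a Linux box with sysfs rather than a SunFire, and the interface name and expected values are placeholders, so adjust for your own kit.

Code:
from pathlib import Path

# Hypothetical expectations: interface name -> (speed in Mb/s, duplex)
EXPECTED = {"eth0": ("1000", "full")}

def read_attr(iface, attr):
    """Read a sysfs attribute such as 'speed' or 'duplex' for a NIC."""
    return Path("/sys/class/net", iface, attr).read_text().strip()

for iface, (speed, duplex) in EXPECTED.items():
    try:
        actual = (read_attr(iface, "speed"), read_attr(iface, "duplex"))
    except OSError:
        print(f"{iface}: link down or attribute not readable")
        continue
    verdict = "OK" if actual == (speed, duplex) else f"MISMATCH, got {actual[0]}/{actual[1]}"
    print(f"{iface}: expected {speed}/{duplex} -> {verdict}")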

In the end, configuring both the switches and my servers for 1G/FDX lets us sync data across multiple datacenters with ease: no collisions, no errors. It would have saved me 11 hours if the neteng had told me what he was doing, rather than "just doing it".