Network core redo 10GbE

davidna2002

Rebuilding a core part of my network. The biggest thing is adding 10GbE to the servers and site links. Basically moving from the lab into production. Very exciting!

Attachments: IMG_0003.JPG, IMG_0004.JPG

"Cheapness is not a skill"


Nice!

The 3850s are nice switches (I've got one at home that was spare from our lab, lol). Hoping to get my hands on some 9300s to play with in our lab soon :) Even though my focus is more on data center stuff, I like to dabble in other areas when I get a chance.

What model sups are in the 6500? I assume SUP720-3B or 3BXL?



Moar pictures! (if possible of course)


18 minutes ago, mynameisjuan said:

Nice! We just moved our 3850s to our backup. My project is now setting up our 100Gb core and transport. It's def exciting.

Ooooo. If you don't mind my asking, what kind of boxes?


Just now, Lurick said:

Ooooo. If you don't mind my asking, what kind of boxes?

Ciena. I am not sure of the model yet (6000?) since I didn't order them. I'll be unboxing them today.


20 minutes ago, mynameisjuan said:

Nice! We just moved our 3850s to our backup. My project is now setting up our 100Gb core and transport. It's def exciting.

What's your actual average link utilization like? We've got 40Gb at the core, at distribution, and to the firewalls. Four 40Gb uplinks per rack from two switches and 10Gb to the servers. We don't push the 40Gb links much; the highest traffic generator is backups, and we use client-side dedup and/or changed block tracking lol.


Just now, mynameisjuan said:

Ciena. I am not sure of the model yet (6000?) since I didn't order them. I'll be unboxing them today.

Never heard of them.

100G is very fun to play with. Just (poorly) cabled up 8 Nexus 9504/9508s and something like 14 Nexus 93180s, all with 100G uplinks, in the lab for some VXLAN testing we'll be doing soon :)

Also, just got two Spirent MX3 2x100G linecards in our lab so I'm very excited to start pumping some heavy traffic with those :D


1 minute ago, Lurick said:

Never heard of them.

100G is very fun to play with. Just (poorly) cabled up 8 Nexus 9504/9508s and something like 14 Nexus 93180s, all with 100G uplinks, in the lab for some VXLAN testing we'll be doing soon :)

Also, just got two Spirent MX3 2x100G linecards in our lab so I'm very excited to start pumping some heavy traffic with those :D

Envy levels are over 9000!!


Just now, leadeater said:

What's your actual average link utilization like? We've got 40Gb at the core, at distribution, and to the firewalls. Four 40Gb uplinks per rack from two switches and 10Gb to the servers. We don't push the 40Gb links much; the highest traffic generator is backups, and we use client-side dedup and/or changed block tracking lol.

Before this upgrade our utilization was around 8Gb on average across about half the ports, so there was already plenty of headroom. The new equipment is to sell dedicated services to other clients or even ISPs. There's nothing I can think of that will come close to saturating that.


Just now, leadeater said:

Envy levels are over 9000!!

:P

We're also hopefully going to be getting a cool new L1 switch to go with it. I think the vendor is Calient (can't remember if Spirent partnered with or bought them), and it's got something like 320 ports for LC SM fiber. What's nice about it is that it can do anywhere from 1Gb up to 1Tb+ since it's all mirrors; it just has to be single-mode, though, which can get a tad expensive.


2 minutes ago, Lurick said:

:P

We're also hopefully going to be getting a cool new L1 switch to go with it. I think the vendor is Calient (can't remember if Spirent partnered with or bought them), and it's got something like 320 ports for LC SM fiber. What's nice about it is that it can do anywhere from 1Gb up to 1Tb+ since it's all mirrors; it just has to be single-mode, though, which can get a tad expensive.

Sometimes being in NZ sucks; we can be so limited in technology since we just don't require it. Only just set up my first servers that have 10/25GbE utilizing RDMA/RoCE, cursing Windows every damn step of the way. Seriously, I need to install the Hyper-V role to be able to use those features with teaming?! I don't want or need Hyper-V on these servers.

 

These are also our first servers with NVMe drives in them; the most amusing part is that they're the servers for the backup software we use.
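For anyone who hits the same wall: the teaming mode on Windows Server that keeps RDMA working is Switch Embedded Teaming (SET), and SET lives on the Hyper-V virtual switch, hence the role requirement. A rough sketch of the setup, assuming placeholder adapter names "NIC1"/"NIC2" and that DCB/PFC for RoCE is already configured switch-side:

```powershell
# Sketch only -- adapter and switch names below are placeholders.
# SET requires the Hyper-V role, even on a bare file/backup server:
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

# Create a SET-enabled vSwitch over both 10/25Gb NICs:
New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true

# Add a host vNIC and enable RDMA on it:
Add-VMNetworkAdapter -SwitchName "SETswitch" -Name "SMB1" -ManagementOS
Enable-NetAdapterRdma -Name "vEthernet (SMB1)"

# Verify RDMA is active on the vNIC:
Get-NetAdapterRdma
```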


2 minutes ago, leadeater said:

Sometimes being in NZ sucks; we can be so limited in technology since we just don't require it. Only just set up my first servers that have 10/25GbE utilizing RDMA/RoCE, cursing Windows every damn step of the way. Seriously, I need to install the Hyper-V role to be able to use those features with teaming?! I don't want or need Hyper-V on these servers.

 

These are also our first servers with NVMe drives in them; the most amusing part is that they're the servers for the backup software we use.

Yah, it's nice being in an area with a big lab and lots of customers who want to use it :D

I think, at last count, we have something like 3,000 racks in the lab just above me alone.

In total, our campus probably has something like 10K+ racks, easily.


3 minutes ago, Lurick said:

Yah, it's nice being in an area with a big lab and lots of customers who want to use it :D

I think, on last count, we have something like 3,000 racks in the lab just above me alone.

In total, our campus probably has something like 10K+ racks, easily.

More equipment right there than in all of NZ most likely lol, certainly capacity/performance-wise.

