IP Apocalypse - RIPE confirms available IPv4 addresses will run out in November


22 minutes ago, jasonvp said:

But the sooner you dive in and immerse yourself, the sooner you'll get over it.

Dunno about that, we've been using IPv6 for over 5 years and it still bugs me lol. But that has more to do with actual issues with IPv6, like sites that work fine over IPv4 suddenly not working over IPv6 when they were fine the day before, and then having to figure out whose issue it is, ours or theirs. Then there are even dumber issues, like the security team forgetting to add firewall rules for the IPv6 address of a server. I know that shouldn't be a thing, but yeah... it happens often enough to be annoying.

20 hours ago, bcredeur97 said:

I kinda wish they had just added another octet or two to IPv4.

 

e.g. 0.0.0.0.0 or even 0.0.0.0.0.0

IPv6 is great, but I find the addresses are so much harder to remember because they aren't just numbers...

I always wondered why they didn't just use the existing system for IPv4 and expand it in a logical manner.

9 hours ago, leadeater said:

I have to work with networking all the time; knowing IP addresses comes up just as much as, and is often more important than, the DNS name. IPv6 addresses are just a right pain. We deal with that by using our IPv6 network address as the prefix and the IPv4 address as the suffix, so all our IPv6 addresses are identical to our IPv4 addresses, just with an IPv6 prefix in front. Mental life saver, honestly.

Interesting idea - I never considered that. Can you give me a made-up example of what an IPv6 address would look like under your scheme?

8 hours ago, jagdtigger said:

Label the machine? We have two printers where I work. One main and one reserve. The main one is labeled COK and the reserve is COKR........  Stop looking for excuses why you can't use it. I also have IPv6 on my LAN. Infrastructure (routers, APs, NASes, etc.) has fixed IPs, everything else is dynamic. For the fixed stuff I have bookmarks and DNS entries for everything connected to the router. That's about it.

You can call it excuses all you want - but for someone working in the industry dealing with networks, using IPv6 is simply more difficult than v4.

I can look at a v4 address and tell what subnet it's on. I can tell the gateway. I can tell a lot of details at a glance, without having to look anything up. From what I can tell from my v6 research (it's been a long time since college, and we barely even touched on v6), you can tell some of these details, but it's still a significantly more complicated structure.

 

And there are going to be situations where it's simply better to be able to remember a few key IP Addresses on the fly. With v6, that's going to become difficult.

 

They were obviously trying to solve the problem of "never running out of addresses again", but in my opinion, they went too far in this direction without considering usability as an equal factor.



29 minutes ago, dalekphalm said:

I always wondered why they didn't just use the existing system for IPv4 and expand it in a logical manner.

Interesting idea - I never considered that. Can you give me a made-up example of what an IPv6 address would look like under your scheme?

dead:beef:1234:5678:10:200:5:3 could be an example.  It's a valid IPv6 address on the dead:beef:1234:5678::/64 network.  See the IPv4 address there?
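If it helps, this is roughly how you'd generate one (a quick Python sketch; the helper name and prefix are made up, and the octets are carried over visually, so the group "200" is numerically 0x200, it just reads like the decimal octet):

import ipaddress

def embed_v4(prefix64: str, ipv4: str) -> str:
    # Spell each dotted-decimal octet out as one of the last four
    # 16-bit groups of the /64. This works because octet strings
    # (0-255) only contain the digits 0-9, all valid hex digits.
    addr = f"{prefix64}:{ipv4.replace('.', ':')}"
    return str(ipaddress.IPv6Address(addr))  # validates the result

print(embed_v4("dead:beef:1234:5678", "10.200.5.3"))
# -> dead:beef:1234:5678:10:200:5:3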

 

Quote

 

They were obviously trying to solve the problem of "never running out of addresses again", but in my opinion, they went too far in this direction without considering usability as an equal factor.

 

I don't think "usability" really needs to be a factor in that way.  They were likely thinking that things would be automated far more easily than what's actually happened.  As I wrote previously: folks have to get out of the whole "I need to remember these IP addresses" mindset.  Have.  To.  Seriously.  HAVE.  TO.

 

Or you're going to be automated out of a job because you got replaced by a poorly-written Python script.

 


9 hours ago, leadeater said:

I know immediately looking at any IPv4 address what subnet and VLAN it belongs to and which building it is coming from or which security zone in the datacenter.

IPv6 has a dedicated subnet group, so you should only have to memorize a four-digit code, which can follow a pattern similar to the actual layout. It should actually be easier, since you don't also have to keep track of arbitrary CIDR boundaries.
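For what it's worth, here's a minimal sketch of that layout with Python's ipaddress module, using the 2001:db8::/48 documentation prefix as a stand-in for a real allocation; the fourth group is the subnet code, and every subnet is a /64, so there's no CIDR math to track:

import ipaddress

site = ipaddress.ip_network("2001:db8:abcd::/48")  # one site allocation

# A /48 leaves a 16-bit subnet field before the fixed /64 boundary:
# 65,536 equal-sized subnets, each identified by a single hex group
# that can encode building/VLAN/zone by convention.
for i, net in enumerate(site.subnets(new_prefix=64)):
    print(net)
    if i == 2:
        break
# 2001:db8:abcd::/64
# 2001:db8:abcd:1::/64
# 2001:db8:abcd:2::/64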

10 hours ago, leadeater said:

ha you can't because you're not at your desk

We have a little saying we throw around for this:

"Whoever is stupid should have a notebook"    (notebook = the good old paper version)

It may sound insulting, but it's meant as advice. I for one follow it; it's better than forgetting something and getting into trouble because of it... :D

1 hour ago, jasonvp said:

dead:beef:1234:5678:10:200:5:3 could be an example.  It's a valid IPv6 address on the dead:beef:1234:5678::/64 network.  See the IPv4 address there?

I know, it's the way we do our IPv6 addresses as well; I actually mentioned that. But that doesn't mean the other public ones you end up looking at are that easy to remember. It's not that big of an issue, because you just have them copied and pasted as needed. Where you generally lose work efficiency is when you're going between different management interfaces on different devices and switching between the data items you're querying, so you often have to minimize and maximize the window with the address in it to re-copy it.

Nothing world-ending, just annoying.

6 hours ago, dalekphalm said:

Interesting idea - I never considered that. Can you give me a made-up example of what an IPv6 address would look like under your scheme?

IPv4 - 10.5.248.101

IPv6 - 1234:4321:999:55:10:5:248:101

 

 

IPv4 - 10.5.248.102

IPv6 - 1234:4321:999:55:10:5:248:102

 

 

IPv4 - 10.6.242.23

IPv6 - 1234:4321:999:66:10:6:242:23
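Going the other way is just as mechanical; a tiny sketch (hypothetical helper, plain Python) that pulls the IPv4 back out of the last four groups:

def embedded_v4(ipv6: str) -> str:
    # Purely textual: the last four groups were written as decimal
    # lookalikes, so join them back with dots. Assumes the address
    # is written out in full, not "::"-compressed.
    return ".".join(ipv6.split(":")[-4:])

assert embedded_v4("1234:4321:999:55:10:5:248:101") == "10.5.248.101"
assert embedded_v4("1234:4321:999:66:10:6:242:23") == "10.6.242.23"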


IPv6 would be a great move forward for more secure networks, but the issue is: do the ISPs want to spend the money on developing and deploying IPv6 to everyone? Some ISPs would rather see everyone fight over an IP just to get access to the internet.

21 hours ago, floofer said:

who got 69.69.69.69 

CenturyLink in Monroe, Louisiana.


On 10/29/2019 at 12:29 PM, jagdtigger said:

Label the machine? We have two printers where I work. One main and one reserve. The main one is labeled COK and the reserve is COKR........  Stop looking for excuses why you can't use it. I also have IPv6 on my LAN. Infrastructure (routers, APs, NASes, etc.) has fixed IPs, everything else is dynamic. For the fixed stuff I have bookmarks and DNS entries for everything connected to the router. That's about it.

If you work with networking you will encounter IP addresses all the time. It's not as simple as just going "use DNS and label it". When you, for example, need to troubleshoot a connection issue, you might need to chart the traffic flow. You can't do that with DNS names, because a single machine might have multiple addresses. Configuration files aren't done with DNS names either; they're done with IPs. Firewall rules are generally not done with host names either, because different DNSes might return different IPs for the same host (for example internal and external DNSes).

 

DNS solves almost all the end user stuff like "how do I add this printer to my computer", but it doesn't solve the headaches network engineers get.

 

 

On 10/29/2019 at 8:16 PM, jasonvp said:

Really, any company with a large global presence and lots of public-facing servers will want to avoid doing any sort of NAT for their high bandwidth services.  NATs suck, through and through.  They're a bitch to troubleshoot, and they're potentially a bandwidth bottleneck.

I don't think there is any issue with doing NAT for high bandwidth services. NAT only has a very small performance penalty, and almost all of that penalty is in the initial connection setup. Once the session is established, the translation is done in hardware.

Totally agree that it can cause issues for troubleshooting though.

 

 

 

 

On 10/29/2019 at 9:27 PM, dalekphalm said:

I always wondered why they didn't just use the existing system for IPv4 and expand it in a logical manner.

Adding another octet or two would mean that we would probably encounter the same issue again in maybe 100 years or so.

IPv6 was designed to be future proof and as a result they really cranked up the number of available addresses.

Also, IPv4 has a lot of things that need improving, so they wanted to redesign the entire packet anyway, and at that point why not do it "properly" all the way?

IPv6 addresses are 4 times as large.

192.168.81.61.132.231.241.184.192.128.177.122.188.118.6.1 would probably have been even harder to remember than the hexadecimal system IPv6 uses.
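To put rough numbers on that (plain Python, exact powers of two):

ipv4 = 2**32        # 4 octets:  ~4.3e9 addresses
ipv4_plus2 = 2**48  # 6 octets:  ~2.8e14 -- bigger, but the same growth curve
ipv6 = 2**128       # 16 octets: ~3.4e38

print(f"{ipv4:.1e}  {ipv4_plus2:.1e}  {ipv6:.1e}")
# The address *length* is 4x (32 -> 128 bits), but the address
# *space* is 2^96 times larger -- that's the future-proofing.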

Link to post
Share on other sites
12 minutes ago, LAwLz said:

I don't think there is any issue with doing NAT for high bandwidth services. NAT only has a very small performance penalty, and almost all of that penalty is in the initial connection setup.

Totally agree that it can cause issues for troubleshooting though.

When you have a large collection of things (e.g. public-facing servers) trying to egress through a small number of things (NATs), you have a bottleneck.  Simply by definition.  And before you say, "but routers are a small number of things!!!" ... large routers aren't limited to PCIe bandwidth like a server is.  They can push way more bits/second through them than any server.

 

So yes, there's a bottleneck.

 


8 minutes ago, jasonvp said:

When you have a large collection of things (e.g. public-facing servers) trying to egress through a small number of things (NATs), you have a bottleneck.  Simply by definition.  And before you say, "but routers are a small number of things!!!" ... large routers aren't limited to PCIe bandwidth like a server is.  They can push way more bits/second through them than any server.

 

So yes, there's a bottleneck.

I don't understand what you mean. Please elaborate.

What do you mean by "NATs being a small number of things"?

 

Also, what do you mean specifically when you say bottleneck? I've got a feeling we have different definitions. You said it was a bandwidth bottleneck, so I assume you mean that NAT has a negative impact on throughput (as in, how many MB of data you can send and receive each second), which it does, but it's not a big one. Like I said, the biggest impact is in the initial session setup. Once everything is loaded into the memory of the firewall (or whichever device is doing the NAT), it is done very quickly.

1 minute ago, LAwLz said:

which it does, but it's not a big one.

We're not talking about the same scales.  I design large-scale data center networks for a living, and that's what I was referring to in my earlier post.  Not little office or small company networks.


20 minutes ago, jasonvp said:

We're not talking about the same scales.  I design large-scale data center networks for a living, and that's what I was referring to in my earlier post.  Not little office or small company networks.

Yes... and? You didn't answer any of my questions.

 

What do you mean when you say "NATs is a small number of things"?

What do you mean when you say bottleneck?

 

 

Again, the performance impact of NAT is mostly in the initial session creation (when things are being initialized in memory). Once that is done, the NAT device will save the translation state in memory and do the translation in hardware, really quickly. It has next to no performance impact on throughput.
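A toy sketch of that split, just the bookkeeping (a real device keeps this table in TCAM/ASIC rather than a Python dict, and the names here are made up):

nat_table = {}              # (private ip, port) -> public source port
next_port = 10_000
PUBLIC_IP = "203.0.113.1"   # documentation address standing in for the NAT's outside IP

def translate(src_ip: str, src_port: int) -> tuple[str, int]:
    global next_port
    flow = (src_ip, src_port)
    if flow not in nat_table:           # slow path: first packet allocates state
        nat_table[flow] = next_port
        next_port += 1
    return PUBLIC_IP, nat_table[flow]   # fast path: established flows are a lookup

print(translate("10.0.0.5", 51515))  # session setup (the "expensive" part)
print(translate("10.0.0.5", 51515))  # every later packet: a table hit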

6 minutes ago, LAwLz said:

Yes... and? You didn't answer any of my questions.

 

What do you mean when you say "NATs are a small number of things"?

What do you mean when you say bottleneck?

I'm not sure what you're trying to accomplish here.  I stated my position quite clearly and attempted to clarify it a few times.  If you can't grok what I was saying, then we're talking past one another and there's nothing I can do to fix that.

 

Exercise for you: design a network that has 20 racks of 40 servers each, all with 1x10GigEth uplinks.  Now show me how you'd put a NAT in front of that collection of servers so that they can egress out of that data center.
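For scale, the raw arithmetic behind that exercise:

racks, per_rack, uplink_gbps = 20, 40, 10
egress = racks * per_rack * uplink_gbps
print(f"{egress} Gb/s = {egress / 1000} Tb/s of potential egress")
# -> 8000 Gb/s = 8.0 Tb/s; whatever does the NAT'ing sits in that path.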

 


13 minutes ago, jasonvp said:

I'm not sure what you're trying to accomplish here.

You said that you want to avoid doing NAT for high bandwidth servers. That it's a bitch to troubleshoot and potentially a bandwidth bottleneck.

I said that I agree that NAT makes things more complex to troubleshoot, but that performance shouldn't be a factor in why you shouldn't do NAT. NAT affects performance very little.

 

13 minutes ago, jasonvp said:

I stated my position quite clearly and attempted to clarify it a few times.

You've made your position clear, but you haven't justified it.

 

13 minutes ago, jasonvp said:

If you can't grok what I was saying, then we're talking past one another and there's nothing I can do to fix that.

Yes there is something you can do. You can start by answering my questions.

 

What do you mean when you say "NATs are a small number of things"? You should be able to implement NAT on all your edge devices.

What do you mean when you say bottleneck? I interpreted your post as saying NAT reduces throughput, but it shouldn't do that in any meaningful capacity.

 

13 minutes ago, jasonvp said:

Exercise for you: design a network that has 20 racks of 40 servers each, all with 1x10GigEth uplinks.  Now show me how you'd put a NAT in front of that collection of servers so that they can egress out of that data center.

Wait a minute... Do you know what NAT is? You refer to it as if it's a device or something. "Put a NAT in front of it".

NAT should be done at the border routers (or firewalls, or some other NAT-capable device). A data center with and without NAT will look exactly the same, physically.

1 hour ago, LAwLz said:

What do you mean by "NATs being a small number of things"?

The whole point of using NAT is to be the sole point of transit for a large number of machines, so they all go through the same endpoint and use the same address. Hence a bottleneck, since all those machines' traffic has to go through that poor NAT.

 

Not to mention the difficulty of routing incoming requests...


2 minutes ago, Kilrah said:

The whole point of using NAT is to be the sole point of transit for a large number of machines, so they all go through the same endpoint and use the same address.

1) "The whole point of using NAT" isn't to translate multiple addresses to one address. NAT is used for way more things than that.

2) Not in data centers. There it's not uncommon to do 1:1 NAT.

Even if you do many-to-one NAT (aka PAT, aka NAT overload), you're still not limited to just one device doing the NAT. You can have 10 routers all doing NAT overload if you want. All traffic going out of the data center has to pass through some routing-capable device anyway, and that device can do NAT.

 

So doing NAT will be no different than not doing NAT. You'll still have a group of devices passing traffic through a single device, unless you've got one router for each server.

23 minutes ago, LAwLz said:

Wait a minute... Do you know what NAT is? You refer to it as if it's a device or something. "Put a NAT in front of it".

NAT should be done at the border routers (or firewalls, or some other NAT-capable device). A data center with and without NAT will look exactly the same, physically.

And this is precisely why I said we're not talking about the same scale.  It's clear you're not familiar with large scale data centers.

 

Routers don't NAT at line rate.  They can't.  Therefore, network engineers don't configure NAT on large routers, because it throttles them.

Firewalls don't sit at the border of large-scale data centers because firewalls are ALSO a bottleneck.


2 minutes ago, LAwLz said:

1) "The whole point of using NAT" isn't to translate multiple addresses to one address. NAT is used for way more things than that.

2) Not in data centers. There it's not uncommon to do 1:1 NAT.

In the context of this discussion, which is reducing the number of public addresses, that is what you're after, not 1:1.


23 minutes ago, jasonvp said:

Routers don't NAT at line rate. 

What crappy routers do you use? Both Arista and Cisco routers can do NAT at line rate. I would be very surprised if, for example, Juniper or other vendors couldn't do it.

On Nexus switches you can allocate TCAM space to NAT using the "hardware access-list tcam region nat" command.

 

On page 61 of this document you can see that the Cisco ASR 1000 also uses TCAM for NAT.

If you go to page 195, you can also see that the performance of NAT matches CEF perfectly. Wanna know why? Because both are done at line rate, in hardware.

 

Don't you think that ISPs who implement carrier-grade NAT want a hardware-based implementation of it for performance reasons?

 

 

NAT doesn't impact performance in any meaningful way. Again, the initial session creation takes a tiny bit longer, but once the translation is loaded into TCAM it is done at full speed.

 

 

 

20 minutes ago, Kilrah said:

In the context of this discussion, which is reducing the number of public addresses, that is what you're after, not 1:1.

Oh, sorry. I was just scrolling through the thread and saw someone say NAT shouldn't be used because of throughput issues, and I went "what? No it doesn't".

Then with each additional response I got more and more confident that the person I was responding to didn't understand what NAT was, because of, for example, how he referred to it. Especially since a lot of generalized statements have been thrown around that only really apply to one type of NAT (PAT).

But anyway, I am pretty sure PAT can be done at line rate as well. For example, I mentioned Nexus earlier. It can do NAT overload, and Cisco specifically says that Nexus does not support software translation, and that all translations are done in hardware.

Quote

The Cisco Nexus device does not support the following:

  • Software translation. All translations are done in the hardware.

 


Wasn't it mentioned about a year ago that IPv4 is running out?
Oh wait, it was also mentioned a year before that.

 

 


2 minutes ago, LAwLz said:

What crappy routers do you use? Both Arista and Cisco routers can do NAT at line rate. I would be very surprised if, for example, Juniper or other vendors couldn't do it.

Again I say: it's clear you don't have any actual, hands-on experience with this topic, but you can Google like no one's business.  You came into a discussion half-assed and are trying so desperately to act like the smartest guy in the room.

 

You're not.  Stop trying.

 

I know exactly what Cisco's and Arista's (and Juniper's) documentation says about line-rate NATing.  I also know they don't actually achieve it in real-world use cases.  Especially when we start talking about 100GigE and 400GigE (or bundles of the aforementioned) throughput numbers.

 

And finally, you came in full of piss and vinegar talking about how NATs aren't a performance hit, and didn't even realize we were talking about a 4-to-6 NAT, not a 4-to-4.  Again with the "half-assed" approach.

 

After you've had a few decades of hands-on design and engineering experience, come back and we'll have this conversation again.  'K?

 


