
Refresh cycles: any point in upgrading to next-gen servers anymore?

Erkel

What are people's thoughts on refresh cycles / e-waste time frames these days?

It used to be about 3 years back in the day. Then that got stretched out to 5 years, about 5-7 years ago.

Now I am looking at about 8 years.

 

I am trying to justify a refresh of a database server that is 4 years old, but looking at the performance numbers I am only seeing maybe a 20% performance increase in my application. The queries are single-threaded, so throwing more than the 16 cores I already have at the problem isn't going to fix it.

 

Come on, WTF is going on? 4 years used to be an eternity in the IT hardware market; now all I get is ~20%.

 

Is there much point anymore? My current workstation build uses a nearly 8-year-old E5620 quad-core Xeon in an old server tower. Throw an SSD, 16-24GB of RAM and a used Quadro card at it, and ~$200 later you have a machine that still stands up decently against newer stuff for non-power-user applications.

 

The advent of M.2 and 3D shit point isn't going to help in the enterprise market, as they only help with initial load times, which are very infrequent and so don't really factor into the equation.

The only thing that matters is CPU and RAM. All that 4 years has given me is 200MHz on the CPU and +800MHz on the RAM (E5-2667 v2 at 4.0GHz turbo vs Gold 6146 at 4.2GHz turbo, and DDR3-1866 vs DDR4-2666).
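A quick back-of-envelope check bears that out (the per-core IPC uplift here is an assumed ~15%, not a measured figure):

```python
# Back-of-envelope single-thread estimate, E5-2667 v2 -> Gold 6146.
# Assumes throughput ~ turbo clock x IPC; the IPC uplift figure is a
# rough assumption for illustration, not a benchmark result.
old_clock_ghz = 4.0        # E5-2667 v2 max turbo
new_clock_ghz = 4.2        # Gold 6146 max turbo
assumed_ipc_uplift = 1.15  # hypothetical ~15% IPC gain across the generations

speedup = (new_clock_ghz / old_clock_ghz) * assumed_ipc_uplift
print(f"Estimated single-thread speedup: {speedup:.2f}x (~{(speedup - 1) * 100:.0f}%)")
# -> roughly 1.21x, i.e. about the ~20% quoted above
```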

 

It used to be easy: if you wanted a good bump in speed you went out and bought something new and shiny. Now all you largely get is something more overpriced than ever, with only a marginal increase in speed.

 

What is the solution these days? Pray that AMD is going to scare Intel into giving a shit about performance again? But even Intel seem to have conceded defeat and are only doing incremental improvements. The only thing Intel is going to do performance-wise in the short term is make 3D shit point mainstream as an SSD substitute, and that doesn't help, as it only applies to people in the enterprise market who are too poor to afford the proper amount of DRAM they should be using for their application.

 

 


7 minutes ago, Erkel said:

What is the solution these days? Pray that AMD is going to scare Intel into giving a shit about performance again?

The battle has shifted from clock speeds to cores, so it's pretty much pointless to pray for that to happen if your workload relies on per-core performance or scales poorly with the number of cores. The ceiling on how high one can crank clock speeds is approaching, so it's unlikely we'll ever see jumps as big as we did back in the day -- the way forward is paved with multiple cores.
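To put rough numbers on it, here's a minimal Amdahl's law sketch; the 10% parallelisable fraction is hypothetical, purely to illustrate the point:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n)
# p = fraction of the work that parallelises, n = core count.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# A query that is only 10% parallelisable barely moves, no matter the core count:
for cores in (16, 32, 64):
    print(f"{cores} cores -> {amdahl_speedup(0.10, cores):.2f}x")
# 16 cores -> 1.10x, 32 cores -> 1.11x, 64 cores -> 1.11x
```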



My NAS/MySQL/Medusa/SickRage/Transmission/SABnzbd server is running an old i5-2300, which is probably overkill in itself, but frankly, if parts don't fail, it could go another 10 years. It's not like SATA will go out of style for large hard drives, after all. Maybe they'll introduce SATA4? Okay, whatevs; for media storage, assuming the drives still work on SATA2 and 3 controllers, no problem.


41 minutes ago, Erkel said:

What are people's thoughts on refresh cycles / e-waste time frames these days?

It used to be about 3 years back in the day. Then that got stretched out to 5 years, about 5-7 years ago.

Now I am looking at about 8 years.

From a hardware perspective, why not replace when you need to? If you hit a limit, you upgrade for that. 

 

Support is another perspective separate from the hardware.

 

41 minutes ago, Erkel said:

I am trying to justify a refresh of a database server that is 4 years old, but looking at the performance numbers I am only seeing maybe a 20% performance increase in my application. The queries are single-threaded, so throwing more than the 16 cores I already have at the problem isn't going to fix it.

Is there a change that could be made to improve it from the application perspective? Throwing hardware at it is only one option.

 

41 minutes ago, Erkel said:

The advent of M.2 and 3D shit point isn't going to help in the enterprise market, as they only help with initial load times, which are very infrequent and so don't really factor into the equation.

XPoint is a big deal as a new performance class of non-volatile storage. Your use case might not be IO-limited, but other cases may be, and there it'll really shine.

 

I think it is a good example of how improvements are coming. As a generalisation, the "old stuff" is pretty well optimised already, and it is getting more difficult to improve it by a significant amount. The shift is towards application-targeted improvements: there are more and more instructions for specific functions.

 

For my compute uses (FP64), anything before Sandy Bridge is obsolete. Sandy Bridge/Ivy Bridge gave a 2x IPC improvement. Skylake gave another 50% on top of that, and I'm waiting for the numbers for Skylake-X as software hasn't caught up yet. That could be up to another 2x improvement. In parallel, I have to keep an eye on where GPUs are going too. I like the numbers of the Titan Volta, although I can't justify it for my use case.
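Those per-core FP64 jumps largely track the vector extensions each generation added, and a quick sketch like this (Linux-only, it just parses /proc/cpuinfo) shows what a given box actually supports:

```python
# List which SIMD extensions the host CPU advertises (Linux only).
# Wider vector units (AVX -> AVX2/FMA -> AVX-512) are where most of the recent
# per-core FP64 throughput gains come from, hence the generational jumps above.
def cpu_flags(path: str = "/proc/cpuinfo") -> set:
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for ext in ("sse2", "avx", "avx2", "fma", "avx512f"):
    print(f"{ext:8s} {'yes' if ext in flags else 'no'}")
```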

 

41 minutes ago, Erkel said:

What is the solution these days? Pray that AMD is going to scare Intel into giving a shit about performance again? But even Intel seem to have conceded defeat and are only doing incremental improvements.

Intel still dumps on AMD if all you want is raw performance at any cost. AMD's selling point is that they're cheaper for a given spec; they're not pushing absolute performance either.

 

41 minutes ago, Erkel said:

The only thing Intel is going to do performance-wise in the short term is make 3D shit point mainstream as an SSD substitute, and that doesn't help, as it only applies to people in the enterprise market who are too poor to afford the proper amount of DRAM they should be using for their application.

I see a different perspective: Optane picks up where you simply can't stuff enough RAM into the machine for an application, even if you have all the money in the world.

 

31 minutes ago, WereCatf said:

The battle has shifted from clock speeds to cores, so it's pretty much pointless to pray for that to happen if your workload relies on per-core performance or scales poorly with the number of cores. The ceiling on how high one can crank clock speeds is approaching, so it's unlikely we'll ever see jumps as big as we did back in the day -- the way forward is paved with multiple cores.

I think the battle isn't quite about cores directly, but more about performance per watt. As a generalisation, you get better efficiency from more cores at lower clocks than from fewer, faster cores. Scaling with cores is another problem.



@Erkel I guess welcome to the brick wall of single-threaded performance. CPU progress hasn't been so bad if you need the steady core-count bumps each generation; if you don't, then there's next to nothing gained across the E5 v1 to E5 v4 generations.

 

The one exception to this is HPC workloads, because that is the one thing Intel has been working on and AVX performance has improved a great deal. You can tell the general business market doesn't interest Intel much anymore; we just don't buy enough equipment compared to the HPC operators.

 

6 hours ago, porina said:

From a hardware perspective, why not replace when you need to? If you hit a limit, you upgrade for that. 

 

Support is another perspective separate from the hardware.

Because application demands/requirements over the last 8-10 years haven't outpaced the performance that even E5 v1 hardware is able to deliver, and when you have a policy of not running anything business-critical on out-of-warranty hardware, you are upgrading for no technically justified reason. I don't disagree with ensuring that hardware is kept under warranty, but at least in the past there was some kind of technical gain to upgrading even if you didn't immediately need it.

 

So your options are: replace the hardware, extend the warranty, self-spare the hardware with parts/cold-standby servers, or just accept the risk and run as-is. Extending the warranty is only viable for so long before it becomes cheaper to have just bought the new server anyway.
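As a rough sketch of that break-even, with placeholder dollar figures rather than real quotes:

```python
# Hypothetical break-even: keep extending the warranty vs. buy a new server.
# Every dollar figure is a made-up placeholder for illustration only.
new_server_cost = 12_000             # hypothetical replacement price
warranty_extension_per_year = 2_500  # hypothetical post-warranty support cost

for year in range(1, 7):
    extensions_total = warranty_extension_per_year * year
    cheaper = "keep extending" if extensions_total < new_server_cost else "replace instead"
    print(f"year {year}: extensions total ${extensions_total:,} -> {cheaper}")
# With these placeholder numbers, extending stops making sense around year 5.
```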

 

Edit:

 

Here's a picture of 19 servers I decided to keep around for dev/lab work after they were decommissioned; this is just one year's worth, and not all of them were kept. Most of them are Gen8s with between 64GB and 192GB of RAM, all running either dual high-frequency E5s or E5-2690s. A bunch of ESXi hosts are about to come out of production soon, and those have 384GB of RAM, so I'll be keeping those around too.

 

[Image: ES7m1U.jpg]


7 minutes ago, leadeater said:

at least in the past there was some kind of technical gain to upgrading even if you didn't immediately need it.

I'm not sure of the magnitude, but I'd be surprised if there wasn't some reduction in power usage with each generation. Maybe not significant between immediately adjacent generations, but if you skip some it could add up? I recognise that unless energy costs are really high, you won't likely recoup the spend from power efficiency alone. Another trend I'm seeing is running what were multiple separate services on fewer physical systems via virtualisation. That isn't new by any means, but might there be gains there too?
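A minimal sketch of that payback calculation, using entirely made-up wattage and tariff numbers, shows why power savings alone rarely justify the spend:

```python
# Hypothetical payback-from-power-savings calculation. Every number below is a
# placeholder; substitute measured draw, tariff and quote for a real answer.
old_draw_w = 450       # assumed average draw of the old server (W)
new_draw_w = 300       # assumed average draw of a replacement (W)
price_per_kwh = 0.15   # assumed energy price ($/kWh)
server_cost = 12_000   # assumed replacement cost ($)

hours_per_year = 24 * 365
saving_per_year = (old_draw_w - new_draw_w) / 1000 * hours_per_year * price_per_kwh
print(f"Annual saving: ${saving_per_year:,.0f}")
print(f"Years to pay back on power alone: {server_cost / saving_per_year:.0f}")
# ~$197/year -> ~61 years, which is why efficiency alone rarely recoups the spend.
```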



14 minutes ago, porina said:

I'm not sure of the magnitude, but I'd be surprised if there wasn't some reduction in power usage with each generation. Maybe not significant between immediately adjacent generations, but if you skip some it could add up? I recognise that unless energy costs are really high, you won't likely recoup the spend from power efficiency alone. Another trend I'm seeing is running what were multiple separate services on fewer physical systems via virtualisation. That isn't new by any means, but might there be gains there too?

Sort of. We pay next to nothing for power, so the reduction in heat output and UPS load is a bigger gain than the direct reduction in power cost.

 

We're in APAC/Oceania, so we have been all-in on virtualization since VMware opened for business; APAC/Oceania is by far the most virtualized region in the world and has been forever. Part of this is because we are small countries with smaller businesses and networks, so we don't have the user density to fully utilize a server for any one, or even a few, tasks.


21 minutes ago, leadeater said:

@Erkel Edit:

 

Here's a picture of 19 servers I decided to keep around for dev/lab work after they were decommissioned; this is just one year's worth, and not all of them were kept. Most of them are Gen8s with between 64GB and 192GB of RAM, all running either dual high-frequency E5s or E5-2690s. A bunch of ESXi hosts are about to come out of production soon, and those have 384GB of RAM, so I'll be keeping those around too.

I'm not really familiar with HPE generations. Are those v1 CPUs? Even then, that's a lot of potential!

 

Also, when I was talking about energy costs, I was thinking of cooling costs in that, but I neglected to consider the UPS side.

 

BTW I don't work in IT, but my line of work often has me interfacing with the IT departments of other companies. I believe most organisations of a certain size have moved onto VMs by now, but I couldn't say when. I occasionally poke around the recycling pile at work, as it is a free-for-all, but the stuff there is generally too old to be of interest. We're talking pre-Sandy Bridge, which is where I have to draw the line.



5 hours ago, porina said:

I'm not really familiar with HPE generations. Are those v1 CPUs? Even then, that's a lot of potential!

Yeah, I think they are all v1s. Gen8 also had v2 toward the end of its life-cycle, but I don't think any of those servers have come out of prod yet. I haven't actually done a proper hardware audit of them.

 

5 hours ago, porina said:

Also, when I was talking about energy costs, I was thinking of cooling costs in that, but I neglected to consider the UPS side.

Another sort of problem with this is legacy. When virtualization and server efficiency weren't as widespread, much larger air-conditioning units and UPSs were required, and when you're putting in the really big stuff, 160kVA+, it's designed for a 10+ year service life with everything replaceable. Now we have the opposite issues: not enough heat generation to make the aircon units run at proper efficiency, and more than half the racks are empty but we can't make the room smaller.

 

Datacenter facilities aren't as reactive and adaptable as the equipment that goes into them; that's the kind of thing you don't really think about until you hit it.


At this stage, you should be looking at refresh cycles based on the need for warranty coverage.

 

If you need overlapping warranty on critical servers, that's the reason to upgrade. If you don't, you can for sure stretch the lifecycle of the server for probably up to 7-8 years.

 

Alternatively, you could look at Cloud/VPS/Data Centre hosting if you want to completely ditch dealing with hardware and warranties, etc.

 

We typically use a 5-year replacement cycle here at work, and that's mostly to ensure we don't go long without warranty coverage. We typically don't retire the old hardware, but rather move it to non-essential roles, etc.

 

Though we did "retire" (by retire, I mean, it's in my basement working hard) the old work server during our most recent server upgrade (went fully virtualized with VMWare ESXi), because the old server was quite old, and didn't have enough RAM, etc, to be very useful to us anymore.


 

