
Customer complaining about a slow server

So one of my customers at work was complaining that their server was running very, very slowly.

 

They have a decent server, I knew that from memory.

 

So, I logged into our remote software to connect to it and noticed the following:

 

[Attached screenshot: long_boot_server.PNG]

 

So I told them it needed rebooting, and they were unhappy to say the least, but hey, 420 ;) 

 

#reliability

Don't forget to @me / quote me for a reply =]

 

 


4 minutes ago, Electronics Wizardy said:

Having a long uptime shouldn't make it slower. 

If the software is well optimized, it shouldn't, but if it isn't... then, well, things can break.

You see this? This is my signature. BTW, I'm Norwegian. 

Spoiler


CPU - Intel I7-5820K, Motherboard - ASUS X99-A, RAM - Crucial DDR4 Ballistix Sport 16GB, GPU - MSI Geforce GTX 970, Case - Cooler Master HAF XB evo, Storage - Intel SSD 330 Series 120GB - OS, WD Desktop Blue 500GB - storage 1, Seagate Barracuda 2TB - storage 2, PSU - Corsair RM850x (overkill i know), Display(s)- AOC 24" g2460Pg, Cooling - Cooler Master Hyper 212 Evo, 2 Noctua 120mm PWM, 1 Corsair 120mm AF RED LED, Keyboard - SpeedLink VIRTUIS Advanced, Mouse - razer deathadder chroma, Sound - Logitech Z313, SteelSeries Siberia V2 HyperX Edition, OS - Windows 10 (prefer windows 7)

 


15 minutes ago, Electronics Wizardy said:

Having a long uptime shouldn't make it slower. 

Windows updates can't be fully applied without a reboot.

10 minutes ago, paradigm249 said:

If the software is well optimized, it shouldn't, but if it isn't... then, well, things can break.

Yup, it's good, but not too good, because of Windows updates and such.

4 minutes ago, blu4 said:

That's a long uptime... Is that a Linux server?

Windows. That's the problem, and therefore the reason it's slow.


 

 


1 minute ago, blu4 said:

Oh yeah, Windows needs reboots once in a while. My uncle has a Linux server with 3 years of uptime running just fine :D

Wow, yeah. This is a reliable and beastly server; it runs like 4 VMs, but like any Windows machine, it needs updates :) 


 

 


That's quite reliable. I have a Windows Server 2012 R2 machine that's only achieved 18 days and counting. 

Main System: Phobos

AMD Ryzen 7 2700 (8C/16T), ASRock B450 Steel Legend, 16GB G.SKILL Aegis DDR4 3000MHz, AMD Radeon RX 570 4GB (XFX), 960GB Crucial M500, 2TB Seagate BarraCuda, Windows 10 Pro for Workstations/macOS Catalina

 

Secondary System: York

Intel Core i7-2600 (4C/8T), ASUS P8Z68-V/GEN3, 16GB GEIL Enhance Corsa DDR3 1600MHz, Zotac GeForce GTX 550 Ti 1GB, 240GB ADATA Ultimate SU650, Windows 10 Pro for Workstations

 

Older File Server: Yet to be named

Intel Pentium 4 HT (1C/2T), Intel D865GBF, 3GB DDR 400MHz, ATI Radeon HD 4650 1GB (HIS), 80GB WD Caviar, 320GB Hitachi Deskstar, Windows XP Pro SP3, Windows Server 2003 R2


16 minutes ago, Jamiec1130 said:

That's quite reliable. I have a Windows Server 2012 R2 machine that's only achieved 18 days and counting. 

Ah yeah, it's great, but they do need restarts every so often.


 

 


11 hours ago, JackHubbleday said:

Ah yeah, it's great, but they do need restarts every so often.

They shouldn't. I have seen several servers running Windows Server going 24/7 without issue for 2-3 years now. 

 

Usually turning it off and back on fixes 99.9% of all problems, but did you even look for the reason it was slow, like high CPU usage or high memory usage?

 

 •E5-2670 @2.7GHz • Intel DX79SI • EVGA 970 SSC• GSkill Sniper 8Gb ddr3 • Corsair Spec 02 • Corsair RM750 • HyperX 120Gb SSD • Hitachi 2Tb HDD •


55 minutes ago, SLAYR said:

They shouldn't. I have seen several servers running Windows Server going 24/7 without issue for 2-3 years now. 

They do though; this is an inherent issue of running a Wintel environment. Restarts need to be scheduled for a variety of reasons, the biggest being Windows updates that make changes to core operating subsystems. Memory leaks and similar issues can normally be resolved by restarting the service and, in the case of Exchange or SQL, by not allowing it to cache as much of the available RAM as it wants.
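
The SQL side of that is just a setting. As a rough sketch (not Windspeed36's exact setup — the instance name and the 8192 MB cap are arbitrary examples), you can stop SQL Server eating all the RAM with the SqlServer PowerShell module:

```powershell
# Sketch: cap SQL Server's buffer pool so it leaves RAM for the OS and other services.
# Assumes the SqlServer module is installed; 8192 MB is an arbitrary example value.
Invoke-Sqlcmd -ServerInstance "localhost" -Query @"
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 8192; RECONFIGURE;
"@
```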


33 minutes ago, Windspeed36 said:

They do though; this is an inherent issue of running a Wintel environment. Restarts need to be scheduled for a variety of reasons, the biggest being Windows updates that make changes to core operating subsystems. Memory leaks and similar issues can normally be resolved by restarting the service and, in the case of Exchange or SQL, by not allowing it to cache as much of the available RAM as it wants.

True, but this shouldn't be a big deal at all. Heck, your systems should be up to date everywhere; otherwise you're not really PCI compliant. This applies to Linux too. What you really need is a method to spin up a new fully patched server every month and then spin down the old one. This shouldn't be a Windows-only thing. Having a system with extremely high uptime is a bad thing, mainly due to the increased likelihood that the system won't come back up after a reboot, power outage, etc.

 

And if you're buying into cloud computing being beneficial in every case (it's not), you'd just spin up new instances dynamically.


1 hour ago, Blake said:

True, but this shouldn't be a big deal at all. Heck, your systems should be up to date everywhere; otherwise you're not really PCI compliant. This applies to Linux too. What you really need is a method to spin up a new fully patched server every month and then spin down the old one. This shouldn't be a Windows-only thing. Having a system with extremely high uptime is a bad thing, mainly due to the increased likelihood that the system won't come back up after a reboot, power outage, etc.

 

And if you're buying into cloud computing being beneficial in every case (it's not), you'd just spin up new instances dynamically.

Spinning up a new fully patched server (Windows in particular) sounds like an ungodly logistical nightmare. How do you propose to keep that server up to date with whatever service it's running? Reconfigure it every time? Back up the config of the old server and migrate to the new server every month?

 

Also, your last point (system won't turn on after reboot, etc) is highly mitigated simply by being in a virtualized environment.

For Sale: Meraki Bundle

 

iPhone Xr 128 GB Product Red - HP Spectre x360 13" (i5 - 8 GB RAM - 256 GB SSD) - HP ZBook 15v G5 15" (i7-8850H - 16 GB RAM - 512 GB SSD - NVIDIA Quadro P600)

 


7 minutes ago, dalekphalm said:

Spinning up a new fully patched server (Windows in particular) sounds like an ungodly logistical nightmare. How do you propose to keep that server up to date with whatever service it's running? Reconfigure it every time? Back up the config of the old server and migrate to the new server every month?

 

Also, your last point (system won't turn on after reboot, etc) is highly mitigated simply by being in a virtualized environment.

Spinning up a new Windows build is trivial, and having it patch itself is very easy. Having it call a configuration script (PowerShell DSC) is also fairly trivial.

 

Oh look, you now have a fully patched system. You can now suspend the old system(s) and, depending on requirements, keep or destroy them.

 

Yes, but there are a lot of other issues that develop from extreme uptimes, even when virtualised.

 

tl;dr: treat your servers like cattle, not pets.
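
To give a feel for what "call a configuration script" means here, a minimal DSC sketch — all names (the role, paths, and the file share) are hypothetical examples, not anyone's actual setup:

```powershell
# Hypothetical sketch of a PowerShell DSC configuration a freshly built
# server could apply after patching itself. Names are illustrative only.
Configuration AppServer {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node "localhost" {
        WindowsFeature WebServer {
            Name   = "Web-Server"   # IIS role
            Ensure = "Present"
        }
        File AppConfig {
            DestinationPath = "C:\App\app.config"
            SourcePath      = "\\fileserver\builds\app.config"  # hypothetical share
            Ensure          = "Present"
        }
    }
}

AppServer                                        # compile to a .mof
Start-DscConfiguration -Path .\AppServer -Wait -Verbose
```

Run that on each new build and every "pet" detail lives in the script, not on the box.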


1 hour ago, Blake said:

Spinning up a new Windows build is trivial, and having it patch itself is very easy. Having it call a configuration script (PowerShell DSC) is also fairly trivial.

 

Oh look, you now have a fully patched system. You can now suspend the old system(s) and, depending on requirements, keep or destroy them.

 

Yes, but there are a lot of other issues that develop from extreme uptimes, even when virtualised.

 

tl;dr: treat your servers like cattle, not pets.

That might be totally fine, if your server is running, say, DHCP or whatever - a Windows service. But if it's running a third party service? That still sounds like a logistical nightmare. I don't know a single Server Admin that does this method you describe. In fact, you're the first person who's ever mentioned such a method to me. I can't imagine that the slight savings in uptime is worth all that effort.

 

Extreme uptime in a Windows environment should be avoided, because if you're properly keeping your system up to date you'll never have extreme uptime: your system will be rebooted from time to time for scheduled Windows Updates.

 

On Linux? Well that's a different conversation.



2 hours ago, dalekphalm said:

That might be totally fine, if your server is running, say, DHCP or whatever - a Windows service. But if it's running a third party service? That still sounds like a logistical nightmare. I don't know a single Server Admin that does this method you describe. In fact, you're the first person who's ever mentioned such a method to me. I can't imagine that the slight savings in uptime is worth all that effort.

 

Extreme uptime in a Windows environment should be avoided, because if you're properly keeping your system up to date you'll never have extreme uptime: your system will be rebooted from time to time for scheduled Windows Updates.

 

On Linux? Well that's a different conversation.

I'll admit it is rare, and a lot of applications (read: devs not knowing how to dev) don't provide proper configuration options. But a lot do provide a simple MSI installer and use a config file; it's a simple matter of transferring the config file across after a silent install.
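
The whole silent-install-plus-config pattern is about three lines — the MSI path, share, and service name below are hypothetical examples:

```powershell
# Sketch: silent install of a hypothetical app, then carry the old config across.
Start-Process msiexec.exe -ArgumentList '/i C:\installers\app.msi /qn /norestart' -Wait
Copy-Item -Path '\\oldserver\c$\Program Files\App\app.config' `
          -Destination 'C:\Program Files\App\app.config' -Force
Restart-Service -Name 'AppService'   # hypothetical service name
```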

 

Also, I wouldn't do this with Windows services; most of them are clusterable from the get-go, so why bother? DHCP, for example: just go wild, install updates, and use PowerShell to update one node then the other.
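
DHCP on Server 2012+ even has failover built in, so patching one node at a time is painless. A hedged sketch (server names and scope are hypothetical):

```powershell
# Sketch: pair two DHCP servers in load-balanced failover with the DhcpServer module.
# Server names and the scope ID are hypothetical examples.
Add-DhcpServerv4Failover -ComputerName "dhcp01" -PartnerServer "dhcp02" `
    -Name "dhcp01-dhcp02" -ScopeId 10.0.1.0 -LoadBalancePercent 50
```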

 

Once you have done it once, you'd be surprised how easy it is. Soon you'll discover what bliss is when it's patching night and you go home at 5 instead of remoting in to do the patching.

 

I've got this process semi-automated at this point (with PowerCLI, PowerShell/DSC, and some shoddy batch scripts; if you use Hyper-V, just drop the PowerCLI, and if your hypervisor is Linux-based, that's a whole other game). It should only be a few months before I have the entire setup automated (assuming I can find the time to work on it).
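
The PowerCLI end of a rebuild like this is not much code either — a sketch only, since everyone's vCenter, template, and host names will differ (the ones below are made up):

```powershell
# Sketch: deploy a fresh VM from a patched template with VMware PowerCLI,
# then retire the old one. All names (vCenter, template, hosts, VMs) are hypothetical.
Connect-VIServer -Server vcenter.example.local

New-VM -Name "app01-new" -Template "win2012r2-patched" -VMHost "esxi01.example.local" |
    Start-VM

# ...wait for the guest to patch itself and apply its DSC configuration, then:
Shutdown-VMGuest -VM "app01" -Confirm:$false    # graceful retire of the old VM
```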


@Blake @dalekphalm

 

What Blake is talking about is the actual end goal of DevOps (ugh, I hate using that buzzword). Anyway, this is why toolsets like Puppet, Chef, etc. exist, and to an extent SCCM, though SCCM was developed before this mindset existed and serves a slightly different purpose, but it can be wrestled into line.

 

At work we are currently trying to move every service/application (key point) to an automated life-cycle, release, and management process using multiple different tools: Puppet, DSC, Git, Jenkins, Jira, SCCM, etc. (there are more I forget).

 

This is something large businesses have already been doing, and we really are playing catch-up; the same toolsets and processes are what drive service provisioning for the cloud providers.

 

Where it gets more complicated is not actually installing software on servers (that's easy, no matter whose software it is, if you're experienced at it), but integration into other services such as firewalls, application load balancers, and external/internal DNS. Having a server deployed as a web front end to an application isn't really that useful if it isn't automatically added to the ALB pool for users to hit, with firewall rules in place to allow it to actually function.
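
The DNS piece at least is scriptable out of the box (load balancers and firewalls all have their own vendor APIs). A sketch only — the zone, record, address, and DC name here are hypothetical:

```powershell
# Sketch: register the new front end in internal DNS with the DnsServer module.
# Zone, record name, address, and DNS server are hypothetical examples.
Add-DnsServerResourceRecordA -ZoneName "corp.example.local" `
    -Name "app01-new" -IPv4Address "10.0.1.42" -ComputerName "dc01"
```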

