OldBoIs urgently wanted - States cry for urgent help as Pre-Y2K systems struggle to handle epidemic

rcmaehl
4 hours ago, handymanshandle said:

Fun fact: the ATF does a whole lot of paper shit. Ever wondered why getting a suppressor takes ages? 

Welcome to the US government, where modernizing anything takes at least 10 years, if not longer.

Tip: it doesn't take as long in Texas.


20 hours ago, Sauron said:

Maybe they can figure out a way to change them for something newer every few years instead of waiting 40 years for it to break and be completely unprepared to deal with it...

The original story makes a lot of assumptions, and is written to create a false impression. Take the photo, for example, obviously taken in the 70s judging by the outfit. While I don't know the system they are using, I am sure the hardware itself has been regularly refreshed, probably every 5-10 years. There are still big players actively developing mainframes and their operating systems. I actually work in the mainframe arena for Fujitsu, so I get to see it first hand. We also work with IBM, who also develop mainframes. Most of the banking sector relies on these systems; the market is still huge.

 

COBOL as a language is old, but still superb at what it does. As has already been mentioned, it is amazingly scalable. How many programming languages work as well on a 1 RPF* system as on a 10,000 RPF system? I can only guess that what has happened here is that while the old hardware has been upgraded, a lot of the software has not. It is a testament to how well optimised these machines and their code are that they are still coping with the load all these years later. I would bet the main issue here is staffing rather than the hardware and software: a lack of knowledge of these systems, which is why they are asking for people to help.

 

As I and others have said, moving to something else is far from trivial. Here in the UK, Lloyds and TSB split a few years ago. That was "simply" splitting one banking system into two, still on the same tech, and it resulted in many months of problems and outages, cost many millions of pounds and affected almost all of their customers. Moving to a different platform is many times harder.
 

 

* RPF - Relative Performance Factor, a way of measuring speed. Think of CD drives or memory cards and how they measure speeds as 4× or 133× etc. to get an idea of how it works.


5 hours ago, mr moose said:

They already have considered it. The idea that they waited for it to break insinuates there was/is a viable alternative, and that, I am afraid, is a guess at best.

Pretty sure handling a database is possible with modern technology 🤔



17 minutes ago, Phill104 said:

The original story makes a lot of assumptions, and is written to create a false impression. … Most of the banking sector relies on these systems; the market is still huge.

It is the US gov; it is for sure old AF.


14 minutes ago, Phill104 said:

The original story makes a lot of assumptions, and is written to create a false impression. … Most of the banking sector relies on these systems; the market is still huge.

To be honest the hardware is a secondary concern here. The software has clearly barely been touched in the last few decades.

15 minutes ago, Phill104 said:

COBOL as a language is old, but still superb at what it does.

I don't care if they used assembly; it's not the language that matters. If you design a complex system to work in half a MB of RAM, it's not going to scale well even if you have better hardware.

18 minutes ago, Phill104 said:

It is a testament to how well optimised these machines and their code is that they are still coping all these years later with the load.

The problem didn't fundamentally change; after all, people are people and unemployment rates are generally fairly stable. The problem is that, while 40 years ago we didn't have the technology to face a crisis like this, now we do, and it hasn't been employed because it's easier and cheaper to just keep the system that has worked until now - even if you no longer have any idea how it works.

20 minutes ago, Phill104 said:

As I and others have said, moving to something else is far from trivial. … Moving to a different platform is many times harder.

Whatever, it's pennies for the US government. They're not there to turn a profit; the goal should be to have robust infrastructure. They don't seem to worry about a few million when they set the military budget.


36 minutes ago, Sauron said:

They don't seem to worry about a few million when they set the military budget.

Given how many millions, if not billions, are wasted at the end of each year because of "spend it or lose it" budgeting, yeah, the millions don't matter.


41 minutes ago, Sauron said:

To be honest the hardware is a secondary concern here. The software has clearly barely been touched in the last few decades.

Maybe not, but it works. As I said, the staffing is more likely the issue here rather than the system itself.

Quote

I don't care if they used assembly; it's not the language that matters. If you design a complex system to work in half a MB of RAM, it's not going to scale well even if you have better hardware.
 

Incorrect; that wholly depends on the type of data you are processing. In this case it is probably sets of numbers, social security and the like. The data volumes are probably quite low but the throughput is high, so a well optimised bit of code really is important. That kind of throughput on a single system is hard to achieve on x86, for instance, which is one reason mainframe tech is still relevant.

Quote

The problem didn't fundamentally change; after all, people are people and unemployment rates are generally fairly stable. The problem is that, while 40 years ago we didn't have the technology to face a crisis like this, now we do, and it hasn't been employed because it's easier and cheaper to just keep the system that has worked until now - even if you no longer have any idea how it works.

What technology? This has little to do with the underlying technology, and more to do with keeping the software and the staffing up to date. This was also something that was impossible to predict. Who would have thought back in December that most of the planet would be in lockdown, the economy would crash and unemployment would reach a level never seen before, by an order of magnitude? Hopefully it will be short-lived.

 

When you build and commission a system you could massively over-engineer it, but you have to have limits. This current crisis has exceeded any predictions, so much so that I very much doubt any system would have coped even at 5 times the budget the department has. Even the trillion-dollar systems the streaming companies are using are feeling the strain, and as such have had to turn the quality down, in some cases by quite a lot. You cannot just turn the quality down with social security systems.

Quote

Whatever, it's pennies for the US government. They're not there to turn a profit; the goal should be to have robust infrastructure. They don't seem to worry about a few million when they set the military budget.

I can only guess that this is more to do with staffing than the actual systems involved. There is almost certainly an amount of smoke and mirrors involved here. It is easy to blame a system, especially when the powers that be do not understand it. All they see is old tech, and they listen to buzzwords such as cloud and open systems. They often have very little understanding, but they do know how to spin a story to deflect the actual issues from their door.


1 minute ago, Phill104 said:

This was also something that was impossible to predict. Who would have thought back in December that most of the planet would be in lockdown, the economy would crash and unemployment would reach a level never seen before, by an order of magnitude? Hopefully it will be short-lived.

An event like this was predicted years ago by the Pentagon; it was one of the events the US was least prepared for. It is in part why the US wanted cheap, long-life ventilators for the stockpile 8+ years ago, but 0 of the 10,000 ordered were delivered.

Had we reacted fast in January, we could be looking at a very different world.

 

Right now it is looking like about the same level as the Great Depression.

5 minutes ago, Phill104 said:

When you build and commission a system you could massively over-engineer it, but you have to have limits. This current crisis has exceeded any predictions, so much so that I very much doubt any system would have coped even at 5 times the budget the department has.

No need to massively over-engineer the hardware, as hardware is easy to add more of; you just have to make the software able to handle it. It may not be a bad idea to plan for, say, 1/3 of the country losing their jobs. We do run like a clock.

 


3 minutes ago, GDRRiley said:

An event like this was predicted years ago by the Pentagon; it was one of the events the US was least prepared for. … It may not be a bad idea to plan for, say, 1/3 of the country losing their jobs.

 

Again, I am sure the software just needs the staffing to deal with this. That is where the true expense is, not the kit involved: for every million you spend on kit, you spend four on licensing and twelve on staffing. Also, scaling hardware in a hurry is not a trivial task; it takes a lot of planning, as well as lead times to manufacture. You also have to increase the system bandwidth, along with a number of other things. Until you have been involved in such projects you will not realise just what is involved. I am quite certain those who wrote this story have no clue.


18 minutes ago, Phill104 said:

Also, scaling hardware in a hurry is not a trivial task; it takes a lot of planning, as well as lead times to manufacture. You also have to increase the system bandwidth, along with a number of other things.

That depends on the hardware. If you're running some kind of cluster, you just add more, and there were easily 2 months to prep. Again, plan from the start: if you've got, say, a 20-strand fiber cable, set up half of it with everything ready to go, even if you only need 2 strands' worth now, so you can scale up.

A lack of planning for, and reacting to, possible events is what got us here. This isn't a bank; you don't just have to plan for a steady increase. You should over-provision: 16x, maybe not, but a 10x should be doable for these kinds of systems.
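As a rough sanity check on those multipliers, here is a back-of-the-envelope sketch. The figures are approximate public claim numbers, not anything from the article:

# Back-of-the-envelope: how big was the claims surge, roughly?
# Approximate figures: US initial unemployment claims ran around
# 200,000-250,000 per week before the crisis, and peaked at roughly
# 6,600,000 in a single week in early April 2020.
normal_weekly_claims = 220_000    # approximate pre-crisis week
peak_weekly_claims = 6_600_000    # approximate worst pandemic week

surge = peak_weekly_claims / normal_weekly_claims
print(f"surge factor: ~{surge:.0f}x")  # ~30x

On those numbers even a 10x over-provisioned system would have saturated, so both points are in play: planning headroom helps, and the spike still exceeded any sensible prediction.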

 

Quote

who’ve been working nonstop to make our 40-year-old mainframe systems

https://nj.gov/governor/news/news/562020/approved/20200404b.shtml

The NJ one is 40 years old; there's the source.


25 minutes ago, GDRRiley said:

That depends on the hardware. If you're running some kind of cluster, you just add more, and there were easily 2 months to prep. … You should over-provision: 16x, maybe not, but a 10x should be doable for these kinds of systems.


 

Not really; you have to see the whole picture. Enterprise-level systems are very different to your average desktop. You could also consider cloud-style nodes, but again, there are huge challenges involved in that. It really is not as simple as you may believe.

25 minutes ago, GDRRiley said:

 

The NJ one is 40 years old; there's the source.

Spin, that is what is going on there. The system may be that old, but that is the system, not the underlying hardware. There are banking systems, for instance, that have been running since 1958, but not one bit of hardware or software still exists from then. "System" is a very careful choice of words. Systems evolve over years, and often elements do fall behind the curve.

It is easy to blame IT in this case; it is a big target for those in power, especially those that do not really understand IT. Right to repair has shown us just how little senators understand IT in many cases. Rightly so, too; that is not their remit. They have their skill set, and to get where they are they must be good at it. So they have to rely on advisors and come to conclusions that keep the public happy.


4 minutes ago, Phill104 said:

Not really; you have to see the whole picture. Enterprise-level systems are very different to your average desktop. You could also consider cloud-style nodes, but again, there are huge challenges involved in that. It really is not as simple as you may believe.

 

Nothing is simple, nor am I trying to paint it as such. Nothing I mentioned there was about average desktops. If you built the clusters a specific way, you could toss in desktops if needed; recommended, no, but possible, yes.

2 minutes ago, Phill104 said:

It is easy to blame IT in this case; it is a big target for those in power, especially those that do not really understand IT. Right to repair has shown us just how little senators understand IT in many cases. Rightly so, too; that is not their remit. They have their skill set, and to get where they are they must be good at it.

I'm not blaming IT at all. I'm blaming it on those in power.

 

2 minutes ago, Phill104 said:

 So they have to rely on advisors and come to conclusions that keep the public happy. 

They rely on lobbyists after Congress stopped the tech advisory service that existed for so long. It isn't public happiness; it's corporate greed.


1 hour ago, Phill104 said:

Incorrect; that wholly depends on the type of data you are processing. In this case it is probably sets of numbers, social security and the like. The data volumes are probably quite low but the throughput is high, so a well optimised bit of code really is important. That kind of throughput on a single system is hard to achieve on x86, for instance, which is one reason mainframe tech is still relevant.

I have a reasonable example of something much older being much faster than its new replacement. This was a long time ago, so exactly which models were involved is a bit fuzzy, but the timelines are pretty solid.

 

Back in the 90s (95-97) the hospital here where my dad works purchased two Sun Ultra Enterprise servers, either 5000s or 6000s, to run the patient management system on. These ran Solaris and all the other UltraSPARC-specific Oracle software, because that is what you bought into, obviously. The entire design of those systems was around high-throughput I/O and processing, with a zero-downtime, never-fail philosophy. These ran until around 2008-2009, so around 15 years, during the period of the CPU performance explosion. During those 15-odd years there were zero outages and no performance issues: flawless, and also easy to upgrade the software and OS.

 

Cut to 15 years later, now with budget constraints and upper management ignorance of how important IT is and how much it actually costs. The hardware needed replacing, so what do you think it got replaced with? The current-generation replacement? HAH! No, for two really big reasons: it was too costly, but also they left it too long and Sun had already started putting that enterprise-style server range to bed entirely, so while an E4900 or E6900 would have made sense, those went EOL in Jan 2009.

 

So what were the replacements then? A pair of either T5120s or T5140s, because those were the only things Sun was selling with UltraSPARC CPUs at the time. But these were far more modern servers with much more modern multi-core, higher-frequency CPUs, so they should have been much faster. Well, no: they were in fact slower, and there were a ton of performance issues.

 

Both the older and newer systems ran Solaris and the same software. Some upgrades were done during the migration, so it was not exactly the same software versions, but it's rather pathetic that something 15 years newer, running the same software and doing the same task with the same load demand, was slower.


34 minutes ago, leadeater said:

I have a reasonable example of something much older being much faster than its new replacement. … It's rather pathetic that something 15 years newer, running the same software and doing the same task with the same load demand, was slower.

I recently scrapped a load of T5140s.

 

15 years is a long time to leave something running on the same hardware, mainly because spares become unavailable. I agree, though: it is sometimes pathetic how newer kit cannot cope as well as older stuff. Solaris/SPARC systems are a great example. I think that is mainly down to Oracle taking over Sun and then dropping most of the hardware; they make their money in other areas, as you know.

 

Mainframe tech is a bit different, and in most cases migrating to x86 from even a low-end mainframe is quite a challenge and requires a shedload of kit just to have the same functionality. It often ends up costing more, being less reliable and less secure. And as you say, it can often be slower too.


I think a lot of people would be surprised at how prevalent a lot of these systems are.

Canadian Tire still uses the AS/400, and while most of the old IBM terminals have been replaced with thin clients and everything is emulated, each store has one final terminal. There was supposed to be a new system debuting in the Fall of 2019, but it was delayed, as most projects are, and with this global pandemic it is not clear when it will release.

Walmart Canada also had a system based on COBOL as recently as 2013, which I believe they have since replaced or augmented with new software.

For all the people lamenting why or how these systems still exist: Target Canada launched with different technologies than in the US, which was a big portion of why it failed. Rather than working with the tried-and-true systems (doing the data conversion, currency conversion, language additions, and so on), they went with SAP, a new forecasting system, a new POS system....

There were so many problems because of the new technology, including things nobody had even anticipated. New hardware and software definitely have the capability to improve things, but the scope needed to address and handle everything that has already been ironed out, or where the process has been altered to run smoothly, is just ridiculously massive. Whether you're in government or an executive, being the one who flicks that "on" switch on potentially cascading failures is a daunting task.


Gotta love COBOL!


Part of the problem with replacing old systems is literally figuring out how the old system worked to begin with. Then what relies on it, and how those things interact with the machine.


Over the decades changes are made, new equipment is installed that relies on the old equipment, and these things don't get properly documented. That in and of itself can cause bigger issues.
Remember a few months ago when the video game WWE 2K20 stopped working because it was the year... 2020? Apparently parking meters also had issues starting in 2020, because they relied on systems that had a quick-and-dirty Y2K "windowing" fix: keep two-digit years, but read 00-19 as 2000-2019. When the real year hit '20', those systems rolled dates back to 1920.
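A minimal sketch of how that kind of windowing fix behaves (the pivot of 20 here is an assumption for illustration; real systems picked various pivots):

# Quick-and-dirty Y2K "windowing": keep storing two-digit years, but
# interpret them relative to a pivot. With a pivot of 20, 00-19 are
# read as 2000-2019 and 20-99 as 1920-1999.
PIVOT = 20  # assumed pivot for illustration

def expand_year(yy: int) -> int:
    """Expand a two-digit year using a fixed windowing pivot."""
    return 2000 + yy if yy < PIVOT else 1900 + yy

print(expand_year(5))   # 2005 - fine
print(expand_year(99))  # 1999 - fine
print(expand_year(20))  # 1920 - the 2020 rollover bug

The fix bought exactly 20 years: the moment the real year hit '20', anything still using the window started reading dates as 1920.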

I wouldn't doubt there are systems that use this machine without even realizing it. Do you know how old your ISP's equipment is? You probably know the age of your router, switch, and modem, but what about the stuff on the other side? For all you know, your internet traffic is going through a 20-year-old machine.
The people installing a system in 2015 just know you have to connect to the machine at such-and-such an address. They don't know its age, or what kind of machine it is. They just know it wants to communicate in some proprietary or legacy format, which is not indicative of the age of the machine itself, since their brand-new machine will also be talking in that proprietary/legacy format, as the sketch below illustrates.
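To make that concrete, here is a hypothetical sketch. The field names and widths are invented, not from any real system; the point is that nothing in code like this reveals whether the machine on the other end is from 1985 or 2015.

# Hypothetical fixed-width "legacy" record layout - the kind of
# proprietary format an installer in 2015 might be told to speak
# without ever learning what is on the other side.
FIELDS = [                 # (name, width) - invented for illustration
    ("record_type", 2),
    ("account_id", 10),
    ("amount_cents", 12),
    ("date_yymmdd", 6),    # two-digit year: a Y2K-era artifact
]

def parse_record(line: str) -> dict:
    """Slice one fixed-width record into named fields."""
    out, pos = {}, 0
    for name, width in FIELDS:
        out[name] = line[pos:pos + width].strip()
        pos += width
    return out

print(parse_record("01A123456789000000012345200409"))
# {'record_type': '01', 'account_id': 'A123456789',
#  'amount_cents': '000000012345', 'date_yymmdd': '200409'}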
 


12 hours ago, Sauron said:

Pretty sure handling a database is possible with modern technology 🤔

I know it seems like that should be the case, but it's not. Lots of IT upgrades fail dramatically, and when your infrastructure provides a critical service, risking that failure is not an option.


5 minutes ago, mr moose said:

I know it seems like that should be the case, but it's not. Lots of IT upgrades fail dramatically, and when your infrastructure provides a critical service, risking that failure is not an option.

Which is why you run the systems in parallel during the transition. I don't see how delaying the failure to a time when it's especially needed makes it better.
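A minimal sketch of that parallel-run idea, assuming both systems can be fed the same inputs. Everything here (the names, the process() interface) is hypothetical: the legacy system stays authoritative while the replacement shadows it, and divergences are logged instead of served.

# Hypothetical parallel-run ("shadow") migration: the legacy system
# remains authoritative while the replacement processes the same
# input, and any mismatch is logged for investigation.
import logging

logger = logging.getLogger("parallel_run")

def process_claim(claim: dict, legacy_system, new_system):
    legacy_result = legacy_system.process(claim)    # still the system of record
    try:
        new_result = new_system.process(claim)      # shadow run only
        if new_result != legacy_result:
            logger.warning("divergence on claim %s: legacy=%r new=%r",
                           claim.get("id"), legacy_result, new_result)
    except Exception:
        # A shadow failure must never take down the live service.
        logger.exception("shadow system failed on claim %s", claim.get("id"))
    return legacy_result    # callers only ever see the legacy answer

You cut over only once the divergence log stays quiet for long enough; the hard part, as the reply below argues, is that entangled decades-old systems rarely let you feed both sides identical inputs in the first place.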


1 minute ago, Sauron said:

Which is why you run the systems in parallel during the transition. I don't see how delaying the failure to a time when it's especially needed makes it better.

Because this isn't a small business running Office and MYOB. The systems are not so simple that you could do that; if they were, they would just upgrade everything as they went along.

 

I don't know anything about this specific installation, but most mainframes (updated or not) are actively chosen by people with experience and education; they don't just leave these things to fail because they are tight or lazy.

 

https://www.networkworld.com/article/3148714/why-banks-love-mainframes.html


1 minute ago, mr moose said:

I don't know anything about this specific installation, but most mainframes (updated or not) are actively chosen by people with experience and education; they don't just leave these things to fail because they are tight or lazy.

So experienced and educated that now they're looking for people to try and figure out abandoned code from decades ago...

2 minutes ago, mr moose said:

Because this isn't a small business running Office and MYOB. The systems are not so simple that you could do that; if they were, they would just upgrade everything as they went along.

If they wanted to, I'm sure they could. This isn't some arcane magic that was only possible 30 years ago and cannot be equaled by the mere mortals of today. Expensive? Sure. Complex? Maybe. Neither should be a problem for a government-funded institution that provides an essential service to the population.


5 minutes ago, Sauron said:

So experienced and educated that now they're looking for people to try and figure out abandoned code from decades ago...

If they wanted to, I'm sure they could. This isn't some arcane magic that was only possible 30 years ago and cannot be equaled by the mere mortals of today. Expensive? Sure. Complex? Maybe. Neither should be a problem for a government-funded institution that provides an essential service to the population.

 

You keep saying how sure you are that these things can be done, but the reality is they aren't being done; instead, they are choosing to maintain them, and there is a good reason for that.


Here's another great article about it: about 92% of banks still use mainframes that run COBOL because they're faster, safer and more stable:


 

Quote

 

In fact, while mobile banking seizes the headlines, 92 per cent of the world’s top banks are still reliant on mainframes. And that’s by choice. The advantages remain sound.

 

For a start, there’s the sheer brute force of such systems. What lifts them above other platforms is their ability to process large numbers of transactions, drawing data from several resources, all at the same time.

 

Visa, the world’s largest credit card company, relies heavily on mainframe technology, and processes 145,000 transactions every second using such systems.

 

But speed is only part of the equation. Security remains just as key.

 

Physical mainframe infrastructure has long been respected for its deep encryptions and safety features. And it's in security where much of the development funding is being invested.

https://www.itproportal.com/features/why-the-mainframe-remains-a-crucial-foundation-of-the-banking-sector/


On 4/9/2020 at 7:36 AM, rcmaehl said:

or task for the creation of a new programming language to replace the aging COBOL.

We already have that. BASIC is one example that comes to mind. Or C, or literally any language other than FORTRAN, ALGOL, and LISP (the only three widely known languages that are commonly considered to predate COBOL).


Just, wow.  I was reasonably competent in COBOL and Fortran IV - in 1972!

