
YOU Get a Supercomputer!

9 minutes ago, Bramimond said:

It would be nice if we could have a cultural shift towards more efficient software.

The trouble with that, as any programmer will tell you, is that efficient code is difficult (or, more accurately, a PITA) to write, whereas assuming that everyone has a powerful computer to run your sloppy code is easy.

NOTE: I no longer frequent this site. If you really need help, PM/DM me and my e-mail will alert me.


< removed quoted part >

 

On 1/11/2023 at 6:33 PM, Eric Kolotyluk said:

Well, comparing Apple Silicon to AMD/Intel on performance per watt is not stupid at all, unless you are claiming Apple is stupid. I guess I should have been more specific. Largely, Apple has really good reasons for switching from x86 to ARM, and I tend to respect their credentials more than Intel's...

 

AMD routinely has better CPU performance per watt than Intel, but their x86 products still fall short of Apple Silicon in this metric. I really wish AMD would release their ARM projects as products...

I didn't call it stupid, I called it silly, because it wouldn't achieve the effect you'd expect. It's a typical fallacy amongst software developers and IT professionals who don't have a background in hardware design. If you wanted an ARM chip to do the same task as an x86 part from Intel or AMD, it would probably consume about the same power, provided it clocked in at around the same performance with the same feature set.

 

For example, do you think I pulled the example of PCIe out of my arse? Depending on the technology, a typical SerDes paired with a differential transceiver might literally take 100-200 mW per channel. Now you've got PCIe lanes going to those four GPUs, each with 16 lanes, and each lane with a transmitter and a receiver, so we're already talking 3.2 W to 6.4 W just to run a single x16 interface. This might seem inconsequential, but it's death by a thousand cuts throughout the entire design. Meanwhile, Apple isn't really allowing much in the way of custom hardware interfacing and has a highly integrated design, so they avoid a lot of these types of inefficiencies. But if you adjusted the M1 and M2 to the point where they could substitute for the CPUs in these workstations with the same degree of modularity, you'd probably end up with approximately the same power consumption for the same performance.
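
To make that arithmetic concrete, here is a back-of-the-envelope sketch in Python. The 100-200 mW per channel range is the figure quoted above, and the lane counts follow from the four x16 GPU slots being discussed; real numbers will of course depend on the PHY technology and the link power states.

```python
# Back-of-the-envelope SerDes power estimate for PCIe, using the
# 100-200 mW per channel range quoted above. Illustrative only.
def serdes_power_w(lanes: int, mw_per_channel: float) -> float:
    channels = lanes * 2  # each lane has a TX channel and an RX channel
    return channels * mw_per_channel / 1000.0

for mw in (100, 200):
    per_x16 = serdes_power_w(16, mw)  # one x16 slot, e.g. a single GPU
    print(f"{mw} mW/channel: {per_x16:.1f} W per x16 link, "
          f"{4 * per_x16:.1f} W for four x16 links")
```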

 

As for the performance per watt in the whole Intel vs AMD vs Apple story, this one has a few reasons. The technology node is definitely one of them, though the differences aren't nearly as big as TSMC's questionable marketing might lead you to believe. But I suspect this is a good example of where AMD's chiplet approach helps quite a lot: you can implement features, like the aforementioned I/O, in a technology that is more efficient for them. It also spreads out the area over which the heat is generated, making it easier to handle the power dissipation and lowering the overall temperatures compared to a monolithic die. This might seem inconsequential, but consider that CMOS leakage currents, and the power dissipated as a result of that leakage, tend to be an exponential function of the junction/substrate temperature. So the gains from keeping everything slightly cooler can be surprisingly large. However, I wouldn't be surprised if this comes at a significant cost in terms of packaging yield.
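
To put a rough number on that leakage point, here is a minimal sketch using the common rule of thumb that subthreshold leakage roughly doubles for every ~10 °C rise in junction temperature. The baseline leakage figure and the doubling interval are illustrative assumptions, not measurements of any particular chip.

```python
# Illustrative only: subthreshold leakage is often approximated as roughly
# doubling for every ~10 degC rise in junction temperature. The baseline
# (20 W of leakage at 60 degC) is an assumption made up for this example.
def leakage_w(temp_c: float, base_w: float = 20.0,
              base_temp_c: float = 60.0, doubling_c: float = 10.0) -> float:
    return base_w * 2.0 ** ((temp_c - base_temp_c) / doubling_c)

for t in (60, 70, 80, 90):
    print(f"{t} degC junction -> ~{leakage_w(t):.0f} W of leakage")
# Under this rule of thumb, a chiplet layout that keeps the silicon
# 10-20 degC cooler cuts the leakage component of the budget by 2-4x.
```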

 

On 1/11/2023 at 6:54 PM, Radium_Angel said:

Gov't agencies...for whom budgets are an abstract issue. (I work for a gov't agency...cost is not considered a factor when looking into new toys....err...hardware)

Plenty of engineering positions where this would be an affordable expense; folks who run simulations all day come to mind, and the folks working on complex CAD assemblies might also appreciate the performance.

On 1/11/2023 at 6:56 PM, igormp said:

You could have simply mentioned that instead of bringing up credentials, and it does have a minor effect, since you need to trade off between a beefier decoder and micro-op cache vs. a larger instruction cache, and caches always demand some power. But I digress, those are still not as important when it comes to the bigger picture anyway.

You might also change what you dedicate die area to depending on your instruction set and the intended use case. It's not unheard of to perform statistical analysis on typical code when implementing modular soft cores on an FPGA and to tweak the design accordingly.
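
For what it's worth, that kind of analysis can start as simply as counting mnemonics in a disassembly to see which instructions actually dominate the workload. A minimal sketch, assuming a RISC-V disassembly dumped with objdump to a file called firmware.dis (the file name and toolchain are hypothetical):

```python
# Gather an instruction-mix histogram from a disassembly, e.g. one made with
#   riscv64-unknown-elf-objdump -d firmware.elf > firmware.dis
from collections import Counter
import re

counts = Counter()
with open("firmware.dis") as f:
    for line in f:
        # objdump lines look like: "80000074:  00a585b3   add  a1,a1,a0"
        m = re.match(r"\s*[0-9a-f]+:\s+[0-9a-f]+\s+(\S+)", line)
        if m:
            counts[m.group(1)] += 1

total = sum(counts.values()) or 1
for mnemonic, n in counts.most_common(15):
    print(f"{mnemonic:10s} {n:8d}  {100.0 * n / total:5.1f}%")
```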


23 hours ago, JoshB2084 said:

Finally able to play Minesweeper at insane FPS!

yes but can I play solitaire raytraced?

Ryzen 5800X3D (because who doesn't like a phat stack of cache?) | GPU: 7700 XT

X470 Strix-F Gaming, 32 GB Corsair Vengeance, WD Blue 500 GB NVMe, WD Blue 2 TB HDD, 700 W EVGA BR

~Extra L3 cache is exciting: every time you load up a new game or program you never know what you're going to get, will it perform like a 5700X or are we beating the 14900K today? 😅~


28 minutes ago, ImorallySourcedElectrons said:

Plenty of engineering positions

Certainly possible, though I am not an engineer, I'm just a drain on taxpayers 😄



Just now, Radium_Angel said:

Certainly possible, though I am not an engineer, I'm just a drain on taxpayers 😄

Same thing



22 minutes ago, lotus10101 said:

yes but can I play solitaire raytraced?

No, but it might finally be able to play Crysis 



1 minute ago, Radium_Angel said:

No, but it might finally be able to play Crysis 

Can't wait for the Doom 1 ray-traced re-release to choke out 4090s.



12 minutes ago, Radium_Angel said:

Certainly possible, though I am not an engineer, I'm just a drain on taxpayers 😄

A good engineer will easily cost a company upwards of €100/hour, so having that engineer sit at the coffee machine waiting for a computer is surprisingly expensive. 😄
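
To make that concrete with some numbers: the €100/hour rate is from the post above, while the wasted time and working days are assumptions picked purely for illustration.

```python
# Rough annual cost of an engineer waiting on a slow workstation.
hourly_rate_eur = 100            # from the post above
wasted_minutes_per_day = 30      # assumption: waiting on meshes, solves, renders
working_days_per_year = 220      # assumption

annual_cost = hourly_rate_eur * (wasted_minutes_per_day / 60) * working_days_per_year
print(f"~EUR {annual_cost:,.0f} per engineer per year")  # ~EUR 11,000
```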


What year's supercomputer is today's mid-to-high-end gaming PC equivalent to in computational power?

You own the software that you purchase - Understanding software licenses and EULAs

 

"We’ll know our disinformation program is complete when everything the american public believes is false" - William Casey, CIA Director 1981-1987


23 hours ago, ImorallySourcedElectrons said:

Working on large simulation data sets comes to mind. Running ParaView on a remote cluster is not fun!

Unless it is running at a high FPS, I would think that could be solved by a powerful enough vGPU on the server.

AMD 7950x / Asus Strix B650E / 64GB @ 6000c30 / 2TB Samsung 980 Pro Heatsink 4.0x4 / 7.68TB Samsung PM9A3 / 3.84TB Samsung PM983 / 44TB Synology 1522+ / MSI Gaming Trio 4090 / EVGA G6 1000w /Thermaltake View71 / LG C1 48in OLED

Custom water loop EK Vector AM4, D5 pump, Coolstream 420 radiator


13 minutes ago, Delicieuxz said:

What year's supercomputer is today's mid-to-high-end gaming PC equivalent to in computational power?

According to the graph on Wikipedia, a 4090 is close to a supercomputer from 2003-2005 as far as 32-bit FLOPS go. It can be hard to compare, because a lot of supercomputers have fixed-function hardware.

 

EDIT: Don't trust me on this, I may be misinterpreting it.
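
As a rough sanity check on that graph (all figures approximate, and note the mismatch: the 4090 number is peak FP32 throughput, while TOP500 ranks machines by FP64 Linpack Rmax):

```python
# Very rough comparison, approximate figures only.
rtx4090_fp32_tflops = 16384 * 2.52e9 * 2 / 1e12  # shaders x boost clock x 2 FLOP (FMA)

top500_leaders_rmax_tflops = {     # approximate FP64 Linpack scores of former #1 systems
    "Earth Simulator (#1, 2002-2004)": 35.9,
    "BlueGene/L (#1, Nov 2004)": 70.7,
    "BlueGene/L (#1, Nov 2005)": 280.6,
}

print(f"RTX 4090 peak FP32: ~{rtx4090_fp32_tflops:.0f} TFLOPS")
for system, rmax in top500_leaders_rmax_tflops.items():
    print(f"  {system}: {rtx4090_fp32_tflops / rmax:.1f}x")
```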

If your question is answered, mark it so.  | It's probably just coil whine, and it is probably just fine |   LTT Movie Club!

Read the docs. If they don't exist, write them. | Professional Thread Derailer

Desktop: i7-8700K, RTX 2080, 16G 3200Mhz, EndeavourOS(host), win10 (VFIO), Fedora(VFIO)

Server: ryzen 9 5900x, GTX 970, 64G 3200Mhz, Unraid.

 


1 hour ago, Delicieuxz said:

What year's supercomputer is today's mid-to-high-end gaming PC equivalent to in computational power?

If you are going by year, our cell phones are faster than supercomputers from just a few decades ago. Go back to the first supercomputers and it's closer to a basic digital watch!



1 hour ago, ewitte said:

Unless it is running at a high FPS, I would think that could be solved by a powerful enough vGPU on the server.

To grossly simplify how it works: the rendering job has to be queued, and you have to wait until it's your turn. 🙂


19 hours ago, igormp said:

That's a double-edged sword, since it'd take way more time to validate and deliver stuff, which means constrained cash flow.

 

19 hours ago, Radium_Angel said:

efficient code is difficult (or, more accurately, a PITA) to write,

 

I am aware of the developer perspective and the business perspective.

But as a user I'd be happy about a cultural shift towards more efficient software.

 

On the other end of that conversation you have games like CP2077 and the new Pokemon, which released as broken messes. It's a shame that this practice is financially successful.

 

I don't mind waiting for better software. But I don't think any amount of waiting is going to lead to, say, Adobe's software products becoming bearable in terms of performance.

 

It is unfortunate that no one seems to care about optimization at any point in the process. Just imagine how much energy (or battery life) could be saved simply by incentivizing developers to write more efficient software.


2 hours ago, Bramimond said:

I don't mind waiting for better software.

Investors do, and that's where the issue comes from. The developers themselves surely didn't want to deliver broken stuff that would stain their reputation, but they usually have no say in the matter.

 

So yeah, not much of a dev culture issue but more your regular capitalism complaint.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


2 hours ago, igormp said:

not much of a dev culture issue but more your regular capitalism complaint.

That somehow only affects software development and no other industries. 🤔

Also, I never said that developers are the issue, that's just your strawman.

 

But for some reason it is culturally accepted for software to be a bloated mess. Possibly because so few people understand it enough to even know that we could do a lot better with more effort.

 

It's easy for people outside of the field to understand that crypto mining is "wasteful". But it's harder to wrap your head around the idea that the energy bill of every PC could be halved if Microsoft gave two shits about having an optimized operating system. It would also be entirely possible to reduce the time between turning your computer on and doing productive work to under a second if optimization were taken seriously.

 

 

But culturally we just accept how things are. We, the consumers, do not put pressure on for example Microsoft to do a better job. Consumers vote with their wallet. I'm not complaining that companies only do as much as they have to, I'm lamenting that consumers are satisfied enough to pay money for it.

 

Good enough is the archnemesis of better, after all.


On 1/10/2023 at 10:41 AM, Outdoors-Filter8 said:

Any reason you'd need a Workstation like this in your home instead of having them in a server room and remoting into it?

Latency, primarily; connection stability is another. You can't use cloud services if connection availability is subject to connection charges or per-hour costs whenever the connection is interrupted.

 

I'm going to be blunt: there is exactly one use case that justifies having this "at home / at the local office" vs. some data center, and that is data privacy.

 

If you had the ChatGPT for Medical consultations using customer data.

 

If you had a virtual assistant that has access to only your data.

 

Pretty much the only reason to have it here instead of in a data center is that it's under your control and there is no possibility of data leaking, because it's not connected to the internet.

 


On 1/12/2023 at 7:29 PM, Bramimond said:

That somehow only affects software development and no other industries. 🤔

Also, I never said that developers are the issue, that's just your strawman.

 

But for some reason it is culturally accepted for software to be a bloated mess. Possibly because so few people understand it enough to even know that we could do a lot better with more effort.

 

It's easy for people outside of the field to understand that crypto mining is "wasteful". But it's harder to wrap your head around the idea that the energy bill of every PC could be halved if Microsoft gave two shits about having an optimized operating system. It would also be entirely possible to reduce the time between turning your computer on and doing productive work to under a second if optimization were taken seriously.

 

 

But culturally we just accept how things are. We, the consumers, do not put pressure on for example Microsoft to do a better job. Consumers vote with their wallet. I'm not complaining that companies only do as much as they have to, I'm lamenting that consumers are satisfied enough to pay money for it.

 

Good enough is the archnemesis of better, after all.

Loads of other products are a bloated mess as well, but the barrier to entry is quite low with software. Meanwhile, mechanical or electronic products require tools and skills that aren't readily available at home.

 

22 hours ago, Kisai said:

I'm going to be blunt: there is exactly one use case that justifies having this "at home / at the local office" vs. some data center, and that is data privacy.

 

If you had the ChatGPT for Medical consultations using customer data.

 

If you had a virtual assistant that has access to only your data.

 

Pretty much the only reason to have it here instead of in a data center is that it's under your control and there is no possibility of data leaking, because it's not connected to the internet.

May I propose an alternative target group: small companies or lone consultants who don't have a data centre or dedicated IT person.

 

And regarding medical record privacy, I'm not so sure that's a concern. Within the EU's rather strict privacy framework, we can access our medical records online using our identity cards and a card reader. I have full access to images, reports, prescriptions, etc. going back more than ten years. Older files might take a couple of minutes to retrieve, but all of it is there in that unified medical record system. So I don't think running a medical GPT-like system would face many legal problems on this side of the pond.

 

57 minutes ago, Eric Kolotyluk said:

promote endless devil's advocate style debate in a meaningless attempt to prop up their inadequate self esteem by disrespecting and invalidating others.

Someone disagreeing with you and presenting the other side of the argument is not trolling. As to why I called it a devil's advocate argument, it's simply because the entire RISC vs. CISC debate is pointless given how modern CPUs decode instructions, allocate resources, and execute parts of those instructions. We made massive strides in both semiconductor technology and digital design in the 90s, so the boundary conditions limiting us when designing a CPU are entirely different now. This is also one of the key reasons why Itanium failed as badly as it did: it was about a decade too late to have mattered, and x86/amd64 has too much legacy code going for it.

 

Also, what you're asking for already exists, though not in ARM form. Look up the Talos II motherboards with IBM POWER9 CPUs; that's a proper workstation with RISC CPUs. They're just not very popular.


On 1/13/2023 at 8:40 PM, ImorallySourcedElectrons said:

Loads of other products are a bloated mess as well, but the barrier to entry is quite low with software

For example?

Also, I'm not talking about people at home - some of whom are trying to do something about the bloat.

I am talking about tech giants like Microsoft and ultimately consumers who accept things how they are.


6 hours ago, Bramimond said:

For example?

Also, I'm not talking about people at home - some of whom are trying to do something about the bloat.

I am talking about tech giants like Microsoft and ultimately consumers who accept things how they are.

Pretty much anything where design by committee applies; the worst case I've ever seen was a satellite design. It had everything: waterfall planning, sunk-cost fallacies, political meddling, management collaborating with supplier sales teams against engineering, and cost cutting wherever possible.

 

But it's not uncommon at this point to see folks use a full-fledged 32-bit ARM SoC for something that could be done by a cheap '80s 8-bit MCU running at a fraction of the power consumption.


Jake, you committed one of my biggest pet peeves at 7:50. You say, "There's also more PCIe slots..." when you should have said, "There are also more PCIe slots." People using "is" when they should use "are" drives me crazy. And once you notice it, you notice it everywhere. I long for death.

Still, cool computer.

System Specs: Second-class potato, slightly mouldy

