
Build a PC while you still can

Full disclosure: I posted this on the YouTube comments as well but am curious as to y'all's opinions.

 

This is the first time I've left a comment on one of your videos due to an unhappy emotion but felt a need to write my thoughts somewhere they could, ostensibly, be seen. While a shift from x86 is indeed inevitable, I disagree with the notion implied by the choice to compare an M1 Mac to a custom PC: that all computers will over time tend towards being less serviceable, less upgradeable, and nearly non-customizable due to the power consumption and efficiency differential between the two. For evidence, I present the audiophile and keyboard markets. While they have both been generally standardized, they have actually become more customizable as a result of the extremely dedicated hobbyists who are willing to pay more for less efficient yet more effective products. I foresee a similar future for the computer market: the HPs and Dells of the world will begin delivering more M1 Mac-like systems as their users expect and require a high degree of energy and space efficiency, while the more enthusiast-focused brands will likely transition to a model of fewer, higher-margin sales instead of the current many, lower-margin sales, and still provide systems that allow their users high customization and higher energy bills.

As an aside, I foresee the efficiency of materials as being of greater import than that of energy, as the amount of useful energy within our reach dwarfs the amount of matter, especially that of any specific element, within our reach. I would also argue that over time the ability to swap out broken or outdated components is considerably more silicon-efficient than a smaller system that will be ready for the trash chute as soon as any part breaks or becomes outdated.


35 minutes ago, kumicota said:

I like how this thread ended up being about whataboutism only.

 

Person 1: For me the minimum is X.

Person 2: Do you say the same about Y?

Not really. People say they must have upgradeable memory for an SoC with a powerful GPU inside it. While it is possible to build an SoC with a powerful internal GPU, if you want to use the same memory pool to get the advantages of the SoC approach and that memory is socketed, then it is the same as asking for socketed VRAM on a GPU; after all, that memory is the VRAM.

If you demand that SoCs with powerful GPUs have socketed RAM, then it is technically the same as demanding that stand-alone GPUs have socketed memory. You need to accept the consequences of this: the power draw of a DDR5 memory system providing the same bandwidth as a high-end desktop GPU's memory would be close to what current GPUs use on their own, it would also take up more space than your entire current PC build, and it would cost a fortune in traces, extra modules, power management, etc.

 

The reason you have soldered memory on GPUs is exactly the same reason powerful SoCs will have soldered memory:

* power draw
* cost (not just the memory dies but all the other ancillary extras needed for DIMMs)
* complexity (motherboard, memory controllers etc)
* space

GPU board partners can build GPUs at scale (ish), but if every GPU needed the complex PCB to support 16+ DDR5 DIMMs of memory, the complexity of building this (even if you don't care about cost) would massively limit production capacity; the number of PCB vendors that can currently produce a PCB with that many layers without interference at those speeds is very limited. No idea how you would even fit that many DDR DIMMs into even a massive 4-slot-wide card.
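A rough back-of-the-envelope in Python for where a "16+" figure like that comes from; the GPU bandwidth target and DDR5 speed below are assumed round numbers for illustration, not measurements:

    # How many DDR5 DIMMs would it take to match a high-end GPU's memory bandwidth?
    # Assumed numbers: a ~900 GB/s card (3090-class) and DDR5-4800 (4800 MT/s on a 64-bit DIMM).
    def dimms_needed(target_gb_s, mt_s=4800, bus_bytes=8):
        per_dimm_gb_s = mt_s * bus_bytes / 1000   # ~38.4 GB/s per DIMM
        return target_gb_s / per_dimm_gb_s

    print(round(dimms_needed(900)))   # ~23 DIMMs, before any channel/controller limits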


37 minutes ago, Jackite said:

As an aside, I foresee the efficiency of materials as being of greater import than that of energy, as the amount of useful energy within our reach dwarfs the amount of matter, especially that of any specific element, within our reach. I would also argue that over time the ability to swap out broken or outdated components is considerably more silicon-efficient than a smaller system that will be ready for the trash chute as soon as any part breaks or becomes outdated.

But energy has a limit on what we can get now. And energy costs money, so lower power = lower TCO. Also, battery life depends on power usage, and users want longer battery life.

 


37 minutes ago, Jackite said:

For evidence, I present the audiophile and keyboard markets. While they have both been generally standardized, they have actually become more customizable as a result of the extremely dedicated hobbyists who are willing to pay more for less efficient yet more effective products. I foresee a similar future for the computer market: the HPs and Dells of the world will begin delivering more M1 Mac-like systems as their users expect and require a high degree of energy and space efficiency, while the more enthusiast-focused brands will likely transition to a model of fewer, higher-margin sales instead of the current many, lower-margin sales, and still provide systems that allow their users high customization and higher energy bills.

One thing that differentiates these markets is the amount of design work needed: a chip needs far more of it than headphones. You can have headphone and keyboard companies that are profitable making products mainly for enthusiasts, but you can't really make CPUs and GPUs for just enthusiasts, as they're much more expensive to design. So enthusiast parts are normally just tweaked laptop/business desktop/server parts. And Intel/AMD/Nvidia have many large customers that want efficient CPUs and GPUs to lower TCO and increase battery life, so that's what chips are made for.

 

Also, high-TDP GPUs and CPUs are often wanted in the server market, as you can decrease the number of nodes needed for a given level of compute, which means less rack space and lower cost.

 

This has also been the pattern for a long time now. Laptops are moving to more and more soldered parts across the board these days. Business desktops need to lower power usage, and those chips are the same as the enthusiast chips.


46 minutes ago, Jackite said:

Full disclosure: I posted this on the YouTube comments as well but am curious as to y'all's opinions.

 

This is the first time I've left a comment on one of your videos due to an unhappy emotion but felt a need to write my thoughts somewhere they could, ostensibly, be seen. While a shift from x86 is indeed inevitable, I disagree with the notion implied by the choice to compare an M1 Mac to a custom PC: that all computers will over time tend towards being less serviceable, less upgradeable, and nearly non-customizable due to the power consumption and efficiency differential between the two. For evidence, I present the audiophile and keyboard markets. While they have both been generally standardized, they have actually become more customizable as a result of the extremely dedicated hobbyists who are willing to pay more for less efficient yet more effective products. I foresee a similar future for the computer market: the HPs and Dells of the world will begin delivering more M1 Mac-like systems as their users expect and require a high degree of energy and space efficiency, while the more enthusiast-focused brands will likely transition to a model of fewer, higher-margin sales instead of the current many, lower-margin sales, and still provide systems that allow their users high customization and higher energy bills.

As an aside, I foresee the efficiency of materials as being of greater import than that of energy, as the amount of useful energy within our reach dwarfs the amount of matter, especially that of any specific element, within our reach. I would also argue that over time the ability to swap out broken or outdated components is considerably more silicon-efficient than a smaller system that will be ready for the trash chute as soon as any part breaks or becomes outdated.

I've been reading similar posts for many, many, many, many years. 

 

PCs are still customizable after some decades of speculation.

 

We will not see x86 fall off any time soon. It's like trying to get all the Android users to go Apple. It just won't ever happen...

 

Nobody uses their systems at default spec. It's boost, boost, boost performance and forget about the energy usage. This hasn't changed in decades, because faster requires energy.

 

Really there is a system or particular computer for just about all people and their needs. 

 

It's all a bunch of hoopla. It's just talk. It's just another bait video. None of those things happened in the past, nor will they in the foreseeable future.

 

I'd be willing to say that PCs are way more customizable now compared to the past. The cases alone have come a long way. Be damned if there isn't something with RGB any more.

 

But any old timer will tell you, fellas, PCs have gotten better and there's no end in sight. You can see the x86 roadmap well into the future.


A few potentially misleading claims / bits of misinformation in this one.

RE: ISA's effect on efficiency: Apart from restrictions on memory consistency, there really isn't much you can directly tie to a chip being aarch64 or x86. See https://research.cs.wisc.edu/vertical/papers/2013/hpca13-isa-power-struggles.pdf

 

Sure, the decode is more complicated, but current x86 machines all use RISC-like uops internally. A lot of the decode overhead is eliminated through Intel's use of a uop cache that stores decoded instructions rather than the raw x86 instructions.

 

While Apple Silicon did come from an ARM ISA license, unlike Qualcomm and others the microarchitecture and physical design were done entirely in-house by Apple. Most Qualcomm designs are modifications of existing off-the-shelf ARM core designs.


So this entire video assumes that the modular nature of PCs is completely incompatible with ARM. I'd like to actually know why it's not possible to have a socketed ARM CPU with socketed RAM and a separate GPU, similar to what we have in PCs today. Is it impossible to think of new standards/sockets that would allow for that?

Additionally, if that's genuinely the case, wouldn't it be preferable not to build a PC on a platform that will become unsupported, and instead wait to buy a new ARM machine that's going to be much better?

 

7 hours ago, Uttamattamakin said:

I 1000% agree with this. As I have been saying for a while, APUs are soon going to be so good that for 90-95% of people they will give good enough video performance for any game that now exists. Wait 10-15 years and an APU will be all you need.

I mean, this could technically be done today. Looking at the PS5 and Series X, both of them use APUs and are pretty powerful, and it's pretty hard to imagine why one of those APUs paired with some DDR5 RAM wouldn't work in a desktop PC. Maybe AMD's holding desktop APUs back because it'd undercut their lower-end GPUs like the 6500 XT or 6600 at least, but that's just speculation on my part.

 

Also. I find this funny.



I hope the LTT forum still exists in 10 years so that I can come back to this thread just to see whether Anthony's video aged well or not.

 

But I personally think the future of x86 will be similar to vehicles with manual transmissions, with ARM SoCs being the automatics. Eventually Intel and AMD will cave in and make their own RISC-based chips, be it ARM, which is a tried and tested ISA, or RISC-V, which from what I see is still "experimental", but who knows.


 

 


I really disagree with the comparisons made in this video. AMD and Intel aren't really behind Apple in CPU architecture power efficiency, and while I haven't seen actual comparisons, I also don't think AMD and Nvidia are behind on the GPU side. Comparing desktop flagship CPUs/GPUs from AMD, Intel and Nvidia to Apple doesn't really work; those parts are pushed way out of their efficiency curves for the sake of beating the competitors and looking good in performance benchmarks. Parts like the 12900K and 3090 can achieve close to 90% of stock performance while using almost half the power. Apple, on the other hand, does that by default, keeping the clocks low and allowing for much better stock efficiency; I'm not sure their SoCs even go that high frequency-wise, though.
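To put rough numbers on that efficiency-curve point (the fractions below are assumed approximations, just to show the implied perf/W gain):

    # Assumed fractions of stock performance/power when power-limiting a 12900K/3090-class part.
    perf_fraction = 0.90    # ~90% of stock performance
    power_fraction = 0.55   # ~55% of stock power ("almost half")
    print(perf_fraction / power_fraction)   # ~1.6x better perf/W just from backing off factory limits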

Also, claiming that ARM SoCs are somehow better than x86 just because Apple is doing well while all the others are kinda shit in comparison (which was mentioned in the video) doesn't make a whole lot of sense. Maybe ARM/RISC-V desktop CPUs/SoCs are going to become a thing in the future, but that depends on other manufacturers doing well, not just Apple. Nvidia would probably be the most likely to come up with a consumer desktop ARM CPU/SoC, but that doesn't seem to be their intention in the short term.


Nvidia has already released an ARM CPU for servers.

 

 

https://nvidianews.nvidia.com/news/nvidia-introduces-grace-cpu-superchip


AMD has already stated that they want to make ARM CPUs.

 

https://www.tomshardware.com/news/amd-we-stand-ready-to-make-arm-chips


Intel on ARM CPUs (Xeon D), as well as competing with AMD.

 

https://www.theregister.com/2022/06/27/intel_edge_amd_arm/


14 hours ago, Zodiark1593 said:

Rosetta does offer the possibility that we don't need to sacrifice compatibility to finally migrate off x86 (i.e., we needn't drag folk kicking and screaming).

And yet it's a huge concession to the fact that the ISA isn't anywhere close to death. Can you really say you've migrated off of it if you're constantly emulating it to do your job? Do you even get any benefit from the migration if the programs still use x86?



This was an odd video, well past what Anthony normally does.



48 minutes ago, Sauron said:

And yet it's a huge concession to the fact that the ISA isn't anywhere close to death. Can you really say you've migrated off of it if you're constantly emulating it to do your job? Do you even get any benefit from the migration if the programs still use x86?

Having had time to think things through, most client-side applications can probably do fine without any form of x86 compatibility. Most notable productivity apps have been ported to ARM in some form or another. 
 

Only a couple use cases come to mind that really depend on fast x86 compatibility. Users of old, very expensive machinery will probably be left in the cold. And PC gaming would be pretty disastrously affected in losing its vast back catalogue and essentially starting from scratch. 
 

If the industry goes the route of universal ARM (or RISC-V), I’d probably expect some (not all) SoC vendors to implement code morphing or hardware-assisted emulation for specific niches. Nvidia comes to mind, having a pretty vested interest in PC gaming, as does (to a lesser extent) AMD. Valve I can certainly see pouring substantial resources into their own SoC with fast x86 emulation (potentially partnering with AMD) if pressed to do so. 



8 hours ago, AndreiArgeanu said:

I'd like to actually know why it's not possible to have a socketed ARM CPU with socketed RAM and a separate GPU

It's not impossible; the ARM ISA supports PCIe GPUs, socketed RAM, storage and more. Ampere's ARM processors are an example of that. Companies like Apple do it this way because it's cheaper and they can sell upgrades for more (as you need to upgrade everything, not only the RAM).

 

9 hours ago, hishnash said:

power draw

That's not true; the memory bus is passive, not an active component. Even in the JEDEC docs they don't take the bus power draw into consideration, because the difference between socketed and soldered is too small.


What if in the coming years we ARM-SoC-Chiplets-Single_Board-integrated-cool&lowpower lovers environmentally-shame the big-box modular PC users because of their waste of energy? If Linus can right2repair-shame us, we have every right to energy-shame them. Who's with me? ✊🏼✊🏼✊🏼

 

Excellent video by Anthony. I have always expected LTT to explain all of this in every new Apple Silicon Mac review, but it looks like they spun it off into a dedicated video, which is good. It was long overdue; many discussions about this had already taken place on the forum.


1 hour ago, kumicota said:

That's not true; the memory bus is passive, not an active component. Even in the JEDEC docs they don't take the bus power draw into consideration, because the difference between socketed and soldered is too small.

The memory controller within the SoC has minimal power draw when inactive, yes, but that says nothing about the power draw of running 16+ DDR5 DIMMs; that requires continuous power. Also, the power needed to talk to these DIMMs is much higher than with an on-package/local soldered memory solution, as the resistance in the wires is an order of magnitude higher. When you try to get 800 GB/s or higher bandwidth (the minimum you would expect for a mid-to-high-end desktop GPU), you're talking 16+ DDR5 modules, remember. The power draw of this would be massive. It would also be very impractical, as you would have many, many DIMMs, each with a very small amount of storage, but each one would need its own power management, VRMs, etc., and the trace routing to manage all of them would be extremely complex (all your DDR traces need to have the same length to all sockets; this is complex enough for 6 sockets, but if you're going with 16 sockets you will need a very thick PCB that will cost a LOT, and all that trace length will draw a LOT of power to drive the signals... likely so much that you would need to consider adding a thermal solution to cool the PCB itself).
 

 


2 hours ago, Zodiark1593 said:

 

Only a couple use cases come to mind that really depend on fast x86 compatibility. Users of old, very expensive machinery will probably be left in the cold. And PC gaming would be pretty disastrously affected in losing its vast back catalogue and essentially starting from scratch.

The thing is, if the game is old enough it will really not have much issue running under a good translation layer like Rosetta 2.

 

 

7 hours ago, KaitouX said:

While I haven't seen actual comparisons, I also don't think AMD and Nvidia are behind on the GPU side.

They are quite far behind Apple in perf/W in the GPU space. Apple has been pushing perf/W for over 10 years now with a very hard focus on only considering changes that improve this metric. This has led to some really brutal choices in what features they support in hardware, so as to direct us devs down a much more optimal pathway, but the result is quite a bit better perf/W; in many ways Apple's perf/W advantage in the GPU space is ahead of their perf/W advantage in the CPU space. But people do not talk about this, since they only care about peak performance. In the CPU space you can compare single-core speed as a somewhat relevant metric, but luckily people do not try to compare this for GPUs, and even if they tried there really is no simple concept of what a GPU "core" is, since how GPUs segment and parallelise work depends so much on the width of the work being run.

 

 

7 hours ago, KaitouX said:

Apple, on the other hand, does that by default, keeping the clocks low and allowing for much better stock efficiency; I'm not sure their SoCs even go that high frequency-wise, though.

They do not. This is part of how they get better perf/W as well: by designing only for a lower frequency, they have been pushed to design much wider cores that can do much more in a single CPU cycle (Apple's IPC in most tasks is close to 2x that of AMD or Intel, and the CPUs are, after all, running at close to half the clock speed most of the time).
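A minimal sketch of the arithmetic behind that trade-off (illustrative ratios, not measured values):

    # Throughput scales roughly with IPC * clock; a 2x-wider core at half the clock
    # lands at roughly the same throughput while sitting much lower on the voltage/frequency curve.
    ipc_ratio = 2.0      # assumed Apple-vs-x86 IPC advantage from the post above
    clock_ratio = 0.5    # assumed ~half the clock speed
    print(ipc_ratio * clock_ratio)   # ~1.0x relative throughput at a far lower frequency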

 

 

9 hours ago, captain_to_fire said:

Eventually Intel and AMD will cave in and make their own RISC-based chips, be it ARM, which is a tried and tested ISA, or RISC-V, which from what I see is still "experimental", but who knows.

If they don't, they will become like IBM (still shipping new POWER CPUs) but not exactly mainstream. There will always be a place for x86 (just as there will always be a place for POWER)... it just might not always be in your gaming rig.

 


14 hours ago, Blademaster91 said:

I personally don't consider Apple's ARM systems to be a PC, since they cannot boot an OS of choice, they're more of a proprietary workstation than anything else.

They will boot anything you want them to boot, there is just nothing except Asahi Linux to boot on them.

 



10 hours ago, AndreiArgeanu said:

those APUs paired with some DDR5 RAM wouldn't work in a desktop PC. Maybe AMD's holding desktop APUs back because it'd undercut their lower-end GPUs like the 6500 XT or 6600 at least, but that's just speculation on my part.

It would work, but the GPU (and other ancillary units) within the SoC would be massively memory-bandwidth limited. When you put a powerful GPU in an SoC you also need to provide it with bandwidth. That means that, just like desktop GPUs use soldered higher-bandwidth memory (GDDR), if you build an SoC you're going to use GDDR or LPDDR (depending on whether your use case is also latency sensitive; LPDDR has better latency at a higher price but also lower power draw). Any reasonably powerful APU would be pointless to pair with socketed regular DDR, as the bandwidth would just not be there. To get the same bandwidth as the PS5's GPU currently has, you would be in the ballpark of 9 to 10 DDR5 DIMMs!!
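A quick sanity check of that ballpark, assuming the PS5's ~448 GB/s of GDDR6 bandwidth and DDR5-5600 DIMMs (both rough figures, not measurements):

    ps5_gb_s = 448                    # GB/s, PS5's GDDR6 bandwidth (assumed figure)
    per_dimm_gb_s = 5600 * 8 / 1000   # ~44.8 GB/s for one DDR5-5600 DIMM
    print(ps5_gb_s / per_dimm_gb_s)   # ~10 DIMMs, in line with the 9-10 ballpark above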


Just now, just_dave said:

They will boot anything you want them to boot, there is just nothing except Asahi Linux to boot on them.

FreeBSD also has a working bootable OS. People on this forum are in general very confused between the iPhone and the Mac; for some reason they think the Mac is a locked-down walled garden, when in fact it is much more open than the systems they are all using to game on.


50 minutes ago, hishnash said:

Also, the power needed to talk to these DIMMs is much higher than with an on-package/local soldered memory solution, as the resistance in the wires is an order of magnitude higher.

Not so much. DDR5, for example, uses 1.1V; while the resistance is way higher, the difference in current is extremely low, and modern motherboard and GPU traces have low resistivity.

 

55 minutes ago, hishnash said:

but each one would need its own power management, VRMs, etc.

This is why they group the chips into modules, where one module has 8-16 chips. Even so, the efficiency can be improved by using better FETs; you can get FETs that are more than 95% efficient under high loads.

 

 

I don't disagree that socketed consumes more power and is massively more complicated to do than soldered, but there's a reason JEDEC doesn't take it into consideration: the difference is not as big as Apple and other companies make it look.


12 hours ago, Jackite said:

Full disclosure: I posted this on the YouTube comments as well but am curious as to y'all's opinions.

 

This is the first time I've left a comment on one of your videos due to an unhappy emotion but felt a need to write my thoughts somewhere they could, ostensibly, be seen.

8 hours ago, KaitouX said:

I really disagree with the comparisons made in this video.

 

I'm in the same boat.

Since I also posted the comment on YT I'll just copy and paste it.
Some points are already in discussion here, but I still hope to make a contribution to the topic.

 

 

I'm sorry, but in my opinion this video is very one-sided and does not meet the usual LTT quality  - which is really weird to me, since Anthony usually has the best takes on tech.

 

To list the things that bother me:

- You compare the Mac Studio power draw to the most power-hungry designs the PC market has today. There are several CPUs that are MUCH better in performance/watt while still delivering good or even great performance (5950X, 5800X3D, 5600X, 12700, 12400...), and the 3090, while not being much less efficient than, for example, a 3070, is still a very high-power card.

- There is no mention of the higher prices that big chips have compared to small chiplet designs due to worse yields.

- There is also no mention of the lack of upgradability or flexibility that SoCs have compared to the classic PC design. You want more RAM or a better GPU later in life? Nope, you need a completely new SoC or even a completely new system.
- The same goes for the comparison of a closed system against the open PC standard. A more fitting comparison would've been a Laptop to the Mac Studio, since both are more or less closed systems that are built around a single(-ish) configuration.
- While I find it reasonable to not include Rosetta benchmarks in the comparison (since it's about the potential of the design), there should be mention that programs have to be adapted to have this increased performance, which would take some time after the release of a PC ARM CPU.
- Yes, CPUs and especially GPUs have been getting more power-hungry in the last few years. This is, however, mostly due to the companies producing them wanting to squeeze the last bit of performance out of them. They could easily reduce the power limit and still have nearly the same performance. The 5700X, for example, has about 93% of the performance of the 5800X while only using about 60% of the energy. The same goes for GPUs, although right now you'd have to power-limit them yourself.
- A small one: Someone without knowledge of this field might get the impression that the ARM architecture is bound to be on a SoC and could not be integrated as, for example, a chiplet or single die on a classic CPU.

 

Don't get me wrong, the Apple chip is a marvelous piece of engineering and both using an SoC and the ARM architecture can have huge advantages, but the comparison should be fair.

 

P.S.: All the numbers are from the test and indices made by PC Games Hardware. Your numbers might vary.
P.P.S: English is not my main language, sorry for weird phrasing and errors.


3 minutes ago, Nandrith said:

 

Don't get me wrong, the Apple chip is a marvelous piece of engineering and both using an SoC and the ARM architecture can have huge advantages, but the comparison should be fair

This is a classic thing from LTT, where they do a video that is extremely one-sided and poorly researched.


10 minutes ago, kumicota said:

Not so much. DDR5, for example, uses 1.1V; while the resistance is way higher, the difference in current is extremely low, and modern motherboard and GPU traces have low resistivity.

No. This is not DC. I have no idea what frequency is actually used to drive DDR5 memory; Wikipedia states a clock rate of at least 2.4 GHz, so I'll go with that. Traces have parasitic capacitance and inductance, and switching these traces between 1.1V and 0V to transmit signals consumes power. Each track can be thought of as a tiny capacitor (with a tiny resistance, of course) that has to be charged and discharged at a frequency of 2.4 GHz, so reducing the capacitance is necessary to reach lower power consumption. On-package RAM shortens the path a lot, leaving more power budget for other things inside the chip (instead of driving PCB traces). Even worse, at 3 GHz a 10 cm piece of wire behaves as an antenna (maybe 2.5 cm = lambda/4 is already enough); it radiates power, forget resistivity or resistance.
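A rough sketch of both effects in Python; the per-line capacitance and toggle rate below are assumed values, just to show the orders of magnitude involved:

    # Dynamic power of toggling one PCB trace: P ~ C * V^2 * f.
    C = 5e-12    # assumed ~5 pF of trace + pin load per data line
    V = 1.1      # DDR5 signalling voltage
    f = 2.4e9    # assumed toggle rate, on the order of the DDR5 transfer clock
    print(C * V**2 * f)    # ~0.015 W per line; hundreds of lines add up quickly
    print(3e8 / 3e9 / 4)   # quarter wavelength at 3 GHz ~ 0.025 m, i.e. 2.5 cm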


21 minutes ago, kumicota said:

This is a classic thing from LTT, where they do a video that is extremely one-sided and poorly researched.

 

In my experience, most of their one-sided videos are marked as opinions and/or are obviously superficial.

 

This video, however, feels different.


I'd like to add a few thoughts to this matter. I think that Anthony overestimates (as many, many others here and in many other places) the advantages of the ARM ISA vs x86; this has already been addressed earlier in the thread.

I remember a very long interview on YouTube with Jim Keller talking about Apple Silicon and going back and forth on this subject, and I remember him saying something along the lines of "this is a pointless difference". I do not remember if it was with Dr. Ian Cutress (techtechpotato channel) or someone else. What Apple has done seems to be "boring" to explain: it just is a better-designed chip, with maybe better/smarter power gating, smarter CPU block design, and (mostly, I think) better "integration" between different parts of the chip. Apple chips are made of many different ARM cores that deal with different things; marcan42 on Twitter sometimes posts interesting stuff on this, some of it related to the flash controller and the lack of NVMe support. These many parts are all designed by Apple and enable optimizations that other manufacturers cannot make; one thing that struck me was the sharing of memory between the CPU, GPU and encoding engines, which do not have to perform "translations" between them as they all share the same representation of the same type of data. All the many things, good ideas and optimizations here and there make Apple's SoCs what they are and show what can be done with a well-thought-out design that has everything built into it.

Sorry, I did not want to make that part so long. What I'd like to add is this: both Wendell at Level1Techs and AdoredTV have talked about the desktop's future in some way or another, and what they clearly show is that there is little money and interest in this desktop market segment. It is clear that AMD CPU designs (for example) are geared to the server/business market, and what is left, usually the worst silicon, ends up in desktop, where perf/W is not an issue. I think the same can be seen from Nvidia, which used the better process node from TSMC for Quadro cards and left everything else on Samsung's less efficient, cheaper node. Even for the newer leaked chips, the kind of cooling they require exists (maybe?) in servers; desktops cannot be that loud to reach the required airflow.

Mobile and server seem to be where there is money to be made, and that is where we will see new things. SoCs make sense in the server and mobile markets; desktop will have to use the same chips. Once both servers and mobile use on-package RAM (or... maybe huge L3 caches, and RAM goes away and becomes something like Optane?), desktops will have to use it too.

