
Intel seeks to get inside Microsoft's next-gen Xbox console, potentially snatching away lucrative share from AMD

6 minutes ago, hishnash said:

It would need to cost 2x the price, and most game devs would be unhappy as users would rather buy games for $20 on a Steam sale than $50 to $80 on the console.

 

What makes the consoles succeed for devs and for console makers is the fact you can't just run anything on them.

this

On 2/10/2024 at 12:29 PM, GOTSpectrum said:

I never said they would do it... Just that I thought about it 

 


9 hours ago, leadeater said:

The PPE wasn't responsible for much computation at all, so it wasn't really a performance limiter. Also, Cell was 100% designed for HPC compute and also used for the PS3 because it had raw performance far beyond anything even a few years after it; the C2D was vastly slower.

 

The problem was it was a bitch to actually utilize, everyone (game devs) bloody hated the thing.

 

IBM Cell had 5x the GFLOPs of a C2D; in fact, a 2500K is "slower".

In terms of raw throughput, the Cell demolished many later CPUs. That's already well known. Rather, what I suggest is that in a world where the 8800 GTX exists (a GPU that brought both compute shaders and CUDA, alongside unified shaders to match the 360), I don't see a lot of reason for Cell to exist.

 

Granted, Cell was probably in development well before the 8800 GTX was on the radar, and at least compared to many GPUs of the time, Cell provided much superior flexibility. 


7 hours ago, Taf the Ghost said:

Still not sure where Intel would get the tightly packaged GPU that'd be needed.

Battlemage seems the obvious choice. It should be releasing soon and would be well mature by the time the next console gen comes out. I debated whether Celestial could be an option too, but it would probably be pushing timescales.


1 hour ago, porina said:

Battlemage seems the obvious choice. It should be releasing soon and would be well mature by the time the next console gen comes out. I debated whether Celestial could be an option too, but it would probably be pushing timescales.

That's a choice. I just don't know how great of one it is. The largest Battlemage is as fast as what, the 4060 Ti, since big Battlemage was cancelled? That's not much faster at all than the current Xbox Series X.

Either this is just an Intel CPU and some other GPU company, or it's Celestial.


20 minutes ago, starsmine said:

That's a choice. I just don't know how great of one it is. The largest Battlemage is as fast as what, the 4060 Ti, since big Battlemage was cancelled? That's not much faster at all than the current Xbox Series X.

I've not been following the rumours closely, and I just tried to catch up on them. I'd view the apparent source as low tier so wouldn't put much weight behind it. I'd only say, don't mix up custom and mainstream dGPU offerings.

 

20 minutes ago, starsmine said:

Either this is just an Intel CPU and some other GPU company, or it's Celestial.

I find it hard to believe Intel would go to anyone else at this point. There are only two alternatives and neither seems a good fit. Any other options are much worse.

 

I'm not ruling out Celestial, but whether it is a viable option would depend on how far away it is. My concern is it might be too close to when a next gen console might release.


28 minutes ago, porina said:

I've not been following the rumours closely, and I just tried to catch up on them. I'd view the apparent source as low tier so wouldn't put much weight behind it. I'd only say, don't mix up custom and mainstream dGPU offerings.

I have seen nothing from any source I find trustworthy to suggest it's wrong. A custom solution for a console being larger than the largest dGPU being released would be something unique.

28 minutes ago, porina said:

I find it hard to believe Intel would go to anyone else at this point. There are only two alternatives and neither seems a good fit. Any other options are much worse.

 

I'm not ruling out Celestial, but whether it is a viable option would depend on how far away it is. My concern is it might be too close to when a next gen console might release.

Not that Intel would go elsewhere, but that MS would only contract them for a CPU if that is the route they choose to go.

Unless the assumption is that this would be a Foveros-type thing, with Intel making a true APU.

The next-gen Xbox isn't even that close anyway. It won't likely release any sooner than Q4 2026, and I would assume it's a Q4 2027 thing myself.


1 hour ago, starsmine said:

That's a choice. I just don't know how great of one it is. The largest Battlemage is as fast as what, the 4060 Ti, since big Battlemage was cancelled? That's not much faster at all than the current Xbox Series X.

A current 4060 Ti performance target is still double that of the Xbox Series X, not that small of a difference.

 

That probably is about what the next consoles will be. Roughly double the last gen is what happened last time, and we don't really see gains larger than this in dGPU land.


1 hour ago, porina said:

don't mix up custom and mainstream dGPU offerings.

Yep, not even the AMD GPUs in the consoles are directly the same as anything they sell at retail as an SoC or graphics card. MS/Sony ask for specific hardware features but want them on an older arch to reduce cost, so they get some oddball GPU archs that are "like" but not actually the same.


3 hours ago, porina said:

Battlemage seems the obvious choice. It should be releasing soon and would be well mature by the time the next console gen comes out. I debated whether Celestial could be an option too, but it would probably be pushing timescales.

While I'd agree with this thought, I imagine Xbox's first question would be "and, on what node is that going to be?". haha


37 minutes ago, leadeater said:

A current 4060 Ti performance target is still double that of the Xbox Series X, not that small of a difference.

 

That probably is about what the next consoles will be. Roughly double the last gen is what happened last time, and we don't really see gains larger than this in dGPU land.

Is it?
I consider the Xbox Series X GPU to be about the RX 6700, which is really close to 4060 Ti performance. They also were one of the first products to use RDNA 2 on the market.

Xbox Series X is 52 CU at 1.8 GHz
RX 6700 is 40 CU at 2.4 GHz

and the RTX 4060 Ti is like 10% faster than the RX 6700
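As a rough sanity check on why those two parts land in the same ballpark on paper, here is a minimal sketch of the theoretical FP32 math, assuming the standard RDNA 2 figure of 64 FP32 lanes per CU and counting an FMA as 2 ops; the CU counts and clocks are taken from the post as written, and the helper name is just for illustration:

# Back-of-the-envelope theoretical FP32 throughput from the figures quoted above.
# Assumes 64 FP32 lanes per RDNA 2 CU and 2 ops per clock (FMA).

def rdna2_tflops(cus: int, clock_ghz: float, lanes_per_cu: int = 64) -> float:
    """Theoretical FP32 TFLOPS = CUs * lanes * 2 (FMA) * clock."""
    return cus * lanes_per_cu * 2 * clock_ghz / 1000

print(f"Xbox Series X: {rdna2_tflops(52, 1.8):.1f} TFLOPS")        # ~12.0 TFLOPS
print(f"RX 6700 (as quoted): {rdna2_tflops(40, 2.4):.1f} TFLOPS")  # ~12.3 TFLOPS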

 

  

14 minutes ago, Taf the Ghost said:

While I'd agree with this thought, I imagine Xbox's first question would be "and, on what node is that going to be?". haha

18A, so second-generation GAAFET/RibbonFET, or smaller.

This roadmap is still valid-ish: Intel 4 is fully out with Meteor Lake, Granite Rapids on Intel 3 is still on track for launch this year, and as of now (although they may do a small Raptor Lake Refresh part two) Arrow Lake is on track for Q4 with 20A.
[Image: xda-intel-roadmap-explainer.png — Intel process roadmap]


2 hours ago, starsmine said:

Not that Intel would go elsewhere, but that MS would only contract them for a CPU if that is the route they choose to go.

MS mixing and matching seems unlikely to me. Why go through that if AMD can be a one stop shop? Intel would have to offer something more attractive than that.

 

2 hours ago, starsmine said:

Unless the assumption is that this would be a Foveros-type thing, with Intel making a true APU.

I'm not even thinking Foveros. Monolithic for cost. Maybe the cost of going disaggregated will drop enough to be viable by then.

 

2 hours ago, starsmine said:

The next-gen Xbox isn't even that close anyway. It won't likely release any sooner than Q4 2026, and I would assume it's a Q4 2027 thing myself.

I did say earlier in this thread, we're just over 3 years in on what could be a 7 year cycle. Late 2027 would be about the right time unless they mix things up.

 

1 hour ago, Taf the Ghost said:

While I'd agree with this thought, I imagine Xbox's first question would be "and, on what node is that going to be?". haha

I just had a look, not sure if Intel ever said anything official about what process Battlemage or Celestial will be on. I can only find reports they could be on TSMC N4 and N3 respectively.

 

On the Intel side this makes it even less predictable given how many processes are going through now and within the next couple of years. I doubt it'll be made on a leading edge node, but more likely one or two back for cost and volume reasons.

 

Edit: I looked at High Volume Manufacturing (HVM) dates for TSMC processes. N7 2018, N5 2020, N3 2022. There's a pattern there? N2 indicated for 2025. Oh well. Depending on the risk profile, MS could ask for a design on a TSMC process as opposed to Intel. N3 is probably a safe bet. Or even dual source with Intel 3. Again, this being custom means it doesn't have to be tied to other offerings.
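To make that cadence explicit, here is a tiny sketch using the dates as listed above (treating N2's 2025 as indicated rather than confirmed):

# TSMC high-volume-manufacturing (HVM) years as listed above; N2 is only "indicated".
hvm = {"N7": 2018, "N5": 2020, "N3": 2022, "N2": 2025}

nodes = list(hvm)
for prev, cur in zip(nodes, nodes[1:]):
    print(f"{prev} -> {cur}: {hvm[cur] - hvm[prev]} years")

# Current-gen consoles shipped on N7 in 2020, about one node behind HVM;
# the same offset applied to a ~2027 console points at an N3-class node.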


16 minutes ago, porina said:

MS mixing and matching seems unlikely to me. Why go through that if AMD can be a one stop shop? Intel would have to offer something more attractive than that.

I'm not even thinking Foveros. Monolithic for cost. Maybe the cost of going disaggregated will drop enough to be viable by then.

I don't think monolithic would be a viable solution for this. But Nvidia still exists as well; the original Xbox was Intel/Nvidia.
The Xbox Series X is 360 mm^2.

SRAM scaling does come back, ish, with RibbonFET, but I doubt it's enough for the MASSIVE L3 cache these new chips will need to fit the CPU, GPU and cache inside the reticle limit of 400 mm^2.

Monolithic is dead for high end chips. That's why Foveros is being finalized on Intel 3 before it becomes a hard requirement with Intel 20A chips.
 

16 minutes ago, porina said:

I did say earlier in this thread, we're just over 3 years in on what could be a 7 year cycle. Late 2027 would be about the right time unless they mix things up.

 

I just had a look, not sure if Intel ever said anything official about what process Battlemage or Celestial will be on. I can only find reports they could be on TSMC N4 and N3 respectively.

Which is why I think Intel doing Foveros with those is more likely, if they are going to do an APU with a mid-to-large-sized Celestial.

16 minutes ago, porina said:

On the Intel side this makes it even less predictable given how many processes are going through now and within the next couple of years. I doubt it'll be made on a leading edge node, but more likely one or two back for cost and volume reasons.

Consoles often are close to leading edge; 18A in 2027 won't be the leading edge for Intel anymore and will be very well established.


2 hours ago, starsmine said:

Is it?
I consider the Xbox Series X GPU to be about the RX 6700, which is really close to 4060 Ti performance. They also were one of the first products to use RDNA 2 on the market.

Xbox Series X is 52 CU at 1.8 GHz
RX 6700 is 40 CU at 2.4 GHz

and the RTX 4060 Ti is like 10% faster than the RX 6700

The Series X GPU is 12 TFLOPs and the 4060 Ti is 22 TFLOPs, though the actual performance gap is smaller than I was expecting. The Series X has a bit more GPU memory bandwidth, but Nvidia has better memory compression and culling requiring less, plus the 40 Series is cache heavy, so bandwidth isn't really comparable using historic Nvidia products for reference points.

 

The 4060 Ti won't actually be twice as fast, but it'll reasonably be around 50% faster or more, at least for more modern titles or where RT is used anyway. There are many games where the 4060 Ti beats the RX 6700 XT by more than 20% (sometimes double this) and those are generally newer titles. Given the console context, games would be optimized for that GPU so would generally be on the higher performing end.

 

An RX 6700 is quite a bit slower btw; the RX 6700 XT is 9% slower than the 4060 Ti and the RX 6700 is ~12% slower than the RX 6700 XT.
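Compounding those two figures gives a feel for the overall gap; a minimal sketch using only the percentages quoted above:

# Chain the relative-performance figures quoted above:
# RX 6700 XT ~9% slower than the 4060 Ti; RX 6700 ~12% slower than the 6700 XT.
xt_vs_4060ti = 1.0 - 0.09                      # 6700 XT as a fraction of 4060 Ti
rx6700_vs_4060ti = xt_vs_4060ti * (1.0 - 0.12) # 6700 as a fraction of 4060 Ti

print(f"RX 6700 ~= {rx6700_vs_4060ti:.2f}x a 4060 Ti")                      # ~0.80x
print(f"4060 Ti ~= {1 / rx6700_vs_4060ti - 1:.0%} faster than an RX 6700")  # ~25%

So on those averages the 4060 Ti ends up roughly 25% ahead of a plain RX 6700, before the larger gaps seen in newer or RT-heavy titles mentioned above.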

 

I guess the performance gap is largely down to the games really, i.e. Dying Light 2 vs Death Stranding.

 

P.S. A 4060 Ti with ~500 GB/s memory bandwidth and 200W TGP would stomp all over a 6700 XT (basically a 4070 🙃)


1 hour ago, starsmine said:

I don't think monolithic would be a viable solution for this. But Nvidia still exists as well; the original Xbox was Intel/Nvidia.
The Xbox Series X is 360 mm^2.

SRAM scaling does come back, ish, with RibbonFET, but I doubt it's enough for the MASSIVE L3 cache these new chips will need to fit the CPU, GPU and cache inside the reticle limit of 400 mm^2.

XSX is 360mm2 as you say, and early PS5 is 300 dropping to 260 for the shrunk version. They don't have to be big chips, if for no other reason than cost.

 

1 hour ago, starsmine said:

Monolithic is dead for high end chips.

We're talking low(er) cost high volume chips for consoles.

 

1 hour ago, starsmine said:

Consoles often are close to leading edge; 18A in 2027 won't be the leading edge for Intel anymore and will be very well established.

Current gen consoles launched on N7 in 2020. HVM for N7 was 2018, N5 2020. So about a node behind on volume manufacturing, longer if you count risk (early) production.

 

It's hard to tell where Intel want to be then, since they haven't said anything beyond 18A, have they? Their pace right now is also not going to be representative of their longer term plans; this is very much the end game of their catch-up-and-pass-TSMC attempt. Intel's 18A successor could be 2027, but I think it likely 18A would still be leading edge practically speaking. And that's assuming no delays. I haven't looked at ramp up times, but the gap between introduction and HVM isn't that short.


On 2/12/2024 at 9:07 PM, Zodiark1593 said:

In terms of raw throughput, the Cell demolished many later CPUs. That's already well known. Rather, what I suggest is that in a world where the 8800 GTX exists (a GPU that brought both compute shaders and CUDA, alongside unified shaders to match the 360), I don't see a lot of reason for Cell to exist.

 

Granted, Cell was probably in development well before the 8800 GTX was on the radar, and at least compared to many GPUs of the time, Cell provided much superior flexibility. 

Cell wasn't for GPU rendering though; it wasn't even used for that on the PS3. Some researchers did a technical demo using it as a GPU and it did remarkably well, but it was never going to be fully GPU-pipeline complete; something would have run like crap on it in that use case.

 

The actual PS3 GPU was an Nvidia RSX, aka a custom GeForce 7800 produced on 65nm rather than 90nm. If the PS3 were to use an 8800-based GPU, the PS3 would have released 6-12 months later.

 

Also, CUDA on an 8800 wasn't a replacement for something like Cell; back then CUDA and GPGPU were in their infancy and "garbage". Making an 8800 do the job of a high-throughput CPU like Cell at the time would have had the same end result as trying to make Cell be a GPU; neither would be actually good. The critical note here is that this was the era of double-precision HPC compute, something the Cell was specifically designed to do, which GPUs of today cannot do well other than Nvidia x100-die-based GPUs and AMD Instinct GPUs. 10 years later a GTX 1060 can't even do as good a job at FP64 as a Cell/8i.


12 hours ago, leadeater said:

Also, CUDA on an 8800 wasn't a replacement for something like Cell; back then CUDA and GPGPU were in their infancy and "garbage". Making an 8800 do the job of a high-throughput CPU like Cell at the time would have had the same end result as trying to make Cell be a GPU; neither would be actually good.

CUDA and OpenCL were a small revolution for GPGPU compute back then. I could run early sample code for path tracing on a Radeon 5870 under OpenCL in 2009. The BrookGPU library was the real garbage before that, where compute code calls were mapped directly onto the graphics pipeline with all the weird consequences, including system lockups due to unstable kernel-mode drivers not meant for this type of workload.


On 2/8/2024 at 2:48 AM, leadeater said:

Not really anymore, a lot of their devices are just old and not hardware updated. Only the Surface Laptop Studio 2 is Intel 13th Gen and everything else is 12th Gen with a few AMD options in the laptop product line (not 2 in 1).

 

Microsoft doesn't really care about these companies that much; whoever has the best product support, followed by technical specifications and performance, wins out. AMD's weak point has always been product support; if you want to develop a product you'd want to do it with Intel over AMD just from that.

 

Consoles are a bit different, raw performance actually matters a lot because it's a 2 horse race and how pretty the horse looks doesn't make it run the race any faster.

I am not sure how AMD is bad at product support. I guess I always thought they did a decent job supporting their GPUs with drivers for a long time, and their CPU support has been very good as well. I am not sure if this is a support issue, but from what I understand Intel's GPUs have quite a few bugs and, in some games, horrible performance. I am not sure if proper drivers could fix this or if it is an architectural issue, but I don't think of good support when I think of their dedicated GPUs.


4 hours ago, DuckDodgers said:

CUDA and OpenCL were a small revolution for GPGPU Compute back then. I could run an early sample code for path-tracing on a Radeon 5870 under OCL in 2009. The BrookGPU library was the real garbage before that, where compute code calls were mapped directly to the graphics pipeline with all the weird consequences, incl. system lockup due to unstable kernel-mode drivers not meant for this type of workload.

That is hardly what GPUs are used for today, or what large clusters of Cell (IBM Roadrunner) were doing back then. Also, "early sample code" says everything about what I said anyway; "run stuff" isn't quite the same thing as not being pretty garbage. GPUs had tiny memory, very narrow support for anything, and nothing like the CUDA/OpenCL/ROCm of today.


4 hours ago, Brooksie359 said:

I am not sure how AMD is bad at product support. I guess I always thought they did a decent job supporting their GPUs with drivers for a long time, and their CPU support has been very good as well. I am not sure if this is a support issue, but from what I understand Intel's GPUs have quite a few bugs and, in some games, horrible performance. I am not sure if proper drivers could fix this or if it is an architectural issue, but I don't think of good support when I think of their dedicated GPUs.

Not so much bad, just not as good as Intel. I've always found Intel systems to be more complete and bug-free from release, while AMD not so much. It was a much bigger problem with EPYC gen 1 and gen 2, but AMD has a lot more money now and I have to say EPYC gen 3 systems were pretty damn solid out the gate.


On 2/8/2024 at 9:55 AM, tim0901 said:

The other thing that matters though is money.

 

If Intel are willing to continue to undercut AMD that might make them a very appealing option to Microsoft, especially for a Series S replacement where you're not looking for bleeding-edge performance anyway.

It would be really weird for them to go with a downright different architecture for only one of their SKUs... it would be really hard to enforce full compatibility. If they're going to switch I'd expect it would be across all devices.


51 minutes ago, Sauron said:

It would be really weird for them to go with a downright different architecture for only one of their SKUs... it would be really hard to enforce full compatibility. If they're going to switch I'd expect it would be across all devices.

I agree... 

 

If I were AMD (or Intel) I would offer them some sweet deals on other things... Y'know, go with us for Xbox and we will give you a rebate on SoCs for laptops... When you have a wide product portfolio it can be leveraged for great things.


1 hour ago, leadeater said:

Not so much bad, just not as good as Intel. I've always found Intel systems to be more complete and bug-free from release, while AMD not so much. It was a much bigger problem with EPYC gen 1 and gen 2, but AMD has a lot more money now and I have to say EPYC gen 3 systems were pretty damn solid out the gate.

Oh I feel you on that one. I had Ryzen 1000 and Ryzen 2000 series CPUs and ran into some really weird bugs and random PC crashes, while my 8700K rarely had the same issues. Now I am on an R9 7900X and no issues have come up thus far, so it's way better than early Ryzen.


It's in Intel's best interest to have their GPU in the consoles, which means basically all game studios would then optimize games for Intel, or at least do better compatibility checks and have better support out of the box. This way Intel doesn't have to play catch-up with AMD and NVIDIA, who have been in this segment for much longer.

 

Just look how AMD benefited from having their CPU and GPU tech in basically all the existing consoles. They may not earn billions from those chips (they still do well), but the indirect benefits they get are huge, mostly in terms of everyone utilizing their tech primarily, while NVIDIA basically has to shill their way through with their proprietary stuff on PCs only.

 

I don't think Intel will manage to get into Xbox and PS consoles, but it's possible they might cooperate with Valve to make a third gaming console as indirect competition to Xbox and PS. It would basically be a more powerful Steam Deck meant to be connected to a TV and would be powered by a latest-gen Intel APU. There are already plans for a Steam Deck competitor with Intel internals, which was showcased at CES 2024. The question is how interested Valve would be in that deal given that the Steam Deck already runs AMD's APU...


1 hour ago, RejZoR said:

I don't think Intel will manage to get into Xbox and PS consoles, but it's possible they might cooperate with Valve to make a third gaming console as indirect competition to Xbox and PS. It would basically be a more powerful Steam Deck meant to be connected to a TV

So... a Steam Machine?

 

I can't see Valve being remotely interested in pursuing such a device given how awful the last attempt went.

 

And if even Microsoft is reportedly considering pulling out of the home console market due to it not being worth it, I can't see Valve being that enthused to dive in. With basically all hardware being sold at a loss for several years after launch, it's hardly what I'd call lucrative.


29 minutes ago, tim0901 said:

So... a Steam Machine?

 

I can't see Valve being remotely interested in pursuing such a device given how awful the last attempt went.

 

And if even Microsoft is reportedly considering pulling out of the home console market due to it not being worth it, I can't see Valve being that enthused to dive in. With basically all hardware being sold at a loss for several years after launch, it's hardly what I'd call lucrative.

Steam Machines didn't take off because of many different things back then, including compatibility. The Steam Deck works far better. They could make a "SteamDock" with no display and more powerful internals to drive higher resolutions on larger displays (TVs), costing about the same as a Steam Deck in the end. It would certainly be a great alternative to an Xbox or PS. Ultimately, Valve's main goal is to have a Steam-powered device, because game sales on Steam are what drive their revenue and profits; what people buy on Xbox or PS gives them zero income. I think it would be a reasonable approach.

