
So apparently, the Intel/Radeon marriage is a thing that's happening

jasonc_01

sauce: https://newsroom.intel.com/editorials/new-intel-core-processor-combine-high-performance-cpu-discrete-graphics-sleek-thin-devices/

 

Quote

The new product, which will be part of our 8th Gen Intel Core family, brings together our high-performing Intel Core H-series processor, second generation High Bandwidth Memory (HBM2) and a custom-to-Intel third-party discrete graphics chip from AMD’s Radeon Technologies Group* – all in a single processor package.

It’s a prime example of hardware and software innovations intersecting to create something amazing that fills a unique market gap. Helping to deliver on our vision for this new class of product, we worked with the team at AMD’s Radeon Technologies Group. In close collaboration, we designed a new semi-custom graphics chip, which means this is also a great example of how we can compete and work together, ultimately delivering innovation that is good for consumers.

So this was the semi-custom design AMD talked about a while back. Now it has actually happened. Here's Intel's video:

 

 

So yeah. We now have an Intel CPU with an AMD RTG GPU in the 35-55W TDP range.

 

Here's how they did that:

 

[Image: Intel 8th Gen CPU package with discrete graphics and HBM2]

 

That's a single HBM2 stack, so we're likely limited to 4GB of video RAM. Details of the AMD GPU are unknown at this point; it's likely Vega-based, but specifics such as SP count and clock speeds haven't been announced yet.
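For reference, the 4GB guess falls straight out of how HBM2 stacks are built. A quick back-of-envelope (assuming a common 4-Hi stack of 8Gb dies at 1.6 Gbps/pin; Intel hasn't confirmed the configuration):

# Back-of-envelope for one HBM2 stack. The stack height, die density,
# and pin speed below are assumptions, not confirmed specs for this part.
DIES_PER_STACK = 4        # 4-Hi stack; an 8-Hi stack would double capacity
GBIT_PER_DIE = 8          # 8 Gb per DRAM die
BUS_WIDTH_BITS = 1024     # HBM2 uses a 1024-bit interface per stack
PIN_SPEED_GBPS = 1.6      # per-pin data rate; the HBM2 spec allows up to 2.0

capacity_gb = DIES_PER_STACK * GBIT_PER_DIE / 8
bandwidth_gbs = BUS_WIDTH_BITS * PIN_SPEED_GBPS / 8

print(f"Capacity:  {capacity_gb:.0f} GB")     # -> 4 GB
print(f"Bandwidth: {bandwidth_gbs:.0f} GB/s")  # -> ~205 GB/s

So a single stack would land at 4GB and roughly 200GB/s, which would be respectable for a 35-55W part.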

1 minute ago, Taf the Ghost said:

@cj09beira

 

AMD's GPU department seems to have had the problem of making the compute side too strong, ending up with culling and geometry bottlenecks elsewhere. Beyond just optimization issues, I think the CUs are too far ahead of the rest of the chip. It's an interesting possibility, and it lines up with the issues both Intel and AMD have on the CPU side of things. (It also explains the big push in memory tech over the last few years: they can't keep the cores fed.)

If you notice, ever since the 290X they haven't increased the ROP count, so it seems to me they haven't scaled up the other parts of the GPU either. They have more compute, but the rest is the same: four pipelines, and it looks like they've reached the maximum number of shaders/CUs they can keep fed with those. If they ever make another GPU with more than 64 CUs, they need to increase the pipeline count from 4 to something like 8. The new ones don't need to be as strong as the current ones, but it would solve most of their problems.

One would hope Vega 20 is that GPU, but we don't know.

Navi will resolve the problem in part by allowing them to make smaller GPUs where this problem doesn't exist, but eventually they will need to make single-die GPUs with more than 64 CUs, and then they will have to increase the number of pipelines.

All of this explains why, even to this day, the 290X is such a beast: it doesn't have that much compute performance, but it has the same ROPs and so on as a Fury X at about the same clock speed. In my opinion, the 290X is one of the most balanced GPUs AMD has made in recent years.
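The spec sheets back this up. A rough comparison of theoretical throughput at reference clocks (public spec numbers, ballpark only):

# Theoretical throughput: R9 290X vs. Fury X at reference specs.
def tflops(shaders, mhz):
    return shaders * 2 * mhz / 1e6    # 2 FLOPs per shader per clock (FMA)

def gpix_per_s(rops, mhz):
    return rops * mhz / 1e3           # 1 pixel per ROP per clock

#                  shaders  ROPs  clock (MHz)
specs = {"R9 290X": (2816, 64, 1000),
         "Fury X":  (4096, 64, 1050)}

for name, (sp, rops, mhz) in specs.items():
    print(f"{name}: {tflops(sp, mhz):.1f} TFLOPS, {gpix_per_s(rops, mhz):.0f} Gpix/s")
# R9 290X: 5.6 TFLOPS, 64 Gpix/s
# Fury X:  8.6 TFLOPS, 67 Gpix/s  -> ~53% more compute, only ~5% more fill rate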


49 minutes ago, Beverbomb said:

Here's another article that makes some interesting points about why AMD is doing it. It argues that AMD and Intel aren't teaming up to push AMD out of the market but, reading between the lines, to put AMD in competition with Nvidia: they want to bring discrete-class graphics power to the CPU package, much like what Nvidia has done with the MX150.

 

Also, the chip is meant to go into small 2-in-1s and thin-and-light ultrabooks, which doesn't take away from AMD's discrete cards or CPUs in larger, more purpose-built gaming laptops.

 

I think this is great because, depending on performance, it could pressure Nvidia to put research and development into small-form-factor graphics rather than their Max-Q designs, and actually make a purpose-built chip for the small 2-in-1s and ultrabooks that more and more people are buying.

 

https://www.pcworld.com/article/3235934/components-processors/intel-and-amd-ship-a-core-chip-with-radeon-graphics.html

It's not that difficult to understand either AMD's or Intel's position in this type of deal. AMD is only in certain markets. Intel is in practically everything, but their graphics is an issue in a bunch of spaces. (They're hemmed in by patents as much as anything else; this approach is actually cheaper.) A semi-custom deal with AMD solves a chunk of the current market for Intel, especially for customers like Apple.

 

It also needs to be noted that Intel, AMD, and Nvidia all have a vested interest in each other staying alive. Realistically, none of the three companies could be bought out without the FTC blocking the deal, especially since all of them are US DoD contractors.


I believe this is done purely to go for mixed-GPU market share, as it will push more developers to optimize for AMD tech; with low-end PCs plus consoles running AMD GPUs, AMD could be over 50% of the gaming hardware market in the long run. Plus, this enables more programs to run better on Apple machines, since Apple prefers Intel CPUs with AMD GPUs, and it might even help thwart the CUDA ecosystem (at least on the low end).



7 minutes ago, cj09beira said:

All of this explains why, even to this day, the 290X is such a beast: it doesn't have that much compute performance, but it has the same ROPs and so on as a Fury X at about the same clock speed. In my opinion, the 290X is one of the most balanced GPUs AMD has made in recent years.

I have to agree. I had a Sapphire Vapor-X 290, and it was a beast in all aspects.



15 minutes ago, Ezilkannan said:

This makes me think AMD, or rather RTG, is running out of options now. I see Nvidia strengthening itself into a monopoly on desktop graphics. It's going to hurt our wallets in the future, but it can't be helped much. Guess we should all just get a console at this rate :|

Naw... consoles are still going to lag behind, but RTG not delivering Vega effectively has caused them to miss a big hole in Nvidia's lineup. All this crap they're doing other than delivering chips to AIBs had better pay off.


3 minutes ago, cj09beira said:

If you notice, ever since the 290X they haven't increased the ROP count, so it seems to me they haven't scaled up the other parts of the GPU either. They have more compute, but the rest is the same: four pipelines, and it looks like they've reached the maximum number of shaders/CUs they can keep fed with those. If they ever make another GPU with more than 64 CUs, they need to increase the pipeline count from 4 to something like 8. The new ones don't need to be as strong as the current ones, but it would solve most of their problems.

One would hope Vega 20 is that GPU, but we don't know.

Navi will resolve the problem in part by allowing them to make smaller GPUs where this problem doesn't exist, but eventually they will need to make single-die GPUs with more than 64 CUs, and then they will have to increase the number of pipelines.

All of this explains why, even to this day, the 290X is such a beast: it doesn't have that much compute performance, but it has the same ROPs and so on as a Fury X at about the same clock speed. In my opinion, the 290X is one of the most balanced GPUs AMD has made in recent years.

AMD's GPU approach is in transition. The issue is that the really good uArch designs were supposed to go with the APUs and be kings. Problem? Faildozer. It's fairly clear where AMD is going with a lot of this stuff at a strategic level, but their GPU tech was ready while their CPU tech went backwards. Going forward, it's clear that AMD is using Zen and Infinity Fabric the entire way, which is also why the drivers work but none of the new stuff is actually turned on. Thus RX Vega is a faster Fury X.

 

While GPU tech isn't my strongest suit, I do hope the move to multi-die GPUs will bring with it proper scaling effects in Gaming. In compute, they're going to be monsters. 


1 minute ago, DoctorWho1975 said:

Naw... consoles are still going to lag behind, but RTG not delivering Vega effectively has caused them to miss a big hole in Nvidia's lineup. All this crap they're doing other than delivering chips to AIBs had better pay off.

HBM2 was in short supply, and Apple bought up most of the rest. AMD has this weird problem of selling pretty much everything they make, yet we can't really get hold of their cards at a reasonable price in the consumer space.

 

AMD is a weird company to follow at times.


Just now, Taf the Ghost said:

AMD's GPU approach is in transition. The issue is that the really good uArch designs were supposed to go with the APUs and be kings. Problem? Faildozer. It's fairly clear where AMD is going with a lot of this stuff at a strategic level, but their GPU tech was ready while their CPU tech went backwards. Going forward, it's clear that AMD is using Zen and Infinity Fabric the entire way, which is also why the drivers work but none of the new stuff is actually turned on. Thus RX Vega is a faster Fury X.

 

While GPU tech isn't my strongest suit, I do hope the move to multi-die GPUs will bring with it proper scaling effects in Gaming. In compute, they're going to be monsters. 

You bring up a good point.

I believe that, given the rumours regarding Navi and the way Ryzen's architecture works, AMD is moving towards a scalable GPU architecture with an Infinity Fabric-style interconnect on an interposer. Even back in the Hawaii era, their GPU compute tech was miles ahead of their CPU department. Even with Vega, there are a ton of features sitting unused because developers won't optimize for AMD. This push with Intel, even at the low end with an APU, should bring some good change to the scene.



3 minutes ago, Paragon_X said:

You bring up a good point.

I believe that, given the rumours regarding Navi and the way Ryzen's architecture works, AMD is moving towards a scalable GPU architecture with an Infinity Fabric-style interconnect on an interposer. Even back in the Hawaii era, their GPU compute tech was miles ahead of their CPU department. Even with Vega, there are a ton of features sitting unused because developers won't optimize for AMD. This push with Intel, even at the low end with an APU, should bring some good change to the scene.

Raja's statements at the Financial Analyst Day pretty much gave away what AMD is up to in the GPU space. They're going to leverage their compute massively, pushing hard into Nvidia's Quadro & Tesla product stacks. Gaming is the open question. We'll get mid-range GPUs from them, but I think "next-gen, high-end" GPUs might be Nvidia only for a bit.

 

Or at least until AMD works out some of the issues with a multi-die approach. Right now, the 1080 Ti is about 2x as fast as an RX 580, outside of places where VRAM gets constrained. If AMD can get a shared memory pool via Infinity Fabric, much in the way Epyc and Threadripper work, then AMD could put lower-clocked, lower-voltage GPUs together in x2 and x4 arrays. (Maybe even an x8 for machine-learning cards.) It's a lot of work to get there, but AMD could roll out cards that murder Nvidia's if they can get it all working together.
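The lower-clocked part is the key: dynamic power scales roughly with frequency times voltage squared, and voltage has to climb with clocks. A toy example (made-up numbers, not real Vega figures):

# Why two down-clocked dies can beat one fast die: P ~ f * V^2, and V must
# rise to sustain higher f. All numbers here are made up for illustration.
def dyn_power(freq_ghz, volts, k=100.0):
    return k * freq_ghz * volts ** 2   # arbitrary units

one_fast = dyn_power(1.6, 1.20)        # a single die pushed hard
one_slow = dyn_power(1.1, 0.90)        # same die, down-clocked and down-volted

print(f"1 die  @ 1.6 GHz: {one_fast:.0f} units, relative throughput 1.6")
print(f"2 dies @ 1.1 GHz: {2 * one_slow:.0f} units, relative throughput 2.2")
# -> ~38% more throughput for ~23% less power, at the cost of die-to-die glue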

 

That's why I said the direction is pretty clear. AMD's GCN was a brilliant iGPU design that didn't scale up well to high clocks, and it ran into severe issues because Bulldozer was never able to make use of it in APUs. Going forward, AMD is going to focus on leveraging its compute performance while splitting up the GPU's entire design, so they can start running MCM designs that look like a single GPU to the system.

 

In the "64 core Epyc CPUs" thread, I mentioned that AMD's approach makes a lot of sense because they are Fab-less now. MCM with smaller dies greatly increases yield and makes problems at GloFo or TSMC not destroy their product line. AMD will end up never having 20-22nm or 10nm products, as both of those nodes failed at GloFo, it seems.


30 minutes ago, Taf the Ghost said:

Nvidia has higher-end GPUs that perform better under DX11 via their expensively maintained driver team. Those aren't just random caveats: Nvidia has an advantage in certain APIs and specific game engines under Windows, and that doesn't translate 1:1 to the Macintosh environment. The technical aspects of "why" are really important here. Apple has the most money of any corporation ever; they have specific reasons for sticking with their AMD alliance.

Exactly, that's part of my point. Technically speaking, Nvidia has the more efficient GPU architecture, and you would expect Apple to want their GPUs integrated with Intel CPUs. But as you said, they probably have specific reasons not to.

My whole point was that Apple ditched Nvidia long ago, and now Intel is ditching them for its next big thing, even though Nvidia has the better technology. So either Nvidia doesn't have something AMD can provide, or both companies no longer want to do business with Nvidia.



I wonder how long it will be until this line of Intel/Radeon products expands and someone steps in and says they're monopolizing. I surely can't be the only one who thought this; it was literally the first thing that came to mind when I read about it. Performance, new tech, and all that aside, it could put the two giants on a slippery slope.



2 minutes ago, sauce-c said:

I wonder how long it will be until this line of Intel/Radeon products expands and someone steps in and says they're monopolizing. I surely can't be the only one who thought this; it was literally the first thing that came to mind when I read about it. Performance, new tech, and all that aside, it could put the two giants on a slippery slope.

It isn't like AMD and Intel are teaming up to cap the development of CPU performance; they just happen to have the best products at this point in time. The current climate looks like they're pushing each other to develop better products. Maybe it's just my opinion, but I'm not seeing anything to suggest that AMD or Intel are directly trying to monopolise the market; they just have the best products on offer, and consumers have bought into their reputations as reputable brands.


8 hours ago, mr moose said:

But the really big question is: how will this affect the Steam surveys? And will AMD fanboys now attribute a portion of Intel's GPU percentage to AMD to soften the blow?

If devs over the next 3-4 years end up reaching into their bag o' tricks to optimize more for AMD stream processors as a result of Intel using AMD's graphics solutions, then... I don't know, I think we'd be hard pressed to find fault with people grouping those stats in either camp. Could be a very grey shade of gray.

5 hours ago, Thendo Marakate said:

Well, that's news. Wouldn't Nvidia do the same thing but with AMD CPUs? Think about it: there are Radeon GPUs (AMD) and AMD CPUs. Since Intel went with Radeon GPUs, wouldn't it be logical for Nvidia to go with AMD CPUs?

If AMD and Nvidia manage to find a way to gear Ryzen CPUs to pump out higher FPS, then sure. It does seem like there's more to the high-end FPS delta than just the clock-speed and IPC deficit. Even then, I only see Nvidia doing it in solutions where they can also push G-Sync modules.


23 minutes ago, Beverbomb said:

It isn't like AMD and Intel are teaming up to cap the development of CPU performance; they just happen to have the best products at this point in time. The current climate looks like they're pushing each other to develop better products. Maybe it's just my opinion, but I'm not seeing anything to suggest that AMD or Intel are directly trying to monopolise the market; they just have the best products on offer, and consumers have bought into their reputations as reputable brands.

That's why I included the "I wonder how long it will be" part. I don't see it in the near future, but I still feel that if the two companies keep expanding their co-branded products, someone could try to call them out and say they're monopolizing. Not the CPU market as a whole, but maybe the mobile CPU market or something. Honestly I don't know, and you're probably right that this is pretty unlikely to happen. But it's still something that crossed my mind, so I thought I'd share it to hear others' opinions. :)



46 minutes ago, Taf the Ghost said:

Raja's statements at the Financial Analyst Day pretty much gave away what AMD is up to in the GPU space. They're going to leverage their compute massively, pushing hard into Nvidia's Quadro & Tesla product stacks. Gaming is the open question. We'll get mid-range GPUs from them, but I think "next-gen, high-end" GPUs might be Nvidia only for a bit.

 

Or at least until AMD works out some of the issues with a multi-die approach. Right now, the 1080 Ti is about 2x as fast as an RX 580, outside of places where VRAM gets constrained. If AMD can get a shared memory pool via Infinity Fabric, much in the way Epyc and Threadripper work, then AMD could put lower-clocked, lower-voltage GPUs together in x2 and x4 arrays. (Maybe even an x8 for machine-learning cards.) It's a lot of work to get there, but AMD could roll out cards that murder Nvidia's if they can get it all working together.

That's why I said the direction is pretty clear. AMD's GCN was a brilliant iGPU design that didn't scale up well to high clocks, and it ran into severe issues because Bulldozer was never able to make use of it in APUs. Going forward, AMD is going to focus on leveraging its compute performance while splitting up the GPU's entire design, so they can start running MCM designs that look like a single GPU to the system.

In the "64 core Epyc CPUs" thread, I mentioned that AMD's approach makes a lot of sense because they are fabless now. MCM with smaller dies greatly increases yield and keeps problems at GloFo or TSMC from destroying their product line. AMD will end up never having 20-22nm or 10nm products, as both of those nodes failed at GloFo, it seems.

On the point of AMD pooling VRAM: if AMD does that on the consumer side, I could see Nvidia bringing NVLink over to the consumer side as well, since it allows memory pooling.


21 minutes ago, sauce-c said:

That's why I included the "I wonder how long it will be" part. I don't see it in the near future, but I still feel that if the two companies keep expanding their co-branded products, someone could try to call them out and say they're monopolizing. Not the CPU market as a whole, but maybe the mobile CPU market or something. Honestly I don't know, and you're probably right that this is pretty unlikely to happen. But it's still something that crossed my mind, so I thought I'd share it to hear others' opinions. :)

AMD and Intel will not be best friends, as Intel has tried (and almost succeeded) to kill AMD countless times. Plus, with AMD's own CPUs getting better, I don't see this "partnership" lasting too long.


38 minutes ago, Dylanc1500 said:

On the point of AMD pooling VRAM: if AMD does that on the consumer side, I could see Nvidia bringing NVLink over to the consumer side as well, since it allows memory pooling.

That's where this is all headed, so it'd be good to see. But it'll be interesting to see whether their GPUs can hold coherency for gaming in that situation.


8 minutes ago, Taf the Ghost said:

That's where this is all headed, so it'd be good to see. But it'll be interesting to see whether their GPUs can hold coherency for gaming in that situation.

Well, I know that in the case of NVLink it allows independent, direct device-to-device communication among the GPUs in the system, so it could be more efficient: they could all tell each other what they're currently drawing and then communicate with the CPU, so the CPU only has to send specific instructions to a specific GPU.

 

I apologize for my awful and possibly unclear wording; I'm cloudy-headed right now due to being pretty sick.


23 minutes ago, Dylanc1500 said:

Well, I know that in the case of NVLink it allows independent, direct device-to-device communication among the GPUs in the system, so it could be more efficient: they could all tell each other what they're currently drawing and then communicate with the CPU, so the CPU only has to send specific instructions to a specific GPU.

I apologize for my awful and possibly unclear wording; I'm cloudy-headed right now due to being pretty sick.

That's a more standard mGPU setup. The power of an x2 or x4 setup is that the computer sees it as a single GPU. While Windows needed some scheduler tweaks to make better use of them, it still sees both Ryzen and Threadripper as one CPU, even though you can argue they're really 2, 4, or 8 NUMA nodes.
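For what it's worth, you can see that split directly on Linux, which exposes each node under /sys. A quick sketch (it assumes a machine that actually reports multiple nodes, e.g. Threadripper in its local-memory mode):

import os, re

# Count the NUMA nodes the kernel exposes. A Threadripper in local (NUMA)
# mode typically shows 2 nodes; a 4-die Epyc shows 4.
node_dir = "/sys/devices/system/node"
nodes = sorted(d for d in os.listdir(node_dir) if re.fullmatch(r"node\d+", d))
print(f"{len(nodes)} NUMA node(s)")

for n in nodes:
    # Each node lists the CPUs that are local to its memory controller.
    with open(os.path.join(node_dir, n, "cpulist")) as f:
        print(n, "-> CPUs", f.read().strip())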


1 minute ago, Taf the Ghost said:

That's a more standard mGPU setup. The power of an x2 or x4 setup is that the computer sees it as a single GPU. While Windows needed some scheduler tweaks to make better use of them, it still sees both Ryzen and Threadripper as one CPU, even though you can argue they're really 2, 4, or 8 NUMA nodes.

Well, in the case of NVLink it would operate however you told it to. It could operate as a "single GPU" but still have the ability to address everything individually; at that point it would operate similarly to Zen.


On 11/6/2017 at 8:07 AM, cj09beira said:

It still seems like a weird move from AMD, especially because AMD is releasing Ryzen-based APUs very soon with great performance.

I think it's a good move by AMD. OEM partners are unlikely to give the new APU the setup it needs to really shine. So, by teaming up with Intel, they get their GPUs into both Intel systems and their own. I can also see AMD coming out with tech like this using their own CPU and HBM in the future. Maybe they can even do a low-powered 6- and 8-core with the next-gen APU.


I also wonder what the deal looks like on paper. Does it look like the console deals, where they just make semi-custom chips and Intel buys them? If not, how closely are they working together on this project, and are they sharing IP?


On 11/6/2017 at 8:43 AM, FratStar said:

Well I never, how can you be so rude! Nvidia is not just a simple disease, it's the plague. /s

 

You had me almost laughing out loud in the middle of work, man.

 

OT: Interesting that Intel and AMD both flatly denied this a month or two ago, and now it's apparently happening. Why lie? Just own up to it if it leaked that early.

I think that denial was about an IP licensing deal, the same kind Intel and Nvidia had, but that's not what this is. I'm not 100% sure, but I think Intel is just buying semi-custom GPUs from AMD and "gluing them together," whereas with Nvidia it was licensing IP to Intel so Intel could make their own APUs.


21 minutes ago, WhiteHammer said:

I think that denial was about an IP licensing deal, the same kind Intel and Nvidia had, but that's not what this is. I'm not 100% sure, but I think Intel is just buying semi-custom GPUs from AMD and "gluing them together," whereas with Nvidia it was licensing IP to Intel so Intel could make their own APUs.

 

Hmm, okay. I guess they were reporting that it would ship Radeon graphics on the actual chip, but I was under the impression the news was saying Intel CPUs would ship "with" Radeon graphics. Ugh, corporate semantics are so frustrating.



2 hours ago, cj09beira said:

AMD and Intel will not be best friends, as Intel has tried (and almost succeeded) to kill AMD countless times. Plus, with AMD's own CPUs getting better, I don't see this "partnership" lasting too long.

If I recall correctly, AMD's CPU Division and Radeon are not the same.


