Unannounced AMD 400-Series Chipset Appears In PCI-SIG Integrators List

Just now, ravenshrike said:

The only way to saturate PCIe 3.0 x8 on the consumer market apart from storage is to run dual Titan Vs. Presumably the V100s do as well on the enterprise market.

You mean people aren't using 100Gb networking at home? Something more expensive than dual or quad Titan V's ;)


2 hours ago, Godlygamer23 said:

Titan V.

 

Ahh yeah, didn't think of that... again, not something I would buy, and if you have enough money for a Titan V, surely you could buy a Threadripper TR4 setup instead, or go with Intel. But yes, I guess we have finally reached the end of PCIe 2.0, and AMD should update the spec on their consumer motherboards; if not, they may end up losing a lot of business to Intel again, simply from those who want to run dual GPUs or more, or high-priced GPUs that have now exceeded the PCIe 2.0 spec.

Just out of interest, has anyone done any testing on the Titan V to see how much of a performance loss there is using PCIe 2.0 x16 instead of PCIe 3.0 x16?

 

2 hours ago, NumLock21 said:

A lot of users waited an extremely long time to get Ryzen, and once they got it they soon found out the chipset is actually Gen 2 while Intel is at Gen 3; boy, they must be disappointed. X370 was released early this year, and having X470 release early next year would make X370 the chipset with the shortest lifespan.

Ryzen is 3.0 and the chipset is 2.0, but the communication between Ryzen and the chipset is also 3.0. The way I see it, they took way too long to release it, so by the time they did, the chipset was already considered outdated, with Intel already having chipsets running at 3.0. If they had released Ryzen three years ago, okay, fine, because Z97 and X99 were also running at 2.0.

ASMedia, okay, I see why: all it does is handle the SATA and USB, and ASMedia does make SATA and USB controllers. It looks like those 8 Gen 2 lanes were slapped on at the very last minute. With AMD taking this long to release Ryzen, they should have developed the chipset themselves instead of relying on a third party like ASMedia.

Yeah, I guess. It is a shame they didn't go with a better chipset for the AM4 300 series. But on the other hand, would they have sold so many AB350 boards that are dirt cheap? I think for the majority of users it wouldn't be a problem.

I didn't have time to trawl through the linked article; the 400 series is going to have PCIe 3.0 for the chipset on the consumer boards, right?

I get what people are saying, a LOT of people like to buy a board and stick with it as long as possible, BUT don't forget that with Intel, boards only last about two generations before a new socket, or in the latest case the same socket but they just didn't want people to use the same boards... at least with AM4 they're updating the chipset on the new boards, so you can drop in a CPU from a previous series if you have one.

And you can still use the old boards with the new CPUs, you'll just have PCIe 2.0 for the second slot, so again it won't affect many people IMO. In my mind, someone who is likely to stick with a board for a long time is likely to be budget orientated, which suggests to me that most would be using one GPU and possibly an NVMe drive, and that can be done on the 300 series as is.

 



3 hours ago, paddy-stone said:

Yeah, I guess. It is a shame they didn't go with a better chipset for the AM4 300 series. But on the other hand, would they have sold so many AB350 boards that are dirt cheap? I think for the majority of users it wouldn't be a problem.

The funny part is that their X300 and A300, designed for SFF, are Gen 3 and not Gen 2.


7 hours ago, Taf the Ghost said:

If I remember right, it's actually an AM4 limit. Simpler designs, fewer traces on the motherboard.

 

Also, it's 4x PCIe 3.0 to the PCH. The 2.0 version appears to be their reference for testing the original Zen designs.

That's a bummer :(

4 hours ago, paddy-stone said:

 

Ahh yeah, didn't think of that... again, not something I would buy, and if you have enough money for a Titan V, surely you could buy a Threadripper TR4 setup instead, or go with Intel. But yes, I guess we have finally reached the end of PCIe 2.0, and AMD should update the spec on their consumer motherboards; if not, they may end up losing a lot of business to Intel again, simply from those who want to run dual GPUs or more, or high-priced GPUs that have now exceeded the PCIe 2.0 spec.

Just out of interest, has anyone done any testing on the Titan V to see how much of a performance loss there is using PCIe 2.0 x16 instead of PCIe 3.0 x16?

 

Yeah, I guess. It is a shame they didn't go with a better chipset for the AM4 300 series. But on the other hand, would they have sold so many AB350 boards that are dirt cheap? I think for the majority of users it wouldn't be a problem.

I didn't have time to trawl through the linked article; the 400 series is going to have PCIe 3.0 for the chipset on the consumer boards, right?

I get what people are saying, a LOT of people like to buy a board and stick with it as long as possible, BUT don't forget that with Intel, boards only last about two generations before a new socket, or in the latest case the same socket but they just didn't want people to use the same boards... at least with AM4 they're updating the chipset on the new boards, so you can drop in a CPU from a previous series if you have one.

And you can still use the old boards with the new CPUs, you'll just have PCIe 2.0 for the second slot, so again it won't affect many people IMO. In my mind, someone who is likely to stick with a board for a long time is likely to be budget orientated, which suggests to me that most would be using one GPU and possibly an NVMe drive, and that can be done on the 300 series as is.

 

You know that this only affects the non-CPU lanes, right? It just means that extra stuff you plug into the other slots will run at PCIe 2.0; the two main slots are still PCIe 3.0, which isn't much of a problem.

1 hour ago, NumLock21 said:

The funny part is that their X300 and A300, designed for SFF, are Gen 3 and not Gen 2.

But the X300 never reached the market; I had hopes that we would see that chipset, as it was supposed to be more flexible.


6 minutes ago, cj09beira said:

That's a bummer :(

You know that this only affects the non-CPU lanes, right? It just means that extra stuff you plug into the other slots will run at PCIe 2.0; the two main slots are still PCIe 3.0, which isn't much of a problem.

 

Yes, that was the whole point of what I wrote, lol 9_9


18 hours ago, RagnarokDel said:

You can't even saturate PCIe 3.0 x16 anyway, why would you care?

 

PS: What is this people are saying? Ryzen has PCIe 3.0, or are we talking about the secondary/third slots?

You most certainly can in a compute workload.

 

13 hours ago, paddy-stone said:

Personally, I don't give a shit what gen the secondary and lower PCIe slots are; most people would only use the main slot and secondary at most, and PCIe 2.0 is still enough for today's GPUs IIRC?

 

Is there anything currently on the market that could over-saturate a PCIe 2.0 x16 slot?

I could easily do that on a GTX 750 Ti running a map-reduce job through CUDA.

 

Even a 2600K can saturate the memory bus of the Titan XP if you throw the right workload at it, such as a reduction. Saturating PCIe is child's play, and it's why Cray, Intel, AMD, and Nvidia have all come up with custom fabrics which are vastly faster.

 

4 cores * 4GHz * 2 AVX units per core * (256 bits / 32 bits-per-float) = 256 GFLOPS. If you're trying to get a sum or multiplicative product of a huge amount of data, that translates to 4 bytes-per-float * 256*10^9 floats-per-second = 1.024*10^12 bytes-per-second, and that's only in one direction. If you're doing step-wise transforms of all of this data and committing it back out to memory, your bandwidth need is double that, or a whopping 2TB/s in bandwidth (DDR4 actually only runs at half the stated clocks, but they list the aggregated bandwidth in both directions, so in truth you only get half in each direction).
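To put a concrete shape on that, here's a minimal CUDA C++/Thrust sketch (mine, not from the thread; the array size and names are just illustrative) that times the host-to-device copy of a ~1 GiB array separately from the on-GPU sum. On a modern card the reduction finishes far faster than the PCIe transfer, which is exactly the saturation being described:

```cpp
// Minimal sketch: time the PCIe transfer separately from the on-GPU reduction.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>
#include <thrust/device_vector.h>
#include <thrust/reduce.h>

int main() {
    const size_t n = 1 << 28;                  // ~268M floats, ~1 GiB
    std::vector<float> host(n, 1.0f);

    cudaEvent_t t0, t1, t2;
    cudaEventCreate(&t0); cudaEventCreate(&t1); cudaEventCreate(&t2);

    cudaEventRecord(t0);
    thrust::device_vector<float> dev(host.begin(), host.end());  // device alloc + H2D copy over PCIe
    cudaEventRecord(t1);
    float sum = thrust::reduce(dev.begin(), dev.end(), 0.0f);    // sum entirely in GPU memory
    cudaEventRecord(t2);
    cudaEventSynchronize(t2);

    float copy_ms = 0.0f, reduce_ms = 0.0f;
    cudaEventElapsedTime(&copy_ms, t0, t1);
    cudaEventElapsedTime(&reduce_ms, t1, t2);
    printf("sum = %.0f, alloc+copy: %.1f ms, reduce: %.1f ms\n", sum, copy_ms, reduce_ms);
    return 0;
}
```

Roughly, 1 GiB over PCIe 3.0 x16 is on the order of 80-90 ms, while the reduction itself is bounded by the card's own memory bandwidth, so the bus is the long pole the moment you have to keep feeding fresh data in.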


7 hours ago, ravenshrike said:

The only way to saturate PCIe 3.0 x8 on the consumer market apart from storage is to run dual Titan Vs.

A single 1080 starts bottlenecking on x8 in some games. The Titan Xp/Titan X (Pascal) and 1080 Ti are going to bottleneck in more games and by a bigger margin. Two generations from now, midrange GPUs could EASILY saturate a PCIe 3.0 x8 connection.


1 hour ago, cj09beira said:

That's a bummer :(

You know that this only affects the non-CPU lanes, right? It just means that extra stuff you plug into the other slots will run at PCIe 2.0; the two main slots are still PCIe 3.0, which isn't much of a problem.

But the X300 never reached the market; I had hopes that we would see that chipset, as it was supposed to be more flexible.

I know; checking the ASMedia site shows no such chipset.


38 minutes ago, Bit_Guardian said:

I could easily do that on a GTX 750 Ti running a map-reduce job through CUDA.

 

Even a 2600K can saturate the memory bus of the Titan XP if you throw the right workload at it, such as a reduction. Saturating PCIe is child's play, and it's why Cray, Intel, AMD, and Nvidia have all come up with custom fabrics which are vastly faster.

 

4 cores * 4GHz * 2 AVX units per core * (256 bits / 32 bits-per-float) = 256 GFLOPS. If you're trying to get a sum or multiplicative product of a huge amount of data, that translates to 4 bytes-per-float * 256*10^9 floats-per-second = 1.024*10^12 bytes-per-second, and that's only in one direction. If you're doing step-wise transforms of all of this data and committing it back out to memory, your bandwidth need is double that, or a whopping 2TB/s in bandwidth (DDR4 actually only runs at half the stated clocks, but they list the aggregated bandwidth in both directions, so in truth you only get half in each direction).

Basically all of this would stay within GPU memory until the resulting data is output, though, so you wouldn't need that kind of bandwidth on the PCIe bus. Also, Nvidia has worked really hard on CUDA memory management, so it only grabs and loads data into GPU memory when necessary, reducing the required bandwidth over the PCIe bus and reducing the GPU memory used too.

 

You should really only have issues with PCIe bandwidth at the initial data load, or when GPU memory is too small for the data set, causing constant page faults.

 

The problem is still there though, and it's very similar to when a process needs to access storage: a performance brick wall.
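For what it's worth, the "only grab data when necessary" behaviour described above maps onto CUDA unified memory. Below is a minimal sketch under the assumption of a Pascal-or-newer GPU (sizes and names are illustrative): pages migrate to the GPU on first touch by the kernel, or can be prefetched explicitly, and oversubscribing GPU memory degrades into page faults rather than an outright allocation failure:

```cpp
// Minimal unified-memory sketch: data migrates to the GPU on demand, not up front.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float* x, size_t n, float a) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;                        // first GPU touch triggers page migration
}

int main() {
    const size_t n = 1 << 26;                    // ~64M floats
    float* data = nullptr;
    cudaMallocManaged(&data, n * sizeof(float)); // one allocation visible to CPU and GPU
    for (size_t i = 0; i < n; ++i) data[i] = 1.0f; // pages start out resident on the host

    int dev = 0;
    cudaGetDevice(&dev);
    // Optional: prefetch in bulk instead of faulting page by page during the kernel.
    cudaMemPrefetchAsync(data, n * sizeof(float), dev);

    scale<<<(unsigned)((n + 255) / 256), 256>>>(data, n, 2.0f);
    cudaDeviceSynchronize();

    printf("data[0] = %.1f\n", data[0]);         // pages migrate back on CPU access
    cudaFree(data);
    return 0;
}
```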


59 minutes ago, leadeater said:

Basically all of this would stay within GPU memory until the resulting data is output, though, so you wouldn't need that kind of bandwidth on the PCIe bus. Also, Nvidia has worked really hard on CUDA memory management, so it only grabs and loads data into GPU memory when necessary, reducing the required bandwidth over the PCIe bus and reducing the GPU memory used too.

 

You should really only have issues with PCIe bandwidth at the initial data load, or when GPU memory is too small for the data set, causing constant page faults.

 

The problem is still there though, and it's very similar to when a process needs to access storage: a performance brick wall.

And you intend to get all of that data to the GPU in the first place using PCIe bandwidth... Welcome to why map-reduce is still primarily run on CPU-only systems (and these days off of Sparc M7 or Power 8/9 systems using HMC 3 as their primary memory unless you've got a massive analytics engine running off the results).


19 minutes ago, Bit_Guardian said:

And you intend to get all of that data to the GPU in the first place using PCIe bandwidth... Welcome to why map-reduce is still primarily run on CPU-only systems (and these days off of Sparc M7 or Power 8/9 systems using HMC 3 as their primary memory unless you've got a massive analytics engine running off the results).

It depends on the data set size and whether you need to load all of it. You can do it on GPU, which has been done, and the limiting factors preventing its wider use are slowly being addressed with each generation of both hardware and software. It's an interesting dynamic: GPUs are becoming less and less graphics processors and more and more just math engines.

https://sites.google.com/site/mapreduceongpu/home/why-how

 

Intel servers are still the most commonly used even for this; that's basically what most of the quad-socket and higher systems are used for in HPC clusters. You've either got dual-socket systems with GPUs and moderate RAM, or quad-socket-plus systems filled with as much RAM as possible. Bit of a brute-force solution in my opinion, though sometimes necessary; you'd think they would find ways to reduce the data footprint or something, but it's actually cheaper to chuck more RAM at it, heh.

 

Not really my problem though, I just build and deploy the servers.


5 minutes ago, leadeater said:

It depends on the data set size and whether you need to load all of it. You can do it on GPU, which has been done, and the limiting factors preventing its wider use are slowly being addressed with each generation of both hardware and software. It's an interesting dynamic: GPUs are becoming less and less graphics processors and more and more just math engines.

https://sites.google.com/site/mapreduceongpu/home/why-how

 

Intel servers are still the most commonly used even for this; that's basically what most of the quad-socket and higher systems are used for in HPC clusters. You've either got dual-socket systems with GPUs and moderate RAM, or quad-socket-plus systems filled with as much RAM as possible. Bit of a brute-force solution in my opinion, though sometimes necessary; you'd think they would find ways to reduce the data footprint or something, but it's actually cheaper to chuck more RAM at it, heh.

 

Not really my problem though, I just build and deploy the servers.

It only makes sense to spend the time moving everything TO the GPU if you have enough of a compute bottleneck CPU-side to make up for it. If ALL you're doing is map-reduce and/or horizontal reductions, you're limited by the speed of storage first and foremost anyway. By the time the next batch of data loads, the first is long finished being processed, so thinking about GPUs is pointless in that scenario. If you have all the data living in RAM from the start, then you can start thinking about the GPU upsides, but you still need a pretty significant compute bottleneck to make that worthwhile. This is also why Intel, Nvidia, and AMD have been working with select customers on integrating their GPUs (or FPGAs in Intel's case) into traditional Xeon, Opteron, Power, or Sparc CPU packages, because then you have the right amount of compute power to address your bottleneck without spending so much power and money on dGPUs and re-optimising your software.

 

 


@leadeater These days some people go full-ham and store everything in GPU memory from the start. I know some high-speed trading algorithms depend on having rapid compute analysis over a large set of data, and each change has tons of ripple effects on the analysis profile. Some machines running Stratix and Altera FPGAs to do the networking and decision making use that technique.


15 minutes ago, Bit_Guardian said:

If you have all the data living in RAM from the start

A lot do this because of the storage problem I mentioned; not that I've seen one personally, but 6TB of RAM in a server has been a thing and in use for a while now.

 

Also, storage speed doesn't have to be a problem; you can build really fast platforms using something like Ceph with 40GbE or IB and multiple NVMe SSDs per server. A single channel of DDR4 gives around 20GB/s, so it would only (lol, yeah) take 5 such servers to match it, so only 20 for quad channel. RDMA really helps with keeping latency down, but the overhead of just accessing storage, no matter how fast it is, is still a problem.

 

Edit:

That's why Gen-Z could be so useful: treat everything equally, with no translation.
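A quick back-of-envelope check of that 40GbE-versus-DDR4 arithmetic (a sketch using nominal link rates; the DDR4-2666 figure is my assumption, and real-world throughput will be lower on both sides):

```cpp
// Rough comparison: 40Gb networking per server vs. one DDR4 memory channel.
#include <cstdio>

int main() {
    const double nic_gbs  = 40.0 / 8.0;          // 40 Gb/s link ~ 5 GB/s
    const double ddr4_gbs = 2666e6 * 8.0 / 1e9;  // DDR4-2666, 64-bit channel ~ 21.3 GB/s
    printf("40GbE servers per DDR4 channel:  %.1f\n", ddr4_gbs / nic_gbs);
    printf("40GbE servers per quad channel:  %.1f\n", 4.0 * ddr4_gbs / nic_gbs);
    return 0;
}
```

Rounding up and allowing for protocol overhead lands you near the 5-per-channel / 20-per-quad-channel figures above.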


6 hours ago, leadeater said:

A lot do this because of the storage problem I mentioned; not that I've seen one personally, but 6TB of RAM in a server has been a thing and in use for a while now.

 

Also, storage speed doesn't have to be a problem; you can build really fast platforms using something like Ceph with 40GbE or IB and multiple NVMe SSDs per server. A single channel of DDR4 gives around 20GB/s, so it would only (lol, yeah) take 5 such servers to match it, so only 20 for quad channel. RDMA really helps with keeping latency down, but the overhead of just accessing storage, no matter how fast it is, is still a problem.

 

Edit:

That's why Gen-Z could be so useful: treat everything equally, with no translation.

Or 2 Epyc servers :P


Just now, cj09beira said:

Or 2 Epyc servers :P

Only if you used 100Gb/s networking; the calculation was done off network bandwidth, since that is the first bottleneck you'd hit.


Just now, leadeater said:

Also, getting 10GB/s+ out of a single system is a very, very hard thing to do, even with a ton of NVMe SSDs.

Why, is it CPU overhead?


40 minutes ago, cj09beira said:

Why, is it CPU overhead?

Every little thing counts against you at this sort of performance level: NIC drivers and tuning, storage controllers, software scaling issues, RAM speed, CPU frequency. It's a bit like bowling a perfect 300 game while balancing a bucket of water on your head.

 

Getting pure 100Gb networking in and out of a server is one thing, but linking that up with a storage subsystem that is capable of doing it, and getting the two to mesh together perfectly and operate in unison to actually achieve it, is worlds different.

 

Then you've got other issues with how something like Ceph works. If your pool is using replication, then you have to contend with write amplification: client traffic can generate roughly three times that amount to the other server nodes, so you'd need many NICs in the server to deal with it. The power of distributed storage systems is very much in the name: scale out, not up.

http://www.mellanox.com/related-docs/solutions/ppt_ceph_mellanox_ceph_day.pdf
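As a rough illustration of that write amplification (my numbers, assuming a plain 3x-replicated pool, not anything taken from the linked deck):

```cpp
// Sketch of replication write amplification in a 3x-replicated pool.
#include <cstdio>

int main() {
    const double client_write_gbps = 10.0;  // incoming client writes
    const int    replicas          = 3;     // pool size = 3
    // The primary writes once locally and forwards copies to the other replicas,
    // so east-west replication traffic is ~(replicas - 1) x the client rate, and
    // total backend writes are ~replicas x the client rate, before journaling.
    printf("replication traffic:   %.0f Gb/s\n", client_write_gbps * (replicas - 1));
    printf("total backend writes:  %.0f Gb/s\n", client_write_gbps * replicas);
    return 0;
}
```

Count journaling or double-writes on top and you end up around the 3x figure mentioned above.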

 

Edit:

Also, damn, this is off topic; back to talking about AMD CPUs and chipsets.


31 minutes ago, cj09beira said:

Damn it :|

HPC programmers and system engineers are paid extremely well for good reason. I can tune your CPU, GPU, and RPC (i.e. network API/fabric function call) performance, but I can't write you a faster Ethernet, InfiniBand, or Omni-Path driver or firmware. I can't give you any advice on scaling a BTRFS/ZFS/Gluster/XYZ distributed file system. It's also quite difficult to reason about memory and storage performance as a heterogeneous system wherein the CPU and/or GPUs are also eating up most of the bandwidth.

 

There is more black magic in testing all of this and locking in your settings than there is actual insight in the beginning. This is why Cray and IBM take years to build a supercomputer. It is so much more than just standing the racks up, hooking up the networking, and getting your cooling solution set. You have to tune hundreds of racks, thousands of CPU cores, tens of thousands of NICs... Even optimising for just 2 monolithic, homogeneous nodes is painstaking for any one man who has all the values and variables for every system latency in front of him.

 

This is the big leagues. This is the level these people play at, and it's truly amazing.


9 hours ago, Bit_Guardian said:

HPC programmers and system engineers are paid extremely well for good reason. I can tune your CPU, GPU, and RPC (i.e. network API/fabric function call) performance, but I can't write you a faster Ethernet, InfiniBand, or Omni-Path driver or firmware. I can't give you any advice on scaling a BTRFS/ZFS/Gluster/XYZ distributed file system. It's also quite difficult to reason about memory and storage performance as a heterogeneous system wherein the CPU and/or GPUs are also eating up most of the bandwidth.

 

There is more black magic in testing all of this and locking in your settings than there is actual insight in the beginning. This is why Cray and IBM take years to build a supercomputer. It is so much more than just standing the racks up, hooking up the networking, and getting your cooling solution set. You have to tune hundreds of racks, thousands of CPU cores, tens of thousands of NICs... Even optimising for just 2 monolithic, homogeneous nodes is painstaking for any one man who has all the values and variables for every system latency in front of him.

 

This is the big leagues. This is the level these people play at, and it's truly amazing.

That makes a lot of sense. It also explains some of those weirder supercomputer configurations over the years, as they were trying to optimize for setup and tuning with the equipment before installation.


On 23.12.2017 at 6:44 AM, NumLock21 said:

They released Ryzen so late into the game, where they could easily have implemented Gen 3, and yet they still chose Gen 2. Dafuq is wrong with them?

Dude, Ryzen has 3.0 lanes; it's the chipset that does not. Whatever is plugged into the primary x16 slot gets a full 16 Gen 3.0 lanes straight from the CPU, or in the case of Crossfire both should get x8 3.0.


1 minute ago, Space Reptile said:

Dude, Ryzen has 3.0 lanes; it's the chipset that does not. Whatever is plugged into the primary x16 slot gets a full 16 Gen 3.0 lanes straight from the CPU, or in the case of Crossfire both should get x8 3.0.

Well, people still got screwed when, you know, Intel already has 3.0 on the chipset. And the MIA AMD X300 is also 3.0.


4 minutes ago, NumLock21 said:

Well, people still got screwed when, you know, Intel already has 3.0 on the chipset. And the MIA AMD X300 is also 3.0.

Honestly, what card that is not a GPU can fully utilize the bandwidth of a 2.0 PCIe slot? Yeah, it kinda sucks that it's not PCIe 3.0, but it's not gonna hurt you in any way: the graphics card is guaranteed x16 3.0 from the CPU, and whatever device you may use in a second slot or the M.2 will not saturate the available bandwidth of the PCIe 2.0 lanes.
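For reference, the nominal per-slot numbers (a quick sketch using line rate and encoding overhead, one direction only):

```cpp
// Nominal PCIe bandwidth per slot: 2.0 is 5 GT/s with 8b/10b, 3.0 is 8 GT/s with 128b/130b.
#include <cstdio>

int main() {
    const double gen2_lane = 5.0 * (8.0 / 10.0) / 8.0;    // ~0.5 GB/s per lane
    const double gen3_lane = 8.0 * (128.0 / 130.0) / 8.0; // ~0.985 GB/s per lane
    printf("x4  2.0: %.1f GB/s   x4  3.0: %.1f GB/s\n",  4 * gen2_lane,  4 * gen3_lane);
    printf("x16 2.0: %.1f GB/s   x16 3.0: %.1f GB/s\n", 16 * gen2_lane, 16 * gen3_lane);
    return 0;
}
```

So even a chipset-attached x4 2.0 slot still offers about 2 GB/s, which is plenty for typical add-in cards like NICs, sound cards, and SATA controllers.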

