
Half a Terabyte straight to your GPU - PCIe 7.0 Announced

BachChain

Summary

 

Despite PCIe 5.0 only now gaining traction, the PCI Special Interest Group is continuing to forge ahead, with today's announcement of version 7.0 of their Peripheral Component Interconnect Express standard. Continuing the historic trend of each generation doubling the speed of the previous one, PCIe 7.0 tops out at 128 GT/s per lane, over 6.0's 64 GT/s. This translates to an x16 connection capable of up to 512 GB/s of combined bidirectional bandwidth, roughly 256 GB/s in each direction. Additional feature goals include better power efficiency, improved reliability, and lower latency. While the spec has been announced, it isn't planned to be formally released until 2025.
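For anyone who wants to sanity-check the doubling claim, here's a quick back-of-the-envelope sketch (ignoring encoding/flit overhead, so these are ceiling figures):

```python
# Rough per-direction PCIe bandwidth by generation,
# ignoring encoding/flit overhead (ceiling figures only).
GENERATIONS = {3: 8, 4: 16, 5: 32, 6: 64, 7: 128}  # GT/s per lane

for gen, gts in GENERATIONS.items():
    per_lane = gts / 8           # ~GB/s per lane, one direction
    x16 = per_lane * 16          # ~GB/s for an x16 link, one direction
    print(f"PCIe {gen}.0: x16 ~ {x16:.0f} GB/s per direction, "
          f"{x16 * 2:.0f} GB/s bidirectional")
# The PCIe 7.0 line: x16 ~ 256 GB/s per direction, 512 GB/s bidirectional,
# matching PCI-SIG's "up to 512 GB/s bi-directionally via x16" figure.
```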

 

Quotes

Quote

PCI-SIG technical workgroups will be developing the PCIe 7.0 specification with the following feature goals:

  • Delivering 128 GT/s raw bit rate and up to 512 GB/s bi-directionally via x16 configuration
  • Utilizing PAM4 (Pulse Amplitude Modulation with 4 levels) signaling
  • Focusing on the channel parameters and reach
  • Continuing to deliver the low-latency and high-reliability targets
  • Improving power efficiency
  • Maintaining backwards compatibility with all previous generations of PCIe technology

 

My thoughts

More is more better, but with current adoption-rate trends and diminishing returns in the consumer space, it's possible that manufacturers just won't bother for most PCs.

 

Sources

https://www.phoronix.com/scan.php?page=news_item&px=PCI-Express-7.0-Spec

https://pcisig.com/blog/announcing-pcie®-70-specification-doubling-data-rate-128-gts-next-generation-computing


Man, how long did we stay with PCIe 2.0 and 3.0? PCIe 4.0 has already been replaced after one generation of hardware, and now they want to launch PCIe 7.0 in a few years.
It's going too fast.


47 minutes ago, BachChain said:

Summary

 

Despite PCIe 5.0 only now gaining traction, the PCI Special Interest Group is continuing to forge ahead, with today's announcement of version 7.0 of their Peripheral Component Interconnect Express standard. Continuing the historic trend of each generation doubling the speed of the previous one, PCIe 7.0 tops out at 128 GT/s per lane, over 6.0's 64 GT/s. This translates to an x16 connection capable of up to 512 GB/s of combined bidirectional bandwidth, roughly 256 GB/s in each direction. Additional feature goals include better power efficiency, improved reliability, and lower latency. While the spec has been announced, it isn't planned to be formally released until 2025.

A better way to look at it is how it relates to the lanes on a consumer CPU versus GPU configurations.

 

PCIe 2.0 = hell yeah, 4 GPUs

PCIe 3.0 = can only fit 2 GPUs

PCIe 4.0 = only one GPU

 

So with PCIe 5, all non-top-end GPUs get reduced to 8 lanes, and the other lanes get repurposed into 8-lane PCIe SSDs.

PCIe 6: GPUs are now 4 lanes and can be connected by TB; SSDs are now 16 lanes.

PCIe 7: GPUs are now 2 lanes and can be connected by TB; SSDs are now in SLI configurations.

PCIe 8: GPUs are now 1 lane, and people connect 16 of them by TB8 just to say they can.

 

I am of course joking. It's more likely that x60 GPU parts might get pushed down to 8- and 4-lane configurations just because they can't use 16 lanes of bandwidth, but only after those parts become PCIe 5+ only.

 

 


3 hours ago, Kisai said:

I am of course joking. It's more likely that x60 GPU parts might get pushed down to 8- and 4-lane configurations just because they can't use 16 lanes of bandwidth, but only after those parts become PCIe 5+ only.

Not sure about that; mid-tier and low-tier GPUs have never been able to use the full bandwidth of the then-current PCIe standard, and GPUs with less than x16 connections have remained rare. Part of that in the past, however, was the lack of other devices that actually needed PCIe lanes, so there was always well more on offer than needed. I guess that's not so much the case nowadays.

 

I do wonder how DirectStorage might change the bandwidth required through to GPUs though, since that is going to be quite a bit of additional bandwidth. For that reason I suspect Nvidia won't go below 8 lanes on x50 and higher.


1 hour ago, leadeater said:

Not sure about that; mid-tier and low-tier GPUs have never been able to use the full bandwidth of the then-current PCIe standard, and GPUs with less than x16 connections have remained rare. Part of that in the past, however, was the lack of other devices that actually needed PCIe lanes, so there was always well more on offer than needed. I guess that's not so much the case nowadays.

Too bad there is no way to steer the lane configuration on a motherboard so you can have 32 electrically connected PCIe lanes across two slots, even when it's only possible to do 16/0/4/4, 8/8/4/4, or 8/16/0/0. Usually what happens is that the second PCIe x16 slot is only wired for x8 and a third only for x4. If you have 2 SSDs, then steer those unused lanes to the SSDs rather than to the absent device in the x8 slot.
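A toy model of the waste being complained about here (purely illustrative; the bifurcation options and device widths are made up for the example):

```python
# Fixed bifurcation options for 16 CPU lanes vs. what's actually installed.
FIXED_OPTIONS = [(16, 0, 0, 0), (8, 8, 0, 0), (8, 4, 4, 0)]

def wasted_lanes(option, devices):
    """Lanes left idle because a slot's device is absent or narrower."""
    return sum(slot - min(slot, dev) for slot, dev in zip(option, devices))

devices = (8, 4, 4, 0)  # an x8-happy GPU, two x4 SSDs, last slot empty
for opt in FIXED_OPTIONS:
    print(opt, "wastes", wasted_lanes(opt, devices), "lanes")
# (16,0,0,0) wastes 8, (8,8,0,0) wastes 4, (8,4,4,0) wastes 0 --
# but only if the board happens to offer that exact split.
```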

 

Anyway, PCs are kind of overdue for a redesign for 12VO, but also to move things like SSDs to the backside of the board, just to make it less of a pain in the ass than the current "remove the GPU and water cooling to reach the SSD" method.

 

 

1 hour ago, leadeater said:

I do wonder how DirectStorage might change the bandwidth required through to GPUs though, since that is going to be quite a bit of additional bandwidth. For that reason I suspect Nvidia won't go below 8 lanes on x50 and higher.

I suspect that DirectStorage will not be of any benefit to anything below the x80 parts unless all parts start coming with a minimum of 12GB of VRAM as standard. I also suspect that DirectStorage will demand that the SSD be the same PCIe gen as the GPU; otherwise, any performance benefit gained from it will be lost waiting for the data transfers.

 

 

Quote

Video game consoles such as the XBox Series X|S address these issues by offloading aspects of this to hardware - making use of the NVMe hardware queue to manage IO and hardware accelerated decompression. As we expect to see more titles designed to take advantage of the possibilities offered by this architecture it becomes important that Windows has similar capabilities.

https://github.com/microsoft/DirectStorage

 

One of the blog posts from Microsoft has a comment about it being able to decompress 3GB in 7ms, which is insane, but we have no context for what was decompressed, because it sure isn't going to be PNG texture atlases.
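Taking the figure at face value (and assuming the 3GB refers to decompressed output), the implied throughput is simple to work out:

```python
# Implied decompression throughput from the quoted "3GB in 7ms" figure.
data_gb = 3
time_s = 0.007
print(f"~{data_gb / time_s:.0f} GB/s")  # ~429 GB/s of decompressed output
```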


4 hours ago, Kisai said:

A better way to look at it is how it relates to the lanes on a consumer CPU versus GPU configurations.

 

PCIe 2.0 = hell yeah, 4 GPUs

PCIe 3.0 = can only fit 2 GPUs

PCIe 4.0 = only one GPU

 

So with PCIe 5, all non-top-end GPUs get reduced to 8 lanes, and the other lanes get repurposed into 8-lane PCIe SSDs.

PCIe 6: GPUs are now 4 lanes and can be connected by TB; SSDs are now 16 lanes.

PCIe 6: GPUs no longer have VRAM, leading to a full integration of "gaming" and "workstation" cards and causing GPU prices to jump so high no one can buy them.

PCIe 7: in response to shrinking GPU sales, workstations get forced double-slot/double-PCIe-pinout cards and gaming gets single pinouts; SSDs are still used for VRAM.



11 hours ago, Kisai said:

It's more likely that x60 GPU parts might get pushed down to 8- and 4-lane configurations just because they can't use 16 lanes of bandwidth, but only after those parts become PCIe 5+ only.

I wish they wouldn't; not everyone has the latest generation of hardware, and this would leave people with only PCIe 3.0 x4 lanes, just like the 6500 XT or something.

Theoretically it works fine on 4.0, but not everyone has that yet.

 

6 hours ago, Kisai said:

but also to move things like SSDs to the backside of the board, just to make it less of a pain in the ass than the current "remove the GPU and water cooling to reach the SSD" method.

So you want to add "remove the motherboard to access your SSD" to the list as well?

 

Just add an M.2 slot on the GPU.

 

6 hours ago, Kisai said:

I suspect that DirectStorage will not be of any benefit to anything below the x80 parts unless all parts start coming with a minimum of 12GB of VRAM as standard.

Isn't the point of DirectStorage to allow direct data access from the SSD so the GPU doesn't need that much VRAM? I'm not too sure on this.

-sigh- feeling like I'm being too negative lately


Can we just get consumer CPUs with more than 20 available lanes, please?



6 minutes ago, Needfuldoer said:

Can we just get consumer CPUs with more than 20 available lanes, please?

Honestly, so long as we're given more flexibility to split the available lanes, 20 is probably enough.

Split it as x8 for the GPU and x8 for two NVMe drives, and you've got four x1 slots left for any other expansion cards you want: a sound card, a network card, more USB, etc.
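Putting rough numbers on that split at PCIe 5.0 speeds (per direction, protocol overhead ignored):

```python
# Per-direction bandwidth for the proposed 20-lane split at PCIe 5.0
# (32 GT/s per lane ~ 4 GB/s, protocol overhead ignored).
GBS_PER_LANE = 32 / 8
split = {"GPU (x8)": 8, "NVMe drive #1 (x4)": 4,
         "NVMe drive #2 (x4)": 4, "four x1 slots": 4}
for name, lanes in split.items():
    print(f"{name}: ~{lanes * GBS_PER_LANE:.0f} GB/s")
# The x8 GPU still gets ~32 GB/s -- as much as a full PCIe 4.0 x16 link.
```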


6 hours ago, jaslion said:

They really just said "screw I/O optimizations, we're just gonna make everything twice as fast every year," didn't they? 😛

That's kinda how things USED to be.

Also, with how fast the I/O is, it now makes sense to do memory operations on top of PCIe/CXL. This can REALLY be a game changer for a lot of things, at least in the enterprise.



Honestly... I just wish more motherboards would have the ends of their PCIe slots cut open (open-ended slots), to allow someone to put an x16 card in an x1 or x4 slot.
Because at those speeds, and with how few lanes we get on consumer products, a GPU could probably run just fine at x4, freeing more lanes for everything else that would actually benefit from the speed, like super-fast storage, or something ridiculous like RAM in a PCIe slot that could be used for extra VRAM.



Imagine 8K 144 Hz from an NVMe port.
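For what it's worth, the raw numbers aren't even that scary (a rough estimate assuming 10-bit RGB and ignoring blanking intervals):

```python
# Back-of-the-envelope bandwidth for uncompressed 8K 144 Hz video.
width, height = 7680, 4320
bits_per_pixel = 30              # 10-bit RGB (assumption)
fps = 144
gbps = width * height * bits_per_pixel * fps / 1e9
print(f"~{gbps:.0f} Gbit/s, ~{gbps / 8:.0f} GB/s")
# ~143 Gbit/s, ~18 GB/s -- a PCIe 7.0 x4 link (~64 GB/s per direction)
# would cover it with room to spare.
```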



4 hours ago, Kisai said:

Too bad there is no way to steer the lane configuration on a motherboard so you can have 32 electrically connected PCIe lanes across two slots, even when it's only possible to do 16/0/4/4, 8/8/4/4, or 8/16/0/0. Usually what happens is that the second PCIe x16 slot is only wired for x8 and a third only for x4. If you have 2 SSDs, then steer those unused lanes to the SSDs rather than to the absent device in the x8 slot.

This is possible with PLX switch chips; however, it's generally undesirable due to cost and the added latency.

 

4 hours ago, Kisai said:

I suspect that DirectStorage will not be of any benefit to anything below the x80 parts unless all parts start coming with a minimum of 12GB of VRAM as standard. I also suspect that DirectStorage will demand that the SSD be the same PCIe gen as the GPU; otherwise, any performance benefit gained from it will be lost waiting for the data transfers.

DirectStorage reduces the VRAM requirement, it doesn't increase it. Also, the way it works means GPU performance doesn't really matter at all, because that's not the area of the GPU pipeline that's involved or being talked about.

 

The same PCIe gen doesn't matter either, just actual bandwidth, so any PCIe 4.0 x4 SSD will have the connection bandwidth required, and that will be true for a long time. SSD performance is limited by controller and NAND design; 7.9GB/s is well more than enough, but as you'll notice not many SSDs can actually do that, so it's not the PCIe connection that matters here so long as it's PCIe 4.0 and x4.
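That 7.9GB/s figure is just PCIe 4.0 x4 with the 128b/130b line encoding factored in (a quick check, ignoring packet-level overhead):

```python
# Usable PCIe 4.0 x4 bandwidth with 128b/130b encoding factored in.
gts = 16                   # GT/s per lane (PCIe 4.0)
encoding = 128 / 130       # 128b/130b line code
lanes = 4
gbs = gts * encoding / 8 * lanes
print(f"~{gbs:.2f} GB/s")  # ~7.88 GB/s, i.e. the ~7.9GB/s quoted above
```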

 

4 hours ago, Kisai said:

One of the blog posts from Microsoft has a comment about it being able to decompress 3GB in 7ms, which is insane, but we have no context for what was decompressed, because it sure isn't going to be PNG texture atlases.

DirectStorage has game asset data types and compression as part of its specification; it's documented somewhere, as I have definitely seen it talked about. Finding it again is another matter, sadly.

 

4 hours ago, Moonzy said:

Isn't the point of DirectStorage to allow direct data access from the SSD so the GPU doesn't need that much VRAM? I'm not too sure on this.

Yes, that is somewhere between 50% and 99% of the reason why. It will also allow more detailed game assets, due to the decreased VRAM requirement from not having to load in everything that might, just in case, need to be used.

 

DirectStorage is essentially the JIT (Just in Time) manufacturing workflow applied to assets: only what you need, when you need it.
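A minimal sketch of that JIT contrast (all names here are hypothetical; this is not the actual DirectStorage API, just the preload-everything vs. working-set idea):

```python
# Conceptual contrast: preload everything vs. stream the working set.
ALL_ASSETS = {f"texture_{i}": 256 for i in range(40)}  # name -> MB each

def preload_everything():
    """Old model: everything that *might* be needed sits in VRAM."""
    return sum(ALL_ASSETS.values())

def stream_working_set(visible):
    """JIT model: only the assets the current frame actually uses."""
    return sum(ALL_ASSETS[name] for name in visible)

visible_now = [f"texture_{i}" for i in range(8)]       # what's on screen
print(f"preloaded: {preload_everything() / 1024:.1f} GB of VRAM")
print(f"streamed:  {stream_working_set(visible_now) / 1024:.1f} GB of VRAM")
# 10.0 GB preloaded vs 2.0 GB streamed -- same game, smaller resident set.
```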


7 hours ago, lostcattears said:

We should just skip PCIe 6.0 entirely at this point.

Development is not sequential; it's done in parallel, with future standards at different points in their development. PCIe 6.0 obviously would be further along than PCIe 7.0, and thus come to market first. It's likely someone out there is already thinking about what future tech might go into PCIe 8.0, even if there is no formal announcement for it yet.

 

The only way 6.0 might get scrapped in favor of 7.0 is if some long delay means 7.0 essentially catches up.



Neat to see fast-paced development though. GPUs and SSDs may not necessarily need it, but we'll see. Splitting lanes and having more I/O options etc. would be good too. I definitely want to see DirectStorage in action, soon enough hopefully. Hopefully by then a new type of SSD will be available for better 4K random reads. Sequential will be insane eventually. Imagine zipping that ~1TB game in a sec or so, hah.



11 hours ago, Moonzy said:

I wish they wouldn't; not everyone has the latest generation of hardware, and this would leave people with only PCIe 3.0 x4 lanes, just like the 6500 XT or something.

Theoretically it works fine on 4.0, but not everyone has that yet.

 

So you want to add "remove the motherboard to access your SSD" to the list as well?

I'm not sure how you're getting that.

 

11 hours ago, Moonzy said:

Just add an M.2 slot on the GPU.

That ain't happening. Remember that at one point GPUs had SODIMM or socketable VRAM chips. This largely stopped being a thing because GPUs stopped using the same memory that the motherboard/CPU used.

 

11 hours ago, Moonzy said:

Isn't the point of DirectStorage to allow direct data access from the SSD so the GPU doesn't need that much VRAM? I'm not too sure on this.

No, the point of DirectStorage is to remove the CPU as the bottleneck. You're not going to get GPUs and NVMe drives speaking different PCIe speeds without something in between, which is why I'm sure that if you only have a PCIe 3.0 NVMe drive, the GPU will be forced into PCIe 3.0 mode to use DirectStorage.

 

Likewise, I don't see the feature being on the low-end parts, because they won't have enough VRAM to do anything with it. It's like the problem we have with SSDs that don't have RAM buffers on them. DirectStorage on a GT 1030 equivalent ain't happening. Expecting it on any low-end parts, including x50 and x60, is wishful thinking, because the low-end parts often do not have the video memory capacity or bandwidth to benefit from it. You aren't going to be reading textures straight off the SSD unless the textures are uncompressed raw RGBA files, let alone HDR. Just for reference, a 1K texture is 4MB, a 4K texture is 64MB, an 8K texture is 256MB, and a 16K texture is 1GB, and that's just at standard dynamic range. That 1GB texture may be compressed on disk to 64MB if it's cartoony enough, or lossy-compressed with BC1 to 175MB. That does not negate the need for 1GB of video memory to load the texture. It's not like the GPU is going to decompress it every time it's needed per frame; it's only going to decompress it once.
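Those texture numbers are easy to reproduce (assuming 8-bit RGBA, i.e. 4 bytes per pixel, and no mipmaps):

```python
# Uncompressed texture sizes for 8-bit RGBA, no mip chain.
for k in (1, 4, 8, 16):
    side = k * 1024
    mib = side * side * 4 / 2**20
    print(f"{k}K: {mib:.0f} MB" if mib < 1024 else f"{k}K: {mib / 1024:.0f} GB")
# 1K: 4 MB, 4K: 64 MB, 8K: 256 MB, 16K: 1 GB -- matching the post.
# BC1 is 4 bits/pixel (8:1 vs RGBA8), so the 16K case is 128 MB before
# mipmaps; a full mip chain adds ~33%, landing near the ~175 MB above.
```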

 

No, what's going to happen is that if you try to use DirectStorage on an x50/x60 part, you'll lose other features that consume significant video memory, like DLSS or SSAA.

 

Like, some of y'all need to realize the point of DirectStorage is not to reduce the video memory requirements at all. The intent is to not decompress textures to system memory before uploading them to GPU memory. That means the GPU memory requirement increases, not decreases.

 

 

 


9 minutes ago, Kisai said:

No, what's going to happen is that if you try to use DirectStorage on an x50/x60 part, you'll lose other features that consume significant video memory, like DLSS or SSAA.

I don't think your understanding is entirely correct, but mine is no better, so I'm not going in that direction. Instead I suggest looking at DirectStorage-equivalent examples in the form of the current-gen consoles. The Xbox Series X and PS5 are generally similar overall. They seem to be compared to 2070/3060-tier graphics performance, and have 16GB of total shared system RAM. Devs may have some opportunity to allocate more or less of it to the GPU. I think it's reasonable to say available VRAM might be on the order of 8GB, depending on whether they want more complex game code or to push it towards graphics. That amount is right in the PC GPU value sweet spot at the moment. By the time we see practical adoption of DirectStorage in the PC space, we could well be on the 40 series, and a 4050 would take up that performance level. There may be some small argument for console optimisations allowing extra efficiency, but at the end of the day the real work still has to be done by hardware, and I feel PC overheads are overstated. Still, the main point of this is to say it could provide benefits at existing console hardware levels, without even considering the Xbox Series S. It seems plausible for gaming PCs in the mid range to likewise benefit.

 

The bigger question to me is why we haven't even seen tech demos of this on PC. I don't know how different it would be to port over an example from console. Maybe the bigger question on that side is how devs cope with systems that can't support it; they'll still need a fallback path for a long time, and it seems only providing that legacy option is the easy route today.



4 hours ago, Kisai said:

You're not going to get GPUs and NVMe drives speaking different PCIe speeds without something in between, which is why I'm sure that if you only have a PCIe 3.0 NVMe drive, the GPU will be forced into PCIe 3.0 mode to use DirectStorage.

No, they totally can, and nothing is needed in between. PCIe is just a data bus, and DirectStorage is just a protocol on top of it. I don't think you have read and understood what DirectStorage actually is and does.

 

It's like you're saying 10Mbps networking devices can only talk to other 10Mbps network devices and not 100Mbps or 1000Mbps ones; that's not how it works either.

 

The only thing that matters is that the NVMe device and the PCIe connection have enough bandwidth; what version they are does not matter.

 

Quote

NVMe devices are not only extremely high bandwidth SSD based devices, but they also have hardware data access pipes called NVMe queues which are particularly suited to gaming workloads.

An NVMe device can have multiple queues and each queue can contain many requests at a time. This is a perfect match to the parallel and batched nature of modern gaming workloads. The DirectStorage programming model essentially gives developers direct control over that highly optimized hardware.

 

In addition, existing storage APIs also incur a lot of ‘extra steps’ between an application making an IO request and the request being fulfilled by the storage device, resulting in unnecessary request overhead. These extra steps can be things like data transformations needed during certain parts of normal IO operation. However, these steps aren’t required for every IO request on every NVMe drive on every gaming machine. With a supported NVMe drive and properly configured gaming machine, DirectStorage will be able to detect up front that these extra steps are not required and skip all the unnecessary checks/operations, making every IO request cheaper to fulfill.

For these reasons, NVMe is the storage technology of choice for DirectStorage and high-performance next generation gaming IO.

https://devblogs.microsoft.com/directx/directstorage-is-coming-to-pc/

 

DirectStorage is just an API that you can use to interact with hardware, utilizing hardware features such as those found in the NVMe protocol. PCIe has next to nothing to do with this, other than being the data bus, and you also need enough bandwidth.

 

Quote

DirectStorage has both hardware and software requirements for it to work. PC users running Windows 11 or Windows 10 must be using an NVMe drive.

On the GPU side of the equation, you need a DirectX 12 GPU that supports Shader Model 6.0. In practice this means AMD GPUs that use RDNA2 or better, and RTX 2000-series or better cards from Nvidia.

These are the requirements; note how PCIe is not mentioned at all.

 

4 hours ago, Kisai said:

Like, some of y'all need to realize the point of DirectStorage is not to reduce the video memory requirements at all. The intent is to not decompress textures to system memory before uploading them to GPU memory. That means the GPU memory requirement increases, not decreases.

Again, no, there is no effective change at all. Whether you decompress in system memory and then send to the GPU, or send the data to the GPU compressed and decompress on the GPU, that data is still stored in GPU memory. And it's not copied in and then decompressed so that it's stored twice; it's decompressed as it's streamed in.

 

Right now most GPU memory is wasted on pre-loaded, pre-allocated data held just in case it's needed, because doing anything other than having it preemptively loaded is too slow. DirectStorage allows data to be fetched later, when it's required, rather than dumping it all in GPU memory just in case. Does this mean we'll actually see less GPU memory usage? Well, that depends, because one of the goals is to use DirectStorage to allow higher-quality game graphics by eliminating this wasted GPU memory overhead and replacing it with data that's actually used.

 

A DirectStorage game might still be using 8GB of GPU memory, but without it that usage might have been 20GB or 30GB.


3 hours ago, porina said:

The bigger question to me is why we haven't even seen tech demos of this on PC. I don't know how different it would be to port over an example from console. Maybe the bigger question on that side is how devs cope with systems that can't support it; they'll still need a fallback path for a long time, and it seems only providing that legacy option is the easy route today.

There are some tech demos, but they are quite basic. Right now decompression on the GPU isn't actually supported, and the DirectStorage API is still being worked on. We'll likely find that RTX 40 and RDNA3 will actually be required because they'll have some in-hardware support for this; even though current GPUs technically support it, practically speaking the full benefits and feature support will require new hardware features.


Nice, dual 800Gb port NICs or single 1.6Tb port NIC to my PC when 😄

Edited by Lurick
Forgot bandwidth is combined bidi speed



37 minutes ago, leadeater said:

No, they totally can, and nothing is needed in between. PCIe is just a data bus, and DirectStorage is just a protocol on top of it. I don't think you have read and understood what DirectStorage actually is and does.

 

It's like you're saying 10Mbps networking devices can only talk to other 10Mbps network devices and not 100Mbps or 1000Mbps ones; that's not how it works either.

 

Now you're talking out your behind. A device connected at 10Mbps to a 10/100 SWITCH can talk to 100Mbit devices because the SWITCH is doing the work. If you connect a 10Mbps device to a 10/100 HUB, you will get nothing but collisions as the 10Mbit and 100Mbit devices screw each other over, because they don't know the other is on the hub and only one device can talk at a time. That's why you don't see gigabit hubs; Ethernet hubs were a mistake. A switch does that translation, and even then it's limited to its switching bandwidth.

 

Motherboards aren't going to add PCIe switches just to support DirectStorage. You get the lanes attached to the CPU, which can do this translation, and everything else doesn't get the benefit.

