PCI-Express Gen 4 to arrive next year

Djole123
3 hours ago, Vode said:

PCI-e is such a greedy bastard. Taking all the bandwidth for itself.

 

Should give the old SATA a piece of the cake.

Or we should all just switch to PCI-e drives.

 

I wouldn't mind to be honest. MOAR SPEED!

 

#NEVERENOUGH #DEATHTOLOADINGSCREENS

Ketchup is better than mustard.

GUI is better than Command Line Interface.

Dubs are better than subs


4 hours ago, Coaxialgamer said:

Great. Now AMD will stick to PCIe 3.0 and upgrade to 4.0 when PCIe 5.0 comes out... xD

You sir got ninja'd

We have a NEW and GLORIOUSER-ER-ER PSU Tier List Now. (dammit @LukeSavenije stop coming up with new ones)

You can check out the old one that gave joy to so many across the land here

 

Computer having a hard time powering on? Troubleshoot it with this guide. (Currently looking for suggestions to update it into the context of <current year> and make it its own thread)

Computer Specs:

Spoiler

Mathresolvermajig: Intel Xeon E3 1240 (Sandy Bridge i7 equivalent)

Chillinmachine: Noctua NH-C14S
Framepainting-inator: EVGA GTX 1080 Ti SC2 Hybrid

Attachcorethingy: Gigabyte H61M-S2V-B3

Infoholdstick: Corsair 2x4GB DDR3 1333

Computerarmor: Silverstone RL06 "Lookalike"

Rememberdoogle: 1TB HDD + 120GB TR150 + 240 SSD Plus + 1TB MX500

AdditionalPylons: Phanteks AMP! 550W (based on Seasonic GX-550)

Letterpad: Rosewill Apollo 9100 (Cherry MX Red)

Buttonrodent: Razer Viper Mini + Huion H430P drawing Tablet

Auralnterface: Sennheiser HD 6xx

Liquidrectangles: LG 27UK850-W 4K HDR

 


4 hours ago, Swatson said:

PCI-E is probably the only data transfer/connector standard that's actually ahead of the needs of its devices. I wish every other standards committee would follow in their footsteps.

Uh, no, it's miles behind where it needs to be from the perspective of datacenters and supercomputers.

 

1 hour ago, Briggsy said:

Is fiber optics really that useful for a few inches of communication?

 

I'm not being rhetorical, I'm genuinely curious. I would think switching from electrical to optical, then back again to electrical, carries some kind of latency penalty. If used over thousands of miles it's definitely worth eliminating signal repeaters by using fiber instead of copper, but you don't have that issue in a PC.

When your algorithms are tuned on the order of nanoseconds, yes. That's why high-speed trading is done using FPGAs with four 200 Gbps NICs attached directly to the card. The latency of copper wire is too much even for short-distance communication in some datacenters, and as we attempt to approach exascale computing, that applicability will only increase.
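For a sense of scale on the "few inches" question, a rough back-of-envelope on propagation delay alone (the velocity factors below are ballpark assumptions, and the real cost of an optical link is mostly the electrical-optical conversion at each end, which this sketch ignores):

```python
# Back-of-envelope propagation delay over short link lengths.
# Assumed velocity factors: ~0.65c for copper traces, ~0.68c for fiber.
# Real values vary by medium; the point is both are nanoseconds per foot.

C = 299_792_458  # speed of light in vacuum, m/s

def propagation_delay_ns(length_m, velocity_factor):
    """Time for a signal to traverse length_m at velocity_factor * c, in ns."""
    return length_m / (velocity_factor * C) * 1e9

for length in (0.1, 1.0, 100.0):  # 10 cm, 1 m, 100 m
    cu = propagation_delay_ns(length, 0.65)
    fi = propagation_delay_ns(length, 0.68)
    print(f"{length:>6} m: copper ~{cu:.2f} ns, fiber ~{fi:.2f} ns")
```

Over a 10 cm trace the medium itself only buys or costs fractions of a nanosecond either way, which is why the conversion overhead at each end dominates at PC scale.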

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


5 hours ago, djdwosk97 said:

SATA is at its limits. That's why storage jumped to PCIe rather than SATA 4. The amount of power required for SATA 4 is what makes it an impossible standard.

SATA 4 requires a lot of power? How much more over SATA 3?

.


5 minutes ago, AlwaysFSX said:

SATA 4 requires a lot of power? How much more over SATA 3?

I don't know; it's just something I remember reading a couple of years back -- and why there was a sudden jump to PCIe and no announcement of SATA 4.

3 minutes ago, AresKrieger said:

The only question I have is whether it's backwards compatible on both ends, i.e. would a PCIe 4.0-compliant card work in a 3.0 slot and vice versa?

I would be very surprised if they weren't compatible. 

PSU Tier List | CoC

Gaming Build | FreeNAS Server

Spoiler

i5-4690k || Seidon 240m || GTX780 ACX || MSI Z97s SLI Plus || 8GB 2400mhz || 250GB 840 Evo || 1TB WD Blue || H440 (Black/Blue) || Windows 10 Pro || Dell P2414H & BenQ XL2411Z || Ducky Shine Mini || Logitech G502 Proteus Core

Spoiler

FreeNAS 9.3 - Stable || Xeon E3 1230v2 || Supermicro X9SCM-F || 32GB Crucial ECC DDR3 || 3x4TB WD Red (JBOD) || SYBA SI-PEX40064 sata controller || Corsair CX500m || NZXT Source 210.


Quote

I would be very surprised if they weren't compatible. 

I only asked due to an old rumor I heard saying it would be compatible one way but not the other; granted, it was a rumor and probably false.

https://linustechtips.com/main/topic/631048-psu-tier-list-updated/ Tier Breakdown (My understanding)--1 Godly, 2 Great, 3 Good, 4 Average, 5 Meh, 6 Bad, 7 Awful

 


5 hours ago, manikyath said:

100Gbps network cards apparently..

 

although I doubt there are many places that'll use those :P

100GB/s card...

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DYI FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


27 minutes ago, Prysin said:

and still we cannot max out Gen2 under gaming loads

That's partly because studios now just dump all the data onto the GPU from the start instead of streaming it. In datacenters, there's no such luck.



You people realize that computers are used for more than just gaming and web browsing, right? I can see these speeds being very useful for computational work and in datacenters that require massive clusters of computers working in sync. This will also improve coprocessor communication speeds and reduce drive bottlenecks when working with massive, complex datasets.

Primary PC-

CPU: Intel i7-6800k @ 4.2-4.4Ghz   CPU COOLER: Bequiet Dark Rock Pro 4   MOBO: MSI X99A SLI Plus   RAM: 32GB Corsair Vengeance LPX quad-channel DDR4-2800  GPU: EVGA GTX 1080 SC2 iCX   PSU: Corsair RM1000i   CASE: Corsair 750D Obsidian   SSDs: 500GB Samsung 960 Evo + 256GB Samsung 850 Pro   HDDs: Toshiba 3TB + Seagate 1TB   Monitors: Acer Predator XB271HUC 27" 2560x1440 (165Hz G-Sync)  +  LG 29UM57 29" 2560x1080   OS: Windows 10 Pro

Album

Other Systems:

Spoiler

Home HTPC/NAS-

CPU: AMD FX-8320 @ 4.4Ghz  MOBO: Gigabyte 990FXA-UD3   RAM: 16GB dual-channel DDR3-1600  GPU: Gigabyte GTX 760 OC   PSU: Rosewill 750W   CASE: Antec Gaming One   SSD: 120GB PNY CS1311   HDDs: WD Red 3TB + WD 320GB   Monitor: Samsung SyncMaster 2693HM 26" 1920x1200 -or- Steam Link to Vizio M43C1 43" 4K TV  OS: Windows 10 Pro

 

Offsite NAS/VM Server-

CPU: 2x Xeon E5645 (12-core)  Model: Dell PowerEdge T610  RAM: 16GB DDR3-1333  PSUs: 2x 570W  SSDs: 8GB Kingston Boot FD + 32GB Sandisk Cache SSD   HDDs: WD Red 4TB + Seagate 2TB + Seagate 320GB   OS: FreeNAS 11+

 

Laptop-

CPU: Intel i7-3520M   Model: Dell Latitude E6530   RAM: 8GB dual-channel DDR3-1600  GPU: Nvidia NVS 5200M   SSD: 240GB TeamGroup L5   HDD: WD Black 320GB   Monitor: Samsung SyncMaster 2693HM 26" 1920x1200   OS: Windows 10 Pro

Having issues with a Corsair AIO? Possible fix here:

Spoiler

Are you getting weird fan behavior, speed fluctuations, and/or other issues with Link?

Are you running AIDA64, HWinfo, CAM, or HWmonitor? (ASUS suite & other monitoring software often have the same issue.)

Corsair Link has problems with some monitoring software so you may have to change some settings to get them to work smoothly.

-For AIDA64: First make sure you have the newest update installed, then, go to Preferences>Stability and make sure the "Corsair Link sensor support" box is checked and make sure the "Asetek LC sensor support" box is UNchecked.

-For HWinfo: manually disable all monitoring of the AIO sensors/components.

-For others: Disable any monitoring of Corsair AIO sensors.

That should fix the fan issue for some Corsair AIOs (H80i GT/v2, H110i GTX/H115i, H100i GTX and others made by Asetek). The problem is bad coding in Link that fights for AIO control with other programs. You can test if this worked by setting the fan speed in Link to 100%, if it doesn't fluctuate you are set and can change the curve to whatever. If that doesn't work or you're still having other issues then you probably still have a monitoring software interfering with the AIO/Link communications, find what it is and disable it.


1 hour ago, patrickjp93 said:

That's partly because studios now just dump all the data onto the GPU from the start instead of streaming it. In datacenters, there's no such luck.

True, but as we both know, mobo manufacturers are going to market this as a "you either have it or you suck" kind of feature, despite it being completely irrelevant to gamers.


1 minute ago, pyrojoe34 said:

You people realize that computers are used for more than just gaming and web browsing, right? I can see these speeds being very useful for computational work and in datacenters that require massive clusters of computers working in sync. This will also improve coprocessor communication speeds and reduce drive bottlenecks when working with massive, complex datasets.

Exactly. I'm sure a Xeon Phi could theoretically saturate PCIe Gen 3 if it were given a wider bus.

Main Rig:-

Ryzen 7 3800X | Asus ROG Strix X570-F Gaming | 16GB Team Group Dark Pro 3600Mhz | Corsair MP600 1TB PCIe Gen 4 | Sapphire 5700 XT Pulse | Corsair H115i Platinum | WD Black 1TB | WD Green 4TB | EVGA SuperNOVA G3 650W | Asus TUF GT501 | Samsung C27HG70 1440p 144hz HDR FreeSync 2 | Ubuntu 20.04.2 LTS |

 

Server:-

Intel NUC running Server 2019 + Synology DSM218+ with 2 x 4TB Toshiba NAS Ready HDDs (RAID0)


PCI Express link performance:

Version                Line code   Transfer rate   x1                x4                 x8                 x16
1.0                    8b/10b      2.5 GT/s        250 MB/s          1 GB/s             2 GB/s             4 GB/s
2.0                    8b/10b      5 GT/s          500 MB/s          2 GB/s             4 GB/s             8 GB/s
3.0                    128b/130b   8 GT/s          984.6 MB/s        3.938 GB/s         7.877 GB/s         15.754 GB/s
4.0 (expected 2017)    128b/130b   16 GT/s         1.969 GB/s        7.877 GB/s         15.754 GB/s        31.508 GB/s
5.0 (far future)       128b/130b   32 / 25 GT/s    3.9 / 3.08 GB/s   15.8 / 12.3 GB/s   31.5 / 24.6 GB/s   63.0 / 49.2 GB/s

 

100 Gbps ≈ 12.5 GB/s ... close to the v3.0 x16 limits. Makes sense for that card to be backwards compatible with 3.0.
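Those throughput figures fall straight out of the line coding; here's a quick sketch of the arithmetic (using decimal GB, as the table above does):

```python
# Per-lane PCIe throughput: transfer rate (GT/s) x line-code efficiency / 8.
# Encoding schemes taken from the table above; 1 GB = 8 Gb (decimal units).

def lane_throughput_gbs(transfer_rate_gt, encoded_bits, payload_bits):
    """Usable throughput of one lane in GB/s."""
    return transfer_rate_gt * payload_bits / encoded_bits / 8

generations = {
    "1.0": (2.5, 10, 8),      # 8b/10b: 20% encoding overhead
    "2.0": (5.0, 10, 8),      # 8b/10b
    "3.0": (8.0, 130, 128),   # 128b/130b: ~1.5% overhead
    "4.0": (16.0, 130, 128),  # 128b/130b
}

for gen, (rate, enc, pay) in generations.items():
    x1 = lane_throughput_gbs(rate, enc, pay)
    print(f"PCIe {gen}: x1 = {x1:.3f} GB/s, x16 = {16 * x1:.3f} GB/s")
```

Running this reproduces the table: Gen 3 lands at ~0.985 GB/s per lane (15.754 GB/s at x16), and Gen 4's doubled transfer rate doubles everything to ~31.5 GB/s at x16.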

 

IMHO, since the same data encoding is used, depending on how they're designed, existing processors could be 4.0 compatible and could possibly be upgraded to it through some microcode. But I doubt anyone would actually do it (if it's even possible). The microcode would have to be pushed to users through motherboard BIOS updates or through CPU/chipset drivers, or something like that.

It's like with current video cards, which officially support some version of DisplayPort, but actually support (or can be made to support, through a firmware update) a newer version; they just don't advertise it because the new version wasn't fully standardized at the time of release.

 

// chart above from Wikipedia


11 hours ago, Bananasplit_00 said:

WHAT NEEDS THIS BANDWIDTH? Like, it's getting freaking silly. Everything should be PCI-e lol, screw USB, just have a 1x slot :P But still, the throughput speed is just getting insane...

Hmmm Linus for the swag and videos?


9 hours ago, patrickjp93 said:

Except in data centers where it is the single biggest bottleneck after accessing disk (SSD OR HDD). It's actually an enormous problem for exascale computing.

 

No, it's 100GB/s, just like Omnipath, NVLink, and Infiniband.

Edit: Whoops, didn't read the picture label. :P

Plus, 4.0 speeds wouldn't be enough for 100 GB/s either; they're supposed to cap out at ~32 GB/s at x16.

and we don't measure NICs in bytes, because history

CPU: Intel i7 5820K @ 4.20 GHz | MotherboardMSI X99S SLI PLUS | RAM: Corsair LPX 16GB DDR4 @ 2666MHz | GPU: Sapphire R9 Fury (x2 CrossFire)
Storage: Samsung 950Pro 512GB // OCZ Vector150 240GB // Seagate 1TB | PSU: Seasonic 1050 Snow Silent | Case: NZXT H440 | Cooling: Nepton 240M
FireStrike // Extreme // Ultra // 8K // 16K

 


6 hours ago, Curufinwe_wins said:

100GB/s card...

Gb/s, look at OP's picture ;)

(I prefer to write Gbps like so, and GB/s like so, to keep them more separate in my brain :P)


12 hours ago, Bananasplit_00 said:

WHAT NEEDS THIS BANDWIDTH? Like, it's getting freaking silly. Everything should be PCI-e lol, screw USB, just have a 1x slot :P But still, the throughput speed is just getting insane...

Google does...

System Specs:

CPU: Ryzen 7 5800X

GPU: Radeon RX 7900 XT 

RAM: 32GB 3600MHz

HDD: 1TB Sabrent NVMe -  WD 1TB Black - WD 2TB Green -  WD 4TB Blue

MB: Gigabyte  B550 Gaming X- RGB Disabled

PSU: Corsair RM850x 80 Plus Gold

Case: BeQuiet! Silent Base 801 Black

Cooler: Noctua NH-DH15

 

 

 


9 hours ago, Briggsy said:

Is fiber optics really that useful for a few inches of communication?

 

I'm not being rhetorical, I'm genuinely curious. I would think switching from electrical to optical, then back again to electrical, carries some kind of latency penalty. If used over thousands of miles it's definitely worth eliminating signal repeaters by using fiber instead of copper, but you don't have that issue in a PC.

Bandwidth, I guess, since copper can't keep up. It's still giving good speeds though. But to go faster, they'd have to make connections like PCIe optical-based, I guess.

| Ryzen 7 7800X3D | AM5 B650 Aorus Elite AX | G.Skill Trident Z5 Neo RGB DDR5 32GB 6000MHz C30 | Sapphire PULSE Radeon RX 7900 XTX | Samsung 990 PRO 1TB with heatsink | Arctic Liquid Freezer II 360 | Seasonic Focus GX-850 | Lian Li Lanccool III | Mousepad: Skypad 3.0 XL / Zowie GTF-X | Mouse: Zowie S1-C | Keyboard: Ducky One 3 TKL (Cherry MX-Speed-Silver)Beyerdynamic MMX 300 (2nd Gen) | Acer XV272U | OS: Windows 11 |


Can it also supply more power through the slot? It would be pretty awesome if it could.

 

 

 

 


15 hours ago, Djole123 said:

So as you might know, PCIe Gen 3 has been out for quite a while now.

But fear no more, PCIe Gen 4 is coming next year!

 

It's promised to double the data rate (16 GT/s vs. 8 GT/s), and PCI-SIG already has some plans for PCIe Gen 5!

 

The first PCIe Gen 4 card has already appeared: a ConnectX PCIe x16 100GB/s card.

[image removed]

 

Source:

http://videocardz.com/63305/pci-express-4-0-to-arrive-next-year

The image showcasing the product says 100 gigabits per second, not gigabytes.

"It pays to keep an open mind, but not so open your brain falls out." - Carl Sagan.

"I can explain it to you, but I can't understand it for you" - Edward I. Koch


3 hours ago, Senzelian said:

Can it also supply more power through the slot? It would be pretty awesome if it could.

Power is the product of voltage and current. There's only so much current those tiny contacts in the slot can carry to the video card; without adding more contacts (and therefore changing the slot shape), you can't increase the current by a huge amount (it's probably safe to go up to around 100 W through those contacts).

 

One solution would be going to a higher voltage, but that would break compatibility. Maybe in the near future we'll see video cards work with 20 V or something like that through different PCI-e power connectors (for example a 4-pin or 6-pin 20 V connector; a 4-pin connector would be capable of safely delivering up to about 10-15 A, which at 20 V means 200-300 watts).

The PCI-e slot could be left at 12 V for backwards compatibility; the 75 W it can provide could be used by video cards just to power the memory on the card or something like that. 8 GB of GDDR5 uses about 20-30 watts, well below the limits of the slot.
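The wattage figures above are just P = V × I; a quick sketch of the arithmetic (the slot current limit here is the commonly cited ~5.5 A figure for the 12 V pins, and the 20 V connector is the hypothetical one suggested above, not anything in a spec):

```python
# P = V * I. The 20 V auxiliary connector is hypothetical; the slot's
# 12 V current limit is the commonly cited figure, treat it as approximate.

def watts(volts, amps):
    """Power in watts from voltage and current."""
    return volts * amps

# PCIe slot today: 12 V pins at roughly 5.5 A -> ~66 W, which together
# with the 3.3 V pins yields the familiar ~75 W slot budget.
slot = watts(12, 5.5)

# Hypothetical 4-pin 20 V connector at 10-15 A, as suggested above:
low, high = watts(20, 10), watts(20, 15)

print(f"slot ~{slot:.0f} W; 20 V connector {low:.0f}-{high:.0f} W")
```

So the same physical pins carry roughly 67% more power at 20 V than at 12 V for the same current, which is the whole appeal of raising the voltage instead of the pin count.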

 

USB (with Power Delivery) is already designed to allow devices to negotiate between 5 V, 12 V, and 20 V, but USB controllers that will grant device requests for more than 5 V aren't implemented in computers yet. There are motherboards designed to accept DC input from a regular laptop adapter jack (which is 18.5-19 V, close enough to 20 V)... it seems lots of things are gravitating towards the 20 V figure.

 

I'm just hoping that at some point we'll see a new ATX power supply standard that adds a 20 V rail besides 12 V and 5 V (and makes 3.3 V and -12 V obsolete, because it's fairly easy to put a DC-DC converter on the motherboard to produce 3.3 V for M.2 and SSD devices, and not waste energy through voltage drops in the cables between the power supply and motherboard).


9 hours ago, manikyath said:

Gb/s, look at OP's picture ;)

(I prefer to write Gbps like so, and GB/s like so, to keep them more separate in my brain :P)

Dah! I fell prey to the OP's mistyping. My bad.



To everyone who can't stand the GB/Gb mistake, here, it's fixed.

Athlon X2 for only 27.31$   Best part lists at different price points   Windows 1.01 running natively on an Eee PC

My rig:

Spoiler

Celeronator (new main rig)

CPU: Intel Celeron (duh) N2840 2.16GHz Dual Core

RAM: 4GB DDR3 1333MHz

HDD: Seagate 500GB

GPU: Intel HD Graphics 3000 Series

Spoiler

Frankenhertz (ex main rig)

CPU: Intel Atom N2600 1.6GHz Dual Core

RAM: 1GB DDR3-800

HDD: HGST 320GB

GPU: Intel Graphics Media Accelerator (GMA) 3600

 

