U.S. revises chip export rules to China, RTX 4090D likely to be affected

Hardware vendors can't catch a break. The US government is turning up the heat on export controls for advanced AI tech.

 

Quote

Scheduled to be implemented on Thursday, April 4th, the document outlining the revisions has been accessible since March 29th. Spanning 166 pages, the PDF lays out the specific types of chips permissible for export and integration into various devices, such as laptops. The Department of Commerce, responsible for export controls, published the draft document.

 

The updated regulations now mandate a licensing requirement for components, as well as computers, that surpass 70 TeraFLOPS of compute performance.

This in particular affects the RTX 4090D graphics card, which has single-precision compute power of 73.5 TFLOPS, and the NVIDIA H20 (Hopper) data-center accelerator, which offers 74 TFLOPS.
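For reference, the 70 TFLOPS line is easy to sanity-check: peak FP32 throughput is usually quoted as shader count × 2 operations per clock (FMA) × boost clock. A minimal sketch of that arithmetic, assuming the commonly reported 4090D figures of 14,592 CUDA cores and a ~2.52 GHz boost clock (both numbers are assumptions taken from public spec listings, not from the regulation itself):

```python
# Rough FP32 throughput estimate: cores * 2 ops/clock (fused multiply-add) * clock.
# The core count and boost clock below are assumed, publicly reported figures for
# the RTX 4090D, not values taken from the export-control document.
def fp32_tflops(cuda_cores: int, boost_clock_ghz: float) -> float:
    return cuda_cores * 2 * boost_clock_ghz / 1000.0  # GFLOPS -> TFLOPS

rtx_4090d = fp32_tflops(14592, 2.52)
print(f"RTX 4090D ~= {rtx_4090d:.1f} TFLOPS FP32")  # ~73.5, just over the 70 TFLOPS line
```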

 

Dunno what more to say about this farce, other than that the looming black market for high-end GPUs amid tightening trade restrictions will push prices up even more for all of us.

 

Source: https://videocardz.com/newz/u-s-revises-chip-export-rules-to-china-geforce-rtx-4090d-likely-to-be-affected


:(.

I am NOT a professional and I write before I think, so REFRESH THE PAGE!!!  There's a 99% chance I've edited my post.

 

Also: Please enable XMP/D.O.H.C before asking why your RAM is too slow.


Can't Nvidia come up with a more creative solution than removing 4 shaders and pretending it's a different chip?

 

-Sell China-only "RTX 4010" cards that turn into a 4090 when a different BIOS is flashed

-Split the graphics processor into two dies (ATi Fire GL4 style, separate the pixel and vertex shaders)

-Sell the boards to Chinese vendors then sell the chips separately

-Hire con men to transport 4090s to Asia in submarines

 

Come on, have some fun!


Can't they just downclock the thing? You can nuke the performance of a GPU pretty easily by just downclocking the shit out of it, then the Chinese can just OC it when it gets into China.

 

Seems like a pretty easy way to bypass any restrictions without even having to cut down the GPU itself; the end user just has to tune the thing.
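To illustrate how thin a clock-based cap would be: on supported cards the stock nvidia-smi tool can pin the graphics clock into a low range and reset it again with one command each. A hedged sketch only, the clock values are made up and the flags need admin rights plus a supported GPU/driver:

```python
# Sketch only: caps and un-caps the graphics clock with nvidia-smi.
# Clock values are illustrative; -lgc/-rgc need admin rights and a supported GPU/driver.
import subprocess

def cap_gpu_clock(min_mhz: int, max_mhz: int) -> None:
    # Lock the graphics clock into a low range, crippling throughput.
    subprocess.run(["nvidia-smi", "-lgc", f"{min_mhz},{max_mhz}"], check=True)

def uncap_gpu_clock() -> None:
    # One command later the card is back to its normal boost behaviour.
    subprocess.run(["nvidia-smi", "-rgc"], check=True)

if __name__ == "__main__":
    cap_gpu_clock(300, 900)   # "export-compliant" mode
    uncap_gpu_clock()         # the end user undoes it in seconds
```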


1 hour ago, DuckDodgers said:

Dunno what more to say about this farce

Biden posted a tweet about how he talked to President Xi Jinping about US/China relations and other world issues. I'm not sure if he did that before or after he forgot where he was...

 

At this point it's getting comical. We can't put the AI back in the box, and really all these restrictions do is slow down the inevitable.

I just want to sit back and watch the world burn. 


Yeah, this seems like a stupid decision imo. This gives China one more reason to take over Taiwan and in turn TSMC, which produces the majority of the world's high-performance chips. Also not sure how much such a ban is going to slow down China's AI infrastructure realistically.


3 hours ago, Donut417 said:

Biden posted a tweet about how he talked to President Xi Jinping about US/China relations and other world issues. I'm not sure if he did that before or after he forgot where he was...

 

At this point it's getting comical. We can't put the AI back in the box, and really all these restrictions do is slow down the inevitable.

It's not like he owns the world

Specs: Motherboard: Asus X470-PLUS TUF Gaming (Yes I know it's poor but I wasn't informed) RAM: Corsair VENGEANCE® LPX DDR4 3200MHz CL16-18-18-36 2x8GB
            CPU: Ryzen 9 5900X          Case: Antec P8     PSU: Corsair RM850x                        Cooler: Antec K240 with two Noctua Industrial PPC 3000 PWM
            Drives: Samsung 970 EVO Plus 250GB, Micron 1100 2TB, Seagate ST4000DM000/1F2168 GPU: EVGA RTX 2080 Ti Black Edition


Is any company running AI at a large scale using 4090s? I thought the go-to chip was the Nvidia H100 (also restricted for China by the US Govt).


10 hours ago, DuckDodgers said:

 

Dunno what more to say about this farce, other than that the looming black market for high-end GPUs amid tightening trade restrictions will push prices up even more for all of us.

 

 

Honestly, I'm very much on a "who cares" line of thinking.

 

If Nvidia wanted to circumvent this, it's very easy: go chiplet. Unless they are restricted to selling only complete boards, not bare chips, all a company has to do is glue two chips together at the chiplet side. (Gross oversimplification.)

 

The reality is that the US government kinda just misunderstands the situation, and trying to keep Nvidia from selling advanced "AI" chips to China means they will just import gaming GPUs to build those supercomputer AI systems and waste significantly more power in doing so. Just because they don't have the GPUs in China doesn't mean they can't get access to compute power in the US or Europe.

 

As it is, I'm getting the distinct feeling that Nvidia's own statements about AGI in 5 years probably put them in hot water for this.

 

Again, as I've said many times: there will be no AGI in 5 years. We will hit a ceiling in die shrinks very soon, and everything will come to a halt without parallelizing the chip designs themselves. I predict what we will see are "chip sandwiches", going PCB-die-RAM-heat interposer-RAM-die-PCB, and GPUs/CPUs will have to use sealed heat exchangers, because "4-slot GPUs" are quite frankly bonkers stupid. Go back to the blower design even if you have to cut the clock in half, or wait for desktops to push the kind of "4-slot + 4-slot" split PCIe lanes with the CPU in the middle, like Dell Precision 7960 workstations.

 

'Cause we are caught in a very stupid situation at the moment where the combined GPU+CPU power can't exceed the wall power of 1800 W (a single circuit), a space heater is 1500 W, and a 1500 W PSU really doesn't deliver 1500 W.
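The back-of-envelope numbers behind that complaint, as a small sketch (the 80% continuous-load derating is the usual US electrical-code rule of thumb, and the ~90% PSU efficiency figure is an assumption, not something from the post):

```python
# Rough wall-power budget for a single US 15 A / 120 V circuit.
VOLTS, AMPS = 120, 15
circuit_w = VOLTS * AMPS            # 1800 W nameplate
continuous_w = circuit_w * 0.80     # ~1440 W usable for a continuous load (code rule of thumb)
psu_efficiency = 0.90               # assumed efficiency of a decent PSU under load
dc_available_w = continuous_w * psu_efficiency

print(f"Circuit: {circuit_w} W, continuous: {continuous_w:.0f} W, "
      f"DC to components: ~{dc_available_w:.0f} W")
# Not much headroom once a 450-600 W GPU, a CPU, and the rest share the same outlet.
```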

 

Nvidia can't put those server chips in gaming cards because the power requirements exceed what you can reasonably get from a US 120 V 15 A circuit, and I'm sure AU, UK and EU standards aren't that far away. Unless gaming and AI computers are going to be things you can only own if you have a 240 V circuit in your house, something has to give, and I'm sure it's going to be the GPU.


This entire thing is laughable and just shows how technically deficient the people running the government are. What moron thought they should be controlling exports based on output performance? Oh no, the poor Chinese, now they have to overclock their cards.


1 hour ago, Fasterthannothing said:

This entire thing is laughable and just shows how technically deficient the people running the government are. What moron thought they should be controlling exports based on output performance? Oh no, the poor Chinese, now they have to overclock their cards.

For those who sell ML as the new nuclear weapons, it makes sense that the new fissile material becomes a heavily restricted strategic resource with controlled and regulated exports. Limiting access to top-tier chips will slow down Chinese developments and buy the USA time.

 

3 hours ago, Kisai said:

As it is, I'm getting the distinct feeling that Nvidia's own statements about AGI in 5 years probably put them in hot water for this.

Indeed, Jensen and Sam Altman have only themselves to blame. Sam Altman pleaded to be given control over worldwide chip production and regulation! Talk about self-important! They hyped the tech so much to boost sales that the government is afraid to let them do their thing. May CEOs learn a lesson or two about overhyping tech!


1 hour ago, 05032-Mendicant-Bias said:

For those who sell ML as the new nuclear weapons, it makes sense that the new fissile material becomes a heavily restricted strategic resource with controlled and regulated exports. Limiting access to top-tier chips will slow down Chinese developments and buy the USA time.

 

Indeed, Jensen and Sam Altman have only themselves to blame. Sam Altman pleaded to be given control over worldwide chip production and regulation! Talk about self-important! They hyped the tech so much to boost sales that the government is afraid to let them do their thing. May CEOs learn a lesson or two about overhyping tech!

Did you even read what I wrote? Limiting the output speed without trying to block the actual chips is like saying there are no longer speed limits on the roads, but we've put a wooden block under your accelerator pedal to keep you from going too fast. Just be sure not to remove that easily removable wooden block.


5 hours ago, Kisai said:

Go back to the blower design even if you have to cut the clock in half, or wait for desktops to push the kind of "4-slot + 4-slot" split PCIe lanes with the CPU in the middle, like Dell Precision 7960 workstations.

From what I read, NVIDIA specifically forbade AIBs from making blowers so that enterprise customers had to buy their overpriced enterprise cards rather than 4090s. Or they'd have to buy the hybrid FE cards directly from NVIDIA, where supply was limited to one card per address and NVIDIA got a bigger cut of the profits.

 

If they weren't allowed to do this, I think the 4090 either would not have existed at all or would have been even more insanely expensive.

 

That said, I'd never buy another normal blower card anyway, as they either ran stupidly hot or were stupidly noisy. The only way to make them quieter is to make them bigger, like the FE card. Even those run hotter than AIB cards, even if you crank the fan up to annoying noise levels. In fact I was quite surprised that past about 60% fan speed, cranking up the FE card fans seems to make barely any difference to temperatures.

Router:  Intel N100 (pfSense) WiFi6: Zyxel NWA210AX (1.7Gbit peak at 160MHz)
WiFi5: Ubiquiti NanoHD OpenWRT (~500Mbit at 80MHz) Switches: Netgear MS510TXUP, MS510TXPP, GS110EMX
ISPs: Zen Full Fibre 900 (~930Mbit down, 115Mbit up) + Three 5G (~800Mbit down, 115Mbit up)
Upgrading Laptop/Desktop CNVIo WiFi 5 cards to PCIe WiFi6e/7


4 hours ago, Alex Atkin UK said:

From what I read, NVIDIA specifically forbade AIBs from making blowers so that enterprise customers had to buy their overpriced enterprise cards rather than 4090s. Or they'd have to buy the hybrid FE cards directly from NVIDIA, where supply was limited to one card per address and NVIDIA got a bigger cut of the profits.

All the server cards are blower models, where the noise doesn't matter since they're supposedly going into servers that already sound like jet engines. If you look at the clocks for those cards, they're lower.

[image: specification table for the server (blower) cards, showing lower clocks]

Compare to the 4090 Ti:

[image: specification table for the 4090 Ti]

The TDP is double on the 4090 Ti, which is perhaps why we never saw anyone build one. Given that it would need "two" 16-pin power connectors, I'm not sure how many PSUs shipped with that.

 

4 hours ago, Alex Atkin UK said:

If they weren't allowed to do this, I think the 4090 either would not have existed at all or would have been even more insanely expensive.

What needs to happen is to move away from "3-slot"/"4-slot" >300 W designs in the first place. This is unsustainable.

 

4 hours ago, Alex Atkin UK said:

That said, I'd never buy another normal blower card anyway, as they either ran stupidly hot or were stupidly noisy. The only way to make them quieter is to make them bigger, like the FE card. Even those run hotter than AIB cards, even if you crank the fan up to annoying noise levels. In fact I was quite surprised that past about 60% fan speed, cranking up the FE card fans seems to make barely any difference to temperatures.

I think the problem overall is that Nvidia hasn't really "improved" the GPU product by making it more efficient, only tried to min-max the thermals. The standard design should have been 2-slot 300 W blower designs, and AIBs could make non-blower designs with AIOs on them if they want to make 450 W and 600 W models, but they'd have to sell the AIO part as a separate piece, since those will undoubtedly require maintenance, and some OEMs might sell crappy AIOs if it's sold as one unit. If I wanted a GPU with a Noctua air cooler, that should be an option to mount to the card instead.

 

But this all goes back to basically needing a "sidecar" for the large GPUs, because few chassis actually have enough space to mount air coolers, and they come at the cost of ALL expansion space. What I'd like to see is a 4xTB "GPU ONLY" card that goes in the x16 slot, connected to a sidecar box with its own PSU and cooling for the dGPU for things that are >300 W. The sidecar then lets you mount anything from an aftermarket air cooler to a massive liquid cooler.

 

 


17 hours ago, Brooksie359 said:

Yeah, this seems like a stupid decision imo. This gives China one more reason to take over Taiwan and in turn TSMC, which produces the majority of the world's high-performance chips.

China isn't taking over Taiwan to capture TSMC. Not a centimetre of any fab would survive.


47 minutes ago, Kisai said:

What needs to happen is to move away from "3-slot"/"4-slot" >300 W designs in the first place. This is unsustainable.

 

I think the problem overall is that Nvidia hasn't really "improved" the GPU product by making it more efficient, only tried to min-max the thermals. The standard design should have been 2-slot 300 W blower designs, and AIBs could make non-blower designs with AIOs on them if they want to make 450 W and 600 W models, but they'd have to sell the AIO part as a separate piece, since those will undoubtedly require maintenance, and some OEMs might sell crappy AIOs if it's sold as one unit. If I wanted a GPU with a Noctua air cooler, that should be an option to mount to the card instead.

I strongly disagree; having thicker cards means they're quieter. Plus, now that the 300 W+ cat is out of the bag, if they reduced it now then next year's card would perform dramatically worse than this year's.

 

The problem was the 30x0 series was not much of an improvement in a lot of ways, so they cranked up the power budget to hide that; my 3080 has about the same performance per watt as my 2080.

 

Then we got the 40x0 series, where they DID improve the efficiency, quite a lot. But they chose to clock the snot out of them, which killed that efficiency.

Most of the cards didn't need 3 slots; I have a 2-slot 4070 Ti which pulls about the same as a 3080 but is considerably faster, in compute at least. If you reduce the clocks a little, it's incredibly efficient. But at stock settings, it's quite noisy compared to 3-slot versions.

Just running the curve optimiser on my 4090 shaved about 100 W off its power consumption running Folding@Home; locking the clock to 2475 MHz (which can't be done in Afterburner or it disables the custom voltage curve) drops it to 230 W total, losing only around 10% of the performance (again in F@H, not tested in games).

 

They didn't need to clock the cards this hard, and WE don't need to clock the cards this hard. However, I'd rather have a card unlocked so I CAN clock it hard if I want to; they just should have made the stock clocks lower. But then they probably would have made them use fewer slots, so we wouldn't have been able to cool them at those higher clocks.
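For anyone wanting to reproduce that kind of clock lock outside Afterburner, the stock driver tooling can do it. A hedged sketch below pins the graphics clock with nvidia-smi and reads back power draw and clock via NVML; the 2475 MHz value echoes the post above, and the package name and admin-rights requirement are assumptions about a typical setup:

```python
# Sketch: lock the graphics clock with nvidia-smi, then read power draw via NVML.
# Clock locking generally needs admin rights and a supported GPU/driver.
import subprocess
import pynvml  # pip install nvidia-ml-py (assumed package name)

subprocess.run(["nvidia-smi", "-lgc", "2475,2475"], check=True)  # pin graphics clock

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)
watts = pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000.0              # reported in milliwatts
sm_mhz = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_SM)  # current SM clock
print(f"Drawing ~{watts:.0f} W at {sm_mhz} MHz")
pynvml.nvmlShutdown()
```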

 

47 minutes ago, Kisai said:

All the server cards are blower models, where the noise doesn't matter since they're supposedly going into servers that already sound like jet engines. If you look at the clocks for those cards, they're lower.

Which is exactly why they forbade AIBs from making blowers, as people would buy them for enterprise use, where they can slap more than one on the same motherboard. The 3090, for example, came in blower variants; I think the Ti even did.

Router:  Intel N100 (pfSense) WiFi6: Zyxel NWA210AX (1.7Gbit peak at 160MHz)
WiFi5: Ubiquiti NanoHD OpenWRT (~500Mbit at 80MHz) Switches: Netgear MS510TXUP, MS510TXPP, GS110EMX
ISPs: Zen Full Fibre 900 (~930Mbit down, 115Mbit up) + Three 5G (~800Mbit down, 115Mbit up)
Upgrading Laptop/Desktop CNVIo WiFi 5 cards to PCIe WiFi6e/7


55 minutes ago, Alex Atkin UK said:

I strongly disagree; having thicker cards means they're quieter. Plus, now that the 300 W+ cat is out of the bag, if they reduced it now then next year's card would perform dramatically worse than this year's.

 

Nah, what I mean is that they should have sold the "bare card" and had the card auto-tune based on the thermal ramp-up.

 

So you buy, say, a 5090 Ti that "can" pull 600 W, but if you stick a wimpy 80 mm fan on it, it will top out at a 5050's performance. Let's do for GPUs what we do for CPUs: OEM fan at baseline, aftermarket fan/cooler for quieter operation or higher thermal headroom. Separate the coolers from the boards except at the low end (e.g. 5050/5050 Ti/5060/5060 Ti) and keep those as 1-slot and 2-slot cards. Treat everything higher the way we treat i5/i7/i9 CPU parts. On the box: "300 W TDP cooler required."

 

Because on the 3070 Ti and the 3090 I have, the cooler is EXACTLY the same. When I pulled the 3070 Ti out to put the 3090 in, I was like "is this the same ****ing card?" and had to check the markings.

 


3 hours ago, Kisai said:

All the server cards are blower models, where the noise doesn't matter since they're supposedly going into servers that already sound like jet engines. If you look at the clocks for those cards, they're lower.

 

3 hours ago, Kisai said:

The TDP is double on the 4090 Ti, which is perhaps why we never saw anyone build one. Given that it would need "two" 16-pin power connectors, I'm not sure how many PSUs shipped with that.

We don't see blower cards because it's actually true that Nvidia doesn't allow them; only not-to-PCI-SIG-spec ones are allowed. All designs actually have to be approved by Nvidia, and GeForce cards are not "allowed" to be used in servers/datacenters, so Nvidia doesn't approve card designs that would make that generally possible.

 

Every GeForce blower design is out of PCI SIG specification in either height or width (just ever so slightly wider than 2 slots) and, although this is not a PCI SIG specification, has the PCIe power connector on the top of the card rather than the end, preventing usage in 1U/2U and even most 3U servers. The high-end cards also have a TDP/TBP above the PCI SIG maximum, meaning no server OEM will ever put them on an HCL or even test them.

 

Everything about GeForce cards that makes them unsuitable for server/datacenter usage will never change, because that is literally the point.

 

1 hour ago, Kisai said:

Nah, what I mean is that they should have sold the "bare card" and had the card auto-tune based on the thermal ramp-up.

That is already how Nvidia GPU Boost works. You won't see thermally constrained cards, because Nvidia prices the GPU packages on a performance target, not a limited one, so it doesn't make any sense to pay a premium and then put a weaker cooler on it, restricting performance and requiring a lower market price, when every other AIB, including yourself, will have unconstrained options. All that is going to do is earn a bad review and a worse profit margin.

 

If Nvidia allows a GPU package to draw 600 W, then all the cards will be designed with 600 W power usage in mind, not less.
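Since GPU Boost already does this tuning automatically, the easiest way to watch it react to the cooler is just to poll clock, temperature and power while the card is loaded. A minimal sketch using nvidia-smi's query mode (the field names are standard; the sample count and interval are arbitrary):

```python
# Sketch: poll GPU Boost behaviour (clock vs. temperature vs. power) once per second.
import subprocess
import time

FIELDS = "clocks.sm,temperature.gpu,power.draw"

for _ in range(10):
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out)  # e.g. "2745 MHz, 63, 312.45 W" - clocks drop as the temperature climbs
    time.sleep(1)
```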


7 hours ago, leadeater said:

 

That is already how Nvidia GPU Boost works. You won't see thermally constrained cards, because Nvidia prices the GPU packages on a performance target, not a limited one, so it doesn't make any sense to pay a premium and then put a weaker cooler on it, restricting performance and requiring a lower market price, when every other AIB, including yourself, will have unconstrained options. All that is going to do is earn a bad review and a worse profit margin.

 

If Nvidia allows a GPU package to draw 600 W, then all the cards will be designed with 600 W power usage in mind, not less.

What I'm getting at is that the mid-to-high-end GPUs, the ones that keep being built wider than 2 slots, could/should probably be sold the same way CPUs are sold: you buy the i5/i7/i9 model tier and stick on bigger fans, quieter fans, or water blocks that fit all models, without having to buy some AIB's cheap fans on one model and RGB fans on another that you then have to throw away. If all the coolers were interchangeable between OEMs, different configurations wouldn't be limited to the current one-design-fits-all-but-really-requires-4-slots status quo. It would also keep the disaster of the 16-pin power connector from happening again, as different AIBs could sell models that use different power connectors at different positions outside the cooler footprint.

 

Anyway, the status quo is stupid: AIBs sell OC and non-OC cards with the exact same cooler and you just pay $50-$100 more for it.

 


1 hour ago, Kisai said:

What I'm getting at is that the mid-to-high-end GPUs, the ones that keep being built wider than 2 slots, could/should probably be sold the same way CPUs are sold: you buy the i5/i7/i9 model tier and stick on bigger fans, quieter fans, or water blocks that fit all models, without having to buy some AIB's cheap fans on one model and RGB fans on another that you then have to throw away. If all the coolers were interchangeable between OEMs

Ahhhh, hmm, not a bad idea. The only slight downside is that it would give Nvidia a little more design control, since everything has to be in the same place at the same height, so every card will likely have the exact same power delivery design, i.e. 12+2 etc.

 

A company like EK would really like this though; me too. Not having to throw away my actually expensive water blocks would be nice.


14 hours ago, Ydfhlx said:

China isn't taking over Taiwan to capture TSMC. Not a centimetre of any fab would survive.

 

Not just that, those fabs are so valuable to so many that China would be taking on every developed or developing nation in the process. It's why China is trying to get its own chip manufacturing up. It can't really touch Taiwan unless it can somehow wean the rest of the world off TSMC.

 

As for the OP: this isn't a surprise. I expect we'll see more and more tightening of the restrictions as time goes on; it's just an inevitable effect of the geopolitical climate.


On 4/2/2024 at 8:22 PM, Brooksie359 said:

Yeah, this seems like a stupid decision imo. This gives China one more reason to take over Taiwan and in turn TSMC, which produces the majority of the world's high-performance chips. Also not sure how much such a ban is going to slow down China's AI infrastructure realistically.

There's an American nuclear deterrent to that. Also a huge navy.
Also the untested possibility (not likely to work, but still a threat) of Taiwan destroying the Three Gorges Dam and wiping out huge swathes of China.

With that said, if there's a major conflict involving China, EVERYONE in the world loses. We're talking major economic ramifications for countries not directly involved, starvation, material deprivation, etc. 

3900x | 32GB RAM | RTX 2080

1.5TB Optane P4800X | 2TB Micron 1100 SSD | 16TB NAS w/ 10Gbe
QN90A | Polk R200, ELAC OW4.2, PB12-NSD, SB1000, HD800
 


Again, these are *manufactured* in China, therefore not allowing export *to* China seems to make no sense.

 

Nvidia isn't even an American company lol.

 

 

The direction tells you... the direction

-Scott Manley, 2021

 

Software used: Corsair Link (Anime Edition), MSI Afterburner, OpenRGB, Lively Wallpaper, OBS Studio, Shutter Encoder, Avidemux, FSResizer, Audacity, VLC, WMP, GIMP, HWiNFO64, Paint, 3D Paint, GitHub Desktop, Superposition, Prime95, Aida64, GPU-Z, CPU-Z, Generic Logviewer
 

 

 


Note: the source in the OP has been updated with a claim that Nvidia apparently said those new regulations don't apply to the 4090D or H20.

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible
