AMD Begins Massive Wave of Price Cuts on R9 290x and 290

We consider 50-65 FPS too low for Tomb Raider? And are you one of the twits who thinks you need AA at 4K? You don't.

 

You get 38 FPS in Tomb Raider with only FXAA (minimal performance hit).


You get 38 FPS in Tomb Raider with only FXAA (minimal performance hit).

At 4K you don't need AA unless you're working on a 30"+ screen.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Yeah well FXAA doesn't affect the framerate that much, so that's really a moot point.


Yeah well FXAA doesn't affect the framerate that much, so that's really a moot point.

Furthermore, do you actually see a difference between high and ultra textures? I for one don't, and that gets you back another 5-8 FPS depending on where you are in the game.



I think I'll stick to getting a 970.

Specs: CPU - Intel i7 8700K @ 5GHz | GPU - Gigabyte GTX 970 G1 Gaming | Motherboard - ASUS Strix Z370-G WIFI AC | RAM - XPG Gammix DDR4-3000MHz 32GB (2x16GB) | Main Drive - Samsung 850 Evo 500GB M.2 | Other Drives - 7TB/3 Drives | CPU Cooler - Corsair H100i Pro | Case - Fractal Design Define C Mini TG | Power Supply - EVGA G3 850W


 

Yeah agreed, so does Nvidia. Drivers can't kill cards. Fan control is done by the BIOS; tools like MSI AB can override it, but drivers have no reason to do this. The 600 series and up have resistors preventing the voltage from going above 1.21V, and the BIOS is locked on the 600 series to around 1.175V; again, there's no reason a driver should overvolt the GPU. Stop sucking AMD's dick for a second; lately you were proven wrong about your Maxwell perf/watt claims by me, and you decided to copy-paste them into a new thread again.

 

 

And you can't get around the fact that Nvidia uses compression; their 256-bit bus performs like a 384-bit one.

Also, you're completely wrong about Gbps and GB/s. A 384-bit bus at 7000MHz in GDDR5 will have 336GB/s. It's not in Gbps. Since most 290s hardly go above 6000MHz, a 780 Ti with Samsung ICs that easily hits 8000MHz will have more bandwidth. Your formula is wrong too, not that I checked it, cba.

It's memory speed x DDR number x bus width. 

A 780 ti : 384bit * 5 * 1750MHz = 336GB/s

A 970: 256bit * 5 * 1750MHz = 224GB/s

A 290x: 512bit * 5 * 1375MHz = 352GB/s

A 290x average OC is ~6000MHz: 512 * 5 * 1500MHz = 384GB/s

A 780 ti average OC ~7600MHz: 384bit * 5 * 1900MHz = 384GB/s

Side note: as we know, the 780 Ti sits at stock at 7000MHz; since GDDR5 is quad pumped -> 7000/4 = 1750MHz. 

It's not even about the total bandwidth; it's about how big the memory bus itself is. It's the bridge between the GPU and memory. Have your memory as fast as you want; without a fast memory controller you aren't getting anywhere.
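For anyone following along, the arithmetic in the list above can be sketched in a few lines (a minimal sketch; the base clocks and bus widths are the figures quoted in the post, and the pump factor of 4 is the conventional one for GDDR5):

```python
def bandwidth_gbs(base_clock_mhz, pumps, bus_width_bits):
    """Memory bandwidth in GB/s: effective transfer rate (MT/s)
    times bus width in bytes, then MB/s -> GB/s."""
    effective_mts = base_clock_mhz * pumps       # e.g. 1750 MHz * 4 = 7000 MT/s
    return effective_mts * (bus_width_bits / 8) / 1000

# 780 Ti: 1750 MHz base, 384-bit bus -> 336.0 GB/s
print(bandwidth_gbs(1750, 4, 384))
# 970: 256-bit bus -> 224.0 GB/s
print(bandwidth_gbs(1750, 4, 256))
# 290X at a ~6000 MHz effective OC: 512-bit bus -> 384.0 GB/s
print(bandwidth_gbs(1500, 4, 512))
```

These match the numbers in the post; the "×5" shorthand used there folds the byte conversion into the multiplier.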

 

Just an FYI, your math is wrong there.

 

Memory bandwidth is:

Base Clock * 4 * Bus Width

 

It's not "times the DDR number"

 

GDDR5 is based on DDR3. They both have double the data lanes of traditional DDR memory. Traditional DDR memory is also "double-data rate". So DDR was "base clock * 2", but GDDR5 and DDR3 are "base clock * 4"

 

http://en.wikipedia.org/wiki/GDDR5

 

Like its predecessor, GDDR4, GDDR5 is based on DDR3 SDRAM memory which has double the data lines compared to DDR2 SDRAM

 

The DDR number is nothing more than a version number or product evolution marketing label.

For Sale: Meraki Bundle

 

iPhone Xr 128 GB Product Red - HP Spectre x360 13" (i5 - 8 GB RAM - 256 GB SSD) - HP ZBook 15v G5 15" (i7-8850H - 16 GB RAM - 512 GB SSD - NVIDIA Quadro P600)

 


Oh goodie, AMD Evolved games for comparison with Nvidia cards.

Don't you love it when they cherry pick Mantle-enabled games.



Wow. :o



Competition is great for consumers. I just hope that AMD doesn't get knocked out while grinning. If they leave themselves too thin of a profit margin, paying for R&D could get difficult and Nvidia is already ahead of AMD in terms of graphical performance. I wouldn't want to see what Nvidia would do without any competition.

Paragon [ Intel i7 4770k @ 4.2GHz | 16GB @ 1866MHz Kingston HyperX Fury DDR3 | ASUS Z87-Pro | Zotac AMP GTX 1080 Ti | Corsair H100i GTX | Samsung 850 EVO 500GB SSD | 2 x Crucial M500 240GB SSDs | Seagate Barracuda 2TB HDD | EVGA SuperNOVA 850W G2 | Corsair Carbide 300R ] (Backup Storage: Seagate Expansion 5TB USB3.0 HDD)

 

ASUS ROG Swift PG279Q

 


Just an FYI, your math is wrong there.

 

Memory bandwidth is:

Base Clock * 4 * Bus Width

 

It's not "times the DDR number"

 

GDDR5 is based on DDR3. They both have double the data lanes of traditional DDR memory. Traditional DDR memory is also "double-data rate". So DDR was "base clock * 2", but GDDR5 and DDR3 are "base clock * 4"

1500MHz*4*384 isn't giving you 288GB/s, which is the 780's bandwidth. It's (base clock x pump amount x bus width) / 8. For GDDR5, using 5 instead of 4 and not dividing by 8 will give the same result. 

DDR1 isn't dual pumped. DDR2 is. DDR3 is quad pumped. GDDR5 is just a modified version of DDR3. GDDR3 is dual pumped too, modified version of DDR2. http://www.rtcmagazine.com/archive_images/rtc0902/rtc0902se_pen01.jpg

A 1600MHz stick is just 200MHz base, quad pumped to 800MHz, and since it's DDR you multiply it by two, which gives you 1600MHz/1600MT/s or 1.6GT/s. 1GHz = 1GT/s. 
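The clock relationships in that last paragraph can be written out explicitly (an illustrative sketch for a DDR3-1600 stick using the poster's 200 MHz base figure; note that whether the ×4 belongs to the clock or the data rate is exactly what the thread goes on to argue about):

```python
# Clock domains for a DDR3-1600 stick, per the description above
base_clock_mhz = 200                  # internal memory-array clock
io_clock_mhz = base_clock_mhz * 4     # I/O bus clock ("quad pumped" in the post's terms)
transfer_rate_mts = io_clock_mhz * 2  # double data rate: one transfer per I/O clock edge
print(transfer_rate_mts)              # 1600, i.e. "DDR3-1600" / 1.6 GT/s
```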


DDR1 isn't dual pumped.

 

Reheheheheally?

 

 

DDR stands for Double Data Rate, which specifically refers to SDRAM that is double pumped.


Just an FYI, your math is wrong there.

 

Memory bandwidth is:

Base Clock * 4 * Bus Width

 

It's not "times the DDR number"

 

GDDR5 is based on DDR3. They both have double the data lanes of traditional DDR memory. Traditional DDR memory is also "double-data rate". So DDR was "base clock * 2", but GDDR5 and DDR3 are "base clock * 4"

 

http://en.wikipedia.org/wiki/GDDR5

 

DDR3 is not quad pumped, it's double pumped like DDR2 and DDR1. So the data rate is the I/O clock times 2, not times 4. Fire up CPU-Z and check your memory; it'll report DRAM frequency at half what the memory advertises (so if your memory is DDR3-1333, the clock frequency will be 666 MHz).

 

GDDR5 is different and a bit more complicated, because it uses more clock signals. But if you look at the CK, which is the clock frequency usually referred to (along with the data rate that should be in MT/s or GT/s instead of MHz or GHz), then it is effectively quad pumped. That's basically why it has more clock signals; it transfers data on the rising and falling edges of two separate clock signals.
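Under that reading, a GDDR5 part's effective data rate is four transfers per CK cycle (a sketch using the 780 Ti's 7 Gbps memory from earlier in the thread as the example):

```python
# GDDR5 effective transfer rate relative to the command clock CK:
# data moves on the rising and falling edges of two write clocks
# (WCK01/WCK23), giving four transfers per CK cycle.
ck_mhz = 1750
transfers_per_ck = 4
data_rate_mts = ck_mhz * transfers_per_ck
print(data_rate_mts)  # 7000 MT/s, marketed as "7000 MHz effective"
```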


Makes sense, considering the GTX 970 beats the R9 290. I'd still pick the 970 over it, even if the 290 does save me $50.

 

The R9 290 still beats the 970 if you have a good card like the Vapor-X or the PCS+ and there is overclocking headroom. The 970 has newer features and is much more efficient, so it is definitely a better buy. I think the price cut makes sense, but AMD is offering you better raw performance at a slightly lower price, whereas Nvidia has a newer, more up-to-date product (DisplayPort 1.3, HDMI 2.0, etc.) that doesn't cost that much more and, being a generation newer, will most likely be a better long-term investment.

 CPU:  Intel i7-4790K      Cooler:  Noctua NH-D14     GPU: ZOTAC GTX 1070 TI MINI     Motherboard:  ASUS Z97 Gryphon     RAM:  32GB G Skill Trident X     

Storage: 2x 512GB Samsung 850 EVO (RAID 0) / 2TB Seagate Barracuda     PSU: 850W EVGA SuperNova G2     Case: Fractal Design Node 804


The R9 290 still beats the 970 if you have a good card like the Vapor-X or the PCS+ and there is overclocking headroom. The 970 has newer features and is much more efficient, so it is definitely a better buy. I think the price cut makes sense, but AMD is offering you better raw performance at a slightly lower price, whereas Nvidia has a newer, more up-to-date product (DisplayPort 1.3, HDMI 2.0, etc.) that doesn't cost that much more and, being a generation newer, will most likely be a better long-term investment.

It does very much depend on the overclocking.

For example the Asus Strix 970 beats the Sapphire Vapor-X marginally at stock clocks at 1440p.

It beats it by almost 10% at 1080p settings.



Don't you love it when they cherry pick Mantle-enabled games.

 

How about a brand new Warner Brothers "Nvidia game"? The Sapphire R9 290 Tri-X I bought for 289.99 has better temps than some 970s, and it blows away most 290Xs. It has better temps than 780 Tis. Anandtech has a review of the Tri-X. The thing runs in the low 70s C on an OC and the high 60s C stock. It has awards from all over the damn place. Some R9 290s had some horrible cooling on them. That doesn't make them all bad. 

 

It can do pretty much a locked 30 in Shadow of Mordor at 4K resolution. Linus/Slick did benchmarks.

http://www.youtube.com/watch?v=rp5meTIP3Pk&list=UUXuqSBlHAE6Xw-yeJA0Tunw

 

Hell, it can do Ultra "6 gig textures" (bull@$%@ requirement) at pretty playable FPS at 4K. For 289 bucks this thing is a bargain. Do you need to check reviews of the R9 290s and get one of the better ones? Sure. You should do that with all cards though. I wouldn't buy the EVGA 970 this time around over the Gigabyte/MSI, and I think EVGA is a fantastic company. Sometimes third-party vendors swing and miss. It also came with Star Citizen, which I was 100 percent going to buy anyway. I have a 770, and the one freakin' game I had fun with PhysX in was PlanetSide 2, and they removed the effects a long time ago because it is coming to PS4, which uses AMD 7000 series architecture just like the XB1. Add to that? I am guessing the emulators will run first on AMD cards, and I like my emulation.

 

 

Do I think AMD's software suite could be better? Hell yeah. The software suite is where I think they are the most behind, to be honest. For a complete amateur, Nvidia DSR is awesome. People like me on this forum? We've been downsampling for years, though. The tech isn't night and day, and from all the articles I read on the Sapphire Tri-X, the power usage is completely overblown on a decently cooled R9 290. Bad cooling means fans spinning like crazy, which inflates the hell out of the power usage.

 

If this was 500 dollars vs 600 dollars, then I might care about the newer Maxwell features. This is 289 vs 370 for the good Nvidia cards, and those cards do not exist at that price (not in stock) and are hitting up to 600 dollars for PRE-ORDERS from sites like NCIX. 

 

I am not going to wait months to get the exact same performance I can get today 100 bucks cheaper, lol. I also will be damned before I buy a price-gouged card (which is not Nvidia's fault). Price gouging only works because people buy the cards at that price.

CPU:24/7-4770k @ 4.5ghz/4.0 cache @ 1.22V override, 1.776 VCCIN. MB: Z87-G41 PC Mate. Cooling: Hyper 212 evo push/pull. Ram: Gskill Ares 1600 CL9 @ 2133 1.56v 10-12-10-31-T1 150 TRFC. Case: HAF 912 stock fans (no LED crap). HD: Seagate Barracuda 1 TB. Display: Dell S2340M IPS. GPU: Sapphire Tri-x R9 290. PSU:CX600M OS: Win 7 64 bit/Mac OS X Mavericks, dual boot Hackintosh.


@deathjester Well damn that was a mouthful... The Tri-X is a great cooler, but honestly I'm not even in the market for a graphics card right now. Those last me at least 4 years, preferably more if I can get a cheaper one down the road to SLI with (given I've never bought an AMD GPU).



DDR stands for Double Data Rate, which specifically refers to SDRAM that is double pumped.

No.

SDR200 -> 100 MHz -> 100 MHz -> 100Mb/s because it's not DDR. If it was dual pumped, it would be 100MHz -> 200MHz -> 200Mb/s because it's SDR.

Looking at DDR200 -> 100MHz -> 100 MHz (no pumping) -> DDR -> 200Mb/s

Looking at DDR400 -> 100MHz -> 200MHz (here's the pumping) -> DDR -> 400Mb/s

Looking at DDR1600 -> 200MHz -> 800MHz (here's the pumping) -> DDR -> 1600Mb/s


- QDR 200 MHz -> 800MHz -> (quad pumped) -> 3200Mb/s (quad data rate)

 

 

Fire up CPU-Z and check your memory; it'll report DRAM frequency at half what the memory advertises (so if your memory is DDR3-1333, the clock frequency will be 666 MHz).

 

That's not reporting the actual base clock.

 

GDDR5 is different and a bit more complicated, because it uses more clock signals. But if you look at the CK, which is the clock frequency usually referred to (along with the data rate that should be in MT/s or GT/s instead of MHz or GHz), then it is effectively quad pumped. That's basically why it has more clock signals; it transfers data on the rising and falling edges of two separate clock signals.

 

Like you referred to earlier, DDR = double data rate = double pump. GDDR5 = quad pumped? That doesn't make much sense. In that case it would be QDR, which stands for quad data rate. And no, GDDR5 is still double data rate, but the clock is quad pumped. It uses the same number of clock signals, which is 2; it's 1 for SDR and 4 for QDR.


Yeah agreed, so does Nvidia. Drivers can't kill cards. Fan control is done by the BIOS; tools like MSI AB can override it, but drivers have no reason to do this. The 600 series and up have resistors preventing the voltage from going above 1.21V, and the BIOS is locked on the 600 series to around 1.175V; again, there's no reason a driver should overvolt the GPU. Stop sucking AMD's dick for a second; lately you were proven wrong about your Maxwell perf/watt claims by me, and you decided to copy-paste them into a new thread again.

 

 

And you can't get around the fact that Nvidia uses compression; their 256-bit bus performs like a 384-bit one.

Also, you're completely wrong about Gbps and GB/s. A 384-bit bus at 7000MHz in GDDR5 will have 336GB/s. It's not in Gbps. Since most 290s hardly go above 6000MHz, a 780 Ti with Samsung ICs that easily hits 8000MHz will have more bandwidth. Your formula is wrong too, not that I checked it, cba.

It's memory speed x DDR number x bus width. 

A 780 ti : 384bit * 5 * 1750MHz = 336GB/s

A 970: 256bit * 5 * 1750MHz = 224GB/s

A 290x: 512bit * 5 * 1375MHz = 352GB/s

A 290x average OC is ~6000MHz: 512 * 5 * 1500MHz = 384GB/s

A 780 ti average OC ~7600MHz: 384bit * 5 * 1900MHz = 384GB/s

Side note: as we know, the 780 Ti sits at stock at 7000MHz; since GDDR5 is quad pumped -> 7000/4 = 1750MHz. 

It's not even about the total bandwidth; it's about how big the memory bus itself is. It's the bridge between the GPU and memory. Have your memory as fast as you want; without a fast memory controller you aren't getting anywhere.

Software CAN kill hardware; you're delusional if you think otherwise.

Are you dumb? 7GHz GDDR5 memory is capable of 7Gbps; even Nvidia says "7.0Gbps DRAM".

[Image: BandwidthSavings.png]

AMD does color compression as well, and it's better than Nvidia's.

[Image: AMD-Radeon-R9-285-Tonga-Color-Compressio…]
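Part of this disagreement is units: "7 Gbps" is the rate per data pin, while GB/s is the whole bus. Converting one to the other reconciles both figures (a hedged sketch; the 256-bit width is assumed for a 970-class bus):

```python
# Per-pin data rate (Gbps) vs total bus bandwidth (GB/s)
per_pin_gbps = 7.0        # "7.0 Gbps DRAM", as quoted from Nvidia's slide
bus_width_bits = 256      # data pins on a 970-class bus (assumed for illustration)
total_gbs = per_pin_gbps * bus_width_bits / 8   # 8 bits per byte
print(total_gbs)  # 224.0 GB/s
```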


 

No.

SDR200 -> 100 MHz -> 100 MHz -> 100Mb/s because it's not DDR. If it was dual pumped, it would be 100MHz -> 200MHz -> 200Mb/s because it's SDR.

Looking at DDR200 -> 100MHz -> 100 MHz (no pumping) -> DDR -> 200Mb/s

Looking at DDR400 -> 100MHz -> 200MHz (here's the pumping) -> DDR -> 400Mb/s

Looking at DDR1600 -> 200MHz -> 800MHz (here's the pumping) -> DDR -> 1600Mb/s


- QDR 200 MHz -> 800MHz -> (quad pumped) -> 3200Mb/s (quad data rate)

 

 

 

That's not reporting the actual base clock.

 

 

Like you referred to earlier, DDR = double data rate = double pump. GDDR5 = quad pumped? That doesn't make much sense. In that case it would be QDR, which stands for quad data rate. And no, GDDR5 is still double data rate, but the clock is quad pumped. It uses the same number of clock signals, which is 2; it's 1 for SDR and 4 for QDR.

 

 

You're looking at the wrong clocks. You need to look at the I/O clocks.

 

GDDR5 transfers data on both the rising and falling edge of WCK01 and WCK23, so it is double pumped with respect to each of those clock signals.

 

A clock signal cannot be quad pumped, that expression makes no sense. Pumping refers to transferring data multiple times in a clock cycle, so it cannot refer to just the clock signal (or signals) alone.


You're looking at the wrong clocks. You need to look at the I/O clocks.

 

GDDR5 transfers data on both the rising and falling edge of WCK01 and WCK23, so it is double pumped with respect to each of those clock signals.

 

A clock signal cannot be quad pumped, that expression makes no sense. Pumping refers to transferring data multiple times in a clock cycle, so it cannot refer to just the clock signal (or signals) alone.

A clock cycle can be quad-pumped by using the middle states as well as peaks/valleys.



A clock cycle can be quad-pumped by using the middle states as well as peaks/valleys.

 

The clock signal itself isn't quad-pumped, it's just a signal. The quad pumping is when you perform an action (transfer data) 4 times in each cycle of that clock signal.

 

Semantics aside, GDDR5 is not quad-pumped in that sense. It only transfers data on the rising and falling edges of the clock signal, not on the peaks or valleys. It achieves a similar effect though, by having two clock signals phase shifted by 90 degrees, and transferring on the rising and falling edge of both of them.

 

By the way, these clock signals are not regular sine waves, they are square waves. So AFAIK there's no easy way to tell when the signal hits the peak or the trough.
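The "two clocks shifted by 90 degrees" idea above can be enumerated directly (a toy sketch; edge times are expressed as fractions of one clock period):

```python
# Edge times for two square-wave clocks phase-shifted by 90 degrees:
# each contributes a rising and a falling edge per period, giving four
# evenly spaced transfer points per cycle.
wck01_edges = [0.0, 0.5]              # rising at 0, falling at T/2
wck23_edges = [0.25, 0.75]            # same clock shifted by a quarter period
transfer_points = sorted(wck01_edges + wck23_edges)
print(transfer_points)                # [0.0, 0.25, 0.5, 0.75]
```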


It does very much depend on the overclocking.

For example the Asus Strix 970 beats the Sapphire Vapor-X marginally at stock clocks at 1440p.

It beats it by almost 10% at 1080p settings.

 

Fair enough. I do agree the extent of each overclock is a huge factor, but with price factored in, I think the 290 will retain the edge in performance per dollar while remaining behind in features. It will be interesting to see if it gets rebranded as an R9 380 though, and how much that costs.



1500MHz*4*384 isn't giving you 288GB/s, which is the 780's bandwidth. It's (base clock x pump amount x bus width) / 8. For GDDR5, using 5 instead of 4 and not dividing by 8 will give the same result. 

Your statement here makes no sense.

 

Lets look at your example of the 780. It has a 288GB/s (GigaBYTE) bandwidth. It has a 384-Bit bus.

 

So because we're going from Bits to Bytes, you need to divide the final value by 8.

 

1500MHz * 4 * 384 Bits = 2,304,000 Million Bits/s (it's millions of bits because we're dealing with MHz rather than straight-up Hz, so the answer needs to take that conversion into account)

2,304,000 / 8 = 288,000 Million Bytes/s = 288 GB/s

 

If you take your calc:

1500MHz * 5 * 384 = 2,880,000

The problem with this figure is that it makes no sense. It's not bits (or bytes), because that would be 2.88Mb/s (or MB/s - both wrong). It's also not Million Bits, because that would equal 2.88Tb/s (or TB/s), which would be INSANE! (One day we'll get 2.88 TB/s GPU bandwidth).

 

So your math just doesn't work out in any way that I can see. If you have a source for your "quint pump" theory, then please post it. But everything I've looked up says that GDDR5 is quad pumped (which is sort of a misnomer, but it's still effectively multiplied by 4).

 

I'll just leave this here:

About 2 and a half minutes in he starts the math of it.

 

The confusion on this particular topic is not surprising. In doing my own research, I've come across at least 3 or 4 different calculations. Of which, I'm sure several are wrong. I even saw one calculation that used "double pumping" only. Finding good information that is accurate is difficult.
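For what it's worth, the two formulas being argued over give the same number once the unit bookkeeping is made explicit: ×4 with a bit-to-byte ÷8, versus the ×5 shortcut, which only matches if you also shift the decimal four places (5/10000 == 4/8000). A quick check:

```python
bus_bits, base_mhz = 384, 1500

# Standard form: base clock x 4 transfers, x bytes per transfer, then MB/s -> GB/s
gbs_standard = base_mhz * 4 * (bus_bits / 8) / 1000

# The "x5" shortcut from earlier in the thread, with its implicit /10,000
gbs_shortcut = base_mhz * 5 * bus_bits / 10_000

print(gbs_standard, gbs_shortcut)  # 288.0 288.0
```

Both reproduce the 780's 288 GB/s, which is why the two camps keep getting the same answers while disagreeing about the formula.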



Would some of you care to share the retailers/etailers that have these prices in effect? Personally in Canada I shop at NCIX and Newegg. I am not really seeing the prices drops.

