AMD Begins Massive Wave of Price Cuts on R9 290x and 290

You're so knowledgeable about some things, but you seem completely incompetent with others... no offense meant here, but the other day you called OS X Linux-based too. Just a bit inconsistent for a CS major, is all...

 

Objective-C is a compiled language, originally a competitor to C++. It took bits of Smalltalk and added them to C to make an object-oriented language. It was never considered particularly better than C++, though; hence the adage "Objective-C took the runtime speed of Smalltalk combined with the syntactical elegance of C" (i.e., the worst parts of both languages). Apple is really the only company using it at this point.

 

I don't know where you're getting C++ having a lot of overhead from, though. Windows AND OS X use it for at least something. Windows is mostly C++, I believe. Linux is pure C, though; in Windows and OS X, C is only used for the kernels.

 

http://stackoverflow.com/questions/580292/what-languages-are-windows-mac-os-x-and-linux-written-in

 

Basically they have an LLVM bitcode compiler for Objective-C and now Swift, so theoretically they can recompile on any platform with an LLVM-bitcode-to-native compiler. They love LLVM; they hired the guy who invented it.

 

Come to think of it, you probably confused Objective-C with C#, Microsoft's answer to Java, which is JIT-compiled on the .NET virtual machine.

The reason C++ has overhead has to do with its direct support for generics, something C doesn't do. OSes don't use object-oriented design in the classical sense. There's a strict set of things an OS supports, which is why software intercompatibility is a real bitch to design for. This is also why UIs are generally limited in real-time customizability.

 

The answers to this question explain what I mean. OS kernels actually try to avoid methods/functions as much as possible, and simply use global variables and clever inline jumps to move from place to place to avoid pushing and popping things on the stack. Doing that is what makes (or made) C++ an intolerable OS language for so long.

 

http://stackoverflow.com/questions/144993/how-much-overhead-is-there-in-calling-a-function-in-c
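To make the cost of a call concrete, here's a rough Python sketch of the same idea (`add` is just an illustrative name of my own, and Python's call overhead is far larger than C's, but the shape of the effect is the same): the wrapped version pays for the call machinery on every single iteration.

```python
import timeit

# Hypothetical micro-benchmark: the same arithmetic done inline vs. wrapped
# in a function. Absolute numbers are machine-dependent; the point is only
# that the call itself adds measurable overhead on top of the actual work.

def add(a, b):
    return a + b

inline_time = timeit.timeit("x = 3 + 4", number=1_000_000)
call_time = timeit.timeit("x = add(3, 4)", globals=globals(), number=1_000_000)

print(f"inline: {inline_time:.3f}s  call: {call_time:.3f}s")
```

Kernels take this to the extreme, which is part of why heavy C++ abstraction (virtual calls especially) was long considered a poor fit there.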

 

Also, no, I knew about C#, but I've taken looks at Objective-C code and said to myself, "you'd have to be some sort of masochist to choose to code in this language." So I never decided to learn more about it. Then you have MindFuck (yes, it's a language), which was clearly written for the lulz.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd

The reason C++ has overhead has to do with its direct support for generics, something C doesn't do. OSes don't use object-oriented design in the classical sense. There's a strict set of things an OS supports, which is why software intercompatibility is a real bitch to design for. This is also why UIs are generally limited in real-time customizability.

 

The answers to this question explain what I mean. OS kernels actually try to avoid methods/functions as much as possible, and simply use global variables and clever inline jumps to move from place to place to avoid pushing and popping things on the stack. Doing that is what makes (or made) C++ an intolerable OS language for so long.

 

http://stackoverflow.com/questions/144993/how-much-overhead-is-there-in-calling-a-function-in-c

 

Also, no, I knew about C#, but I've taken looks at Objective-C code and said to myself, "you'd have to be some sort of masochist to choose to code in this language." So I never decided to learn more about it. Then you have MindFuck (yes, it's a language), which was clearly written for the lulz.

I know all of this already :P 

 

I was basically just telling you the same thing. Kernels are written in pure C; the rest (in Windows and OS X) is written in an object-oriented C variant.

 

The language is brainfuck, not MindFuck.

"You have got to be the biggest asshole on this forum..."

-GingerbreadPK

sudo rm -rf /

I know all of this already :P

 

I was basically just telling you the same thing. Kernels are written in pure C; the rest (in Windows and OS X) is written in an object-oriented C variant.

 

The language is brainfuck, not MindFuck.

It's 12:31. Forgive me for slightly messing up the name of an esoteric laughingstock programming language that no one but the insane/lulzy use.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd

It's 12:31. Forgive me for slightly messing up the name of an esoteric laughingstock programming language that no one but the insane/lulzy use.

It's used quite frequently in code obfuscation contests. In one I spectated, somebody wrote a cross compiler to it from Python :D

"You have got to be the biggest asshole on this forum..."

-GingerbreadPK

sudo rm -rf /

It's used quite frequently in code obfuscation contests. In one I spectated, somebody wrote a cross compiler to it from Python :D

>.> For Pete's sake. What happened to using lousy variable/function names, passing quadruple pointer references, implicit UTF-to-short math, and inline assembly?

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd

P-P-P PRICE WARS!!!!

C-C-CHANGES

 


i5 4670k, GTX 970, 12GB 1600, 120GB SSD, 240GB SDD, 1TB HDD, CM Storm Quickfire TK, G502, VG248QE, ATH M40x, Fractal R4


i5 4278U, Intel Iris Graphics, 8GB 1600, 128GB SSD, 2560x1600 IPS display, Mid-2014 Model


All the parts are here, just need to get customized cords to connect the motherboard to the front panel.

>.> For Pete's sake. What happened to using lousy variable/function names, passing quadruple pointer references, implicit UTF-to-short math, and inline assembly?

I've seen the future of writing fucked up code. This is it.

"You have got to be the biggest asshole on this forum..."

-GingerbreadPK

sudo rm -rf /

I've seen the future of writing fucked up code. This is it.

But there's no art to it. It's just difficult.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd

But there's no art to it. It's just difficult.

I think there's an art to having those kinds of ideas in the first place.

"You have got to be the biggest asshole on this forum..."

-GingerbreadPK

sudo rm -rf /

I think there's an art to having those kinds of ideas in the first place.

It's no different than assembly, just with a reduced instruction set.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd

It's no different than assembly, just with a reduced instruction set.

Brainfuck like assembly? What? Your brain is fucked dude. Go to bed :D

"You have got to be the biggest asshole on this forum..."

-GingerbreadPK

sudo rm -rf /

Brainfuck like assembly? What? Your brain is fucked dude. Go to bed :D

No no no, think about it. You do pointer arithmetic and mark the starts and ends of loops. It's just like doing the jumps in assembly and working on registers, except in Brainfuck it's implicit instead of explicit.
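That mapping is easy to see in a minimal interpreter sketch (`bf_run` is my own hypothetical helper, not a standard tool): `[` and `]` become entries in a jump table, exactly like branch targets in assembly, and `>`/`<` are the pointer arithmetic.

```python
def bf_run(program: str, input_bytes: bytes = b"") -> str:
    """Tiny Brainfuck interpreter: an array of byte cells, a data pointer,
    and eight single-character instructions. [ and ] really are just
    conditional jumps, like branches in assembly."""
    # Pre-compute matching bracket positions (the "jump table").
    jumps, stack = {}, []
    for pos, ch in enumerate(program):
        if ch == "[":
            stack.append(pos)
        elif ch == "]":
            start = stack.pop()
            jumps[start], jumps[pos] = pos, start

    tape = [0] * 30000
    ptr = pc = inp = 0
    out = []
    while pc < len(program):
        ch = program[pc]
        if ch == ">":
            ptr += 1
        elif ch == "<":
            ptr -= 1
        elif ch == "+":
            tape[ptr] = (tape[ptr] + 1) % 256
        elif ch == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif ch == ".":
            out.append(chr(tape[ptr]))
        elif ch == ",":
            tape[ptr] = input_bytes[inp] if inp < len(input_bytes) else 0
            inp += 1
        elif ch == "[" and tape[ptr] == 0:
            pc = jumps[pc]          # jump forward past the loop
        elif ch == "]" and tape[ptr] != 0:
            pc = jumps[pc]          # jump back to the loop start
        pc += 1
    return "".join(out)

# 8 * 8 + 1 = 65 = ASCII 'A'
print(bf_run("++++++++[>++++++++<-]>+."))  # -> A
```

The loop here multiplies 8 by 8 into the next cell, adds 1, and prints, which is basically how the famous Hello World program builds up each character.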

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd

No no no, think about it. You do pointer arithmetic and mark the starts and ends of loops. It's just like doing the jumps in assembly and working on registers, except in Brainfuck it's implicit instead of explicit.

It's not true of anything else (well, not that I can think of), but I think Brainfuck might be less readable than assembler. :)

"You have got to be the biggest asshole on this forum..."

-GingerbreadPK

sudo rm -rf /

It's not true of anything else (well, not that I can think of), but I think Brainfuck might be less readable than assembler. :)

Some of the looping makes mild sense, but I can't make heads or tails of the Hello World program.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd

Some of the looping makes mild sense, but I can't make heads or tails of the Hello World program.

That's when you know you've made a language that's impossible to use. When even a CS major can't figure out how to Hello World you know you're fucked. 

 

I suppose that was their intention, though. So mission accomplished for the Brainfuck developers.

"You have got to be the biggest asshole on this forum..."

-GingerbreadPK

sudo rm -rf /

I think MSI has a multimedia mobile workstation with a mobile quadro in it, but it works like shit. If only Intel had bought Nvidia back when they weren't competing... It'd be great to watch Carrizo go up against an x86-enabled Tegra K1.

 

Yeah, I'm probably not going to want (or need) a Quadro, even a mobile one. If anything, most of my personal tasks and potential workloads will probably work fine off regular GTX or Radeon cards, seeing as I have no need for render farms.

 

Though, hopefully, the performance increases we're currently seeing in desktop Maxwell will translate to the massive gains rumored for mobile Maxwell. Until then, it's wait and see (still saving up my cash in general).

We all need a daily check-up from the neck up to avoid stinkin' thinkin' which ultimately leads to the hardening of attitudes. - Zig Ziglar

The sad fact about atheists is that they stand for nothing while standing against things that have brought much good to the world. Now ain't that sad. - Anonymous

Replace fear with faith and fear will disappear. - Billy Cox  ......................................Also, Legalism, Education-bred Arrogance and Hubris-based Assumption are BULLSHIT.

At 4K, the reference stock-clocked GTX 970 gives the reference stock-clocked R9 290X a run for its money:

Batman: Arkham Origins 3840x2160

GTX 970: 43.1 FPS

R9 290X: 44.6 FPS

Battlefield 4 3840x2160

GTX 970: 24.5 FPS

R9 290X: 24.9 FPS

Far Cry 3 3840x2160

GTX 970: 31.1 FPS

R9 290X: 29.8 FPS

Grid 2 3840x2160

GTX 970: 53.9 FPS

R9 290X: 55.3 FPS

Metro: Last Light 3840x2160

GTX 970: 26.3 FPS

R9 290X: 27.5 FPS

Tomb Raider 3840x2160

GTX 970: 34.9 FPS

R9 290X: 37.4 FPS

Watch Dogs 3840x2160

GTX 970: 28.4 FPS

R9 290X: 29.2 FPS

Let me make two points to underline the discussion here. The first is that there truly is no reference GTX 970: the GTX 970 with the Titan-esque cooler isn't sold anywhere. You're left with good custom-cooled versions and crappy ones that have literally been equipped with GTX 760 blower-style coolers, which are hopelessly inadequate.

The second point is that the 290X was never really a good value compared to the R9 290, even before the 900 series came out. Now, if you look at Gigabyte Windforce R9 290 vs Windforce GTX 970 benchmarks, you'll find that they both perform almost exactly the same, but you're saving $59 with the R9 290 and getting a Never Settle Gold bundle, which includes three triple-A games for free.

R9 290 vs GTX 970 in power, you're looking at 216W and 178W respectively: a 38W reduction, or about 17.5%. So you'll have to decide whether that power reduction is worth the $59 premium, because otherwise the cards perform almost identically, unless we start talking about overclocking, in which case you stand to gain more from the 290 again, thanks to its massive memory bus and extremely conservative factory clock speeds.

 

According to AnandTech, the GTX 980 and GTX 970 both come with a voltage setting of 1.25V right out of the factory, which is considerably higher than the 780 Ti and the 780, and even AMD's 290X and 290.

The new cards have obviously been pushed hard to be more competitive on the desktop, which is naturally going to eat through the overclocking headroom. There's a distinct difference between how high the GTX 900 series can clock and how well they overclock. Bumping the frequency from 1250MHz to 1450MHz might seem impressive, but it actually nets a smaller performance uplift than overclocking from 950MHz to 1150MHz. Those are the clock speeds the majority of 970s and 290s can achieve, respectively. But because the 290 starts off with a much lower clock speed and voltage setting, the card can be overclocked further than the 970 percentage-wise: 21% vs 16%, specifically. The freedom to overvolt the AMD cards by as much as you'd like also opens the door to extraordinary overclocks under water, something that's missing from the 900 series due to its very limited voltage control.
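Those percentages are just the relative clock bump over stock; a quick sanity check (the clock figures are the typical achievable speeds quoted above, not guarantees, and `oc_gain_percent` is a name of my own):

```python
def oc_gain_percent(stock_mhz: float, oc_mhz: float) -> float:
    """Relative core-clock increase, as a percentage of the stock clock."""
    return (oc_mhz - stock_mhz) / stock_mhz * 100

# Typical achievable clocks from the discussion above
print(round(oc_gain_percent(1250, 1450)))  # GTX 970: -> 16
print(round(oc_gain_percent(950, 1150)))   # R9 290: -> 21
```

Same 200MHz bump on both cards, but the lower starting clock of the 290 makes it a bigger relative gain.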

It is a good time in the GPU market. :D

9900K  / Noctua NH-D15S / Z390 Aorus Master / 32GB DDR4 Vengeance Pro 3200Mhz / eVGA 2080 Ti Black Ed / Morpheus II Core / Meshify C / LG 27UK650-W / PS4 Pro / XBox One X

Well at least AMD doesn't release drivers that actually KILL cards.

Yeah, agreed, and neither does Nvidia. Drivers can't kill cards. Fan control is done by the BIOS; tools like MSI AB can override it, but drivers have no reason to do this. The 600 series and up have resistors preventing the voltage from going above 1.21V, and the BIOS is locked to around 1.175V for the 600 series, and again, there's no reason a driver should overvolt the GPU. Stop sucking AMD's dick for a second; lately you were proven wrong about your Maxwell perf/watt claims by me, and you decided to copy-paste it into a new thread again.

 

 

You just can't get around the fact that GM204 physically has half the memory bus width of Hawaii. Also remember that overclocking the memory on the 512-bit bus of the 290 and 290X would yield double the bandwidth increase vs overclocking the memory on the 256-bit bus of the 970/980.

And you can't get around the fact that Nvidia uses compression; their 256-bit performs like a 384-bit.

Also, you're completely wrong about Gbps and GB/s. A 384-bit bus at 7000MHz GDDR5 will have 336GB/s. It's not in Gbps. Since most 290s hardly go above 6000MHz, a 780 Ti with Samsung ICs, which easily hits 8000MHz, will have more bandwidth. Your formula is wrong too, not that I checked it, cba.

It's memory speed x DDR number x bus width. 

A 780 ti : 384bit * 5 * 1750MHz = 336GB/s

A 970: 256bit * 5 * 1750 = 224GB/s

A 290x: 512bit * 5 * 1375MHz = 352GB/s

A 290x average OC is ~6000MHz: 512 * 5 * 1500MHz = 384GB/s

A 780 ti average OC ~7600MHz: 384bit * 5 * 1900MHz = 384GB/s

Side note: as we know, the 780 Ti sits at 7000MHz stock; since DDR5 is quad-pumped, 7000/4 = 1750MHz.

It's not even about the total bandwidth; it's about how big the memory bus itself is. It's the bridge between the GPU and memory. Have your memory as fast as you want; without a fast memory controller, you aren't getting anywhere.

It is a good time in the GPU market. :D

The really good time will be in a few months, when AMD releases their next gen.
For single-monitor 4K, it's enough for everything but FPS games where every last frame per second matters.

 

And yeah, $3400 for a laptop, even with 1TB of SSD and 32GB of RAM, is ridiculous when the dGPU you get is the GTX 750M... I mean, come on, the 850M's been out for a while now. Still, hopefully it will last me 6-7 years like every other computer I've ever owned, except the shitty Dell Latitude I bought for class because English teachers are dicks.

 

I disagree. It's not enough for games like Assassin's Creed IV, Tomb Raider, Company of Heroes 2, or Watch Dogs either. Turning up the resolution only to end up with lower framerates and lower graphics settings means you might as well not bother.

 

It's memory speed x DDR number x bus width. 

 

What? The DDR revision number is irrelevant. You multiply the transfer rate (e.g., 7000 MT/s = 7 GT/s) by the bus width (e.g., 384-bit) and divide by the number of bits per byte (8). For example, your calculation for the 780 Ti is 384 x 5 x 1750, but that is off by a full order of magnitude (if anything, you should be multiplying by 0.5 instead of 5, but that 0.5 just arises from the x4 from quad pumping and the /8 from converting bits to bytes; it has nothing to do with a "DDR number"). 7 x 384 / 8 gives the correct 336 GB/s.
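Put as code (a hypothetical helper of my own, just restating the arithmetic above):

```python
def bandwidth_gbs(bus_width_bits: int, transfer_rate_mts: int) -> float:
    """Peak memory bandwidth in GB/s: effective transfer rate (MT/s)
    times the bus width in bytes, scaled from MB/s to GB/s."""
    return transfer_rate_mts * (bus_width_bits / 8) / 1000

print(bandwidth_gbs(384, 7000))  # 780 Ti, 7 GT/s effective -> 336.0
print(bandwidth_gbs(256, 7000))  # GTX 970 -> 224.0
print(bandwidth_gbs(512, 5500))  # R9 290X-class at 5.5 GT/s -> 352.0
```

Note there is no GDDR revision number anywhere in the formula; the quad pumping is already baked into the effective transfer rate.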

What? The DDR revision number is irrelevant. For example, your calculation for the 780 Ti is 384 x 5 x 1750, but that is off by a full order of magnitude (if anything you should be multiplying by 0.5 instead of 5, but that 0.5 just arises from the x4 from quad pumping and the /8 from converting from bits to bytes; it has nothing to do with a "DDR number"). 7 x 384 / 8 gives the correct 336 GB/s.

If the DDR revision were irrelevant, you wouldn't have a quad pump.

If the DDR revision were irrelevant, you wouldn't have a quad pump.

 

I didn't say the DDR revision is irrelevant; I said the DDR revision number in itself is. The 5 in GDDR5 has nothing to do with how you calculate the memory bandwidth, just like the 3 in GDDR3 didn't matter. Or, an even better example: the calculation is exactly the same for DDR3 and DDR4, despite the different number.

I disagree. It's not enough for games like Assassin's Creed IV, Tomb Raider, Company of Heroes 2, or Watch Dogs either. Turning up the resolution only to end up with lower framerates and lower graphics settings means you might as well not bother.

We consider 50-65 FPS too low for Tomb Raider? And are you one of the twits who thinks you need AA at 4K? You don't.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd
