AMD responds to 1080p gaming tests on Ryzen. Supports ECC RAM. Win 10 SMT bug

3DOSH
5 minutes ago, XenosTech said:

What you say makes as much sense as Patrick calling the Zen chip a Xeon

mate, I asked you to point me to the specifics of what I said that don't sit well with you

maybe we'll sort it out

 

but all I see from you is randomness about another forum member I have nothing in common with

so, care to get back on topic? or shall I ignore you from now on?


2 minutes ago, Zi6iX said:

I do believe that the low performance in gaming is due to some Intel optimization. I mean, look at Cinebench, CPU-Z, 3DMark, and all of those neutral tests: they're pretty much on par with Intel. We also know that the IPC is about 6% lower than Skylake/Kaby Lake.

People keep claiming software bias but I don't buy it. AMD has continually used this excuse and it's getting old.

 

Which tests are "neutral"?


2 minutes ago, Zi6iX said:

I do believe that the low performance in gaming is due to some Intel optimization. I mean, look at Cinebench, CPU-Z, 3DMark, and all of those neutral tests: they're pretty much on par with Intel. We also know that the IPC is about 6% lower than Skylake/Kaby Lake.

But that's irrelevant here. It's easier to ignore the facts and keep spouting how much better Intel is at everything. Everyone here seems to be experts at CPU design and can spin something better out of their arse.


8 minutes ago, zMeul said:

mate, I asked you to point me to the specifics of what I said that don't sit well with you

maybe we'll sort it out

 

but all I see from you is randomness about another forum member I have nothing in common with

so, care to get back on topic? or shall I ignore you from now on?

 

1 hour ago, zMeul said:

AMD blaming Intel for their poor performance in gaming .. yes, yes, what's new?

 

but wait! weren't those the same compilers used for the software where Zen showed itself to be better than Broadwell - yeah, that's what I thought

 

1 hour ago, zMeul said:

the same compilers were used in the cases where Zen beats Broadwell, so ... hmm

 

3 hours ago, zMeul said:

the 1800X is more expensive than the 7700K, and the 1700X is still more expensive than the 7700K

the 1700 is $10 cheaper than the 7700K, but also slower than the 7700K

what do you choose? I know .. the A10-7890K, because it's not in the margin of error xD

This one takes the cake though

3 hours ago, zMeul said:

except the 7700k beats it:

[image: 85887.png]

 

having more cores does not equate to having more performance ;)

I really can't do this all day, I have work to do. Networks to fix, server deployments to do

CPU: Intel i7 7700K | GPU: ROG Strix GTX 1080Ti | PSU: Seasonic X-1250 (faulty) | Memory: Corsair Vengeance RGB 3200Mhz 16GB | OS Drive: Western Digital Black NVMe 250GB | Game Drive(s): Samsung 970 Evo 500GB, Hitachi 7K3000 3TB 3.5" | Motherboard: Gigabyte Z270x Gaming 7 | Case: Fractal Design Define S (No Window and modded front Panel) | Monitor(s): Dell S2716DG G-Sync 144Hz, Acer R240HY 60Hz (Dead) | Keyboard: G.SKILL RIPJAWS KM780R MX | Mouse: Steelseries Sensei 310 (Striked out parts are sold or dead, awaiting zen2 parts)


Let the damage control begin!  It's like AMD has a flow chart of excuses when their engineering can't cash the checks their marketing writes.  Been like this forever.

Workstation:  13700k @ 5.5Ghz || Gigabyte Z790 Ultra || MSI Gaming Trio 4090 Shunt || TeamGroup DDR5-7800 @ 7000 || Corsair AX1500i@240V || whole-house loop.

LANRig/GuestGamingBox: 9900nonK || Gigabyte Z390 Master || ASUS TUF 3090 650W shunt || Corsair SF600 || CPU+GPU watercooled 280 rad pull only || whole-house loop.

Server Router (Untangle): 13600k @ Stock || ASRock Z690 ITX || All 10Gbe || 2x8GB 3200 || PicoPSU 150W 24pin + AX1200i on CPU|| whole-house loop

Server Compute/Storage: 10850K @ 5.1Ghz || Gigabyte Z490 Ultra || EVGA FTW3 3090 1000W || LSI 9280i-24 port || 4TB Samsung 860 Evo, 5x10TB Seagate Enterprise Raid 6, 4x8TB Seagate Archive Backup ||  whole-house loop.

Laptop: HP Elitebook 840 G8 (Intel 1185G7) + 3080Ti Thunderbolt Dock, Razer Blade Stealth 13" 2017 (Intel 8550U)


2 minutes ago, Memories4K said:

[image: gallery_141885_3517_64378.jpg]

I mean shit, a Core 2 will do well at 4K. xD 

CPU: Intel Core i7 7820X Cooling: Corsair Hydro Series H110i GTX Mobo: MSI X299 Gaming Pro Carbon AC RAM: Corsair Vengeance LPX DDR4 (3000MHz/16GB 2x8) SSD: 2x Samsung 850 Evo (250/250GB) + Samsung 850 Pro (512GB) GPU: NVidia GeForce GTX 1080 Ti FE (W/ EVGA Hybrid Kit) Case: Corsair Graphite Series 760T (Black) PSU: SeaSonic Platinum Series (860W) Monitor: Acer Predator XB241YU (165Hz / G-Sync) Fan Controller: NZXT Sentry Mix 2 Case Fans: Intake - 2x Noctua NF-A14 iPPC-3000 PWM / Radiator - 2x Noctua NF-A14 iPPC-3000 PWM / Rear Exhaust - 1x Noctua NF-F12 iPPC-3000 PWM


7 minutes ago, XenosTech said:

I really can't do this all day, I have work to do. Networks to fix, server deployments to do

  1. the compilers developers use for <insert random software name> are the same ones game developers use - to claim that game developers use "special" compilers provided by Intel to skew the results in their favor is utter bullshit; this is especially damning in games where AMD worked closely with the game developers and Zen still showed poor performance
  2. ^ takes care of this too
  3. it was sort of a joke, I don't recall the exact context
  4. yes, having more cores does not translate into better performance - just look at the 6900K vs 7700K in gaming, for example; Watch Dogs 2 is a game that uses all the cores of the 1800X and yet it still gets stomped by the 7700K

1 minute ago, zMeul said:
  1. the compilers developers use for <insert random software name> are the same ones game developers use - to claim that game developers use "special" compilers provided by Intel to skew the results in their favor is utter bullshit; this is especially damning in games where AMD worked closely with the game developers and Zen still showed poor performance
  2. ^ takes care of this too
  3. it was sort of a joke, I don't recall the exact context
  4. yes, having more cores does not translate into better performance - just look at the 6900K vs 7700K in gaming, for example; Watch Dogs 2 is a game that uses all the cores of the 1800X and yet it still gets stomped by the 7700K

All this time you were jumping up and down saying the 1800X was bad because it loses to the 7700K, but now you finally acknowledge that the 6900K is getting creamed by the 7700K... Interesting



Just now, XenosTech said:

6900k is getting creamed by the 7700k... Interesting

was this not the case!? did you see me say otherwise?!


24 minutes ago, M.Yurizaki said:

Not really number crunching. It's more like it shifted the burden of providing work for the GPU from the API to the application. The API opens more channels and defines new data sets so the driver can compile them into GPU tasks faster.

 

However, that actually makes it potentially more sensitive to core count. But at the moment, most games only implement three workload queues, which three threads can handle easily.

 

Yes. x86 only defines the operation, the input, and the expected output. It does not define how it should be implemented. Some operations on Ryzen can be faster or slower than the same operation on Skylake.

Technically all processors do is number crunching

Main Gaming PC - i9 10850k @ 5GHz - EVGA XC Ultra 2080ti with Heatkiller 4 - Asrock Z490 Taichi - Corsair H115i - 32GB GSkill Ripjaws V 3600 CL16 OC'd to 3733 - HX850i - Samsung NVME 256GB SSD - Samsung 3.2TB PCIe 8x Enterprise NVMe - Toshiba 3TB 7200RPM HD - Lian Li Air

 

Proxmox Server - i7 8700k @ 4.5Ghz - 32GB EVGA 3000 CL15 OC'd to 3200 - Asus Strix Z370-E Gaming - Oracle F80 800GB Enterprise SSD, LSI SAS running 3 4TB and 2 6TB (Both Raid Z0), Samsung 840Pro 120GB - Phanteks Enthoo Pro

 

Super Server - i9 7980Xe @ 4.5GHz - 64GB 3200MHz Cl16 - Asrock X299 Professional - Nvidia Telsa K20 -Sandisk 512GB Enterprise SATA SSD, 128GB Seagate SATA SSD, 1.5TB WD Green (Over 9 years of power on time) - Phanteks Enthoo Pro 2

 

Laptop - 2019 Macbook Pro 16" - i7 - 16GB - 512GB - 5500M 8GB - Thermal Pads and Graphite Tape modded

 

Smart Phones - iPhone X - 64GB, AT&T, iOS 13.3 iPhone 6 : 16gb, AT&T, iOS 12 iPhone 4 : 16gb, AT&T Go Phone, iOS 7.1.1 Jailbroken. iPhone 3G : 8gb, AT&T Go Phone, iOS 4.2.1 Jailbroken.

 


20 minutes ago, Memories4K said:

[image: gallery_141885_3517_64378.jpg]

lol did she really say that?

"Ryzen is doing really well in 1440p and 4K gaming when the applications are more graphics bound" - Dr. Lisa Su, 2017


1 minute ago, zMeul said:
  1. the compilers developers use for <insert random software name> are the same ones game developers use

The same code compiled for different ARM cores is optimized for the target processor. It will still run on multiple variants, but with lower performance. x86 is the same: code can be targeted towards either Intel or AMD, or you can have a more generic output that takes advantage of no optimisations, but is guaranteed to work just as badly on either.

 

There is likely no compiler in widespread use currently that can compile code which takes advantage of Zen. In fact, any AMD optimisation is likely to be aimed towards Bulldozer, and may assume only 4 FPUs are available, when I believe that the Ryzen 8c/16t has 16 FPUs.


3 minutes ago, Memories4K said:

lol. I'm going to add that to my signature as a quote. ty

"Ryzen is doing really well in 1440p and 4K gaming when the applications are more graphics bound" - Dr. Lisa Su, 2017


6 minutes ago, Curious Pineapple said:

There is likely no compiler in widespread use currently that can compile code which takes advantage of Zen. In fact, any AMD optimisation is likely to be aimed towards Bulldozer, and may assume only 4 FPUs are available, when I believe that the Ryzen 8c/16t has 16 FPUs.

Microsoft's compilers are the ones used and MS' compilers are agnostic

MS' compilers for Windows are centered around the x86 arch - why are you even bringing up ARM!?

x86 was developed by Intel .. tough luck

 

if AMD makes changes in the logic of the x86 arch, it's on them

what MS can do is issue microcode updates for the kernel to work around the issues - Zen has been 5 years in development and you're telling me that only today they found out they're having issues?!


52 minutes ago, MyName13 said:

Is software optimization for an x86 CPU architecture a thing? Isn't software supposed to be optimized to use additional cores and threads instead?

Yes, I've done a small bit of game engine modification myself, jumping into the source code of id Tech engines, Source, etc. The code detects the CPU brand and runs brand-specific operations and code to do the same thing; one path may be faster on Intel or AMD, but they yield the same/similar results.

Spoiler

Cpu: Ryzen 9 3900X – Motherboard: Gigabyte X570 Aorus Pro Wifi  – RAM: 4 x 16 GB G. Skill Trident Z @ 3200mhz- GPU: ASUS  Strix Geforce GTX 1080ti– Case: Phankteks Enthoo Pro M – Storage: 500GB Samsung 960 Evo, 1TB Intel 800p, Samsung 850 Evo 500GB & WD Blue 1 TB PSU: EVGA 1000P2– Display(s): ASUS PB238Q, AOC 4k, Korean 1440p 144hz Monitor - Cooling: NH-U12S, 2 gentle typhoons and 3 noiseblocker eloops – Keyboard: Corsair K95 Platinum RGB Mouse: G502 Rgb & G Pro Wireless– Sound: Logitech z623 & AKG K240


1 minute ago, goodtofufriday said:

I Feel like this needs to be here too.

 

-snip-

Can you kindly Ctrl+A > Ctrl+X > Ctrl+Shift+V, or Ctrl+A and then click Remove Formatting, every time you want to repost that?   :P 

 

BURN.jpg

 

| Intel i7-3770@4.2Ghz | Asus Z77-V | Zotac 980 Ti Amp! Omega | DDR3 1800mhz 4GB x4 | 300GB Intel DC S3500 SSD | 512GB Plextor M5 Pro | 2x 1TB WD Blue HDD |
 | Enermax NAXN82+ 650W 80Plus Bronze | Fiio E07K | Grado SR80i | Cooler Master XB HAF EVO | Logitech G27 | Logitech G600 | CM Storm Quickfire TK | DualShock 4 |


Just now, xAcid9 said:

Can you kindly Ctrl+A > Ctrl+X > Ctrl+Shift+V, or Ctrl+A and then click Remove Formatting, every time you want to repost that?   :P 

 

Lol, is it coming out white on the dark theme?

CPU: Amd 7800X3D | GPU: AMD 7900XTX


2 hours ago, Curufinwe_wins said:

Well, that and AMD's ALU/FP units are incredibly weak compared to Intel's right now...

 

For most consumer workloads it doesn't matter (more so because of poor software optimization), but it is quite reasonable to see some bottlenecks in things that should rely heavily on that (like draw call limited gaming).

 

Also yea... Zen is NOT an HPC CPU. Unfortunately...

 

I mean it wasn't unexpected... but it is a bit disappointing to see just how much weaker it is in those workflows...

 

Still great launch by AMD for consumers.

I wouldn't be surprised if the next iteration of Zen gets a larger FPU to make it more suitable for HPC. They've said they already know of a lot of small additions they want to make, so a 2x256-bit FMAC next time around seems plausible.

Make sure to quote me or tag me when responding to me, or I might not know you replied! Examples:

 

Do this:

Quote

And make sure you do it by hitting the quote button at the bottom left of my post, and not the one inside the editor!

Or this:

@DocSwag

 

Buy whatever product is best for you, not what product is "best" for the market.

 

Interested in computer architecture? Still in middle or high school? P.M. me!

 

I love computer hardware and feel free to ask me anything about that (or phones). I especially like SSDs. But please do not ask me anything about Networking, programming, command line stuff, or any relatively hard software stuff. I know next to nothing about that.

 

Compooters:

Spoiler

Desktop:

Spoiler

CPU: i7 6700k, CPU Cooler: be quiet! Dark Rock Pro 3, Motherboard: MSI Z170a KRAIT GAMING, RAM: G.Skill Ripjaws 4 Series 4x4gb DDR4-2666 MHz, Storage: SanDisk SSD Plus 240gb + OCZ Vertex 180 480 GB + Western Digital Caviar Blue 1 TB 7200 RPM, Video Card: EVGA GTX 970 SSC, Case: Fractal Design Define S, Power Supply: Seasonic Focus+ Gold 650w Yay, Keyboard: Logitech G710+, Mouse: Logitech G502 Proteus Spectrum, Headphones: B&O H9i, Monitor: LG 29um67 (2560x1080 75hz freesync)

Home Server:

Spoiler

CPU: Pentium G4400, CPU Cooler: Stock, Motherboard: MSI h110l Pro Mini AC, RAM: Hyper X Fury DDR4 1x8gb 2133 MHz, Storage: PNY CS1311 120gb SSD + two Segate 4tb HDDs in RAID 1, Video Card: Does Intel Integrated Graphics count?, Case: Fractal Design Node 304, Power Supply: Seasonic 360w 80+ Gold, Keyboard+Mouse+Monitor: Does it matter?

Laptop (I use it for school):

Spoiler

Surface book 2 13" with an i7 8650u, 8gb RAM, 256 GB storage, and a GTX 1050

And if you're curious (or a stalker) I have a Just Black Pixel 2 XL 64gb

 


9 minutes ago, zMeul said:

Microsoft's compilers are the ones used and MS' compilers are agnostic

MS' compilers for Windows are centered around the x86 arch - why are you even bringing up ARM!?

x86 was developed by Intel .. tough luck

 

if AMD makes changes in the logic of the x86 arch, it's on them

what MS can do is issue microcode updates for the kernel to work around the issues - Zen has been 5 years in development and you're telling me that only today they found out they're having issues?!

I'm bringing up ARM because it's the same thing. There is a whole lineup of ARM-based processors out there, each different, and code can be optimised for each of them. x86/x86-64 is the same: 2 different companies with products based on the same base instruction set/arch, each designed differently, and each will react differently to the same code.

 

Zen has been in development for 5 years, yes, but how long have they had actual working chips for? How many different wafers have they been through? It can take months to go from raw silicon wafers to a packaged CPU. At some point they have to just start making them, bugs or not. If they kept making changes and waiting until it was the perfect CPU, as good as it can be, it would never be released. When laying out 2 billion transistors in silicon, if any bugs are either minor or fixable without a hardware change, it's good to go. CPU design isn't programming; it's basic electronics on a nanometer scale.


1 minute ago, Curious Pineapple said:

Zen has been in development for 5 years, yes, but how long have they had actual working chips for? How many different wafers have they been through? It can take months to go from raw silicon wafers to a packaged CPU. At some point they have to just start making them, bugs or not. If they kept making changes and waiting until it was the perfect CPU, as good as it can be, it would never be released. When laying out 2 billion transistors in silicon, if any bugs are either minor or fixable without a hardware change, it's good to go. CPU design isn't programming; it's basic electronics on a nanometer scale.

as I said previously, they didn't just stumble upon the finished product yesterday

no one forced them to release the CPUs without verifying everything and pushing the updates to the right developers


1 minute ago, zMeul said:

as I said previously, they didn't just stumble upon the finished product yesterday

no one forced them to release the CPUs without verifying everything and pushing the updates to the right developers

OK, let's put this another way. The god damn thing was released today; reviewers have had them for longer. Let's assume AMD has given Microsoft everything they need to produce code that performs better on Ryzen. VS 2017 is 5 days away - are they really going to update, rebuild and push out a new version of an older toolchain a week before they release its replacement?


1 minute ago, Curious Pineapple said:

OK, let's put this another way. The god damn thing was released today; reviewers have had them for longer. Let's assume AMD has given Microsoft everything they need to produce code that performs better on Ryzen. VS 2017 is 5 days away - are they really going to update, rebuild and push out a new version of an older toolchain a week before they release its replacement?

and you are assuming people will just stop using VS2015 because...?!

 

mainstream support ends 10/13/2020; extended support ends 10/14/2025  https://support.microsoft.com/en-us/lifecycle/search/19591

there goes your excuse out of the window ;)

