AMD Ryzen HAS BEEN ANNOUNCED!!!

DocSwag
2 hours ago, TheMissxu said:

My understanding is that CPU encoding simply outperforms NVENC, Quick Sync, and VCE when it comes to streaming. This is because Twitch limits all streamers (aside from special cases) to a bitrate of 3,500 Kbps, which GPU encoders handle very poorly. YouTube streaming does allow bitrates of 10,000 Kbps and up, but you'd need an upload plan to match.

Yes, CPU encoders will generally get better quality per bit if your preset is high enough. But for streaming to Twitch and the like it won't matter that much, because it will get compressed again before being sent to the viewers.

And even high-end CPUs are often not enough to do high-quality encoding with x264 in real time.

 

Maybe I should do a comparison. NVENC vs x264 veryfast, both at constant 3500Kbps.
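That comparison is easy to script. A rough sketch of how the two encodes might be produced with ffmpeg (the file names here are made-up placeholders, and it assumes an ffmpeg build with both libx264 and h264_nvenc compiled in):

```python
# Sketch of the proposed comparison: CPU (x264 veryfast) vs GPU (NVENC),
# both capped at a constant 3500 Kbps. File names are placeholders.
import subprocess

def encode_cmd(src, out, encoder, preset):
    """Build an ffmpeg command for a roughly constant-bitrate 3500 Kbps encode."""
    return [
        "ffmpeg", "-y", "-i", src,
        "-c:v", encoder, "-preset", preset,
        "-b:v", "3500k", "-maxrate", "3500k", "-bufsize", "7000k",
        "-an",  # drop audio so only video quality differs between the clips
        out,
    ]

cpu_cmd = encode_cmd("source.avi", "x264_veryfast.mp4", "libx264", "veryfast")
gpu_cmd = encode_cmd("source.avi", "nvenc.mp4", "h264_nvenc", "fast")
# subprocess.run(cpu_cmd, check=True)  # uncomment to actually run the encode
```

Feeding both encoders the exact same source file keeps the comparison fair; only the encoder differs between the two commands.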

 

 

1 hour ago, Sprawlie said:

The 4670, when dedicated 100% to the game, keeps up fine, with FPS ranging from 35-70 depending on the area of the map and what's on screen.

 

It's as I said earlier: I tend not to just game when I'm playing. I usually have GTA 5, an NHL.com Flash-based stream going in Chrome, Discord, Slack, and a few other random tabs in Chrome open. I'm running a 2560x1080 display for gaming and two 1080p displays for other activities.

 

So it's all that "other" stuff which hurts my game performance, since I'm sharing 4 cores amongst all of it.

What GPU do you have? Because I was getting better results than that with my 2500K, and yes I do heavily multitask too.


FWIW:

 

Someone at computerbase said:
BIOS still not working properly with many motherboard manufacturers (working, yes, but looong boot times).
RyZen does not overclock well on air and water; lower your expectations.
Looks like 2666 MHz DDR4 is officially supported, but it's more complicated than that.
Cannot confirm nor deny anything 1600X related.
XFR looks disappointing.

CPU: Intel Core i7 7820X Cooling: Corsair Hydro Series H110i GTX Mobo: MSI X299 Gaming Pro Carbon AC RAM: Corsair Vengeance LPX DDR4 (3000MHz/16GB 2x8) SSD: 2x Samsung 850 Evo (250/250GB) + Samsung 850 Pro (512GB) GPU: NVidia GeForce GTX 1080 Ti FE (W/ EVGA Hybrid Kit) Case: Corsair Graphite Series 760T (Black) PSU: SeaSonic Platinum Series (860W) Monitor: Acer Predator XB241YU (165Hz / G-Sync) Fan Controller: NZXT Sentry Mix 2 Case Fans: Intake - 2x Noctua NF-A14 iPPC-3000 PWM / Radiator - 2x Noctua NF-A14 iPPC-3000 PWM / Rear Exhaust - 1x Noctua NF-F12 iPPC-3000 PWM


11 minutes ago, LAwLz said:

Yes, CPU encoders will generally get better quality per bit if your preset is high enough. But for streaming to Twitch and the like it won't matter that much, because it will get compressed again before being sent to the viewers.

And even high-end CPUs are often not enough to do high-quality encoding with x264 in real time.

 

Maybe I should do a comparison. NVENC vs x264 veryfast, both at constant 3500Kbps.

 

 

What GPU do you have? Because I was getting better results than that with my 2500K, and yes I do heavily multitask too.

I have the 1070

 

My i5-4670 is stock; it's not a K chip and has no overclock. It's running on a Gigabyte Z87 motherboard with 16GB of Kingston DDR3 @ 1600MHz.

 

I'm also running at 2560x1080, so I am pushing about 30% more pixels than standard 1080p.

 

 

"Human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so." - Douglas Adams

System: R9-5950x, ASUS X570-Pro, Nvidia Geforce RTX 2070s. 32GB DDR4 @ 3200mhz.


24 minutes ago, Vode said:

Yeah, but wasn't Penryn around 110mm²?

 

I'm aware of the difference in nodes; die size is what matters.

Penryn was mobile only, no? So they didn't have any IHS.

PSU Tier List | CoC

Gaming Build | FreeNAS Server

Spoiler

i5-4690k || Seidon 240m || GTX780 ACX || MSI Z97s SLI Plus || 8GB 2400mhz || 250GB 840 Evo || 1TB WD Blue || H440 (Black/Blue) || Windows 10 Pro || Dell P2414H & BenQ XL2411Z || Ducky Shine Mini || Logitech G502 Proteus Core

Spoiler

FreeNAS 9.3 - Stable || Xeon E3 1230v2 || Supermicro X9SCM-F || 32GB Crucial ECC DDR3 || 3x4TB WD Red (JBOD) || SYBA SI-PEX40064 sata controller || Corsair CX500m || NZXT Source 210.


25 minutes ago, djdwosk97 said:

Penryn was mobile only, no? So they didn't have any IHS.

Correct. I believe he meant Wolfdale, which is a good question nonetheless, since the Core 2 Duo Wolfdales were indeed soldered and only 82-107mm². The quad-core version was Yorkfield if memory serves correctly, but I cannot recall the die size of Yorkfield. I'd guess around double Wolfdale's, but I am not certain.

 

51 minutes ago, Vode said:

Yeah, but wasn't Penryn around 110mm²?

 

I'm aware of the difference in nodes; die size is what matters.

Honestly, I am unqualified to answer this question. I don't really know much about those older Intel CPUs and their exposure to thermal shock, or whether there were any issues with the solder cracking. All I know is what has been explained to me over the years, which is what I've continued to relay to others. Intel themselves normally list this information in their datasheets (along with their thermal shock testing parameters), so I'd look there if I were you. You might find something that helps make sense of it. Either way, Intel used this excuse as the reason not to solder Haswell, and I assume AMD will face the same limitations (unless they've managed to solder in a vacuum). It will be interesting to see how it all plays out.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


3 hours ago, TheMissxu said:

My understanding is that CPU encoding simply outperforms NVENC, Quick Sync, and VCE when it comes to streaming. This is because Twitch limits all streamers (aside from special cases) to a bitrate of 3,500 Kbps, which GPU encoders handle very poorly. YouTube streaming does allow bitrates of 10,000 Kbps and up, but you'd need an upload plan to match.

Twitch really needs to step up with that bitrate though.

| Ryzen 7 7800X3D | AM5 B650 Aorus Elite AX | G.Skill Trident Z5 Neo RGB DDR5 32GB 6000MHz C30 | Sapphire PULSE Radeon RX 7900 XTX | Samsung 990 PRO 1TB with heatsink | Arctic Liquid Freezer II 360 | Seasonic Focus GX-850 | Lian Li Lanccool III | Mousepad: Skypad 3.0 XL / Zowie GTF-X | Mouse: Zowie S1-C | Keyboard: Corsair K63 Cherry MX red | Beyerdynamic MMX 300 (2nd Gen) | Acer XV272U | OS: Windows 11 |


30 minutes ago, MageTank said:

Correct. I believe he meant Wolfdale, which is a good question nonetheless, since the Core 2 Duo Wolfdales were indeed soldered and only 82-107mm². The quad-core version was Yorkfield if memory serves correctly, but I cannot recall the die size of Yorkfield. I'd guess around double Wolfdale's, but I am not certain.

 

Honestly, I am unqualified to answer this question. I don't really know much about those older Intel CPUs and their exposure to thermal shock, or whether there were any issues with the solder cracking. All I know is what has been explained to me over the years, which is what I've continued to relay to others. Intel themselves normally list this information in their datasheets (along with their thermal shock testing parameters), so I'd look there if I were you. You might find something that helps make sense of it. Either way, Intel used this excuse as the reason not to solder Haswell, and I assume AMD will face the same limitations (unless they've managed to solder in a vacuum). It will be interesting to see how it all plays out.

Yes I meant E8600 etc... Anyhow it's an interesting subject. :)

 

I have a feeling that this whole ordeal isn't an engineering problem but rather a business and margins decision.

\\ QUIET AUDIO WORKSTATION //

5960X 3.7GHz @ 0.983V / ASUS X99-A USB3.1      

32 GB G.Skill Ripjaws 4 & 2667MHz @ 1.2V

AMD R9 Fury X

256GB SM961 + 1TB Samsung 850 Evo  

Cooler Master Silencio 652S (soon Calyos NSG S0 ^^)              

Noctua NH-D15 / 3x NF-S12A                 

Seasonic PRIME Titanium 750W        

Logitech G810 Orion Spectrum / Logitech G900

2x Samsung S24E650BW 16:10  / Adam A7X / Fractal Axe Fx 2 Mark I

Windows 7 Ultimate

 

4K GAMING/EMULATION RIG

Xeon X5670 4.2Ghz (200BCLK) @ ~1.38V / Asus P6X58D Premium

12GB Corsair Vengeance 1600Mhz

Gainward GTX 1080 Golden Sample

Intel 535 Series 240 GB + San Disk SSD Plus 512GB

Corsair Crystal 570X

Noctua NH-S12 

Be Quiet Dark Rock 11 650W

Logitech K830

Xbox One Wireless Controller

Logitech Z623 Speakers/Subwoofer

Windows 10 Pro


3 hours ago, LAwLz said:

Maybe I should do a comparison. NVENC vs x264 veryfast, both at constant 3500Kbps.

I would like to see the results of that. Add in "detached" Quicksync and whatever AMD is using if you can.

Read the community standards; it's like a guide on how to not be a moron.

 

Gerdauf's Law: Each and every human being, without exception, is the direct carbon copy of the types of people that he/she bitterly opposes.

Remember, calling facts opinions does not ever make the facts opinions, no matter what nonsense you pull.


14 hours ago, LAwLz said:

Are you saying that the benchmarks are apples vs oranges? Because that's like saying you shouldn't compare the 7700K vs the 1700 in Cinebench multithreading because "it's 4 cores vs 8".

Ryzen's 8 core chip does not clock as high as Intel's quad core, and on top of that it's behind in IPC. You can't just say the benchmark is invalid just because it measures the type of scenario where Ryzen appears to be behind, just like you can't say a benchmark is invalid when it measures the type of scenario where Ryzen is ahead.

 

The benchmark is not apples to oranges, and I am getting really sick and tired of people saying it is.

You misunderstood my post.  I was specifically talking about the single-threaded performance, not the multi-threaded performance.  

 

It's apples to oranges because the 1700 boosts to 3.7GHz and the 7700K to 4.5GHz.  For an apples to apples comparison both would be clocked at the same frequency.  Now it may be - and probably is - that Kaby Lake can achieve higher stable clock rates, but that's a separate issue from IPC.  To find which has higher IPC, they must be clocked at the same rate.
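The "same clock" point can be illustrated with toy arithmetic (the scores below are invented placeholders, not real benchmark results): divide a single-thread score by the clock it ran at, and the frequency advantage separates out from the IPC advantage.

```python
# Toy per-clock normalization. The scores are made-up placeholders,
# not real Cinebench numbers; only the arithmetic is the point.
def score_per_ghz(score, clock_ghz):
    """Crude IPC proxy: single-thread score divided by the clock it ran at."""
    return score / clock_ghz

raw_1700 = 142.0   # hypothetical single-thread score at a 3.7 GHz boost
raw_7700k = 196.0  # hypothetical single-thread score at a 4.5 GHz boost

per_ghz_1700 = score_per_ghz(raw_1700, 3.7)     # ~38.4 points per GHz
per_ghz_7700k = score_per_ghz(raw_7700k, 4.5)   # ~43.6 points per GHz

raw_gap = raw_7700k / raw_1700 - 1              # ~38% ahead in raw score
per_ghz_gap = per_ghz_7700k / per_ghz_1700 - 1  # only ~13% ahead per clock
```

With these placeholder numbers, most of the raw single-thread gap comes from the 4.5 vs 3.7 GHz clock difference rather than from IPC, which is exactly why a raw score alone can't settle the IPC question.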

Xeon E3-1241 @3.9GHz, 1.07V | Asus Z97-E/USB 3.1 | G.Skill Ripjaws X 8GB (2x4GB) DDR3-1600 | MSI RX 480 Gaming X 4GB @1350MHz/2150MHz, 1.09V/.975V | Crucial MX100 256GB | WD Blue 1TB 7200RPM | EVGA 750W G2 80+ Gold | CM Hyper 212+ w/ Noctua F12 | Phanteks Enthoo Pro M | Windows 10 Retail


5 minutes ago, flipped_bit said:

You misunderstood my post.  I was specifically talking about the single-threaded performance, not the multi-threaded performance.  

 

It's apples to oranges because the 1700 boosts to 3.7GHz and the 7700K to 4.5GHz.  For an apples to apples comparison both would be clocked at the same frequency.  Now it may be - and probably is - that Kaby Lake can achieve higher stable clock rates, but that's a separate issue from IPC.  To find which has higher IPC, they must be clocked at the same rate.

Testing at 3.7 vs. 4.5GHz is a shitty comparison for determining IPC, but it's a good comparison in terms of seeing overall performance (although it's likely that the 1700 will be able to achieve higher clocks than 3.7GHz; we just don't know that yet).



Just now, flipped_bit said:

You misunderstood my post.  I was specifically talking about the single-threaded performance, not the multi-threaded performance.  

 

It's apples to oranges because the 1700 boosts to 3.7GHz and the 7700K to 4.5GHz.  For an apples to apples comparison both would be clocked at the same frequency.  Now it may be - and probably is - that Kaby Lake can achieve higher stable clock rates, but that's a separate issue from IPC.  To find which has higher IPC, they must be clocked at the same rate.

 

You'd better believe that if AMD somehow matched or beat Kaby Lake in single-threaded performance, AMD, of all companies on earth, would make that fact very well known.

 

Like everyone else, I'm anxious to see some 3rd party reviews.  


16 minutes ago, flipped_bit said:

You misunderstood my post.  I was specifically talking about the single-threaded performance, not the multi-threaded performance.  

 

It's apples to oranges because the 1700 boosts to 3.7GHz and the 7700K to 4.5GHz.  For an apples to apples comparison both would be clocked at the same frequency.  Now it may be - and probably is - that Kaby Lake can achieve higher stable clock rates, but that's a separate issue from IPC.  To find which has higher IPC, they must be clocked at the same rate.

You can't just blindly look at IPC because IPC is just one part of the equation.

It doesn't matter if Ryzen has close to the same IPC as Skylake if Skylake still outperforms it by a lot core for core, because of much higher clock speeds.

 

Would you be satisfied with Ryzen if it had Skylake IPC, but the chip literally caught on fire if you tried to run it over 900MHz? Of course not, because that chip would be useless. Nobody would buy it. That's why you can't just ignore the frequency. Cinebench doesn't measure IPC. It measures single- and multi-core performance. Skylake appears to be far better than Ryzen core for core.

 

It is an apples-to-apples test. It's just that you don't like that Ryzen 7 is weaker than Skylake when it comes to frequency, so you make excuses for why it doesn't matter, when it clearly plays a huge role in both synthetic and real-world performance.


8 minutes ago, done12many2 said:

 

You'd better believe that if AMD somehow matched or beat Kaby Lake in single-threaded performance, AMD, of all companies on earth, would make that fact very well known.

 

Like everyone else, I'm anxious to see some 3rd party reviews.  

Besides 3rd party benchmarks I'm looking forward to Silicon Lottery's binning stats. March 5th.



5 minutes ago, dexT said:

Besides 3rd party benchmarks I'm looking forward to Silicon Lottery's binning stats. March 5th.

 

Good point.  I have used SL as a standard for my own personal binning.  If he sells it, I want to find one like it myself.  


6 hours ago, LAwLz said:

Maybe I should do a comparison. NVENC vs x264 veryfast, both at constant 3500Kbps.

 

2 hours ago, Colonel_Gerdauf said:

I would like to see the results of that. Add in "detached" Quicksync and whatever AMD is using if you can.

It's slightly off-topic but...

Done!

 

Here is the drive folder with all the different clips.

 

If you're wondering why it says Fraps at the top, it's because I used FRAPS to record a 30-second clip of lossless game footage. I then took that game footage, played it in MPC-HC, and recorded the playback. That way I can compare each encoder using the exact same "gameplay", instead of having to try to take the exact same steps in-game three times. Since the gameplay was recorded losslessly, it should not affect the quality either.
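For anyone who wants to put numbers on a comparison like this instead of eyeballing the clips: because a lossless reference exists, a full-reference metric such as PSNR can be computed per encode. A hedged sketch (file names are made up; assumes ffmpeg with its psnr filter is on PATH):

```python
# Sketch: score an encode against the lossless reference clip using
# ffmpeg's psnr filter. File names are placeholders.
import subprocess

def psnr_cmd(encoded, reference):
    """ffmpeg command that logs the average PSNR of `encoded` vs `reference`."""
    return [
        "ffmpeg", "-i", encoded, "-i", reference,
        "-lavfi", "psnr",   # full-reference quality metric, logged to stderr
        "-f", "null", "-",  # decode and compare only; write no output file
    ]

cmd = psnr_cmd("nvenc.mp4", "fraps_lossless.avi")
# subprocess.run(cmd)  # the PSNR summary line appears on stderr
```

Running the same command once per encode (NVENC, x264, QuickSync) gives comparable numbers, since every clip is scored against the identical lossless capture.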

 

Also, this made me realize how horrible 3500Kbps footage looks.

Oh and please bear in mind that I am on Sandy Bridge, and Intel has tweaked QuickSync each generation. So on Skylake it will probably look slightly better.

 

Here are YouTube links to the videos as well, but I recommend you download them from Drive instead. Remember to set it to 1080p if you watch on youtube.


Edit: Added a test with the "faster" x264 preset as well (instead of veryfast, which is the default). It should result in higher quality, but my CPU was barely able to handle it.

 

Edit 2: Added comparison images taken from the videos. They are in the Google Drive.


2 hours ago, LAwLz said:

You can't just blindly look at IPC because IPC is just one part of the equation.

It doesn't matter if Ryzen has close to the same IPC as Skylake if Skylake still outperforms it by a lot core for core, because of much higher clock speeds.

 

Would you be satisfied with Ryzen if it had Skylake IPC, but the chip literally caught on fire if you tried to run it over 900MHz? Of course not, because that chip would be useless. Nobody would buy it. That's why you can't just ignore the frequency. Cinebench doesn't measure IPC. It measures single- and multi-core performance. Skylake appears to be far better than Ryzen core for core.

 

It is an apples-to-apples test. It's just that you don't like that Ryzen 7 is weaker than Skylake when it comes to frequency, so you make excuses for why it doesn't matter, when it clearly plays a huge role in both synthetic and real-world performance.

You're being deliberately argumentative. I'm not some AMD fanboy, and I said in my original post that Ryzen IPC is likely 10-15% below Kaby Lake. Comparing an 8-core 65W CPU that boosts to 3.7GHz to a 4-core 80W CPU that boosts to 4.5GHz is not an apples-to-apples comparison.



5 minutes ago, flipped_bit said:

You're being deliberately argumentative. I'm not some AMD fanboy, and I said in my original post that Ryzen IPC is likely 10-15% below Kaby Lake. Comparing an 8-core 65W CPU that boosts to 3.7GHz to a 4-core 80W CPU that boosts to 4.5GHz is not an apples-to-apples comparison.

That's simply not true. It is apples to apples; in this case, both apples are CPUs. The problem with trying to segment them into their own sub-category is that the general consumer won't see it that way. In fact, AMD themselves advertise it as a gaming CPU; therefore, you must compare it against gaming CPUs. IPC argument aside (we don't have the exact details yet; my personal estimate is within 5-10% of Skylake/Kaby), we are seeing a clear clock speed deficiency against Intel's latest Kaby offering. Sure, we don't know how Ryzen overclocks, but we did see its LN2 overclocking potential, and it wasn't great compared to Intel's delidded air/water offerings.

 

By your logic, we shouldn't even compare the Ryzen SKUs against Intel's X99 SKUs, even though both are CPUs with similar core configurations, because they have different TDPs and are therefore not "apples to apples". It's time to stop nitpicking the details and realize both are CPUs, and therefore equally comparable to each other as long as both are within the same segment (mobile, desktop, etc.).



This news thread has 23 pages already (and counting)! Hype train tickets are selling like hotcakes!!!

 

Anyway, I thought the 40% increase in IPC puts it at the same level as Haswell? The additional 12% would now put it at Broadwell/Skylake levels(???)
 

I heard/read about this somewhere...

You can bark like a dog, but that won't make you a dog.

You can act like someone you're not, but that won't change who you are.

 

Finished Crysis without a discrete GPU,15 FPS average, and a lot of heart

 

How I plan my builds -

Spoiler

For me I start with the "There's no way I'm not gonna spend $1,000 on a system."

Followed by the "Wow I need to buy the OS for a $100!?"

Then "Let's start with the 'best budget GPU' and 'best budget CPU' that actually fits what I think is my budget."

Realizing my budget is a lot less, I work my way to "I think these new games will run on a cheap ass CPU."

Then end with "The new parts launching next year is probably gonna be better and faster for the same price so I'll just buy next year."

 


15 minutes ago, YoloSwag said:

This news thread has 23 pages already (and counting)! Hype train tickets are selling like hotcakes!!!

 

Anyway, I thought the 40% increase in IPC puts it at the same level as Haswell? The additional 12% would now put it at Broadwell/Skylake levels(???)
 

I heard/read about this somewhere...

Yep it does.

 

People are complaining about how AMD is probably overhyping this, but I doubt it considering what they did with the current-gen AMD GPUs...

QUOTE/TAG ME WHEN RESPONDING

Please Spend As Much Time Writing Your Question As You Want Me To Spend Responding To It. Take Time & Explain

 

New TOS RUINED the meme that used to be below :( 


9 hours ago, TVwazhere said:

I personally am buying the i3-7100 (the G4560 equivalent, ish) ONLY because I have a deadline in May and can't wait. Otherwise I would wait, because 4 cores are better than 2 cores with hyperthreading. (It can also OC, so...)

9 hours ago, MageTank said:

I say go with the Pentium as a stopgap. Use it until you can afford a real Intel quad core, and upgrade later on. Sure, AMD's quad core might be cheaper, but it's gonna be several months before we even see it on the market. In that time, you should be able to save up enough cash to buy a real Intel quad core. Let's be real, it's not like AMD's quad core is going to outperform Intel's, so you will still end up with better performance; it will just come at a higher cost.

Thank you both for replying.

 


1 hour ago, MageTank said:

That's simply not true. It is apples to apples; in this case, both apples are CPUs. The problem with trying to segment them into their own sub-category is that the general consumer won't see it that way. In fact, AMD themselves advertise it as a gaming CPU; therefore, you must compare it against gaming CPUs. IPC argument aside (we don't have the exact details yet; my personal estimate is within 5-10% of Skylake/Kaby), we are seeing a clear clock speed deficiency against Intel's latest Kaby offering. Sure, we don't know how Ryzen overclocks, but we did see its LN2 overclocking potential, and it wasn't great compared to Intel's delidded air/water offerings.

 

By your logic, we shouldn't even compare the Ryzen SKUs against Intel's X99 SKUs, even though both are CPUs with similar core configurations, because they have different TDPs and are therefore not "apples to apples". It's time to stop nitpicking the details and realize both are CPUs, and therefore equally comparable to each other as long as both are within the same segment (mobile, desktop, etc.).

Well, comparing it to an X99 8c/16t part makes a whole lot more sense than comparing it to a 4c/8t part. I'm sure I don't need to tell you that, in general, a smaller number of cores can be overclocked higher than a larger number of cores on the same architecture. I expect the 4c/8t Ryzen part will overclock higher than whatever the 8c/16t part ends up doing.

 

Again, compare similar parts at identical clocks. That's a fair comparison. Drawing conclusions from comparisons involving different clock speeds and core counts is ridiculous. If a 1700 has higher single-threaded performance than an i5 6400, can we conclude that the Ryzen chip has higher IPC? Of course not, because the 6400 is clocked lower.

Xeon E3-1241 @3.9GHz, 1.07V | Asus Z97-E/USB 3.1 | G.Skill Ripjaws X 8GB (2x4GB) DDR3-1600 | MSI RX 480 Gaming X 4GB @1350MHz/2150MHz, 1.09V/.975V | Crucial MX100 256GB | WD Blue 1TB 7200RPM | EVGA 750W G2 80+ Gold | CM Hyper 212+ w/ Noctua F12 | Phanteks Enthoo Pro M | Windows 10 Retail


3 hours ago, flipped_bit said:

You're being deliberately argumentative. I'm not some AMD fanboy, and I said in my original post that Ryzen IPC is likely 10-15% below Kaby Lake. Comparing an 8-core 65W CPU that boosts to 3.7GHz to a 4-core 80W CPU that boosts to 4.5GHz is not an apples-to-apples comparison.

They were comparing them based on price. The price was very similar. And they were showing that for certain people, specifically streamers and content creators, Ryzen might be a better choice.

Make sure to quote me or tag me when responding to me, or I might not know you replied! Examples:

 

Do this:

Quote

And make sure you do it by hitting the quote button at the bottom left of my post, and not the one inside the editor!

Or this:

@DocSwag

 

Buy whatever product is best for you, not what product is "best" for the market.

 

Interested in computer architecture? Still in middle or high school? P.M. me!

 

I love computer hardware and feel free to ask me anything about that (or phones). I especially like SSDs. But please do not ask me anything about Networking, programming, command line stuff, or any relatively hard software stuff. I know next to nothing about that.

 

Compooters:

Spoiler

Desktop:

Spoiler

CPU: i7 6700k, CPU Cooler: be quiet! Dark Rock Pro 3, Motherboard: MSI Z170a KRAIT GAMING, RAM: G.Skill Ripjaws 4 Series 4x4gb DDR4-2666 MHz, Storage: SanDisk SSD Plus 240gb + OCZ Vertex 180 480 GB + Western Digital Caviar Blue 1 TB 7200 RPM, Video Card: EVGA GTX 970 SSC, Case: Fractal Design Define S, Power Supply: Seasonic Focus+ Gold 650w Yay, Keyboard: Logitech G710+, Mouse: Logitech G502 Proteus Spectrum, Headphones: B&O H9i, Monitor: LG 29um67 (2560x1080 75hz freesync)

Home Server:

Spoiler

CPU: Pentium G4400, CPU Cooler: Stock, Motherboard: MSI h110l Pro Mini AC, RAM: Hyper X Fury DDR4 1x8gb 2133 MHz, Storage: PNY CS1311 120gb SSD + two Segate 4tb HDDs in RAID 1, Video Card: Does Intel Integrated Graphics count?, Case: Fractal Design Node 304, Power Supply: Seasonic 360w 80+ Gold, Keyboard+Mouse+Monitor: Does it matter?

Laptop (I use it for school):

Spoiler

Surface book 2 13" with an i7 8650u, 8gb RAM, 256 GB storage, and a GTX 1050

And if you're curious (or a stalker) I have a Just Black Pixel 2 XL 64gb

 


4 hours ago, LAwLz said:

-snip-

So, what these results are telling me is to use either tweaked x264 or QuickSync, depending on the computer configuration (overclocking helps with this) and the nature of the game you are playing. The QuickSync capture, from the images, looked pitiful, but as you said, you are using an early revision of a QuickSync-supported CPU. You have also shown that using NVENC comes at a great cost, since you are making requests of an already-busy GPU. In terms of running them as video streams, however, the differences become negligible. I might do some benchmarks myself when I have the time.

 

I have tested 3,500 Kbps with OBS myself, and I agree with you that it is an absolutely awful thing to see. If anybody out there is considering recording 1080p gaming footage for later use, 16 Mbps is the way to go.
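The disk-space cost of that jump is easy to estimate with back-of-the-envelope arithmetic (a sketch only; real file sizes also include audio and container overhead):

```python
# Back-of-the-envelope video file sizes: megabits per second in,
# megabytes per minute out. Ignores audio and container overhead.
def mb_per_minute(mbps):
    return mbps / 8 * 60  # 8 bits per byte, 60 seconds per minute

stream_size = mb_per_minute(3.5)   # Twitch-capped stream: ~26 MB per minute
record_size = mb_per_minute(16.0)  # 16 Mbps local recording: 120 MB per minute
```

So a 16 Mbps recording eats roughly 7 GB per hour, which is the price of keeping 1080p footage that's actually worth re-editing later.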



2 hours ago, MageTank said:

That's simply not true. It is apples to apples. In this case, both apples are CPU's. The problem with trying to segment them into their own sub-category is that the general consumer won't see it that way. In fact, AMD themselves advertise it as a gaming CPU, therefore, you must compare it against gaming CPU's. IPC argument aside (we don't have the exact details yet, my personal estimates are 5-10% within Skylake/Kaby), we are seeing a clear clock speed deficiency against  Intel's latest Kaby offering. Sure, we don't know how Ryzen overclocks, but we did see it's LN2 overclock potential, and it wasn't great compared to Intel's delidded air/water offerings.

 

With your logic, we shouldn't even compare the Ryzen SKU's against Intel's X99 SKU's, even though both are CPU's with similar core configurations, because they have different TDP's, therefore both are not "apples to apples". It's time to stop nitpicking the details, and realize both are CPU's, and therefore both are equally comparable to each other as long as both are within the same segment (mobile, desktop, etc). 

 

Well, to be honest, nothing official that anyone can actually trust has been tested at all. We will still see. There's too much talking in absolutes without any verifiable data. BIOS updates, driver updates: all these things will happen. There will be bugs; are there ever not? Why are so many people talking in such absolutes when we really haven't seen a damn thing worth mentioning yet?

 

